National Library of Energy BETA

Sample records for large scale testing

  1. Goethite Bench-scale and Large-scale Preparation Tests

    SciTech Connect (OSTI)

    Josephson, Gary B.; Westsik, Joseph H.

    2011-10-23

    The Hanford Waste Treatment and Immobilization Plant (WTP) is the keystone for cleanup of high-level radioactive waste from our nation's nuclear defense program. The WTP will process high-level waste from the Hanford tanks and produce immobilized high-level waste glass for disposal at a national repository, low-activity waste (LAW) glass, and liquid effluent from the vitrification off-gas scrubbers. The liquid effluent will be stabilized into a secondary waste form (e.g., a grout-like material) and disposed of on the Hanford Site in the Integrated Disposal Facility (IDF) along with the low-activity waste glass. The major long-term environmental impact at Hanford results from technetium that volatilizes from the WTP melters and finally resides in the secondary waste. Laboratory studies have indicated that pertechnetate ({sup 99}TcO{sub 4}{sup -}) can be reduced and captured into a solid solution of {alpha}-FeOOH, goethite (Um 2010). Goethite is a stable mineral and can significantly retard the release of technetium to the environment from the IDF. The laboratory studies were conducted using reaction times of many days, which is typical of the environmental subsurface reactions that were the genesis of this new process. This study was the first step in considering adaptation of the slow laboratory steps to a larger-scale and faster process that could be conducted either within the WTP or within the effluent treatment facility (ETF). Two levels of scale-up tests were conducted (25x and 400x). The largest scale-up produced slurries of Fe-rich precipitates that contained rhenium as a nonradioactive surrogate for {sup 99}Tc. The slurries were used in melter tests at Vitreous State Laboratory (VSL) to determine whether captured rhenium was less volatile in the vitrification process than rhenium in an unmodified feed. A critical step in the technetium immobilization process is to chemically reduce Tc(VII) in the pertechnetate (TcO{sub 4}{sup -}) to Tc(IV) by reaction with the ferrous ion, Fe{sup 2+} (Fe{sup 2+} is oxidized to Fe{sup 3+}), in the presence of goethite seed particles. Rhenium does not mimic that process; perrhenate (ReO{sub 4}{sup -}) is not a strong enough oxidant for Fe{sup 2+} to duplicate the TcO{sub 4}{sup -}/Fe{sup 2+} redox reactions. Laboratory tests conducted in parallel with these scaled tests identified modifications to the liquid chemistry necessary to reduce ReO{sub 4}{sup -} and capture rhenium in the solids at levels similar to those achieved by Um (2010) for inclusion of Tc into goethite. By implementing these changes, Re was incorporated into Fe-rich solids for testing at VSL. The changes also changed the phase of iron in the slurry product: rather than forming goethite ({alpha}-FeOOH), the process produced magnetite (Fe{sub 3}O{sub 4}). Magnetite was considered by Pacific Northwest National Laboratory (PNNL) and VSL to be the more likely product to improve Re retention in the melter because it decomposes at a higher temperature than goethite (1538 C vs. 136 C). The feasibility tests at VSL were conducted using Re-rich magnetite. The tests did not indicate an improved retention of Re in the glass during vitrification, but they did indicate an improved melting rate (+60%), which could have a significant impact on HLW processing. It is still to be shown whether the Re is a solid solution in the magnetite, as {sup 99}Tc was determined to be in goethite.
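
    For orientation, the reduction step described in this abstract can be written as a balanced redox reaction. The form below is a schematic illustration consistent with the abstract's description (pertechnetate reduced by ferrous iron); the report itself may write the reaction differently, for example with the Fe(III) product incorporated directly into the goethite structure:

        $\mathrm{TcO_4^- + 3\,Fe^{2+} + 4\,H^+ \longrightarrow TcO_2 + 3\,Fe^{3+} + 2\,H_2O}$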

  2. Large-Scale Spray Releases: Additional Aerosol Test Results

    SciTech Connect (OSTI)

    Daniel, Richard C.; Gauglitz, Phillip A.; Burns, Carolyn A.; Fountain, Matthew S.; Shimskey, Rick W.; Billing, Justin M.; Bontha, Jagannadha R.; Kurath, Dean E.; Jenks, Jeromy WJ; MacFarlan, Paul J.; Mahoney, Lenna A.

    2013-08-01

    One of the events postulated in the hazard analysis for the Waste Treatment and Immobilization Plant (WTP) and other U.S. Department of Energy (DOE) nuclear facilities is a breach in process piping that produces aerosols with droplet sizes in the respirable range. The current approach for predicting the size and concentration of aerosols produced in a spray leak event involves extrapolating from correlations reported in the literature. These correlations are based on results obtained from small engineered spray nozzles using pure liquids that behave as a Newtonian fluid. The narrow ranges of physical properties on which the correlations are based do not cover the wide range of slurries and viscous materials that will be processed in the WTP and in processing facilities across the DOE complex. To expand the data set upon which the WTP accident and safety analyses were based, an aerosol spray leak testing program was conducted by Pacific Northwest National Laboratory (PNNL). PNNL’s test program addressed two key technical areas to improve the WTP methodology (Larson and Allen 2010). The first technical area was to quantify the role of slurry particles in small breaches where slurry particles may plug the hole and prevent high-pressure sprays. The results from an effort to address this first technical area can be found in Mahoney et al. (2012a). The second technical area was to determine aerosol droplet size distribution and total droplet volume from prototypic breaches and fluids, including sprays from larger breaches and sprays of slurries for which literature data are mostly absent. To address the second technical area, the testing program collected aerosol generation data at two scales, commonly referred to as small-scale and large-scale testing. The small-scale testing and resultant data are described in Mahoney et al. (2012b), and the large-scale testing and resultant data are presented in Schonewill et al. (2012). In tests at both scales, simulants were used to mimic the relevant physical properties projected for actual WTP process streams.

  3. Large-Scale Industrial CCS Projects Selected for Continued Testing

    Energy Savers [EERE]

    Large-Scale Federal Renewable Energy Projects: Renewable energy projects larger than 10 megawatts (MW), also known as utility-scale projects, are complex and typically require private-sector financing. The Federal Energy Management Program (FEMP) developed a guide to help federal agencies, and the developers and financiers that work with them, to successfully install these projects at federal facilities. FEMP's Large-Scale Renewable Energy Guide,

  4. PROPERTIES IMPORTANT TO MIXING FOR WTP LARGE SCALE INTEGRATED TESTING

    SciTech Connect (OSTI)

    Koopman, D.; Martino, C.; Poirier, M.

    2012-04-26

    Large Scale Integrated Testing (LSIT) is being planned by Bechtel National, Inc. to address uncertainties in the full scale mixing performance of the Hanford Waste Treatment and Immobilization Plant (WTP). Testing will use simulated waste rather than actual Hanford waste. Therefore, the use of suitable simulants is critical to achieving the goals of the test program. External review boards have raised questions regarding the overall representativeness of simulants used in previous mixing tests. Accordingly, WTP requested the Savannah River National Laboratory (SRNL) to assist with development of simulants for use in LSIT. Among the first tasks assigned to SRNL was to develop a list of waste properties that matter to pulse-jet mixer (PJM) mixing of WTP tanks. This report satisfies Commitment 5.2.3.1 of the Department of Energy Implementation Plan for Defense Nuclear Facilities Safety Board Recommendation 2010-2: physical properties important to mixing and scaling. In support of waste simulant development, the following two objectives are the focus of this report: (1) Assess physical and chemical properties important to the testing and development of mixing scaling relationships; (2) Identify the governing properties and associated ranges for LSIT to achieve the Newtonian and non-Newtonian test objectives. This includes the properties to support testing of sampling and heel management systems. The test objectives for LSIT relate to transfer and pump out of solid particles, prototypic integrated operations, sparger operation, PJM controllability, vessel level/density measurement accuracy, sampling, heel management, PJM restart, design and safety margin, Computational Fluid Dynamics (CFD) Verification and Validation (V and V) and comparison, performance testing and scaling, and high temperature operation. The slurry properties that are most important to Performance Testing and Scaling depend on the test objective and rheological classification of the slurry (i.e., Newtonian or non-Newtonian). The most important properties for testing with Newtonian slurries are the Archimedes number distribution and the particle concentration. For some test objectives, the shear strength is important. In the testing to collect data for CFD V and V and CFD comparison, the liquid density and liquid viscosity are important. In the high temperature testing, the liquid density and liquid viscosity are important. The Archimedes number distribution combines effects of particle size distribution, solid-liquid density difference, and kinematic viscosity. The most important properties for testing with non-Newtonian slurries are the slurry yield stress, the slurry consistency, and the shear strength. The solid-liquid density difference and the particle size are also important. It is also important to match multiple properties within the same simulant to achieve behavior representative of the waste. Other properties such as particle shape, concentration, surface charge, and size distribution breadth, as well as slurry cohesiveness and adhesiveness, liquid pH and ionic strength also influence the simulant properties either directly or through other physical properties such as yield stress.
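
    To make the Archimedes number distribution mentioned above concrete, the short sketch below evaluates the standard definition Ar = g d{sup 3}({rho}{sub s} - {rho}{sub l})/({rho}{sub l}{nu}{sup 2}) for each bin of a particle size distribution. All property values and size bins are hypothetical placeholders, not LSIT simulant data:

        import numpy as np

        # Hypothetical particle size bins (diameters, m) and volume fractions
        d = np.array([1e-6, 5e-6, 10e-6, 50e-6, 100e-6])
        frac = np.array([0.10, 0.25, 0.30, 0.25, 0.10])

        g = 9.81        # gravitational acceleration, m/s^2
        rho_s = 2650.0  # solid density, kg/m^3 (assumed)
        rho_l = 1200.0  # liquid density, kg/m^3 (assumed)
        nu = 2.0e-6     # liquid kinematic viscosity, m^2/s (assumed)

        # Particle Archimedes number per size bin
        Ar = g * d**3 * (rho_s - rho_l) / (rho_l * nu**2)

        for di, fi, ari in zip(d, frac, Ar):
            print(f"d = {di*1e6:6.1f} um  volume fraction = {fi:.2f}  Ar = {ari:.3e}")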

  5. Large-Scale Spray Releases: Initial Aerosol Test Results

    SciTech Connect (OSTI)

    Schonewill, Philip P.; Gauglitz, Phillip A.; Bontha, Jagannadha R.; Daniel, Richard C.; Kurath, Dean E.; Adkins, Harold E.; Billing, Justin M.; Burns, Carolyn A.; Davis, James M.; Enderlin, Carl W.; Fischer, Christopher M.; Jenks, Jeromy WJ; Lukins, Craig D.; MacFarlan, Paul J.; Shutthanandan, Janani I.; Smith, Dennese M.

    2012-12-01

    One of the events postulated in the hazard analysis at the Waste Treatment and Immobilization Plant (WTP) and other U.S. Department of Energy (DOE) nuclear facilities is a breach in process piping that produces aerosols with droplet sizes in the respirable range. The current approach for predicting the size and concentration of aerosols produced in a spray leak involves extrapolating from correlations reported in the literature. These correlations are based on results obtained from small engineered spray nozzles using pure liquids with Newtonian fluid behavior. The narrow ranges of physical properties on which the correlations are based do not cover the wide range of slurries and viscous materials that will be processed in the WTP and across processing facilities in the DOE complex. Two key technical areas were identified where testing results were needed to improve the technical basis by reducing the uncertainty due to extrapolating existing literature results. The first technical need was to quantify the role of slurry particles in small breaches where the slurry particles may plug and result in substantially reduced, or even negligible, respirable fraction formed by high-pressure sprays. The second technical need was to determine the aerosol droplet size distribution and volume from prototypic breaches and fluids, specifically including sprays from larger breaches with slurries where data from the literature are scarce. To address these technical areas, small- and large-scale test stands were constructed and operated with simulants to determine aerosol release fractions and generation rates from a range of breach sizes and geometries. The properties of the simulants represented the range of properties expected in the WTP process streams and included water, sodium salt solutions, slurries containing boehmite or gibbsite, and a hazardous chemical simulant. The effect of anti-foam agents was assessed with most of the simulants. Orifices included round holes and rectangular slots. The round holes ranged in size from 0.2 to 4.46 mm. The slots ranged from (width × length) 0.3 × 5 to 2.74 × 76.2 mm. Most slots were oriented longitudinally along the pipe, but some were oriented circumferentially. In addition, a limited number of multi-hole test pieces were tested in an attempt to assess the impact of a more complex breach. Much of the testing was conducted at pressures of 200 and 380 psi, but some tests were conducted at 100 psi. Testing the largest postulated breaches was deemed impractical because of the large size of some of the WTP equipment. The purpose of this report is to present the experimental results and analyses for the aerosol measurements obtained in the large-scale test stand. The report includes a description of the simulants used and their properties, equipment and operations, data analysis methodology, and test results. The results of tests investigating the role of slurry particles in plugging of small breaches are reported in Mahoney et al. 2012a. The results of the aerosol measurements in the small-scale test stand are reported in Mahoney et al. (2012b).
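
    One quantity of interest in analyses like these, the respirable fraction, is essentially the cumulative droplet volume below a cutoff diameter (often taken near 10 {mu}m). As a purely illustrative sketch, assuming a lognormal droplet volume distribution (the measured distributions from the test stand are not reproduced here), that fraction can be computed as follows; the median diameter and geometric standard deviation are hypothetical:

        import math

        def respirable_fraction(median_um, gsd, cutoff_um=10.0):
            """Volume fraction of droplets below cutoff_um, assuming a lognormal
            volume distribution with the given volume-median diameter (um) and
            geometric standard deviation."""
            z = (math.log(cutoff_um) - math.log(median_um)) / math.log(gsd)
            # Standard normal CDF expressed via the error function
            return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

        # Hypothetical example: volume-median diameter 60 um, GSD 2.5
        print(respirable_fraction(60.0, 2.5))  # ~0.025, i.e. a few percent by volume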

  6. Self-consistency tests of large-scale dynamics parameterizations...

    Office of Scientific and Technical Information (OSTI)

    large-scale dynamics parameterization, in which we compare the result of a cloud-resolving simulation coupled to WTG ... Journal of Advances in Modeling Earth Systems ...

  7. Self-consistency tests of large-scale dynamics parameterizations for

    Office of Scientific and Technical Information (OSTI)

    single-column modeling. Large-scale dynamics parameterizations are tested numerically in cloud-resolving simulations, including a new version of the weak-pressure-gradient approximation (WPG) introduced by Edman and Romps (2014), the

  8. Development of explosive event scale model testing capability at Sandia's large scale centrifuge facility

    SciTech Connect (OSTI)

    Blanchat, T.K.; Davie, N.T.; Calderone, J.J.

    1998-02-01

    Geotechnical structures such as underground bunkers, tunnels, and building foundations are subjected to stress fields produced by the gravity load on the structure and/or any overlying strata. These stress fields may be reproduced on a scaled model of the structure by proportionally increasing the gravity field through the use of a centrifuge. This technology can then be used to assess the vulnerability of various geotechnical structures to explosive loading. Applications of this technology include assessing the effectiveness of earth penetrating weapons, evaluating the vulnerability of various structures, counter-terrorism, and model validation. This document describes the development of expertise in scale model explosive testing on geotechnical structures using Sandia's large scale centrifuge facility. This study focused on buried structures such as hardened storage bunkers or tunnels. Data from this study was used to evaluate the predictive capabilities of existing hydrocodes and structural dynamics codes developed at Sandia National Laboratories (such as Pronto/SPH, Pronto/CTH, and ALEGRA). 7 refs., 50 figs., 8 tabs.

  9. Testing coupled dark energy with large scale structure observation

    SciTech Connect (OSTI)

    Yang, Weiqiang; Xu, Lixin, E-mail: d11102004@mail.dlut.edu.cn, E-mail: lxxu@dlut.edu.cn [Institute of Theoretical Physics, School of Physics and Optoelectronic Technology, Dalian University of Technology, Dalian, 116024 (China)

    2014-08-01

    The coupling between the dark components provides a new approach to mitigating the coincidence problem of the cosmological standard model. In this paper, dark energy is treated as a fluid with a constant equation of state, whose coupling with dark matter is Q-bar = 3H{xi}{sub x}{rho}-bar{sub x}. In the frame of dark energy, we derive the evolution equations for the density and velocity perturbations. Using the Markov Chain Monte Carlo method, we constrain the model with currently available cosmic observations, which include cosmic microwave background radiation, baryon acoustic oscillation, type Ia supernovae, and f{sigma}{sub 8}(z) data points from redshift-space distortion. The results show the interaction rate in 3{sigma} regions: {xi}{sub x}=0.00328{sub -0.00328-0.00328-0.00328}{sup +0.000736+0.00549+0.00816}, which means that the recent cosmic observations favor a small interaction rate, up to the order of 10{sup -2}; meanwhile, the measurement of redshift-space distortion can rule out a large interaction rate in the 1{sigma} region.
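
    For readers unfamiliar with this class of models, the background continuity equations implied by a coupling of this form are sketched below ({rho}-bar{sub c} denotes the background dark matter density and w{sub x} the dark energy equation of state). The sign convention, with energy flowing from dark energy to dark matter for positive {xi}{sub x}, is an assumption for illustration and is not taken from the paper:

        $\dot{\bar\rho}_x + 3H(1+w_x)\bar\rho_x = -\bar{Q}, \qquad \dot{\bar\rho}_c + 3H\bar\rho_c = \bar{Q}, \qquad \bar{Q} = 3H\xi_x\bar\rho_x$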

  10. Self-consistency tests of large-scale dynamics parameterizations for single-column modeling

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Edman, Jacob P.; Romps, David M.

    2015-03-18

    Large-scale dynamics parameterizations are tested numerically in cloud-resolving simulations, including a new version of the weak-pressure-gradient approximation (WPG) introduced by Edman and Romps (2014), the weak-temperature-gradient approximation (WTG), and a prior implementation of WPG. We perform a series of self-consistency tests with each large-scale dynamics parameterization, in which we compare the result of a cloud-resolving simulation coupled to WTG or WPG with an otherwise identical simulation with prescribed large-scale convergence. In self-consistency tests based on radiative-convective equilibrium (RCE; i.e., no large-scale convergence), we find that simulations either weakly coupled or strongly coupled to either WPG or WTG are self-consistent, but WPG-coupled simulations exhibit a nonmonotonic behavior as the strength of the coupling to WPG is varied. We also perform self-consistency tests based on observed forcings from two observational campaigns: the Tropical Warm Pool International Cloud Experiment (TWP-ICE) and the ARM Southern Great Plains (SGP) Summer 1995 IOP. In these tests, we show that the new version of WPG improves upon prior versions of WPG by eliminating a potentially troublesome gravity-wave resonance.
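
    As background on the parameterizations named above, the weak-temperature-gradient approximation is usually closed by diagnosing a large-scale vertical velocity from the potential-temperature anomaly of the simulated column relative to a reference profile over a relaxation time scale {tau}. A generic form of this closure is shown below as an illustration; it is not necessarily the exact formulation used in this paper:

        $\bar{w}_{\mathrm{WTG}}\,\frac{\partial\bar{\theta}}{\partial z} = \frac{\bar{\theta} - \theta_{\mathrm{ref}}}{\tau}$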

  11. Nonlinear Seismic Correlation Analysis of the JNES/NUPEC Large-Scale Piping System Tests.

    SciTech Connect (OSTI)

    Nie, J.; DeGrassi, G.; Hofmayer, C.; Ali, S.

    2008-06-01

    The Japan Nuclear Energy Safety Organization/Nuclear Power Engineering Corporation (JNES/NUPEC) large-scale piping test program has provided valuable new test data on high-level seismic elasto-plastic behavior and failure modes for typical nuclear power plant piping systems. The component and piping system tests demonstrated the strain ratcheting behavior that is expected to occur when a pressurized pipe is subjected to cyclic seismic loading. Under a collaboration agreement between the US and Japan on seismic issues, the US Nuclear Regulatory Commission (NRC)/Brookhaven National Laboratory (BNL) performed a correlation analysis of the large-scale piping system tests using detailed state-of-the-art nonlinear finite element models. Techniques are introduced to develop material models that can closely match the test data. The shaking table motions are examined. The analytical results are assessed in terms of the overall system responses and the strain ratcheting behavior at an elbow. The paper concludes with insights about the accuracy of the analytical methods for use in performance assessments of highly nonlinear piping systems under large seismic motions.

  12. Aerodynamic force measurement on a large-scale model in a short duration test facility

    SciTech Connect (OSTI)

    Tanno, H.; Kodera, M.; Komuro, T.; Sato, K.; Takahasi, M.; Itoh, K.

    2005-03-01

    A force measurement technique has been developed for large-scale aerodynamic models with a short test time. The technique is based on direct acceleration measurements, with miniature accelerometers mounted on a test model suspended by wires. By measuring acceleration at two different locations, the technique can eliminate oscillations caused by the natural vibration of the model. The technique was used for drag force measurements on a 3 m long supersonic combustor model in the HIEST free-piston driven shock tunnel. A time resolution of 350 {mu}s is guaranteed during measurements, which is sufficient for the millisecond-order test times in HIEST. To evaluate measurement reliability and accuracy, measured values were compared with results from a three-dimensional Navier-Stokes numerical simulation. The difference between measured values and numerical simulation values was less than 5%. We conclude that this measurement technique is sufficiently reliable for measuring aerodynamic force within test durations of 1 ms.
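
    A minimal sketch of the idea described above follows: two accelerometer signals are placed so that the dominant free-vibration mode contributes with opposite sign, their average recovers the rigid-body acceleration, and Newton's second law converts it to force. The signal values, sensor placement, and model mass are hypothetical, not HIEST data:

        import numpy as np

        # Hypothetical accelerometer records (m/s^2) over a 1 ms test window
        t = np.linspace(0.0, 1.0e-3, 2000)
        a1 = 5.0 + 2.0 * np.sin(2 * np.pi * 3000 * t)  # rigid-body + vibration
        a2 = 5.0 - 2.0 * np.sin(2 * np.pi * 3000 * t)  # vibration in antiphase

        m = 250.0  # model mass, kg (assumed)

        # Averaging cancels the antisymmetric vibration component,
        # leaving the rigid-body acceleration due to the aerodynamic load.
        a_rigid = 0.5 * (a1 + a2)
        drag_force = m * a_rigid

        print(f"mean drag force ~ {drag_force.mean():.1f} N")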

  13. Testing the big bang: Light elements, neutrinos, dark matter and large-scale structure

    SciTech Connect (OSTI)

    Schramm, D.N. (Fermi National Accelerator Lab., Batavia, IL)

    1991-06-01

    In this series of lectures, several experimental and observational tests of the standard cosmological model are examined. In particular, detailed discussion is presented regarding nucleosynthesis, the light element abundances and neutrino counting; the dark matter problems; and the formation of galaxies and large-scale structure. Comments will also be made on the possible implications of the recent solar neutrino experimental results for cosmology. An appendix briefly discusses the "17 keV thing" and the cosmological and astrophysical constraints on it. 126 refs., 8 figs., 2 tabs.

  14. Running Large Scale Jobs

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Users face various challenges with running and scaling large scale jobs on peta-scale production systems. For example, certain applications may not have enough memory per core, the default environment variables may need to be adjusted, or I/O dominates run time. This page lists some available programming and run time tuning options and tips users can try on their large scale applications on Hopper for better performance. Try different compilers

  15. Re-evaluation of the 1995 Hanford Large Scale Drum Fire Test Results

    SciTech Connect (OSTI)

    Yang, J M

    2007-05-02

    A large-scale drum performance test was conducted at the Hanford Site in June 1995, in which over one hundred (100) 55-gal drums in each of two storage configurations were subjected to severe fuel pool fires. The two storage configurations in the test were pallet storage and rack storage. The description and results of the large-scale drum test at the Hanford Site were reported in WHC-SD-WM-TRP-246, "Solid Waste Drum Array Fire Performance," Rev. 0, 1995. This was one of the main references used to develop the analytical methodology to predict drum failures in WHC-SD-SQA-ANAL-501, "Fire Protection Guide for Waste Drum Storage Array," September 1996. Three drum failure modes were observed from the test reported in WHC-SD-WM-TRP-246: seal failure, lid warping, and catastrophic lid ejection. There was no discernible failure criterion that distinguished one failure mode from another. Hence, all three failure modes were treated equally for the purpose of determining the number of failed drums. General observations from the results of the test are as follows:

    - Trash expulsion was negligible.
    - Flame impingement was identified as the main cause of failure.
    - The range of drum temperatures at failure was 600 C to 800 C, above the yield strength temperature for steel, approximately 540 C (1,000 F).
    - The critical heat flux required for failure is above 45 kW/m{sup 2}.
    - Fire propagation from one drum to the next was not observed.

    The statistical evaluation of the test results using, for example, Student's t-distribution, will demonstrate that the failure criteria for TRU waste drums currently employed at nuclear facilities are very conservative relative to the large-scale test results. Hence, a safety analysis utilizing the general criteria described in the five bullets above will lead to a technically robust and defensible product that bounds the potential consequences of postulated fires in TRU waste facilities, where waste is stored in Type A, 55-gal drums.
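
    A sketch of the kind of statistical evaluation mentioned above, a one-sided lower confidence bound on the mean failure temperature using Student's t-distribution, is given below. The five sample temperatures are hypothetical values chosen only to illustrate the calculation; they are not the measured test data:

        import numpy as np
        from scipy import stats

        # Hypothetical drum failure temperatures (deg C)
        temps = np.array([610.0, 650.0, 700.0, 760.0, 790.0])

        n = temps.size
        mean = temps.mean()
        s = temps.std(ddof=1)  # sample standard deviation

        # One-sided 95% lower confidence bound on the mean failure temperature
        t_crit = stats.t.ppf(0.95, df=n - 1)
        lower_bound = mean - t_crit * s / np.sqrt(n)

        print(f"mean = {mean:.0f} C, 95% lower confidence bound = {lower_bound:.0f} C")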

  16. Lotung large-scale seismic test strong motion records. Volume 1, General description: Final report

    SciTech Connect (OSTI)

    Not Available

    1992-03-01

    The Electric Power Research Institute (EPRI), in cooperation with the Taiwan Power Company (TPC), constructed two models (1/4 scale and 1/12 scale) of a nuclear plant concrete containment structure at a seismically active site in Lotung, Taiwan. Extensive instrumentation was deployed to record both structural and ground responses during earthquakes. The experiment, generally referred to as the Lotung Large-Scale Seismic Test (LSST), was used to gather data for soil-structure interaction (SSI) analysis method evaluation and validation as well as for site ground response investigation. A number of earthquakes having local magnitudes ranging from 4.5 to 7.0 have been recorded at the LSST site since the completion of the test facility in September 1985. This report documents the earthquake data, both raw and processed, collected from the LSST experiment. Volume 1 of the report provides general information on site location, instrument types and layout, data acquisition and processing, and data file organization. The recorded data are described chronologically in subsequent volumes of the report.

  17. Test of the CLAS12 RICH large-scale prototype in the direct proximity focusing configuration

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Anefalos Pereira, S.; Baltzell, N.; Barion, L.; Benmokhtar, F.; Brooks, W.; Cisbani, E.; Contalbrigo, M.; El Alaoui, A.; Hafidi, K.; Hoek, M.; et al

    2016-02-11

    A large area ring-imaging Cherenkov detector has been designed to provide clean hadron identification capability in the momentum range from 3 GeV/c up to 8 GeV/c for the CLAS12 experiments at the upgraded 12 GeV continuous electron beam accelerator facility of Jefferson Laboratory. The adopted solution foresees a novel hybrid optics design based on an aerogel radiator, composite mirrors, and highly packed, highly segmented photon detectors. Cherenkov light will either be imaged directly (forward tracks) or after two mirror reflections (large angle tracks). We report here the results of the tests of a large scale prototype of the RICH detector performed with the hadron beam of the CERN T9 experimental hall for the direct detection configuration. The tests demonstrated that the proposed design provides the required pion-to-kaon rejection factor of 1:500 over the whole momentum range.

  18. Aerosols released during large-scale integral MCCI tests in the ACE Program

    SciTech Connect (OSTI)

    Fink, J.K.; Thompson, D.H.; Spencer, B.W.; Sehgal, B.R.

    1992-04-01

    As part of the internationally sponsored Advanced Containment Experiments (ACE) program, seven large-scale experiments on molten core concrete interactions (MCCIs) have been performed at Argonne National Laboratory. One of the objectives of these experiments is to collect and characterize all the aerosols released from the MCCIs. Aerosols released from experiments using four types of concrete (siliceous, limestone/common sand, serpentine, and limestone/limestone) and a range of metal oxidation for both BWR and PWR reactor core material have been collected and characterized. Release fractions were determined for UO{sub 2}, Zr, the fission products (BaO, SrO, La{sub 2}O{sub 3}, CeO{sub 2}, MoO{sub 2}, Te, Ru), and control materials (Ag, In, and B{sub 4}C). Release fractions of UO{sub 2} and the fission products other than Te were small in all tests. However, release of control materials was significant.

  19. Aerosols released during large-scale integral MCCI tests in the ACE Program

    SciTech Connect (OSTI)

    Fink, J.K.; Thompson, D.H.; Spencer, B.W.; Sehgal, B.R.

    1992-01-01

    As part of the internationally sponsored Advanced Containment Experiments (ACE) program, seven large-scale experiments on molten core concrete interactions (MCCIs) have been performed at Argonne National Laboratory. One of the objectives of these experiments is to collect and characterize all the aerosols released from the MCCIs. Aerosols released from experiments using four types of concrete (siliceous, limestone/common sand, serpentine, and limestone/limestone) and a range of metal oxidation for both BWR and PWR reactor core material have been collected and characterized. Release fractions were determined for UO{sub 2}, Zr, the fission products (BaO, SrO, La{sub 2}O{sub 3}, CeO{sub 2}, MoO{sub 2}, Te, Ru), and control materials (Ag, In, and B{sub 4}C). Release fractions of UO{sub 2} and the fission products other than Te were small in all tests. However, release of control materials was significant.

  20. Running Large Scale Jobs

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    try on their large scale applications on Hopper for better performance. Try different compilers and compiler options The available compilers on Hopper are PGI, Cray, Intel, GNU,...

  1. Active and passive acoustic imaging inside a large-scale polyaxial hydraulic fracture test

    SciTech Connect (OSTI)

    Glaser, S.D.; Dudley, J.W. II; Shlyapobersky, J.

    1999-07-01

    An automated laboratory hydraulic fracture experiment has been assembled to determine what rock and treatment parameters are crucial to improving the efficiency and effectiveness of field hydraulic fractures. To this end, a large polyaxial cell (460 mm cubic sample) was built, with servo-controlled X, Y, and Z loading, pore pressure, crack-mouth-opening displacement, and bottom-hole pressure. Active imaging with embedded seismic diffraction arrays images the geometry of the fracture. Preliminary tests indicate fracture extent can be imaged to within 5%. Unique embeddable high-fidelity particle velocity AE sensors were designed and calibrated to allow determination of fracture source kinematics.

  2. VP 100: New Facility in Boston to Test Large-Scale Wind Blades

    Broader source: Energy.gov [DOE]

    Thanks in part to funding from the Recovery Act, the Wind Technology Testing Center in Massachusetts will be first in the U.S. to test wind turbine blades up to 300 feet in length -- creating 300 construction jobs and 30 permanent design jobs in the process.

  3. Results of Large-Scale Testing on Effects of Anti-Foam Agent on Gas Retention and Release

    SciTech Connect (OSTI)

    Stewart, Charles W.; Guzman-Leong, Consuelo E.; Arm, Stuart T.; Butcher, Mark G.; Golovich, Elizabeth C.; Jagoda, Lynette K.; Park, Walter R.; Slaugh, Ryan W.; Su, Yin-Fong; Wend, Christopher F.; Mahoney, Lenna A.; Alzheimer, James M.; Bailey, Jeffrey A.; Cooley, Scott K.; Hurley, David E.; Johnson, Christian D.; Reid, Larry D.; Smith, Harry D.; Wells, Beric E.; Yokuda, Satoru T.

    2008-01-03

    The U.S. Department of Energy (DOE) Office of River Protection's Waste Treatment Plant (WTP) will process and treat radioactive waste that is stored in tanks at the Hanford Site. The waste treatment process in the pretreatment facility will mix both Newtonian and non-Newtonian slurries in large process tanks. Process vessels mixing non-Newtonian slurries will use pulse jet mixers (PJMs), air sparging, and recirculation pumps. An anti-foam agent (AFA) will be added to the process streams to prevent surface foaming, but may also increase gas holdup and retention within the slurry. The work described in this report addresses gas retention and release in simulants with AFA through testing and analytical studies. Gas holdup and release tests were conducted in a 1/4-scale replica of the lag storage vessel operated in the Pacific Northwest National Laboratory (PNNL) Applied Process Engineering Laboratory using a kaolin/bentonite clay and an AZ-101 HLW chemical simulant with non-Newtonian rheological properties representative of actual waste slurries. Additional tests were performed in a small-scale mixing vessel in the PNNL Physical Sciences Building using liquids and slurries representing major components of typical WTP waste streams. Analytical studies were directed at discovering how the effect of AFA might depend on gas composition and predicting the effect of AFA on gas retention and release in the full-scale plant, including the effects of mass transfer to the sparge air. The work at PNNL was part of a larger program that included tests conducted at Savannah River National Laboratory (SRNL) that are reported separately. SRNL conducted gas holdup tests in a small-scale mixing vessel using the AZ-101 high-level waste (HLW) chemical simulant to investigate the effects of different AFAs, their components, and the addition of noble metals. Full-scale, single-sparger mass transfer tests were also conducted at SRNL in water and AZ-101 HLW simulant to provide data for PNNL's WTP gas retention and release modeling.

  4. Large scale tracking algorithms.

    SciTech Connect (OSTI)

    Hansen, Ross L.; Love, Joshua Alan; Melgaard, David Kennett; Karelitz, David B.; Pitts, Todd Alan; Zollweg, Joshua David; Anderson, Dylan Z.; Nandy, Prabal; Whitlow, Gary L.; Bender, Daniel A.; Byrne, Raymond Harry

    2015-01-01

    Low signal-to-noise data processing algorithms for improved detection, tracking, discrimination and situational threat assessment are a key research challenge. As sensor technologies progress, the number of pixels will increase significantly. This will result in increased resolution, which could improve object discrimination, but unfortunately, will also result in a significant increase in the number of potential targets to track. Many tracking techniques, like multi-hypothesis trackers, suffer from a combinatorial explosion as the number of potential targets increases. As the resolution increases, the phenomenology applied towards detection algorithms also changes. For low resolution sensors, "blob" tracking is the norm. For higher resolution data, additional information may be employed in the detection and classification steps. The most challenging scenarios are those where the targets cannot be fully resolved, yet must be tracked and distinguished from neighboring closely spaced objects. Tracking vehicles in an urban environment is an example of such a challenging scenario. This report evaluates several potential tracking algorithms for large-scale tracking in an urban environment.
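
    To make the data-association step concrete, the sketch below performs one frame of globally optimal nearest-neighbor assignment between predicted track positions and new detections. The coordinates are hypothetical; the point is that even this simple one-to-one assignment becomes expensive and ambiguous as the number of closely spaced targets grows, which is the combinatorial problem noted above:

        import numpy as np
        from scipy.optimize import linear_sum_assignment

        # Hypothetical predicted track positions and new detections (x, y in pixels)
        tracks = np.array([[10.0, 12.0], [45.0, 47.0], [46.5, 48.0]])
        detections = np.array([[44.8, 47.5], [11.0, 12.5], [47.0, 49.0]])

        # Cost matrix of pairwise Euclidean distances
        cost = np.linalg.norm(tracks[:, None, :] - detections[None, :, :], axis=2)

        # Globally optimal one-to-one assignment (Hungarian algorithm)
        rows, cols = linear_sum_assignment(cost)
        for r, c in zip(rows, cols):
            print(f"track {r} -> detection {c} (distance {cost[r, c]:.2f})")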

  5. Running Large Scale Jobs

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    peta-scale production systems. For example, certain applications may not have enough memory per core, the default environment variables may need to be adjusted, or IO dominates...

  6. Large-Scale Testing of Effects of Anti-Foam Agent on Gas Holdup in Process Vessels in the Hanford Waste Treatment Plant - 8280

    SciTech Connect (OSTI)

    Mahoney, Lenna A.; Alzheimer, James M.; Arm, Stuart T.; Guzman-Leong, Consuelo E.; Jagoda, Lynette K.; Stewart, Charles W.; Wells, Beric E.; Yokuda, Satoru T.

    2008-06-03

    The Hanford Waste Treatment Plant (WTP) will vitrify the radioactive wastes stored in underground tanks. These wastes generate and retain hydrogen and other flammable gases that create safety concerns for the vitrification process tanks in the WTP. An anti-foam agent (AFA) will be added to the WTP process streams. Prior testing in a bubble column and a small-scale impeller-mixed vessel indicated that gas holdup in a high-level waste chemical simulant with AFA was up to 10 times that in clay simulant without AFA. This raised a concern that major modifications to the WTP design or qualification of an alternative AFA might be required to satisfy plant safety criteria. However, because the mixing and gas generation mechanisms in the small-scale tests differed from those expected in WTP process vessels, additional tests were performed in a large-scale prototypic mixing system with in situ gas generation. This paper presents the results of this test program. The tests were conducted at Pacific Northwest National Laboratory in a 1/4-scale model of the lag storage process vessel using pulse jet mixers and air spargers. Holdup and release of gas bubbles generated by hydrogen peroxide decomposition were evaluated in waste simulants containing an AFA over a range of Bingham yield stresses and gas generation rates. Results from the 1/4-scale test stand showed that, contrary to the small-scale impeller-mixed tests, gas holdup in clay without AFA is comparable to that in the chemical waste simulant with AFA. The test stand, simulants, scaling and data-analysis methods, and results are described in relation to previous tests and anticipated WTP operating conditions.

  7. Large-Scale Testing of Effects of Anti-Foam Agent on Gas Holdup in Process Vessels in the Hanford Waste Treatment Plant

    SciTech Connect (OSTI)

    Mahoney, L.A.; Alzheimer, J.M.; Arm, S.T.; Guzman-Leong, C.E.; Jagoda, L.K.; Stewart, C.W.; Wells, B.E.; Yokuda, S.T. [Pacific Northwest National Laboratory, Richland, WA (United States)]

    2008-07-01

    The Hanford Waste Treatment and Immobilization Plant (WTP) will vitrify the radioactive wastes stored in underground tanks. These wastes generate and retain hydrogen and other flammable gases that create safety concerns for the vitrification process tanks in the WTP. An anti-foam agent (AFA) will be added to the WTP process streams. Previous testing in a bubble column and a small-scale impeller-mixed vessel indicated that gas holdup in a high-level waste chemical simulant with AFA was as much as 10 times higher than in clay simulant without AFA. This raised a concern that major modifications to the WTP design or qualification of an alternative AFA might be required to satisfy plant safety criteria. However, because the mixing and gas generation mechanisms in the small-scale tests differed from those expected in WTP process vessels, additional tests were performed in a large-scale prototypic mixing system with in situ gas generation. This paper presents the results of this test program. The tests were conducted at Pacific Northwest National Laboratory in a 1/4-scale model of the lag storage process vessel using pulse jet mixers and air spargers. Holdup and release of gas bubbles generated by hydrogen peroxide decomposition were evaluated in waste simulants containing an AFA over a range of Bingham yield stresses and gas generation rates. Results from the 1/4-scale test stand showed that, contrary to the small-scale impeller-mixed tests, holdup in the chemical waste simulant with AFA was not so greatly increased compared to gas holdup in clay without AFA. The test stand, simulants, scaling and data-analysis methods, and results are described in relation to previous tests and anticipated WTP operating conditions. (authors)

  8. Data Analysis, Pre-Ignition Assessment, and Post-Ignition Modeling of the Large-Scale Annular Cookoff Tests

    SciTech Connect (OSTI)

    G. Terrones; F.J. Souto; R.F. Shea; M.W. Burkett; E.S. Idar

    2005-09-30

    In order to understand the implications that cookoff of plastic-bonded explosive PBX 9501 could have for safety assessments, we analyzed the available data from the large-scale annular cookoff (LSAC) assembly series of experiments. In addition, we examined recent data regarding hypotheses about pre-ignition that may be relevant to post-ignition behavior. Based on the post-ignition data from Shot 6, which had the most complete set of data, we developed an approximate equation of state (EOS) for the gaseous products of deflagration. Implementation of this EOS into the multimaterial hydrodynamics computer program PAGOSA yielded good agreement with the inner-liner collapse sequence for Shot 6 and with other data, such as velocity interferometer system for any reflector (VISAR) and resistance wire measurements. A metric to establish the degree of symmetry, based on the concept of time of arrival at pin locations, was used to compare numerical simulations with experimental data. Several simulations were performed to elucidate the mode of ignition in the LSAC and to determine the possible compression levels that the metal assembly could have been subjected to during post-ignition.

  9. Proceedings of the Joint IAEA/CSNI Specialists' Meeting on Fracture Mechanics Verification by Large-Scale Testing held at Pollard Auditorium, Oak Ridge, Tennessee

    SciTech Connect (OSTI)

    Pugh, C.E.; Bass, B.R.; Keeney, J.A.

    1993-10-01

    This report contains 40 papers that were presented at the Joint IAEA/CSNI Specialists' Meeting on Fracture Mechanics Verification by Large-Scale Testing held at the Pollard Auditorium, Oak Ridge, Tennessee, during the week of October 26-29, 1992. The papers are printed in the order of their presentation in each session and describe recent large-scale fracture (brittle and/or ductile) experiments, analyses of these experiments, and comparisons between predictions and experimental results. The goal of the meeting was to allow international experts to examine the fracture behavior of various materials and structures under conditions relevant to nuclear reactor components and operating environments. The emphasis was on the ability of various fracture models and analysis methods to predict the wide range of experimental data now available. The individual papers have been cataloged separately.

  10. Large-Scale Information Systems

    SciTech Connect (OSTI)

    D. M. Nicol; H. R. Ammerlahn; M. E. Goldsby; M. M. Johnson; D. E. Rhodes; A. S. Yoshimura

    2000-12-01

    Large enterprises are ever more dependent on their Large-Scale Information Systems (LSIS), computer systems that are distinguished architecturally by distributed components--data sources, networks, computing engines, simulations, human-in-the-loop control and remote access stations. These systems provide such capabilities as workflow, data fusion and distributed database access. The Nuclear Weapons Complex (NWC) contains many examples of LSIS components, a fact that motivates this research. However, most LSIS in use grew up from collections of separate subsystems that were not designed to be components of an integrated system. For this reason, they are often difficult to analyze and control. The problem is made more difficult by the size of a typical system, its diversity of information sources, and the institutional complexities associated with its geographic distribution across the enterprise. Moreover, there is no integrated approach for analyzing or managing such systems. Indeed, integrated development of LSIS is an active area of academic research. This work developed such an approach by simulating the various components of the LSIS and allowing the simulated components to interact with real LSIS subsystems. This research demonstrated two benefits. First, applying it to a particular LSIS provided a thorough understanding of the interfaces between the system's components. Second, it demonstrated how more rapid and detailed answers could be obtained to questions significant to the enterprise by interacting with the relevant LSIS subsystems through simulated components designed with those questions in mind. In a final, added phase of the project, investigations were made on extending this research to wireless communication networks in support of telemetry applications.

  11. Large-Scale Computational Fluid Dynamics

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Large-Scale Computational Fluid Dynamics - Sandia Energy

  12. Large-Scale Renewable Energy Guide Webinar

    Broader source: Energy.gov [DOE]

    Webinar introduces the "Large-Scale Renewable Energy Guide." The webinar will provide an overview of this important FEMP guide, which describes FEMP's approach to large-scale renewable energy projects and provides guidance to Federal agencies and the private sector on how to develop a common process for large-scale renewable projects.

  13. Performance of powder-filled evacuated panel insulation in a manufactured home roof cavity: Tests in the Large Scale Climate Simulator

    SciTech Connect (OSTI)

    Petrie, T.W.; Kosny, J.; Childs, P.W.

    1996-03-01

    A full-scale section of half the top of a single-wide manufactured home has been studied in the Large Scale Climate Simulator (LSCS) at the Oak Ridge National Laboratory. A small roof cavity with little room for insulation at the eaves is often the case with single-wide units and limits practical ways to improve thermal performance. The purpose of the current tests was to obtain steady-state performance data for the roof cavity of the manufactured home test section when the roof cavity was insulated with fiberglass batts, blown-in rock wool insulation or combinations of these insulations and powder-filled evacuated panel (PEP) insulation. Four insulation configurations were tested: (A) a configuration with two layers of nominal R{sub US}-7 h {center_dot} ft{sup 2} {center_dot} F/BTU (R{sub SI}-1.2 m{sup 2} {center_dot} K/W) fiberglass batts; (B) a layer of PEPs and one layer of the fiberglass batts; (C) four layers of the fiberglass batts; and (D) an average 4.1 in. (10.4 cm) thick layer of blown-in rock wool at an average density of 2.4 lb/ft{sup 3} (38 kg/m{sup 3}). Effects of additional sheathing were determined for Configurations B and C. With Configuration D over the ceiling, two layers of expanded polystyrene (EPS) boards, each about the same thickness as the PEPs, were installed over the trusses instead of the roof. Aluminum foils facing the attic and over the top layer of EPS were added. The top layer of EPS was then replaced by PEPs.
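
    For a rough sense of the nominal center-of-cavity resistance of the layered configurations above, layer R-values add in series (this back-of-the-envelope estimate ignores framing, air films, and thermal bridging, which the LSCS tests do capture):

        $R_{\mathrm{total}} \approx \sum_i R_i \quad\Longrightarrow\quad \text{Configuration A: } 2 \times R_{US}\text{-}7 \approx R_{US}\text{-}14 \;(\approx R_{SI}\text{-}2.4\ \mathrm{m^2\cdot K/W})$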

  14. Presentation on the Large-Scale Renewable Energy Guide | Department...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Presentation covers the Large-Scale RE Guide: Developing Renewable ...

  15. Revised Environmental Assessment Large-Scale, Open-Air Explosive

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Environmental Assessment: Large-Scale, Open-Air Explosive Detonation, DIVINE STRAKE, at the Nevada Test Site, May 2006. Prepared by the Department of Energy, National Nuclear Security Administration, Nevada Site Office.

  16. LARGE BLOCK TEST STATUS REPORT

    SciTech Connect (OSTI)

    Wilder, D. G.; Blair, S. C.; Buscheck, T.; Carloson, R. C.; Lee, K.; Meike, A.; Ramirez, J. L.; Sevougian, D.

    1997-08-26

    This report is intended to serve as a status report, which essentially transmits the data that have been collected to date on the Large Block Test (LBT). The analyses of data will be performed during FY98, and then a complete report will be prepared. This status report includes introductory material that is not strictly needed to transmit the data but is available at this time and is therefore included. As such, this status report will serve as the template for the future report, and the information is thus preserved. The United States Department of Energy (DOE) is investigating the suitability of Yucca Mountain (YM) as a potential site for the nation's first high-level nuclear waste repository. As shown in Fig. 1-1, the site is located about 120 km northwest of Las Vegas, Nevada, in an area of uninhabited desert.

  17. The Phoenix series large scale LNG pool fire experiments.

    SciTech Connect (OSTI)

    Simpson, Richard B.; Jensen, Richard Pearson; Demosthenous, Byron; Luketa, Anay Josephine; Ricks, Allen Joseph; Hightower, Marion Michael; Blanchat, Thomas K.; Helmick, Paul H.; Tieszen, Sheldon Robert; Deola, Regina Anne; Mercier, Jeffrey Alan; Suo-Anttila, Jill Marie; Miller, Timothy J.

    2010-12-01

    The increasing demand for natural gas could increase the number and frequency of Liquefied Natural Gas (LNG) tanker deliveries to ports across the United States. Because of the increasing number of shipments and the number of possible new facilities, concerns about the safety of the public and property from accidental, and more importantly intentional, spills have increased. While improvements have been made over the past decade in assessing hazards from LNG spills, the existing experimental data is much smaller in size and scale than many postulated large accidental and intentional spills. Since the physics and hazards from a fire change with fire size, there are concerns about the adequacy of current hazard prediction techniques for large LNG spills and fires. To address these concerns, Congress funded the Department of Energy (DOE) in 2008 to conduct a series of laboratory and large-scale LNG pool fire experiments at Sandia National Laboratories (Sandia) in Albuquerque, New Mexico. This report presents the test data and results of both sets of fire experiments. A series of five reduced-scale (gas burner) tests (yielding 27 sets of data) was conducted in 2007 and 2008 at Sandia's Thermal Test Complex (TTC) to assess flame height to fire diameter ratios as a function of nondimensional heat release rates for extrapolation to large-scale LNG fires. The large-scale LNG pool fire experiments were conducted in a 120 m diameter pond specially designed and constructed in Sandia's Area III large-scale test complex. Two fire tests of LNG spills of 21 and 81 m in diameter were conducted in 2009 to improve the understanding of flame height, smoke production, and burn rate, and therefore the physics and hazards of large LNG spills and fires.
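
    For context on the flame height to fire diameter ratio as a function of nondimensional heat release rate mentioned above, one widely used pool fire correlation (Heskestad's, cited here only as general background and not as the correlation fit in this report) has the form:

        $\frac{L}{D} = 3.7\,\dot{Q}^{*\,2/5} - 1.02, \qquad \dot{Q}^{*} = \frac{\dot{Q}}{\rho_\infty c_p T_\infty \sqrt{gD}\,D^{2}}$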

  18. Large-Scale PV Integration Study

    SciTech Connect (OSTI)

    Lu, Shuai; Etingov, Pavel V.; Diao, Ruisheng; Ma, Jian; Samaan, Nader A.; Makarov, Yuri V.; Guo, Xinxin; Hafen, Ryan P.; Jin, Chunlian; Kirkham, Harold; Shlatz, Eugene; Frantzis, Lisa; McClive, Timothy; Karlson, Gregory; Acharya, Dhruv; Ellis, Abraham; Stein, Joshua; Hansen, Clifford; Chadliev, Vladimir; Smart, Michael; Salgo, Richard; Sorensen, Rahn; Allen, Barbara; Idelchik, Boris

    2011-07-29

    This research effort evaluates the impact of large-scale photovoltaic (PV) and distributed generation (DG) output on NV Energy’s electric grid system in southern Nevada. It analyzes the ability of NV Energy’s generation to accommodate increasing amounts of utility-scale PV and DG, and the resulting cost of integrating variable renewable resources. The study was jointly funded by the United States Department of Energy and NV Energy, and conducted by a project team comprised of industry experts and research scientists from Navigant Consulting Inc., Sandia National Laboratories, Pacific Northwest National Laboratory and NV Energy.

  19. Large-Scale Renewable Energy Guide | Department of Energy

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Presentation covers the Large-Scale RE Guide: Developing Renewable Energy Projects Larger than 10 MWs at...

  20. Energy Department Applauds Nation's First Large-Scale Industrial...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Energy Department Applauds Nation's First Large-Scale Industrial Carbon Capture and Storage Facility ...

  1. ACCOLADES: A Scalable Workflow Framework for Large-Scale Simulation...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    ACCOLADES: A Scalable Workflow Framework for Large-Scale Simulation and Analyses of Automotive Engines ...

  2. Large-Scale Renewable Energy Guide: Developing Renewable Energy...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Large-Scale Renewable Energy Guide: Developing Renewable Energy Projects Larger Than 10 MWs at Federal Facilities ...

  3. Large-Scale Manufacturing of Nanoparticle-Based Lubrication Additives...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Large-Scale Manufacturing of Nanoparticle-Based Lubrication Additives (PDF: nanoparticulate-basedlubricati...)

  4. Large-Scale Residential Energy Efficiency Programs Based on CFLs...

    Open Energy Info (EERE)

    Tool Summary: Large-Scale Residential Energy Efficiency Programs Based on CFLs ...

  5. Determination of Large-Scale Cloud Ice Water Concentration by...

    Office of Scientific and Technical Information (OSTI)

    Technical Report: Determination of Large-Scale Cloud Ice Water Concentration by Combining Surface ...

  6. The Effective Field Theory of Cosmological Large Scale Structures...

    Office of Scientific and Technical Information (OSTI)

    The Effective Field Theory of Cosmological Large Scale Structures ...

  7. DLFM library tools for large scale dynamic applications

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Large-scale Python and other dynamic applications may spend a large amount of time at startup. The DLFM library,...

  8. Large-scale quasi-geostrophic magnetohydrodynamics

    SciTech Connect (OSTI)

    Balk, Alexander M.

    2014-12-01

    We consider the ideal magnetohydrodynamics (MHD) of a shallow fluid layer on a rapidly rotating planet or star. The presence of a background toroidal magnetic field is assumed, and the 'shallow water' beta-plane approximation is used. We derive a single equation for the slow large length scale dynamics. The range of validity of this equation fits the MHD of the lighter fluid at the top of Earth's outer core. The form of this equation is similar to the quasi-geostrophic (Q-G) equation (for usual ocean or atmosphere), but the parameters are essentially different. Our equation also implies the inverse cascade; but contrary to the usual Q-G situation, the energy cascades to smaller length scales, while the enstrophy cascades to the larger scales. We find the Kolmogorov-type spectrum for the inverse cascade. The spectrum indicates the energy accumulation in larger scales. In addition to the energy and enstrophy, the obtained equation possesses an extra (adiabatic-type) invariant. Its presence implies energy accumulation in the 30° sector around zonal direction. With some special energy input, the extra invariant can lead to the accumulation of energy in zonal magnetic field; this happens if the input of the extra invariant is small, while the energy input is considerable.

  9. Investigation of CO2 plume behavior for a large-scale pilot test of geologic carbon storage in a saline formation

    SciTech Connect (OSTI)

    Doughty, C.

    2009-04-01

    The hydrodynamic behavior of carbon dioxide (CO{sub 2}) injected into a deep saline formation is investigated, focusing on trapping mechanisms that lead to CO{sub 2} plume stabilization. A numerical model of the subsurface at a proposed power plant with CO{sub 2} capture is developed to simulate a planned pilot test, in which 1,000,000 metric tons of CO{sub 2} is injected over a four-year period, and the subsequent evolution of the CO{sub 2} plume for hundreds of years. Key measures are plume migration distance and the time evolution of the partitioning of CO{sub 2} between dissolved, immobile free-phase, and mobile free-phase forms. Model results indicate that the injected CO{sub 2} plume is effectively immobilized at 25 years. At that time, 38% of the CO{sub 2} is in dissolved form, 59% is immobile free phase, and 3% is mobile free phase. The plume footprint is roughly elliptical, and extends much farther up-dip of the injection well than down-dip. The pressure increase extends far beyond the plume footprint, but the pressure response decreases rapidly with distance from the injection well, and decays rapidly in time once injection ceases. Sensitivity studies that were carried out to investigate the effect of poorly constrained model parameters (permeability, permeability anisotropy, and residual CO{sub 2} saturation) indicate that small changes in properties can have a large impact on plume evolution, causing significant trade-offs between different trapping mechanisms.

  10. Batteries for Large Scale Energy Storage

    SciTech Connect (OSTI)

    Soloveichik, Grigorii L.

    2011-07-15

    In recent years, with the deployment of renewable energy sources, advances in electrified transportation, and development in smart grids, the markets for large-scale stationary energy storage have grown rapidly. Electrochemical energy storage methods are strong candidate solutions due to their high energy density, flexibility, and scalability. This review provides an overview of mature and emerging technologies for secondary and redox flow batteries. New developments in the chemistry of secondary and flow batteries as well as regenerative fuel cells are also considered. Advantages and disadvantages of current and prospective electrochemical energy storage options are discussed. The most promising technologies in the short term are high-temperature sodium batteries with β-alumina electrolyte, lithium-ion batteries, and flow batteries. Regenerative fuel cells and lithium metal batteries with high energy density require further research to become practical.

  11. Supporting large-scale computational science

    SciTech Connect (OSTI)

    Musick, R., LLNL

    1998-02-19

    Business needs have driven the development of commercial database systems since their inception. As a result, there has been a strong focus on supporting many users, minimizing the potential corruption or loss of data, and maximizing performance metrics like transactions per second, or TPC-C and TPC-D results. It turns out that these optimizations have little to do with the needs of the scientific community, and in particular have little impact on improving the management and use of large-scale high-dimensional data. At the same time, there is an unanswered need in the scientific community for many of the benefits offered by a robust DBMS. For example, tying an ad-hoc query language such as SQL together with a visualization toolkit would be a powerful enhancement to current capabilities. Unfortunately, there has been little emphasis or discussion in the VLDB community on this mismatch over the last decade. The goal of the paper is to identify the specific issues that need to be resolved before large-scale scientific applications can make use of DBMS products. This topic is addressed in the context of an evaluation of commercial DBMS technology applied to the exploration of data generated by the Department of Energy's Accelerated Strategic Computing Initiative (ASCI). The paper describes the data being generated for ASCI as well as current capabilities for interacting with and exploring this data. The attraction of applying standard DBMS technology to this domain is discussed, as well as the technical and business issues that currently make this an infeasible solution.
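    The "ad-hoc query language tied to a visualization toolkit" idea mentioned above can be illustrated with a minimal, self-contained Python sketch; the in-memory table, column names, and values below are invented for illustration and have nothing to do with the ASCI data described in the abstract.

        # Illustrative only: an ad-hoc SQL query feeding a plotting toolkit.
        import sqlite3
        import matplotlib.pyplot as plt

        conn = sqlite3.connect(":memory:")  # hypothetical dataset built in memory
        conn.execute("CREATE TABLE zone_summary (timestep INTEGER, max_temperature REAL)")
        conn.executemany(
            "INSERT INTO zone_summary VALUES (?, ?)",
            [(t, 300.0 + 5.0 * t) for t in range(10)],
        )

        # Ad-hoc query: any SELECT could be substituted here interactively.
        cur = conn.execute("SELECT timestep, max_temperature FROM zone_summary ORDER BY timestep")
        steps, temps = zip(*cur.fetchall())
        conn.close()

        plt.plot(steps, temps, marker="o")
        plt.xlabel("timestep")
        plt.ylabel("max temperature")
        plt.title("Ad-hoc SQL query result, visualized")
        plt.show()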

  12. Stimulated forward Raman scattering in large scale-length laser...

    Office of Scientific and Technical Information (OSTI)

    Stimulated forward Raman scattering in large scale-length laser-produced plasmas

  13. Stimulated forward Raman scattering in large scale-length laser...

    Office of Scientific and Technical Information (OSTI)

    Stimulated forward Raman scattering in large scale-length laser-produced plasmas

  14. Locations of Smart Grid Demonstration and Large-Scale Energy...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Locations of Smart Grid Demonstration and Large-Scale Energy Storage Projects: map of the United States ...

  15. SimFS: A Large Scale Parallel File System Simulator

    Energy Science and Technology Software Center (OSTI)

    2011-08-30

    The software provides both framework and tools to simulate a large-scale parallel file system such as Lustre.

  16. DLFM library tools for large scale dynamic applications

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    DLFM library tools for large scale dynamic applications: Large-scale Python and other dynamic applications may spend a significant amount of time at startup. The DLFM library, developed by Mike Davis at Cray, Inc., is a set of functions that can be incorporated into a dynamically-linked application to provide improved performance during the loading of dynamic libraries when running the application at large scale on Edison. To access this library, do module

  17. Sensitivity technologies for large scale simulation.

    SciTech Connect (OSTI)

    Collis, Samuel Scott; Bartlett, Roscoe Ainsworth; Smith, Thomas Michael; Heinkenschloss, Matthias; Wilcox, Lucas C.; Hill, Judith C.; Ghattas, Omar; Berggren, Martin Olof; Akcelik, Volkan; Ober, Curtis Curry; van Bloemen Waanders, Bart Gustaaf; Keiter, Eric Richard

    2005-01-01

    Sensitivity analysis is critically important to numerous analysis algorithms, including large scale optimization, uncertainty quantification, reduced order modeling, and error estimation. Our research focused on developing tools, algorithms, and standard interfaces to facilitate the implementation of sensitivity-type analysis into existing codes; equally important, the work focused on ways to increase the visibility of sensitivity analysis. We attempt to accomplish the first objective through the development of hybrid automatic differentiation tools, standard linear algebra interfaces for numerical algorithms, time domain decomposition algorithms, and two-level Newton methods. We attempt to accomplish the second goal by presenting the results of several case studies in which direct sensitivities and adjoint methods have been effectively applied, in addition to an investigation of h-p adaptivity using adjoint-based a posteriori error estimation. A mathematical overview is provided of direct sensitivities and adjoint methods for both steady state and transient simulations. Two case studies are presented to demonstrate the utility of these methods. A direct sensitivity method is implemented to solve a source inversion problem for steady state internal flows subject to convection diffusion. Real time performance is achieved using novel decomposition into offline and online calculations. Adjoint methods are used to reconstruct initial conditions of a contamination event in an external flow. We demonstrate an adjoint-based transient solution. In addition, we investigated time domain decomposition algorithms in an attempt to improve the efficiency of transient simulations. Because derivative calculations are at the root of sensitivity calculations, we have developed hybrid automatic differentiation methods and implemented this approach for shape optimization for gas dynamics using the Euler equations. The hybrid automatic differentiation method was applied to a first order approximation of the Euler equations and used as a preconditioner. In comparison to other methods, the AD preconditioner showed better convergence behavior. Our ultimate target is to perform shape optimization and hp adaptivity using adjoint formulations in the Premo compressible fluid flow simulator. A mathematical formulation for mixed-level simulation algorithms has been developed where different physics interact at potentially different spatial resolutions in a single domain. To minimize the implementation effort, explicit solution methods can be considered; however, implicit methods are preferred if computational efficiency is of high priority. We present the use of a partial elimination nonlinear solver technique to solve these mixed level problems and show how these formulations are closely coupled to intrusive optimization approaches and sensitivity analyses. Production codes are typically not designed for sensitivity analysis or large scale optimization. The implementation of our optimization libraries into multiple production simulation codes in which each code has its own linear algebra interface becomes an intractable problem. In an attempt to streamline this task, we have developed a standard interface between the numerical algorithm (such as optimization) and the underlying linear algebra. These interfaces (TSFCore and TSFCoreNonlin) have been adopted by the Trilinos framework and the goal is to promote the use of these interfaces especially with new developments.
Finally, an adjoint-based a posteriori error estimator has been developed for discontinuous Galerkin discretization of Poisson's equation. The goal is to investigate other ways to leverage the adjoint calculations, and we show how the convergence of the forward problem can be improved by adapting the grid using adjoint-based error estimates. Error estimation is usually conducted with continuous adjoints, but if discrete adjoints are available it may be possible to reuse the discrete version for error estimation. We investigate the advantages and disadvantages of continuous and discrete adjoints.
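    For readers who want the standard formulas behind the "direct" and "adjoint" terminology used above (textbook material, not taken from this report), consider a steady-state residual equation R(u, p) = 0 with state u, parameters p, and objective J(u, p):

        % Direct (forward) sensitivity: one linear solve per parameter
        \frac{\partial R}{\partial u}\frac{du}{dp} = -\frac{\partial R}{\partial p},
        \qquad
        \frac{dJ}{dp} = \frac{\partial J}{\partial p} + \frac{\partial J}{\partial u}\frac{du}{dp}

        % Adjoint sensitivity: one linear solve per objective, independent of the number of parameters
        \left(\frac{\partial R}{\partial u}\right)^{T}\lambda = \left(\frac{\partial J}{\partial u}\right)^{T},
        \qquad
        \frac{dJ}{dp} = \frac{\partial J}{\partial p} - \lambda^{T}\frac{\partial R}{\partial p}

    The direct form scales with the number of parameters and the adjoint form with the number of objectives, which is why the source-inversion and initial-condition-reconstruction case studies described above lean on one or the other.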

  18. Large-Scale Federal Renewable Energy Projects | Department of Energy

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Large-Scale Federal Renewable Energy Projects: Renewable energy projects larger than 10 megawatts (MW), also known as utility-scale projects, are complex and typically require private-sector financing. The Federal Energy Management Program (FEMP) developed a guide to help federal agencies, and the developers and financiers that work with them, to successfully install these projects at federal facilities. FEMP's Large-Scale Renewable Energy Guide,

  19. Large-Scale Wind Training Program

    SciTech Connect (OSTI)

    Porter, Richard L.

    2013-07-01

    Project objective is to develop a credit-bearing wind technician program and a non-credit safety training program, train faculty, and purchase/install large wind training equipment.

  20. Massachusetts Large Blade Test Facility Final Report

    SciTech Connect (OSTI)

    Rahul Yarala; Rob Priore

    2011-09-02

    Project Objective: The Massachusetts Clean Energy Center (CEC) will design, construct, and ultimately have responsibility for the operation of the Large Wind Turbine Blade Test Facility, which is an advanced blade testing facility capable of testing wind turbine blades up to at least 90 meters in length on three test stands. Background: Wind turbine blade testing is required to meet international design standards, and is a critical factor in maintaining high levels of reliability and mitigating the technical and financial risk of deploying mass-produced wind turbine models. Testing is also needed to identify specific blade design issues that may contribute to reduced wind turbine reliability and performance. Testing is also required to optimize aerodynamics and structural performance and to encourage the development of new technologies and materials, making wind even more competitive. The objective of this project is to accelerate the design and construction of a large wind blade testing facility capable of testing blades with minimum queue times at a reasonable cost. This testing facility will encourage and provide the opportunity for the U.S. wind industry to conduct more rigorous testing of blades to improve wind turbine reliability.

  1. Large-Scale Industrial Carbon Capture, Storage Plant Begins Construction |

    Energy Savers [EERE]

    Large-Scale Industrial Carbon Capture, Storage Plant Begins Construction (August 24, 2011). Washington, DC - Construction activities have begun at an Illinois ethanol plant that will demonstrate carbon capture and storage. The project, sponsored by the U.S. Department of Energy's Office of Fossil Energy, is the first large-scale integrated carbon capture and storage (CCS) demonstration

  2. Large Scale Computing and Storage Requirements for Advanced Scientific

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Large Scale Computing and Storage Requirements for Advanced Scientific Computing Research: Target 2014. An ASCR / NERSC Review, January 5-6, 2011. Final Report: Large Scale Computing and Storage Requirements for Advanced Scientific Computing Research, Report of the Joint ASCR / NERSC Workshop conducted January 5-6, 2011. Goals: This workshop is being

  3. Large Scale Computing and Storage Requirements for Basic Energy Sciences:

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Large Scale Computing and Storage Requirements for Basic Energy Sciences: Target 2014. Final Report: Large Scale Computing and Storage Requirements for Basic Energy Sciences, Report of the Joint BES / ASCR / NERSC Workshop conducted February 9-10, 2010. Workshop Agenda: The agenda for this workshop is presented here, including presentation times and speaker information. Workshop Presentations: Large Scale Computing and Storage Requirements for Basic

  4. Energy Department Applauds Nation's First Large-Scale Industrial Carbon

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Energy Department Applauds Nation's First Large-Scale Industrial Carbon Capture and Storage Facility (August 24, 2011). Washington, D.C. - The U.S. Department of Energy issued the following statement in support of today's groundbreaking for construction of the nation's first large-scale industrial carbon capture and storage (ICCS) facility in Decatur,

  5. Effects of Volcanism, Crustal Thickness, and Large Scale Faulting...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Effects of Volcanism, Crustal Thickness, and Large Scale Faulting on the Development and Evolution of Geothermal Systems: Collaborative Project in Chile

  6. Computational Fluid Dynamics & Large-Scale Uncertainty Quantification...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Computational Fluid Dynamics & Large-Scale Uncertainty Quantification for Wind Energy: A team of Sandia experts in aerospace engineering, scientific computing, and mathematics ...

  7. Strategies to Finance Large-Scale Deployment of Renewable Energy...

    Open Energy Info (EERE)

    Strategies to Finance Large-Scale Deployment of Renewable Energy Projects: An Economic Development and Infrastructure Approach

  8. Large-Scale Hydropower Basics | Department of Energy

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Large-Scale Hydropower Basics (August 14, 2013): Large-scale hydropower plants are generally developed to produce electricity for government or electric utility projects. These plants are more than 30 megawatts (MW) in size, and there is more than 80,000 MW of installed generation capacity in the United States today. Most large-scale hydropower projects use a dam and a reservoir to retain water from a river. When the

  9. Large Scale Computing and Storage Requirements for Fusion Energy...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Requirements Reviews: Target 2014, Fusion Energy Sciences (FES). Large Scale Computing and Storage Requirements for Fusion ...

  10. Overcoming the Barrier to Achieving Large-Scale Production -...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Overcoming the Barrier to Achieving Large-Scale Production - A Case Study This presentation summarizes the information given by Semprius during the Photovoltaic Validation and ...

  11. Optimizing Cluster Heads for Energy Efficiency in Large-Scale...

    Office of Scientific and Technical Information (OSTI)

    Optimizing Cluster Heads for Energy Efficiency in Large-Scale Heterogeneous Wireless Sensor Networks. Gu, Yi; Wu, Qishi; Rao, Nageswara S. V. Hindawi Publishing Corporation.

  12. FEMP Helps Federal Facilities Develop Large-Scale Renewable Energy...

    Broader source: Energy.gov (indexed) [DOE]

    jobs, and advancing national goals for energy security. The guide describes the fundamentals of deploying financially attractive, large-scale renewable energy projects and...

  13. A Model for Turbulent Combustion Simulation of Large Scale Hydrogen...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    A Model for Turbulent Combustion Simulation of Large Scale Hydrogen Explosions Event Sponsor: Argonne Leadership Computing Facility Seminar Start Date: Oct 6 2015 - 10:00am...

  14. Optimizing Cluster Heads for Energy Efficiency in Large-Scale...

    Office of Scientific and Technical Information (OSTI)

    clustering is generally considered as an efficient and scalable way to facilitate the management and operation of such large-scale networks and minimize the total energy...

  15. Large Scale Computing and Storage Requirements for Advanced Scientific...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Large Scale Computing and Storage Requirements for Advanced Scientific Computing Research: Target 2014 ... This workshop is being organized by the Department of Energy's Office of ...

  16. Large Scale Production Computing and Storage Requirements for...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    2013, Hilton Washington DC/Rockville Hotel and Executive Meeting Center, 1750 Rockville Pike, Rockville, MD 20852-1699. Final Report: Large Scale Computing and Storage Requirements...

  17. DOE's New Large Blade Test Facility in Massachusetts Completes...

    Office of Environmental Management (EM)

    DOE's New Large Blade Test Facility in Massachusetts Completes First Commercial Blade Tests

  18. WETS - Azura Half Scale Testing MOIS Documentation

    DOE Data Explorer [Office of Scientific and Technical Information (OSTI)]

    Eric Nelson

    2015-05-30

    This submission includes documentation on the Modular Ocean Instrumentation System (MOIS) installation on the Azura 1/2 scale wave energy converter at the Marine Station Kaneohe Bay (MCBH). Data from the deployment will be uploaded over the course of the test. The instrumentation and data come from the NREL team participating in this testing.

  19. Scaling issues associated with thermal and structural modeling and testing

    SciTech Connect (OSTI)

    Thomas, R.K.; Moya, J.L.; Skocypec, R.D.

    1993-10-01

    Sandia National Laboratories (SNL) is actively engaged in research to characterize abnormal environments, and to improve our capability to accurately predict the response of engineered systems to thermal and structural events. Abnormal environments, such as impact and fire, are complex and highly nonlinear phenomena which are difficult to model by computer simulation. Validation of computer results with full scale, high fidelity test data is required. The number of possible abnormal environments and the range of initial conditions are very large. Because full-scale tests are very costly, only a minimal number have been conducted. Scale model tests are often performed to span the range of abnormal environments and initial conditions unobtainable by full-scale testing. This paper will discuss testing capabilities at SNL, issues associated with thermal and structural scaling, and issues associated with extrapolating scale model data to full-scale system response. Situated a few minutes from Albuquerque, New Mexico, are the unique test facilities of Sandia National Laboratories. The testing complex is comprised of over 40 facilities which occupy over 40 square miles. Many of the facilities have been designed and built by SNL to simulate complex problems encountered in engineering analysis and design. The facilities can provide response measurements, under closely controlled conditions, to both verify mathematical models of engineered systems and satisfy design specifications.

  20. Methods for Quantifying the Uncertainties of LSIT Test Parameters, Test Results, and Full-Scale Mixing Performance Using Models Developed from Scaled Test Data

    SciTech Connect (OSTI)

    Piepel, Gregory F.; Cooley, Scott K.; Kuhn, William L.; Rector, David R.; Heredia-Langner, Alejandro

    2015-05-01

    This report discusses the statistical methods for quantifying uncertainties in 1) test responses and other parameters in the Large Scale Integrated Testing (LSIT), and 2) estimates of coefficients and predictions of mixing performance from models that relate test responses to test parameters. Testing at a larger scale has been committed to by Bechtel National, Inc. and the U.S. Department of Energy (DOE) to “address uncertainties and increase confidence in the projected, full-scale mixing performance and operations” in the Waste Treatment and Immobilization Plant (WTP).

  1. Towards a Large-Scale Recording System: Demonstration of Polymer...

    Office of Scientific and Technical Information (OSTI)

    Towards a Large-Scale Recording System: Demonstration of Polymer-Based Penetrating Array for Chronic Neural Recording

  2. Large-Scale First-Principles Molecular Dynamics Simulations on...

    Office of Scientific and Technical Information (OSTI)

    Large-Scale First-Principles Molecular Dynamics Simulations on the BlueGene/L Platform using the Qbox Code

  3. How Three Retail Buyers Source Large-Scale Solar Electricity

    Broader source: Energy.gov [DOE]

    Large-scale, non-utility solar power purchase agreements (PPAs) are still a rarity despite the growing popularity of PPAs across the country. In this webinar, participants will learn more about how...

  4. COLLOQUIUM: Liquid Metal Batteries for Large-scale Energy Storage...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    COLLOQUIUM: Liquid Metal Batteries for Large-scale Energy Storage. Dr. Hojong Kim, Pennsylvania State ... June 22, 2016, 4:15pm to 5:30pm, MBG Auditorium, PPPL (284 cap.)

  5. ARM - Evaluation Product - Vertical Air Motion during Large-Scale...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Evaluation Product: Vertical Air Motion during Large-Scale Stratiform Rain (ARM Data Discovery)

  6. MEASURING LENSING MAGNIFICATION OF QUASARS BY LARGE SCALE STRUCTURE USING

    Office of Scientific and Technical Information (OSTI)

    MEASURING LENSING MAGNIFICATION OF QUASARS BY LARGE SCALE STRUCTURE USING THE VARIABILITY-LUMINOSITY RELATION (Journal Article): We introduce a technique to measure gravitational lensing magnification using the variability of type I quasars. Quasars' variability amplitudes and luminosities

  7. Breakthrough Large-Scale Industrial Project Begins Carbon Capture and

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Breakthrough Large-Scale Industrial Project Begins Carbon Capture and Utilization (January 25, 2013). Washington, DC - A breakthrough carbon capture, utilization, and storage (CCUS) project in Texas has begun capturing carbon dioxide (CO2) and piping it to an oilfield for use in enhanced oil recovery (EOR). The project at Air Products

  8. The Cielo Petascale Capability Supercomputer: Providing Large-Scale

    Office of Scientific and Technical Information (OSTI)

    Conference: The Cielo Petascale Capability Supercomputer: Providing Large-Scale Computing for Stockpile Stewardship

  9. DOE Awards First Three Large-Scale Carbon Sequestration Projects |

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    DOE Awards First Three Large-Scale Carbon Sequestration Projects (October 9, 2007). U.S. Projects Total $318 Million and Further President Bush's Initiatives to Advance Clean Energy Technologies to Confront Climate Change. WASHINGTON, DC - In a major step forward for demonstrating the promise of clean energy technology, U.S. Deputy Secretary of Energy Clay Sell today announced that the Department of Energy

  10. Energy Department Announces Participation in Clean Line's Large-Scale

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Energy Department Announces Participation in Clean Line's Large-Scale Energy Transmission Project (March 25, 2016). WASHINGTON - Building on the Department of Energy's (DOE) ongoing efforts to modernize the grid and accelerate the deployment of renewable energy, today U.S. Secretary of Energy Ernest Moniz

  11. North American extreme temperature events and related large scale

    Office of Scientific and Technical Information (OSTI)

    North American extreme temperature events and related large scale meteorological patterns: A review of statistical methods, dynamics, modeling, and trends (Journal Article). This paper reviews

  12. Large Scale Production Computing and Storage Requirements for Fusion Energy

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Large Scale Production Computing and Storage Requirements for Fusion Energy Sciences: Target 2017. The NERSC Program Requirements Review "Large Scale Production Computing and Storage Requirements for Fusion Energy Sciences" is organized by the Department of Energy's Office of Fusion Energy Sciences (FES), Office of Advanced Scientific Computing Research (ASCR), and the National Energy Research Scientific Computing Center (NERSC). The review's goal is to

  13. Large Scale Production Computing and Storage Requirements for High Energy

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Large Scale Production Computing and Storage Requirements for High Energy Physics: Target 2017. The NERSC Program Requirements Review "Large Scale Computing and Storage Requirements for High Energy Physics" is organized by the Department of Energy's Office of High Energy Physics (HEP), Office of Advanced Scientific Computing Research (ASCR), and the National Energy Research Scientific Computing Center (NERSC). The review's goal is to characterize

  14. Cosmological Simulations for Large-Scale Sky Surveys | Argonne Leadership

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Cosmological Simulations for Large-Scale Sky Surveys. PI Name: Salman Habib; PI Email: habib@anl.gov; Institution: Argonne National Laboratory; Allocation Program: INCITE; Allocation Hours at ALCF: 80 Million; Year: 2016; Research Domain: Physics. The focus of cosmology today is on its two mysterious pillars, dark matter and dark energy. Large-scale sky surveys are the current drivers of precision cosmology and have been instrumental in making fundamental discoveries in these

  15. The Cielo Petascale Capability Supercomputer: Providing Large-Scale

    Office of Scientific and Technical Information (OSTI)

    Conference: The Cielo Petascale Capability Supercomputer: Providing Large-Scale Computing for Stockpile Stewardship. Authors: Vigil, Benny Manuel; Doerfler, Douglas W. (Los Alamos National Laboratory). Publication Date: 2013-03-11

  16. Understanding large scale HPC systems through scalable monitoring and

    Office of Scientific and Technical Information (OSTI)

    Understanding large scale HPC systems through scalable monitoring and analysis (Conference). As HPC systems grow in size and complexity, diagnosing problems and understanding system behavior, including failure modes, becomes increasingly difficult and time consuming. At Sandia National Laboratories we have developed a tool, OVIS, to facilitate

  17. Lessons from Large-Scale Renewable Energy Integration Studies: Preprint

    SciTech Connect (OSTI)

    Bird, L.; Milligan, M.

    2012-06-01

    In general, large-scale integration studies in Europe and the United States find that high penetrations of renewable generation are technically feasible with operational changes and increased access to transmission. This paper describes other key findings such as the need for fast markets, large balancing areas, system flexibility, and the use of advanced forecasting.

  18. Large scale, urban decontamination; developments, historical examples and lessons learned

    SciTech Connect (OSTI)

    Demmer, R.L.

    2007-07-01

    Recent terrorist threats and actions have led to a renewed interest in the technical field of large scale, urban environment decontamination. One of the driving forces for this interest is the prospect for the cleanup and removal of radioactive dispersal device (RDD or 'dirty bomb') residues. In response, the United States Government has spent many millions of dollars investigating RDD contamination and novel decontamination methodologies. The efficiency of RDD cleanup response will be improved with these new developments and a better understanding of the 'old reliable' methodologies. While an RDD is primarily an economic and psychological weapon, the need to clean up and return valuable or culturally significant resources to the public is nonetheless valid. Several private companies, universities and National Laboratories are currently developing novel RDD cleanup technologies. Because of their longstanding association with radioactive facilities, the U.S. Department of Energy National Laboratories are at the forefront in developing and testing new RDD decontamination methods. However, such cleanup technologies are likely to be fairly task specific, while many different contamination mechanisms, substrates, and environmental conditions will make actual application more complicated. Some major efforts have also been made to model potential contamination, to evaluate both old and new decontamination techniques and to assess their readiness for use. There are a number of significant lessons that can be gained from a look at previous large scale cleanup projects. Too often we are quick to apply a costly 'package and dispose' method when sound technological cleaning approaches are available. Understanding historical perspectives, advanced planning and constant technology improvement are essential to successful decontamination. (authors)

  19. Design advanced for large-scale, economic, floating LNG plant

    SciTech Connect (OSTI)

    Naklie, M.M.

    1997-06-30

    A floating LNG plant design has been developed which is technically feasible, economical, safe, and reliable. This technology will allow monetization of small marginal fields and improve the economics of large fields. Mobil's world-scale plant design has a capacity of 6 million tons/year of LNG and up to 55,000 b/d condensate produced from 1 bcfd of feed gas. The plant would be located on a large, secure, concrete barge with a central moonpool. LNG storage is provided for 250,000 cu m and condensate storage for 650,000 bbl, and both products are off-loaded from the barge. Model tests have verified the stability of the barge structure: barge motions are low enough to permit the plant to continue operation in a 100-year storm in the Pacific Rim. Moreover, the barge is spread-moored, eliminating the need for a turret and swivel. Because the design is generic, the plant can process a wide variety of feed gases and operate in different environments, should the plant be relocated. This capability potentially gives the plant investment a much longer project life because its use is not limited to the life of only one producing area.

  20. EINSTEIN'S SIGNATURE IN COSMOLOGICAL LARGE-SCALE STRUCTURE

    SciTech Connect (OSTI)

    Bruni, Marco; Hidalgo, Juan Carlos; Wands, David

    2014-10-10

    We show how the nonlinearity of general relativity generates a characteristic non-Gaussian signal in cosmological large-scale structure that we calculate at all perturbative orders in a large-scale limit. Newtonian gravity and general relativity provide complementary theoretical frameworks for modeling large-scale structure in ΛCDM cosmology; a relativistic approach is essential to determine initial conditions, which can then be used in Newtonian simulations studying the nonlinear evolution of the matter density. Most inflationary models in the very early universe predict an almost Gaussian distribution for the primordial metric perturbation, ζ. However, we argue that it is the Ricci curvature of comoving-orthogonal spatial hypersurfaces, R, that drives structure formation at large scales. We show how the nonlinear relation between the spatial curvature, R, and the metric perturbation, ζ, translates into a specific non-Gaussian contribution to the initial comoving matter density that we calculate for the simple case of an initially Gaussian ζ. Our analysis shows the nonlinear signature of Einstein's gravity in large-scale structure.

  1. DOE Completes Large-Scale Carbon Sequestration Project Awards | Department

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    DOE Completes Large-Scale Carbon Sequestration Project Awards (November 17, 2008). Regional Partner to Demonstrate Safe and Permanent Storage of 2 Million Tons of CO2 at Wyoming Site. WASHINGTON, DC - Completing a series of awards through its Regional Carbon Sequestration Partnership Program, the U.S. Department of Energy (DOE) today awarded $66.9 million to the Big Sky Regional Carbon Sequestration Partnership for the

  2. Large-Scale Algal Cultivation, Harvesting and Downstream Processing Workshop

    Broader source: Energy.gov [DOE]

    ATP3 (Algae Testbed Public-Private Partnership) is hosting the Large-Scale Algal Cultivation, Harvesting and Downstream Processing Workshop on November 2–6, 2015, at the Arizona Center for Algae Technology and Innovation in Mesa, Arizona. Topics will include practical applications of growing and managing microalgal cultures at production scale (such as methods for handling cultures, screening strains for desirable characteristics, identifying and mitigating contaminants, scaling up cultures for outdoor growth, harvesting and processing technologies, and the analysis of lipids, proteins, and carbohydrates). Related training will include hands-on laboratory and field opportunities.

  3. Large-scale anisotropy in stably stratified rotating flows

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Marino, R.; Mininni, P. D.; Rosenberg, D. L.; Pouquet, A.

    2014-08-28

    We present results from direct numerical simulations of the Boussinesq equations in the presence of rotation and/or stratification, both in the vertical direction. The runs are forced isotropically and randomly at small scales and have spatial resolutions of up to 1024{sup 3} grid points and Reynolds numbers of {approx}1000. We first show that solutions with negative energy flux and inverse cascades develop in rotating turbulence, whether or not stratification is present. However, the purely stratified case is characterized instead by an early-time, highly anisotropic transfer to large scales with almost zero net isotropic energy flux. This is consistent with previous studies that observed the development of vertically sheared horizontal winds, although only at substantially later times. However, and unlike previous works, when sufficient scale separation is allowed between the forcing scale and the domain size, the total energy displays a perpendicular (horizontal) spectrum with power-law behavior compatible with k{sub ⊥}{sup -5/3}, including in the absence of rotation. In this latter purely stratified case, such a spectrum is the result of a direct cascade of the energy contained in the large-scale horizontal wind, as is evidenced by a strong positive flux of energy in the parallel direction at all scales including the largest resolved scales.

  4. ANALYSIS OF TURBULENT MIXING JETS IN LARGE SCALE TANK

    SciTech Connect (OSTI)

    Lee, S; Richard Dimenna, R; Robert Leishear, R; David Stefanko, D

    2007-03-28

    Flow evolution models were developed to evaluate the performance of the new advanced design mixer pump for sludge mixing and removal operations with high-velocity liquid jets in one of the large-scale Savannah River Site waste tanks, Tank 18. This paper describes the computational model, the flow measurements used to provide validation data in the region far from the jet nozzle, the extension of the computational results to real tank conditions through the use of existing sludge suspension data, and finally, the sludge removal results from actual Tank 18 operations. A computational fluid dynamics approach was used to simulate the sludge removal operations. The models employed a three-dimensional representation of the tank with a two-equation turbulence model. Both the computational approach and the models were validated with onsite test data reported here and literature data. The model was then extended to actual conditions in Tank 18 through a velocity criterion to predict the ability of the new pump design to suspend settled sludge. A qualitative comparison with sludge removal operations in Tank 18 showed a reasonably good comparison with final results subject to significant uncertainties in actual sludge properties.

  5. UNIVERSITY OF CALIFORNIA The Future of Large Scale Visual Data

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    The Future of Large Scale Visual Data Analysis. Joint Facilities User Forum on Data Intensive Computing, Oakland, CA. E. Wes Bethel, Lawrence Berkeley National Laboratory, 16 June 2014. The World that Was: Computational Architectures - machine architectures: single CPU, single core; vector, then single-core MPPs; "large" SMP platforms; relatively well balanced memory, FLOPS, and I/O. The World that Was: Software Architecture - Data Analysis and

  6. Test Plan: WIPP bin-scale CH TRU waste tests

    SciTech Connect (OSTI)

    Molecke, M.A.

    1990-08-01

    The WIPP Bin-Scale CH TRU Waste Test program described herein will provide relevant composition and kinetic rate data on gas generation and consumption resulting from TRU waste degradation, as impacted by synergistic interactions due to multiple degradation modes, waste form preparation, long-term repository environmental effects, engineered barrier materials, and, possibly, engineered modifications to be developed. Similar data on waste-brine leachate compositions and potentially hazardous volatile organic compounds released by the wastes will also be provided. The quantitative data output from these tests and associated technical expertise are required by the WIPP Performance Assessment (PA) program studies, and for the scientific benefit of the overall WIPP project. This Test Plan describes the necessary scientific and technical aspects, justifications, and rationale for successfully initiating and conducting the WIPP Bin-Scale CH TRU Waste Test program. This Test Plan is the controlling scientific design definition and overall requirements document for this WIPP in situ test, as defined by Sandia National Laboratories (SNL), scientific advisor to the US Department of Energy, WIPP Project Office (DOE/WPO). 55 refs., 16 figs., 19 tabs.

  7. The effective field theory of cosmological large scale structures

    SciTech Connect (OSTI)

    Carrasco, John Joseph M.; Hertzberg, Mark P.; Senatore, Leonardo

    2012-09-20

    Large scale structure surveys will likely become the next leading cosmological probe. In our universe, matter perturbations are large on short distances and small at long scales, i.e. strongly coupled in the UV and weakly coupled in the IR. To make precise analytical predictions on large scales, we develop an effective field theory formulated in terms of an IR effective fluid characterized by several parameters, such as speed of sound and viscosity. These parameters, determined by the UV physics described by the Boltzmann equation, are measured from N-body simulations. We find that the speed of sound of the effective fluid is c{sub s}{sup 2} ≈ 10{sup -6}c{sup 2} and that the viscosity contributions are of the same order. The fluid describes all the relevant physics at long scales k and permits a manifestly convergent perturbative expansion in the size of the matter perturbations δ(k) for all the observables. As an example, we calculate the correction to the power spectrum at order δ(k){sup 4}. As a result, the predictions of the effective field theory are found to be in much better agreement with observation than standard cosmological perturbation theory, already reaching percent precision at this order up to a relatively short scale k ≈ 0.24h Mpc{sup -1}.

  8. LARGE-SCALE MOTIONS IN THE PERSEUS GALAXY CLUSTER

    SciTech Connect (OSTI)

    Simionescu, A.; Werner, N.; Urban, O.; Allen, S. W.; Fabian, A. C.; Sanders, J. S.; Mantz, A.; Nulsen, P. E. J.; Takei, Y.

    2012-10-01

    By combining large-scale mosaics of ROSAT PSPC, XMM-Newton, and Suzaku X-ray observations, we present evidence for large-scale motions in the intracluster medium of the nearby, X-ray bright Perseus Cluster. These motions are suggested by several alternating and interleaved X-ray bright, low-temperature, low-entropy arcs located along the east-west axis, at radii ranging from {approx}10 kpc to over a Mpc. Thermodynamic features qualitatively similar to these have previously been observed in the centers of cool-core clusters, and were successfully modeled as a consequence of the gas sloshing/swirling motions induced by minor mergers. Our observations indicate that such sloshing/swirling can extend out to larger radii than previously thought, on scales approaching the virial radius.

  9. Performance Health Monitoring of Large-Scale Systems

    SciTech Connect (OSTI)

    Rajamony, Ram

    2014-11-20

    This report details the progress made on the ASCR-funded project Performance Health Monitoring for Large Scale Systems. A large-scale application may not achieve its full performance potential due to degraded performance of even a single subsystem. Detecting performance faults, isolating them, and taking remedial action is critical for the scale of systems on the horizon. PHM aims to develop techniques and tools that can be used to identify and mitigate such performance problems. We accomplish this through two main aspects. The PHM framework encompasses diagnostics, system monitoring, fault isolation, and performance evaluation capabilities that indicate when a performance fault has been detected, either due to an anomaly present in the system itself or due to contention for shared resources between concurrently executing jobs. Software components called the PHM Control system then build upon the capabilities provided by the PHM framework to mitigate degradation caused by performance problems.
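    As a purely illustrative sketch (not the project's actual algorithm), one common way to "indicate when a performance fault has been detected" is to flag a monitored metric that drifts far from its recent rolling baseline; the metric name, window size, and threshold below are hypothetical.

        # Illustrative rolling-baseline anomaly flagging for a monitored metric.
        import random
        from collections import deque
        from statistics import mean, stdev

        def make_detector(window=50, threshold=4.0):
            history = deque(maxlen=window)
            def check(sample):
                anomalous = False
                if len(history) >= 10:
                    mu, sigma = mean(history), stdev(history)
                    anomalous = sigma > 0 and abs(sample - mu) > threshold * sigma
                history.append(sample)
                return anomalous
            return check

        random.seed(1)
        check = make_detector()
        # Synthetic metric stream (e.g., an I/O-wait fraction) with one large excursion.
        stream = [0.02 + random.gauss(0, 0.002) for _ in range(60)] + [0.35]
        for t, value in enumerate(stream):
            if check(value):
                print(f"possible performance fault at sample {t}: value={value:.3f}")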

  10. Preliminary Scaling Estimate for Select Small Scale Mixing Demonstration Tests

    SciTech Connect (OSTI)

    Wells, Beric E.; Fort, James A.; Gauglitz, Phillip A.; Rector, David R.; Schonewill, Philip P.

    2013-09-12

    The Hanford Site double-shell tank (DST) system provides the staging location for waste that will be transferred to the Hanford Tank Waste Treatment and Immobilization Plant (WTP). Specific WTP acceptance criteria for waste feed delivery describe the physical and chemical characteristics of the waste that must be met before the waste is transferred from the DSTs to the WTP. One of the more challenging requirements relates to the sampling and characterization of the undissolved solids (UDS) in a waste feed DST because the waste contains solid particles that settle and their concentration and relative proportion can change during the transfer of the waste in individual batches. A key uncertainty in the waste feed delivery system is the potential variation in UDS transferred in individual batches in comparison to an initial sample used for evaluating the acceptance criteria. To address this uncertainty, a number of small-scale mixing tests have been conducted as part of Washington River Protection Solutions Small Scale Mixing Demonstration (SSMD) project to determine the performance of the DST mixing and sampling systems.

  11. Geospatial Optimization of Siting Large-Scale Solar Projects

    SciTech Connect (OSTI)

    Macknick, J.; Quinby, T.; Caulfield, E.; Gerritsen, M.; Diffendorfer, J.; Haines, S.

    2014-03-01

    Recent policy and economic conditions have encouraged a renewed interest in developing large-scale solar projects in the U.S. Southwest. However, siting large-scale solar projects is complex. In addition to the quality of the solar resource, solar developers must take into consideration many environmental, social, and economic factors when evaluating a potential site. This report describes a proof-of-concept, Web-based Geographical Information Systems (GIS) tool that evaluates multiple user-defined criteria in an optimization algorithm to inform discussions and decisions regarding the locations of utility-scale solar projects. Existing siting recommendations for large-scale solar projects from governmental and non-governmental organizations are not consistent with each other, are often not transparent in methods, and do not take into consideration the differing priorities of stakeholders. The siting assistance GIS tool we have developed improves upon the existing siting guidelines by being user-driven, transparent, interactive, capable of incorporating multiple criteria, and flexible. This work provides the foundation for a dynamic siting assistance tool that can greatly facilitate siting decisions among multiple stakeholders.
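    A minimal sketch of the weighted multi-criteria scoring that such a siting tool might apply is shown below; the criterion names, weights, and normalization are hypothetical illustrations, and the report does not prescribe this exact formulation.

        # Hypothetical weighted-sum suitability score over gridded, pre-normalized criteria.
        import numpy as np

        def suitability(criteria, weights):
            """Combine criterion layers (each normalized to [0, 1]) into one score grid."""
            total = sum(weights.values())
            score = np.zeros_like(next(iter(criteria.values())), dtype=float)
            for name, layer in criteria.items():
                score += (weights[name] / total) * layer
            return score

        rng = np.random.default_rng(0)
        shape = (20, 20)
        layers = {
            "solar_resource": rng.random(shape),   # hypothetical normalized layers
            "grid_proximity": rng.random(shape),
            "land_cost": 1.0 - rng.random(shape),  # lower cost scores higher
        }
        weights = {"solar_resource": 0.5, "grid_proximity": 0.3, "land_cost": 0.2}
        best = np.unravel_index(np.argmax(suitability(layers, weights)), shape)
        print("highest-scoring grid cell:", best)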

  12. Development of an integrated in-situ remediation technology. Topical report for task No. 12 and 13 entitled: Large scale field test of the Lasagna{trademark} process, September 26, 1994--May 25, 1996

    SciTech Connect (OSTI)

    Athmer, C.J.; Ho, Sa V.; Hughes, B.M.

    1997-04-01

    Contamination in low permeability soils poses a significant technical challenge to in-situ remediation efforts. Poor accessibility to the contaminants and difficulty in delivery of treatment reagents have rendered existing in-situ treatments such as bioremediation, vapor extraction, and pump-and-treat rather ineffective when applied to low permeability soils present at many contaminated sites. This technology is an integrated in-situ treatment in which established geotechnical methods are used to install degradation zones directly in the contaminated soil and electroosmosis is utilized to move the contaminants back and forth through those zones until the treatment is completed. This topical report summarizes the results of the field experiment conducted at the Paducah Gaseous Diffusion Plant in Paducah, KY. The test site covered 15 feet wide by 10 feet across and 15 feet deep, with steel panels as electrodes and wick drains containing granular activated carbon as treatment zones. The electrodes and treatment zones were installed utilizing innovative adaptation of existing emplacement technologies. The unit was operated for four months, flushing TCE by electroosmosis from the soil into the treatment zones where it was trapped by the activated carbon. The scale-up from laboratory units to this field scale was very successful with respect to electrical parameters as well as electroosmotic flow. Soil samples taken throughout the site before and after the test showed over 98% TCE removal, with most samples showing greater than 99% removal.
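    For orientation, the electroosmotic velocity that moves pore fluid (and dissolved TCE) through such treatment zones is commonly estimated with the Helmholtz-Smoluchowski relation; this is a standard approximation, not a value reported for the Paducah test:

        % Helmholtz-Smoluchowski electroosmotic velocity
        v_{eo} = k_{e}\,E = \frac{\varepsilon\,\zeta}{\mu}\,E

    where ε is the pore-fluid permittivity, ζ the zeta potential of the soil, μ the fluid viscosity, E the applied electric field, and k_e the electroosmotic permeability. Because k_e is nearly independent of pore size, electroosmosis can move fluid through clays that resist ordinary hydraulic flushing, which is the premise of the treatment described above.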

  13. GAIA: A WINDOW TO LARGE-SCALE MOTIONS

    SciTech Connect (OSTI)

    Nusser, Adi; Branchini, Enzo; Davis, Marc E-mail: branchin@fis.uniroma3.it

    2012-08-10

    Using redshifts as a proxy for galaxy distances, estimates of the two-dimensional (2D) transverse peculiar velocities of distant galaxies could be obtained from future measurements of proper motions. We provide the mathematical framework for analyzing 2D transverse motions and show that they offer several advantages over traditional probes of large-scale motions. They are completely independent of any intrinsic relations between galaxy properties; hence, they are essentially free of selection biases. They are free from homogeneous and inhomogeneous Malmquist biases that typically plague distance indicator catalogs. They provide additional information to traditional probes that yield line-of-sight peculiar velocities only. Further, because of their 2D nature, fundamental questions regarding vorticity of large-scale flows can be addressed. Gaia, for example, is expected to provide proper motions of at least bright galaxies with high central surface brightness, making proper motions a likely contender for traditional probes based on current and future distance indicator measurements.
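    For context, the link between a measured proper motion and the 2D transverse velocity discussed above is the standard conversion (not specific to this paper):

        % transverse velocity from proper motion \mu and distance d
        v_{t}\,[\mathrm{km\,s^{-1}}] \approx 4.74\;\mu\,[\mathrm{arcsec\,yr^{-1}}]\;d\,[\mathrm{pc}]

    At fixed velocity the proper motion falls off as 1/d, which is why only bright, relatively nearby galaxies with high central surface brightness are realistic Gaia targets.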

  14. The workshop on iterative methods for large scale nonlinear problems

    SciTech Connect (OSTI)

    Walker, H.F.; Pernice, M.

    1995-12-01

    The aim of the workshop was to bring together researchers working on large scale applications with numerical specialists of various kinds. Applications that were addressed included reactive flows (combustion and other chemically reacting flows, tokamak modeling), porous media flows, cardiac modeling, chemical vapor deposition, image restoration, macromolecular modeling, and population dynamics. Numerical areas included Newton iterative (truncated Newton) methods, Krylov subspace methods, domain decomposition and other preconditioning methods, large scale optimization and optimal control, and parallel implementations and software. This report offers a brief summary of workshop activities and information about the participants. Interested readers are encouraged to look into the online proceedings available at http://www.usi.utah.edu/logan.proceedings. There, the material offered here is augmented with hypertext abstracts that include links to locations such as speakers' home pages, PostScript copies of talks and papers, cross-references to related talks, and other information about topics addressed at the workshop.

  15. Electron drift in a large scale solid xenon

    SciTech Connect (OSTI)

    Yoo, J.; Jaskierny, W. F.

    2015-08-21

    A study of charge drift in a large scale optically transparent solid xenon is reported. A pulsed high power xenon light source is used to liberate electrons from a photocathode. The drift speeds of the electrons are measured using an 8.7 cm long electrode in both the liquid and solid phase of xenon. In the liquid phase (163 K), the drift speed is 0.193 ± 0.003 cm/μs, while the drift speed in the solid phase (157 K) is 0.397 ± 0.006 cm/μs at 900 V/cm over 8.0 cm of uniform electric fields. Furthermore, the electron drift speed in large scale solid phase xenon is demonstrated to be a factor of two faster than that in the liquid.
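    As a quick arithmetic check on the quoted numbers, the drift time across the 8.0 cm of uniform field follows directly from the measured speeds:

        t_{\mathrm{liquid}} = \frac{8.0\ \mathrm{cm}}{0.193\ \mathrm{cm/\mu s}} \approx 41\ \mu\mathrm{s},
        \qquad
        t_{\mathrm{solid}} = \frac{8.0\ \mathrm{cm}}{0.397\ \mathrm{cm/\mu s}} \approx 20\ \mu\mathrm{s}

    consistent with the stated factor-of-two faster drift in the solid phase.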

  16. Electron drift in a large scale solid xenon

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Yoo, J.; Jaskierny, W. F.

    2015-08-21

    A study of charge drift in a large scale optically transparent solid xenon is reported. A pulsed high power xenon light source is used to liberate electrons from a photocathode. The drift speeds of the electrons are measured using an 8.7 cm long electrode in both the liquid and solid phase of xenon. In the liquid phase (163 K), the drift speed is 0.193 ± 0.003 cm/μs, while the drift speed in the solid phase (157 K) is 0.397 ± 0.006 cm/μs at 900 V/cm over 8.0 cm of uniform electric fields. Furthermore, the electron drift speed in large scale solid phase xenon is demonstrated to be a factor of two faster than that in the liquid.

  17. Relic vector field and CMB large scale anomalies

    SciTech Connect (OSTI)

    Chen, Xingang; Wang, Yi E-mail: yw366@cam.ac.uk

    2014-10-01

    We study the most general effects of relic vector fields on the inflationary background and density perturbations. Such effects are observable if the number of inflationary e-folds is close to the minimum requirement to solve the horizon problem. We show that this can potentially explain two CMB large scale anomalies: the quadrupole-octopole alignment and the quadrupole power suppression. We discuss its effect on the parity anomaly. We also provide an analytical template for more detailed data comparison.

  18. Robust, Multifunctional Joint for Large Scale Power Production Stacks -

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Energy Innovation Portal: Robust, Multifunctional Joint for Large Scale Power Production Stacks, Lawrence Berkeley National Laboratory. Technology Marketing Summary: Berkeley Lab scientists have developed a multifunctional joint for metal-supported, tubular SOFCs that divides various joint functions so that materials and methods optimizing each function can be chosen

  19. Computational Fluid Dynamics & Large-Scale Uncertainty Quantification for

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Wind Energy Fluid Dynamics & Large-Scale Uncertainty Quantification for Wind Energy - Sandia Energy

  20. Large Scale Production Computing and Storage Requirements for Advanced

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Scientific Computing Research: Target 2017 Large Scale Production Computing and Storage Requirements for Advanced Scientific Computing Research: Target 2017 ASCRLogo.png This is an invitation-only review organized by the Department of Energy's Office of Advanced Scientific Computing Research (ASCR) and NERSC. The general goal is to determine production high-performance computing, storage, and services that will be needed for ASCR to achieve its science goals through 2017. A specific focus

  1. Large Scale Production Computing and Storage Requirements for Basic Energy

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Sciences: Target 2017 Large Scale Production Computing and Storage Requirements for Basic Energy Sciences: Target 2017 BES-Montage.png This is an invitation-only review organized by the Department of Energy's Office of Basic Energy Sciences (BES), Office of Advanced Scientific Computing Research (ASCR), and the National Energy Research Scientific Computing Center (NERSC). The goal is to determine production high-performance computing, storage, and services that will be needed for BES to

  2. Large Scale Production Computing and Storage Requirements for Biological

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    and Environmental Research: Target 2017 Large Scale Production Computing and Storage Requirements for Biological and Environmental Research: Target 2017 BERmontage.gif September 11-12, 2012 Hilton Rockville Hotel and Executive Meeting Center 1750 Rockville Pike Rockville, MD, 20852-1699 TEL: 1-301-468-1100 Sponsored by: U.S. Department of Energy Office of Science Office of Advanced Scientific Computing Research (ASCR) Office of Biological and Environmental Research (BER) National Energy

  3. Large Scale Production Computing and Storage Requirements for Nuclear

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Physics: Target 2017 Large Scale Production Computing and Storage Requirements for Nuclear Physics: Target 2017 NPicon.png This invitation-only review is organized by the Department of Energy's Offices of Nuclear Physics (NP) and Advanced Scientific Computing Research (ASCR) and by NERSC. The goal is to determine production high-performance computing, storage, and services that will be needed for NP to achieve its science goals through 2017. The review brings together DOE Program Managers,

  4. Electrochemical cells for medium- and large-scale energy storage

    SciTech Connect (OSTI)

    Wang, Wei; Wei, Xiaoliang; Choi, Daiwon; Lu, Xiaochuan; Yang, G.; Sun, C.

    2014-12-12

    This is one of the chapters in the book titled “Advances in batteries for large- and medium-scale energy storage: Applications in power systems and electric vehicles,” to be published by Woodhead Publishing Limited. The chapter discusses the basic electrochemical fundamentals of electrochemical energy storage devices, with a focus on rechargeable batteries. Several practical secondary battery systems are also discussed as examples.

  5. Robust large-scale parallel nonlinear solvers for simulations.

    SciTech Connect (OSTI)

    Bader, Brett William; Pawlowski, Roger Patrick; Kolda, Tamara Gibson

    2005-11-01

    This report documents research to develop robust and efficient solution techniques for solving large-scale systems of nonlinear equations. The most widely used method for solving systems of nonlinear equations is Newton's method. While much research has been devoted to augmenting Newton-based solvers (usually with globalization techniques), little has been devoted to exploring the application of different models. Our research has been directed at evaluating techniques using different models than Newton's method: a lower order model, Broyden's method, and a higher order model, the tensor method. We have developed large-scale versions of each of these models and have demonstrated their use in important applications at Sandia. Broyden's method replaces the Jacobian with an approximation, allowing codes that cannot evaluate a Jacobian or have an inaccurate Jacobian to converge to a solution. Limited-memory methods, which have been successful in optimization, allow us to extend this approach to large-scale problems. We compare the robustness and efficiency of Newton's method, modified Newton's method, Jacobian-free Newton-Krylov method, and our limited-memory Broyden method. Comparisons are carried out for large-scale applications of fluid flow simulations and electronic circuit simulations. Results show that, in cases where the Jacobian was inaccurate or could not be computed, Broyden's method converged in some cases where Newton's method failed to converge. We identify conditions where Broyden's method can be more efficient than Newton's method. We also present modifications to a large-scale tensor method, originally proposed by Bouaricha, for greater efficiency, better robustness, and wider applicability. Tensor methods are an alternative to Newton-based methods and are based on computing a step based on a local quadratic model rather than a linear model. The advantage of Bouaricha's method is that it can use any existing linear solver, which makes it simple to write and easily portable. However, the method usually takes twice as long to solve as Newton-GMRES on general problems because it solves two linear systems at each iteration. In this paper, we discuss modifications to Bouaricha's method for a practical implementation, including a special globalization technique and other modifications for greater efficiency. We present numerical results showing computational advantages over Newton-GMRES on some realistic problems. We further discuss a new approach for dealing with singular (or ill-conditioned) matrices. In particular, we modify an algorithm for identifying a turning point so that an increasingly ill-conditioned Jacobian does not prevent convergence.
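
    As a minimal illustration of the quasi-Newton idea discussed above, the following sketch implements Broyden's 'good' rank-1 secant update with a dense Jacobian approximation. It is meant only to show the update; it is not the limited-memory, large-scale implementation described in this report, and the small test system is invented for illustration.

        import numpy as np

        def broyden_solve(F, x0, tol=1e-10, max_iter=50):
            """Solve F(x) = 0 with Broyden's 'good' method (rank-1 secant updates)."""
            x = np.asarray(x0, dtype=float)
            Fx = F(x)
            J = np.eye(x.size)                  # initial Jacobian approximation
            for _ in range(max_iter):
                dx = np.linalg.solve(J, -Fx)    # quasi-Newton step
                x_new = x + dx
                F_new = F(x_new)
                if np.linalg.norm(F_new) < tol:
                    return x_new
                # Secant condition J_{k+1} dx = dF, enforced by a rank-1 update.
                dF = F_new - Fx
                J += np.outer(dF - J @ dx, dx) / (dx @ dx)
                x, Fx = x_new, F_new
            return x

        # Invented 2-D test system, purely for illustration.
        F = lambda x: np.array([x[0] + 0.5 * np.sin(x[1]) - 1.0,
                                x[1] + 0.5 * np.cos(x[0]) - 1.0])
        print(broyden_solve(F, np.zeros(2)))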

  6. Small-Scale Spray Releases: Additional Aerosol Test Results

    SciTech Connect (OSTI)

    Schonewill, Philip P.; Gauglitz, Phillip A.; Kimura, Marcia L.; Brown, G. N.; Mahoney, Lenna A.; Tran, Diana N.; Burns, Carolyn A.; Kurath, Dean E.

    2013-08-01

    One of the events postulated in the hazard analysis at the Waste Treatment and Immobilization Plant (WTP) and other U.S. Department of Energy (DOE) nuclear facilities is a breach in process piping that produces aerosols with droplet sizes in the respirable range. The current approach for predicting the size and concentration of aerosols produced in a spray leak involves extrapolating from correlations reported in the literature. These correlations are based on results obtained from small engineered spray nozzles using pure liquids with Newtonian fluid behavior. The narrow ranges of physical properties on which the correlations are based do not cover the wide range of slurries and viscous materials that will be processed in the WTP and across processing facilities in the DOE complex. To expand the data set upon which the WTP accident and safety analyses were based, an aerosol spray leak testing program was conducted by Pacific Northwest National Laboratory (PNNL). PNNL’s test program addressed two key technical areas to improve the WTP methodology (Larson and Allen 2010). The first technical area was to quantify the role of slurry particles in small breaches where slurry particles may plug the hole and prevent high-pressure sprays. The results from an effort to address this first technical area can be found in Mahoney et al. (2012a). The second technical area was to determine aerosol droplet size distribution and total droplet volume from prototypic breaches and fluids, including sprays from larger breaches and sprays of slurries for which literature data are largely absent. To address the second technical area, the testing program collected aerosol generation data at two scales, commonly referred to as small-scale and large-scale. The small-scale testing and resultant data are described in Mahoney et al. (2012b) and the large-scale testing and resultant data are presented in Schonewill et al. (2012). In tests at both scales, simulants were used to mimic the relevant physical properties projected for actual WTP process streams.

  7. Nuclear-pumped lasers for large-scale applications

    SciTech Connect (OSTI)

    Anderson, R.E.; Leonard, E.M.; Shea, R.F.; Berggren, R.R.

    1989-05-01

    Efficient initiation of large-volume chemical lasers may be achieved by neutron induced reactions which produce charged particles in the final state. When a burst mode nuclear reactor is used as the neutron source, both a sufficiently intense neutron flux and a sufficiently short initiation pulse may be possible. Proof-of-principle experiments are planned to demonstrate lasing in a direct nuclear-pumped large-volume system; to study the effects of various neutron absorbing materials on laser performance; to study the effects of long initiation pulse lengths; to demonstrate the performance of large-scale optics and the beam quality that may be obtained; and to assess the performance of alternative designs of burst systems that increase the neutron output and burst repetition rate. 21 refs., 8 figs., 5 tabs.

  8. Nuclear-pumped lasers for large-scale applications

    SciTech Connect (OSTI)

    Anderson, R.E.; Leonard, E.M.; Shea, R.E.; Berggren, R.R.

    1988-01-01

    Efficient initiation of large-volume chemical lasers may be achieved by neutron induced reactions which produce charged particles in the final state. When a burst mode nuclear reactor is used as the neutron source, both a sufficiently intense neutron flux and a sufficiently short initiation pulse may be possible. Proof-of-principle experiments are planned to demonstrate lasing in a direct nuclear-pumped large-volume system; to study the effects of various neutron absorbing materials on laser performance; to study the effects of long initiation pulse lengths; to determine the performance of large-scale optics and the beam quality that may be obtained; and to assess the performance of alternative designs of burst systems that increase the neutron output and burst repetition rate. 21 refs., 7 figs., 5 tabs.

  9. Just enough inflation: power spectrum modifications at large scales

    SciTech Connect (OSTI)

    Cicoli, Michele [Dipartimento di Fisica ed Astronomia, Università di Bologna, via Irnerio 46, 40126 Bologna (Italy); Downes, Sean [Leung Center for Cosmology and Particle Astrophysics, National Taiwan University, No. 1, Section 4, Roosevelt Road, Taipei 10617, Taiwan (China); Dutta, Bhaskar [Mitchell Institute for Fundamental Physics and Astronomy, Department of Physics and Astronomy, Texas A and M University, College Station, TX 77843-4242 (United States); Pedro, Francisco G.; Westphal, Alexander, E-mail: mcicoli@ictp.it, E-mail: ssdownes@phys.ntu.edu.tw, E-mail: dutta@physics.tamu.edu, E-mail: francisco.pedro@desy.de, E-mail: alexander.westphal@desy.de [Deutsches Elektronen-Synchrotron DESY, Theory Group, D-22603 Hamburg (Germany)

    2014-12-01

    We show that models of 'just enough' inflation, where the slow-roll evolution lasted only 50-60 e-foldings, feature modifications of the CMB power spectrum at large angular scales. We perform a systematic analytic analysis in the limit of a sudden transition between any possible non-slow-roll background evolution and the final stage of slow-roll inflation. We find a high degree of universality since most common backgrounds like fast-roll evolution, matter- or radiation-dominance give rise to a power loss at large angular scales and a peak together with an oscillatory behaviour at scales around the value of the Hubble parameter at the beginning of slow-roll inflation. Depending on the value of the equation of state parameter, different pre-inflationary epochs lead instead to an enhancement of power at low ℓ, and so seem disfavoured by recent observational hints of a lack of CMB power at ℓ ≲ 40. We also comment on the importance of initial conditions and the possibility to have multiple pre-inflationary stages.

  10. Small-Scale Spray Releases: Orifice Plugging Test Results

    SciTech Connect (OSTI)

    Mahoney, Lenna A.; Gauglitz, Phillip A.; Blanchard, Jeremy; Kimura, Marcia L.; Kurath, Dean E.

    2012-09-01

    One of the events postulated in the hazard analysis at the Waste Treatment and Immobilization Plant (WTP) and other U.S. Department of Energy (DOE) nuclear facilities, is a breach in process piping that produces aerosols with droplet sizes in the respirable range. The current approach for predicting the size and concentration of aerosols produced in a spray leak involves extrapolating from correlations published in the literature. These correlations are based on results obtained from small engineered spray nozzles using pure liquids with Newtonian fluid behavior. The narrow ranges of physical properties on which the correlations are based do not cover the wide range of slurries and viscous materials present in the WTP and across processing facilities in the DOE complex. Two key technical areas were identified where testing results were needed to improve the technical basis by reducing the uncertainty introduced by extrapolating existing literature results. The first technical need was to quantify the role of slurry particles in small breaches in which the slurry particles may plug and result in substantially reduced, or even negligible, respirable fraction formed by high pressure sprays. The second technical need was to determine the aerosol droplet size distribution and volume from prototypic breaches and fluids, specifically including sprays from larger breaches with slurries where data from the literature are largely absent. To address these technical areas, small- and large-scale test stands were constructed and operated with simulants to determine the aerosol release fractions and aerosol generation rates from a range of breach sizes and geometries. The properties of the simulants represented the range of properties expected in the WTP process streams and included water, sodium salt solutions, slurries containing boehmite or gibbsite, and a hazardous chemical simulant. The effect of anti-foam agents (AFA) was assessed with most of the simulants. Orifices included round holes and rectangular slots. Much of the testing was conducted at pressures of 200 and 380 psi, but some tests were conducted at 100 psi. Testing the largest postulated breaches was deemed impractical because of the large size of some of the WTP equipment. The purpose of the study described in this report is to provide experimental data for the first key technical area, potential plugging of small breaches, by performing small-scale tests with a range of orifice sizes and orientations representative of the WTP conditions. The simulants used were chosen to represent the range of process stream properties in the WTP. Testing conducted after the plugging tests in the small- and large-scale test stands addresses the second key technical area, aerosol generation. The results of the small-scale aerosol generation tests are included in Mahoney et al. 2012. The area of spray generation from large breaches is covered by large-scale testing in Schonewill et al. 2012.

  11. Large-scale BAO signatures of the smallest galaxies

    SciTech Connect (OSTI)

    Dalal, Neal; Pen, Ue-Li; Seljak, Uros E-mail: pen@cita.utoronto.ca

    2010-11-01

    Recent work has shown that at high redshift, the relative velocity between dark matter and baryonic gas is typically supersonic. This relative velocity suppresses the formation of the earliest baryonic structures like minihalos, and the suppression is modulated on large scales. This effect imprints a characteristic shape in the clustering power spectrum of the earliest structures, with significant power on ∼ 100 Mpc scales featuring highly pronounced baryon acoustic oscillations. The amplitude of these oscillations is orders of magnitude larger at z ∼ 20 than previously expected. This characteristic signature can allow us to distinguish the effects of minihalos on intergalactic gas at times preceding and during reionization. We illustrate this effect with the example of 21 cm emission and absorption from redshifts during and before reionization. This effect can potentially allow us to probe physics on kpc scales using observations on 100 Mpc scales. We present sensitivity forecasts for FAST and Arecibo. Depending on parameters, this enhanced structure may be detectable by Arecibo at z ∼ 15−20, and with appropriate instrumentation FAST could measure the BAO power spectrum with high precision. In principle, this effect could also pose a serious challenge for efforts to constrain dark energy using observations of the BAO feature at low redshift.

  12. Detecting differential protein expression in large-scale population proteomics

    SciTech Connect (OSTI)

    Ryu, Soyoung; Qian, Weijun; Camp, David G.; Smith, Richard D.; Tompkins, Ronald G.; Davis, Ronald W.; Xiao, Wenzhong

    2014-06-17

    Mass spectrometry-based high-throughput quantitative proteomics shows great potential in clinical biomarker studies, identifying and quantifying thousands of proteins in biological samples. However, methods are needed to appropriately handle challenges unique to mass spectrometry data in order to detect as many biomarker proteins as possible. One issue is that different mass spectrometry experiments generate quite different total numbers of quantified peptides, which can result in more missing peptide abundances in an experiment with a smaller total number of quantified peptides. Another issue is that the quantification of peptides is sometimes absent, especially for less abundant peptides, and such missing values themselves carry information about peptide abundance. Here, we propose a Significance Analysis for Large-scale Proteomics Studies (SALPS) that handles missing peptide intensity values caused by the two mechanisms mentioned above. Our model performs robustly in both simulated data and proteomics data from a large clinical study. Because variation in patient sample quality and instrument performance is unavoidable in clinical studies performed over the course of several years, we believe that our approach will be useful to analyze large-scale clinical proteomics data.
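
    The intensity-dependent missingness described above can be illustrated with a small simulation; the distributions and detection limit below are invented for illustration, and this is not the SALPS model itself:

        import numpy as np

        rng = np.random.default_rng(1)
        # Hypothetical log-intensities for 10,000 peptides and an invented detection limit.
        true_log_abundance = rng.normal(loc=20.0, scale=2.0, size=10_000)
        detection_limit = 18.0
        measured = true_log_abundance + rng.normal(scale=0.5, size=true_log_abundance.size)
        observed = np.where(measured > detection_limit, measured, np.nan)

        # Missing values concentrate among low-abundance peptides, so missingness is informative.
        missing = np.isnan(observed)
        low = true_log_abundance < np.median(true_log_abundance)
        print(f"overall missing rate:              {missing.mean():.1%}")
        print(f"missing rate, low-abundance half:  {missing[low].mean():.1%}")
        print(f"missing rate, high-abundance half: {missing[~low].mean():.1%}")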

  13. Sub-scale Drum Test Memo | Department of Energy

    Office of Environmental Management (EM)

    Sub-scale Drum Test Memo Sub-scale Drum Test Memo This document was used to determine facts and conditions during the Department of Energy Accident Investigation Board's ...

  14. High Fidelity Simulations of Large-Scale Wireless Networks

    SciTech Connect (OSTI)

    Onunkwo, Uzoma; Benz, Zachary

    2015-11-01

    The worldwide proliferation of wireless connected devices continues to accelerate. There are tens of billions of wireless links across the planet, with an additional explosion of new wireless usage anticipated as the Internet of Things develops. Wireless technologies not only provide convenience for mobile applications but are also extremely cost-effective to deploy. Thus, this trend towards wireless connectivity will only continue and Sandia must develop the necessary simulation technology to proactively analyze the associated emerging vulnerabilities. Wireless networks are marked by mobility and proximity-based connectivity. The de facto standard for exploratory studies of wireless networks is discrete event simulation (DES). However, the simulation of large-scale wireless networks is extremely difficult due to prohibitively large turnaround times. A path forward is to expedite simulations with parallel discrete event simulation (PDES) techniques. The mobility and distance-based connectivity associated with wireless simulations, however, typically doom PDES performance, and existing simulators (e.g., OPNET and ns-3) fail to scale. We propose a PDES-based tool aimed at reducing the communication overhead between processors. The proposed solution will use light-weight processes to dynamically distribute computation workload while mitigating communication overhead associated with synchronizations. This work is vital to the analytics and validation capabilities of simulation and emulation at Sandia. We have years of experience in Sandia’s simulation and emulation projects (e.g., MINIMEGA and FIREWHEEL). Sandia’s current highly-regarded capabilities in large-scale emulations have focused on wired networks, where two assumptions prevent scalable wireless studies: (a) the connections between objects are mostly static and (b) the nodes have fixed locations.
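
    For context, discrete event simulation amounts to repeatedly popping the earliest pending event from a priority queue and letting its handler schedule future events; PDES partitions this loop across processors and synchronizes their logical clocks. A minimal sequential sketch of such a kernel is shown below (node identifiers, delays, and the 'transmit' handler are invented placeholders, not part of any of the tools named above).

        import heapq

        events = []            # priority queue of (time, seq, handler, payload)
        _seq = 0               # tie-breaker so simultaneous events pop deterministically

        def schedule(time, handler, payload):
            global _seq
            heapq.heappush(events, (time, _seq, handler, payload))
            _seq += 1

        def transmit(time, payload):
            src, dst = payload
            print(f"t={time:.2f}: node {src} -> node {dst}")
            if time < 1.0:                      # keep the toy run short
                schedule(time + 0.25, transmit, (dst, src))

        schedule(0.0, transmit, (0, 1))
        while events:                           # sequential main event loop
            time, _, handler, payload = heapq.heappop(events)
            handler(time, payload)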

  15. Large scale obscuration and related climate effects open literature bibliography

    SciTech Connect (OSTI)

    Russell, N.A.; Geitgey, J.; Behl, Y.K.; Zak, B.D.

    1994-05-01

    Large scale obscuration and related climate effects of nuclear detonations first became a matter of concern in connection with the so-called "Nuclear Winter Controversy" in the early 1980s. Since then, the world has changed. Nevertheless, concern remains about the atmospheric effects of nuclear detonations, but the source of concern has shifted. Now it focuses less on global, and more on regional effects and their resulting impacts on the performance of electro-optical and other defense-related systems. This bibliography reflects the modified interest.

  16. Planning under uncertainty solving large-scale stochastic linear programs

    SciTech Connect (OSTI)

    Infanger, G. . Dept. of Operations Research Technische Univ., Vienna . Inst. fuer Energiewirtschaft)

    1992-12-01

    For many practical problems, solutions obtained from deterministic models are unsatisfactory because they fail to hedge against certain contingencies that may occur in the future. Stochastic models address this shortcoming, but until recently seemed intractable due to their size. Recent advances both in solution algorithms and in computer technology now allow us to solve important and general classes of practical stochastic problems. We show how large-scale stochastic linear programs can be efficiently solved by combining classical decomposition and Monte Carlo (importance) sampling techniques. We discuss the methodology for solving two-stage stochastic linear programs with recourse, present numerical results for large problems with numerous stochastic parameters, show how to efficiently implement the methodology on a parallel multi-computer, and derive the theory for solving a general class of multi-stage problems with dependency of the stochastic parameters within a stage and between different stages.
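
    A minimal sketch of the sampling idea: a sample-average approximation of a toy two-stage newsvendor recourse problem, solved as a single linear program with SciPy. It does not reproduce the decomposition or importance-sampling machinery of the report, and the costs and demand distribution are invented for illustration.

        import numpy as np
        from scipy.optimize import linprog

        rng = np.random.default_rng(0)
        N = 200                                   # sampled demand scenarios
        demand = rng.uniform(50.0, 150.0, N)
        cost, price = 1.0, 1.5                    # first-stage cost, second-stage revenue

        # Decision vector: [x, y_1, ..., y_N]; minimize cost*x - E[price*y].
        c = np.concatenate(([cost], np.full(N, -price / N)))
        A_ub = np.zeros((N, N + 1))               # recourse constraint y_s <= x
        A_ub[:, 0] = -1.0
        A_ub[np.arange(N), np.arange(N) + 1] = 1.0
        b_ub = np.zeros(N)
        bounds = [(0, None)] + [(0, d) for d in demand]   # y_s <= d_s

        res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
        print("first-stage decision x:", round(res.x[0], 1))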

  17. Small-Scale Spray Releases: Initial Aerosol Test Results

    SciTech Connect (OSTI)

    Mahoney, Lenna A.; Gauglitz, Phillip A.; Kimura, Marcia L.; Brown, Garrett N.; Kurath, Dean E.; Buchmiller, William C.; Smith, Dennese M.; Blanchard, Jeremy; Song, Chen; Daniel, Richard C.; Wells, Beric E.; Tran, Diana N.; Burns, Carolyn A.

    2012-11-01

    One of the events postulated in the hazard analysis at the Waste Treatment and Immobilization Plant (WTP) and other U.S. Department of Energy (DOE) nuclear facilities is a breach in process piping that produces aerosols with droplet sizes in the respirable range. The current approach for predicting the size and concentration of aerosols produced in a spray leak involves extrapolating from correlations reported in the literature. These correlations are based on results obtained from small engineered spray nozzles using pure liquids with Newtonian fluid behavior. The narrow ranges of physical properties on which the correlations are based do not cover the wide range of slurries and viscous materials that will be processed in the WTP and across processing facilities in the DOE complex. Two key technical areas were identified where testing results were needed to improve the technical basis by reducing the uncertainty due to extrapolating existing literature results. The first technical need was to quantify the role of slurry particles in small breaches where the slurry particles may plug and result in substantially reduced, or even negligible, respirable fraction formed by high-pressure sprays. The second technical need was to determine the aerosol droplet size distribution and volume from prototypic breaches and fluids, specifically including sprays from larger breaches with slurries where data from the literature are scarce. To address these technical areas, small- and large-scale test stands were constructed and operated with simulants to determine aerosol release fractions and generation rates from a range of breach sizes and geometries. The properties of the simulants represented the range of properties expected in the WTP process streams and included water, sodium salt solutions, slurries containing boehmite or gibbsite, and a hazardous chemical simulant. The effect of anti-foam agents was assessed with most of the simulants. Orifices included round holes and rectangular slots. The round holes ranged in size from 0.2 to 4.46 mm. The slots ranged from (width × length) 0.3 × 5 to 2.74 × 76.2 mm. Most slots were oriented longitudinally along the pipe, but some were oriented circumferentially. In addition, a limited number of multi-hole test pieces were tested in an attempt to assess the impact of a more complex breach. Much of the testing was conducted at pressures of 200 and 380 psi, but some tests were conducted at 100 psi. Testing the largest postulated breaches was deemed impractical because of the large size of some of the WTP equipment. This report presents the experimental results and analyses for the aerosol measurements obtained in the small-scale test stand. It includes a description of the simulants used and their properties, equipment and operations, data analysis methodologies, and test results. The results of tests investigating the role of slurry particles in plugging small breaches are reported in Mahoney et al. (2012). The results of the aerosol measurements in the large-scale test stand are reported in Schonewill et al. (2012) along with an analysis of the combined results from both test scales.

  18. Cosmological implications of the CMB large-scale structure

    SciTech Connect (OSTI)

    Melia, Fulvio

    2015-01-01

    The Wilkinson Microwave Anisotropy Probe (WMAP) and Planck may have uncovered several anomalies in the full cosmic microwave background (CMB) sky that could indicate possible new physics driving the growth of density fluctuations in the early universe. These include an unusually low power at the largest scales and an apparent alignment of the quadrupole and octopole moments. In a ΛCDM model where the CMB is described by a Gaussian Random Field, the quadrupole and octopole moments should be statistically independent. The emergence of these low probability features may simply be due to posterior selections from many such possible effects, whose occurrence would therefore not be as unlikely as one might naively infer. If this is not the case, however, and if these features are not due to effects such as foreground contamination, their combined statistical significance would be equal to the product of their individual significances. In the absence of such extraneous factors, and ignoring the biasing due to posterior selection, the missing large-angle correlations would have a probability as low as ∼0.1% and the low-ℓ multipole alignment would be unlikely at the ∼4.9% level; under the least favorable conditions, their simultaneous observation in the context of the standard model could then be likely at only the ∼0.005% level. In this paper, we explore the possibility that these features are indeed anomalous, and show that the corresponding probability of CMB multipole alignment in the R{sub h}=ct universe would then be ∼7-10%, depending on the number of large-scale Sachs-Wolfe induced fluctuations. Since the low power at the largest spatial scales is reproduced in this cosmology without the need to invoke cosmic variance, the overall likelihood of observing both of these features in the CMB is ∼7%, much more likely than in ΛCDM, if the anomalies are real. The key physical ingredient responsible for this difference is the existence in the former of a maximum fluctuation size at the time of recombination, which is absent in the latter because of inflation.
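
    The combined figure quoted above is just the product of the two individual probabilities:

        P_{\rm comb} \approx P_{\rm corr} \times P_{\rm align} \approx 0.001 \times 0.049 \approx 5 \times 10^{-5} \simeq 0.005\%.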

  19. Ferroelectric opening switches for large-scale pulsed power drivers.

    SciTech Connect (OSTI)

    Brennecka, Geoffrey L.; Rudys, Joseph Matthew; Reed, Kim Warren; Pena, Gary Edward; Tuttle, Bruce Andrew; Glover, Steven Frank

    2009-11-01

    Fast electrical energy storage or Voltage-Driven Technology (VDT) has dominated fast, high-voltage pulsed power systems for the past six decades. Fast magnetic energy storage or Current-Driven Technology (CDT) is characterized by 10,000× higher energy density than VDT and has a great number of other substantial advantages, but it has all but been neglected for all of these decades. The uniform explanation for neglect of CDT technology is invariably that the industry has never been able to make an effective opening switch, which is essential for the use of CDT. Most approaches to opening switches have involved plasma of one sort or another. On a large scale, gaseous plasmas have been used as a conductor to bridge the switch electrodes that provides an opening function when the current wave front propagates through to the output end of the plasma and fully magnetizes the plasma - this is called a Plasma Opening Switch (POS). Opening can be triggered in a POS using a magnetic field to push the plasma out of the A-K gap - this is called a Magnetically Controlled Plasma Opening Switch (MCPOS). On a small scale, depletion of electron plasmas in semiconductor devices is used to effect opening switch behavior, but these devices are relatively low voltage and low current compared to the hundreds of kilovolts and tens of kiloamperes of interest to pulsed power. This work is an investigation into an entirely new approach to opening switch technology that utilizes new materials in new ways. The new materials are ferroelectrics, and using them as an opening switch is a stark contrast to their traditional applications in optics and transducers. Emphasis is on use of high performance ferroelectrics with the objective of developing an opening switch that would be suitable for large scale pulsed power applications. Over the course of exploring this new ground, we have discovered new behaviors and properties of these materials that were heretofore unknown. Some of these unexpected discoveries have led to new research directions to address challenges.

  20. Large scale electromechanical transistor with application in mass sensing

    SciTech Connect (OSTI)

    Jin, Leisheng; Li, Lijie

    2014-12-07

    The nanomechanical transistor (NMT) has evolved from the single electron transistor, a device that operates by shuttling electrons with a self-excited central conductor. The unfavoured aspects of the NMT are the complexity of the fabrication process and its signal processing unit, which could potentially be overcome by designing much larger devices. This paper reports a new design of large scale electromechanical transistor (LSEMT), still taking advantage of the principle of shuttling electrons. However, because of the large size, nonlinear electrostatic forces induced by the transistor itself are not sufficient to drive the mechanical member into vibration, so an external force has to be used. In this paper, an LSEMT device is modelled, and its new application in mass sensing is postulated using two coupled mechanical cantilevers, with one of them embedded in the transistor. The sensor is capable of detecting added mass using the eigenstate-shift method by reading the change of electrical current from the transistor, which has much higher sensitivity than the conventional eigenfrequency-shift approach used in classical cantilever-based mass sensors. Numerical simulations are conducted to investigate the performance of the mass sensor.
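
    A minimal numerical sketch of the eigenstate-shift readout for two weakly coupled cantilevers (a lumped two-degree-of-freedom model; the stiffness, coupling, and added-mass values are invented for illustration, and this is not the authors' device model):

        import numpy as np

        k, kc, m = 1.0, 0.05, 1.0             # cantilever stiffness, coupling, mass (illustrative)

        def modes(m1, m2):
            """Natural frequencies and mode shapes of two coupled cantilevers."""
            K = np.array([[k + kc, -kc], [-kc, k + kc]])
            M = np.diag([m1, m2])
            w2, V = np.linalg.eig(np.linalg.inv(M) @ K)
            w2, V = w2.real, V.real
            order = np.argsort(w2)
            return np.sqrt(w2[order]), V[:, order]

        w0, V0 = modes(m, m)                  # balanced cantilevers
        w1, V1 = modes(m, m * 1.001)          # 0.1% mass added to cantilever 2
        print("eigenfrequency shift:", w1 - w0)
        print("eigenstate (mode-shape) change:", np.abs(V1) - np.abs(V0))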

  1. Large Scale Obscuration and Related Climate Effects Workshop: Proceedings

    SciTech Connect (OSTI)

    Zak, B.D.; Russell, N.A.; Church, H.W.; Einfeld, W.; Yoon, D.; Behl, Y.K.

    1994-05-01

    A Workshop on Large Scale Obscuration and Related Climate Effects was held 29-31 January 1992, in Albuquerque, New Mexico. The objectives of the workshop were: to determine through the use of expert judgement the current state of understanding of regional and global obscuration and related climate effects associated with nuclear weapons detonations; to estimate how large the uncertainties are in the parameters associated with these phenomena (given specific scenarios); to evaluate the impact of these uncertainties on obscuration predictions; and to develop an approach for the prioritization of further work on newly-available data sets to reduce the uncertainties. The workshop consisted of formal presentations by the 35 participants, and subsequent topical working sessions on: the source term; aerosol optical properties; atmospheric processes; and electro-optical systems performance and climatic impacts. Summaries of the conclusions reached in the working sessions are presented in the body of the report. Copies of the transparencies shown as part of each formal presentation are contained in the appendices (microfiche).

  2. Large-Scale Data Challenges in Future Power Grids

    SciTech Connect (OSTI)

    Yin, Jian; Sharma, Poorva; Gorton, Ian; Akyol, Bora A.

    2013-03-25

    This paper describes technical challenges in supporting large-scale real-time data analysis for future power grid systems and discusses various design options to address these challenges. Even though the existing U.S. power grid has served the nation remarkably well over the last 120 years, big changes are on the horizon. The widespread deployment of renewable generation, smart grid controls, energy storage, plug-in hybrids, and new conducting materials will require fundamental changes in the operational concepts and principal components. The whole system becomes highly dynamic and needs constant adjustments based on real time data. Even though millions of sensors such as phasor measurement units (PMUs) and smart meters are being widely deployed, a data layer that can support this amount of data in real time is needed. Unlike the data fabric in cloud services, the data layer for smart grids must address some unique challenges. This layer must be scalable to support millions of sensors and a large number of diverse applications and still provide real time guarantees. Moreover, the system needs to be highly reliable and highly secure because the power grid is a critical piece of infrastructure. No existing systems can satisfy all the requirements at the same time. We examine various design options. In particular, we explore the special characteristics of power grid data to meet both scalability and quality of service requirements. Our initial prototype can improve performance by orders of magnitude over existing general-purpose systems. The prototype was demonstrated with several use cases from PNNL’s FPGI and was shown to be able to integrate huge amounts of data from a large number of sensors and a diverse set of applications.

  3. Ground movements associated with large-scale underground coal gasification

    SciTech Connect (OSTI)

    Siriwardane, H.J.; Layne, A.W.

    1989-09-01

    The primary objective of this work was to predict the surface and underground movement associated with large-scale multiwell burn sites in the Illinois Basin and Appalachian Basin by using the subsidence/thermomechanical model UCG/HEAT. This code is based on the finite element method. In particular, it can be used to compute (1) the temperature field around an underground cavity when the temperature variation of the cavity boundary is known, and (2) displacements and stresses associated with body forces (gravitational forces) and a temperature field. It is hypothesized that large Underground Coal Gasification (UCG) cavities generated during the line-drive process will be similar to those generated by longwall mining. If that is the case, then as a UCG process continues, the roof of the cavity becomes unstable and collapses. In the UCG/HEAT computer code, roof collapse is modeled using a simplified failure criterion (Lee 1985). It is anticipated that roof collapse would occur behind the burn front; therefore, forward combustion can be continued. As the gasification front propagates, the length of the cavity would become much larger than its width. Because of this large length-to-width ratio in the cavity, ground response behavior could be analyzed by considering a plane-strain idealization. In a plane-strain idealization of the UCG cavity, a cross-section perpendicular to the axis of propagation could be considered, and a thermomechanical analysis performed using a modified version of the two-dimensional finite element code UCG/HEAT. 15 refs., 9 figs., 3 tabs.
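
    For reference, thermomechanical finite element analyses of this kind typically solve the Duhamel-Neumann constitutive relation, written here schematically with Lamé constants λ and μ, thermal expansion coefficient α, and reference temperature T{sub 0}; the plane-strain idealization mentioned above additionally imposes zero out-of-plane strain. This is a generic statement of the model class, not an excerpt from UCG/HEAT:

        \sigma_{ij} = \lambda\,\varepsilon_{kk}\,\delta_{ij} + 2\mu\,\varepsilon_{ij} - (3\lambda + 2\mu)\,\alpha\,(T - T_0)\,\delta_{ij}, \qquad \varepsilon_{zz} = \varepsilon_{xz} = \varepsilon_{yz} = 0 \ \text{(plane strain)}.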

  4. Small-Scale Spray Releases: Initial Aerosol Test Results

    SciTech Connect (OSTI)

    Mahoney, Lenna A.; Gauglitz, Phillip A.; Kimura, Marcia L.; Brown, Garrett N.; Kurath, Dean E.; Buchmiller, William C.; Smith, Dennese M.; Blanchard, Jeremy; Song, Chen; Daniel, Richard C.; Wells, Beric E.; Tran, Diana N.; Burns, Carolyn A.

    2013-05-29

    One of the events postulated in the hazard analysis at the Waste Treatment and Immobilization Plant (WTP) and other U.S. Department of Energy (DOE) nuclear facilities is a breach in process piping that produces aerosols with droplet sizes in the respirable range. The current approach for predicting the size and concentration of aerosols produced in a spray leak involves extrapolating from correlations reported in the literature. These correlations are based on results obtained from small engineered spray nozzles using pure liquids with Newtonian fluid behavior. The narrow ranges of physical properties on which the correlations are based do not cover the wide range of slurries and viscous materials that will be processed in the WTP and across processing facilities in the DOE complex. Two key technical areas were identified where testing results were needed to improve the technical basis by reducing the uncertainty due to extrapolating existing literature results. The first technical need was to quantify the role of slurry particles in small breaches where the slurry particles may plug and result in substantially reduced, or even negligible, respirable fraction formed by high-pressure sprays. The second technical need was to determine the aerosol droplet size distribution and volume from prototypic breaches and fluids, specifically including sprays from larger breaches with slurries where data from the literature are scarce. To address these technical areas, small- and large-scale test stands were constructed and operated with simulants to determine aerosol release fractions and net generation rates from a range of breach sizes and geometries. The properties of the simulants represented the range of properties expected in the WTP process streams and included water, sodium salt solutions, slurries containing boehmite or gibbsite, and a hazardous chemical simulant. The effect of antifoam agents was assessed with most of the simulants. Orifices included round holes and rectangular slots. For the combination of both test stands, the round holes ranged in size from 0.2 to 4.46 mm. The slots ranged from (width × length) 0.3 × 5 to 2.74 × 76.2 mm. Most slots were oriented longitudinally along the pipe, but some were oriented circumferentially. In addition, a limited number of multi-hole test pieces were tested in an attempt to assess the impact of a more complex breach. Much of the testing was conducted at pressures of 200 and 380 psi, but some tests were conducted at 100 psi. Testing the largest postulated breaches was deemed impractical because of the much larger flow rates and equipment that would be required. This report presents the experimental results and analyses for the aerosol measurements obtained in the small-scale test stand. It includes a description of the simulants used and their properties, equipment and operations, data analysis methodologies, and test results. The results of tests investigating the role of slurry particles in plugging small breaches are reported in Mahoney et al. (2012). The results of the aerosol measurements in the large-scale test stand are reported in Schonewill et al. (2012) along with an analysis of the combined results from both test scales.

  5. Development of fine-resolution analyses and expanded large-scale...

    Office of Scientific and Technical Information (OSTI)

    II: Scale-awareness and application to single-column model experiments Title: Development of fine-resolution analyses and expanded large-scale forcing properties. Part II: ...

  6. Full-scale shear tests of embedded floor modules

    SciTech Connect (OSTI)

    Fricke, K.E.; Jones, W.D.; Burdette, E.G.

    1984-01-01

    A floor module used to support a centrifuge machine is a steel framework embedded in a 2-ft (610-mm) thick concrete slab. This steel framework is made up of four cylindrical hollow sockets tied together with four S-beams to form a square pattern. In the event of a centrifuge machine wreck, large forces are transmitted from the machine to the corner sockets (through connecting steel lugs) and to the concrete slab. The floor modules are loaded with a combination of torsion and shear forces in the plane of the floor slab. Precisely how these wreck loads are transmitted to, and reacted by, the floor modules and the surrounding concrete was the subject of a series of full-scale tests performed at the DOE Gas Centrifuge Enrichment Plant (GCEP) located near Piketon, Ohio. This report describes the tests and the results of the data reduction to date.

  7. DOE/NNSA Participates in Large-Scale CTBT On-Site Inspection Exercise in

    National Nuclear Security Administration (NNSA)

    Jordan | National Nuclear Security Administration DOE/NNSA Participates in Large-Scale CTBT On-Site Inspection Exercise in Jordan Friday, November 28, 2014 - 9:05am Experts from U.S. Department of Energy National Laboratories, including Sandia National Laboratories, Los Alamos National Laboratory, Lawrence Livermore National Laboratory, and Pacific Northwest National Laboratory, are participating in the Comprehensive Nuclear-Test-Ban Treaty (CTBT) Integrated Field Exercise 2014 (IFE14), a

  8. LARGE SCALE METHOD FOR THE PRODUCTION AND PURIFICATION OF CURIUM

    DOE Patents [OSTI]

    Higgins, G.H.; Crane, W.W.T.

    1959-05-19

    A large-scale process for production and purification of Cm/sup 242/ is described. Aluminum slugs containing Am are irradiated and declad in a NaOH--NaNO/sub 3/ solution at 85 to 100 deg C. The resulting slurry is filtered and washed with NaOH, NH/sub 4/OH, and H/sub 2/O. Recovery of Cm from the filtrate and washings is effected by an Fe(OH)/sub 3/ precipitation. The precipitates are then combined and dissolved in HCl, and refractory oxides are centrifuged out. These oxides are then fused with Na/sub 2/CO/sub 3/ and dissolved in HCl. The solution is evaporated and LiCl solution added. The Cm, rare earths, and anionic impurities are adsorbed on a strong-base anion exchange resin. Impurities are eluted with LiCl--HCl solution; rare earths and Cm are eluted by HCl. Other ion exchange steps further purify the Cm. The Cm is then precipitated as fluoride and used in this form or further purified and processed. (T.R.H.)

  9. Large-Scale Algal Cultivation, Harvesting and Downstream Processing...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    screening strains for desirable characteristics, identifying and mitigating contaminants, scaling up cultures for outdoor growth, harvesting and processing technologies,...

  10. Parallel I/O Software Infrastructure for Large-Scale Systems

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Parallel I/O Software Infrastructure for Large-Scale Systems - an illustration of how MPI-IO file domain...

  11. Comparison of the effects in the rock mass of large-scale chemical...

    Office of Scientific and Technical Information (OSTI)

    Comparison of the effects in the rock mass of large-scale chemical and nuclear explosions. ... Title: Comparison of the effects in the rock mass of large-scale chemical and nuclear ...

  12. The IR-resummed Effective Field Theory of Large Scale Structures...

    Office of Scientific and Technical Information (OSTI)

    IR-resummed Effective Field Theory of Large Scale Structures Citation Details In-Document Search Title: The IR-resummed Effective Field Theory of Large Scale Structures We present a ...

  13. Property:Scale Test | Open Energy Information

    Open Energy Info (EERE)

    it generated 40kW in 2 5 m wave height and 4 sec wave period condition MHK TechnologiesHydroGen 10 + Tenths of tests at sea have already been performed MHK TechnologiesHydroflo...

  14. EERE Success Story-FEMP Helps Federal Facilities Develop Large-Scale

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Renewable Energy Projects | Department of Energy Helps Federal Facilities Develop Large-Scale Renewable Energy Projects EERE Success Story-FEMP Helps Federal Facilities Develop Large-Scale Renewable Energy Projects August 21, 2013 - 12:00am Addthis EERE's Federal Energy Management Program issued a new resource that provides best practices and helpful guidance for federal agencies developing large-scale renewable energy projects. The resource, Large-Scale Renewable Energy Guide: Developing

  15. Large-Scale Sequencing: The Future of Genomic Sciences Colloquium

    SciTech Connect (OSTI)

    Margaret Riley; Merry Buckley

    2009-01-01

    Genetic sequencing and the various molecular techniques it has enabled have revolutionized the field of microbiology. Examining and comparing the genetic sequences borne by microbes - including bacteria, archaea, viruses, and microbial eukaryotes - provides researchers insights into the processes microbes carry out, their pathogenic traits, and new ways to use microorganisms in medicine and manufacturing. Until recently, sequencing entire microbial genomes has been laborious and expensive, and the decision to sequence the genome of an organism was made on a case-by-case basis by individual researchers and funding agencies. Now, thanks to new technologies, the cost and effort of sequencing is within reach for even the smallest facilities, and the ability to sequence the genomes of a significant fraction of microbial life may be possible. The availability of numerous microbial genomes will enable unprecedented insights into microbial evolution, function, and physiology. However, the current ad hoc approach to gathering sequence data has resulted in an unbalanced and highly biased sampling of microbial diversity. A well-coordinated, large-scale effort to target the breadth and depth of microbial diversity would result in the greatest impact. The American Academy of Microbiology convened a colloquium to discuss the scientific benefits of engaging in a large-scale, taxonomically-based sequencing project. A group of individuals with expertise in microbiology, genomics, informatics, ecology, and evolution deliberated on the issues inherent in such an effort and generated a set of specific recommendations for how best to proceed. The vast majority of microbes are presently uncultured and, thus, pose significant challenges to such a taxonomically-based approach to sampling genome diversity. However, we have yet to even scratch the surface of the genomic diversity among cultured microbes. A coordinated sequencing effort of cultured organisms is an appropriate place to begin, since not only are their genomes available, but they are also accompanied by data on environment and physiology that can be used to understand the resulting data. As single cell isolation methods improve, there should be a shift toward incorporating uncultured organisms and communities into this effort. Efforts to sequence cultivated isolates should target characterized isolates from culture collections for which biochemical data are available, as well as other cultures of lasting value from personal collections. The genomes of type strains should be among the first targets for sequencing, but creative culture methods, novel cell isolation, and sorting methods would all be helpful in obtaining organisms we have not yet been able to cultivate for sequencing. The data that should be provided for strains targeted for sequencing will depend on the phylogenetic context of the organism and the amount of information available about its nearest relatives. Annotation is an important part of transforming genome sequences into useful resources, but it represents the most significant bottleneck to the field of comparative genomics right now and must be addressed. Furthermore, there is a need for more consistency in both annotation and achieving annotation data. As new annotation tools become available over time, re-annotation of genomes should be implemented, taking advantage of advancements in annotation techniques in order to capitalize on the genome sequences and increase both the societal and scientific benefit of genomics work. 
Given the proper resources, the knowledge and ability exist to be able to select model systems, some simple, some less so, and dissect them so that we may understand the processes and interactions at work in them. Colloquium participants suggest a five-pronged, coordinated initiative to exhaustively describe six different microbial ecosystems, designed to describe all the gene diversity, across genomes. In this effort, sequencing should be complemented by other experimental data, particularly transcriptomics and metabolomics data, all of which

  16. Large Scale Computing and Storage Requirements for Nuclear Physics Research

    SciTech Connect (OSTI)

    Gerber, Richard A.; Wasserman, Harvey J.

    2012-03-02

    The National Energy Research Scientific Computing Center (NERSC) is the primary computing center for the DOE Office of Science, serving approximately 4,000 users and hosting some 550 projects that involve nearly 700 codes for a wide variety of scientific disciplines. In addition to large-scale computing resources, NERSC provides critical staff support and expertise to help scientists make the most efficient use of these resources to advance the scientific mission of the Office of Science. In May 2011, NERSC, DOE’s Office of Advanced Scientific Computing Research (ASCR), and DOE’s Office of Nuclear Physics (NP) held a workshop to characterize HPC requirements for NP research over the next three to five years. The effort is part of NERSC’s continuing involvement in anticipating future user needs and deploying necessary resources to meet these demands. The workshop revealed several key requirements, in addition to achieving its goal of characterizing NP computing. The key requirements include: 1. Larger allocations of computational resources at NERSC; 2. Visualization and analytics support; and 3. Support at NERSC for the unique needs of experimental nuclear physicists. This report expands upon these key points and adds others. The results are based upon representative samples, called “case studies,” of the needs of science teams within NP. The case studies were prepared by NP workshop participants and contain a summary of science goals, methods of solution, current and future computing requirements, and special software and support needs. Participants were also asked to describe their strategy for computing in the highly parallel, “multi-core” environment that is expected to dominate HPC architectures over the next few years. The report also includes a section with NERSC responses to the workshop findings. NERSC has many initiatives already underway that address key workshop findings and all of the action items are aligned with NERSC strategic plans.

  17. The Dark Energy of Turbulent Damping: Large Scale Dissipation...

    Office of Scientific and Technical Information (OSTI)

    Resource Relation: Conference: Plasma Energization: Exchanges between Fluid and Kinetic Scales ; 2015-05-04 - 2015-05-06 ; Los Alamos, New Mexico, United States Research Org: Los ...

  18. Large-Scale Manufacturing of Nanoparticulate-Based Lubrication Additives

    SciTech Connect (OSTI)

    2009-06-01

    This factsheet describes a research project whose goal is to design, develop, manufacture, and scale up boron-based nanoparticulate lubrication additives.

  19. Large Scale Computing and Storage Requirements for High Energy Physics

    SciTech Connect (OSTI)

    Gerber, Richard A.; Wasserman, Harvey

    2010-11-24

    The National Energy Research Scientific Computing Center (NERSC) is the leading scientific computing facility for the Department of Energy's Office of Science, providing high-performance computing (HPC) resources to more than 3,000 researchers working on about 400 projects. NERSC provides large-scale computing resources and, crucially, the support and expertise needed for scientists to make effective use of them. In November 2009, NERSC, DOE's Office of Advanced Scientific Computing Research (ASCR), and DOE's Office of High Energy Physics (HEP) held a workshop to characterize the HPC resources needed at NERSC to support HEP research through the next three to five years. The effort is part of NERSC's legacy of anticipating users needs and deploying resources to meet those demands. The workshop revealed several key points, in addition to achieving its goal of collecting and characterizing computing requirements. The chief findings: (1) Science teams need access to a significant increase in computational resources to meet their research goals; (2) Research teams need to be able to read, write, transfer, store online, archive, analyze, and share huge volumes of data; (3) Science teams need guidance and support to implement their codes on future architectures; and (4) Projects need predictable, rapid turnaround of their computational jobs to meet mission-critical time constraints. This report expands upon these key points and includes others. It also presents a number of case studies as representative of the research conducted within HEP. Workshop participants were asked to codify their requirements in this case study format, summarizing their science goals, methods of solution, current and three-to-five year computing requirements, and software and support needs. Participants were also asked to describe their strategy for computing in the highly parallel, multi-core environment that is expected to dominate HPC architectures over the next few years. The report includes a section that describes efforts already underway or planned at NERSC that address requirements collected at the workshop. NERSC has many initiatives in progress that address key workshop findings and are aligned with NERSC's strategic plans.

  20. The linearly scaling 3D fragment method for large scale electronic structure calculations

    SciTech Connect (OSTI)

    Zhao, Zhengji; Meza, Juan; Lee, Byounghak; Shan, Hongzhang; Strohmaier, Erich; Bailey, David; Wang, Lin-Wang

    2009-07-28

    The Linearly Scaling three-dimensional fragment (LS3DF) method is an O(N) ab initio electronic structure method for large-scale nano material simulations. It is a divide-and-conquer approach with a novel patching scheme that effectively cancels out the artificial boundary effects, which exist in all divide-and-conquer schemes. This method has made ab initio simulations of thousand-atom nanosystems feasible in a couple of hours, while retaining essentially the same accuracy as the direct calculation methods. The LS3DF method won the 2008 ACM Gordon Bell Prize for algorithm innovation. Our code has reached 442 Tflop/s running on 147,456 processors on the Cray XT5 (Jaguar) at OLCF, and has been run on 163,840 processors on the Blue Gene/P (Intrepid) at ALCF, and has been applied to a system containing 36,000 atoms. In this paper, we will present the recent parallel performance results of this code, and will apply the method to asymmetric CdSe/CdS core/shell nanorods, which have potential applications in electronic devices and solar cells.
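
    The core idea behind an LS3DF-style divide-and-conquer assembly — evaluate a quantity on overlapping fragments and combine the pieces with alternating signs so that artificial boundary contributions cancel — can be illustrated with a deliberately simplified sketch. The one-dimensional chain, the site/bond "energies," and the fragment shapes below are hypothetical stand-ins, not the actual LS3DF implementation, which applies a DFT solver to 3D fragments.

        # A toy 1D illustration of divide-and-conquer energy assembly with
        # alternating-sign overlapping fragments, in the spirit of LS3DF.
        # Site/bond energies and fragment shapes are hypothetical.
        import numpy as np

        rng = np.random.default_rng(0)
        n = 12                      # number of sites on a periodic chain
        site = rng.random(n)        # per-site energy contributions
        bond = rng.random(n)        # bond[i] couples site i and site (i+1) % n

        def fragment_energy(start, length):
            """Energy of a contiguous fragment: its sites plus its internal bonds.
            Bonds crossing the fragment boundary are cut -- the 'boundary effect'."""
            idx = [(start + k) % n for k in range(length)]
            e = sum(site[i] for i in idx)
            e += sum(bond[idx[k]] for k in range(length - 1))  # internal bonds only
            return e

        # Direct ("exact") total: all sites and all bonds of the periodic chain.
        e_direct = site.sum() + bond.sum()

        # Divide-and-conquer assembly: for every site, add a 2-site fragment and
        # subtract the 1-site fragment it overlaps; the cut-bond errors cancel.
        e_patched = sum(fragment_energy(i, 2) - fragment_energy(i, 1) for i in range(n))

        print(e_direct, e_patched)   # identical up to round-off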

  1. The Linearly Scaling 3D Fragment Method for Large Scale Electronic Structure Calculations

    SciTech Connect (OSTI)

    Zhao, Zhengji; Meza, Juan; Lee, Byounghak; Shan, Hongzhang; Strohmaier, Erich; Bailey, David; Wang, Lin-Wang

    2009-06-26

    The Linearly Scaling three-dimensional fragment (LS3DF) method is an O(N) ab initio electronic structure method for large-scale nano material simulations. It is a divide-and-conquer approach with a novel patching scheme that effectively cancels out the artificial boundary effects, which exist in all divide-and-conquer schemes. This method has made ab initio simulations of thousand-atom nanosystems feasible in a couple of hours, while retaining essentially the same accuracy as the direct calculation methods. The LS3DF method won the 2008 ACM Gordon Bell Prize for algorithm innovation. Our code has reached 442 Tflop/s running on 147,456 processors on the Cray XT5 (Jaguar) at OLCF, and has been run on 163,840 processors on the Blue Gene/P (Intrepid) at ALCF, and has been applied to a system containing 36,000 atoms. In this paper, we will present the recent parallel performance results of this code, and will apply the method to asymmetric CdSe/CdS core/shell nanorods, which have potential applications in electronic devices and solar cells.

  2. Pre-Approval Draft Environmental Assessment Large-Scale, Open...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    ... Base (NAFB) has the scheduling responsibility for all ... volatile organics from fuel storage facilities (DOE, 1996). ... low humidity, large daily temperature ranges, and ...

  3. Summary Report on FY12 Small-Scale Test Activities High Temperature Electrolysis Program

    SciTech Connect (OSTI)

    James O'Brien

    2012-09-01

    This report provides a description of the apparatus and the single cell testing results performed at Idaho National Laboratory during January–August 2012. It is an addendum to the Small-Scale Test Report issued in January 2012. The primary program objectives during this time period were associated with design, assembly, and operation of two large experiments: a pressurized test and a 4 kW test. Consequently, the activities described in this report represent a much smaller effort.

  4. Creating Large Scale Database Servers (Technical Report) | SciTech...

    Office of Scientific and Technical Information (OSTI)

    To date, over 70TB of data have been placed in ObjectivityDB, making it one of the largest databases in the world. Providing access to such a large quantity of data through a ...

  5. COLLOQUIUM: Large Scale Superconducting Magnets for Variety of...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    These developments have been made using the low temperature superconductors (LTS) NbTi and Nb3Sn. The now operating Large Hadron Collider at CERN has demonstrated the scientific ...

  6. Scaling Relationships Based on Scaled Tank Mixing and Transfer Test Results

    SciTech Connect (OSTI)

    Piepel, Gregory F.; Holmes, Aimee E.; Heredia-Langner, Alejandro; Lee, Kearn P.; Kelly, Steven E.

    2014-01-01

    This report documents the statistical analyses performed (by Pacific Northwest National Laboratory for Washington River Protection Solutions) on data from 26 tests conducted using two scaled tanks (43 and 120 inches) in the Small Scale Mixing Demonstration platform. The 26 tests varied several test parameters, including mixer-jet nozzle velocity, base simulant, supernatant viscosity, and capture velocity. For each test, samples were taken pre-transfer and during five batch transfers. The samples were analyzed for the concentrations (lbs/gal slurry) of four primary components in the base simulants (gibbsite, stainless steel, sand, and ZrO2). The statistical analyses included modeling the component concentrations as functions of test parameters using stepwise regression with two different model forms. The resulting models were used in an equivalent performance approach to calculate values of scaling exponents (for a simple geometric scaling relationship) as functions of the parameters in the component concentration models. The resulting models and scaling exponents are displayed in tables and graphically. The sensitivities of component concentrations and scaling exponents to the test parameters are presented graphically. These results will serve as inputs to subsequent work by other researchers to develop scaling relationships that are applicable to full-scale tanks.
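
    The "equivalent performance" idea — choose the exponent a in a geometric scale-up rule v_large = v_small (D_large/D_small)^a so that the response predicted at the large scale matches the small-scale response — can be sketched numerically. The linear concentration-versus-velocity models below are hypothetical placeholders for the report's stepwise-regression models; only the exponent-solving step is illustrated.

        # Hedged sketch of an "equivalent performance" scaling-exponent calculation.
        # conc_small / conc_large are hypothetical fitted models of a component
        # concentration vs. mixer-jet nozzle velocity at the 43-in. and 120-in. tanks.
        import numpy as np

        D_small, D_large = 43.0, 120.0      # tank diameters, inches

        def conc_small(v):                  # hypothetical small-tank model (lbs/gal)
            return 0.80 - 0.030 * v

        def conc_large(v):                  # hypothetical large-tank model (lbs/gal)
            return 0.95 - 0.028 * v

        v_small = 12.0                      # nozzle velocity tested at small scale (ft/s)

        # Equivalent performance: find a so that running the large tank at
        # v_large = v_small * (D_large/D_small)**a matches the small-tank concentration.
        a_grid = np.linspace(-1.0, 2.0, 3001)
        mismatch = np.array([conc_large(v_small * (D_large / D_small) ** a)
                             - conc_small(v_small) for a in a_grid])
        a_best = a_grid[np.argmin(np.abs(mismatch))]
        print(f"scaling exponent a = {a_best:.3f}")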

  7. Scaling Relationships Based on Scaled Tank Mixing and Transfer Test Results

    SciTech Connect (OSTI)

    Piepel, Gregory F.; Holmes, Aimee E.; Heredia-Langner, Alejandro

    2013-09-18

    This report documents the statistical analyses performed (by Pacific Northwest National Laboratory for Washington River Protection Solutions) on data from 26 tests conducted using two scaled tanks (43 and 120 inches) in the Small Scale Mixing Demonstration platform. The 26 tests varied several test parameters, including mixer-jet nozzle velocity, base simulant, supernatant viscosity, and capture velocity. For each test, samples were taken pre-transfer and during five batch transfers. The samples were analyzed for the concentrations (lbs/gal slurry) of four primary components in the base simulants (gibbsite, stainless steel, sand, and ZrO2). The statistical analyses included modeling the component concentrations as functions of test parameters using stepwise regression with two different model forms. The resulting models were used in an equivalent performance approach to calculate values of scaling exponents (for a simple geometric scaling relationship) as functions of the parameters in the component concentration models. The resulting models and scaling exponents are displayed in tables and graphically. The sensitivities of component concentrations and scaling exponents to the test parameters are presented graphically. These results will serve as inputs to subsequent work by other researchers to develop scaling relationships that are applicable to full-scale tanks.

  8. Self-consistency tests of large-scale dynamics parameterizations...

    Office of Scientific and Technical Information (OSTI)

    compare the result of a cloud-resolving simulation coupled to WTG or WPG with an otherwise ... Sponsoring Org: USDOE Office of Science (SC), Biological and Environmental Research (BER) ...

  9. Large-Scale Field Study of Landfill Covers at Sandia National Laboratories

    SciTech Connect (OSTI)

    Dwyer, S.F.

    1998-09-01

    A large-scale field demonstration comparing final landfill cover designs has been constructed and is currently being monitored at Sandia National Laboratories in Albuquerque, New Mexico. Two conventional designs (a RCRA Subtitle `D' Soil Cover and a RCRA Subtitle `C' Compacted Clay Cover) were constructed side-by-side with four alternative cover test plots designed for dry environments. The demonstration is intended to evaluate the various cover designs based on their respective water balance performance, ease and reliability of construction, and cost. This paper presents an overview of the ongoing demonstration.

  10. Implications of the Baltimore Rail Tunnel Fire for Full-Scale Testing of Shipping Casks

    SciTech Connect (OSTI)

    Halstead, R. J.; Dilger, F.

    2003-02-25

    The U.S. Nuclear Regulatory Commission (NRC) does not currently require full-scale physical testing of shipping casks as part of its certification process. Stakeholders have long urged NRC to require full-scale testing as part of certification. NRC is currently preparing a full-scale cask-testing proposal as part of the Package Performance Study (PPS) that grew out of the NRC reexamination of the Modal Study. The State of Nevada and Clark County remain committed to the position that demonstration testing would not be an acceptable substitute for a combination of full-scale testing, scale-model tests, and computer simulation of each new cask design prior to certification. Based on previous analyses of cask testing issues, and on preliminary findings regarding the July 2001 Baltimore rail tunnel fire, the authors recommend that NRC prioritize extra-regulatory thermal testing of a large rail cask and the GA-4 truck cask under the PPS. The specific fire conditions and other aspects of the full-scale extra-regulatory tests recommended for the PPS are yet to be determined. NRC, in consultation with stakeholders, must consider past real-world accidents and computer simulations to establish temperature failure thresholds for cask containment and fuel cladding. The cost of extra-regulatory thermal testing is yet to be determined. The minimum cost for regulatory thermal testing of a legal-weight truck cask would likely be $3.3-3.8 million.

  11. Development of fine-resolution analyses and expanded large-scale...

    Office of Scientific and Technical Information (OSTI)

    II: Scale-awareness and application to single-column model experiments Citation Details In-Document Search Title: Development of fine-resolution analyses and expanded large-scale ...

  12. Application of DYNA3D in large scale crashworthiness calculations

    SciTech Connect (OSTI)

    Benson, D.J.; Hallquist, J.O.; Igarashi, M.; Shimomaki, K.; Mizuno, M.

    1986-01-01

    This paper presents an example of an automobile crashworthiness calculation. Based on our experiences with the example calculation, we make recommendations to those interested in performing crashworthiness calculations. The example presented in this paper was supplied by Suzuki Motor Co., Ltd., and provided a significant shakedown for the new large deformation shell capability of the DYNA3D code. 15 refs., 3 figs.

  13. Locations of Smart Grid Demonstration and Large-Scale Energy Storage Projects

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Map of the United States showing the location of all projects created with funding from the Smart Grid Demonstration and Energy Storage Project, funded through the American Recovery and Reinvestment Act.

  14. DOE's Office of Science Seeks Proposals for Expanded Large-Scale Scientific Computing

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    May 16, 2005 -- WASHINGTON, D.C. -- Secretary of Energy Samuel W. Bodman announced today that DOE's Office of Science is seeking proposals to support innovative, large-scale computational science projects to enable high-impact advances through the use of advanced computers not commonly available in ...

  15. HyLights -- Tools to Prepare the Large-Scale European Demonstration...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Projects on Hydrogen for Transport HyLights -- Tools to Prepare the Large-Scale European Demonstration Projects on Hydrogen for Transport Presented at Refueling ...

  16. Large-Scale Production of Marine Microalgae for Fuel and Feeds

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Bioenergy Technologies Office (BETO) 2015 Project Peer Review Large-Scale Production of Marine Microalgae for Fuel and Feeds March 24, 2015 Algae Platform Review Mark Huntley ...

  17. Development of fine-resolution analyses and expanded large-scale...

    Office of Scientific and Technical Information (OSTI)

    I: Methodology and evaluation Citation Details In-Document Search Title: Development of fine-resolution analyses and expanded large-scale forcing properties. Part I: Methodology ...

  18. Large-scale Offshore Wind Power in the United States. Assessment of Opportunities and Barriers

    SciTech Connect (OSTI)

    Musial, Walter; Ram, Bonnie

    2010-09-01

    This report describes the benefits of and barriers to large-scale deployment of offshore wind energy systems in U.S. waters.

  19. Asynchronous Two-Level Checkpointing Scheme for Large-Scale Adjoints...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Adjoints are an important computational tool for large-scale sensitivity evaluation, uncertainty quantification, and derivative-based...

  20. Large Scale Comparative Visualisation of Regulatory Networks with TRNDiff

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Chua, Xin-Yi; Buckingham, Lawrence; Hogan, James M.; Novichkov, Pavel

    2015-06-01

    The advent of Next Generation Sequencing (NGS) technologies has seen explosive growth in genomic datasets, and dense coverage of related organisms, supporting study of subtle, strain-specific variations as a determinant of function. Such data collections present fresh and complex challenges for bioinformatics, those of comparing models of complex relationships across hundreds and even thousands of sequences. Transcriptional Regulatory Network (TRN) structures document the influence of regulatory proteins called Transcription Factors (TFs) on associated Target Genes (TGs). TRNs are routinely inferred from model systems or iterative search, and analysis at these scales requires simultaneous displays of multiple networks well beyond those of existing network visualisation tools [1]. In this paper we describe TRNDiff, an open source system supporting the comparative analysis and visualization of TRNs (and similarly structured data) from many genomes, allowing rapid identification of functional variations within species. The approach is demonstrated through a small scale multiple TRN analysis of the Fur iron-uptake system of Yersinia, suggesting a number of candidate virulence factors; and through a larger study exploiting integration with the RegPrecise database (http://regprecise.lbl.gov; [2]) - a collection of hundreds of manually curated and predicted transcription factor regulons drawn from across the entire spectrum of prokaryotic organisms.
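
    At its simplest, the kind of cross-genome regulon comparison TRNDiff visualizes reduces to set operations over TF-to-target-gene maps. The regulons below are invented toy data, not RegPrecise content, and the gene names are placeholders.

        # Minimal sketch of comparing a transcription factor's regulon across genomes,
        # the kind of variation TRNDiff displays graphically. Gene names are invented.
        regulons = {
            "Genome_A": {"Fur": {"fhuA", "entB", "sodB", "ftnA"}},
            "Genome_B": {"Fur": {"fhuA", "entB", "irp1"}},
            "Genome_C": {"Fur": {"fhuA", "sodB", "irp1", "ybtA"}},
        }

        tf = "Fur"
        all_targets = set().union(*(g[tf] for g in regulons.values()))
        core = set.intersection(*(g[tf] for g in regulons.values()))

        print(f"{tf}: {len(all_targets)} targets seen anywhere, core = {sorted(core)}")
        for name, genes in regulons.items():
            others = set().union(*(g[tf] for n, g in regulons.items() if n != name))
            print(f"  {name}: absent={sorted(all_targets - genes[tf])} "
                  f"unique={sorted(genes[tf] - others)}")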

  1. Large Scale Comparative Visualisation of Regulatory Networks with TRNDiff

    SciTech Connect (OSTI)

    Chua, Xin-Yi; Buckingham, Lawrence; Hogan, James M.; Novichkov, Pavel

    2015-06-01

    The advent of Next Generation Sequencing (NGS) technologies has seen explosive growth in genomic datasets, and dense coverage of related organisms, supporting study of subtle, strain-specific variations as a determinant of function. Such data collections present fresh and complex challenges for bioinformatics, those of comparing models of complex relationships across hundreds and even thousands of sequences. Transcriptional Regulatory Network (TRN) structures document the influence of regulatory proteins called Transcription Factors (TFs) on associated Target Genes (TGs). TRNs are routinely inferred from model systems or iterative search, and analysis at these scales requires simultaneous displays of multiple networks well beyond those of existing network visualisation tools [1]. In this paper we describe TRNDiff, an open source system supporting the comparative analysis and visualization of TRNs (and similarly structured data) from many genomes, allowing rapid identification of functional variations within species. The approach is demonstrated through a small scale multiple TRN analysis of the Fur iron-uptake system of Yersinia, suggesting a number of candidate virulence factors; and through a larger study exploiting integration with the RegPrecise database (http://regprecise.lbl.gov; [2]) - a collection of hundreds of manually curated and predicted transcription factor regulons drawn from across the entire spectrum of prokaryotic organisms.

  2. Feeding a large-scale physics application to Python

    SciTech Connect (OSTI)

    Beazley, D.M.; Lomdahl, P.S.

    1997-10-01

    The authors describe their experiences using Python with the SPaSM molecular dynamics code at Los Alamos National Laboratory. Their application was originally developed as a large monolithic code for massively parallel processing systems; they have used Python to transform it into a flexible, highly modular, and extremely powerful system for performing simulation, data analysis, and visualization. In addition, they describe how Python has solved a number of important problems related to the development, debugging, deployment, and maintenance of scientific software.
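
    As a general illustration of the pattern described here — steering a compiled simulation kernel from Python so that setup, analysis, and visualization become scriptable — the ctypes sketch below drives a hypothetical shared library libsim.so with a hypothetical advance() entry point. It is not the SPaSM interface, which was generated with a dedicated wrapper tool.

        # Hedged sketch: steering a compiled MD kernel from Python via ctypes.
        # "libsim.so" and its advance() signature are hypothetical placeholders.
        import ctypes
        import numpy as np

        lib = ctypes.CDLL("./libsim.so")                           # hypothetical library
        lib.advance.argtypes = [ctypes.POINTER(ctypes.c_double),   # positions
                                ctypes.POINTER(ctypes.c_double),   # velocities
                                ctypes.c_int,                      # particle count
                                ctypes.c_double]                   # time step
        lib.advance.restype = None

        n = 1000
        pos = np.zeros(3 * n)
        vel = np.random.default_rng(1).standard_normal(3 * n) * 0.1

        def step(dt=1e-3):
            """Advance the simulation one step and return a simple diagnostic."""
            lib.advance(pos.ctypes.data_as(ctypes.POINTER(ctypes.c_double)),
                        vel.ctypes.data_as(ctypes.POINTER(ctypes.c_double)),
                        n, dt)
            return 0.5 * float(np.dot(vel, vel)) / n   # mean kinetic energy per particle

        # Once the kernel is wrapped, run control, data analysis, and plotting all
        # become ordinary Python code driving the fast compiled core.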

  3. Large-scale soil bioremediation using white-rot fungi

    SciTech Connect (OSTI)

    Holroyd, M.L.; Caunt, P.

    1995-12-31

    Some organic pollutant compounds are considered resistant to conventional bioremediation because of their structure or behavior in soil. This phenomenon, together with the increasing need to reach lower target levels in shorter time periods, has shown the need for improved or alternative biological processes. It has been known for some time that the white-rot fungi, particularly the species Phanerochaete chrysosporium, have potentially useful abilities to rapidly degrade pollutant molecules. The use of white-rot fungi at the field scale presents a number of challenges, and this paper outlines the use of a process incorporating Phanerochaete to successfully bioremediate over 6,000 m{sup 3} of chlorophenol-contaminated soil at a site in Finland. Moreover, the method developed is very cost-effective and proved capable of reaching the very low target levels within the contracted time span.

  4. Feasibility of Large-Scale Ocean CO2 Sequestration

    SciTech Connect (OSTI)

    Peter Brewer

    2008-08-31

    Scientific knowledge of natural clathrate hydrates has grown enormously over the past decade, with spectacular new findings of large exposures of complex hydrates on the sea floor, the development of new tools for examining the solid phase in situ, significant progress in modeling natural hydrate systems, and the discovery of exotic hydrates associated with sea floor venting of liquid CO{sub 2}. Major unresolved questions remain about the role of hydrates in response to climate change today, and correlations between the hydrate reservoir of Earth and the stable isotopic evidence of massive hydrate dissociation in the geologic past. The examination of hydrates as a possible energy resource is proceeding apace for the subpermafrost accumulations in the Arctic, but serious questions remain about the viability of marine hydrates as an economic resource. New and energetic explorations by nations such as India and China are quickly uncovering large hydrate findings on their continental shelves. In this report we detail research carried out in the period October 1, 2007 through September 30, 2008. The primary body of work is contained in a formal publication attached as Appendix 1 to this report. In brief we have surveyed the recent literature with respect to the natural occurrence of clathrate hydrates (with a special emphasis on methane hydrates), the tools used to investigate them and their potential as a new source of natural gas for energy production.

  5. Parallel Tensor Compression for Large-Scale Scientific Data.

    SciTech Connect (OSTI)

    Kolda, Tamara G.; Ballard, Grey; Austin, Woody Nathan

    2015-10-01

    As parallel computing trends towards the exascale, scientific data produced by high-fidelity simulations are growing increasingly massive. For instance, a simulation on a three-dimensional spatial grid with 512 points per dimension that tracks 64 variables per grid point for 128 time steps yields 8 TB of data. By viewing the data as a dense five way tensor, we can compute a Tucker decomposition to find inherent low-dimensional multilinear structure, achieving compression ratios of up to 10000 on real-world data sets with negligible loss in accuracy. So that we can operate on such massive data, we present the first-ever distributed memory parallel implementation for the Tucker decomposition, whose key computations correspond to parallel linear algebra operations, albeit with nonstandard data layouts. Our approach specifies a data distribution for tensors that avoids any tensor data redistribution, either locally or in parallel. We provide accompanying analysis of the computation and communication costs of the algorithms. To demonstrate the compression and accuracy of the method, we apply our approach to real-world data sets from combustion science simulations. We also provide detailed performance results, including parallel performance in both weak and strong scaling experiments.
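
    The compression idea — treat the simulation output as a dense tensor and keep only a small Tucker core plus per-mode factor matrices — can be sketched serially with NumPy via a truncated higher-order SVD. The distributed-memory data layouts that are the paper's actual contribution are not shown, and the tensor sizes below are toy values.

        # Serial sketch of Tucker compression by truncated HOSVD (toy sizes).
        import numpy as np

        rng = np.random.default_rng(0)
        shape = (20, 20, 20, 8, 10)            # e.g. (x, y, z, variable, time), toy sizes
        ranks = (5, 5, 5, 4, 5)                # retained multilinear ranks

        # Build a low-rank test tensor so the truncation is nearly lossless.
        X = np.zeros(shape)
        for _ in range(3):
            vecs = [rng.standard_normal(s) for s in shape]
            X += np.einsum('a,b,c,d,e->abcde', *vecs)

        def unfold(T, mode):
            return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

        # Factor matrices: leading left singular vectors of each mode-n unfolding.
        factors = []
        for mode, r in enumerate(ranks):
            U, _, _ = np.linalg.svd(unfold(X, mode), full_matrices=False)
            factors.append(U[:, :r])

        # Core tensor: contract X with each factor along its mode (compressed index
        # cycles to the last axis, so axis 0 is always the next original mode).
        G = X
        for U in factors:
            G = np.tensordot(G, U, axes=([0], [0]))

        # Reconstruct and report compression ratio and relative error.
        Xhat = G
        for U in factors:
            Xhat = np.tensordot(Xhat, U.T, axes=([0], [0]))
        stored = G.size + sum(U.size for U in factors)
        print("compression ratio:", X.size / stored)
        print("relative error:", np.linalg.norm(X - Xhat) / np.linalg.norm(X))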

  6. Single-field consistency relations of large scale structure

    SciTech Connect (OSTI)

    Creminelli, Paolo; Noreña, Jorge; Simonović, Marko; Vernizzi, Filippo E-mail: jorge.norena@icc.ub.edu E-mail: filippo.vernizzi@cea.fr

    2013-12-01

    We derive consistency relations for the late universe (CDM and ΛCDM): relations between an n-point function of the density contrast δ and an (n+1)-point function in the limit in which one of the (n+1) momenta becomes much smaller than the others. These are based on the observation that a long mode, in single-field models of inflation, reduces to a diffeomorphism since its freezing during inflation all the way until the late universe, even when the long mode is inside the horizon (but out of the sound horizon). These results are derived in Newtonian gauge, at first and second order in the small momentum q of the long mode and they are valid non-perturbatively in the short-scale δ. In the non-relativistic limit our results match with [1]. These relations are a consequence of diffeomorphism invariance; they are not satisfied in the presence of extra degrees of freedom during inflation or violation of the Equivalence Principle (extra forces) in the late universe.
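
    For orientation, the non-relativistic limit of such a relation (the form the abstract matches to [1]) expresses the fact that a long mode simply advects the short modes. The expression below is the standard leading-order squeezed limit quoted for illustration, with conventions assumed rather than taken from this paper; the prime denotes that the momentum-conserving delta function has been stripped.

        % Schematic leading-order consistency relation in the squeezed limit q -> 0
        % (standard non-relativistic form; unequal times eta_a, growth factor D).
        \lim_{q \to 0}
        \langle \delta(\vec q,\eta)\, \delta(\vec k_1,\eta_1)\cdots\delta(\vec k_n,\eta_n) \rangle'
        \;\simeq\;
        - P_\delta(q,\eta) \sum_{a=1}^{n} \frac{D(\eta_a)}{D(\eta)}\,
        \frac{\vec k_a \cdot \vec q}{q^{2}}\,
        \langle \delta(\vec k_1,\eta_1)\cdots\delta(\vec k_n,\eta_n) \rangle'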

  7. Engineering scale mixing system tests for MWTF title II design

    SciTech Connect (OSTI)

    Chang, S.C.

    1994-10-10

    Mixing tests for the Multifunction Waste Tank Facility (MWTF) were conducted in 1/25 and 1/10 scale test tanks with different slurry levels, solids concentrations, and jet mixers, and with simulated in-tank structures. The same test procedure was used as in the Title I program, documented in WHC-SD-W236A-ER-005. The test results support the scaling correlation derived previously in the Title I program. The tests also concluded that a partially filled tank requires less mixing power, and that horizontal and angled jets in combination (H/A mixer) are significantly more effective than two horizontal jet mixers (H/H mixer) when used for mixing slurry with high solids concentrations.

  8. AUTOMATED PARAMETRIC EXECUTION AND DOCUMENTATION FOR LARGE-SCALE SIMULATIONS

    SciTech Connect (OSTI)

    R. L. KELSEY; ET AL

    2001-03-01

    A language has been created to facilitate the automatic execution of simulations for purposes of enabling parametric study and test and evaluation. Its function is similar in nature to a job-control language, but more capability is provided in that the language extends the notion of literate programming to job control. Interwoven markup tags self-document and define the job control process. The language works in tandem with another language used to describe physical systems. Both languages are implemented in the Extensible Markup Language (XML). A user describes a physical system for simulation and then creates a set of instructions for automatic execution of the simulation. Support routines merge the instructions with the physical-system description, execute the simulation the specified number of times, gather the output data, and document the process and output for the user. The language enables the guided exploration of a parameter space and can be used for simulations that must determine optimal solutions to particular problems. It is generalized enough that it can be used with any simulation input files that are described using XML. XML is shown to be useful as a description language, an interchange language, and a self-documented language.
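
    The workflow described — merge run instructions into an XML physical-system description, execute the simulation once per parameter value, and record the whole study — can be sketched generically. The element names, the model.xml input, and the ./simulate executable below are hypothetical placeholders, not the markup language defined in the report.

        # Generic sketch of XML-driven parametric execution; element names, the
        # model.xml input, and the ./simulate executable are hypothetical.
        import subprocess
        import xml.etree.ElementTree as ET
        from pathlib import Path

        values = [0.1, 0.2, 0.4, 0.8]          # sweep for a hypothetical <viscosity> element

        results = []
        for i, v in enumerate(values):
            tree = ET.parse("model.xml")        # hypothetical physical-system description
            tree.getroot().find(".//viscosity").text = str(v)
            case = Path(f"case_{i:03d}.xml")
            tree.write(case)

            # Run the (hypothetical) simulation executable on the merged input file.
            run = subprocess.run(["./simulate", str(case)], capture_output=True, text=True)
            results.append({"viscosity": v, "input": case.name, "stdout": run.stdout})

        # Self-documenting record of the whole parametric study, also written as XML.
        doc = ET.Element("parametric_study")
        for r in results:
            e = ET.SubElement(doc, "run", viscosity=str(r["viscosity"]), input=r["input"])
            e.text = r["stdout"].strip()
        ET.ElementTree(doc).write("study_log.xml")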

  9. Dish/Stirling Hybrid-Receiver Sub-Scale Tests and Full-Scale Design

    SciTech Connect (OSTI)

    Andraka, Charles; Bohn, Mark S.; Corey, John; Mehos, Mark; Moreno, James; Rawlinson, Scott

    1999-05-24

    We have designed and tested a prototype dish/Stirling hybrid-receiver combustion system. The system consists of a pre-mixed natural-gas burner heating a pin-finned sodium heat pipe. The design emphasizes simplicity, low cost, and ruggedness. Our test was on a 1/6th-scale device, with a nominal firing rate of 18kWt, a power throughput of 13kWt, and a sodium vapor temperature of 750C. The air/fuel mixture was electrically preheated to 640C to simulate recuperation. The test rig was instrumented for temperatures, pressures, flow rates, overall leak rate, and exhaust emissions. The data verify our burner and heat-transfer models. Performance and post-test examinations validate our choice of materials and fabrication methods. Based on the 1/6th-scale results, we are designing a full-scale hybrid receiver. This is a fully-integrated system, including burner, pin-fin primary heat exchanger, recuperator (in place of the electrical pre-heater used in the prototype system), solar absorber, and sodium heat pipe. The major challenges of the design are to avoid pre-ignition, achieve robust heat-pipe performance, and attain long life of the burner matrix, recuperator, and flue-gas seals. We have used computational fluid dynamics extensively in designing to avoid pre-ignition and for designing the heat-pipe wick, and we have used individual component tests and results of the 1/6th-scale test to optimize for long life. In this paper, we present our design philosophy and basic details of our design. We describe the sub-scale test rig and compare test results with predictions. Finally, we outline the evolution of our full-scale design, and present its current status.

  10. LLNL Small-Scale Friction sensitivity (BAM) Test

    SciTech Connect (OSTI)

    Simpson, L.R.; Foltz, M.F.

    1996-06-01

    Small-scale safety testing of explosives, propellants, and other energetic materials is done to determine their sensitivity to various stimuli including friction, static spark, and impact. Testing is done to discover potential handling problems for either newly synthesized materials of unknown behavior or materials that have been stored for long periods of time. This report describes the existing "BAM" Small-Scale Friction Test and the methods used to determine the friction sensitivity pertinent to handling energetic materials. The accumulated data for the materials tested are not listed here; that information is in a database. Included, however, is a short list of (1) materials that had an unusual response and (2) a few "standard" materials representing the range of typical responses usually seen.

  11. 100 Area soil washing bench-scale test procedures

    SciTech Connect (OSTI)

    Freeman, H.D.; Gerber, M.A.; Mattigod, S.V.; Serne, R.J.

    1993-03-01

    This document describes methodologies and procedures for conducting soil washing treatability tests in accordance with the 100 Area Soil Washing Treatability Test Plan (DOE-RL 1992, Draft A). The objective of this treatability study is to evaluate the use of physical separation systems and chemical extraction methods as a means of separating chemically and radioactively contaminated soil fractions from uncontaminated soil fractions. These data will be primarily used for determining feasibility of the individual unit operations and defining the requirements for a system, or systems, for pilot-scale testing.

  12. Method for large-scale fabrication of atomic-scale structures on material surfaces using surface vacancies

    DOE Patents [OSTI]

    Lim, Chong Wee; Ohmori, Kenji; Petrov, Ivan Georgiev; Greene, Joseph E.

    2004-07-13

    A method for forming atomic-scale structures on a surface of a substrate on a large-scale includes creating a predetermined amount of surface vacancies on the surface of the substrate by removing an amount of atoms on the surface of the material corresponding to the predetermined amount of the surface vacancies. Once the surface vacancies have been created, atoms of a desired structure material are deposited on the surface of the substrate to enable the surface vacancies and the atoms of the structure material to interact. The interaction causes the atoms of the structure material to form the atomic-scale structures.

  13. Large-Scale First-Principles Molecular Dynamics Simulations on the BlueGene/L Platform using the Qbox Code

    Office of Scientific and Technical Information (OSTI)

    We demonstrate that the Qbox code supports unprecedented large-scale First-Principles Molecular Dynamics (FPMD) applications on the BlueGene/L ...

  14. A review of large-scale LNG spills: experiment and modeling.

    SciTech Connect (OSTI)

    Luketa-Hanlin, Anay Josephine

    2005-04-01

    The prediction of the possible hazards associated with the storage and transportation of liquefied natural gas (LNG) by ship has motivated a substantial number of experimental and analytical studies. This paper reviews the experimental and analytical work performed to date on large-scale spills of LNG. Specifically, experiments on the dispersion of LNG, as well as experiments of LNG fires from spills on water and land are reviewed. Explosion, pool boiling, and rapid phase transition (RPT) explosion studies are described and discussed, as well as models used to predict dispersion and thermal hazard distances. Although there have been significant advances in understanding the behavior of LNG spills, technical knowledge gaps to improve hazard prediction are identified. Some of these gaps can be addressed with current modeling and testing capabilities. A discussion of the state of knowledge and recommendations to further improve the understanding of the behavior of LNG spills on water is provided.

  15. Nuclear EMP simulation for large-scale urban environments. FDTD for electrically large problems.

    SciTech Connect (OSTI)

    Smith, William S.; Bull, Jeffrey S.; Wilcox, Trevor; Bos, Randall J.; Shao, Xuan-Min; Goorley, John T.; Costigan, Keeley R.

    2012-08-13

    In the case of a terrorist nuclear attack in a metropolitan area, EMP measurement could provide: (1) a prompt confirmation of the nature of the explosion (chemical or nuclear) for emergency response; and (2) characterization parameters of the device (reaction history, yield) for technical forensics. However, the urban environment could affect the fidelity of the prompt EMP measurement (as well as all other types of prompt measurement): (1) the nuclear EMP wavefront would no longer be coherent, due to incoherent production, attenuation, and propagation of gamma rays and electrons; and (2) EMP propagation from the source region outward would undergo complicated transmission, reflection, and diffraction processes. EMP simulation for an electrically-large urban environment: (1) a coupled MCNP/FDTD (finite-difference time-domain Maxwell solver) approach; and (2) FDTD tends to be limited to problems that are not 'too' large compared to the wavelengths of interest because of numerical dispersion and anisotropy. We use a higher-order low-dispersion, isotropic FDTD algorithm for EMP propagation.
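
    As a point of reference for what an FDTD Maxwell solver does at its core, the sketch below is the textbook one-dimensional Yee update (Ez/Hy, vacuum, normalized units) with a Gaussian source. It is not the higher-order, low-dispersion algorithm described here, which exists precisely because this second-order scheme disperses over electrically large domains.

        # Textbook 1D Yee-scheme FDTD update, included only as a reference point.
        import numpy as np

        nz, nt = 400, 1000
        ez = np.zeros(nz)           # electric field
        hy = np.zeros(nz - 1)       # magnetic field, staggered half a cell
        c = 0.5                     # Courant number (c0 * dt / dz), <= 1 for stability

        for t in range(nt):
            hy += c * np.diff(ez)                      # update H from the curl of E
            ez[1:-1] += c * np.diff(hy)                # update E from the curl of H
            ez[50] += np.exp(-((t - 60) / 15.0) ** 2)  # soft Gaussian source at cell 50

        print("peak |Ez| after", nt, "steps:", np.abs(ez).max())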

  16. FEMP Helps Federal Facilities Develop Large-Scale Renewable Energy Projects

    Broader source: Energy.gov [DOE]

    FEMP developed a guide to help federal agencies, as well as the developers and financiers that work with them, to successfully install large-scale renewable energy projects at federal facilities.

  17. A Semi-Analytical Solution for Large-Scale Injection-Induced...

    Office of Scientific and Technical Information (OSTI)

    Journal Article: A Semi-Analytical Solution for Large-Scale Injection-Induced Pressure Perturbation and Leakage in a Laterally Bounded Aquifer-Aquitard System.

  18. Assembly and installation of the large coil test facility test stand

    SciTech Connect (OSTI)

    Queen, C.C. Jr.

    1983-01-01

    The Large Coil Test Facility (LCTF) was built to test six tokamak-type superconducting coils, with three to be designed and built by US industrial teams and three provided by Japan, Switzerland, and Euratom under an international agreement. The facility is designed to test these coils in an environment which simulates that of a tokamak. The heart of this facility is the test stand, which is made up of four major assemblies: the Gravity Base Assembly, the Bucking Post Assembly, the Torque Ring Assembly, and the Pulse Coil Assembly. This paper provides a detailed review of the assembly and installation of the test stand components and the handling and installation of the first coil into the test stand.

  19. Transport Induced by Large Scale Convective Structures in a Dipole-Confined Plasma

    SciTech Connect (OSTI)

    Grierson, B. A.; Mauel, M. E.; Worstell, M. W.; Klassen, M.

    2010-11-12

    Convective structures characterized by ExB motion are observed in a dipole-confined plasma. Particle transport rates are calculated from density dynamics obtained from multipoint measurements and the reconstructed electrostatic potential. The calculated transport rates determined from the large-scale dynamics and local probe measurements agree in magnitude, show intermittency, and indicate that the particle transport is dominated by large-scale convective structures.
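
    The transport estimate described — a particle flux obtained from correlated density and electrostatic-potential fluctuations, with the ExB drift supplying the velocity — can be written down compactly. The synthetic signals, probe spacing, and field strength below are hypothetical stand-ins for the multipoint measurements.

        # Hedged sketch: fluctuation-driven particle flux Gamma = <dn * v_ExB>, with
        # v_ExB = E_theta / B estimated from two floating-potential probes. The
        # synthetic signals, probe separation d, and field B are hypothetical.
        import numpy as np

        rng = np.random.default_rng(0)
        t = np.linspace(0.0, 1e-3, 10_000)               # 1 ms record
        B, d = 0.05, 5e-3                                 # field (T), probe spacing (m)

        # Synthetic correlated fluctuations (a coherent convective structure + noise).
        phase = 2 * np.pi * 4e3 * t
        dn   = 1e16 * np.sin(phase) + 1e15 * rng.standard_normal(t.size)          # m^-3
        phi1 = 2.0 * np.sin(phase) + 0.2 * rng.standard_normal(t.size)            # V
        phi2 = 2.0 * np.sin(phase - 0.8) + 0.2 * rng.standard_normal(t.size)      # V

        e_theta = (phi1 - phi2) / d            # poloidal electric field estimate (V/m)
        v_exb = e_theta / B                    # ExB drift velocity (m/s)
        gamma = np.mean((dn - dn.mean()) * (v_exb - v_exb.mean()))   # particle flux

        print(f"time-averaged fluctuation-driven flux: {gamma:.3e} m^-2 s^-1")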

  20. Overcoming the Barrier to Achieving Large-Scale Production - A Case Study

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    From concept to large-scale production, one manufacturer tells the story and identifies the primary challenges and how a small amount of government support could be most helpful. Scott Burroughs, Semprius, Inc., August 31, 2011. Semprius overview/background: a leading developer of commercial & utility solar ...

  1. A First Step towards Large-Scale Plants to Plastics Engineering

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    November 9, 2010 -- Brookhaven National Laboratory researches making plastics from plants. Niketa Kumar, Public Affairs Specialist, Office of Public Affairs. By optimizing the accumulation of particular fatty acids, a Brookhaven team of scientists is developing a method suitable for ...

  2. Subject Heading: Cosmic Background Radiation - Cosmology LARGE-ANGULAR-SCALE ANISOTROPY IN THE COSMIC BACKGROUND RADIATION

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    M. V. Gorenstein and G. F. Smoot, Space Sciences Laboratory and Lawrence Berkeley Laboratory, University of California, Berkeley, California 94720. Received: May 25, 1980. ABSTRACT: We report the results of an extended series of airborne measurements of large-angular-scale anisotropy in the 3 K cosmic background radiation. Observations were carried out with a dual-antenna ...

  3. Partition-of-unity finite-element method for large scale quantum molecular dynamics on massively parallel computational platforms

    Office of Scientific and Technical Information (OSTI)

    Over the course of the past two decades, quantum mechanical calculations have ...

  4. Overcoming the Barrier to Achieving Large-Scale Production - A Case Study

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    This presentation summarizes the information given by Semprius during the Photovoltaic Validation and Bankability Workshop in San Jose, California, on August 31, 2011.

  5. Creating Large Scale Database Servers (Technical Report) | SciTech Connect

    Office of Scientific and Technical Information (OSTI)

    The BaBar experiment at the Stanford Linear Accelerator Center (SLAC) is designed to perform a high precision investigation of the decays of the B-meson produced from electron-positron interactions. The experiment, started in May 1999, will generate approximately 300TB/year of data for 10 years. All of the data will reside in Objectivity databases accessible via the Advanced ...

  6. 'Sidecars' Pave the Way for Concurrent Analytics of Large-Scale Simulations

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Halo Finder Enhancement Puts Supercomputer Users in the Driver's Seat. November 2, 2015. In this Reeber halo finder simulation, the blueish haze is a volume rendering of the density field that Nyx calculates every time step. The light blue and ...

  7. Energy Department Awards $66.7 Million for Large-Scale Carbon Sequestration Project

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    December 18, 2007 -- Regional Partner to Demonstrate Safe and Permanent Storage of One Million Tons of CO2 at Illinois Site. WASHINGTON, DC -- Following closely on the heels of three recent awards through the Department of Energy's (DOE) Regional Carbon Sequestration Partnership Program, DOE today awarded ...

  8. The one-loop matter bispectrum in the Effective Field Theory of Large Scale Structures

    Office of Scientific and Technical Information (OSTI)

    Authors: Angulo, Raul E.; Foreman, Simon; Schmittfull, Marcel; Senatore, Leonardo. Publication Date: 2015-10-01. OSTI Identifier: 1244649. DOE Contract Number: AC02-76SF00515. Resource Type: Journal Article. Resource Relation: Journal ...

  9. Towards a Large-Scale Recording System: Demonstration of Polymer-Based Penetrating Array for Chronic Neural Recording

    Office of Scientific and Technical Information (OSTI)

    Authors: Tooker, A; Liu, D; Anderson, E B; Felix, S; Shah, K G; Lee, K Y; Chung, J E; Pannu, S; Frank, L; Tolosa, V. Publication ...

  10. Large scale validation of the M5L lung CAD on heterogeneous CT datasets

    SciTech Connect (OSTI)

    Lopez Torres, E. E-mail: cerello@to.infn.it; Fiorina, E.; Pennazio, F.; Peroni, C.; Saletta, M.; Cerello, P. E-mail: cerello@to.infn.it; Camarlinghi, N.; Fantacci, M. E.

    2015-04-15

    Purpose: M5L, a fully automated computer-aided detection (CAD) system for the detection and segmentation of lung nodules in thoracic computed tomography (CT), is presented and validated on several image datasets. Methods: M5L is the combination of two independent subsystems, based on the Channeler Ant Model as a segmentation tool [lung channeler ant model (lungCAM)] and on the voxel-based neural approach. The lungCAM was upgraded with a scan equalization module and a new procedure to recover the nodules connected to other lung structures; its classification module, which makes use of a feed-forward neural network, is based on a small number of features (13), so as to minimize the risk of poor generalization given the large difference between the sizes of the training and testing datasets, which contain 94 and 1019 CTs, respectively. The lungCAM (standalone) and M5L (combined) performance was extensively tested on 1043 CT scans from three independent datasets, including a detailed analysis of the full Lung Image Database Consortium/Image Database Resource Initiative database, which is not yet found in the literature. Results: The lungCAM and M5L performance is consistent across the databases, with a sensitivity of about 70% and 80%, respectively, at eight false positive findings per scan, despite the variable annotation criteria and acquisition and reconstruction conditions. A reduced sensitivity is found for subtle nodules and ground glass opacity (GGO) structures. A comparison with other CAD systems is also presented. Conclusions: The M5L performance on a large and heterogeneous dataset is stable and satisfactory, although the development of a dedicated module for GGO detection could further improve it, as well as an iterative optimization of the training procedure. The main aim of the present study was accomplished: M5L results do not deteriorate when increasing the dataset size, making it a candidate for supporting radiologists on large scale screenings and clinical programs.

  11. Variability of Load and Net Load in Case of Large Scale Distributed Wind Power

    SciTech Connect (OSTI)

    Holttinen, H.; Kiviluoma, J.; Estanqueiro, A.; Gomez-Lazaro, E.; Rawn, B.; Dobschinski, J.; Meibom, P.; Lannoye, E.; Aigner, T.; Wan, Y. H.; Milligan, M.

    2011-01-01

    Large scale wind power production and its variability is one of the major inputs to wind integration studies. This paper analyses measured data from large scale wind power production. Comparisons of variability are made across several variables: time scale (10-60 minute ramp rates), number of wind farms, and simulated vs. modeled data. Ramp rates for Wind power production, Load (total system load) and Net load (load minus wind power production) demonstrate how wind power increases the net load variability. Wind power will also change the timing of daily ramps.
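
    The quantities compared in the paper — ramp rates of load, wind, and net load over 10-60 minute windows, with net load defined as load minus wind production — are straightforward to compute from time series. The synthetic profiles below are placeholders for the measured data.

        # Sketch of the ramp-rate comparison: net load = load - wind, and ramps are
        # differences over a chosen time window. The synthetic profiles are placeholders.
        import numpy as np

        rng = np.random.default_rng(0)
        minutes = np.arange(0, 7 * 24 * 60, 10)            # one week at 10-min resolution
        load = (3000 + 800 * np.sin(2 * np.pi * minutes / 1440)
                + 50 * rng.standard_normal(minutes.size))                      # MW
        wind = np.clip(600 + 15 * np.cumsum(rng.standard_normal(minutes.size)),
                       0, 1500)                                                 # MW
        net_load = load - wind

        def ramp_stats(series, steps):
            """Ramp rates over a window of `steps` samples (here steps * 10 minutes)."""
            ramps = series[steps:] - series[:-steps]
            return ramps.std(), np.abs(ramps).max()

        for name, series in [("load", load), ("wind", wind), ("net load", net_load)]:
            sd, mx = ramp_stats(series, steps=6)            # 60-minute ramps
            print(f"{name:>8}: 60-min ramp std = {sd:7.1f} MW, max |ramp| = {mx:7.1f} MW")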

  12. Scaled Tests and Modeling of Effluent Stack Sampling Location Mixing

    SciTech Connect (OSTI)

    Recknagle, Kurtis P.; Yokuda, Satoru T.; Ballinger, Marcel Y.; Barnett, J. M.

    2009-02-01

    The Pacific Northwest National Laboratory researchers used a computational fluid dynamics (CFD) computer code to evaluate the mixing at a sampling system location of a research and development facility. The facility requires continuous sampling for radioactive air emissions. Researchers sought to determine whether the location would meet the criteria for uniform air velocity and contaminant concentration as prescribed in the American National Standards Institute (ANSI) standard, Sampling and Monitoring Releases of Airborne Radioactive Substances from the Stacks and Ducts of Nuclear Facilities. Standard ANSI/HPS N13.1-1999 requires that the sampling location be well-mixed and stipulates specific tests (e.g., velocity, gas, and aerosol uniformity and cyclonic flow angle) to verify the extent of mixing. The exhaust system for the Radiochemical Processing Laboratory was modeled with a CFD code to better understand the flow and contaminant mixing and to predict mixing test results. The CFD results were compared to actual measurements made at a scale-model stack and to the limited data set for the full-scale facility stack. Results indicated that the CFD code provides reasonably conservative predictions for velocity, gas, and aerosol uniformity. Cyclonic flow predicted by the code is less than that measured by the required methods. In expanding from small to full scale, the CFD predictions for full-scale measurements show similar trends as in the scale model and no unusual effects. This work indicates that a CFD code can be a cost-effective aid in the design or retrofit of a facility's stack sampling location that will be required to meet Standard ANSI/HPS N13.1-1999.
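
    The uniformity checks referenced here boil down to a coefficient of variation (COV) computed over traverse points at the sampling plane. A minimal version is sketched below with hypothetical traverse data and an assumed acceptance threshold; the standard's actual numerical criteria should be taken from ANSI/HPS N13.1-1999 itself.

        # Minimal COV (coefficient of variation) check over traverse-point measurements
        # at a stack sampling plane. Traverse values and the 20% threshold are
        # illustrative assumptions; consult ANSI/HPS N13.1-1999 for the real criteria.
        import numpy as np

        velocity = np.array([11.8, 12.1, 12.6, 12.3, 11.9, 12.4, 12.2, 12.0])  # m/s, hypothetical
        tracer   = np.array([101., 98., 103., 99., 102., 100., 97., 101.])     # ppm, hypothetical

        def cov_percent(x):
            return 100.0 * x.std(ddof=1) / x.mean()

        for name, x in [("velocity", velocity), ("tracer gas", tracer)]:
            cov = cov_percent(x)
            verdict = "OK" if cov <= 20.0 else "fails"
            print(f"{name:>10}: COV = {cov:4.1f}%  ->  {verdict} (assumed 20% limit)")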

  13. Large-scale simulation of methane dissociation along the West Spitzbergen Margin

    SciTech Connect (OSTI)

    Reagan, M.T.; Moridis, G.J.

    2009-07-15

    Vast quantities of methane are trapped in oceanic hydrate deposits, and there is concern that a rise in the ocean temperature will induce dissociation of these hydrate accumulations, potentially releasing large amounts of methane into the atmosphere. The recent discovery of active methane gas venting along the landward limit of the gas hydrate stability zone (GHSZ) on the shallow continental slope west of Spitsbergen could be an indication of this process, if the source of the methane can be confidently attributed to dissociating hydrates. In the first large-scale simulation study of its kind, we simulate shallow hydrate dissociation in conditions representative of the West Spitsbergen margin to test the hypothesis that the observed gas release originated from hydrates. The simulation results are consistent with this hypothesis, and are in remarkable agreement with the recently published observations. They show that shallow, low-saturation hydrate deposits, when subjected to temperature increases at the seafloor, can release significant quantities of methane, and that the releases will be localized near the landward limit of the top of the GHSZ. These results indicate the possibility that hydrate dissociation and methane release may be both a consequence and a cause of climate change.

  14. Simultaneous effect of modified gravity and primordial non-Gaussianity in large scale structure observations

    SciTech Connect (OSTI)

    Mirzatuny, Nareg; Khosravi, Shahram; Baghram, Shant; Moshafi, Hossein E-mail: khosravi@mail.ipm.ir E-mail: hosseinmoshafi@iasbs.ac.ir

    2014-01-01

    In this work we study the simultaneous effect of primordial non-Gaussianity and the modification of gravity in the f(R) framework on large scale structure observations. We show that non-Gaussianity and modified gravity introduce scale-dependent bias and growth rate functions. The deviation from ΛCDM in the case of primordial non-Gaussian models is on large scales, while the growth rate deviates from ΛCDM on small scales for modified gravity theories. We show that the redshift space distortion can be used to distinguish positive and negative f{sub NL} in a standard background, while in f(R) theories they are not easily distinguishable. The galaxy power spectrum is generally enhanced in the presence of non-Gaussianity and modified gravity. We also obtain the scale dependence of this enhancement. Finally we define galaxy growth rate and galaxy growth rate bias as new observational parameters to constrain cosmology.
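
    For context, the large-scale, scale-dependent bias correction usually attributed to local-type primordial non-Gaussianity takes the standard (Dalal et al.-type) form below. It is quoted here as background under the usual conventions, not as this paper's result, and the f(R) modification of the growth rate is a separate effect.

        % Standard local-type non-Gaussian halo-bias correction (quoted for context):
        % delta_c ~ 1.686, T(k) the matter transfer function, D(z) the growth factor.
        \Delta b(k, z) \;=\; 3 f_{\rm NL}\,(b_1 - 1)\,\delta_c\,
        \frac{\Omega_m H_0^2}{c^2\, k^2\, T(k)\, D(z)}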

  15. Measuring and tuning energy efficiency on large scale high performance computing platforms.

    SciTech Connect (OSTI)

    Laros, James H., III

    2011-08-01

    Recognition of the importance of power in the field of High Performance Computing, whether it be as an obstacle, expense or design consideration, has never been greater and more pervasive. While research has been conducted on many related aspects, there is a stark absence of work focused on large scale High Performance Computing. Part of the reason is the lack of measurement capability currently available on small or large platforms. Typically, research is conducted using coarse methods of measurement such as inserting a power meter between the power source and the platform, or fine grained measurements using custom instrumented boards (with obvious limitations in scale). To collect the measurements necessary to analyze real scientific computing applications at large scale, an in-situ measurement capability must exist on a large scale capability class platform. In response to this challenge, we exploit the unique power measurement capabilities of the Cray XT architecture to gain an understanding of power use and the effects of tuning. We apply these capabilities at the operating system level by deterministically halting cores when idle. At the application level, we gain an understanding of the power requirements of a range of important DOE/NNSA production scientific computing applications running at large scale (thousands of nodes), while simultaneously collecting current and voltage measurements on the hosting nodes. We examine the effects of both CPU and network bandwidth tuning and demonstrate energy savings opportunities of up to 39% with little or no impact on run-time performance. Capturing scale effects in our experimental results was key. Our results provide strong evidence that next generation large-scale platforms should not only approach CPU frequency scaling differently, but could also benefit from the capability to tune other platform components, such as the network, to achieve energy efficient performance.
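
    Turning sampled current and voltage into per-job energy, and comparing a tuned run against a baseline, is simple arithmetic. The sample arrays and run times below are hypothetical, and the integration is a plain trapezoidal sum rather than the Cray XT measurement pathway described in the report.

        # Sketch: per-node energy from sampled voltage/current, and the energy savings
        # of a tuned run vs. a baseline. Samples and run times are hypothetical.
        import numpy as np

        def energy_joules(t, volts, amps):
            """Trapezoidal integration of instantaneous power over the run."""
            return np.trapz(volts * amps, t)

        t_base  = np.linspace(0, 600, 601)        # 10-minute baseline run, 1 Hz samples
        t_tuned = np.linspace(0, 615, 616)        # slightly longer tuned run
        e_base  = energy_joules(t_base,  np.full_like(t_base, 12.0),  np.full_like(t_base, 18.0))
        e_tuned = energy_joules(t_tuned, np.full_like(t_tuned, 12.0), np.full_like(t_tuned, 11.5))

        savings = 100.0 * (e_base - e_tuned) / e_base
        print(f"baseline {e_base/1e3:.1f} kJ, tuned {e_tuned/1e3:.1f} kJ, savings {savings:.1f}%")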

  16. Large-Scale Renewable Energy Guide: Developing Renewable Energy Projects Larger Than 10 MWs at Federal Facilities

    Broader source: Energy.gov [DOE]

    The Large-Scale Renewable Energy Guide: Developing Renewable Energy Projects Larger Than 10 MWs at Federal Facilities provides best practices and other helpful guidance for federal agencies developing large-scale renewable energy projects.

  17. Scaled Testing of Hydrogen Gas Getters for Transuranic Waste

    SciTech Connect (OSTI)

    Kaszuba, J.; Mroz, E.; Haga, M.; Hollis, W. K. [Los Alamos National Laboratory, P.O. Box 1663, Los Alamos, New Mexico, 87545 (United States); Peterson, E.; Stone, M.; Orme, C.; Luther, T.; Benson, M. [Idaho National Laboratory, P.O. Box 1625, Idaho Falls, ID 83415-2208 (United States)

    2006-07-01

    Alpha radiolysis of hydrogenous waste and packaging materials generates hydrogen gas in radioactive storage and shipment containers. Hydrogen forms a flammable mixture with air over a wide range of concentrations (5% to 75%), and very low energy is needed to ignite hydrogen-air mixtures. For these reasons, the concentration of hydrogen in waste shipment containers (Transuranic Package Transporter-II or TRUPACT-II containers) needs to remain below the lower explosion limit of hydrogen in air (5 vol%). Accident scenarios and the resulting safety analysis require that this limit not be exceeded. The use of 'hydrogen getters' is being investigated as a way to prevent the build up of hydrogen in TRUPACT-II containers. Preferred getters are solid materials that scavenge hydrogen from the gas phase and chemically and irreversibly bind it into the solid state. In this study, two getter systems are evaluated: a) 1,4-bis (phenylethynyl)benzene or DEB, characterized by the presence of carbon-carbon triple bonds; and b) a proprietary polymer hydrogen getter, VEI or TruGetter, characterized by carbon-carbon double bonds. Carbon in both getter types may, in the presence of suitable precious metal catalysts such as palladium, irreversibly react with and bind hydrogen. With oxygen present, the precious metal may also eliminate hydrogen by catalyzing the formation of water. This reaction is called catalytic recombination. DEB and VEI performed satisfactorily in lab scale tests using small test volumes (ml-scale), high hydrogen generation rates, and short time spans of hours to days. The purpose of this study is to evaluate whether DEB and VEI perform satisfactorily in actual drum-scale tests with realistic hydrogen generation rates and time frames. The two getter systems were evaluated in test vessels comprised of a Gas Generation Test Program-style bell-jar and a drum equipped with a composite drum filter. The vessels were scaled to replicate the ratio between void space in the inner containment vessel of a TRUPACT-II container and volume of a payload of seven 55-gallon drums. The tests were conducted in an atmosphere of air for 60 days at ambient temperature (15 to 27 deg. C) and a scaled hydrogen generation rate of 2.60 E-07 moles hydrogen per second (0.35 cc/min). Hydrogen was successfully 'gettered' by both systems. Hydrogen concentrations remained below 5 vol% (in air) for the duration of the tests. However, catalytic reaction of hydrogen with carbon triple or double bonds in the getter materials did not take place. Instead, catalytic recombination was the predominant mechanism in both getters as evidenced by 1) consumption of oxygen in the bell-jars; 2) production of free water in the bell-jars; and 3) absence of chemical changes in both getters as shown by NMR spectra. (authors)

  18. Recent Accomplishments in the Irradiation Testing of Engineering-Scale Monolithic Fuel Specimens

    SciTech Connect (OSTI)

    N.E. Woolstenhulme; D.M. Wachs; M.K. Meyer; H.W. Glunz; R.B. Nielson

    2012-10-01

    The US fuel development team is focused on qualification and demonstration of the uranium-molybdenum monolithic fuel including irradiation testing of engineering-scale specimens. The team has recently accomplished the successful irradiation of the first monolithic multi-plate fuel element assembly within the AFIP-7 campaign. The AFIP-6 MKII campaign, while somewhat truncated by hardware challenges, exhibited successful irradiation of a large-scale monolithic specimen under extreme irradiation conditions. The channel gap and ultrasonic data are presented for AFIP-7 and AFIP-6 MKII, respectively. Finally, design concepts are summarized for future irradiations such as the base fuel demonstration and design demonstration experiment campaigns.

  19. Collaborative Visualization for Large-Scale Accelerator Electromagnetic Modeling (Final Report)

    SciTech Connect (OSTI)

    William J. Schroeder

    2011-11-13

    This report contains the comprehensive summary of the work performed on the SBIR Phase II, Collaborative Visualization for Large-Scale Accelerator Electromagnetic Modeling, at Kitware Inc. in collaboration with Stanford Linear Accelerator Center (SLAC). The goal of the work was to develop collaborative visualization tools for large-scale data as illustrated in the figure below. The solutions we proposed address the typical problems faced by geographically- and organizationally-separated research and engineering teams, who produce large data (either through simulation or experimental measurement) and wish to work together to analyze and understand their data. Because the data is large, we expect that it cannot be easily transported to each team member's work site, and that the visualization server must reside near the data. Further, we also expect that each work site has heterogeneous resources: some with large computing clients, tiled (or large) displays and high bandwidth; other sites as simple as a team member on a laptop computer. Our solution is based on the open-source, widely used ParaView large-data visualization application. We extended this tool to support multiple collaborative clients who may locally visualize data, and then periodically rejoin and synchronize with the group to discuss their findings. Options for managing session control, adding annotation, and defining the visualization pipeline, among others, were incorporated. We also developed and deployed a Web visualization framework based on ParaView that enables the Web browser to act as a participating client in a collaborative session. The ParaView Web Visualization framework leverages various Web technologies including WebGL, JavaScript, Java and Flash to enable interactive 3D visualization over the web using ParaView as the visualization server. We steered the development of this technology by teaming with the SLAC National Accelerator Laboratory. SLAC has a computationally-intensive problem important to the nation's scientific progress as described shortly. Further, SLAC researchers routinely generate massive amounts of data, and frequently collaborate with other researchers located around the world. Thus SLAC is an ideal teammate through which to develop, test and deploy this technology. The nature of the datasets generated by simulations performed at SLAC presented unique visualization challenges, especially when dealing with higher-order elements, that were addressed during this Phase II. During this Phase II, we have developed a strong platform for collaborative visualization based on ParaView. We have developed and deployed a ParaView Web Visualization framework that can be used for effective collaboration over the Web. Collaborating and visualizing over the Web presents the community with unique opportunities for sharing and accessing visualization and HPC resources that hitherto were either inaccessible or difficult to use. The technology we developed here will alleviate both these issues as it becomes widely deployed and adopted.

  20. Large-Scale Urban Decontamination; Developments, Historical Examples and Lessons Learned

    SciTech Connect (OSTI)

    Rick Demmer

    2007-02-01

    Recent terrorist threats and actual events have led to renewed interest in the technical field of large-scale, urban-environment decontamination. One driving force for this interest is the real potential for the cleanup and removal of radioactive dispersal device (RDD, or "dirty bomb") residues. In response, the U.S. Government has spent many millions of dollars investigating RDD contamination and novel decontamination methodologies. Interest in chemical and biological (CB) cleanup has also intensified with the threat of terrorist actions like the anthrax attack at the Hart Senate Office Building and with catastrophic natural events such as Hurricane Katrina. The efficiency of cleanup response will be improved by these new developments and by a better understanding of the old, reliable methodologies. Perhaps the most interesting area of investigation for large-area decontamination is that of the RDD. While an RDD is primarily an economic and psychological weapon, the need to clean up and return valuable or culturally significant resources to the public is nonetheless valid. Several private companies, universities, and National Laboratories are currently developing novel RDD cleanup technologies. Because of their longstanding association with radioactive facilities, the U.S. Department of Energy National Laboratories are at the forefront in developing and testing new RDD decontamination methods. However, such cleanup technologies are likely to be fairly task-specific, since the many different contamination mechanisms, substrates, and environmental conditions make actual application more complicated. Some major efforts have also been made to model potential contamination, to evaluate both old and new decontamination techniques, and to assess their readiness for use. Non-radioactive CB threats each have unique decontamination challenges, and recent events have provided some examples. The U.S. Environmental Protection Agency (EPA), as lead agency for these emergency cleanup responses, has a sound approach for decontamination decision-making that has been applied several times. The anthrax contamination at the U.S. Hart Senate Office Building and numerous U.S. Post Office facilities are examples of employing novel technical responses. Decontamination of the Hart Office Building required development of a new approach for high-level decontamination of biological contamination, as well as techniques for evaluating the technology's effectiveness. The World Trade Center destruction also demonstrated the need for, and successful implementation of, appropriate cleanup methodologies. There are a number of significant lessons to be gained from a look at previous large-scale cleanup projects. Too often we are quick to apply a costly "package and dispose" method when sound technological cleaning approaches are available. Understanding historical perspectives, advance planning, and constant technology improvement are essential to successful decontamination.

  1. Copy of Using Emulation and Simulation to Understand the Large-Scale Behavior of the Internet.

    SciTech Connect (OSTI)

    Adalsteinsson, Helgi; Armstrong, Robert C.; Chiang, Ken; Gentile, Ann C.; Lloyd, Levi; Minnich, Ronald G.; Vanderveen, Keith; Van Randwyk, Jamie A; Rudish, Don W.

    2008-10-01

    We report on the work done in the late-start LDRD "Using Emulation and Simulation to Understand the Large-Scale Behavior of the Internet." We describe the creation of a research platform that emulates many thousands of machines, to be used for the study of large-scale Internet behavior. We describe a proof-of-concept simple attack we performed in this environment. We describe the successful capture of a Storm bot and, from the study of the bot and further literature search, establish large-scale aspects we seek to understand via emulation of Storm on our research platform in possible follow-on work. Finally, we discuss possible future work.

  2. What Will the Neighbors Think? Building Large-Scale Science Projects Around the World

    ScienceCinema (OSTI)

    Jones, Craig; Mrotzek, Christian; Toge, Nobu; Sarno, Doug

    2010-01-08

    Public participation is an essential ingredient for turning the International Linear Collider into a reality. Wherever the proposed particle accelerator is sited in the world, its neighbors -- in any country -- will have something to say about hosting a 35-kilometer-long collider in their backyards. When it comes to building large-scale physics projects, almost every laboratory has a story to tell. Three case studies from Japan, Germany and the US will be presented to examine how community relations are handled in different parts of the world. How do particle physics laboratories interact with their local communities? How do neighbors react to building large-scale projects in each region? How can the lessons learned from past experiences help in building the next big project? These and other questions will be discussed to engage the audience in an active dialogue about how a large-scale project like the ILC can be a good neighbor.

  3. Design Report for the Scale Air-Cooled RCCS Tests in the Natural convection Shutdown heat removal Test Facility (NSTF)

    SciTech Connect (OSTI)

    Lisowski, D. D.; Farmer, M. T.; Lomperski, S.; Kilsdonk, D. J.; Bremer, N.; Aeschlimann, R. W.

    2014-06-01

    The Natural convection Shutdown heat removal Test Facility (NSTF) is a large-scale thermal-hydraulics test facility that has been built at Argonne National Laboratory (ANL). The facility was constructed in order to carry out highly instrumented experiments that can be used to validate the performance of passive safety systems for advanced reactor designs. The facility has principally been designed for testing of Reactor Cavity Cooling System (RCCS) concepts that rely on natural convection cooling for either air- or water-based systems. Standing 25 m in height, the facility is able to supply up to 220 kW at 21 kW/m{sup 2} to accurately simulate the heat fluxes at the walls of a reactor pressure vessel. A suite of nearly 400 data acquisition channels, including a sophisticated fiber-optic system for high-density temperature measurements, guides test operations and provides data to support scaling analysis and modeling efforts. Measurements of system mass flow rate, air and surface temperatures, heat flux, humidity, and pressure differentials, among others, are part of this total generated data set. The following report provides an introduction to the top-level objectives of the program related to passively safe decay heat removal, and a detailed description of the engineering specifications, design features, and dimensions of the test facility at Argonne. Specifications of the sensors and their placement on the test facility will be provided, along with a complete channel listing of the data acquisition system.
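
    A back-of-envelope check on the numbers quoted above (220 kW, 21 kW/m{sup 2}) gives the implied heated area, and a simple energy balance gives the order of magnitude of the natural-circulation air flow. The air temperature rise below is an assumed illustrative value, not a facility specification.

      # Back-of-envelope check on the NSTF numbers quoted above. The heated
      # area follows directly from the abstract; the riser air temperature
      # rise is an assumed illustrative value, not a facility specification.
      q_total = 220e3        # total heater power, W (from abstract)
      q_flux  = 21e3         # peak wall heat flux, W/m^2 (from abstract)
      area = q_total / q_flux
      print(f"implied heated area: {area:.1f} m^2")   # ~10.5 m^2

      cp_air = 1007.0        # specific heat of air, J/(kg K)
      dT     = 60.0          # assumed riser air temperature rise, K
      m_dot = q_total / (cp_air * dT)
      print(f"air flow removing full power at {dT:.0f} K rise: {m_dot:.1f} kg/s")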

  4. Evaluation of Flygt Mixers for Application in Savannah River Site Tank 19 Test Results from Phase A: Small-Scale Testing at ITT Flygt

    SciTech Connect (OSTI)

    Powell, M.R.; Farmer, J.R.; Gladki, H.; Hatchell, B.K.; Poirier, M.R.; Rodwell, P.O.

    1999-03-30

    The key findings of the small-scale Flygt mixer tests are provided in this section. Some of these findings may not apply in larger tanks, so these data must be applied carefully when making predictions for large tanks. Flygt mixer testing in larger tanks at PNNL and in a full-scale tank at the SRS will be used to determine the applicability of these findings. The principal objectives of the small-scale Flygt mixer tests were to measure the critical fluid velocities required for sludge mobilization and particle suspension, to evaluate the applicability of the Gladki (1997) method for predicting required mixer thrust, and to provide small-scale test results for comparison with larger-scale tests to observe the effects of scale-up. The tank profile and mixer orientation (i.e., stationary, horizontal mixers) were in the same configuration as the prototype system; however, available resources did not allow geometric, kinematic, and dynamic similitude to be achieved. The results of these tests will be used in conjunction with the results from similar tests using larger tanks and mixers (tank diameters of 1.8 and 5.7 m [Powell et al. 1999]) to evaluate the effects of scaling and to aid in developing a methodology for predicting performance at full scale.
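
    To give a feel for the critical-velocity idea mentioned above, the sketch below uses the textbook centerline decay of a round turbulent free jet, u(x) ~ K u0 d / x, to estimate how far a mixer jet stays above a critical mobilization velocity. This is a generic correlation, not the Gladki (1997) thrust method the report evaluates, and all numbers are assumed for illustration.

      # Generic round-jet centerline velocity decay, u(x) ~ K * u0 * d / x,
      # used only to illustrate a critical-velocity reach estimate. This is
      # a textbook correlation, not the Gladki (1997) method evaluated in
      # the report; all numbers are assumed for illustration.
      K = 6.2          # centerline decay constant for a round turbulent free jet
      u0 = 5.0         # assumed jet exit velocity, m/s
      d = 0.2          # assumed mixer outlet diameter, m
      u_crit = 0.5     # assumed critical velocity for sludge mobilization, m/s

      x_reach = K * u0 * d / u_crit   # distance where centerline velocity = u_crit
      print(f"jet exceeds {u_crit} m/s out to roughly {x_reach:.1f} m")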

  5. Evaluation of LLTR Series II tests A-1A and A-1B test results. [Large Leak Test Rig

    SciTech Connect (OSTI)

    Shoopak, B F; Amos, J C; Norvell, T J

    1980-03-01

    The standard methodology, with minor modifications, provides conservative yet realistic predictions of leak-site and other sodium system pressures in the LLTR Series II vessel and piping. The good agreement between predicted and measured pressures indicates that the TRANSWRAP/RELAP modeling developed from the Series I tests is applicable to larger-scale units prototypical of the Clinch River steam generator design. Calculated sodium system pressures are sensitive to several modeling parameters, including rupture disc modeling, acoustic velocity in the test vessel, and flow rate from the rupture tube. The acoustic velocity that produced the best agreement with leak-site pressures was calculated based on the shroud diameter and shroud wall thickness. The corresponding rupture tube discharge coefficient was that of the standard design methodology developed from Series I testing. As found in Series I testing, the Series II data suggest that the leading edge of the flow in the relief line is two-phase for a single, double-ended guillotine tube rupture. The steam generator shroud acts as if it is relatively transparent to the transmission of radial pressures to the vessel wall. Slightly lower sodium system maximum pressures measured during Test A-1b compared to Test A-1a are attributed to premature failure (failure at a lower pressure) of the rupture disc in contact with the sodium for Test A-1b. The delay in failure of the second disc in Test A-1b, which was successfully modeled with TRANSWRAP, is attributed to the limited energy in the nitrogen injection.
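
    The dependence of the best-fit acoustic velocity on shroud diameter and wall thickness noted above is consistent with the classic Korteweg-type fluid-structure correction, in which wall compliance lowers the effective sound speed in a liquid-filled elastic shell. The sketch below illustrates that form; all property values are assumed placeholders, not values from the test report.

      import math

      # Korteweg-type correction: the effective acoustic speed in a
      # liquid-filled elastic pipe drops below the free-fluid value as the
      # wall becomes more compliant. Property values are placeholders.
      K_na = 4.5e9     # assumed bulk modulus of hot liquid sodium, Pa
      rho  = 860.0     # assumed sodium density, kg/m^3
      E    = 1.9e11    # assumed steel elastic modulus, Pa
      D    = 0.30      # assumed shroud diameter, m
      t    = 0.010     # assumed shroud wall thickness, m

      c_free = math.sqrt(K_na / rho)                    # unconfined acoustic speed
      c_eff  = c_free / math.sqrt(1.0 + K_na * D / (E * t))
      print(f"free-fluid speed {c_free:.0f} m/s -> in-shroud speed {c_eff:.0f} m/s")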

  6. Large scale magnetic fields and coherent structures in nonuniform unmagnetized plasma

    SciTech Connect (OSTI)

    Jucker, Martin; Andrushchenko, Zhanna N.; Pavlenko, Vladimir P.

    2006-07-15

    The properties of streamers and zonal magnetic structures in magnetic electron drift mode turbulence are investigated. The stability of such large-scale structures is examined in both the kinetic and the hydrodynamic regimes, for which an instability criterion similar to the Lighthill criterion for modulational instability is found. Furthermore, these large-scale flows can undergo further nonlinear evolution after the initial linear growth, which can lead to the formation of long-lived coherent structures consisting of self-bound wave packets between the surfaces of two different flow velocities, with an expected modification of the anomalous electron transport properties.
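
    For reference, the Lighthill criterion alluded to above is conventionally stated as follows: a wave train of amplitude a and dispersion relation omega(k, |a|^2) is modulationally unstable when the nonlinear frequency shift and the group-velocity dispersion have opposite signs (a standard form, quoted here for orientation rather than from the paper itself):

      \frac{\partial \omega}{\partial |a|^{2}} \cdot \frac{\partial^{2} \omega}{\partial k^{2}} < 0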

  7. Large-Scale Deep Learning on the YFCC100M Dataset (Conference) | SciTech Connect

    Office of Scientific and Technical Information (OSTI)

    Conference: Large-Scale Deep Learning on the YFCC100M Dataset. Authors: Ni, K; Boakye, K; Van Essen, B; Pearce, R; Borth, D; Chen, B; Wang, E. Publication Date: 2014-10-01. OSTI Identifier: 1177251. Report Number(s): LLNL-CONF-661841. DOE Contract Number: DE-AC52-07NA27344. Resource Type: Conference. Resource Relation: Presented at: Neural Information Processing Systems 2014,

  8. Stimulated forward Raman scattering in large scale-length laser-produced plasmas (Journal Article) | SciTech Connect

    Office of Scientific and Technical Information (OSTI)

    Title: Stimulated forward Raman scattering in large scale-length laser-produced plasmas. Authors: Niemann, C; Berger, R L; Divol, L; Kirkwood, R K; Moody, J D; Sorce, C M; Glenzer, S H. Publication Date: 2011-08-22. OSTI Identifier: 1113524. Report Number(s): LLNL-JRNL-496073. DOE Contract Number: W-7405-ENG-48. Resource Type:

  9. DOE Awards $126.6 Million for Two More Large-Scale Carbon Sequestration Projects | Department of Energy

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    May 6, 2008 - 11:30am. Projects in California and Ohio Join Four Others in Effort to Drastically Reduce Greenhouse Gas Emissions. WASHINGTON, DC - The U.S. Department of Energy (DOE) today announced awards of more than $126.6 million to the West Coast Regional Carbon Sequestration Partnership (WESTCARB) and

  10. Robust and scalable scheme to generate large-scale entanglement webs (Journal Article) | SciTech Connect

    Office of Scientific and Technical Information (OSTI)

    We propose a robust and scalable scheme to generate an N-qubit W state among separated quantum nodes (cavity-QED systems) by using linear optics and postselections. The present scheme inherits the robustness of the Barrett-Kok scheme [S. D. Barrett and P. Kok, Phys. Rev. A 71,

  11. Energy Department Loan Guarantee Would Support Large-Scale Rooftop Solar Power for U.S. Military Housing | Department of Energy

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    September 7, 2011 - 2:10pm. Washington, D.C. - U.S. Energy Secretary Steven Chu today announced the offer of a conditional commitment for a partial guarantee of a $344 million loan that will support the SolarStrong Project, which is expected

  12. Thermal Performance Evaluation of Attic Radiant Barrier Systems Using the Large Scale Climate Simulator (LSCS)

    SciTech Connect (OSTI)

    Shrestha, Som S; Miller, William A; Desjarlais, Andre Omer

    2013-01-01

    Application of radiant barriers and low-emittance surface coatings in residential building attics can significantly reduce conditioning loads from heat flow through attic floors. The roofing industry has been developing and using various radiant barrier systems and low-emittance surface coatings to increase energy efficiency in buildings; however, minimal data are available that quantify the effectiveness of these technologies. This study evaluates the performance of various attic radiant barrier systems under simulated summer daytime conditions and nighttime or low-solar-gain daytime winter conditions using the Large Scale Climate Simulator (LSCS). The four attic configurations that were evaluated are 1) no radiant barrier (control), 2) perforated low-e foil laminated to an oriented strand board (OSB) deck, 3) low-e foil stapled on rafters, and 4) liquid-applied low-emittance coating on roof deck and rafters. All test attics used nominal R{sub US}-13 h-ft{sup 2}-F/Btu (R{sub SI} 2.29 m{sup 2}-K/W) fiberglass batt insulation on the attic floor. Results indicate that the three systems with radiant barriers had heat flows through the attic floor during the summer daytime condition that were 33%, 50%, and 19% lower than the control, respectively.
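
    The size of the radiant-barrier effect can be illustrated with the standard gray-body exchange formula for two large parallel plates, a common idealization of the roof deck facing the attic floor. Temperatures and emittances below are assumed for illustration, not LSCS test conditions.

      # Gray-body radiative exchange between two large parallel plates (a
      # common idealization of roof deck and attic floor). Temperatures and
      # emittances are assumed for illustration, not LSCS test conditions.
      SIGMA = 5.670e-8    # Stefan-Boltzmann constant, W/(m^2 K^4)

      def q_exchange(t_hot, t_cold, e_hot, e_cold):
          """Net radiative flux between parallel plates, W/m^2."""
          return SIGMA * (t_hot**4 - t_cold**4) / (1/e_hot + 1/e_cold - 1)

      t_deck, t_floor = 330.0, 300.0            # assumed summer deck/floor temps, K
      q_plain = q_exchange(t_deck, t_floor, 0.90, 0.90)   # ordinary surfaces
      q_foil  = q_exchange(t_deck, t_floor, 0.05, 0.90)   # low-e foil on deck
      print(f"plain deck: {q_plain:.0f} W/m^2, low-e deck: {q_foil:.0f} W/m^2")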

  13. Large-Scale Delamination of Multi-Layers Transition Metal Carbides and Carbonitrides MXenes

    SciTech Connect (OSTI)

    Abdelmalak, Michael Naguib; Unocic, Raymond R; Armstrong, Beth L; Nanda, Jagjit

    2015-01-01

    Herein we report a general approach to delaminating multi-layered MXenes using an organic base to induce swelling that, in turn, weakens the bonds between the MX layers. Simple agitation or mild sonication of the swollen MXene in water results in large-scale delamination of the MXene layers. The delamination method is demonstrated for vanadium carbide and titanium carbonitride MXenes.

  14. Optimization of large-scale heterogeneous system-of-systems models.

    SciTech Connect (OSTI)

    Parekh, Ojas; Watson, Jean-Paul; Phillips, Cynthia Ann; Siirola, John; Swiler, Laura Painton; Hough, Patricia Diane; Lee, Herbert K. H.; Hart, William Eugene; Gray, Genetha Anne; Woodruff, David L.

    2012-01-01

    Decision makers increasingly rely on large-scale computational models to simulate and analyze complex man-made systems. For example, computational models of national infrastructures are being used to inform government policy, assess economic and national security risks, evaluate infrastructure interdependencies, and plan for the growth and evolution of infrastructure capabilities. A major challenge for decision makers is the analysis of national-scale models that are composed of interacting systems: effective integration of system models is difficult, there are many parameters to analyze in these systems, and fundamental modeling uncertainties complicate analysis. This project is developing optimization methods to effectively represent and analyze large-scale heterogeneous system of systems (HSoS) models, which have emerged as a promising approach for describing such complex man-made systems. These optimization methods enable decision makers to predict future system behavior, manage system risk, assess tradeoffs between system criteria, and identify critical modeling uncertainties.
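
    To make the "interacting systems" idea concrete, the toy model below couples two system models through a shared resource constraint and optimizes a joint objective. It is written in Pyomo, a Sandia-developed optimization toolkit whose use here is an assumption for illustration, not a claim about the project's actual implementation (which the abstract does not name).

      # Toy illustration of coupling two system models through shared
      # constraints in Pyomo (a Sandia optimization toolkit; its use here
      # is an assumption, not the project's documented implementation).
      from pyomo.environ import (ConcreteModel, Var, Objective, Constraint,
                                 NonNegativeReals, maximize, SolverFactory)

      m = ConcreteModel()
      m.power = Var(within=NonNegativeReals)      # output of system 1
      m.water = Var(within=NonNegativeReals)      # output of system 2

      # Interdependency: both systems draw on one shared budget.
      m.budget = Constraint(expr=3 * m.power + 2 * m.water <= 12)
      # Water treatment itself consumes some power.
      m.coupling = Constraint(expr=m.power >= 0.5 * m.water)

      m.value = Objective(expr=5 * m.power + 4 * m.water, sense=maximize)
      SolverFactory("glpk").solve(m)              # assumes the glpk solver is installed
      print(m.power(), m.water())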

  15. Development of large scale production of Nd-doped phosphate glasses for megajoule-scale laser systems

    SciTech Connect (OSTI)

    Ficini, G.; Campbell, J.H.

    1996-05-01

    Nd-doped phosphate glasses are the preferred gain medium for high-peak-power lasers used for Inertial Confinement Fusion research because they have excellent energy storage and extraction characteristics. In addition, these glasses can be manufactured defect-free in large sizes and at relatively low cost. To meet the requirements of future megajoule-scale lasers, advanced laser glass manufacturing methods are being developed that would enable laser glass to be continuously produced at the rate of several thousand large (790 x 440 x 44 mm{sup 3}) plates of glass per year. This represents more than a 10- to 100-fold improvement in the scale of the present manufacturing technology.

  16. Effects of Introduced Materials in the Drift Scale Test

    SciTech Connect (OSTI)

    DeLoach, L; Jones, RL

    2002-01-11

    Water samples previously acquired from superheated (>140 C) zones within hydrological test boreholes of the Drift Scale Test (DST) show relatively high fluoride concentrations (5-66 ppm) and low pH (3.1-3.5) values. In these high-temperature regions of the rock, water is present as superheated vapor only; liquid water for sampling is obtained by cooling during the sampling process. Based on data collected to date, it is evident that the source of the fluoride and low pH is introduced man-made materials (Teflon{trademark} and/or Viton{trademark} fluoroelastomer) used in the test. The test materials may contribute fluoride either by degassing hydrogen fluoride (HF) directly to produce trace concentrations of HF gas ({approx}0.1 ppm) in the high-temperature steam, or by leaching fluoride in the sampling tubes after condensation of the superheated steam. HF gas is known to be released from Viton{trademark} at high temperatures (Dupont Dow Elastomers L.L.C., Elkton, MD, personal communication), and the sample water compositions show a near-stoichiometric balance of hydrogen and fluoride ions, consistent with dissolution of HF gas into the aqueous phase. These conclusions are based on a series of water samples collected to determine whether the source of the fluoride is the degradation of materials originally installed to facilitate measurements. Analyses of these water samples show that the source of the fluoride is the introduced materials, that is, the Viton{trademark} packers used to isolate test zones and/or the Teflon{trademark} tubing used to draw water and steam from the test zones. In particular, water samples collected from borehole (BH) 72 at high temperatures ({approx}170 C) prior to introduction of any Viton{trademark} or Teflon{trademark} show pH values (4.8 to 5.5) and fluoride concentrations well below 1 ppm over a period of six months. These characteristics are typical of condensing DST steam that contains only some dissolved carbon dioxide generated by water-mineral-gas reactions in the rock. With the introduction of the Viton{trademark} packer materials and Teflon{trademark} sampling tube in BH72, the water samples show pH values dropping to 3.8 while fluoride rises to 2.4 ppm within three days. After nine days, the pH values reach as low as 3.4 and fluoride concentrations rise as high as 7.5 ppm in the collected samples. The background information describing the fluoride issue and a summary of the water collection activities, along with the analytical results, are provided below. The results of the field test confirm the hypothesis that the source of the fluoride in specific samples from the DST is the introduced test materials (i.e., Viton{trademark} and/or Teflon{trademark}). This is positive from the perspective of repository performance, particularly waste package and drip shield degradation behavior, as deleterious introduced materials would be avoided in an operating repository. Ongoing laboratory testing to be completed in January 2002, and additional testing in BH72 and BH55, will address further details, such as the specific material introducing the fluorine and the material breakdown process.
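
    The "near stoichiometric balance" claim can be checked directly from the extreme values quoted above (pH 3.4 and 7.5 ppm fluoride): if the acidity comes from dissolved HF, the hydrogen and fluoride ion molarities should match one-to-one.

      # Quick check of the stoichiometric-balance claim using the extreme
      # values quoted in the abstract (pH 3.4, 7.5 ppm fluoride).
      h_plus = 10 ** (-3.4)            # mol/L from pH 3.4
      f_minus = 7.5e-3 / 19.0          # 7.5 mg/L fluoride / 19 g/mol
      print(f"[H+] = {h_plus:.2e} mol/L")   # ~4.0e-4
      print(f"[F-] = {f_minus:.2e} mol/L")  # ~3.9e-4, essentially 1:1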

  17. Towards physics responsible for large-scale Lyman-α forest bias parameters

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Agnieszka M. Cieplak; Slosar, Anze

    2016-03-08

    Using a series of carefully constructed numerical experiments based on hydrodynamic cosmological SPH simulations, we attempt to build an intuition for the relevant physics behind the large-scale density (bδ) and velocity gradient (bη) biases of the Lyman-α forest. Starting with the fluctuating Gunn-Peterson approximation applied to the smoothed total density field in real space, and progressing through redshift space with no thermal broadening, redshift space with thermal broadening, and hydrodynamically simulated baryon fields, we investigate how approximations found in the literature fare. We find that Seljak's 2012 analytical formulae for these bias parameters work surprisingly well in the limit of no thermal broadening and linear redshift-space distortions. We also show that his bη formula is exact in the limit of no thermal broadening. Since the introduction of thermal broadening significantly affects its value, we speculate that a combination of large-scale measurements of bη and the small-scale flux PDF might be a sensitive probe of the thermal state of the IGM. Lastly, we find that large-scale biases derived from the smoothed total matter field are within 10-20% of those based on hydrodynamical quantities, in line with other measurements in the literature.
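
    For context, the fluctuating Gunn-Peterson approximation referenced above is commonly written as (a standard form, stated here for orientation rather than quoted from the paper):

      F = e^{-\tau}, \qquad \tau \propto (1 + \delta)^{\beta}, \qquad \beta \simeq 2 - 0.7\,(\gamma - 1),

    and on large scales the flux fluctuations are parameterized with exactly the two biases studied here:

      \delta_F = b_\delta \, \delta + b_\eta \, \eta, \qquad \eta = -\frac{1}{aH}\,\frac{\partial v_\parallel}{\partial x_\parallel}.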

  18. Large-Scale Transport Model Uncertainty and Sensitivity Analysis: Distributed Sources in Complex Hydrogeologic Systems

    SciTech Connect (OSTI)

    Sig Drellack, Lance Prothro

    2007-12-01

    The Underground Test Area (UGTA) Project of the U.S. Department of Energy, National Nuclear Security Administration Nevada Site Office is in the process of assessing and developing regulatory decision options based on modeling predictions of contaminant transport from underground testing of nuclear weapons at the Nevada Test Site (NTS). The UGTA Project is attempting to develop an effective modeling strategy that addresses and quantifies multiple components of uncertainty including natural variability, parameter uncertainty, conceptual/model uncertainty, and decision uncertainty in translating model results into regulatory requirements. The modeling task presents multiple unique challenges to the hydrological sciences as a result of the complex fractured and faulted hydrostratigraphy, the distributed locations of sources, the suite of reactive and non-reactive radionuclides, and uncertainty in conceptual models. Characterization of the hydrogeologic system is difficult and expensive because of deep groundwater in the arid desert setting and the large spatial setting of the NTS. Therefore, conceptual model uncertainty is partially addressed through the development of multiple alternative conceptual models of the hydrostratigraphic framework and multiple alternative models of recharge and discharge. Uncertainty in boundary conditions is assessed through development of alternative groundwater fluxes through multiple simulations using the regional groundwater flow model. Calibration of alternative models to heads and measured or inferred fluxes has not proven to provide clear measures of model quality. Therefore, model screening by comparison to independently-derived natural geochemical mixing targets through cluster analysis has also been invoked to evaluate differences between alternative conceptual models. Advancing multiple alternative flow models, sensitivity of transport predictions to parameter uncertainty is assessed through Monte Carlo simulations. The simulations are challenged by the distributed sources in each of the Corrective Action Units, by complex mass transfer processes, and by the size and complexity of the field-scale flow models. An efficient methodology utilizing particle tracking results and convolution integrals provides in situ concentrations appropriate for Monte Carlo analysis. Uncertainty in source releases and transport parameters including effective porosity, fracture apertures and spacing, matrix diffusion coefficients, sorption coefficients, and colloid load and mobility are considered. With the distributions of input uncertainties and output plume volumes, global analysis methods including stepwise regression, contingency table analysis, and classification tree analysis are used to develop sensitivity rankings of parameter uncertainties for each model considered, thus assisting a variety of decisions.
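
    The convolution-integral idea mentioned above can be sketched in a few lines: the in situ concentration is the convolution of a source release history with a unit-response breakthrough curve (of the kind derived from particle-tracking statistics). Everything below is synthetic and purely illustrative, not UGTA data.

      import numpy as np

      # Minimal sketch of the convolution-integral approach: concentration
      # as the convolution of a release history with a unit-response
      # breakthrough curve. All curves are synthetic placeholders.
      dt = 1.0                                   # years per step
      t = np.arange(0, 200, dt)
      release = np.where(t < 25, 1.0, 0.0)       # assumed 25-yr unit release rate

      # Assumed unit-response curve: a lognormal-shaped arrival-time
      # distribution, normalized to unit area.
      h = np.exp(-(np.log(t + 1e-9) - np.log(60.0))**2 / (2 * 0.5**2)) / (t + 1e-9)
      h /= np.trapz(h, t)

      conc = np.convolve(release, h)[:t.size] * dt   # discrete convolution integral
      print(f"peak concentration at t = {t[np.argmax(conc)]:.0f} yr")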

  19. NREL Controllable Grid Interface for Testing MW-Scale Wind Turbine Generators (Poster)

    Office of Scientific and Technical Information (OSTI)

    Controllable Grid Interface for Testing MW-Scale Wind Turbine Generators (Poster). Authors: McDade, M.; Gevorgian, V.; Wallen, R.; Erdman, W. Subject: 17 WIND ENERGY; WIND TURBINE TESTING;...

  20. On scale and magnitude of pressure build-up induced by large-scale geologic storage of CO2

    SciTech Connect (OSTI)

    Zhou, Q.; Birkholzer, J. T.

    2011-05-01

    The scale and magnitude of pressure perturbation and brine migration induced by geologic carbon sequestration are discussed assuming a full-scale deployment scenario in which enough CO{sub 2} is captured and stored to make relevant contributions to global climate change mitigation. In this scenario, the volumetric rates and cumulative volumes of CO{sub 2} injection would be comparable to or higher than those related to existing deep-subsurface injection and extraction activities, such as oil production. Large-scale pressure build-up in response to the injection may limit the dynamic storage capacity of suitable formations, because over-pressurization may fracture the caprock, may drive CO{sub 2}/brine leakage through localized pathways, and may cause induced seismicity. On the other hand, laterally extensive sedimentary basins may be less affected by such limitations because (i) local pressure effects are moderated by pressure propagation and brine displacement into regions far away from the CO{sub 2} storage domain; and (ii) diffuse and/or localized brine migration into overlying and underlying formations allows for pressure bleed-off in the vertical direction. A quick analytical estimate of the extent of pressure build-up induced by industrial-scale CO{sub 2} storage projects is presented. Also discussed are pressure perturbation and attenuation effects simulated for two representative sedimentary basins in the USA: the laterally extensive Illinois Basin and the partially compartmentalized southern San Joaquin Basin in California. These studies show that the limiting effect of pressure build-up on dynamic storage capacity is not as significant as suggested by Ehlig-Economides and Economides, who considered closed systems without any attenuation effects.
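
    One standard single-phase estimate of far-field pressure build-up from constant-rate injection into a laterally extensive aquifer is the Theis-type line-source solution, sketched below. This illustrates the kind of "quick analytical estimate" mentioned above; the parameter values are assumed, the formulation is not necessarily the authors' exact one, and the real two-phase CO{sub 2}/brine problem is more complex near the well.

      import numpy as np
      from scipy.special import exp1

      # Theis-type line-source estimate of pressure build-up from constant
      # injection into a laterally extensive formation. All parameter
      # values below are assumed for illustration.
      Q   = 0.05      # injection rate, m^3/s (assumed)
      mu  = 5e-4      # brine viscosity, Pa s (assumed)
      k   = 1e-13     # permeability, m^2 (~100 mD, assumed)
      b   = 100.0     # formation thickness, m (assumed)
      phi = 0.15      # porosity (assumed)
      ct  = 1e-9      # total compressibility, 1/Pa (assumed)

      t = 30 * 3.15e7                      # 30 years of injection, s
      r = np.array([1e3, 1e4, 1e5])        # 1, 10, 100 km from the well
      u = phi * mu * ct * r**2 / (4 * k * t)
      dp = Q * mu / (4 * np.pi * k * b) * exp1(u)
      for ri, dpi in zip(r, dp):
          print(f"r = {ri/1e3:5.0f} km: delta-p ~ {dpi/1e5:.2f} bar")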

  1. Approaching the exa-scale: a real-world evaluation of rendering extremely large data sets

    SciTech Connect (OSTI)

    Patchett, John M; Ahrens, James P; Lo, Li-Ta; Brownlee, Carson S; Mitchell, Christopher J; Hansen, Chuck

    2010-10-15

    Extremely large-scale analysis is becoming increasingly important as supercomputers and their simulations move from petascale to exascale. The lack of dedicated hardware acceleration for rendering on today's supercomputing platforms motivates our detailed evaluation of the possibility of interactive rendering on the supercomputer. In order to facilitate our understanding of rendering on the supercomputing platform, we focus on scalability of rendering algorithms and architecture envisioned for exascale datasets. To understand tradeoffs for dealing with extremely large datasets, we compare three different rendering algorithms for large polygonal data: software-based ray tracing, software-based rasterization, and hardware-accelerated rasterization. We present a case study of strong and weak scaling of rendering extremely large data on both GPU- and CPU-based parallel supercomputers using ParaView, a parallel visualization tool. We use three different data sets: two synthetic and one from a scientific application. At an extreme scale, algorithmic rendering choices make a difference and should be considered while approaching exascale computing, visualization, and analysis. We find that software-based ray tracing offers a viable approach for scalable rendering of the projected future massive data sizes.
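
    The strong- and weak-scaling measures used in the study above reduce to simple arithmetic, shown below with synthetic timings (in a real study these come from benchmark runs).

      # The two scaling measures used above, as plain arithmetic.
      # Timings here are synthetic, not the paper's results.
      def strong_scaling_eff(t1, tn, n):
          """Fixed total problem size: ideal is tn = t1 / n."""
          return t1 / (n * tn)

      def weak_scaling_eff(t1, tn):
          """Problem size grows with n: ideal is tn = t1."""
          return t1 / tn

      print(strong_scaling_eff(t1=100.0, tn=1.6, n=64))   # ~0.98 -> near-ideal
      print(weak_scaling_eff(t1=100.0, tn=125.0))         # 0.80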

  2. Large Optic Drying Station: Summary of Dryer Certification Tests

    SciTech Connect (OSTI)

    Barbee, T W; Ayers, S L; Ayers, M J

    2009-08-28

    The purpose of this document is to outline the methodology used to baseline and maintain the cleanliness status of the newly built and installed Large Optic Cleaning Station (LOCS). The station has been in use for eleven months and, after many cleaning studies and implementation of the resulting improvements, appears to be cleaning optics to a level that is acceptable for the fabrication of nano-laminates.

  3. Final Report: Large-Scale Optimization for Bayesian Inference in Complex Systems

    SciTech Connect (OSTI)

    Ghattas, Omar

    2013-10-15

    The SAGUARO (Scalable Algorithms for Groundwater Uncertainty Analysis and Robust Optimization) Project focuses on the development of scalable numerical algorithms for large-scale Bayesian inversion in complex systems that capitalize on advances in large-scale simulation-based optimization and inversion methods. Our research is directed in three complementary areas: efficient approximations of the Hessian operator, reductions in complexity of forward simulations via stochastic spectral approximations and model reduction, and employing large-scale optimization concepts to accelerate sampling. Our efforts are integrated in the context of a challenging testbed problem that considers subsurface reacting flow and transport. The MIT component of the SAGUARO Project addresses the intractability of conventional sampling methods for large-scale statistical inverse problems by devising reduced-order models that are faithful to the full-order model over a wide range of parameter values; sampling then employs the reduced model rather than the full model, resulting in very large computational savings. Results indicate little effect on the computed posterior distribution. On the other hand, in the Texas-Georgia Tech component of the project, we retain the full-order model, but exploit inverse problem structure (adjoint-based gradients and partial Hessian information of the parameter-to-observation map) to implicitly extract lower-dimensional information on the posterior distribution; this greatly speeds up sampling methods, so that fewer sampling points are needed. We can think of these two approaches as "reduce then sample" and "sample then reduce." In fact, these two approaches are complementary, and can be used in conjunction with each other. Moreover, they both exploit deterministic inverse problem structure, in the form of adjoint-based gradient and Hessian information of the underlying parameter-to-observation map, to achieve their speedups.
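
    The "reduce then sample" idea can be shown in miniature: run a Metropolis sampler against a cheap surrogate of an expensive posterior. The quadratic surrogate below stands in for a reduced-order model; it is entirely synthetic and not the SAGUARO problem.

      import numpy as np

      # "Reduce then sample" in miniature: Metropolis sampling against a
      # cheap surrogate log-posterior instead of the expensive forward
      # model. The surrogate is a synthetic stand-in, not SAGUARO's.
      rng = np.random.default_rng(0)

      def surrogate_logpost(x):
          # Assumed reduced model: Gaussian posterior, mean 2.0, std 0.5.
          return -0.5 * ((x - 2.0) / 0.5) ** 2

      x, chain = 0.0, []
      for _ in range(20000):
          prop = x + 0.4 * rng.standard_normal()
          if np.log(rng.random()) < surrogate_logpost(prop) - surrogate_logpost(x):
              x = prop
          chain.append(x)

      samples = np.array(chain[2000:])           # drop burn-in
      print(samples.mean(), samples.std())       # ~2.0, ~0.5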

  4. Full-Scale Cask Testing and Public Acceptance of Spent Nuclear Fuel Shipments - 12254

    SciTech Connect (OSTI)

    Dilger, Fred; Halstead, Robert J.; Ballard, James D.

    2012-07-01

    Full-scale physical testing of spent fuel shipping casks has been proposed by the National Academy of Sciences (NAS) 2006 report on spent nuclear fuel transportation, and by the Presidential Blue Ribbon Commission (BRC) on America's Nuclear Future 2011 draft report. The U.S. Nuclear Regulatory Commission (NRC) in 2005 proposed full-scale testing of a rail cask, and considered 'regulatory limits' testing of both rail and truck casks (SRM SECY-05-0051). The recent U.S. Department of Energy (DOE) cancellation of the Yucca Mountain project, NRC evaluation of extended spent fuel storage (possibly beyond 60-120 years) before transportation, nuclear industry adoption of very large dual-purpose canisters for spent fuel storage and transport, and the deliberations of the BRC, will fundamentally change assumptions about the future spent fuel transportation system, and reopen the debate over shipping cask performance in severe accidents and acts of sabotage. This paper examines possible approaches to full-scale testing for enhancing public confidence in risk analyses, perception of risk, and acceptance of spent fuel shipments. The paper reviews the literature on public perception of spent nuclear fuel and nuclear waste transportation risks. We review and summarize opinion surveys sponsored by the State of Nevada over the past two decades, which show consistent patterns of concern among Nevada residents about health and safety impacts, and socioeconomic impacts such as reduced property values along likely transportation routes. We also review and summarize the large body of public opinion survey research on transportation concerns at regional and national levels. The paper reviews three past cask testing programs, the way in which these cask testing program results were portrayed in films and videos, and examines public and official responses to these three programs: the 1970's impact and fire testing of spent fuel truck casks at Sandia National Laboratories, the 1980's regulatory and demonstration testing of MAGNOX fuel flasks in the United Kingdom (the CEGB 'Operation Smash Hit' tests), and the 1980's regulatory drop and fire tests conducted on the TRUPACT II containers used for transuranic waste shipments to the Waste Isolation Pilot Plant in New Mexico. The primary focus of the paper is a detailed evaluation of the cask testing programs proposed by the NRC in its decision implementing staff recommendations based on the Package Performance Study, and by the State of Nevada recommendations based on previous work by Audin, Resnikoff, Dilger, Halstead, and Greiner. The NRC approach is based on demonstration impact testing (locomotive strike) of a large rail cask, either the TAD cask proposed by DOE for spent fuel shipments to Yucca Mountain, or a similar currently licensed dual-purpose cask. The NRC program might also be expanded to include fire testing of a legal-weight truck cask. The Nevada approach calls for a minimum of two tests: regulatory testing (impact, fire, puncture, immersion) of a rail cask, and extra-regulatory fire testing of a legal-weight truck cask, based on the cask performance modeling work by Greiner. The paper concludes with a discussion of key procedural elements - test costs and funding sources, development of testing protocols, selection of testing facilities, and test peer review - and various methods of communicating the test results to a broad range of stakeholder audiences. (authors)

  5. General-Purpose Heat Source development: Extended series test program large fragment tests

    SciTech Connect (OSTI)

    Cull, T.A.

    1989-08-01

    General-Purpose Heat Source radioisotope thermoelectric generators (GPHS-RTGs) will provide electric power for the NASA Galileo and European Space Agency Ulysses missions. Each GPHS-RTG comprises two major components: GPHS modules, which provide thermal energy, and a thermoelectric converter, which converts the thermal energy into electric power. Each of the 18 GPHS modules in a GPHS-RTG contains four {sup 238}PuO{sub 2}-fueled capsules. LANL conducted a series of safety verification tests on the GPHS-RTG before the scheduled May 1986 launch of the Galileo spacecraft to assess the ability of the GPHS modules to contain the plutonia in potential accident environments. As a result of the Challenger 51-L accident in January 1986, NASA postponed the launch of Galileo; the launch vehicle was reconfigured and the spacecraft trajectory was modified. These actions prompted NASA to reevaluate potential mission accidents, and an extended series safety test program was initiated. The program included a series of large fragment tests that simulated the collision of solid rocket booster (SRB) fragments, generated in an SRB motor case rupture or resulting from a range safety officer SRB destruct action, with the GPHS-RTG. The tests indicated that fueled clads, inside a converter, will not breach or release fuel after a square (142 cm on a side) SRB fragment impacts flat-on at velocities up to 212 m/s, and that only the leading fueled capsules breach and release fuel after the square SRB fragment impacts the modules, inside the converter, edge-on at 95 m/s. 8 refs., 32 figs., 7 tabs.

  6. Molecular Dynamics Simulations from SNL's Large-scale Atomic/Molecular Massively Parallel Simulator (LAMMPS)

    DOE Data Explorer [Office of Scientific and Technical Information (OSTI)]

    Plimpton, Steve; Thompson, Aidan; Crozier, Paul

    LAMMPS (http://lammps.sandia.gov/index.html) stands for Large-scale Atomic/Molecular Massively Parallel Simulator and is a code that can be used to model atoms or, as the LAMMPS website says, to act as a parallel particle simulator at the atomic, meso, or continuum scale. This Sandia-based website provides a long list of animations from large simulations. These were created using different visualization packages to read LAMMPS output, and each one provides the name of the PI and a brief description of the work done or the visualization package used. See also the static images produced from simulations at http://lammps.sandia.gov/pictures.html. The foundation paper for LAMMPS is S. Plimpton, "Fast Parallel Algorithms for Short-Range Molecular Dynamics," J Comp Phys, 117, 1-19 (1995), but the website also lists other papers describing contributions to LAMMPS over the years.
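
    As a flavor of how such simulations are driven, the sketch below runs a small Lennard-Jones melt through LAMMPS's Python wrapper (this requires a LAMMPS build with its Python module installed; the commands mirror the classic "melt" example shipped with the LAMMPS distribution).

      # Minimal Lennard-Jones melt driven through the LAMMPS Python wrapper.
      # Requires a LAMMPS build with the Python module; commands mirror the
      # classic "melt" example from the LAMMPS distribution.
      from lammps import lammps

      lmp = lammps()
      for cmd in [
          "units lj",
          "atom_style atomic",
          "lattice fcc 0.8442",
          "region box block 0 10 0 10 0 10",
          "create_box 1 box",
          "create_atoms 1 box",
          "mass 1 1.0",
          "velocity all create 3.0 87287",
          "pair_style lj/cut 2.5",
          "pair_coeff 1 1 1.0 1.0 2.5",
          "fix 1 all nve",
      ]:
          lmp.command(cmd)
      lmp.command("run 100")
      print("atoms:", lmp.get_natoms())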

  7. Structure Discovery in Large Semantic Graphs Using Extant Ontological Scaling and Descriptive Statistics

    SciTech Connect (OSTI)

    al-Saffar, Sinan; Joslyn, Cliff A.; Chappell, Alan R.

    2011-07-18

    As semantic datasets grow to be very large and divergent, there is a need to identify and exploit their inherent semantic structure for discovery and optimization. Towards that end, we present here a novel methodology to identify the semantic structures inherent in an arbitrary semantic graph dataset. We first present the concept of an extant ontology as a statistical description of the semantic relations present amongst the typed entities modeled in the graph. This serves as a model of the underlying semantic structure to aid in discovery and visualization. We then describe a method of ontological scaling in which the ontology is employed as a hierarchical scaling filter to infer different resolution levels at which the graph structures are to be viewed or analyzed. We illustrate these methods on three large and publicly available semantic datasets containing more than one billion edges each. Keywords: Semantic Web; Visualization; Ontology; Multi-resolution Data Mining.
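
    The "extant ontology" concept reduces, at its core, to tabulating which (subject-type, predicate, object-type) patterns actually occur in the graph and how often. The sketch below illustrates this on toy triples; the data and type map are placeholders, not the paper's datasets.

      from collections import Counter

      # Sketch of the extant-ontology idea: count the typed-edge patterns
      # that actually occur in a triple store. Toy data, not the paper's.
      triples = [
          ("alice", "authored", "paper1"),
          ("bob", "authored", "paper1"),
          ("paper1", "cites", "paper2"),
          ("paper2", "publishedIn", "journalA"),
      ]
      rdf_type = {"alice": "Person", "bob": "Person", "paper1": "Paper",
                  "paper2": "Paper", "journalA": "Journal"}

      extant = Counter((rdf_type[s], p, rdf_type[o]) for s, p, o in triples)
      for pattern, count in extant.most_common():
          print(count, pattern)   # e.g. 2 ('Person', 'authored', 'Paper')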

  8. A PRACTICAL ONTOLOGY FOR THE LARGE-SCALE MODELING OF SCHOLARLY ARTIFACTS AND THEIR USAGE

    SciTech Connect (OSTI)

    RODRIGUEZ, MARKO A.; BOLLEN, JOHAN; VAN DE SOMPEL, HERBERT

    2007-01-30

    The large-scale analysis of scholarly artifact usage is constrained primarily by current practices in usage data archiving, privacy issues concerned with the dissemination of usage data, and the lack of a practical ontology for modeling the usage domain. As a remedy to the third constraint, this article presents a scholarly ontology that was engineered to represent those classes for which large-scale bibliographic and usage data exist, supports usage research, and whose instantiation is scalable to the order of 50 million articles along with their associated artifacts (e.g. authors and journals) and an accompanying 1 billion usage events. The real-world instantiation of the presented abstract ontology is a semantic network model of the scholarly community which lends the scholarly process to statistical analysis and computational support. The authors present the ontology, discuss its instantiation, and provide some example inference rules for calculating various scholarly artifact metrics.

  9. ISSUANCE 2015-12-11: Final Rule Regarding Test Procedures for Small, Large, and Very Large Air-Cooled Commercial Package Air Conditioning and Heating Equipment | Department of Energy

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    PDF: CUAC TP Final Rule.pdf. More Documents & Publications: ISSUANCE 2015-07-27:

  10. PARTICLE ACCELERATION BY COLLISIONLESS SHOCKS CONTAINING LARGE-SCALE MAGNETIC-FIELD VARIATIONS

    SciTech Connect (OSTI)

    Guo, F.; Jokipii, J. R.; Kota, J. E-mail: jokipii@lpl.arizona.ed

    2010-12-10

    Diffusive shock acceleration at collisionless shocks is thought to be the source of many of the energetic particles observed in space. Large-scale spatial variations of the magnetic field have been shown to be important in understanding observations. The effects are complex, so here we consider a simple, illustrative model in which we solve the Parker transport equation numerically for a shock in the presence of large-scale sinusoidal magnetic-field variations. We demonstrate that the familiar planar-shock results can be significantly altered as a consequence of large-scale, meandering magnetic lines of force. Because the perpendicular diffusion coefficient {kappa}{sub perpendicular} is generally much smaller than the parallel diffusion coefficient {kappa}{sub ||}, the energetic charged particles are trapped and preferentially accelerated along the shock front in the regions where the connection points of magnetic field lines intersecting the shock surface converge, and thus create the 'hot spots' of accelerated particles. In the regions where the connection points separate from each other, acceleration to high energies will be suppressed. Further, the particles diffuse away from the 'hot spot' regions and modify the spectra of the downstream particle distribution. These features are qualitatively similar to the recent Voyager observations in the heliosheath. These results are potentially important for particle acceleration at shocks propagating in turbulent magnetized plasmas, as well as at shocks which contain large-scale nonplanar structures. Examples include anomalous cosmic rays accelerated by the solar wind termination shock, energetic particles observed in propagating heliospheric shocks, and galactic cosmic rays accelerated by supernova blast waves.
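
    For orientation, the Parker transport equation solved in this work is conventionally written as follows, where f is the omnidirectional distribution function, V the background flow velocity, {kappa}{sub ij} the anisotropic diffusion tensor, p the particle momentum, and Q a source term:

      \frac{\partial f}{\partial t} + V_{i}\,\frac{\partial f}{\partial x_{i}}
        = \frac{\partial}{\partial x_{i}}\!\left(\kappa_{ij}\,\frac{\partial f}{\partial x_{j}}\right)
        + \frac{1}{3}\,\frac{\partial V_{i}}{\partial x_{i}}\,\frac{\partial f}{\partial \ln p} + Q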

  11. Metal Catalyzed sp2 Bonded Carbon - Large-scale Graphene Synthesis and Beyond | MIT-Harvard Center for Excitonics

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    December 1, 2009 at 3pm, 36-428. Peter Sutter, Center for Functional Nanomaterials. Abstract: Carbon honeycomb lattices have shown a number of remarkable properties. When wrapped up into fullerenes, for instance, superconductivity with high transition temperatures can be induced by alkali intercalation. Rolling carbon sheets up into 1-dimensional nanotubes generates the

  12. Effects of Volcanism, Crustal Thickness, and Large Scale Faulting on the Development and Evolution of Geothermal Systems: Collaborative Project in Chile | Department of Energy

    Broader source: Energy.gov (indexed) [DOE]

    Presentation at the April 2013 peer review meeting held in Denver, Colorado. PDF: collaborative_project_chile_peer2013.pdf

  13. High Fidelity Simulations of Large-Scale Wireless Networks (Plus-Up)

    SciTech Connect (OSTI)

    Onunkwo, Uzoma

    2015-11-01

    Sandia has built a strong reputation in scalable network simulation and emulation for cyber security studies to protect our nation’s critical information infrastructures. Georgia Tech has a preeminent reputation in academia for excellence in scalable discrete-event simulation, with a strong emphasis on simulating cyber networks. Many of the experts in this field, such as Dr. Richard Fujimoto, Dr. George Riley, and Dr. Chris Carothers, have strong affiliations with Georgia Tech. The collaborative relationship that we intend to pursue immediately is in high-fidelity simulation of practical large-scale wireless networks using the ns-3 simulator, via Dr. George Riley. This project will have mutual benefits in bolstering both institutions' expertise and reputations in the field of scalable simulation for cyber-security studies. This project promises to address high-fidelity simulation of large-scale wireless networks. This proposed collaboration is directly in line with Georgia Tech's goals for developing and expanding the Communications Systems Center, the Georgia Tech Broadband Institute, and the Georgia Tech Information Security Center, along with its yearly Emerging Cyber Threats Report. At Sandia, this work benefits the defense systems and assessment area, with promise for large-scale assessment of the cyber security needs and vulnerabilities of our nation's critical cyber infrastructures exposed to wireless communications.

  14. A method of orbital analysis for large-scale first-principles simulations

    SciTech Connect (OSTI)

    Ohwaki, Tsukuru; Otani, Minoru; Ozaki, Taisuke

    2014-06-28

    An efficient method of calculating the natural bond orbitals (NBOs) based on a truncation of the entire density matrix of a whole system is presented for large-scale density functional theory calculations. The method recovers an orbital picture for O(N) electronic structure methods which directly evaluate the density matrix without using Kohn-Sham orbitals, thus enabling quantitative analysis of chemical reactions in large-scale systems in the language of localized Lewis-type chemical bonds. With the density matrix calculated by either an exact diagonalization or an O(N) method, the computational cost is O(1) for the calculation of NBOs associated with a local region where a chemical reaction takes place. As an illustration of the method, we demonstrate how the electronic structure in a local region of interest can be analyzed by NBOs in a large-scale first-principles molecular dynamics simulation for a liquid electrolyte bulk model (propylene carbonate + LiBF{sub 4}).
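
    The flavor of the truncation step can be sketched in a few lines of linear algebra: pull out the density-matrix block for the basis functions of a local region and diagonalize it in a symmetrically orthogonalized basis, giving local natural-orbital occupations. The matrices below are random stand-ins, not chemistry, and this is a toy of the idea rather than the paper's NBO implementation.

      import numpy as np

      # Toy sketch of the truncation idea: diagonalize the local block of
      # the density matrix in a Lowdin-orthogonalized basis. Random
      # stand-in matrices, not the paper's implementation.
      rng = np.random.default_rng(1)
      n, local = 50, 8                      # total basis size, local block size
      P = rng.random((n, n)); P = 0.5 * (P + P.T)                     # fake density matrix
      S = np.eye(n) + 0.01 * rng.random((n, n)); S = 0.5 * (S + S.T)  # fake overlap

      idx = np.arange(local)                # basis functions on the local region
      P_loc, S_loc = P[np.ix_(idx, idx)], S[np.ix_(idx, idx)]

      # Lowdin orthogonalization of the local block: occ = eig(S^1/2 P S^1/2)
      w, U = np.linalg.eigh(S_loc)
      S_half = U @ np.diag(np.sqrt(w)) @ U.T
      occ = np.linalg.eigvalsh(S_half @ P_loc @ S_half)
      print("local natural-orbital occupations:", np.round(occ[::-1], 3))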

  15. Fingerprints of anomalous primordial Universe on the abundance of large scale structures

    SciTech Connect (OSTI)

    Baghram, Shant; Abolhasani, Ali Akbar; Firouzjahi, Hassan; Namjoo, Mohammad Hossein E-mail: abolhasani@ipm.ir E-mail: MohammadHossein.Namjoo@utdallas.edu

    2014-12-01

    We study the predictions of anomalous inflationary models for the abundance of structures in large-scale structure observations. The anomalous features encoded in the primordial curvature perturbation power spectrum are (a) a localized feature in momentum space, (b) hemispherical asymmetry, and (c) statistical anisotropies. We present a model-independent expression relating the number density of structures to the changes in the matter density variance. Models with a localized feature can alleviate the tension between observations and numerical simulations of cold dark matter structures on galactic scales as a possible solution to the missing satellite problem. In models with hemispherical asymmetry we show that the abundance of structures becomes asymmetric depending on the direction of observation on the sky. In addition, we study the effects of a scale-dependent dipole amplitude on the abundance of structures. Using the quasar data and adopting the power-law scaling k{sup n{sub A}-1} for the amplitude of the dipole, we find the upper bound n{sub A}<0.6 for the spectral index of the dipole asymmetry. In all cases there is a critical mass scale M{sub c} such that for M<M{sub c} (M>M{sub c}) the enhancement in variance induced by the anomalous feature decreases (increases) the abundance of dark matter structures in the Universe.

  16. A Metascalable Computing Framework for Large Spatiotemporal-Scale Atomistic Simulations

    SciTech Connect (OSTI)

    Nomura, K; Seymour, R; Wang, W; Kalia, R; Nakano, A; Vashishta, P; Shimojo, F; Yang, L H

    2009-02-17

    A metascalable (or 'design once, scale on new architectures') parallel computing framework has been developed for large spatiotemporal-scale atomistic simulations of materials based on spatiotemporal data-locality principles, which is expected to scale on emerging multipetaflops architectures. The framework consists of: (1) an embedded divide-and-conquer (EDC) algorithmic framework based on spatial locality to design linear-scaling algorithms for high-complexity problems; (2) a space-time-ensemble parallel (STEP) approach based on temporal locality to predict long-time dynamics, while introducing multiple parallelization axes; and (3) a tunable hierarchical cellular decomposition (HCD) parallelization framework to map these O(N) algorithms onto a multicore cluster based on a hybrid implementation combining message passing and critical-section-free multithreading. The EDC-STEP-HCD framework exposes maximal concurrency and data locality, thereby achieving: (1) inter-node parallel efficiency well over 0.95 for 218 billion-atom molecular-dynamics and 1.68 trillion electronic-degrees-of-freedom quantum-mechanical simulations on 212,992 IBM BlueGene/L processors (superscalability); (2) high intra-node, multithreading parallel efficiency (nanoscalability); and (3) nearly perfect time/ensemble parallel efficiency (eon-scalability). The spatiotemporal scale covered by MD simulation on a sustained petaflops computer per day (i.e. petaflops {center_dot} day of computing) is estimated as NT = 2.14 million atoms {center_dot} microseconds (e.g. N = 2.14 million atoms for T = 1 microsecond).
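
    The spatial-locality kernel behind linear-scaling frameworks of this kind is cell decomposition: bin particles into cells at least one interaction cutoff wide so each particle interacts only with its own and neighboring cells, giving O(N) work. The sketch below is a toy of that idea, not the production EDC-STEP-HCD code.

      import numpy as np

      # Toy linked-cell decomposition: each particle needs only the 27
      # surrounding cells for neighbor searches, giving O(N) total work.
      # Box size and cutoff are assumed; not the production framework.
      rng = np.random.default_rng(2)
      L, rc = 10.0, 1.0                      # box size and interaction cutoff
      pos = rng.random((1000, 3)) * L

      ncell = int(L // rc)
      cell_of = tuple(np.floor(pos[:, d] / (L / ncell)).astype(int) % ncell
                      for d in range(3))
      cells = {}
      for i, key in enumerate(zip(*cell_of)):
          cells.setdefault(key, []).append(i)

      # Neighbor candidates of particle 0: scan only the 27 adjacent cells.
      cx, cy, cz = (c[0] for c in cell_of)
      candidates = [j for dx in (-1, 0, 1) for dy in (-1, 0, 1) for dz in (-1, 0, 1)
                    for j in cells.get(((cx+dx) % ncell, (cy+dy) % ncell,
                                        (cz+dz) % ncell), [])]
      print(f"checked {len(candidates)} of {len(pos)} particles")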

  17. Technical and economical aspects of large-scale CO{sub 2} storage in deep oceans

    SciTech Connect (OSTI)

    Sarv, H.; John, J.

    2000-07-01

    The authors examined the technical and economic feasibility of two options for large-scale transportation and ocean sequestration of captured CO{sub 2} at depths of 3000 meters or greater. In one case, CO{sub 2} was pumped from a land-based collection center through six parallel-laid subsea pipelines. Another case considered oceanic tanker transport of liquid carbon dioxide to an offshore floating platform or a barge for vertical injection through a large-diameter pipe to the ocean floor. Based on the preliminary technical and economic analyses, tanker transportation and offshore injection through a large-diameter, 3,000-meter vertical pipeline from a floating structure appears to be the best method for delivering liquid CO{sub 2} to deep ocean floor depressions for distances greater than 400 km. Other benefits of offshore injection are high payload capability and ease of relocation. For shorter distances (less than 400 km), CO{sub 2} delivery by subsea pipelines is more cost-effective. Estimated costs for 500-km transport and storage at a depth of 3000 meters by subsea pipelines or tankers were under 2 dollars per ton of stored CO{sub 2}. Their analyses also indicate that large-scale sequestration of captured CO{sub 2} in oceans is technologically feasible and has many commonalities with other strategies for deep-sea natural gas and oil exploration installations.

  18. Large-scale structure evolution in axisymmetric, compressible free-shear layers

    SciTech Connect (OSTI)

    Aeschliman, D.P.; Baty, R.S.

    1997-05-01

    This paper is a description of work in progress. It describes Sandia's program to study the basic fluid mechanics of large-scale mixing in unbounded, compressible, turbulent flows; specifically, the turbulent mixing of an axisymmetric compressible helium jet in a parallel, coflowing compressible air freestream. Both jet and freestream velocities are variable over a broad range, providing a wide range of mixing-layer Reynolds numbers. Although the convective Mach number, M{sub c}, range is currently limited by the present nozzle design to values of 0.6 and below, straightforward nozzle design changes would permit a wide range of convective Mach numbers, to well in excess of 1.0. The use of helium allows simulation of a hot jet due to the large density difference, and also aids in obtaining optical flow visualization via schlieren due to the large density gradient in the mixing layer. The work comprises a blend of analysis, experiment, and direct numerical simulation (DNS). Here the authors discuss only the analytical and experimental efforts to observe and describe the evolution of the large-scale structures. The DNS work, used to compute local two-point velocity correlation data, will be discussed elsewhere.
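
    For reference, the convective Mach number quoted above is conventionally defined, for streams of equal specific-heat ratio, in terms of the two freestream velocities U{sub 1}, U{sub 2} and sound speeds a{sub 1}, a{sub 2}:

      M_{c} = \frac{U_{1} - U_{2}}{a_{1} + a_{2}}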

  19. Development of fine-resolution analyses and expanded large-scale forcing properties. Part II: Scale-awareness and application to single-column model experiments

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Feng, Sha; Vogelmann, Andrew M.; Li, Zhijin; Liu, Yangang; Lin, Wuyin; Zhang, Minghua; Toto, Tami; Endo, Satoshi

    2015-01-20

    Fine-resolution three-dimensional fields have been produced using the Community Gridpoint Statistical Interpolation (GSI) data assimilation system for the U.S. Department of Energy’s Atmospheric Radiation Measurement Program (ARM) Southern Great Plains region. The GSI system is implemented in a multi-scale data assimilation framework using the Weather Research and Forecasting model at a cloud-resolving resolution of 2 km. From the fine-resolution three-dimensional fields, large-scale forcing is derived explicitly at grid-scale resolution; a subgrid-scale dynamic component is derived separately, representing subgrid-scale horizontal dynamic processes. Analyses show that the subgrid-scale dynamic component is often a major component relative to the large-scale forcing for grid scales larger than 200 km. The single-column model (SCM) of the Community Atmospheric Model version 5 (CAM5) is used to examine the impact of the grid-scale and subgrid-scale dynamic components on simulated precipitation and cloud fields associated with a mesoscale convective system. It is found that grid-scale size impacts simulated precipitation, resulting in an overestimation for grid scales of about 200 km but an underestimation for smaller grids. The subgrid-scale dynamic component has an appreciable impact on the simulations, suggesting that grid-scale and subgrid-scale dynamic components should be considered in the interpretation of SCM simulations.

20. Development of the large-bore powder gun for the Nevada Test Site

    SciTech Connect (OSTI)

    Jensen, Brian J; Esparza, James S

    2009-01-01

Plate-impact experiments on single stage guns provide very planar loading conditions suitable for studying complex phenomena such as phase transitions and material strength, and provide important data useful for constraining and validating predictive models. The objective of the current work was to develop a large-bore (3.5-inches or greater) powder gun capable of accelerating projectiles to moderately high velocities (greater than 2.25 km/s) for impact experiments at the Nevada Test Site. This gun will span a performance gap between existing gun facilities and provide a means of examining phenomena over a wide range of stresses and time-scales. Advantages of the large-bore gun include the capability to load multiple samples simultaneously, the use of large diameter samples that significantly extend the time duration of the experiment, and minimal tilt (no bow). This new capability required the development of a disposable confinement system that used an explosively driven closure method to prevent contamination from moving up into the gun system. Experimental results for both the gun system and the explosive valve are presented.

  1. DEVELOPMENT OF THE LARGE-BORE POWDER GUN FOR THE NEVADA TEST SITE

    SciTech Connect (OSTI)

    Jensen, B.J.; Esparza, J.

    2009-12-28

Plate-impact experiments on single stage guns provide very planar loading conditions suitable for studying complex phenomena such as phase transitions and material strength, and provide important data useful for constraining and validating predictive models. The objective of the current work was to develop a large-bore (3.5'' or greater) powder gun capable of accelerating projectiles to moderately high velocities (greater than 2.25 km/s) for impact experiments at the Nevada Test Site. This gun will span a performance gap between existing gun facilities and provide a means of examining phenomena over a wide range of stresses and time-scales. Advantages of the large-bore gun include the capability to load multiple samples simultaneously, the use of large diameter samples that significantly extend the time duration of the experiment, and minimal tilt (no bow). This new capability required the development of a disposable confinement system that used an explosively driven closure method to prevent contamination from moving up into the gun system. Experimental results for both the gun system and the explosive valve are presented.

  2. Primordial non-Gaussianity in the bispectra of large-scale structure

    SciTech Connect (OSTI)

    Tasinato, Gianmassimo; Tellarini, Matteo; Ross, Ashley J.; Wands, David E-mail: matteo.tellarini@port.ac.uk E-mail: david.wands@port.ac.uk

    2014-03-01

The statistics of large-scale structure in the Universe can be used to probe non-Gaussianity of the primordial density field, complementary to existing constraints from the cosmic microwave background. In particular, the scale dependence of halo bias, which affects the halo distribution at large scales, represents a promising tool for analyzing primordial non-Gaussianity of the local form. Future observations, for example, may be able to constrain the trispectrum parameter g{sub NL}, which is difficult to study and constrain using the CMB alone. We investigate how galaxy and matter bispectra can distinguish between the two non-Gaussian parameters f{sub NL} and g{sub NL}, whose effects give nearly degenerate contributions to the power spectra. We use a generalization of the univariate bias approach, making the hypothesis that the number density of halos forming at a given position is a function of the local matter density contrast and of its local higher-order statistics. Using this approach, we calculate the halo-matter bispectra and analyze their properties. We determine a connection between the sign of the halo bispectrum on large scales and the parameter g{sub NL}. We also construct a combination of halo and matter bispectra that is sensitive to f{sub NL}, with little contamination from g{sub NL}. We study the cases of both single and multiple sources contributing to the primordial gravitational potential, discussing how to extend the concept of stochastic halo bias to the case of bispectra. We use a specific halo mass function to numerically calculate the bispectra in appropriate squeezed limits, confirming our theoretical findings.

  3. LARGE-SCALE HYDROGEN PRODUCTION FROM NUCLEAR ENERGY USING HIGH TEMPERATURE ELECTROLYSIS

    SciTech Connect (OSTI)

    James E. O'Brien

    2010-08-01

Hydrogen can be produced from water splitting with relatively high efficiency using high-temperature electrolysis. This technology makes use of solid-oxide cells, running in the electrolysis mode to produce hydrogen from steam, while consuming electricity and high-temperature process heat. When coupled to an advanced high temperature nuclear reactor, the overall thermal-to-hydrogen efficiency for high-temperature electrolysis can be as high as 50%, which is about double the overall efficiency of conventional low-temperature electrolysis. Current large-scale hydrogen production is based almost exclusively on steam reforming of methane, a method that consumes a precious fossil fuel while emitting carbon dioxide to the atmosphere. Demand for hydrogen is increasing rapidly for refining of increasingly low-grade petroleum resources, such as the Athabasca oil sands, and for ammonia-based fertilizer production. Large quantities of hydrogen are also required for carbon-efficient conversion of biomass to liquid fuels. With supplemental nuclear hydrogen, almost all of the carbon in the biomass can be converted to liquid fuels in a nearly carbon-neutral fashion. Ultimately, hydrogen may be employed as a direct transportation fuel in a hydrogen economy. The large quantity of hydrogen that would be required for this concept should be produced without consuming fossil fuels or emitting greenhouse gases. An overview of the high-temperature electrolysis technology will be presented, including basic theory, modeling, and experimental activities. Modeling activities include both computational fluid dynamics and large-scale systems analysis. We have also demonstrated high-temperature electrolysis in our laboratory at the 15 kW scale, achieving a hydrogen production rate in excess of 5500 L/hr.
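
    As a rough consistency check of the laboratory figures quoted above, the sketch below compares the electrical energy consumed per normal cubic meter of hydrogen with the fuel's lower heating value. The LHV figure and the assumption that the reported rate is at normal conditions are not taken from the abstract, so this is an order-of-magnitude illustration only.

      # Back-of-the-envelope check of the quoted 15 kW / >5500 L/hr figures.
      # Values marked "assumed" are not from the abstract.

      power_kw = 15.0              # nominal electrical scale quoted in the abstract
      h2_rate_nm3_per_h = 5.5      # >5500 L/hr, assumed to be at normal conditions
      lhv_h2_kwh_per_nm3 = 3.00    # assumed lower heating value of hydrogen

      electric_kwh_per_nm3 = power_kw / h2_rate_nm3_per_h
      electric_only_ratio = lhv_h2_kwh_per_nm3 / electric_kwh_per_nm3

      print(f"Electrical input: {electric_kwh_per_nm3:.2f} kWh per Nm3 of H2")
      print(f"LHV / electrical input: {electric_only_ratio:.2f}")
      # If the quoted figures are taken at face value, a ratio above 1 is possible
      # only because part of the water-splitting energy is supplied as
      # high-temperature process heat rather than electricity.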

  4. On the possible origin of the large scale cosmic magnetic field

    SciTech Connect (OSTI)

    Coroniti, F. V.

    2014-01-10

The possibility that the large scale cosmic magnetic field is directly generated at microgauss, equipartition levels during the reionization epoch by collisionless shocks that are forced to satisfy a downstream shear flow boundary condition is investigated through the development of two models: the accretion of an ionized plasma onto a weakly ionized cool galactic disk and onto a cool filament of the cosmic web. The dynamical structure and the physical parameters of the models are synthesized from recent cosmological simulations of the early reionization era after the formation of the first stars. The collisionless shock stands upstream of the disk and filament, and its dissipation is determined by ion inertial length Weibel turbulence. The downstream shear boundary condition is determined by the rotational neutral gas flow in the disk and the inward accretion flow along the filament. The shocked plasma is accelerated to the downstream shear flow velocity by the Weibel turbulence, and the relative shearing motion between the electrons and ions produces a strong, ion inertial scale current sheet that generates an equipartition strength, large scale downstream magnetic field, {approx}10{sup -6} G for the disk and {approx}6 x 10{sup -8} G for the filament. By assumption, hydrodynamic turbulence transports the shear-shock generated magnetic flux throughout the disk and filament volume.

  5. Large scale synthesis of nanostructured zirconia-based compounds from freeze-dried precursors

    SciTech Connect (OSTI)

    Gomez, A.; Villanueva, R.; Vie, D.; Murcia-Mascaros, S.; Martinez, E.; Beltran, A.; Sapina, F.; Vicent, M.; Sanchez, E.

    2013-01-15

Nanocrystalline zirconia powders have been obtained at the multigram scale by thermal decomposition of precursors resulting from the freeze-drying of aqueous acetic solutions. This technique has likewise made it possible to synthesize a variety of nanostructured yttria- or scandia-doped zirconia compositions. SEM images, as well as the analysis of the XRD patterns, show the nanoparticulate character of those solids obtained at low temperature, with typical particle size in the 10-15 nm range when prepared at 673 K. The presence of the monoclinic phase, the tetragonal phase, or both depends on the temperature of the thermal treatment, the doping concentration, and the nature of the dopant. In addition, Rietveld refinement of the XRD profiles of selected samples allows detecting the coexistence of the tetragonal and cubic phases for high doping concentrations and high thermal treatment temperatures. Raman experiments suggest the presence of both phases also at relatively low treatment temperatures. Graphical abstract: Zr{sub 1-x}A{sub x}O{sub 2-x/2} (A=Y, Sc; 0{<=}x{<=}0.12) solid solutions have been prepared as nanostructured powders by thermal decomposition of precursors obtained by freeze-drying, and this synthetic procedure has been scaled up to the 100 g scale. Highlights: Zr{sub 1-x}A{sub x}O{sub 2-x/2} (A=Y, Sc; 0{<=}x{<=}0.12) solid solutions have been prepared as nanostructured powders. The synthetic method involves the thermal decomposition of precursors obtained by freeze-drying. The temperature of the thermal treatment controls particle sizes. The preparation procedure has been scaled up to the 100 g scale. This method is appropriate for the large-scale industrial preparation of multimetallic systems.

  6. APPLICATIONS OF CFD METHOD TO GAS MIXING ANALYSIS IN A LARGE-SCALED TANK

    SciTech Connect (OSTI)

    Lee, S; Richard Dimenna, R

    2007-03-19

The computational fluid dynamics (CFD) modeling technique was applied to the estimation of the maximum benzene concentration in the vapor space of a large-scale, high-level radioactive waste tank at the Savannah River Site (SRS). The objective of the work was to perform the calculations for the benzene mixing behavior in the vapor space of Tank 48 and its impact on the local concentration of benzene. The calculations were used to evaluate the degree to which purge air mixes with benzene evolving from the liquid surface and its ability to prevent an unacceptable concentration of benzene from forming. The analysis was focused on changing the tank operating conditions to establish internal recirculation and changing the benzene evolution rate from the liquid surface. The model used three-dimensional momentum equations coupled with multi-species transport. The calculations included potential operating conditions for air inlet and exhaust flows, recirculation flow rate, and benzene evolution rate with prototypic tank geometry. The flow conditions are assumed to be fully turbulent since Reynolds numbers for typical operating conditions are in the range of 20,000 to 70,000 based on the inlet conditions of the air purge system. A standard two-equation turbulence model was used. The modeling results for typical gas mixing problems available in the literature were compared and verified through comparisons with the test results. The benchmarking results showed that the predictions are in good agreement with the analytical solutions and literature data. Additional sensitivity calculations included a reduced benzene evolution rate, reduced air inlet and exhaust flow, and forced internal recirculation. The modeling results showed that the vapor space was fairly well mixed and that benzene concentrations were relatively low when forced recirculation and 72 cfm of ventilation air through the tank boundary were imposed. For the same 72 cfm air inlet flow but without forced recirculation, the heavier benzene gas was stratified. The results demonstrated that benzene concentrations were relatively low for typical operating configurations and conditions. Detailed results and the cases considered in the calculations are discussed here.
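
    The fully turbulent assumption above rests on an inlet Reynolds number estimate. A minimal sketch of such an estimate follows; the duct diameter and velocity are hypothetical placeholders, and only the quoted 20,000-70,000 range comes from the abstract.

      # Minimal inlet Reynolds-number estimate, Re = rho * V * D / mu.
      # Duct diameter and velocity are hypothetical, not the Tank 48 values.

      rho_air = 1.2         # kg/m3, air near ambient conditions (assumed)
      mu_air = 1.8e-5       # Pa*s, dynamic viscosity of air (assumed)
      duct_diameter = 0.10  # m, hypothetical purge-air inlet diameter
      velocity = 5.0        # m/s, hypothetical inlet velocity

      reynolds = rho_air * velocity * duct_diameter / mu_air
      print(f"Re = {reynolds:,.0f}")  # well above ~4,000, i.e. treated as fully turbulent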

  7. Large Wind Turbine Blade Test Facilities to be in Mass., Texas...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Large Wind Turbine Blade Test Facilities to be in Mass., Texas Access to waterways key; ... build and operate new facilities to test the next generation of giant wind turbine blades. ...

  8. Testing the suitability of geologic frameworks for extrapolating hydraulic properties across regional scales

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Mirus, Benjamin B.; Halford, Keith J.; Sweetkind, Donald; Fenelon, Joseph M.

    2016-02-18

The suitability of geologic frameworks for extrapolating hydraulic conductivity (K) to length scales commensurate with hydraulic data is difficult to assess. A novel method is presented for evaluating assumed relations between K and geologic interpretations for regional-scale groundwater modeling. The approach relies on simultaneous interpretation of multiple aquifer tests using alternative geologic frameworks of variable complexity, where each framework is incorporated as prior information that assumes homogeneous K within each model unit. This approach is tested at Pahute Mesa within the Nevada National Security Site (USA), where observed drawdowns from eight aquifer tests in complex, highly faulted volcanic rocks provide the necessary hydraulic constraints. The investigated volume encompasses 40 mi{sup 3} (167 km{sup 3}) where drawdowns traversed major fault structures and were detected more than 2 mi (3.2 km) from pumping wells. Complexity of the five frameworks assessed ranges from an undifferentiated mass of rock with a single unit to 14 distinct geologic units. Results show that only four geologic units can be justified as hydraulically unique for this location. The approach qualitatively evaluates the consistency of hydraulic property estimates within extents of investigation and effects of geologic frameworks on extrapolation. Distributions of transmissivity are similar within the investigated extents irrespective of the geologic framework. In contrast, the extrapolation of hydraulic properties beyond the volume investigated with interfering aquifer tests is strongly affected by the complexity of a given framework. As a result, testing at Pahute Mesa illustrates how this method can be employed to determine the appropriate level of geologic complexity for large-scale groundwater modeling.

  9. Analysis of Soluble Re Concentrations in Refractory from Bulk Vitrification Full-Scale Test 38B

    SciTech Connect (OSTI)

    Cooley, Scott K.; Pierce, Eric M.; Bagaasen, Larry M.; Schweiger, Michael J.

    2006-06-30

The capacity of the waste treatment plant (WTP) being built at the Hanford Site is not sufficient to process all of the tank waste accumulated from more than 40 years of nuclear materials production. Bulk vitrification can accelerate tank waste treatment by providing some supplemental low-activity waste (LAW) treatment capacity. Bulk vitrification combines LAW and glass-forming chemicals in a large metal container and melts the contents using electrical resistance heating. A castable refractory block (CRB) is used along with sand to insulate the container from the heat generated while melting the contents into a glass waste form. This report describes engineering-scale (ES) and full-scale (FS) tests that have been conducted. Several ES tests showed that a small fraction of soluble Tc moves in the CRB and results in a groundwater peak different than WTP glass. The total soluble Tc-99 fraction in the FS CRB is expected to be different than that determined in the ES tests, but until FS test results are available, the best-estimate soluble Tc-99 fraction from the ES tests has been used as a conservative estimate. The first FS test results are from cold simulant tests that have been spiked with Re. An estimated scale-up factor extrapolates the Tc-99 data collected at the ES to the FS bulk vitrification waste package. Test FS-38A tested the refractory design and did not have a Re spike. Samples were taken and analyzed to help determine Re CRB background concentrations using a Re-spiked, six-tank composite simulant mixed with soil and glass formers to produce the waste feed. Although this feed is not physically the same as the Demonstration Bulk Vitrification System feed, the chemical make-up is the same. Extensive sampling of the CRB was planned, but difficulties with the test prevented completion of a full box. An abbreviated plan is described that looks at duplicate samples taken from refractory archive sections, a lower wall sample, and two base samples to gain early information about Re and projected Tc-99 levels in the FS box.

  10. Harvey Wasserman! Large Scale Computing and Storage Requirements for High Energy Physics

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

Harvey Wasserman: Large Scale Computing and Storage Requirements for High Energy Physics Research: Target 2017. Meeting goals and process, December 3, 2012. Logistics and schedule: agenda on the workshop web page (http://www.nersc.gov/science/requirements/HEP); mid-morning and afternoon breaks, lunch; self-organization for dinner. Multiple science areas, one workshop: science-focused but crosscutting discussion to explore areas of common need within HEP.

  11. NREL Offers an Open-Source Solution for Large-Scale Energy Data Collection

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

NREL Offers an Open-Source Solution for Large-Scale Energy Data Collection and Analysis, June 18, 2013. The Energy Department's National Renewable Energy Laboratory (NREL) is launching an open-source system for storing, integrating, and aligning energy-related time-series data. NREL's Energy DataBus is used for tracking and analyzing energy use on its own campus. The system is applicable to other facilities - including anything from a single building to a ...

  12. Networks of silicon nanowires: A large-scale atomistic electronic structure analysis

    SciTech Connect (OSTI)

Keleş, Ümit; Bulutay, Ceyhun; Liedke, Bartosz; Heinig, Karl-Heinz

    2013-11-11

Networks of silicon nanowires possess intriguing electronic properties surpassing the predictions based on quantum confinement of individual nanowires. Employing large-scale atomistic pseudopotential computations, as yet unexplored branched nanostructures are investigated at the subsystem level as well as in full assembly. The end product is a simple but versatile expression for the bandgap and band edge alignments of multiply-crossing Si nanowires for various diameters, numbers of crossings, and wire orientations. Further progress along this line can potentially topple the bottom-up approach for Si nanowire networks to a top-down design by starting with functionality and leading to an enabling structure.

  13. Materials Science and Materials Chemistry for Large Scale Electrochemical Energy Storage: From Transportation to Electrical Grid

    SciTech Connect (OSTI)

    Liu, Jun; Zhang, Jiguang; Yang, Zhenguo; Lemmon, John P.; Imhoff, Carl H.; Graff, Gordon L.; Li, Liyu; Hu, Jian Z.; Wang, Chong M.; Xiao, Jie; Xia, Guanguang; Viswanathan, Vilayanur V.; Baskaran, Suresh; Sprenkle, Vincent L.; Li, Xiaolin; Shao, Yuyan; Schwenzer, Birgit

    2013-02-15

Large-scale electrical energy storage has become more important than ever for reducing fossil energy consumption in transportation and for the widespread deployment of intermittent renewable energy in the electric grid. However, significant challenges exist for its applications. Here, the status and challenges are reviewed from the perspective of materials science and materials chemistry in electrochemical energy storage technologies, such as Li-ion batteries, sodium (sulfur and metal halide) batteries, Pb-acid batteries, redox flow batteries, and supercapacitors. Perspectives and approaches are introduced for emerging battery designs and new chemistry combinations to reduce the cost of energy storage devices.

  14. Laboratory measurements of large-scale carbon sequestration flows in saline reservoirs

    SciTech Connect (OSTI)

    Backhaus, Scott N

    2010-01-01

Brine saturated with CO{sub 2} is slightly denser than the original brine, causing it to sink to the bottom of a saline reservoir where the CO{sub 2} is safely sequestered. However, the buoyancy of pure CO{sub 2} relative to brine drives it to the top of the reservoir, where it collects underneath the cap rock as a separate phase of supercritical fluid. Without additional processes to mix the brine and CO{sub 2}, diffusion in this geometry is slow and would require an unacceptably long time to consume the pure CO{sub 2}. However, gravity- and diffusion-driven convective instabilities have been hypothesized that generate enhanced CO{sub 2}-brine mixing, promoting dissolution of CO{sub 2} into the brine on time scales of a hundred years. These flows involve a class of hydrodynamic problems that are notoriously difficult to simulate: the simultaneous flow of multiple fluids (CO{sub 2} and brine) in porous media (rock or sediment). The hope for direct experimental confirmation of simulations is dim due to the difficulty of obtaining high resolution data from the subsurface and the high pressures ({approx}100 bar), long length scales ({approx}100 meters), and long time scales ({approx}100 years) that are characteristic of these flows. We have performed imaging and mass transfer measurements in similitude-scaled laboratory experiments that provide benchmarks to test reservoir simulation codes and enhance their predictive power.

  15. FAST Code Verification of Scaling Laws for DeepCwind Floating Wind System Tests: Preprint

    SciTech Connect (OSTI)

    Jain, A.; Robertson, A. N.; Jonkman, J. M.; Goupee, A. J.; Kimball, R. W.; Swift, A. H. P.

    2012-04-01

    This paper investigates scaling laws that were adopted for the DeepCwind project for testing three different floating wind systems at 1/50 scale in a wave tank under combined wind and wave loading.
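
    Wave-tank campaigns of this kind conventionally rely on Froude similitude. Assuming that convention (the paper itself should be consulted for the exact laws it adopts), the sketch below shows how model-scale quantities map to full scale at a 1/50 geometric scale.

      # Froude-scaling sketch for a 1/50 scale model. The scaling exponents assume
      # conventional Froude similitude with the same fluid density at both scales;
      # the measured model value below is a hypothetical example.

      lam = 50.0  # full-scale length / model-scale length

      factors = {
          "length":   lam,
          "time":     lam ** 0.5,
          "velocity": lam ** 0.5,
          "force":    lam ** 3,
          "power":    lam ** 3.5,
      }

      model_wave_period_s = 1.5  # hypothetical period measured in the wave tank
      full_scale_period_s = model_wave_period_s * factors["time"]
      print(f"full-scale wave period ~ {full_scale_period_s:.1f} s")
      for quantity, factor in factors.items():
          print(f"{quantity:>9}: x{factor:,.1f}")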

  16. Detecting and mitigating abnormal events in large scale networks: budget constrained placement on smart grids

    SciTech Connect (OSTI)

    Santhi, Nandakishore; Pan, Feng

    2010-10-19

    Several scenarios exist in the modern interconnected world which call for an efficient network interdiction algorithm. Applications are varied, including various monitoring and load shedding applications on large smart energy grids, computer network security, preventing the spread of Internet worms and malware, policing international smuggling networks, and controlling the spread of diseases. In this paper we consider some natural network optimization questions related to the budget constrained interdiction problem over general graphs, specifically focusing on the sensor/switch placement problem for large-scale energy grids. Many of these questions turn out to be computationally hard to tackle. We present a particular form of the interdiction question which is practically relevant and which we show as computationally tractable. A polynomial-time algorithm will be presented for solving this problem.
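
    To make the problem setup concrete, the sketch below solves a toy budget-constrained placement instance with a greedy cost-benefit heuristic. This is only an illustration of the problem structure, not the polynomial-time algorithm presented in the paper, and the graph, costs, and budget are invented.

      # Toy budget-constrained monitor placement on a graph: pick nodes, each with
      # a cost, to cover as many edges as possible within the budget. Greedy
      # benefit-per-cost heuristic for illustration only.

      edges = [(0, 1), (1, 2), (2, 3), (3, 0), (1, 3), (2, 4)]
      cost = {0: 2.0, 1: 3.0, 2: 2.5, 3: 1.5, 4: 1.0}
      budget = 5.0

      covered, chosen, spent = set(), [], 0.0
      while True:
          best, best_gain = None, 0.0
          for v, c in cost.items():
              if v in chosen or spent + c > budget:
                  continue
              gain = sum(1 for e in edges if v in e and e not in covered) / c
              if gain > best_gain:
                  best, best_gain = v, gain
          if best is None:
              break
          chosen.append(best)
          spent += cost[best]
          covered |= {e for e in edges if best in e}

      print("monitors at nodes:", chosen,
            "covering", len(covered), "of", len(edges), "edges")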

  17. Physical control oriented model of large scale refrigerators to synthesize advanced control schemes. Design, validation, and first control results

    SciTech Connect (OSTI)

    Bonne, François; Bonnay, Patrick

    2014-01-29

In this paper, a physical method to obtain control-oriented dynamical models of large scale cryogenic refrigerators is proposed, in order to synthesize model-based advanced control schemes. These schemes aim to replace the classical approaches designed from user experience, usually based on many independent PI controllers. This is particularly useful in the case where cryoplants are subjected to large pulsed thermal loads, expected to take place in the cryogenic cooling systems of future fusion reactors such as the International Thermonuclear Experimental Reactor (ITER) or the Japan Torus-60 Super Advanced Fusion Experiment (JT-60SA). Advanced control schemes lead to better perturbation immunity and rejection, and thus offer a safer utilization of cryoplants. The paper gives details on how basic components used in the field of large scale helium refrigeration (especially those present on the 400W @1.8K helium test facility at CEA-Grenoble) are modeled and assembled to obtain the complete dynamic description of controllable subsystems of the refrigerator (the controllable subsystems are namely the Joule-Thomson Cycle, the Brayton Cycle, the Liquid Nitrogen Precooling Unit and the Warm Compression Station). The complete 400W @1.8K (in the 400W @4.4K configuration) helium test facility model is then validated against experimental data, and the optimal control of both the Joule-Thomson valve and the turbine valve is proposed, to stabilize the plant under highly variable thermal loads. This work is partially supported through the European Fusion Development Agreement (EFDA) Goal Oriented Training Program, task agreement WP10-GOT-GIRO.

  18. DOE’s New Large Blade Test Facility in Massachusetts Completes First Commercial Blade Tests

    Office of Energy Efficiency and Renewable Energy (EERE)

    Since opening its doors for business in May, the Wind Technology Testing Center (WTTC), in Boston, Massachusetts, has come up to full speed testing the long wind turbine blades produced for today's larger wind turbines.

  19. A High-Performance Rechargeable Iron Electrode for Large-Scale Battery-Based Energy Storage

    SciTech Connect (OSTI)

    Manohar, AK; Malkhandi, S; Yang, B; Yang, C; Prakash, GKS; Narayanan, SR

    2012-01-01

    Inexpensive, robust and efficient large-scale electrical energy storage systems are vital to the utilization of electricity generated from solar and wind resources. In this regard, the low cost, robustness, and eco-friendliness of aqueous iron-based rechargeable batteries are particularly attractive and compelling. However, wasteful evolution of hydrogen during charging and the inability to discharge at high rates have limited the deployment of iron-based aqueous batteries. We report here new chemical formulations of the rechargeable iron battery electrode to achieve a ten-fold reduction in the hydrogen evolution rate, an unprecedented charging efficiency of 96%, a high specific capacity of 0.3 Ah/g, and a twenty-fold increase in discharge rate capability. We show that modifying high-purity carbonyl iron by in situ electro-deposition of bismuth leads to substantial inhibition of the kinetics of the hydrogen evolution reaction. The in situ formation of conductive iron sulfides mitigates the passivation by iron hydroxide thereby allowing high discharge rates and high specific capacity to be simultaneously achieved. These major performance improvements are crucial to advancing the prospect of a sustainable large-scale energy storage solution based on aqueous iron-based rechargeable batteries. (C) 2012 The Electrochemical Society. [DOI: 10.1149/2.034208jes] All rights reserved.

  20. Environmental performance evaluation of large-scale municipal solid waste incinerators using data envelopment analysis

    SciTech Connect (OSTI)

    Chen, H.-W.; Chang, N.-B.; Chen, J.-C.; Tsai, S.-J.

    2010-07-15

Limited by insufficient land resources, incinerators are considered in many countries, such as Japan and Germany, as the major technology for a waste management scheme capable of dealing with the increasing demand for municipal and industrial solid waste treatment in urban regions. The evaluation of these municipal incinerators in terms of secondary pollution potential, cost-effectiveness, and operational efficiency has become a new focus in the highly interdisciplinary area of production economics, systems analysis, and waste management. This paper aims to demonstrate the application of data envelopment analysis (DEA) - a production economics tool - to evaluate performance-based efficiencies of 19 large-scale municipal incinerators in Taiwan with different operational conditions. A 4-year operational data set from 2002 to 2005 was collected in support of DEA modeling using Monte Carlo simulation to outline the possibility distributions of operational efficiency of these incinerators. Uncertainty analysis using the Monte Carlo simulation provides a balance between simplification of our analysis and the soundness of capturing the essential random features that complicate solid waste management systems. To cope with future challenges, efforts in DEA modeling, systems analysis, and prediction of the performance of large-scale municipal solid waste incinerators under normal operation and special conditions were directed toward generating a compromise assessment procedure. Our research findings will eventually lead to the identification of the optimal management strategies for promoting the quality of solid waste incineration, not only in Taiwan, but also elsewhere in the world.
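
    For each incinerator (decision-making unit), data envelopment analysis reduces to a small linear program. A minimal sketch of the standard input-oriented, constant-returns-to-scale (CCR) formulation follows, using invented inputs and outputs rather than the Taiwanese operational data, and not reproducing the paper's Monte Carlo treatment.

      # Input-oriented CCR DEA efficiency for each unit, solved as a linear program.
      # Toy data: rows are units; inputs might be labor and operating cost,
      # outputs waste treated and electricity sold. Numbers are invented.
      import numpy as np
      from scipy.optimize import linprog

      X = np.array([[5.0, 30.0], [4.0, 25.0], [6.0, 40.0]])   # inputs
      Y = np.array([[100.0, 8.0], [90.0, 9.0], [120.0, 7.0]])  # outputs

      def ccr_efficiency(o):
          """min theta s.t. X.T @ lam <= theta * X[o], Y.T @ lam >= Y[o], lam >= 0."""
          n = X.shape[0]
          c = np.r_[1.0, np.zeros(n)]                    # minimize theta
          A_in = np.c_[-X[o], X.T]                       # input constraints
          A_out = np.c_[np.zeros(Y.shape[1]), -Y.T]      # output constraints
          A_ub = np.vstack([A_in, A_out])
          b_ub = np.r_[np.zeros(X.shape[1]), -Y[o]]
          res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                        bounds=[(None, None)] + [(0, None)] * n)
          return res.fun

      for o in range(X.shape[0]):
          print(f"unit {o}: efficiency = {ccr_efficiency(o):.3f}")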

  1. Large Scale Ice Water Path and 3-D Ice Water Content

    DOE Data Explorer [Office of Scientific and Technical Information (OSTI)]

    Liu, Guosheng

    2008-01-15

Cloud ice water concentration is one of the most important, yet poorly observed, cloud properties. Developing physical parameterizations used in general circulation models through single-column modeling is one of the key foci of the ARM program. In addition to the vertical profiles of temperature, water vapor, and condensed water at the model grids, large-scale horizontal advective tendencies of these variables are also required as forcing terms in the single-column models. Observed horizontal advection of condensed water has not been available because the radar/lidar/radiometer observations at the ARM site are single-point measurements and therefore do not provide the horizontal distribution of condensed water. The intention of this product is to provide the large-scale distribution of cloud ice water by merging available surface and satellite measurements. The satellite cloud ice water algorithm uses ARM ground-based measurements as a baseline and produces datasets for 3-D cloud ice water distributions in a 10 deg x 10 deg area near the ARM site. The approach of the study is to expand a (surface) point measurement to a (satellite) areal measurement. That is, this study takes advantage of the high-quality cloud measurements at the ARM site. We use the cloud characteristics derived from the point measurement to guide and constrain the satellite retrieval, then use the satellite algorithm to derive the cloud ice water distributions within an area, i.e., 10 deg x 10 deg centered at the ARM site.

  2. Implementation of a multi-threaded framework for large-scale scientific applications

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Sexton-Kennedy, E.; Gartung, Patrick; Jones, C. D.; Lange, David

    2015-05-22

The CMS experiment has recently completed the development of a multi-threaded capable application framework. In this paper, we will discuss the design, implementation and application of this framework to production applications in CMS. For the 2015 LHC run, this functionality is particularly critical for both our online and offline production applications, which depend on faster turn-around times and a reduced memory footprint relative to before. These applications are complex codes, each including a large number of physics-driven algorithms. While the framework is capable of running a mix of thread-safe and 'legacy' modules, algorithms running in our production applications need to be thread-safe for optimal use of this multi-threaded framework at a large scale. Towards this end, we discuss the types of changes which were necessary for our algorithms to achieve good performance of our multithreaded applications in a full-scale application. Lastly, performance numbers for what has been achieved for the 2015 run are presented.

  3. Large Scale Ice Water Path and 3-D Ice Water Content

    DOE Data Explorer [Office of Scientific and Technical Information (OSTI)]

    Liu, Guosheng

Cloud ice water concentration is one of the most important, yet poorly observed, cloud properties. Developing physical parameterizations used in general circulation models through single-column modeling is one of the key foci of the ARM program. In addition to the vertical profiles of temperature, water vapor, and condensed water at the model grids, large-scale horizontal advective tendencies of these variables are also required as forcing terms in the single-column models. Observed horizontal advection of condensed water has not been available because the radar/lidar/radiometer observations at the ARM site are single-point measurements and therefore do not provide the horizontal distribution of condensed water. The intention of this product is to provide the large-scale distribution of cloud ice water by merging available surface and satellite measurements. The satellite cloud ice water algorithm uses ARM ground-based measurements as a baseline and produces datasets for 3-D cloud ice water distributions in a 10 deg x 10 deg area near the ARM site. The approach of the study is to expand a (surface) point measurement to a (satellite) areal measurement. That is, this study takes advantage of the high-quality cloud measurements at the ARM site. We use the cloud characteristics derived from the point measurement to guide and constrain the satellite retrieval, then use the satellite algorithm to derive the cloud ice water distributions within an area, i.e., 10 deg x 10 deg centered at the ARM site.

  4. Plasma turbulence driven by transversely large-scale standing shear Alfven waves

    SciTech Connect (OSTI)

    Singh, Nagendra; Rao, Sathyanarayan

    2012-12-15

Using two-dimensional particle-in-cell simulations, we study generation of turbulence consisting of transversely small-scale dispersive Alfven and electrostatic waves when plasma is driven by a large-scale standing shear Alfven wave (LS-SAW). The standing wave is set up by reflecting a propagating LS-SAW. The ponderomotive force of the standing wave generates transversely large-scale density modifications consisting of density cavities and enhancements. The drifts of the charged particles driven by the ponderomotive force and those directly caused by the fields of the standing LS-SAW generate non-thermal features in the plasma. Parametric instabilities driven by the inherent plasma nonlinearities associated with the LS-SAW in combination with the non-thermal features generate small-scale electromagnetic and electrostatic waves, yielding a broad frequency spectrum ranging from below the source frequency of the LS-SAW to ion cyclotron and lower hybrid frequencies and beyond. The power spectrum of the turbulence has peaks at distinct perpendicular wave numbers (k{sub ⊥}) lying in the range d{sub e}{sup -1}-6d{sub e}{sup -1}, d{sub e} being the electron inertial length, suggesting non-local parametric decay from small to large k{sub ⊥}. The turbulence spectrum encompassing both electromagnetic and electrostatic fluctuations is also broadband in parallel wave number (k{sub ||}). In a standing-wave supported density cavity, the ratio of the perpendicular electric to magnetic field amplitude is R(k{sub ⊥}) = |E{sub ⊥}(k{sub ⊥})|/|B{sub ⊥}(k{sub ⊥})| << V{sub A} for k{sub ⊥}d{sub e} < 0.5, where V{sub A} is the Alfven velocity. The characteristic features of the broadband plasma turbulence are compared with those available from satellite observations in space plasmas.

  5. Primordial Magnetic Field Effects on the CMB and Large-Scale Structure

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Yamazaki, Dai G.; Ichiki, Kiyotomo; Kajino, Toshitaka; Mathews, Grant J.

    2010-01-01

Magnetic fields are everywhere in nature, and they play an important role in every astronomical environment which involves the formation of plasma and currents. It is natural therefore to suppose that magnetic fields could be present in the turbulent high-temperature environment of the big bang. Such a primordial magnetic field (PMF) would be expected to manifest itself in the cosmic microwave background (CMB) temperature and polarization anisotropies, and also in the formation of large-scale structure. In this paper, we summarize the theoretical framework which we have developed to calculate the PMF power spectrum to high precision. Using this formulation, we summarize calculations of the effects of a PMF which take accurate quantitative account of the time evolution of the cutoff scale. We review the constructed numerical program, which is without approximation, and an improvement over the approach used in a number of previous works for studying the effect of the PMF on the cosmological perturbations. We demonstrate how the PMF is an important cosmological physical process on small scales. We also summarize the current constraints on the PMF amplitude B{sub λ} and the power spectral index n{sub B} which have been deduced from the available CMB observational data by using our computational framework.

  6. Combined Climate and Carbon-Cycle Effects of Large-Scale Deforestation

    SciTech Connect (OSTI)

    Bala, G; Caldeira, K; Wickett, M; Phillips, T J; Lobell, D B; Delire, C; Mirin, A

    2006-10-17

The prevention of deforestation and promotion of afforestation have often been cited as strategies to slow global warming. Deforestation releases CO{sub 2} to the atmosphere, which exerts a warming influence on Earth's climate. However, biophysical effects of deforestation, which include changes in land surface albedo, evapotranspiration, and cloud cover, also affect climate. Here we present results from several large-scale deforestation experiments performed with a three-dimensional coupled global carbon-cycle and climate model. These are the first such simulations performed using a fully three-dimensional model representing physical and biogeochemical interactions among land, atmosphere, and ocean. We find that global-scale deforestation has a net cooling influence on Earth's climate, since the warming carbon-cycle effects of deforestation are overwhelmed by the net cooling associated with changes in albedo and evapotranspiration. Latitude-specific deforestation experiments indicate that afforestation projects in the tropics would be clearly beneficial in mitigating global-scale warming, but would be counterproductive if implemented at high latitudes and would offer only marginal benefits in temperate regions. While these results question the efficacy of mid- and high-latitude afforestation projects for climate mitigation, forests remain environmentally valuable resources for many reasons unrelated to climate.

  7. Pilot-scale testing of paint-waste incineration. Final report

    SciTech Connect (OSTI)

    Not Available

    1989-07-01

Operations at the U.S. Army depots generate large quantities of paint removal and application wastes. These wastes, many of which are hazardous, are currently disposed of off site. Off-site disposal of solids is often by landfilling, which will be banned or highly restricted in the future. Several research activities have been initiated by USATHAMA to evaluate alternative technologies for management of paint wastes. The project described in this report involved pilot-scale incineration testing of two paint wastes: spent plastic blast media and spent agricultural blast media (ground walnut shells). The objective of this task was to continue development of incineration as an alternative treatment technology for paint wastes through pilot-scale rotary-kiln incineration testing. The results of the pilot test were evaluated to assess how the paint waste characteristics and incinerator operating conditions affected the following: characteristics of the ash residue, volume reduction achieved, destruction and removal efficiencies (DREs) for organic compounds, and characteristics of the stack gases.

  8. Calculation of large scale relative permeabilities from stochastic properties of the permeability field and fluid properties

    SciTech Connect (OSTI)

    Lenormand, R.; Thiele, M.R.

    1997-08-01

The paper describes the method and presents preliminary results for the calculation of homogenized relative permeabilities using stochastic properties of the permeability field. In heterogeneous media, the spreading of an injected fluid is mainly due to the permeability heterogeneity and viscous fingering. At large scale, when the heterogeneous medium is replaced by a homogeneous one, we need to introduce homogenized (or pseudo) relative permeabilities to obtain the same spreading. Generally, these pseudo relative permeabilities are derived by using fine-grid numerical simulations (Kyte and Berry). However, this operation is time consuming and cannot be performed for all the meshes of the reservoir. We propose an alternate method which uses the information given by the stochastic properties of the field without any numerical simulation. The method is based on recent developments on homogenized transport equations (the {open_quotes}MHD{close_quotes} equation, Lenormand SPE 30797). The MHD equation accounts for the three basic mechanisms of spreading of the injected fluid: (1) dispersive spreading due to small-scale randomness, characterized by a macrodispersion coefficient D; (2) convective spreading due to large-scale heterogeneities (layers), characterized by a heterogeneity factor H; (3) viscous fingering, characterized by an apparent viscosity ratio M. In the paper, we first derive the parameters D and H as functions of the variance and correlation length of the permeability field. The results are shown to be in good agreement with fine-grid simulations. The pseudo relative permeabilities are then derived as functions of D, H and M. The main result is that this approach leads to time-dependent pseudo relative permeabilities. Finally, the calculated pseudo relative permeabilities are compared to those derived by history matching using fine-grid numerical simulations.

9. LyMAS: Predicting large-scale Lyα forest statistics from the dark matter density field

    SciTech Connect (OSTI)

Peirani, Sébastien; Colombi, Stéphane; Dubois, Yohan; Pichon, Christophe; Weinberg, David H.; Blaizot, Jérémy

    2014-03-20

We describe the Lyα Mass Association Scheme (LyMAS), a method of predicting clustering statistics in the Lyα forest on large scales from moderate-resolution simulations of the dark matter (DM) distribution, with calibration from high-resolution hydrodynamic simulations of smaller volumes. We use the 'Horizon-MareNostrum' simulation, a 50 h{sup -1} Mpc comoving volume evolved with the adaptive mesh hydrodynamic code RAMSES, to compute the conditional probability distribution P(F{sub s}|δ{sub s}) of the transmitted flux F{sub s}, smoothed (one-dimensionally, 1D) over the spectral resolution scale, on the DM density contrast δ{sub s}, smoothed (three-dimensionally, 3D) over a similar scale. In this study we adopt the spectral resolution of the SDSS-III Baryon Oscillation Spectroscopic Survey (BOSS) at z = 2.5, and we find optimal results for a DM smoothing length of 0.3 h{sup -1} Mpc (comoving). In its simplest form, LyMAS draws randomly from the hydro-calibrated P(F{sub s}|δ{sub s}) to convert DM skewers into Lyα forest pseudo-spectra, which are then used to compute cross-sightline flux statistics. In extended form, LyMAS exactly reproduces both the 1D power spectrum and one-point flux distribution of the hydro simulation spectra. Applied to the MareNostrum DM field, LyMAS accurately predicts the two-point conditional flux distribution and flux correlation function of the full hydro simulation for transverse sightline separations as small as 1 h{sup -1} Mpc, including redshift-space distortion effects. It is substantially more accurate than a deterministic density-flux mapping (the 'Fluctuating Gunn-Peterson Approximation'), often used for large-volume simulations of the forest. With the MareNostrum calibration, we apply LyMAS to 1024{sup 3} N-body simulations of a 300 h{sup -1} Mpc and a 1.0 h{sup -1} Gpc cube to produce large, publicly available catalogs of mock BOSS spectra that probe a large comoving volume. LyMAS will be a powerful tool for interpreting 3D Lyα forest data, thereby transforming measurements from BOSS and other massive quasar absorption surveys into constraints on dark energy, DM, space geometry, and intergalactic medium physics.
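
    In its simplest form, as stated above, the scheme draws fluxes from a hydro-calibrated conditional distribution. The sketch below illustrates only that sampling step, with synthetic stand-in calibration pairs; it is not the LyMAS code, and the flux model used to fabricate the calibration data is arbitrary.

      # Schematic of sampling F from P(F_s | delta_s): bin calibration pairs in
      # delta and draw a flux from the matching bin for each DM pixel. The
      # "calibration" arrays are random stand-ins, not hydro-simulation output.
      import numpy as np

      rng = np.random.default_rng(0)

      delta_hydro = rng.normal(size=100_000)
      flux_hydro = np.clip(np.exp(-0.3 * np.exp(0.8 * delta_hydro))
                           + 0.05 * rng.normal(size=delta_hydro.size), 0.0, 1.0)

      bins = np.linspace(-4, 4, 41)
      which = np.digitize(delta_hydro, bins)
      flux_by_bin = {b: flux_hydro[which == b] for b in np.unique(which)}

      def draw_flux(delta_dm):
          """Draw a transmitted flux for each DM pixel from the binned P(F|delta)."""
          out = np.empty_like(delta_dm)
          for i, d in enumerate(np.atleast_1d(delta_dm)):
              pool = flux_by_bin.get(int(np.digitize(d, bins)), flux_hydro)
              out[i] = rng.choice(pool)
          return out

      dm_skewer = rng.normal(size=10)   # stand-in for an N-body density skewer
      print(draw_flux(dm_skewer))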

  10. MHK Projects/Wave Star Energy 1 10 Scale Model Test | Open Energy...

    Open Energy Info (EERE)


  11. SMALL-SCALE IMPACT SENSITIVITY TESTING ON EDC37

    SciTech Connect (OSTI)

    HSU, P C; HUST, G; MAIENSCHEIN, J L

    2008-04-28

    EDC37 was tested at LLNL to determine its impact sensitivity in the LLNL's drop hammer system. The results showed that impact sensitivities of the samples were between 86 cm and 156 cm, depending on test methods. EDC37 is a plastic bonded explosive consisting of 90% HMX, 1% nitrocellulose and binder. We recently conducted impact sensitivity testing in our drop hammer system and the results are presented in this report.

  12. FY results for the Los Alamos large scale demonstration and deployment project

    SciTech Connect (OSTI)

    Stallings, E.; McFee, J.

    2000-11-01

The Los Alamos Large Scale Demonstration and Deployment Project (LSDDP), in support of the US Department of Energy (DOE) Deactivation and Decommissioning Focus Area (DDFA), is identifying and demonstrating technologies to reduce the cost and risk of management of transuranic-element-contaminated large metal objects, i.e., gloveboxes. DOE must dispose of hundreds of gloveboxes from Rocky Flats, Los Alamos and other DOE sites. Current practices for removal, decontamination and size reduction of large metal objects translate to a DOE system-wide cost in excess of $800 million, without disposal costs. In FY99 and FY00 the Los Alamos LSDDP performed several demonstrations of cost/risk-saving technologies. Commercial air pallets were demonstrated for movement and positioning of the oversized crates in neutron counting equipment. The air pallets are able to cost effectively address the complete waste management inventory, whereas the baseline wheeled carts could address only 25% of the inventory with higher manpower costs. A gamma interrogation radiography technology was demonstrated to support characterization of the crates. The technology was developed for radiography of trucks for identification of contraband. The radiographs were extremely useful in guiding the selection and method for opening very large crated metal objects. The cost of the radiography was small and the operating benefit is high. Another demonstration compared a Blade Cutting Plunger and a reciprocating saw for removal of glovebox legs and appurtenances. The cost comparison showed that the Blade Cutting Plunger costs were comparable, and a significant safety advantage was reported. A second radiography demonstration was conducted to evaluate a technology based on WIPP-type x-ray characterization of large boxes. This technology provides considerable detail of the contents of the crates. The technology identified details as small as the fasteners in the crates, an unpunctured aerosol can, and a vessel containing liquids. The cost of this technology is higher than the gamma interrogation technique, but the detail provided is much greater.

  13. Field Testing of a Wet FGD Additive for Enhanced Mercury Control - Task 3 Full-scale Test Results

    SciTech Connect (OSTI)

    Gary Blythe

    2007-05-01

    This Topical Report summarizes progress on Cooperative Agreement DE-FC26-04NT42309, 'Field Testing of a Wet FGD Additive'. The objective of the project is to demonstrate the use of a flue gas desulfurization (FGD) additive, Degussa Corporation's TMT-15, to prevent the reemission of elemental mercury (Hg{sup 0}) in flue gas exiting wet FGD systems on coal-fired boilers. Furthermore, the project intends to demonstrate whether the additive can be used to precipitate most of the mercury (Hg) removed in the wet FGD system as a fine TMT salt that can be separated from the FGD liquor and bulk solid byproducts for separate disposal. The project is conducting pilot- and full-scale tests of the TMT-15 additive in wet FGD absorbers. The tests are intended to determine required additive dosages to prevent Hg{sup 0} reemissions and to separate mercury from the normal FGD byproducts for three coal types: Texas lignite/Power River Basin (PRB) coal blend, high-sulfur Eastern bituminous coal, and low-sulfur Eastern bituminous coal. The project team consists of URS Group, Inc., EPRI, TXU Generation Company LP, Southern Company, and Degussa Corporation. TXU Generation has provided the Texas lignite/PRB cofired test site for pilot FGD tests, Monticello Steam Electric Station Unit 3. Southern Company is providing the low-sulfur Eastern bituminous coal host site for wet scrubbing tests, as well as the pilot- and full-scale jet bubbling reactor (JBR) FGD systems to be tested. IPL, an AES company, provided the high-sulfur Eastern bituminous coal full-scale FGD test site and cost sharing. Degussa Corporation is providing the TMT-15 additive and technical support to the test program as cost sharing. The project is being conducted in six tasks. Of the six project tasks, Task 1 involves project planning and Task 6 involves management and reporting. The other four tasks involve field testing on FGD systems, either at pilot or full scale. The four tasks include: Task 2 - Pilot Additive Testing in Texas Lignite Flue Gas; Task 3 - Full-scale FGD Additive Testing in High-sulfur Eastern Bituminous Flue Gas; Task 4 - Pilot Wet Scrubber Additive Tests at Plant Yates; and Task 5 - Full-scale Additive Tests at Plant Yates. The pilot-scale tests were completed in 2005 and have been previously reported. This topical report presents the results from the Task 3 full-scale additive tests, conducted at IPL's Petersburg Station Unit 2. The Task 5 full-scale additive tests will be conducted later in calendar year 2007.

  14. Optimizing Cluster Heads for Energy Efficiency in Large-Scale Heterogeneous Wireless Sensor Networks

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Gu, Yi; Wu, Qishi; Rao, Nageswara S. V.

    2010-01-01

Many complex sensor network applications require deploying a large number of inexpensive and small sensors in a vast geographical region to achieve quality through quantity. Hierarchical clustering is generally considered as an efficient and scalable way to facilitate the management and operation of such large-scale networks and minimize the total energy consumption for prolonged lifetime. Judicious selection of cluster heads for data integration and communication is critical to the success of applications based on hierarchical sensor networks organized as layered clusters. We investigate the problem of selecting sensor nodes in a predeployed sensor network to be the cluster heads to minimize the total energy needed for data gathering. We rigorously derive an analytical formula to optimize the number of cluster heads in sensor networks under uniform node distribution, and propose a Distance-based Crowdedness Clustering algorithm to determine the cluster heads in sensor networks under general node distribution. The results from an extensive set of experiments on a large number of simulated sensor networks illustrate the performance superiority of the proposed solution over the clustering schemes based on the k-means algorithm.
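
    For orientation, the sketch below illustrates the k-means-style baseline that the paper compares against: cluster the node positions and take, in each cluster, the member nearest the centroid as cluster head. It is not the authors' Distance-based Crowdedness Clustering algorithm, and the node layout and number of clusters are invented.

      # k-means-style cluster-head selection on synthetic node positions.
      import numpy as np

      rng = np.random.default_rng(1)
      nodes = rng.uniform(0.0, 100.0, size=(200, 2))   # 200 sensors in a 100 m square
      k = 8                                            # assumed number of cluster heads

      # Plain Lloyd's algorithm.
      centroids = nodes[rng.choice(len(nodes), k, replace=False)]
      for _ in range(50):
          labels = np.argmin(np.linalg.norm(nodes[:, None] - centroids[None], axis=2), axis=1)
          new = np.array([nodes[labels == j].mean(axis=0) if np.any(labels == j)
                          else centroids[j] for j in range(k)])
          if np.allclose(new, centroids):
              break
          centroids = new

      # Cluster head = member node closest to its cluster centroid.
      heads = [int(np.where(labels == j)[0][
                  np.argmin(np.linalg.norm(nodes[labels == j] - centroids[j], axis=1))])
               for j in range(k) if np.any(labels == j)]
      print("cluster-head node indices:", heads)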

  15. Large-scale production of anhydrous nitric acid and nitric acid solutions of dinitrogen pentoxide

    DOE Patents [OSTI]

    Harrar, Jackson E.; Quong, Roland; Rigdon, Lester P.; McGuire, Raymond R.

    2001-01-01

    A method and apparatus are disclosed for a large scale, electrochemical production of anhydrous nitric acid and N.sub.2 O.sub.5. The method includes oxidizing a solution of N.sub.2 O.sub.4 /aqueous-HNO.sub.3 at the anode, while reducing aqueous HNO.sub.3 at the cathode, in a flow electrolyzer constructed of special materials. N.sub.2 O.sub.4 is produced at the cathode and may be separated and recycled as a feedstock for use in the anolyte. The process is controlled by regulating the electrolysis current until the desired products are obtained. The chemical compositions of the anolyte and catholyte are monitored by measurement of the solution density and the concentrations of N.sub.2 O.sub.4.

  16. Measurement of the large-scale anisotropy of the cosmic background radiation at 3mm

    SciTech Connect (OSTI)

    Epstein, G.L.

    1983-12-01

A balloon-borne differential radiometer has measured the large-scale anisotropy of the cosmic background radiation (CBR) with high sensitivity. The antenna temperature dipole anisotropy at 90 GHz (3 mm wavelength) is 2.82 +- 0.19 mK, corresponding to a thermodynamic anisotropy of 3.48 +- mK for a 2.7 K blackbody CBR. The dipole direction, 11.3 +- 0.1 hours right ascension and -5.7{sup o} +- 1.8{sup o} declination, agrees well with measurements at other frequencies. Calibration error dominates magnitude uncertainty, with statistical errors on dipole terms being under 0.1 mK. No significant quadrupole power is found, placing a 90% confidence-level upper limit of 0.27 mK on the RMS thermodynamic quadrupolar anisotropy. 22 figures, 17 tables.
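
    The relation between the antenna-temperature and thermodynamic dipole amplitudes quoted above is the standard Planck correction at 90 GHz. A short check, using textbook constants and the 2.7 K blackbody temperature assumed in the abstract:

      # Antenna temperature -> thermodynamic temperature via the Planck factor
      # x^2 e^x / (e^x - 1)^2 with x = h*nu / (k_B * T_CMB).
      import math

      h = 6.626e-34      # J s
      k_B = 1.381e-23    # J/K
      nu = 90e9          # Hz (3 mm wavelength)
      T_cmb = 2.7        # K, blackbody temperature assumed in the abstract

      x = h * nu / (k_B * T_cmb)
      planck_factor = x**2 * math.exp(x) / (math.exp(x) - 1.0) ** 2

      dT_antenna = 2.82e-3   # K, measured antenna-temperature dipole
      dT_thermo = dT_antenna / planck_factor
      print(f"x = {x:.2f}, factor = {planck_factor:.3f}, dT_thermo = {dT_thermo*1e3:.2f} mK")
      # ~3.48 mK, consistent with the thermodynamic dipole quoted in the abstract.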

  17. Overview of large scale experiments performed within the LBB project in the Czech Republic

    SciTech Connect (OSTI)

    Kadecka, P.; Lauerova, D.

    1997-04-01

During several recent years, NRI Rez has been performing LBB analyses of safety-significant primary circuit pipings of NPPs in the Czech and Slovak Republics. The analyses covered the NPPs with reactors WWER 440 Type 230 and 213 and WWER 1000 Type 320. Within the relevant LBB projects, undertaken with the aim of proving that the LBB requirements are fulfilled, a series of large scale experiments were performed. The goal of these experiments was to verify the properties of the components selected and to prove the quality and/or conservatism of the assessments used in the LBB analyses. In this poster, a brief overview of the experiments performed in the Czech Republic under the guidance of NRI Rez is presented.

  18. Drivers and barriers to e-invoicing adoption in Greek large scale manufacturing industries

    SciTech Connect (OSTI)

Marinagi, Catherine; Trivellas, Panagiotis (E-mail: ptrivel@yahoo.com); Reklitis, Panagiotis; Skourlas, Christos

    2015-02-09

This paper attempts to investigate the drivers and barriers that large-scale Greek manufacturing industries experience in adopting electronic invoices (e-invoices), based on three case studies with organizations having an international presence in many countries. The study focuses on the drivers that may affect the increase of the adoption and use of e-invoicing, including customer demand for e-invoices and sufficient know-how and adoption of e-invoicing in organizations. In addition, the study reveals important barriers that prevent the expansion of e-invoicing, such as suppliers' reluctance to implement e-invoicing and IT infrastructure incompatibilities. Other issues examined by this study include the observed benefits from e-invoicing implementation and the financial priorities of the organizations assumed to be supported by e-invoicing.

  19. Large-Scale First-Principles Molecular Dynamics Simulations with Electrostatic Embedding: Application to Acetylcholinesterase Catalysis

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Fattebert, Jean-Luc; Lau, Edmond Y.; Bennion, Brian J.; Huang, Patrick; Lightstone, Felice C.

    2015-10-22

    Enzymes are complicated solvated systems that typically require many atoms to simulate their function with any degree of accuracy. We have recently developed numerical techniques for large-scale first-principles molecular dynamics simulations and applied them to study the enzymatic reaction catalyzed by acetylcholinesterase. We carried out density functional theory calculations for a quantum mechanical (QM) subsystem consisting of 612 atoms with an O(N)-complexity finite-difference approach. The QM subsystem is embedded inside an external potential field representing the electrostatic effect of the environment. We obtained finite-temperature sampling by first-principles molecular dynamics for the acylation reaction of acetylcholine catalyzed by acetylcholinesterase. Our calculations show two energy barriers along the reaction coordinate for the enzyme-catalyzed acylation of acetylcholine. The second barrier (8.5 kcal/mole) is rate-limiting for the acylation reaction and is in good agreement with experiment.

  20. Large-Scale Computational Screening of Zeolites for Ethane/Ethene Separation

    SciTech Connect (OSTI)

    Kim, J; Lin, LC; Martin, RL; Swisher, JA; Haranczyk, M; Smit, B

    2012-08-14

    Large-scale computational screening of thirty thousand zeolite structures was conducted to find optimal structures for the separation of ethane/ethene mixtures. Efficient grand canonical Monte Carlo (GCMC) simulations were performed with graphics processing units (GPUs) to obtain pure-component adsorption isotherms for both ethane and ethene. We utilized the ideal adsorbed solution theory (IAST) to obtain the mixture isotherms, which were used to evaluate the performance of each zeolite structure based on its working capacity and selectivity. In our analysis, we determined that specific arrangements of zeolite framework atoms create sites for the preferential adsorption of ethane over ethene. The majority of optimal separation materials can be identified by utilizing this knowledge, and screening structures for the presence of this feature will enable the efficient selection of promising candidate materials for ethane/ethene separation prior to performing molecular simulations.
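
    As an illustration of the IAST step described above, the sketch below predicts a binary adsorbed-phase composition from single-site Langmuir pure-component isotherms; the parameters are hypothetical placeholders, not fitted GCMC values:

        # Hedged sketch: binary IAST with single-site Langmuir pure-component
        # isotherms. Requires scipy. Parameters (qsat, b) are illustrative only.
        import math
        from scipy.optimize import brentq

        def spreading_pressure(qsat, b, p):
            # Reduced spreading pressure for a Langmuir isotherm: qsat * ln(1 + b*p)
            return qsat * math.log1p(b * p)

        def iast_binary(qsat1, b1, qsat2, b2, p_total, y1):
            # Solve for the adsorbed-phase mole fraction x1 such that the reduced
            # spreading pressures of both components are equal at their
            # hypothetical pure-component pressures p_i = P * y_i / x_i.
            def residual(x1):
                return (spreading_pressure(qsat1, b1, p_total * y1 / x1)
                        - spreading_pressure(qsat2, b2, p_total * (1 - y1) / (1 - x1)))
            x1 = brentq(residual, 1e-8, 1 - 1e-8)
            return x1, 1 - x1

        x_ethane, x_ethene = iast_binary(qsat1=3.0, b1=8e-5, qsat2=2.8, b2=4e-5,
                                         p_total=1e5, y1=0.5)
        print(x_ethane, x_ethene)  # selectivity = (x1/y1) / (x2/y2)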

  1. Analysis of ground response data at Lotung large-scale soil-structure interaction experiment site. Final report

    SciTech Connect (OSTI)

    Chang, C.Y.; Mok, C.M.; Power, M.S.

    1991-12-01

    The Electric Power Research Institute (EPRI), in cooperation with the Taiwan Power Company (TPC), constructed two models (1/4-scale and 1/2-scale) of a nuclear plant containment structure at a site in Lotung (Tang, 1987), a seismically active region in northeast Taiwan. The models were constructed to gather data for the evaluation and validation of soil-structure interaction (SSI) analysis methodologies. Extensive instrumentation was deployed to record both structural and ground responses at the site during earthquakes. The experiment is generally referred to as the Lotung Large-Scale Seismic Test (LSST). As part of the LSST, two downhole arrays were installed at the site to record ground motions at depth as well as at the ground surface. Structural response and ground response have been recorded for a number of earthquakes (a total of 18 earthquakes in the period October 1985 through November 1986) at the LSST site since the completion of the installation of the downhole instruments in October 1985. These data include those from earthquakes having magnitudes ranging from M{sub L} 4.5 to M{sub L} 7.0 and epicentral distances ranging from 4.7 km to 77.7 km. Peak ground surface accelerations range from 0.03 g to 0.21 g for the horizontal component and from 0.01 g to 0.20 g for the vertical component. The objectives of the study were: (1) to obtain empirical data on variations of earthquake ground motion with depth; (2) to examine field evidence of nonlinear soil response due to earthquake shaking and to determine the degree of soil nonlinearity; (3) to assess the ability of ground response analysis techniques, including techniques to approximate nonlinear soil response, to estimate ground motions due to earthquake shaking; and (4) to analyze earth pressures recorded beneath the basemat and on the side wall of the 1/4-scale model structure during selected earthquakes.

  2. Impact of Distribution-Connected Large-Scale Wind Turbines on Transmission System Stability during Large Disturbances: Preprint

    SciTech Connect (OSTI)

    Zhang, Y.; Allen, A.; Hodge, B. M.

    2014-02-01

    This work examines the dynamic impacts of distributed utility-scale wind power on both the distribution system and the transmission system during contingency events. It is the first step toward investigating the impact of high penetrations of distribution-connected wind power on both distribution and transmission stability.

  3. PATHWAYS OF LARGE-SCALE MAGNETIC COUPLINGS BETWEEN SOLAR CORONAL EVENTS

    SciTech Connect (OSTI)

    Schrijver, Carolus J.; Title, Alan M.; DeRosa, Marc L.; Yeates, Anthony R.

    2013-08-20

    The high-cadence, comprehensive view of the solar corona by SDO/AIA shows many events that are widely separated in space while occurring close together in time. In some cases, sets of coronal events are evidently causally related, while in many other instances indirect evidence can be found. We present case studies to highlight a variety of coupling processes involved in coronal events. We find that physical linkages between events do occur, but concur with earlier studies that these couplings appear to be crucial to understanding the initiation of major eruptive or explosive phenomena relatively infrequently. We note that the post-eruption reconfiguration timescale of the large-scale corona, estimated from the extreme-ultraviolet afterglow, is on average longer than the mean time between coronal mass ejections (CMEs), so that many CMEs originate from a corona that is still adjusting from a previous event. We argue that the coronal field is intrinsically global: current systems build up over days to months, the relaxation after eruptions continues over many hours, and evolving connections easily span much of a hemisphere. This needs to be reflected in our modeling of the connections from the solar surface into the heliosphere to properly model the solar wind, its perturbations, and the generation and propagation of solar energetic particles. However, the large-scale field cannot be constructed reliably by currently available observational resources. We assess the potential of high-quality observations from beyond Earth's perspective and advanced global modeling to understand the couplings between coronal events in the context of CMEs and solar energetic particle events.

  4. Survey and analysis of selected jointly owned large-scale electric utility storage projects

    SciTech Connect (OSTI)

    Not Available

    1982-05-01

    The objective of this study was to examine and document the issues surrounding the curtailment in commercialization of large-scale electric storage projects. It was sensed that if these issues could be uncovered, then efforts might be directed toward clearing away these barriers and allowing these technologies to penetrate the market to their maximum potential. Joint ownership of these projects was seen as a possible solution to overcoming the major barriers, particularly economic barriers, of commercialization. Therefore, discussions with partners involved in four pumped storage projects took place to identify the difficulties and advantages of joint-ownership agreements. The four plants surveyed included Yards Creek (Public Service Electric and Gas and Jersey Central Power and Light); Seneca (Pennsylvania Electric and Cleveland Electric Illuminating Company); Ludington (Consumers Power and Detroit Edison); and Bath County (Virginia Electric Power Company and Allegheny Power System, Inc.). Also investigated were several pumped storage projects which were never completed. These included Blue Ridge (American Electric Power); Cornwall (Consolidated Edison); Davis (Allegheny Power System, Inc.); and Kittatinny Mountain (General Public Utilities). Institutional, regulatory, technical, environmental, economic, and special issues at each project were investigated, and the conclusions relative to each issue are presented. The major barriers preventing the growth of energy storage are the high cost of these systems in times of extremely high cost of capital, diminishing load growth, and regulatory influences which will not allow the building of large-scale storage systems due to environmental objections or other reasons. However, the future for energy storage looks viable despite difficult economic times for the utility industry. Joint ownership can ease some of the economic hardships for utilities which demonstrate a need for energy storage.

  5. Macroscopic x-ray fluorescence capability for large-scale elemental mapping

    SciTech Connect (OSTI)

    Volz, Heather M; Havrilla, George J; Aikin, Jr., Robert M; Montoya, Velma M

    2010-01-01

    A non-destructive method of determining segregation of constituent elements over large length scales is desired. Compositional information at moderate resolution over centimeters will be powerful not only to validate casting models but also to understand large-scale phenomena during solidification. To this end, they have rebuilt their XRF capability in conjunction with IXRF Systems, Inc. (Houston, TX) to accommodate samples that are much larger than those that typically fit into an XRF instrument chamber (up to 70 cm x 70 cm x 25 cm). This system uses a rhodium tube with maximum settings of 35 kV and 100 {mu}A, the detector is a liquid-nitrogen-cooled lithium-drifted silicon detector, and the smallest spot size is approximately 0.4 mm. Reference standard specimens will enable quantitative elemental mapping and analysis. Challenges to modifying the equipment are described. Non-uniformities in the Inconel 718 system will be shown and discussed. As another example, segregation of niobium or molybdenum in depleted uranium (DU) castings has been known to occur based on wet chemical analysis (ICP-MS), but this destructive and time-consuming measurement is not practical for routine inspection of ingots. The U-Nb system is complicated due to overlap of the Nb K-alpha line with the U L-beta line. Preliminary quantitative results are included on the distribution of Nb across slices from DU castings with different cooling rates. They foresee this macro-XRF elemental mapping capability proving invaluable to many in the materials processing industry.

  6. LUCI: A facility at DUSEL for large-scale experimental study of geologic carbon sequestration

    SciTech Connect (OSTI)

    Peters, C. A.; Dobson, P.F.; Oldenburg, C.M.; Wang, J. S. Y.; Onstott, T.C.; Scherer, G.W.; Freifeld, B.M.; Ramakrishnan, T.S.; Stabinski, E.L.; Liang, K.; Verma, S.

    2010-10-01

    LUCI, the Laboratory for Underground CO{sub 2} Investigations, is an experimental facility being planned for the DUSEL underground laboratory in South Dakota, USA. It is designed to study vertical flow of CO{sub 2} in porous media over length scales representative of leakage scenarios in geologic carbon sequestration. The plan for LUCI is a set of three vertical column pressure vessels, each of which is {approx}500 m long and {approx}1 m in diameter. The vessels will be filled with brine and sand or sedimentary rock. Each vessel will have an inner column to simulate a well for deployment of down-hole logging tools. The experiments are configured to simulate CO{sub 2} leakage by releasing CO{sub 2} into the bottoms of the columns. The scale of the LUCI facility will permit measurements to study CO{sub 2} flow over pressure and temperature variations that span supercritical to subcritical gas conditions. It will enable observation or inference of a variety of relevant processes such as buoyancy-driven flow in porous media, Joule-Thomson cooling, thermal exchange, viscous fingering, residual trapping, and CO{sub 2} dissolution. Experiments are also planned for reactive flow of CO{sub 2} and acidified brines in caprock sediments and well cements, and for CO{sub 2}-enhanced methanogenesis in organic-rich shales. A comprehensive suite of geophysical logging instruments will be deployed to monitor experimental conditions as well as provide data to quantify vertical resolution of sensor technologies. The experimental observations from LUCI will generate fundamental new understanding of the processes governing CO{sub 2} trapping and vertical migration, and will provide valuable data to calibrate and validate large-scale model simulations.

  7. The Lagrangian-space Effective Field Theory of large scale structures

    SciTech Connect (OSTI)

    Porto, Rafael A.; Zaldarriaga, Matias; Senatore, Leonardo E-mail: senatore@stanford.edu

    2014-05-01

    We introduce a Lagrangian-space Effective Field Theory (LEFT) formalism for the study of cosmological large-scale structures. Unlike the previous Eulerian-space construction, it is naturally formulated as an effective field theory of extended objects in Lagrangian space. In LEFT the resulting finite-size effects are described using a multipole expansion parameterized by a set of time-dependent coefficients and organized in powers of the ratio of the wavenumber of interest k over the non-linear scale k{sub NL}. The multipoles encode the effects of the short-distance modes on the long-wavelength universe and absorb UV divergences when present. There are no IR divergences in LEFT. Some of the parameters that control the perturbative approach are not assumed to be small and can be automatically resummed. We present an illustrative one-loop calculation for a power-law universe. We describe the dynamics both at the level of the equations of motion and through an action formalism.
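
    Schematically, an expansion of the kind described (powers of k/k{sub NL} with time-dependent coefficients) takes the form below; this is an illustrative sketch of the general EFT structure, not the paper's exact expressions:

        \[
          P(k,\tau) \simeq P_{\rm lin}(k,\tau)\Big[\,1 + \sum_{n\ge 1} c_n(\tau)\,\Big(\frac{k}{k_{\rm NL}}\Big)^{n}\Big],
          \qquad k \ll k_{\rm NL},
        \]

    with the coefficients c_n(\tau) fixed by matching to simulations or data rather than predicted by the theory itself.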

  8. Large-Angular-Scale Anisotropy in the Cosmic Background Radiation

    SciTech Connect (OSTI)

    Gorenstein, M.V.; Smoot, G.F.

    1980-05-01

    We report the results of an extended series of airborne measurements of large-angular-scale anisotropy in the 3 K cosmic background radiation. Observations were carried out with a dual-antenna microwave radiometer operating at 33 GHz (0.89 cm wavelength) flown on board a U-2 aircraft to 20 km altitude. In eleven flights, between December 1976 and May 1978, the radiometer measured differential intensity between pairs of directions distributed over most of the northern hemisphere with an rms sensitivity of 47 mK Hz{sup -1/2}. The measurements show clear evidence of anisotropy that is readily interpreted as due to the solar motion relative to the sources of the radiation. The anisotropy is well fit by a first-order spherical harmonic of amplitude 360 {+-} 50 km sec{sup -1} toward the direction 11.2 {+-} 0.5 hours of right ascension and 19 {+-} 8 degrees declination. A simultaneous fit to a combined hypothesis of dipole and quadrupole angular distributions places a 1 mK limit on the amplitude of most components of quadrupole anisotropy with 90% confidence. Additional analysis places a 0.5 mK limit on uncorrelated fluctuations (sky roughness) in the 3 K background on an angular scale of the antenna beam width, about 7 degrees.
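
    For reference, the quoted amplitude can be checked against the standard dipole relation (a consistency check, not a calculation from the paper):

        \[
          \frac{\Delta T_{\rm dipole}}{T_0} = \frac{v}{c}
          \quad\Rightarrow\quad
          \Delta T_{\rm dipole} \approx \frac{360\ {\rm km\,s^{-1}}}{3\times 10^{5}\ {\rm km\,s^{-1}}} \times 2.7\ {\rm K} \approx 3.2\ {\rm mK},
        \]

    consistent with the ~3.5 mK thermodynamic dipole reported at 90 GHz in record 16 above.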

  9. Public attitudes regarding large-scale solar energy development in the U.S.

    SciTech Connect (OSTI)

    Carlisle, Juliet E.; Kane, Stephanie L.; Solan, David; Bowman, Madelaine; Joe, Jeffrey C.

    2015-08-01

    Using data collected from both a national sample and an oversample in the U.S. Southwest, we examine public attitudes toward the construction of utility-scale solar facilities in the U.S. as well as development in one's own county. Our multivariate analyses assess demographic and sociopsychological factors as well as context, in terms of the proximity of the proposed project, by considering the effect of predictors for respondents living in the Southwest versus those from the national sample. We find that the predictors, and the impact of the predictors, related to support for and opposition to solar development vary in terms of psychological and physical distance. Overall, for respondents living in the U.S. Southwest we find environmentalism, the belief that developers receive too many incentives, and trust in project developers to be significantly related to support for and opposition to solar development in general. When Southwest respondents consider large-scale solar development in their own county, the influence of these variables changes so that only property value, race, and age yield influence. Differential effects occur for respondents in the national sample. We believe our findings are relevant for those outside the U.S. due to the considerable growth PV solar has experienced in the last decade, especially in China, Japan, Germany, and the U.S.

  10. Public attitudes regarding large-scale solar energy development in the U.S.

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Carlisle, Juliet E.; Kane, Stephanie L.; Solan, David; Bowman, Madelaine; Joe, Jeffrey C.

    2015-08-01

    Using data collected from both a national sample and an oversample in the U.S. Southwest, we examine public attitudes toward the construction of utility-scale solar facilities in the U.S. as well as development in one's own county. Our multivariate analyses assess demographic and sociopsychological factors as well as context, in terms of the proximity of the proposed project, by considering the effect of predictors for respondents living in the Southwest versus those from the national sample. We find that the predictors, and the impact of the predictors, related to support for and opposition to solar development vary in terms of psychological and physical distance. Overall, for respondents living in the U.S. Southwest we find environmentalism, the belief that developers receive too many incentives, and trust in project developers to be significantly related to support for and opposition to solar development in general. When Southwest respondents consider large-scale solar development in their own county, the influence of these variables changes so that only property value, race, and age yield influence. Differential effects occur for respondents in the national sample. We believe our findings are relevant for those outside the U.S. due to the considerable growth PV solar has experienced in the last decade, especially in China, Japan, Germany, and the U.S.

  11. A Report on Simulation-Driven Reliability and Failure Analysis of Large-Scale Storage Systems

    SciTech Connect (OSTI)

    Wan, Lipeng; Wang, Feiyi; Oral, H. Sarp; Vazhkudai, Sudharshan S.; Cao, Qing

    2014-11-01

    High-performance computing (HPC) storage systems provide data availability and reliability using various hardware and software fault tolerance techniques. Usually, reliability and availability are calculated at the subsystem or component level using limited metrics such as mean time to failure (MTTF) or mean time to data loss (MTTDL). This often means settling on simple and disconnected failure models (such as an exponential failure rate) to achieve tractable, closed-form solutions. However, such models have been shown to be insufficient in assessing end-to-end storage system reliability and availability. We propose a generic simulation framework aimed at analyzing the reliability and availability of storage systems at scale and investigating what-if scenarios. The framework is designed for an end-to-end storage system, accommodating the various components and subsystems, their interconnections, and failure patterns and propagation, and performs dependency analysis to capture a wide range of failure cases. We evaluate the framework against a large-scale storage system that is in production and analyze its failure projections toward and beyond the end of its lifecycle. We also examine the potential operational impact by studying how different types of components affect the overall system reliability and availability, and present the preliminary results.
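
    To illustrate the simulation-driven idea (as opposed to closed-form MTTF formulas), the sketch below Monte Carlo-samples non-exponential (Weibull) component lifetimes for mirrored disk pairs in series; it is a toy model under assumed parameters, omitting the repair, propagation, and dependency analysis the framework performs:

        import random

        def weibull_lifetime(scale_hours, shape):
            # random.weibullvariate(alpha, beta): alpha = scale, beta = shape
            return random.weibullvariate(scale_hours, shape)

        def system_lifetime(n_pairs=100, scale_hours=1.0e6, shape=1.2):
            # Each mirrored pair survives until its second disk fails (no repair);
            # the system fails when any pair fails (series of parallel pairs).
            pair_lifetimes = []
            for _ in range(n_pairs):
                a = weibull_lifetime(scale_hours, shape)
                b = weibull_lifetime(scale_hours, shape)
                pair_lifetimes.append(max(a, b))
            return min(pair_lifetimes)

        samples = [system_lifetime() for _ in range(2000)]
        print("estimated system MTTF (hours):", sum(samples) / len(samples))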

  12. Scaling tests with dynamical overlap and rooted staggered fermions

    SciTech Connect (OSTI)

    Duerr, Stephan; Hoelbling, Christian

    2005-03-01

    We present a scaling analysis in the 1-flavor Schwinger model with the full overlap and the rooted staggered determinant. In the latter case the chiral and continuum limit of the scalar condensate do not commute, while for overlap fermions they do. For the topological susceptibility a universal continuum limit is suggested, as is for the partition function and the Leutwyler-Smilga sum rule. In the heavy-quark force no difference is visible even at finite coupling. Finally, a direct comparison between the complete overlap and the rooted staggered determinant yields evidence that their ratio is constant up to O(a{sup 2}) effects.

  13. Analysis of long-term flows resulting from large-scale sodium-water reactions in an LMFBR secondary system

    SciTech Connect (OSTI)

    Shin, Y.W.; Chung, H.; Choi, U.S.; Wiedermann, A.H.; Ockert, C.E.

    1984-07-01

    Leaks in LMFBR steam generators cannot be entirely prevented; thus the steam generators and the intermediate heat transport system (IHTS) of an LMFBR must be designed to withstand the effects of the leaks. A large-scale leak, which might result from a sudden break of a steam generator tube, and the resulting sodium-water reaction (SWR) can generate large pressure pulses that propagate through the IHTS and exert large forces on the piping supports. This paper discusses computer programs for analyzing long-term flow and thermal effects in an LMFBR secondary system resulting from large-scale steam generator leaks, and the status of the development of the codes.

  14. LARGE-SCALE MAGNETIC HELICITY FLUXES ESTIMATED FROM MDI MAGNETIC SYNOPTIC CHARTS OVER THE SOLAR CYCLE 23

    SciTech Connect (OSTI)

    Yang Shangbin; Zhang Hongqi

    2012-10-10

    To investigate the characteristics of large-scale and long-term evolution of magnetic helicity with solar cycles, we use the method of Local Correlation Tracking to estimate the magnetic helicity evolution over solar cycle 23 from 1996 to 2009 using 795 MDI magnetic synoptic charts. The main results are as follows: the hemispheric helicity rule still holds in general, i.e., the large-scale negative (positive) magnetic helicity dominates the northern (southern) hemisphere. However, the large-scale magnetic helicity fluxes show the same sign in both hemispheres around 2001 and 2005. The global, large-scale magnetic helicity flux over the solar disk changes from a negative value at the beginning of solar cycle 23 to a positive value at the end of the cycle, while the net accumulated magnetic helicity is negative in the period between 1996 and 2009.

  15. EPRI-DOE Joint Report on Fossil Fleet Transition with Fuel Changes and Large Scale Variable Renewable Integration Now Available

    Broader source: Energy.gov [DOE]

    A new report “Fossil Fleet Transition with Fuel Changes and Large Scale Variable Renewable Integration” from the Electric Power Research Institute (EPRI) and jointly funded by the Offices of...

  16. Pilot-scale treatability test plan for the 100-HR-3 operable unit

    SciTech Connect (OSTI)

    Not Available

    1994-08-01

    This document presents the treatability test plan for pilot-scale pump-and-treat testing at the 100-HR-3 Operable Unit. The test will be conducted in fulfillment of interim Milestone M-15-06E to begin pilot-scale pump-and-treat operations by August 1994. The scope of the test was determined based on the results of lab/bench-scale tests (WHC 1993a) conducted in fulfillment of Milestone M-15-06B. These milestones were established per agreement between the U.S. Department of Energy (DOE), the Washington State Department of Ecology, and the U.S. Environmental Protection Agency (EPA), and documented on Hanford Federal Facility Agreement and Consent Order Change Control Form M-15-93-02. This test plan discusses a pilot-scale pump-and-treat test for the chromium plume associated with the D Reactor portion of the 100-HR-3 Operable Unit. Data will be collected during the pilot test to assess the effectiveness, operating parameters, and resource needs of the ion exchange (IX) pump-and-treat system. The test will provide information to assess the ability to remove contaminants by extracting groundwater from wells and treating the extracted groundwater using IX. Bench-scale tests were conducted previously in which chromium(VI) was identified as the primary contaminant of concern in the 100-D Reactor plume. The DOWEX 21K{trademark} resin was recommended for pilot-scale testing of an IX pump-and-treat system. The bench-scale test demonstrated that the system could remove chromium(VI) from groundwater to concentrations less than 50 ppb. The test also identified process parameters to monitor during pilot-scale testing. Water will be re-injected into the plume using wells outside the zone of influence and upgradient of the extraction well.

  17. Carbon Molecular Sieve Membrane as a True One Box Unit for Large Scale Hydrogen Production

    SciTech Connect (OSTI)

    Paul Liu

    2012-05-01

    IGCC coal-fired power plants show promise for environmentally benign power generation. In these plants coal is gasified to syngas, then processed in a water gas shift (WGS) reactor to maximize the hydrogen/CO{sub 2} content. The gas stream can then be separated into a hydrogen-rich stream for power generation and/or further purified for sale as a chemical, and a CO{sub 2}-rich stream for the purpose of carbon capture and storage (CCS). Today, the separation is accomplished using conventional absorption/desorption processes with subsequent CO{sub 2} compression. However, significant process complexity and energy penalties accrue with this approach, accounting for ~20% of the capital cost and a ~27% parasitic energy consumption. Ideally, a "one-box" process is preferred, in which the syngas is fed directly to the WGS reactor without gas pre-treatment, converting the CO to hydrogen in the presence of H{sub 2}S and other impurities and delivering a clean hydrogen product for power generation or other uses. The development of such a process is the primary goal of this project. Our proposed "one-box" process includes a catalytic membrane reactor (MR) that makes use of a hydrogen-selective carbon molecular sieve (CMS) membrane and a sulfur-tolerant Co/Mo/Al{sub 2}O{sub 3} catalyst. The membrane reactor's behavior has been investigated with a bench-top unit under different experimental conditions and compared with the modeling results. The model is used to further investigate the design features of the proposed process. CO conversion >99% and hydrogen recovery >90% are feasible under the operating pressures available from IGCC. More importantly, the CMS membrane has demonstrated excellent selectivity for hydrogen over H{sub 2}S (>100) and shown no flux loss in the presence of a synthetic "tar"-like material, i.e., naphthalene. In summary, the proposed "one-box" process has been successfully demonstrated with the bench-top reactor. In parallel, we have successfully designed and fabricated a full-scale CMS membrane and module for the proposed application. This full-scale membrane element is 3 in. in diameter and 30 in. long, composed of ~85 single CMS membrane tubes. The membrane tubes and bundles have demonstrated satisfactory thermal, hydrothermal, thermal-cycling, and chemical stability under an environment simulating the temperature, pressure, and contaminant levels encountered in our proposed process. More importantly, the membrane module packed with the CMS bundle was tested for over 30 pressure cycles between ambient pressure and 300-600 psi at 200 to 300°C without mechanical degradation. Finally, internal baffles have been designed and installed to improve flow distribution within the module, which delivered ≥90% separation efficiency in comparison with the efficiency achieved with single membrane tubes. In summary, the full-scale CMS membrane element and module have been successfully developed and tested satisfactorily for our proposed one-box application; a test quantity of elements/modules has been fabricated for field testing. Multiple field tests have been performed under this project at the National Carbon Capture Center (NCCC). The separation efficiency and performance stability of our full-scale membrane elements have been verified in testing conducted for times ranging from 100 to >250 hours of continuous exposure to coal/biomass gasifier off-gas for hydrogen enrichment, with no gas pre-treatment for contaminant removal. In particular, "tar-like" contaminants were effectively rejected by the membrane with no evidence of fouling. In addition, testing was conducted using a hybrid membrane system, i.e., the CMS membrane in conjunction with a palladium membrane, to demonstrate that 99+% H{sub 2} purity and a high degree of CO{sub 2} capture could be achieved. In summary, the stability and performance of the full-scale hydrogen-selective CMS membrane/module have been verified in multiple field tests in the presence of coal/biomass gasifier off-gas under this project. A promising process scheme has been developed for power generation and/or hydrogen coproduction with CCS based upon our proposed "one-box" process. Our preliminary economic analysis indicates that about a 10% reduction in the required electricity selling price and a ~40% reduction in CCS cost per ton of CO{sub 2} can be achieved in comparison with the base case involving conventional WGS with a two-stage Selexol{trademark} process for CCS. Long-term field tests (e.g., >1,000 hrs) with incorporation of the catalyst in the WGS membrane reactor and a more in-depth analysis of the process scheme are recommended for future study.

  18. Scaled Testing to Evaluate Pulse Jet Mixer Performance in Waste Treatment Plant Mixing Vessels

    SciTech Connect (OSTI)

    Fort, James A.; Meyer, Perry A.; Bamberger, Judith A.; Enderlin, Carl W.; Scott, Paul A.; Minette, Michael J.; Gauglitz, Phillip A.

    2010-03-07

    The Waste Treatment and Immobilization Plant (WTP) at Hanford is being designed and built to pretreat and vitrify the waste in Hanford's 177 underground waste storage tanks. Numerous process vessels will hold waste at various stages in the WTP. These vessels have pulse jet mixer (PJM) systems. A test program was developed to evaluate the adequacy of mixing system designs for the solids-containing vessels in the WTP. The program focused mainly on non-cohesive solids behavior. Specifically, the program addressed the effectiveness of the mixing systems in suspending settled solids off the vessel bottom and distributing the solids vertically. Experiments were conducted at three scales using various particulate simulants. A range of solids loadings and operational parameters were evaluated, including jet velocity, pulse volume, and duty cycle. In place of actual PJMs, the tests used direct injection from tubes with suction at the top of the tank fluid. This gave better control over the discharge duration and duty cycle and simplified the facility requirements. The mixing system configurations represented in testing varied from 4 to 12 PJMs with various jet nozzle sizes. In this way the results could be applied to the broad range of WTP vessels with varying geometrical configurations and planned operating conditions. Data for just-suspended velocity, solids cloud height, and vertical solids concentration profile were collected, analyzed, and correlated. The correlations were successfully benchmarked against previous large-scale test results, then applied to the WTP vessels using reasonable assumptions about anticipated waste properties to evaluate the adequacy of the existing mixing system designs.
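
    Correlating such data typically reduces to fitting a power-law form by least squares in log space. A minimal sketch, with a hypothetical functional form and made-up data points (not the report's correlation):

        # Hedged sketch: fit U_js = K * d_p^a * phi_s^b for just-suspended jet
        # velocity by linear least squares on the logarithms. The functional
        # form and the data below are illustrative assumptions only.
        import numpy as np

        d_p   = np.array([50e-6, 100e-6, 200e-6, 400e-6])  # particle diameter, m
        phi_s = np.array([0.02, 0.05, 0.10, 0.15])         # solids volume fraction
        u_js  = np.array([4.1, 5.6, 7.4, 9.8])             # measured velocity, m/s

        # Regression matrix: columns for ln(K), a, b
        X = np.column_stack([np.ones_like(d_p), np.log(d_p), np.log(phi_s)])
        coeffs, *_ = np.linalg.lstsq(X, np.log(u_js), rcond=None)
        lnK, a, b = coeffs
        print(f"K={np.exp(lnK):.3g}, a={a:.3f}, b={b:.3f}")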

  19. LARGE-SCALE EXTREME-ULTRAVIOLET DISTURBANCES ASSOCIATED WITH A LIMB CORONAL MASS EJECTION

    SciTech Connect (OSTI)

    Dai, Y.; Auchere, F.; Vial, J.-C.; Tang, Y. H.; Zong, W. G.

    2010-01-10

    We present composite observations of a coronal mass ejection (CME) and the associated large-scale extreme-ultraviolet (EUV) disturbances on 2007 December 31 by the Extreme-ultraviolet Imager (EUVI) and COR1 coronagraph on board the recent Solar Terrestrial Relations Observatory mission. For this limb event, the EUV disturbances exhibit some typical characteristics of EUV Imaging Telescope waves: (1) in the 195 Å bandpass, diffuse brightenings are observed propagating oppositely away from the flare site with a velocity of {approx}260 km s{sup -1}, leaving dimmings behind; (2) when the brightenings encounter the boundary of a polar coronal hole, they stop there to form a stationary front. Multi-temperature analysis of the propagating EUV disturbances favors a heating process over a density enhancement in the disturbance region. Furthermore, the EUVI-COR1 composite display shows unambiguously that the propagation of the diffuse brightenings coincides with a large lateral expansion of the CME, which consequently results in a double-loop-structured CME leading edge. Based on these observational facts, we suggest that the wave-like EUV disturbances are a result of magnetic reconfiguration related to the CME liftoff rather than true waves in the corona. Reconnections between the expanding CME magnetic field lines and surrounding quiet-Sun magnetic loops account for the propagating diffuse brightenings; dimmings appear behind them as a consequence of volume expansion. X-ray and radio data provide us with complementary evidence.

  20. Tests of innovative photon detectors and integrated electronics for the large-area CLAS12 ring-imaging Cherenkov detector

    SciTech Connect (OSTI)

    Contalbrigo, Marco

    2015-07-01

    A large-area ring-imaging Cherenkov detector has been designed to provide clean hadron identification capability in the momentum range from 3 GeV/c to 8 GeV/c for the CLAS12 experiments at the upgraded 12 GeV continuous electron beam accelerator facility of Jefferson Lab. Its aim is to study the 3D nucleon structure in the as-yet poorly explored valence region by deep-inelastic scattering, and to perform precision measurements in hadron spectroscopy. The adopted solution foresees a novel hybrid optics design based on an aerogel radiator, composite mirrors, and a densely packed, highly segmented photon detector. Cherenkov light will either be imaged directly (forward tracks) or after two mirror reflections (large-angle tracks). Extensive tests have been performed on Hamamatsu H8500 and novel flat multi-anode photomultipliers under development, and on various types of silicon photomultipliers. A large-scale prototype based on 28 H8500 MA-PMTs has been realized and tested with few-GeV/c hadron beams at the T9 test-beam facility of CERN. In addition, a small prototype was used to study the response of customized SiPM matrices within a temperature interval ranging from 25 down to -25 °C. The preliminary results of the individual photon detector tests and of the prototype performance at the test beams are reported here.

  1. Comparison of prestellar core elongations and large-scale molecular cloud structures in the Lupus I region

    SciTech Connect (OSTI)

    Poidevin, Frédérick; Ade, Peter A. R.; Hargrave, Peter C.; Nutter, David; Angile, Francesco E.; Devlin, Mark J.; Klein, Jeffrey; Benton, Steven J.; Netterfield, Calvin B.; Chapin, Edward L.; Fissel, Laura M.; Gandilo, Natalie N.; Fukui, Yasuo; Gundersen, Joshua O.; Korotkov, Andrei L.; Matthews, Tristan G.; Novak, Giles; Moncelsi, Lorenzo; Mroczkowski, Tony K.; Olmi, Luca; and others

    2014-08-10

    Turbulence and magnetic fields are expected to be important for regulating molecular cloud formation and evolution. However, their effects on sub-parsec to 100 parsec scales, leading to the formation of starless cores, are not well understood. We investigate the prestellar core structure morphologies obtained from analysis of the Herschel-SPIRE 350 {mu}m maps of the Lupus I cloud. This distribution is first compared on a statistical basis to the large-scale shape of the main filament. We find the distribution of the elongation position angle of the cores to be consistent with a random distribution, which means no specific orientation of the morphology of the cores is observed with respect to the mean orientation of the large-scale filament in Lupus I, nor relative to a large-scale bent filament model. This distribution is also compared to the mean orientation of the large-scale magnetic fields probed at 350 {mu}m with the Balloon-borne Large Aperture Telescope for Polarimetry during its 2010 campaign. Here again we do not find any correlation between the core morphology distribution and the average orientation of the magnetic fields on parsec scales. Our main conclusion is that the local filament dynamics, including secondary filaments that often run orthogonally to the primary filament, and possibly small-scale variations in the local magnetic field direction, could be the dominant factors for explaining the final orientation of each core.

  2. Energy Department Announces $10 Million for Full-Scale Wave Energy Device Testing

    Broader source: Energy.gov [DOE]

    The Energy Department, in coordination with the Navy, today announced funding for two companies to test their innovative wave energy conversion devices in new deep-water test berths in the waters off the Navy's Marine Corps Base Hawaii. Ocean Energy USA will leverage lessons learned from previous quarter-scale test deployments that have led to design improvements for a full-scale deployment of its Ocean Energy Buoy. Northwest Energy Innovations will build and test a full-scale model of its Azura device.

  3. Pilot-scale grout production test with a simulated low-level waste

    SciTech Connect (OSTI)

    Fow, C.L.; Mitchell, D.H.; Treat, R.L.; Hymas, C.R.

    1987-05-01

    Plans are underway at the Hanford Site near Richland, Washington, to convert the low-level fraction of radioactive liquid wastes to a grout form for permanent disposal. Grout is a mixture of liquid waste and grout formers, including portland cement, fly ash, and clays. In the plan, the grout slurry is pumped to subsurface concrete vaults on the Hanford Site, where the grout will solidify into large monoliths, thereby immobilizing the waste. A similar disposal concept is being planned at the Savannah River Laboratory site. The underground disposal of grout was conducted at Oak Ridge National Laboratory between 1966 and 1984. Design and construction of grout processing and disposal facilities are underway. The Transportable Grout Facility (TGF), operated by Rockwell Hanford Operations (Rockwell) for the Department of Energy (DOE), is scheduled to grout Phosphate/Sulfate N Reactor Operations Waste (PSW) in FY 1988. Phosphate/Sulfate Waste is a blend of two low-level waste streams generated at Hanford's N Reactor. Other wastes are scheduled to be grouted in subsequent years. Pacific Northwest Laboratory (PNL) is verifying that Hanford grouts can be safely and efficiently processed. To meet this objective, pilot-scale grout process equipment was installed. On July 29 and 30, 1986, PNL conducted a pilot-scale grout production test for Rockwell. During the test, 16,000 gallons of simulated nonradioactive PSW were mixed with grout formers to produce 22,000 gallons of PSW grout. The grout was pumped at a nominal rate of 15 gpm (about 25% of the nominal production rate planned for the TGF) to a lined and covered trench with a capacity of 30,000 gallons. Emplacement of grout in the trench will permit subsequent evaluation of homogeneity of grout in a large monolith. 12 refs., 34 figs., 5 tabs.

  4. Scaling and design analyses of a scaled-down, high-temperature test facility for experimental investigation of the initial stages of a VHTR air-ingress accident

    SciTech Connect (OSTI)

    Arcilesi, David J.; Ham, Tae Kyu; Kim, In Hun; Sun, Xiaodong; Christensen, Richard N.; Oh, Chang H.

    2015-07-01

    A critical event in the safety analysis of the very high-temperature gas-cooled reactor (VHTR) is an air-ingress accident. This accident is initiated, in its worst case scenario, by a double-ended guillotine break of the coaxial cross vessel, which leads to a rapid reactor vessel depressurization. In a VHTR, the reactor vessel is located within a reactor cavity that is filled with air during normal operating conditions. Following the vessel depressurization, the dominant mode of ingress of an air–helium mixture into the reactor vessel will either be molecular diffusion or density-driven stratified flow. The mode of ingress is hypothesized to depend largely on the break conditions of the cross vessel. Since the time scales of these two ingress phenomena differ by orders of magnitude, it is imperative to understand under which conditions each of these mechanisms will dominate in the air ingress process. Computer models have been developed to analyze this type of accident scenario. There are, however, limited experimental data available to understand the phenomenology of the air-ingress accident and to validate these models. Therefore, there is a need to design and construct a scaled-down experimental test facility to simulate the air-ingress accident scenarios and to collect experimental data. The current paper focuses on the analyses performed for the design and operation of a 1/8th geometric scale (by height and diameter), high-temperature test facility. A geometric scaling analysis for the VHTR, a time scale analysis of the air-ingress phenomenon, a transient depressurization analysis of the reactor vessel, a hydraulic similarity analysis of the test facility, a heat transfer characterization of the hot plenum, a power scaling analysis for the reactor system, and a design analysis of the containment vessel are discussed.
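
    The order-of-magnitude gap between the two ingress mechanisms noted above can be illustrated with standard estimates (assumed geometry and properties; not the paper's analysis):

        # Back-of-envelope comparison of the two air-ingress time scales:
        # molecular diffusion vs. density-driven (lock-exchange) stratified flow.
        # All numbers are illustrative assumptions, not values from the paper.
        import math

        L = 2.0    # characteristic ingress path length, m (assumed)
        D = 2e-5   # air-helium binary diffusion coefficient, m^2/s (order of magnitude)
        H = 0.7    # break opening height, m (assumed)
        g = 9.81
        rho_air, rho_he = 1.2, 0.17                   # kg/m^3, illustrative densities
        g_prime = g * (rho_air - rho_he) / rho_air    # reduced gravity

        t_diffusion = L**2 / D                        # diffusive time scale
        u_exchange = 0.5 * math.sqrt(g_prime * H)     # lock-exchange front speed estimate
        t_stratified = L / u_exchange

        print(f"diffusion: ~{t_diffusion/3600:.1f} h, stratified flow: ~{t_stratified:.1f} s")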

  5. POC-SCALE TESTING OF A DRY TRIBOELECTROSTATIC SEPARATOR FOR FINE COAL CLEANING

    SciTech Connect (OSTI)

    R.-H. Yoon; G.H. Luttrell; A.D. Walters

    2000-01-01

    During the past quarter, several modifications were made to the TES unit and the materials handling system. The cylindrical electrodes were replaced by a set of screen electrodes to provide a more uniform electrostatic field. The problem with the recycle conveyor neutralizing the particle charge was also corrected by replacing it with a bucket elevator. In addition, problems with the turbocharger were corrected by increasing the number of charging stages from one to two. These modifications have significantly improved the separation performance and have permitted the POC-scale unit to achieve results in line with those obtained by the bench-scale separator. The testing phase of the project continued at a rapid pace during this quarter. The test work showed that the modifications to the TES unit and the reduction in feed size from 28 mesh to 35 mesh resulted in significant overall improvement in yield and combustible recovery compared to the data reported in the last quarter. At that time, there was a significant discrepancy between the bench-scale and the pilot-scale results. The pilot-scale test work is now approaching the bench-scale results. However, further pilot-scale testing is required to improve the results and duplicate the bench-scale performance.

  6. Biomass Energy for Transport and Electricity: Large scale utilization under low CO2 concentration scenarios

    SciTech Connect (OSTI)

    Luckow, Patrick; Wise, Marshall A.; Dooley, James J.; Kim, Son H.

    2010-01-25

    This paper examines the potential role of large scale, dedicated commercial biomass energy systems under global climate policies designed to stabilize atmospheric concentrations of CO2 at 400ppm and 450ppm. We use an integrated assessment model of energy and agriculture systems to show that, given a climate policy in which terrestrial carbon is appropriately valued equally with carbon emitted from the energy system, biomass energy has the potential to be a major component of achieving these low concentration targets. The costs of processing and transporting biomass energy at much larger scales than current experience are also incorporated into the modeling. From the scenario results, 120-160 EJ/year of biomass energy is produced by midcentury and 200-250 EJ/year by the end of this century. In the first half of the century, much of this biomass is from agricultural and forest residues, but after 2050 dedicated cellulosic biomass crops become the dominant source. A key finding of this paper is the role that carbon dioxide capture and storage (CCS) technologies coupled with commercial biomass energy can play in meeting stringent emissions targets. Despite the higher technology costs of CCS, the resulting negative emissions used in combination with biomass are a very important tool in controlling the cost of meeting a target, offsetting the venting of CO2 from sectors of the energy system that may be more expensive to mitigate, such as oil use in transportation. The paper also discusses the role of cellulosic ethanol and Fischer-Tropsch biomass derived transportation fuels and shows that both technologies are important contributors to liquid fuels production, with unique costs and emissions characteristics. Through application of the GCAM integrated assessment model, it becomes clear that, given CCS availability, bioenergy will be used both in electricity and transportation.

  7. Impact of Large Scale Energy Efficiency Programs On Consumer Tariffs and Utility Finances in India

    SciTech Connect (OSTI)

    Abhyankar, Nikit; Phadke, Amol

    2011-01-20

    Large-scale EE programs would modestly increase tariffs but reduce consumers' electricity bills significantly. However, the primary benefit of EE programs is a significant reduction in power shortages, which might make these programs politically acceptable even if tariffs increase. To increase political support, utilities could pursue programs that would result in minimal tariff increases. This can be achieved in four ways: (a) focus only on low-cost programs (such as replacing electric water heaters with gas water heaters); (b) sell power conserved through the EE program to the market at a price higher than the cost of peak power purchase; (c) focus on programs where a partial utility subsidy of incremental capital cost might work and (d) increase the number of participant consumers by offering a basket of EE programs to fit all consumer subcategories and tariff tiers. Large scale EE programs can result in consistently negative cash flows and significantly erode the utility's overall profitability. In case the utility is facing shortages, the cash flow is very sensitive to the marginal tariff of the unmet demand. This will have an important bearing on the choice of EE programs in Indian states where low-paying rural and agricultural consumers form the majority of the unmet demand. These findings clearly call for a flexible, sustainable solution to the cash-flow management issue. One option is to include a mechanism like FAC in the utility incentive mechanism. Another sustainable solution might be to have the net program cost and revenue loss built into utility's revenue requirement and thus into consumer tariffs up front. However, the latter approach requires institutionalization of EE as a resource. The utility incentive mechanisms would be able to address the utility disincentive of forgone long-run return but have a minor impact on consumer benefits. Fundamentally, providing incentives for EE programs to make them comparable to supply-side investments is a way of moving the electricity sector toward a model focused on providing energy services rather than providing electricity.

  8. Intermediate-scale tests of sodium interactions with calcite and dolomite aggregate concretes. [LMFBR]

    SciTech Connect (OSTI)

    Randich, E.; Acton, R.U.

    1983-09-01

    Two intermediate-scale tests were performed to compare the behavior of calcite and dolomite aggregate concretes when attacked by molten sodium. The tests were performed as part of an interlaboratory comparison between Sandia National Laboratories and Hanford Engineering Development Laboratory. Results of the tests at Sandia National Laboratories are reported here. The results show that both concretes exhibit similar exothermic reactions with molten sodium. The large difference in reaction vigor suggested by thermodynamic considerations of CO{sub 2} release from calcite and dolomite was not realized. Penetration rates of 1.4 to 1.7 mm/min were observed for short periods of time, with reaction zone temperatures in excess of 800°C during the energetic attack. The penetration was not uniform over the entire sodium-concrete contact area. Rapid attack may be localized due to inhomogeneities in the concrete. The chemical reaction zone is less than one cm thick for the calcite concrete but is about seven cm thick for the dolomite concrete.

  9. ARRA-Multi-Level Energy Storage and Controls for Large-Scale Wind Energy Integration

    SciTech Connect (OSTI)

    David Wenzhong Gao

    2012-09-30

    The project objective is to design an innovative energy storage architecture and associated controls for high wind penetration, to increase the reliability and market acceptance of wind power. The project goals are to facilitate wind energy integration at different levels through the design and control of suitable energy storage systems. The three levels of the wind power system are: the Balancing Control Center level, the Wind Power Plant level, and the Wind Power Generator level. Our scope is to smooth wind power fluctuations while also ensuring adequate battery life. In the new hybrid energy storage system (HESS) design for wind power generation applications, the boundary levels of the state of charge of the battery and of the supercapacitor are used in the control strategy. In the controller, logic gates are also used to control the operating time durations of the battery. The sizing method is based on the average fluctuation of wind profiles at a specific wind station. The calculated battery size depends on the size of the supercapacitor, the state of charge of the supercapacitor, and battery wear. To accommodate the wind power fluctuation, a HESS consisting of a battery energy storage system (BESS) and a supercapacitor is adopted in this project. A probability-based power capacity specification approach for the BESS and supercapacitors is proposed. Through this method the capacities of the BESS and supercapacitor are designed to combine the high energy density of the BESS with the high power density of the supercapacitor. It turns out that the supercapacitor within the HESS deals with the high-power fluctuations, which contributes to the extension of BESS lifetime, and the supercapacitor can handle the peaks in wind power fluctuations without the severe penalty of round-trip losses associated with a BESS. The proposed approach has been verified with real wind data from an existing wind power plant in Iowa. An intelligent controller that increases battery life within hybrid energy storage systems for wind applications was developed. Comprehensive studies have been conducted and the simulation results analyzed. A permanent magnet synchronous generator, coupled with a variable-speed wind turbine, is connected to a power grid (14-bus system). A rectifier, a DC-DC converter, and an inverter provide a complete model of the wind system. An energy storage system (ESS) is connected to the DC link through a DC-DC converter. An intelligent controller is applied to the DC-DC converter to help the voltage source inverter (VSI) regulate output power and to control the operation of the battery and supercapacitor. This ensures a longer lifetime for the batteries. The detailed model is simulated in PSCAD/EMTP. Additionally, an economic analysis has been done for different methods that can reduce wind power output fluctuation: wind power curtailment, load dumping, a battery energy storage system, and a hybrid energy storage system. The results show that a single advanced HESS can save the most money for wind turbine owners. Generally the income would be the same for most methods, because the wind resource does not change and maximum power point tracking can be applied to most systems; the cost is therefore the key differentiator.
For short-term storage and small wind turbines, the BESS is the cheapest applicable method, while for large-scale wind turbines and wind farms an advanced HESS is the best method to reduce power fluctuation. The key outcomes of this project include a new intelligent controller that can reduce the energy exchanged between the battery and the DC link, reduce charging/discharging cycles, reduce depth of discharge, increase the time interval between charge/discharge events, and lower battery temperature. This improves the overall lifetime of battery energy storage. Additionally, a new probability-based design method helps optimize the power capacity specification for the BESS and supercapacitors. Recommendations include experimental implementation of the controller and energy storage systems in a laboratory environment for further testing and verification, which will help commercialization of the proposed system design and controller.
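
    A minimal sketch of the probability-based sizing idea described above: split the smoothing power into low- and high-frequency parts and rate the battery and supercapacitor at an exceedance percentile of each. The window lengths, the percentile, and the synthetic wind trace are assumptions for illustration, not the project's values:

        import numpy as np

        rng = np.random.default_rng(0)
        # Synthetic 1-s wind power trace, MW (random walk, clipped to plant limits)
        p_wind = np.clip(10 + np.cumsum(rng.normal(0, 0.2, 86400)), 0, 20)

        target = np.convolve(p_wind, np.ones(600) / 600, mode="same")  # 10-min smoothed output
        p_storage = target - p_wind                                    # total smoothing power

        slow = np.convolve(p_storage, np.ones(60) / 60, mode="same")   # battery: slow component
        fast = p_storage - slow                                        # supercapacitor: fast residual

        battery_mw = np.percentile(np.abs(slow), 95)   # 95th-percentile power rating
        supercap_mw = np.percentile(np.abs(fast), 95)
        print(f"battery ~{battery_mw:.2f} MW, supercapacitor ~{supercap_mw:.2f} MW")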

  10. POC-SCALE TESTING OF A DRY TRIBOELECTROSTATIC SEPARATOR FOR FINE COAL CLEANING

    SciTech Connect (OSTI)

    A.D. Walters; G.H. Luttrell; G.T. Adel; R.-H. Yoon

    1999-01-01

    It is the objective of the current project to further refine the TES process developed at FETC through bench-scale and proof-of-concept (POC) test programs. The bench-scale test program is aimed at studying the charging mechanisms associated with coal and mineral matter and improving the triboelectrification process, while the POC test program is aimed at obtaining scale-up information. The POC tests will be conducted at a throughput of 200-250 kg/hr. It is also the objective of the project to conduct a cost analysis based on the scale-up information obtained in the present work. Specific objectives of the work conducted during the current reporting period can be summarized as follows: to complete the engineering design of the TES tribocharging system and electrostatic separator, and to continue work related to the procurement and fabrication of the key components required to construct and install the proposed POC test circuit.

  11. Creation of the dam for the No. 2 Kambaratinskaya HPP by large-scale blasting: analysis of planning experience and lessons learned

    SciTech Connect (OSTI)

    Shuifer, M. I.; Argal, E. S.

    2012-05-15

    Results of complex instrument observations and videotaping during large-scale blasts (LSB) detonated for creation of the dam at the No. 2 Kambaratinskaya HPP on the Naryn River in the Kyrgyz Republic are analyzed. The energy effectiveness of the explosives is evaluated, characteristics of LSB manifestations in seismic and air waves are revealed, and the shaping and movement of the rock mass are examined. A methodological analysis of the planning and production of the LSB is given.

  12. CMI Unique Facility: Pilot-Scale Separations Test Bed Facility | Critical Materials Institute

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    The Pilot-Scale Separations Test Bed Facility at Idaho National Laboratory, which includes a 30-stage mixer-settler developed for a CMI project, is one of more than a dozen unique facilities developed by the Critical Materials Institute, an Energy Innovation Hub of the U.S. Department of Energy.

  13. Kinematic morphology of large-scale structure: evolution from potential to rotational flow

    SciTech Connect (OSTI)

    Wang, Xin; Szalay, Alex; Aragón-Calvo, Miguel A.; Neyrinck, Mark C.; Eyink, Gregory L.

    2014-09-20

    As an alternative way to describe the cosmological velocity field, we discuss the evolution of rotational invariants constructed from the velocity gradient tensor. Compared with the traditional divergence-vorticity decomposition, these invariants, defined as coefficients of the characteristic equation of the velocity gradient tensor, enable a complete classification of all possible flow patterns in the dark-matter comoving frame, including both potential and vortical flows. We show that this tool, first introduced in turbulence two decades ago, is very useful for understanding the evolution of the cosmic web structure, and in classifying its morphology. Before shell crossing, different categories of potential flow are highly associated with the cosmic web structure because of the coherent evolution of density and velocity. This correspondence is even preserved at some level when vorticity is generated after shell crossing. The evolution from the potential to vortical flow can be traced continuously by these invariants. With the help of this tool, we show that the vorticity is generated in a particular way that is highly correlated with the large-scale structure. This includes a distinct spatial distribution and different types of alignment between the cosmic web and vorticity direction for various vortical flows. Incorporating shell crossing into closed dynamical systems is highly non-trivial, but we propose a possible statistical explanation for some of the phenomena relating to the internal structure of the three-dimensional invariant space.
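
    Concretely, the invariants are the coefficients of the characteristic equation of the velocity gradient tensor A_ij = dv_i/dx_j, namely lambda^3 + P*lambda^2 + Q*lambda + R = 0. A minimal sketch (the example tensor is arbitrary, not simulation data):

        import numpy as np

        def velocity_gradient_invariants(A):
            # Coefficients of det(lambda*I - A) = lambda^3 + P lambda^2 + Q lambda + R
            P = -np.trace(A)
            Q = 0.5 * (np.trace(A) ** 2 - np.trace(A @ A))
            R = -np.linalg.det(A)
            return P, Q, R

        A = np.array([[0.1, -0.3, 0.0],
                      [0.2,  0.0, 0.1],
                      [0.0,  0.1, -0.2]])
        P, Q, R = velocity_gradient_invariants(A)
        print(P, Q, R)
        # Sanity check: every eigenvalue of A satisfies the characteristic equation
        for l in np.linalg.eigvals(A):
            assert abs(l**3 + P * l**2 + Q * l + R) < 1e-10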

  14. Induced core formation time in subcritical magnetic clouds by large-scale trans-Alfvénic flows

    SciTech Connect (OSTI)

    Kudoh, Takahiro; Basu, Shantanu E-mail: basu@uwo.ca

    2014-10-20

    We clarify the mechanism of accelerated core formation by large-scale nonlinear flows in subcritical magnetic clouds by finding a semi-analytical formula for the core formation time and describing the physical processes that lead to it. Recent numerical simulations show that nonlinear flows induce rapid ambipolar diffusion that leads to localized supercritical regions that can collapse. Here, we employ non-ideal magnetohydrodynamic simulations including ambipolar diffusion for gravitationally stratified sheets threaded by vertical magnetic fields. One of the horizontal dimensions is eliminated, resulting in a simpler two-dimensional simulation that can clarify the basic process of accelerated core formation. A parameter study of simulations shows that the core formation time is inversely proportional to the square of the flow speed when the flow speed is greater than the Alfvén speed. We find a semi-analytical formula that explains this numerical result. The formula also predicts that the core formation time is about three times shorter than that with no turbulence, when the turbulent speed is comparable to the Alfvén speed.
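    The abstract quotes the scaling of the semi-analytical result but not the formula itself. Purely as an illustrative closed form consistent with the two quoted statements (an inverse-square dependence for super-Alfvénic flows, and a factor-of-three reduction when the flow speed equals the Alfvén speed), one could write, as an assumption rather than the authors' actual expression,

      t_{\mathrm{form}}(v_0) \approx \frac{t_0}{1 + 2\,(v_0/v_A)^2},

    where t_0 is the core formation time without turbulence, v_0 the large-scale flow speed, and v_A the Alfvén speed.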

  15. A large-scale structure at redshift 1.71 in the Lockman Hole

    SciTech Connect (OSTI)

    Henry, J. Patrick; Hasinger, Günther; Suh, Hyewon; Aoki, Kentaro; Finoguenov, Alexis; Fotopoulou, Sotiria; Salvato, Mara; Tanaka, Masayuki

    2014-01-01

    We previously identified LH146, a diffuse X-ray source in the Lockman Hole, as a galaxy cluster at redshift 1.753. The redshift was based on one spectroscopic value, buttressed by seven additional photometric redshifts. We confirm here the previous spectroscopic redshift and present concordant spectroscopic redshifts for an additional eight galaxies. The average of these nine redshifts is 1.714 ± 0.012 (error on the mean). Scrutiny of the galaxy distribution in redshift space and the plane of the sky shows that there are two concentrations of galaxies near the X-ray source. In addition, there are three diffuse X-ray sources spread along the axis connecting the galaxy concentrations. LH146 is one of these three and lies approximately at the center of the two galaxy concentrations and the outer two diffuse X-ray sources. We thus conclude that LH146 is at the redshift initially reported but it is not a single virialized galaxy cluster, as previously assumed. Rather, it appears to mark the approximate center of a larger region containing more objects. For brevity, we refer to all these objects and their alignments as a large-scale structure. The exact nature of LH146 itself remains unclear.
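    As a quick consistency check on the quoted uncertainty (our arithmetic, not a number from the paper): the error on the mean of N independent redshifts is the sample scatter divided by the square root of N,

      \sigma_{\bar{z}} = \frac{s}{\sqrt{N}} \quad\Rightarrow\quad s \approx 0.012 \times \sqrt{9} \approx 0.036,

    so the nine spectroscopic redshifts scatter by roughly 0.036 about the mean of 1.714.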

  16. NV Energy Large-Scale Photovoltaic Integration Study: Intra-Hour Dispatch and AGC Simulation

    SciTech Connect (OSTI)

    Lu, Shuai; Etingov, Pavel V.; Meng, Da; Guo, Xinxin; Jin, Chunlian; Samaan, Nader A.

    2013-01-02

    The uncertainty and variability with photovoltaic (PV) generation make it very challenging to balance power system generation and load, especially under high penetration cases. Higher reserve requirements and more cycling of conventional generators are generally anticipated for large-scale PV integration. However, whether the existing generation fleet is flexible enough to handle the variations and how well the system can maintain its control performance are difficult to predict. The goal of this project is to develop a software program that can perform intra-hour dispatch and automatic generation control (AGC) simulation, by which the balancing operations of a system can be simulated to answer the questions posed above. The simulator, named Electric System Intra-Hour Operation Simulator (ESIOS), uses the NV Energy southern system as a study case, and models the system’s generator configurations, AGC functions, and operator actions to balance system generation and load. Actual dispatch of AGC generators and control performance under various PV penetration levels can be predicted by running ESIOS. With data about the load, generation, and generator characteristics, ESIOS can perform similar simulations and assess variable generation integration impacts for other systems as well. This report describes the design of the simulator and presents the study results showing the PV impacts on NV Energy real-time operations.
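    As a rough sketch of what an intra-hour dispatch/AGC simulator such as ESIOS has to do at each control step, the fragment below forms an area control error and distributes a correction across AGC units; the structure, gains, sign convention, and data are assumptions for illustration and are not details of ESIOS itself.

      import numpy as np

      def agc_step(freq_dev_hz, tie_error_mw, agc_units, beta_mw_per_hz=100.0, gain=0.2):
          # One illustrative AGC step: form the area control error (ACE) with the
          # textbook convention ACE = tie-line error + beta * frequency deviation
          # (beta > 0), then move each AGC unit in proportion to its participation
          # factor while respecting ramp and capacity limits.
          ace = tie_error_mw + beta_mw_per_hz * freq_dev_hz
          correction = -gain * ace
          for u in agc_units:
              move = np.clip(correction * u["participation"],
                             -u["ramp_mw_per_step"], u["ramp_mw_per_step"])
              u["setpoint_mw"] = np.clip(u["setpoint_mw"] + move, u["min_mw"], u["max_mw"])
          return ace

      units = [{"setpoint_mw": 150.0, "participation": 0.6, "ramp_mw_per_step": 2.0,
                "min_mw": 50.0, "max_mw": 300.0},
               {"setpoint_mw": 80.0, "participation": 0.4, "ramp_mw_per_step": 1.0,
                "min_mw": 20.0, "max_mw": 120.0}]
      print(agc_step(freq_dev_hz=-0.02, tie_error_mw=5.0, agc_units=units), units)

    A full simulator of this kind wraps such a step in an intra-hour loop driven by load and PV forecast errors, which is essentially what the report describes ESIOS doing for the NV Energy southern system.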

  17. Using calibrated engineering models to predict energy savings in large-scale geothermal heat pump projects

    SciTech Connect (OSTI)

    Shonder, J.A.; Hughes, P.J.; Thornton, J.W.

    1998-10-01

    Energy savings performance contracting (ESPC) is now receiving greater attention as a means of implementing large-scale energy conservation projects in housing. Opportunities for such projects exist for military housing, federally subsidized low-income housing, and planned communities (condominiums, townhomes, senior centers), to name a few. Accurate prior (to construction) estimates of the energy savings in these projects reduce risk, decrease financing costs, and help avoid post-construction disputes over performance contract baseline adjustments. This paper demonstrates an improved method of estimating energy savings before construction takes place. Using an engineering model calibrated to pre-construction energy-use data collected in the field, this method is able to predict actual energy savings to a high degree of accuracy. This is verified with post-construction energy-use data from a geothermal heat pump ESPC at Fort Polk, Louisiana. This method also allows determination of the relative impact of the various energy conservation measures installed in a comprehensive energy conservation project. As an example, the breakout of savings at Fort Polk for the geothermal heat pumps, desuperheaters, lighting retrofits, and low-flow hot water outlets is provided.
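    The calibration step described here can be sketched as a least-squares fit of a simple engineering model to pre-construction energy-use data, after which the calibrated baseline is compared with a post-retrofit model to estimate savings. The change-point model, the 35% cooling-slope reduction, and all numbers below are illustrative assumptions standing in for the far more detailed model used in the paper.

      import numpy as np
      from scipy.optimize import curve_fit

      def cooling_changepoint(t_out, base_load, slope, balance_temp):
          # Three-parameter change-point model: monthly energy equals a base load
          # plus a cooling slope times degrees above a balance temperature.
          return base_load + slope * np.maximum(t_out - balance_temp, 0.0)

      # Illustrative monthly outdoor temperatures (degF) and pre-construction energy use (kWh).
      t_out = np.array([45, 50, 58, 66, 74, 82, 86, 85, 78, 67, 56, 47], float)
      energy_pre = np.array([900, 905, 930, 1010, 1180, 1420, 1530, 1500,
                             1300, 1040, 925, 905], float)

      params, _ = curve_fit(cooling_changepoint, t_out, energy_pre, p0=[900.0, 20.0, 60.0])
      baseline = cooling_changepoint(t_out, *params)          # calibrated baseline model

      # Hypothetical retrofit assumed to cut the cooling slope by 35%,
      # standing in for the geothermal heat pump and other measures.
      retrofit = cooling_changepoint(t_out, params[0], 0.65 * params[1], params[2])
      print("predicted annual savings (kWh):", round((baseline - retrofit).sum(), 1))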

  18. Using Calibrated Engineering Models To Predict Energy Savings In Large-Scale Geothermal Heat Pump Projects

    SciTech Connect (OSTI)

    Shonder, John A; Hughes, Patrick; Thornton, Jeff W.

    1998-01-01

    Energy savings performance contracting (ESPC) is now receiving greater attention as a means of implementing large-scale energy conservation projects in housing. Opportunities for such projects exist for military housing, federally subsidized low-income housing, and planned communities (condominiums, townhomes, senior centers), to name a few. Accurate prior (to construction) estimates of the energy savings in these projects reduce risk, decrease financing costs, and help avoid post-construction disputes over performance contract baseline adjustments. This paper demonstrates an improved method of estimating energy savings before construction takes place. Using an engineering model calibrated to pre-construction energy-use data collected in the field, this method is able to predict actual energy savings to a high degree of accuracy. This is verified with post-construction energy-use data from a geothermal heat pump ESPC at Fort Polk, Louisiana. This method also allows determination of the relative impact of the various energy conservation measures installed in a comprehensive energy conservation project. As an example, the breakout of savings at Fort Polk for the geothermal heat pumps, desuperheaters, lighting retrofits, and low-flow hot water outlets is provided.

  19. In-Flight Measurement of the Absolute Energy Scale of the Fermi Large Area Telescope

    SciTech Connect (OSTI)

    Ackermann, M.; Ajello, M.; Allafort, A.; Atwood, W.B.; Axelsson, M.; Baldini, L.; Barbiellini, G.; Bastieri, D.; Bechtol, K.; Bellazzini, R.; Berenji, B.; Bloom, E.D.; Bonamente, E.; Borgland, A.W.; Bouvier, A.; Bregeon, J.; Brez, A.; Brigida, M.; Bruel, P.; Buehler, R.; Buson, S.; /more authors..

    2012-09-20

    The Large Area Telescope (LAT) on-board the Fermi Gamma-ray Space Telescope is a pair-conversion telescope designed to survey the gamma-ray sky from 20 MeV to several hundreds of GeV. In this energy band there are no astronomical sources with sufficiently well known and sharp spectral features to allow an absolute calibration of the LAT energy scale. However, the geomagnetic cutoff in the cosmic ray electron-plus-positron (CRE) spectrum in low Earth orbit does provide such a spectral feature. The energy and spectral shape of this cutoff can be calculated with the aid of a numerical code tracing charged particles in the Earth's magnetic field. By comparing the cutoff value with that measured by the LAT in different geomagnetic positions, we have obtained several calibration points between {approx}6 and {approx}13 GeV with an estimated uncertainty of {approx}2%. An energy calibration with such high accuracy reduces the systematic uncertainty in LAT measurements of, for example, the spectral cutoff in the emission from gamma ray pulsars.
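    Schematically, each geomagnetic position yields one calibration point: the ratio of the cutoff energy measured by the instrument to the cutoff energy predicted by the particle-tracing code. Combining the points, here with an inverse-variance weighted mean and made-up numbers, gives the absolute energy-scale factor; this is a sketch of the idea, not the collaboration's actual procedure.

      import numpy as np

      # Hypothetical calibration points: predicted geomagnetic cutoff (GeV),
      # measured cutoff (GeV), and measurement uncertainty -- illustrative values only.
      predicted = np.array([6.2, 8.1, 10.4, 12.8])
      measured  = np.array([6.3, 8.3, 10.5, 13.1])
      sigma     = np.array([0.15, 0.18, 0.22, 0.30])

      ratios = measured / predicted                   # per-point energy-scale factor
      weights = 1.0 / (sigma / predicted) ** 2        # inverse-variance weights on the ratios
      scale = np.sum(weights * ratios) / np.sum(weights)
      scale_err = 1.0 / np.sqrt(np.sum(weights))
      print(f"absolute energy scale = {scale:.3f} +/- {scale_err:.3f}")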

  20. Galaxy evolution and large-scale structure in the far-infrared. I. IRAS pointed observations

    SciTech Connect (OSTI)

    Lonsdale, C.J.; Hacking, P.B.

    1989-04-01

    Redshifts for 66 galaxies were obtained from a sample of 93 60-micron sources detected serendipitously in 22 IRAS deep pointed observations, covering a total area of 18.4 sq deg. The flux density limit of this survey is 150 mJy, 4 times fainter than the IRAS Point Source Catalog (PSC). The luminosity function is similar in shape to those previously published for samples selected from the PSC, with a median redshift of 0.048 for the fainter sample, but is shifted to higher space densities. There is evidence that some of the excess number counts in the deeper sample can be explained in terms of a large-scale density enhancement beyond the Pavo-Indus supercluster. In addition, the faintest counts in the new sample confirm the result of Hacking et al. (1989) that faint IRAS 60-micron source counts lie significantly in excess of an extrapolation of the PSC counts assuming no luminosity or density evolution. 81 refs.

  1. Building a Large Scale Climate Data System in Support of HPC Environment

    SciTech Connect (OSTI)

    Wang, Feiyi; Harney, John F; Shipman, Galen M

    2011-01-01

    The Earth System Grid Federation (ESG) is a large-scale, multi-institutional, interdisciplinary project that aims to provide climate scientists and impact policy makers worldwide with a web-based and client-based platform to publish, disseminate, compare, and analyze ever-increasing volumes of climate-related data. This paper describes our practical experiences in the design, development, and operation of such a system. In particular, we focus on the support of the data lifecycle from a high performance computing (HPC) perspective that is critical to the end-to-end scientific discovery process. We discuss three subjects that interconnect the consumer and producer of scientific datasets: (1) the motivations, complexities, and solutions of deep storage access and sharing in a tightly controlled environment; (2) the importance of scalable and flexible data publication/population; and (3) high-performance indexing and search of data with geospatial properties. These perceived corner issues collectively contributed to the overall user experience and proved to be as important as any other architectural design considerations. Although the requirements and challenges are rooted in and discussed from a climate science domain context, we believe the architectural problems, ideas, and solutions discussed in this paper are generally useful and applicable in a larger scope.

  2. The power of event-driven analytics in Large Scale Data Processing

    ScienceCinema (OSTI)

    None

    2011-04-25

    FeedZai is a software company specialized in creating high-throughput, low-latency data processing solutions. FeedZai develops a product called "FeedZai Pulse" for continuous event-driven analytics that makes application development easier for end users. It automatically calculates key performance indicators and baselines, showing how current performance differs from previous history and creating timely business intelligence updated to the second. The tool does predictive analytics and trend analysis, displaying data on real-time web-based graphics. In 2010 FeedZai won the European EBN Smart Entrepreneurship Competition in the Digital Models category, being considered one of the "top-20 smart companies in Europe". The main objective of this seminar/workshop is to explore the topic of large-scale data processing using Complex Event Processing and, in particular, the possible uses of Pulse in the scope of the data processing needs of CERN. Pulse is available as open source and can be licensed for both non-commercial and commercial applications. FeedZai is interested in exploring possible synergies with CERN in high-volume, low-latency data processing applications. The seminar will be structured in two sessions, the first aimed at presenting the general scope of FeedZai's activities and the second focused on Pulse itself: 10:00-11:00 FeedZai and Large Scale Data Processing (Introduction to FeedZai; FeedZai Pulse and Complex Event Processing; Demonstration; Use Cases and Applications; Conclusion and Q&A); 11:00-11:15 Coffee break; 11:15-12:30 FeedZai Pulse Under the Hood (A First FeedZai Pulse Application; PulseQL overview; Defining KPIs and Baselines; Conclusion and Q&A). About the speakers: Nuno Sebastião is the CEO of FeedZai. Having worked for many years at the European Space Agency (ESA), he was responsible for the overall design and development of the agency's satellite simulation infrastructure. Having left ESA to found FeedZai, Nuno is currently responsible for the whole operations of the company. Nuno holds an M.Eng. in Informatics Engineering from the University of Coimbra and an MBA from the London Business School. Paulo Marques is the CTO of FeedZai, responsible for product development. Paulo is an Assistant Professor at the University of Coimbra, in the area of Distributed Data Processing, and an Adjunct Associate Professor at Carnegie Mellon, in the US. In the past Paulo led a large number of projects for institutions such as ESA, Microsoft Research, SciSys, and Siemens, and is now fully dedicated to FeedZai. Paulo holds a Ph.D. in Distributed Systems from the University of Coimbra.

  3. The power of event-driven analytics in Large Scale Data Processing

    SciTech Connect (OSTI)

    2011-02-24

    FeedZai is a software company specialized in creating high-throughput, low-latency data processing solutions. FeedZai develops a product called "FeedZai Pulse" for continuous event-driven analytics that makes application development easier for end users. It automatically calculates key performance indicators and baselines, showing how current performance differs from previous history and creating timely business intelligence updated to the second. The tool does predictive analytics and trend analysis, displaying data on real-time web-based graphics. In 2010 FeedZai won the European EBN Smart Entrepreneurship Competition in the Digital Models category, being considered one of the "top-20 smart companies in Europe". The main objective of this seminar/workshop is to explore the topic of large-scale data processing using Complex Event Processing and, in particular, the possible uses of Pulse in the scope of the data processing needs of CERN. Pulse is available as open source and can be licensed for both non-commercial and commercial applications. FeedZai is interested in exploring possible synergies with CERN in high-volume, low-latency data processing applications. The seminar will be structured in two sessions, the first aimed at presenting the general scope of FeedZai's activities and the second focused on Pulse itself: 10:00-11:00 FeedZai and Large Scale Data Processing (Introduction to FeedZai; FeedZai Pulse and Complex Event Processing; Demonstration; Use Cases and Applications; Conclusion and Q&A); 11:00-11:15 Coffee break; 11:15-12:30 FeedZai Pulse Under the Hood (A First FeedZai Pulse Application; PulseQL overview; Defining KPIs and Baselines; Conclusion and Q&A). About the speakers: Nuno Sebastião is the CEO of FeedZai. Having worked for many years at the European Space Agency (ESA), he was responsible for the overall design and development of the agency's satellite simulation infrastructure. Having left ESA to found FeedZai, Nuno is currently responsible for the whole operations of the company. Nuno holds an M.Eng. in Informatics Engineering from the University of Coimbra and an MBA from the London Business School. Paulo Marques is the CTO of FeedZai, responsible for product development. Paulo is an Assistant Professor at the University of Coimbra, in the area of Distributed Data Processing, and an Adjunct Associate Professor at Carnegie Mellon, in the US. In the past Paulo led a large number of projects for institutions such as ESA, Microsoft Research, SciSys, and Siemens, and is now fully dedicated to FeedZai. Paulo holds a Ph.D. in Distributed Systems from the University of Coimbra.

  4. Summary of results from velocity profile tests and wastage tests in support of LLTR series II test A-4 [Large Leak Test Rig]

    SciTech Connect (OSTI)

    Greene, D.A.

    1981-01-01

    The following conclusions were drawn from the experimental program conducted in support of LLTR (Large Leak Test Rig) Series II Test A-4: A fabrication technique for making precise slits was developed. The wastage boundary agrees with the velocity profile boundary. Circumferential slit angles would have to be 120° to ensure adequate coverage of adjacent tubes. A 120° circumferential slit weakens the tubes such that maintaining the desired slit dimensions for the LLTI application is not considered practical; use of an intermittent slit geometry would be required. 120° slits, precisely machined and precisely aligned with target tubes, produced different penetration rates on adjacent tubes. Production of simultaneous failures in the LLTI with a 120° slit or simulated interrupted slit is not considered credible.

  5. Field Testing of a Wet FGD Additive for Enhanced Mercury Control - Task 5 Full-Scale Test Results

    SciTech Connect (OSTI)

    Gary Blythe; MariJon Owens

    2007-12-01

    This Topical Report summarizes progress on Cooperative Agreement DE-FC26-04NT42309, 'Field Testing of a Wet FGD Additive'. The objective of the project is to demonstrate the use of two flue gas desulfurization (FGD) additives, Evonik Degussa Corporation's TMT-15 and Nalco Company's Nalco 8034, to prevent the re-emission of elemental mercury (Hg{sup 0}) in flue gas exiting wet FGD systems on coal-fired boilers. Furthermore, the project intends to demonstrate whether the additive can be used to precipitate most of the mercury (Hg) removed in the wet FGD system as a fine salt that can be separated from the FGD liquor and bulk solid byproducts for separate disposal. The project is conducting pilot- and full-scale tests of the additives in wet FGD absorbers. The tests are intended to determine required additive dosages to prevent Hg{sup 0} re-emissions and to separate mercury from the normal FGD byproducts for three coal types: Texas lignite/Powder River Basin (PRB) coal blend, high-sulfur Eastern bituminous coal, and low-sulfur Eastern bituminous coal. The project team consists of URS Group, Inc., EPRI, Luminant Power (was TXU Generation Company LP), Southern Company, IPL (an AES company), Evonik Degussa Corporation and the Nalco Company. Luminant Power has provided the Texas lignite/PRB co-fired test site for pilot FGD tests and cost sharing. Southern Company has provided the low-sulfur Eastern bituminous coal host site for wet scrubbing tests, as well as the pilot- and full-scale jet bubbling reactor (JBR) FGD systems tested. IPL provided the high-sulfur Eastern bituminous coal full-scale FGD test site and cost sharing. Evonik Degussa Corporation is providing the TMT-15 additive, and the Nalco Company is providing the Nalco 8034 additive. Both companies are also supplying technical support to the test program as in-kind cost sharing. The project is being conducted in six tasks. Of the six project tasks, Task 1 involves project planning and Task 6 involves management and reporting. The other four tasks involve field testing on FGD systems, either at pilot or full scale. The four tasks include: Task 2 - Pilot Additive Testing in Texas Lignite Flue Gas; Task 3 - Full-scale FGD Additive Testing in High-sulfur Eastern Bituminous Flue Gas; Task 4 - Pilot Wet Scrubber Additive Tests at Plant Yates; and Task 5 - Full-scale Additive Tests at Plant Yates. The pilot-scale tests and the full-scale test using high-sulfur coal were completed in 2005 and 2006 and have been previously reported. This topical report presents the results from the Task 5 full-scale additive tests, conducted at Southern Company's Plant Yates Unit 1. Both additives were tested there.

  6. New Membrane Technology for Post-Combustion Carbon Capture Begins Pilot-Scale Test

    Broader source: Energy.gov [DOE]

    A promising new technology sponsored by the U.S. Department of Energy (DOE) for economically capturing 90 percent of the carbon dioxide (CO2) emitted from a coal-burning power plant has begun pilot-scale testing.

  7. Novel Carbon Capture Solvent Begins Pilot-Scale Testing for Emissions Control

    Broader source: Energy.gov [DOE]

    Pilot-scale testing of an advanced technology for economically capturing carbon dioxide (CO2) from flue gas has begun at the National Carbon Capture Center (NCCC) in Wilsonville, Ala.

  8. Multiple pollutant removal using the condensing heat exchanger: Preliminary test plan for Task 2, Pilot scale IFGT testing

    SciTech Connect (OSTI)

    Jankura, B.J.

    1995-11-01

    The purpose of Task 2 (IFGT Pilot-Scale Tests at the B&W Alliance Research Center) is to evaluate the emission reduction performance of the Integrated Flue Gas Treatment (IFGT) process for coal-fired applications. The IFGT system is a two-stage condensing heat exchanger that captures multiple pollutants while recovering waste heat. The IFGT technology offers the potential of addressing the emission of SO{sub 2} and particulate from electric utilities currently regulated under the Phase 1 and Phase 2 requirements defined in Title IV, and many of the air pollutants that will soon be regulated under Title III of the Clean Air Act. The performance data will be obtained at pilot-scale conditions similar to full-scale operating systems. The Task 2 IFGT tests have been designed to investigate several aspects of IFGT process conditions at a broader range of variables than would be feasible at a larger-scale facility. The data from these tests greatly expand the IFGT performance database for coals and are needed for the technology to progress from the component engineering phase to system integration and commercialization. The performance parameters that will be investigated are as follows: SO{sub 2} removal; particulate removal; removal of mercury and other heavy metals; NO{sub x} removal; HF and HCl removal; NH{sub 3} removal; ammonia-sulfur compounds generation; and steam injection for particle removal. For all of the pollutant removal tests, removal efficiency will be based on measurements at the inlet and outlet of the IFGT facility. Heat recovery measurements will also be made during these tests to demonstrate the heat recovery provided by the IFGT technology. This report provides a preliminary test plan for all of the Task 2 pilot-scale IFGT tests.
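    For reference, the removal efficiency referred to here is the fractional reduction in pollutant loading between the IFGT inlet and outlet; written with explicit flue-gas flow rates (how the inlet and outlet measurements are combined is our assumption, not a formula quoted in the plan),

      \eta = 1 - \frac{C_{\mathrm{out}}\,\dot{V}_{\mathrm{out}}}{C_{\mathrm{in}}\,\dot{V}_{\mathrm{in}}},

    where C is the measured pollutant concentration and \dot{V} the flue-gas volumetric flow rate at each location.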

  9. Large Scale Computing and Storage Requirements for Basic Energy Sciences Research

    SciTech Connect (OSTI)

    Gerber, Richard; Wasserman, Harvey

    2011-03-31

    The National Energy Research Scientific Computing Center (NERSC) is the leading scientific computing facility supporting research within the Department of Energy's Office of Science. NERSC provides high-performance computing (HPC) resources to approximately 4,000 researchers working on about 400 projects. In addition to hosting large-scale computing facilities, NERSC provides the support and expertise scientists need to effectively and efficiently use HPC systems. In February 2010, NERSC, DOE's Office of Advanced Scientific Computing Research (ASCR) and DOE's Office of Basic Energy Sciences (BES) held a workshop to characterize HPC requirements for BES research through 2013. The workshop was part of NERSC's legacy of anticipating users' future needs and deploying the necessary resources to meet these demands. Workshop participants reached a consensus on several key findings, in addition to achieving the workshop's goal of collecting and characterizing computing requirements. The key requirements for scientists conducting research in BES are: (1) Larger allocations of computational resources; (2) Continued support for standard application software packages; (3) Adequate job turnaround time and throughput; and (4) Guidance and support for using future computer architectures. This report expands upon these key points and presents others. Several 'case studies' are included as significant representative samples of the needs of science teams within BES. Research teams' scientific goals, computational methods of solution, current and 2013 computing requirements, and special software and support needs are summarized in these case studies. Also included are researchers' strategies for computing in the highly parallel, 'multi-core' environment that is expected to dominate HPC architectures over the next few years. NERSC has strategic plans and initiatives already underway that address key workshop findings. This report includes a brief summary of those relevant to issues raised by researchers at the workshop.

  10. Autonomous UAV-Based Mapping of Large-Scale Urban Firefights

    SciTech Connect (OSTI)

    Snarski, S; Scheibner, K F; Shaw, S; Roberts, R S; LaRow, A; Oakley, D; Lupo, J; Neilsen, D; Judge, B; Forren, J

    2006-03-09

    This paper describes experimental results from a live-fire data collect designed to demonstrate the ability of IR and acoustic sensing systems to detect and map high-volume gunfire events from tactical UAVs. The data collect supports an exploratory study of the FightSight concept, in which an autonomous UAV-based sensor exploitation and decision support capability is being proposed to provide dynamic situational awareness for large-scale battalion-level firefights in cluttered urban environments. FightSight integrates IR imagery, acoustic data, and 3D scene context data with prior time information in a multi-level, multi-step probabilistic fusion process to reliably locate and map the array of urban firing events and firepower movements and trends associated with the evolving urban battlefield situation. Described here are sensor results from live-fire experiments involving simultaneous firing of multiple sub/supersonic weapons (2 AK47, 2 M16, 1 Beretta, 1 mortar, 1 rocket) with high optical and acoustic clutter at ranges up to 400 m. Sensor-shooter-target configurations and clutter were designed to simulate UAV sensing conditions for a high-intensity firefight in an urban environment. Sensor systems evaluated were an IR bullet tracking system by Lawrence Livermore National Laboratory (LLNL) and an acoustic gunshot detection system by Planning Systems, Inc. (PSI). The results demonstrate convincingly the ability of the LLNL and PSI sensor systems to accurately detect, separate, and localize multiple shooters and the associated shot directions during a high-intensity firefight (77 rounds in 5 s) in a high acoustic and optical clutter environment with no false alarms. Preliminary fusion processing was also examined and demonstrated an ability to distinguish co-located shooters (shooter density), determine range to <0.5 m accuracy at 400 m, and identify weapon type.

  11. Final Report Full-Scale Test of DWPF Advanced Liquid-Level and Density Measurement Bubblers

    SciTech Connect (OSTI)

    Duignan, M.R.; Weeks, G.E.

    1999-07-01

    As requested by the Technical Task Request (1), a full-scale test was carried out on several different liquid-level measurement bubblers as recommended from previous testing (2). This final report incorporates photographic evidence (Appendix B) of the bubblers at different stages of testing, along with the preliminary results (Appendix C), which were previously reported (3), and instrument calibration data (Appendix D); while this report contains more detailed information than previously reported (3), the conclusions remain the same. The test was performed under highly prototypic conditions from November 26, 1996 to January 23, 1997 using the full-scale SRAT/SME tank test facilities located in the 672-T building at TNX. Two different types of advanced bubblers were subjected to approximately 58 days of slurry operation, 14 days of which the slurry was brought to boiling temperatures. The test showed that the large-diameter tube bubbler (2.64 inches inside diameter) operated successfully throughout the 2-month test by not plugging with the glass-frit-laden slurry, which was maintained at a minimum temperature of 50 C and held at boiling temperatures for several days. However, a weekly blow-down with air or water is recommended to minimize the slurry that builds up. The small-diameter porous tube bubbler (0.62 inch inside diameter; water flow > 4 milliliters/minute, about 1.5 gallons/day) operated successfully on a daily basis in the glass-frit-laden slurry maintained at a minimum temperature of 50 C and held at boiling temperatures for several days. However, a daily blow-down with air, or air and water, is necessary to maintain accurate readings. For the small-diameter porous tube bubblers with lower water flows there were varying levels of success, and those tubes would have to be cleaned by blowing with air, or air and water, several times a day to keep them free of plugs; this may be too labor intensive for practical use. All of the large-diameter bubbler tubes tested could be readily cleaned in place by blowing them down with either high-pressure air or water (approximately 90 psig). While the use of both air and water produced the cleanest bubbler, using just air removed most of the slurry build-up, and the use of water resulted in a basically slurry-free surface. For the small-diameter bubbler tubes it was necessary to use high-pressure air and water (approximately 90 psig) to clean them effectively. The water was only sent through the porous jacket and not introduced down the air line. However, even under these conditions there was one case where a plug was not removed when both air and water were used. Primary recommendation: The large-diameter probe is the best choice, since none of the three tested plugged during the 2-month test period to the point of compromising the liquid-level measurement. However, after a week's operation at boiling temperatures, several inches of a soft sludge builds up within the tubes. This sludge can be easily removed in place with either high-pressure air or water (approximately 90 psig). A full-scale verification test should be carried out in S-Area to confirm this conclusion. Secondary recommendation: The small-diameter porous tube bubbler is recommended when an access port cannot accommodate the larger-diameter probe. Bubbler #1 operated accurately during most of the test period. This probe had the highest water flow rate (approximately 1.6 gallons/day) and the least distance from the slurry upper surface (37 inches). This probe can be made to operate accurately at lower depths if the 8-inch-long porous tube is made longer and the water flow rate made higher. Substituting the current level and density probes (Holledge) with bubbler probes will result in significant cost savings (inexpensive materials, less labor to manufacture, less labor to maintain, and less down time due to less frequent instrument replacement).

  12. Estimation of host rock thermal conductivities using thetemperature data from the drift-scale test at Yucca Mountain,Nevada

    SciTech Connect (OSTI)

    Mukhopadhyay, Sumitra; Tsang, Y.W.

    2003-11-25

    A large volume of temperature data has been collected from a very large, underground heater test, the Drift Scale Test (DST) at Yucca Mountain, Nevada. The DST was designed to obtain thermal, hydrological, mechanical, and chemical (THMC) data in the unsaturated fractured rock of Yucca Mountain. Sophisticated numerical models have been developed to analyze the collected THMC data. In these analyses, thermal conductivities measured from core samples have been used as input parameters to the model. However, it was not known whether these core measurements represented the true field-scale thermal conductivity of the host rock. Realizing these difficulties, elaborate, computationally intensive geostatistical simulations have also been performed to obtain field-scale thermal conductivity of the host rock from the core measurements. In this paper, we use the temperature data from the DST as the input (instead of the measured core-scale thermal conductivity values) to develop an estimate of the field-scale thermal conductivity values. Assuming a conductive thermal regime, we develop an analytical solution for the temperature rise in the host rock of the DST; and using a nonlinear fitting routine, we obtain a best-fit estimate of field-scale thermal conductivity for the DST host rock. The temperature data collected from the DST shows clear evidence of two distinct thermal regimes: a zone below boiling (wet) and a zone above boiling (dry). We obtain estimates of thermal conductivity for both the wet and dry zones. We also analyze the sensitivity of these estimates to the input heating power of the DST.
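    The fitting approach described here can be illustrated with a much simpler conduction model than the authors' analytical solution: below, the temperature rise around a constant line heat source in an infinite medium, Delta T = (q'/4 pi k) E1(r^2 / 4 alpha t), is fit to synthetic sensor data to recover an effective conductivity. The line-source geometry, heat load, heat capacity, and data are assumptions for illustration only.

      import numpy as np
      from scipy.optimize import curve_fit
      from scipy.special import exp1

      Q_LINE = 1450.0                 # assumed heater power per unit drift length (W/m)
      RHO_CP = 2.0e6                  # assumed volumetric heat capacity of the rock (J/m^3/K)
      T_SEC = 2.0 * 365 * 24 * 3600   # elapsed heating time (~2 years, in seconds)

      def temp_rise(r_t, k):
          # Temperature rise of a constant line source in an infinite conductive medium.
          r, t = r_t
          alpha = k / RHO_CP          # thermal diffusivity implied by the trial conductivity
          return Q_LINE / (4.0 * np.pi * k) * exp1(r**2 / (4.0 * alpha * t))

      # Synthetic "measured" temperature rises at several radii, generated with
      # k = 2.0 W/m/K plus noise -- stand-ins for DST temperature sensor data.
      radii = np.array([2.0, 3.0, 4.0, 6.0, 8.0])
      times = np.full_like(radii, T_SEC)
      data = temp_rise((radii, times), 2.0) + np.random.default_rng(0).normal(0.0, 0.5, radii.size)

      k_fit, _ = curve_fit(temp_rise, (radii, times), data, p0=[1.5])
      print(f"best-fit field-scale thermal conductivity: {k_fit[0]:.2f} W/m/K")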

  13. Microsoft PowerPoint - 2-A-3-OK-Real-Time Data Infrastructure for Large Scale Wind Fleets.pptx

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Real-Time Data Infrastructure for Large Scale Wind Fleets: Return on Investment vs. Fundamental Business Requirements (OSIsoft, 2011). The presentation frames wind-fleet reliability in terms of uptime, profitable wind plants, and overall equipment effectiveness (OEE = Availability % × Production % × Quality %).

  14. B61-12 Life Extension Program Undergoes First Full-Scale Wind Tunnel Test

    National Nuclear Security Administration (NNSA)

    April 14, 2014, WASHINGTON, D.C. - The National Nuclear Security Administration (NNSA) announced today that its Sandia National Laboratories successfully completed the first full-scale wind tunnel test of the B61-12 as part of the NNSA's ongoing effort to refurbish the B61 nuclear bomb. The purpose of this test was to characterize counter torque, the interaction between the spin ...

  15. Dewatering Treatment Scale-up Testing Results of Hanford Tank Wastes

    SciTech Connect (OSTI)

    Tedeschi, A.R.; May, T.H.; Bryan, W.E.

    2008-07-01

    This report documents CH2M HILL Hanford Group Inc. (CH2M HILL) 2007 dryer testing results in Richland, WA at the AMEC Nuclear Ltd., GeoMelt Division (AMEC) Horn Rapids Test Site. It provides a discussion of scope and results to qualify the dryer system as a viable unit-operation in the continuing evaluation of the bulk vitrification process. A 10,000 liter (L) dryer/mixer was tested for supplemental treatment of Hanford tank low activity wastes, drying and mixing a simulated non-radioactive salt solution with glass forming minerals. Testing validated the full scale equipment for producing dried product similar to smaller scale tests, and qualified the dryer system for a subsequent integrated dryer/vitrification test using the same simulant and glass formers. The dryer system is planned for installation at the Hanford tank farms to dry/mix radioactive waste for final treatment evaluation of the supplemental bulk vitrification process. (authors)

  16. DEWATERING TREATMENT SCALE-UP TESTING RESULTS OF HANFORD TANK WASTES

    SciTech Connect (OSTI)

    TEDESCHI AR

    2008-01-23

    This report documents CH2M HILL Hanford Group Inc. (CH2M HILL) 2007 dryer testing results in Richland, WA at the AMEC Nuclear Ltd., GeoMelt Division (AMEC) Horn Rapids Test Site. It provides a discussion of scope and results to qualify the dryer system as a viable unit-operation in the continuing evaluation of the bulk vitrification process. A 10,000 liter (L) dryer/mixer was tested for supplemental treatment of Hanford tank low-activity wastes, drying and mixing a simulated non-radioactive salt solution with glass forming minerals. Testing validated the full scale equipment for producing dried product similar to smaller scale tests, and qualified the dryer system for a subsequent integrated dryer/vitrification test using the same simulant and glass formers. The dryer system is planned for installation at the Hanford tank farms to dry/mix radioactive waste for final treatment evaluation of the supplemental bulk vitrification process.

  17. Model based multivariable controller for large scale compression stations. Design and experimental validation on the LHC 18KW cryorefrigerator

    SciTech Connect (OSTI)

    Bonne, François; Bonnay, Patrick; Bradu, Benjamin

    2014-01-29

    In this paper, a multivariable model-based non-linear controller for Warm Compression Stations (WCS) is proposed. The strategy is to replace all the PID loops controlling the WCS with an optimally designed model-based multivariable loop. This new strategy leads to high stability and fast disturbance rejection such as those induced by a turbine or a compressor stop, a key-aspect in the case of large scale cryogenic refrigeration. The proposed control scheme can be used to have precise control of every pressure in normal operation or to stabilize and control the cryoplant under high variation of thermal loads (such as a pulsed heat load expected to take place in future fusion reactors such as those expected in the cryogenic cooling systems of the International Thermonuclear Experimental Reactor ITER or the Japan Torus-60 Super Advanced fusion experiment JT-60SA). The paper details how to set the WCS model up to synthesize the Linear Quadratic Optimal feedback gain and how to use it. After preliminary tuning at CEA-Grenoble on the 400W@1.8K helium test facility, the controller has been implemented on a Schneider PLC and fully tested first on the CERN's real-time simulator. Then, it was experimentally validated on a real CERN cryoplant. The efficiency of the solution is experimentally assessed using a reasonable operating scenario of start and stop of compressors and cryogenic turbines. This work is partially supported through the European Fusion Development Agreement (EFDA) Goal Oriented Training Program, task agreement WP10-GOT-GIRO.
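    The Linear Quadratic Optimal feedback gain mentioned here comes from solving an algebraic Riccati equation for the plant model. The sketch below does this for a small made-up two-state, two-input model rather than the actual warm compression station model identified by the authors.

      import numpy as np
      from scipy.linalg import solve_continuous_are

      # Illustrative linear model dx/dt = A x + B u, e.g. two coupled pressure
      # deviations driven by two actuator inputs (all values are assumptions).
      A = np.array([[-0.5,  0.2],
                    [ 0.1, -0.8]])
      B = np.array([[1.0, 0.0],
                    [0.0, 0.6]])
      Q = np.diag([10.0, 10.0])   # weight on pressure deviations
      R = np.diag([1.0, 1.0])     # weight on actuator effort

      P = solve_continuous_are(A, B, Q, R)    # solve the algebraic Riccati equation
      K = np.linalg.solve(R, B.T @ P)         # LQ optimal state feedback, u = -K x
      print("gain K:\n", K)
      print("closed-loop poles:", np.linalg.eigvals(A - B @ K))

    Replacing the individual PID loops with a single multivariable gain of this kind is what lets the controller coordinate all pressures at once, which is the property the authors exploit for fast disturbance rejection.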

  18. Large-scale Nanostructure Simulations from X-ray Scattering Data On Graphics Processor Clusters

    SciTech Connect (OSTI)

    Sarje, Abhinav; Pien, Jack; Li, Xiaoye; Chan, Elaine; Chourou, Slim; Hexemer, Alexander; Scholz, Arthur; Kramer, Edward

    2012-01-15

    X-ray scattering is a valuable tool for measuring the structural properties of materials used in the design and fabrication of energy-relevant nanodevices (e.g., photovoltaic, energy storage, battery, fuel, and carbon capture and sequestration devices) that are key to the reduction of carbon emissions. Although today's ultra-fast X-ray scattering detectors can provide tremendous information on the structural properties of materials, a primary challenge remains in the analyses of the resulting data. We are developing novel high-performance computing algorithms, codes, and software tools for the analyses of X-ray scattering data. In this paper we describe two such HPC algorithm advances. Firstly, we have implemented a flexible and highly efficient Grazing Incidence Small Angle Scattering (GISAXS) simulation code based on the Distorted Wave Born Approximation (DWBA) theory with C++/CUDA/MPI on a cluster of GPUs. Our code can compute the scattered light intensity from any given sample in all directions of space, thus allowing full construction of the GISAXS pattern. Preliminary tests on a single GPU show speedups over 125x compared to the sequential code, and almost linear speedup when executing across a GPU cluster with 42 nodes, resulting in an additional 40x speedup compared to using one GPU node. Secondly, for the structural fitting problems in inverse modeling, we have implemented a Reverse Monte Carlo simulation algorithm with C++/CUDA using one GPU. Since there are large numbers of parameters for fitting in the X-ray scattering simulation model, the earlier single-CPU code required weeks of runtime. Deploying the AccelerEyes Jacket/Matlab wrapper to use the GPU gave around 100x speedup over the pure CPU code. Our further C++/CUDA optimization delivered an additional 9x speedup.
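    The Reverse Monte Carlo fitting mentioned above is, at its core, a Metropolis loop over model parameters driven by the misfit between simulated and measured scattering. The sketch below shows that loop with a placeholder spherical form-factor model standing in for the GPU-accelerated DWBA/GISAXS kernel; the model, step sizes, and data are assumptions for illustration.

      import numpy as np

      def forward_model(params, q):
          # Placeholder scattering model (NOT the DWBA kernel): intensity of a
          # spherical form factor with radius and overall scale as free parameters.
          radius, scale = params
          x = q * radius
          return scale * (3.0 * (np.sin(x) - x * np.cos(x)) / x**3) ** 2

      def chi2(params, q, data, sigma):
          return np.sum(((forward_model(params, q) - data) / sigma) ** 2)

      rng = np.random.default_rng(0)
      q = np.linspace(0.02, 0.5, 200)
      data = forward_model([20.0, 1.0], q) * (1 + 0.03 * rng.standard_normal(q.size))
      sigma = 0.03 * data + 1e-6

      params = np.array([10.0, 0.5])                 # starting guess
      best = chi2(params, q, data, sigma)
      for _ in range(5000):                          # reverse Monte Carlo / Metropolis loop
          trial = params + rng.normal(0.0, [0.3, 0.02])
          c = chi2(trial, q, data, sigma)
          # accept better fits, or worse ones with a Boltzmann-like probability
          if c < best or rng.random() < np.exp(best - c):
              params, best = trial, c
      print("fitted (radius, scale):", params, "chi^2:", best)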

  19. Reducing Plug and Process Loads for a Large Scale, Low Energy Office Building: NREL's Research Support Facility; Preprint

    SciTech Connect (OSTI)

    Lobato, C.; Pless, S.; Sheppy, M.; Torcellini, P.

    2011-02-01

    This paper documents the design and operational plug and process load energy efficiency measures needed to allow a large-scale office building to reach ultra-high-efficiency building goals. The appendices of this document contain a wealth of documentation pertaining to plug and process load design in the RSF, including a list of the equipment that was selected for use.

  20. POC-scale testing of a dry triboelectrostatic separator for fine coal cleaning

    SciTech Connect (OSTI)

    R.-H. Yoon; G.H. Luttrell; A.D. Walters

    1999-10-01

    During the past quarter, the installation, testing and shakedown phases of commissioning the TES unit were completed (Tasks 4, 5.1 and 5.2). A representative from Carpco Inc. was on site to provide training in the operation of the test unit and assist with the initial test runs. Problems have been encountered with the recycle conveyor generating dust that neutralizes the particle charge. Testing has continued by batch feeding the unit while the recycle conveying problem is being solved. Good separations have been achieved while operating in this mode. Comparison tests have also been carried out using a bench-scale triboelectrostatic separator in parallel with the POC Carpco unit.

  1. TANK 18-F AND 19-F TANK FILL GROUT SCALE UP TEST SUMMARY

    SciTech Connect (OSTI)

    Stefanko, D.; Langton, C.

    2012-01-03

    High-level waste (HLW) tanks 18-F and 19-F have been isolated from FTF facilities. To complete operational closure the tanks will be filled with grout for the purpose of: (1) physically stabilizing the tanks, (2) limiting/eliminating vertical pathways to residual waste, (3) entombing waste removal equipment, (4) discouraging future intrusion, and (5) providing an alkaline, chemical reducing environment within the closure boundary to control speciation and solubility of select radionuclides. This report documents the results of a four cubic yard bulk fill scale up test on the grout formulation recommended for filling Tanks 18-F and 19-F. Details of the scale up test are provided in a Test Plan. The work was authorized under a Technical Task Request (TTR), HLE-TTR-2011-008, and was performed according to Task Technical and Quality Assurance Plan (TTQAP), SRNL-RP-2011-00587. The bulk fill scale up test described in this report was intended to demonstrate proportioning, mixing, and transportation, of material produced in a full scale ready mix concrete batch plant. In addition, the material produced for the scale up test was characterized with respect to fresh properties, thermal properties, and compressive strength as a function of curing time.

  2. Fuel-rod response during the large-break LOCA Test LOC-6 [PWR]

    SciTech Connect (OSTI)

    Vinjamuri, K.; Cook, B.A.; Hobbins, R.R.

    1981-01-01

    The large-break Loss of Coolant Accident (LOCA) Test LOC-6 was conducted in the Power Burst Facility (PBF) at the Idaho National Engineering Laboratory by EG and G Idaho, Inc. The objectives of the PBF LOCA tests are to obtain in-pile cladding ballooning data under blowdown and reflood conditions and to assess how well out-of-pile ballooning data represent in-pile fuel rod behavior. The primary objective of the LOC-6 test was to determine the effects of internal rod pressures and prior irradiation on the deformation behavior of fuel rods that reached cladding temperatures high in the alpha phase of zircaloy. Test LOC-6 was conducted with four rods of PWR 15 x 15 design with the exception of fuel stack length (89 cm) and enrichment (12.5 wt% {sup 235}U). Each rod was surrounded by an individual flow shroud.

  3. THE DETECTION OF THE LARGE-SCALE ALIGNMENT OF MASSIVE GALAXIES AT z {approx} 0.6

    SciTech Connect (OSTI)

    Li Cheng [Partner Group of the Max Planck Institute for Astrophysics at the Shanghai Astronomical Observatory and Key Laboratory for Research in Galaxies and Cosmology of Chinese Academy of Sciences, Nandan Road 80, Shanghai 200030 (China); Jing, Y. P. [Center for Astronomy and Astrophysics, Department of Physics, Shanghai Jiao Tong University, Shanghai 200240 (China); Faltenbacher, A. [School of Physics, University of the Witwatersrand, P.O. Box Wits, Johannesburg 2050 (South Africa); Wang Jie, E-mail: leech@shao.ac.cn [National Astronomical Observatories, Chinese Academy of Sciences, Beijing 100012 (China)

    2013-06-10

    We report on the detection of the alignment between galaxies and large-scale structure at z {approx} 0.6 based on the CMASS galaxy sample from the Baryon Oscillation Spectroscopy Survey Data Release 9. We use two statistics to quantify the alignment signal: (1) the alignment two-point correlation function that probes the dependence of galaxy clustering at a given separation in redshift space on the projected angle ({theta}{sub p}) between the orientation of galaxies and the line connecting to other galaxies, and (2) the cos (2{theta})-statistic that estimates the average of cos (2{theta}{sub p}) for all correlated pairs at a given separation s. We find a significant alignment signal out to about 70 h {sup -1} Mpc in both statistics. Applications of the same statistics to dark matter halos of mass above 10{sup 12} h {sup -1} M{sub Sun} in a large cosmological simulation show scale-dependent alignment signals similar to the observation, but with higher amplitudes at all scales probed. We show that this discrepancy may be partially explained by a misalignment angle between central galaxies and their host halos, though detailed modeling is needed in order to better understand the link between the orientations of galaxies and host halos. In addition, we find systematic trends of the alignment statistics with the stellar mass of the CMASS galaxies, in the sense that more massive galaxies are more strongly aligned with the large-scale structure.
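    A minimal version of the second statistic follows directly from its definition: for every correlated pair at a given separation, take the angle between one galaxy's projected orientation and the direction to the other galaxy, and average cos(2*theta_p). The brute-force pair loop and the toy catalogue below are illustrative assumptions (the actual measurement works in redshift space with survey geometry and weights).

      import numpy as np

      def cos2theta_statistic(pos, phi, s_min, s_max):
          # Average of cos(2*theta_p) over all pairs with projected separation in
          # [s_min, s_max), where theta_p is the angle between galaxy i's position
          # angle phi[i] and the direction from galaxy i to galaxy j.
          vals = []
          for i in range(len(pos)):
              d = pos - pos[i]
              r = np.hypot(d[:, 0], d[:, 1])
              mask = (r >= s_min) & (r < s_max)
              mask[i] = False                        # exclude self-pairs
              pair_angle = np.arctan2(d[mask, 1], d[mask, 0])
              vals.append(np.cos(2.0 * (pair_angle - phi[i])))
          vals = np.concatenate(vals)
          return vals.mean() if vals.size else np.nan

      # Toy catalogue: random projected positions (h^-1 Mpc) and position angles.
      rng = np.random.default_rng(1)
      pos = rng.uniform(0.0, 100.0, size=(500, 2))
      phi = rng.uniform(0.0, np.pi, size=500)
      print(cos2theta_statistic(pos, phi, s_min=5.0, s_max=10.0))

    For a random catalogue like this one the statistic fluctuates around zero; a significant deviation from zero at a given separation indicates alignment between galaxy orientations and the directions to their correlated neighbours, which is the kind of signal the paper detects out to about 70 h^-1 Mpc.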

  4. Efficient large-scale finite-element computations in a CRAY environment

    SciTech Connect (OSTI)

    Goudreau, G.L.; Bailey, R.A.; Hallquist, J.O.; Murray, R.C.; Sackett, S.J.

    1983-06-01

    The Lawrence Livermore National Laboratory engineering computational experience on the CRAY-1 is highlighted in the context of our large general purpose solid and structural mechanics codes. DYNA2D and DYNA3D are explicit large deformation inelastic Lagrangian codes with one point elements and hourglass control. NIKE2D and NIKE3D are implicit codes of comparable continuum formulation but use two point constant pressure elements and an optimized linear equation solver. NIKE3D has a finite rotation plastic resultant shell element. The new general purpose linear elastic structures code GEMINI is also illustrated for large static and eigenvalue analysis. 19 references.

  5. Test plan: Laboratory-scale testing of the first core sample from Tank 102-AZ

    SciTech Connect (OSTI)

    Morrey, E.V.

    1996-03-01

    The overall objectives of the Radioactive Process/Product Laboratory Testing (RPPLT), WBS 1.2.2.05.05, are to confirm that simulated HWVP feed and glass are representative of actual radioactive HWVP feed and glass and to provide radioactive leaching and glass composition data to WFQ. This study will provide data from one additional NCAW core sample (102-AZ Core 1) for these purposes.

  6. Field Scale Test and Verification of CHP System at the Ritz Carlton, San Francisco, August 2007

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    DOE, the Gas Technology Institute, Oak Ridge National Laboratory, and UTC Power partnered with Host Hotels and Resorts to install and operate a PureComfort® 240M Cooling, Heating and Power (CHP) System at the Ritz-Carlton in San Francisco.

  7. Testing the Floor Scale Designated for Pacific Northwest National Laboratory's UF6 Cylinder Portal Monitor

    SciTech Connect (OSTI)

    Curtis, Michael M.; Weier, Dennis R.

    2009-03-12

    Pacific Northwest National Laboratory (PNNL) obtained a Mettler Toledo floor scale for the purpose of testing it to determine whether it can replace the International Atomic Energy Agency’s (IAEA) cumbersome, hanging load cell. The floor scale is intended for use as a subsystem within PNNL’s nascent UF6 Cylinder Portal Monitor. The particular model was selected for its accuracy, size, and capacity. The intent will be to use it only for 30B cylinders; consequently, testing did not proceed beyond 8,000 lb.

  8. AGU Chapman Conference Hydrogeologic Processes: Building and Testing Atomistic- to Basin-Scale Models

    SciTech Connect (OSTI)

    Weaver, B.

    1994-12-31

    This report presents details of the Chapman Conference held on June 6-9, 1994 in Lincoln, New Hampshire. The conference covered the scale of processes involved in coupled hydrogeologic mass transport and a concept of modeling and testing from the atomistic to the basin scale. Other topics include the testing of fundamental atomic-level parameterizations in the laboratory, field studies of fluid flow and mass transport, and the next generation of hydrogeologic models. Individual papers from this conference are processed separately for the database.

  9. In-situ sampling of a large-scale particle simulation for interactive...

    Office of Scientific and Technical Information (OSTI)

    The limiting technology in this situation is analogous to the problem in many population surveys: there aren't enough human resources to query a large population. To cope with the ...

  10. The large-area hybrid-optics CLAS12 RICH detector: Tests of innovative components

    SciTech Connect (OSTI)

    Contalbrigo, M.; Baltzell, N.; Benmokhtar, F.; Barion, L.; Cisbani, E.; El Alaoui, A.; Hafidi, K.; Hoek, M.; Kubarovsky, V.; Lagamba, L.; Lucherini, V.; Malaguti, R.; Mirazita, M.; Montgomery, R.; Movsisyan, A.; Musico, P.; Orecchini, D.; Orlandi, A.; Pappalardo, L.L.; Pereira, S.; Perrino, R.; Phillips, J.; Pisano, S.; Rossi, P.; Squerzanti, S.; Tomassini, S.; Turisini, M.; Viticchiè, A.

    2014-07-01

    A large area ring-imaging Cherenkov detector has been designed to provide clean hadron identification capability in the momentum range from 3 GeV/c to 8 GeV/c for the CLAS12 experiments at the upgraded 12 GeV continuous electron beam accelerator facility of Jefferson Lab to study the 3D nucleon structure in the yet poorly explored valence region by deep-inelastic scattering, and to perform precision measurements in hadronization and hadron spectroscopy. The adopted solution foresees a novel hybrid optics design based on an aerogel radiator, composite mirrors and densely packed and highly segmented photon detectors. Cherenkov light will either be imaged directly (forward tracks) or after two mirror reflections (large angle tracks). The preliminary results of individual detector component tests and of the prototype performance at test-beams are reported here.

  11. A membrane-free lithium/polysulfide semi-liquid battery for large-scale energy storage

    SciTech Connect (OSTI)

    Yang, Yuan; Zheng, Guangyuan; Cui, Yi

    2013-01-01

    Large-scale energy storage represents a key challenge for renewable energy and new systems with low cost, high energy density and long cycle life are desired. In this article, we develop a new lithium/polysulfide (Li/PS) semi-liquid battery for large-scale energy storage, with lithium polysulfide (Li{sub 2}S{sub 8}) in ether solvent as a catholyte and metallic lithium as an anode. Unlike previous work on Li/S batteries with discharge products such as solid state Li{sub 2}S{sub 2} and Li{sub 2}S, the catholyte is designed to cycle only in the range between sulfur and Li{sub 2}S{sub 4}. Consequently all detrimental effects due to the formation and volume expansion of solid Li{sub 2}S{sub 2}/Li{sub 2}S are avoided. This novel strategy results in excellent cycle life and compatibility with flow battery design. The proof-of-concept Li/PS battery could reach a high energy density of 170 W h kg{sup -1} and 190 W h L{sup -1} for large scale storage at the solubility limit, while keeping the advantages of hybrid flow batteries. We demonstrated that, with a 5 M Li{sub 2}S{sub 8} catholyte, energy densities of 97 W h kg{sup -1} and 108 W h L{sup -1} can be achieved. As the lithium surface is well passivated by LiNO{sub 3} additive in ether solvent, internal shuttle effect is largely eliminated and thus excellent performance over 2000 cycles is achieved with a constant capacity of 200 mA h g{sup -1}. This new system can operate without the expensive ion-selective membrane, and it is attractive for large-scale energy storage.

  12. Multiple pollutant removal using the condensing heat exchanger. Task 2, Pilot scale IFGT testing

    SciTech Connect (OSTI)

    Jankura, B.J.

    1996-01-01

    The purpose of Task 2 (IFGT Pilot-Scale Tests at the B&W Alliance Research Center) is to evaluate the emission reduction performance of the Integrated Flue Gas Treatment (IFGT) process for coal-fired applications. The IFGT system is a two-stage condensing heat exchanger that captures multiple pollutants while recovering waste heat. The IFGT technology offers the potential of addressing the emission of SO{sub 2} and particulate from electric utilities currently regulated under the Phase I and Phase II requirements defined in Title IV, and many of the air pollutants that will soon be regulated under Title III of the Clean Air Act. The performance data will be obtained at pilot-scale conditions similar to full-scale operating systems. The Task 2 IFGT tests have been designed to investigate several aspects of IFGT process conditions at a broader range of variables than would be feasible at a larger-scale facility. The performance parameters that will be investigated are as follows: SO{sub 2} removal; particulate removal; removal of mercury and other heavy metals; NO{sub x} removal; HF and HCl removal; NH{sub 3} removal; ammonia-sulfur compounds generation; and steam injection for particle removal. For all of the pollutant removal tests, removal efficiency will be based on measurements at the inlet and outlet of the IFGT facility. Heat recovery measurements will also be made during these tests to demonstrate the heat recovery provided by the IFGT technology. This report provides the Final Test Plan for the first coal tested in the Task 2 pilot-scale IFGT tests.

  13. The Continued Need for Modeling and Scaled Testing to Advance the Hanford Tank Waste Mission

    SciTech Connect (OSTI)

    Peurrung, Loni M.; Fort, James A.; Rector, David R.

    2013-09-03

    Hanford tank wastes are chemically complex slurries of liquids and solids that can exhibit changes in rheological behavior during retrieval and processing. The Hanford Waste Treatment and Immobilization Plant (WTP) recently abandoned its planned approach to use computational fluid dynamics (CFD) supported by testing at less than full scale to verify the design of vessels that process these wastes within the plant. The commercial CFD tool selected was deemed too difficult to validate to the degree necessary for use in the design of a nuclear facility. Alternative, but somewhat immature, CFD tools are available that can simulate multiphase flow of non-Newtonian fluids. Yet both CFD and scaled testing can play an important role in advancing the Hanford tank waste mission—in supporting the new verification approach, which is to conduct testing in actual plant vessels; in supporting waste feed delivery, where scaled testing is ongoing; as a fallback approach to design verification if the Full Scale Vessel Testing Program is deemed too costly and time-consuming; to troubleshoot problems during commissioning and operation of the plant; and to evaluate the effects of any proposed changes in operating conditions in the future to optimize plant performance.

  14. Economic analysis of large-scale hydrogen storage for renewable utility applications.

    SciTech Connect (OSTI)

    Schoenung, Susan M.

    2011-08-01

    The work reported here supports the efforts of the Market Transformation element of the DOE Fuel Cell Technology Program. The portfolio includes hydrogen technologies as well as fuel cell technologies. The objective of this work is to model the use of bulk hydrogen storage, integrated with intermittent renewable energy production of hydrogen via electrolysis, which is then used to generate grid-quality electricity. In addition, the work determines cost-effective scale and design characteristics and explores potentially attractive business models.

  15. Full-Scale Accident Testing in Support of Used Nuclear Fuel Transportation.

    SciTech Connect (OSTI)

    Durbin, Samuel G.; Lindgren, Eric R.; Rechard, Rob P.; Sorenson, Ken B.

    2014-09-01

    The safe transport of spent nuclear fuel and high-level radioactive waste is an important aspect of the waste management system of the United States. The Nuclear Regulatory Commission (NRC) currently certifies spent nuclear fuel rail cask designs based primarily on numerical modeling of hypothetical accident conditions augmented with some small-scale testing. However, the NRC initiated a Package Performance Study (PPS) in 2001 to examine the response of full-scale rail casks in extreme transportation accidents. The objectives of the PPS were to demonstrate the safety of transportation casks and to provide high-fidelity data for validating the modeling. Although work on the PPS eventually stopped, the Blue Ribbon Commission on America’s Nuclear Future recommended in 2012 that the test plans be re-examined. This recommendation was in recognition of substantial public feedback calling for a full-scale severe accident test of a rail cask to verify evaluations by NRC, which find that risk from the transport of spent fuel in certified casks is extremely low. This report, which serves as the re-assessment, provides a summary of the history of the PPS planning, identifies the objectives and technical issues that drove the scope of the PPS, and presents a possible path for moving forward in planning to conduct a full-scale cask test. Because full-scale testing is expensive, its value in informing public perception and acceptance is an important consideration. Consequently, the path forward starts with a public perception component followed by two additional components: accident simulation and first-responder training. The proposed path forward presents a series of study options with several points where the package performance study could be redirected if warranted.

  16. Techno-economic Modeling of the Integration of 20% Wind and Large-scale Energy Storage in ERCOT by 2030

    SciTech Connect (OSTI)

    Ross Baldick; Michael Webber; Carey King; Jared Garrison; Stuart Cohen; Duehee Lee

    2012-12-21

    This study’s objective is to examine interrelated technical and economic avenues for the Electric Reliability Council of Texas (ERCOT) grid to incorporate 20% or more wind generation by 2030. Our specific interests are the factors that will affect the implementation of both a high level of wind power penetration (>20% of generation) and the installation of large-scale storage.

  17. Placement of the dam for the no. 2 kambaratinskaya HPP by large-scale blasting: some observations

    SciTech Connect (OSTI)

    Shuifer, M. I.; Argal, E. S.

    2011-11-15

    Results of complex instrument observations of large-scale blasting during construction of the dam for the No. 2 Kambaratinskaya HPP on the Naryn River in the Republic of Kirgizia are analyzed. The purpose of these observations was to determine the actual parameters of the seismic process, to evaluate the effect of air and acoustic shock waves, and to investigate the kinematics of the surface formed by the blast in its core region within the mass of fractured rocks.

  18. Microsoft Word - NRAP-TRS-III-002-2012_Modeling the Performance of Large Scale CO2 Storage_20121024.docx

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Modeling the Performance of Large-Scale CO{sub 2} Storage Systems: A Comparison of Different Sensitivity Analysis Methods. Office of Fossil Energy, NRAP-TRS-III-002-2012, 24 October 2012.

  19. Coordinated Multi-layer Multi-domain Optical Network (COMMON) for Large-Scale Science Applications (COMMON)

    SciTech Connect (OSTI)

    Vokkarane, Vinod

    2013-09-01

    We intend to implement a Coordinated Multi-layer Multi-domain Optical Network (COMMON) Framework for Large-scale Science Applications. In the COMMON project, specific problems to be addressed include 1) anycast/multicast/manycast request provisioning, 2) deployable OSCARS enhancements, 3) multi-layer, multi-domain quality of service (QoS), and 4) multi-layer, multi-domain path survivability. In what follows, we outline the progress in the above categories (Year 1, 2, and 3 deliverables).

  20. QCD Thermodynamics at High Temperature Peter Petreczky Large Scale Computing and Storage Requirements for Nuclear Physics (NP),

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    QCD Thermodynamics at High Temperature, Peter Petreczky (NY Center for Computational Science). Presented at Large Scale Computing and Storage Requirements for Nuclear Physics (NP), Bethesda, MD, April 29-30, 2014. The talk opens with the defining questions of nuclear physics research in the US identified in the Nuclear Science Advisory Committee (NSAC) 2007 Long Range Plan, "The Frontiers of Nuclear Science", including "What are the phases of strongly interacting matter and what roles do they play in the cosmos?"

  1. Reducing Plug and Process Loads for a Large Scale, Low Energy Office Building: NREL's Research Support Facility: Preprint

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Reducing Plug and Process Loads for a Large Scale, Low Energy Office Building: NREL's Research Support Facility (Preprint). Chad Lobato, Shanti Pless, Michael Sheppy, and Paul Torcellini. Presented at the ASHRAE Winter Conference, Las Vegas, Nevada, January 29 - February 2, 2011. Conference Paper NREL/CP-5500-49002, February 2011.

  2. Microsoft Word - The_Advanced_Networks_and_Services_Underpinning_Modern,Large-Scale_Science.SciDAC.v5.doc

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    ESnet4: Advanced Networking and Services Supporting the Science Mission of DOE's Office of Science William E. Johnston ESnet Dept. Head and Senior Scientist Lawrence Berkeley National Laboratory May, 2007 1 Introduction In many ways, the dramatic achievements in scientific discovery through advanced computing and the discoveries of the increasingly large-scale instruments with their enormous data handling and remote collaboration requirements, have been made possible by accompanying

  3. Using Cloud-Resolving Model Simulations of Deep Convection to Inform Cloud Parameterizations in Large-Scale Models

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Using Cloud-Resolving Model Simulations of Deep Convection to Inform Cloud Parameterizations in Large-Scale Models S. A. Klein National Oceanic and Atmospheric Administration Geophysical Fluid Dynamics Laboratory Princeton, New Jersey R. Pincus National Oceanic and Atmospheric Administration Cooperative Institute for Research in Environmental Science Climate Diagnostics Center Boulder, Colorado K. -M. Xu National Aeronautics and Space Administration Langley Research Center Hampton, Virginia

  4. Analysis of large scale tests for AP-600 passive containment cooling system

    SciTech Connect (OSTI)

    Sha, W.T.; Chien, T.H.; Sun, J.G.; Chao, B.T.

    1997-07-01

    One unique feature of the AP-600 is its passive containment cooling system (PCCS), which is designed to maintain containment pressure below the design limit for 72 hours without action by the reactor operator. During a design-basis accident, i.e., either a loss-of-coolant or a main steam-line break accident, steam escapes and comes in contact with the much cooler containment vessel wall. Heat is transferred to the inside surface of the steel containment wall by convection and condensation of steam, and through the containment steel wall by conduction. Heat is then transferred from the outside of the containment surface by heating and evaporation of a thin liquid film that is formed by applying water at the top of the containment vessel dome. Air in the annular space is heated by both convection and injection of steam from the evaporating liquid film. The heated air and vapor rise as a result of natural circulation and exit the shield building through the outlets above the containment shell. All of the analytical models that are developed for and used in the COMMIX-ID code for predicting performance of the PCCS will be described. These models cover the governing conservation equations for multicomponent single-phase flow, transport equations for the {kappa}-{epsilon} two-equation turbulence model, auxiliary equations, a liquid-film tracking model for both the inside (condensate) and outside (evaporating liquid film) surfaces of the containment vessel wall, thermal coupling between flow domains inside and outside the containment vessel, and heat and mass transfer models. Various key parameters of the COMMIX-ID results and corresponding AP-600 PCCS experimental data are compared, and the agreement is good. Significant findings from this study are summarized.
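    For reference, the two-equation {kappa}-{epsilon} model mentioned above can be written in its standard high-Reynolds-number textbook form; the implementation in the containment analysis code may differ in wall treatment and constants, so this is shown only for orientation:

        \mu_t = \rho\, C_\mu \frac{k^2}{\varepsilon}, \qquad
        \frac{\partial(\rho k)}{\partial t} + \frac{\partial(\rho k u_j)}{\partial x_j}
          = \frac{\partial}{\partial x_j}\!\left[\left(\mu + \frac{\mu_t}{\sigma_k}\right)\frac{\partial k}{\partial x_j}\right] + P_k - \rho\varepsilon,
        \frac{\partial(\rho\varepsilon)}{\partial t} + \frac{\partial(\rho\varepsilon u_j)}{\partial x_j}
          = \frac{\partial}{\partial x_j}\!\left[\left(\mu + \frac{\mu_t}{\sigma_\varepsilon}\right)\frac{\partial\varepsilon}{\partial x_j}\right]
          + C_{1\varepsilon}\frac{\varepsilon}{k}P_k - C_{2\varepsilon}\rho\frac{\varepsilon^2}{k}

    with the usual constants C_mu = 0.09, C_1e = 1.44, C_2e = 1.92, sigma_k = 1.0, sigma_e = 1.3, where P_k is the shear production of turbulent kinetic energy and mu_t is the turbulent viscosity.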

  5. Experimental results from pressure testing a 1:6-scale nuclear power plant containment

    SciTech Connect (OSTI)

    Horschel, D.S.

    1992-01-01

    This report discusses the testing of a 1:6-scale, reinforced-concrete containment building at Sandia National Laboratories, in Albuquerque, New Mexico. The scale-model, Light Water Reactor (LWR) containment building was designed and built to the American Society of Mechanical Engineers (ASME) code by United Engineers and Constructors, Inc., and was instrumented with over 1200 transducers to prepare for the test. The containment model was tested to failure to determine its response to static internal overpressurization. As part of the US Nuclear Regulatory Commission's program on containment integrity, the test results will be used to assess the capability of analytical methods to predict the performance of containments under severe-accident loads. The scaled dimensions of the cylindrical wall and hemispherical dome were typical of a full-size containment. Other typical features included in the heavily reinforced model were equipment hatches, personnel air locks, several small piping penetrations, and a thin steel liner that was attached to the concrete by headed studs. In addition to the transducers attached to the model, an acoustic detection system and several video and still cameras were used during testing to gather data and to aid in the conduct of the test. The model and its instrumentation are briefly discussed, followed by the testing procedures and the measured response of the containment model. A summary discussion is included to aid in understanding the significance of the test as it applies to real-world reinforced-concrete containment structures. The data gathered during the structural integrity test (SIT) and overpressure testing are included as an appendix.

  6. Modeling ramp compression experiments using large-scale molecular dynamics simulation.

    SciTech Connect (OSTI)

    Mattsson, Thomas Kjell Rene; Desjarlais, Michael Paul; Grest, Gary Stephen; Templeton, Jeremy Alan; Thompson, Aidan Patrick; Jones, Reese E.; Zimmerman, Jonathan A.; Baskes, Michael I.; Winey, J. Michael; Gupta, Yogendra Mohan; Lane, J. Matthew D.; Ditmire, Todd; Quevedo, Hernan J.

    2011-10-01

    Molecular dynamics simulation (MD) is an invaluable tool for studying problems sensitive to atom-scale physics such as structural transitions, discontinuous interfaces, non-equilibrium dynamics, and elastic-plastic deformation. In order to apply this method to modeling of ramp-compression experiments, several challenges must be overcome: accuracy of interatomic potentials, length- and time-scales, and extraction of continuum quantities. We have completed a 3-year LDRD project with the goal of developing molecular dynamics simulation capabilities for modeling the response of materials to ramp compression. The techniques we have developed fall into three categories: (i) molecular dynamics methods, (ii) interatomic potentials, and (iii) calculation of continuum variables. Highlights include the development of an accurate interatomic potential describing shock melting of beryllium, a scaling technique for modeling slow ramp compression experiments using fast ramp MD simulations, and a technique for extracting plastic strain from MD simulations. All of these methods have been implemented in Sandia's LAMMPS MD code, ensuring their widespread availability to dynamic materials research at Sandia and elsewhere.

  7. Performance upgrades to the MCNP6 burnup capability for large scale depletion calculations

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Fensin, M. L.; Galloway, J. D.; James, M. R.

    2015-04-11

    The first MCNP-based inline Monte Carlo depletion capability was officially released from the Radiation Safety Information and Computational Center as MCNPX 2.6.0. With the merger of MCNPX and MCNP5, MCNP6 combined the capabilities of both simulation tools, as well as providing new advanced technology, in a single radiation transport code. The new MCNP6 depletion capability was first showcased at the International Congress for Advancements in Nuclear Power Plants (ICAPP) meeting in 2012. At that conference, the new capabilities addressed included the combined distributed- and shared-memory parallel architecture for the burnup capability, improved memory management, physics enhancements, and new predictability as compared to the H.B. Robinson benchmark. At Los Alamos National Laboratory, a special-purpose cluster named "tebow" was constructed to maximize available RAM per CPU, as well as to leverage swap space on solid-state hard drives, allowing larger-scale depletion calculations (with significantly more burnable regions than previously examined). As the MCNP6 burnup capability was scaled to larger numbers of burnable regions, a noticeable slowdown was observed. This paper details two specific computational performance strategies for improving calculation speed: (1) retrieving cross sections during transport, and (2) the tallying mechanisms specific to burnup in MCNP. To combat this slowdown, new performance upgrades were developed and integrated into MCNP6 1.2.

  8. Performance upgrades to the MCNP6 burnup capability for large scale depletion calculations

    SciTech Connect (OSTI)

    Fensin, M. L.; Galloway, J. D.; James, M. R.

    2015-04-11

    The first MCNP-based inline Monte Carlo depletion capability was officially released from the Radiation Safety Information and Computational Center as MCNPX 2.6.0. With the merger of MCNPX and MCNP5, MCNP6 combined the capabilities of both simulation tools, as well as providing new advanced technology, in a single radiation transport code. The new MCNP6 depletion capability was first showcased at the International Congress for Advancements in Nuclear Power Plants (ICAPP) meeting in 2012. At that conference, the new capabilities addressed included the combined distributed- and shared-memory parallel architecture for the burnup capability, improved memory management, physics enhancements, and new predictability as compared to the H.B. Robinson benchmark. At Los Alamos National Laboratory, a special-purpose cluster named "tebow" was constructed to maximize available RAM per CPU, as well as to leverage swap space on solid-state hard drives, allowing larger-scale depletion calculations (with significantly more burnable regions than previously examined). As the MCNP6 burnup capability was scaled to larger numbers of burnable regions, a noticeable slowdown was observed. This paper details two specific computational performance strategies for improving calculation speed: (1) retrieving cross sections during transport, and (2) the tallying mechanisms specific to burnup in MCNP. To combat this slowdown, new performance upgrades were developed and integrated into MCNP6 1.2.

  10. Integrating large-scale functional genomics data to dissect metabolic networks for hydrogen production

    SciTech Connect (OSTI)

    Harwood, Caroline S

    2012-12-17

    The goal of this project is to identify gene networks that are critical for efficient biohydrogen production by leveraging variation in gene content and gene expression in independently isolated Rhodopseudomonas palustris strains. Coexpression methods were applied to large data sets that we have collected to define probabilistic causal gene networks. To our knowledge, this is the first systems-level approach that takes advantage of strain-to-strain variability to computationally define networks critical for a particular bacterial phenotypic trait.

  11. Large-Scale Geospatial Indexing for Image-Based Retrieval and Analysis

    SciTech Connect (OSTI)

    Tobin Jr, Kenneth William; Bhaduri, Budhendra L; Bright, Eddie A; Cheriydat, Anil; Karnowski, Thomas Paul; Palathingal, Paul J; Potok, Thomas E; Price, Jeffery R

    2005-12-01

    We describe a method for indexing and retrieving high-resolution image regions in large geospatial data libraries. An automated feature extraction method is used that generates a unique and specific structural description of each segment of a tessellated input image file. These tessellated regions are then merged into similar groups and indexed to provide flexible and varied retrieval in a query-by-example environment.

  12. Large scale two-dimensional arrays of magnesium diboride superconducting quantum interference devices

    SciTech Connect (OSTI)

    Cybart, Shane A. Dynes, R. C.; Wong, T. J.; Cho, E. Y.; Beeman, J. W.; Yung, C. S.; Moeckly, B. H.

    2014-05-05

    Magnetic field sensors based on two-dimensional arrays of superconducting quantum interference devices were constructed from magnesium diboride thin films. Each array contained over 30,000 Josephson junctions fabricated by ion damage of 30 nm weak links through an implant mask defined by nano-lithography. Current-biased devices exhibited very large voltage modulation as a function of magnetic field, with amplitudes as high as 8 mV.

  13. Research project on CO2 geological storage and groundwaterresources: Large-scale hydrological evaluation and modeling of impact ongroundwater systems

    SciTech Connect (OSTI)

    Birkholzer, Jens; Zhou, Quanlin; Rutqvist, Jonny; Jordan,Preston; Zhang,K.; Tsang, Chin-Fu

    2007-10-24

    If carbon dioxide capture and storage (CCS) technologies are implemented on a large scale, the amounts of CO2 injected and sequestered underground could be extremely large. The stored CO2 then replaces large volumes of native brine, which can cause considerable pressure perturbation and brine migration in the deep saline formations. If hydraulically communicating, either directly via updipping formations or through interlayer pathways such as faults or imperfect seals, these perturbations may impact shallow groundwater or even surface water resources used for domestic or commercial water supply. Possible environmental concerns include changes in pressure and water table, changes in discharge and recharge zones, as well as changes in water quality. In compartmentalized formations, issues related to large-scale pressure buildup and brine displacement may also cause storage capacity problems, because significant pressure buildup can be produced. To address these issues, a three-year research project was initiated in October 2006, the first part of which is summarized in this annual report.

  14. Large-scale fabrication of BN tunnel barriers for graphene spintronics

    SciTech Connect (OSTI)

    Fu, Wangyang; Makk, Péter; Maurand, Romain; Bräuninger, Matthias; Schönenberger, Christian

    2014-08-21

    We have fabricated graphene spin-valve devices utilizing scalable materials made by chemical vapor deposition (CVD). Both the spin-transporting graphene and the tunnel barrier material are CVD-grown. The tunnel barrier is realized with hexagonal boron nitride, used as either a monolayer or a bilayer placed over the graphene. Spin transport experiments were performed using ferromagnetic contacts deposited onto the barrier. We find that spin injection is still greatly suppressed in devices with a monolayer tunneling barrier due to resistance mismatch. This is, however, not the case for devices with bilayer barriers. For those devices, a spin relaxation time of ∼260 ps intrinsic to the CVD graphene material is deduced. This time scale is comparable to those reported for exfoliated graphene, suggesting that this CVD approach is promising for spintronic applications that require scalable materials.

  15. Tensor to scalar ratio and large scale power suppression from pre-slow roll initial conditions

    SciTech Connect (OSTI)

    Lello, Louis; Boyanovsky, Daniel, E-mail: lal81@pitt.edu, E-mail: boyan@pitt.edu [Department of Physics and Astronomy, University of Pittsburgh, 3941 O'Hara St, Pittsburgh, PA 15260 (United States)

    2014-05-01

    We study the corrections to the power spectra of curvature and tensor perturbations and the tensor-to-scalar ratio r in single-field slow-roll inflation with a standard kinetic term due to initial conditions imprinted by a "fast-roll" stage prior to slow roll. For a wide range of initial inflaton kinetic energy, this stage lasts only a few e-folds and merges smoothly with slow roll, thereby leading to non-Bunch-Davies initial conditions for modes that exit the Hubble radius during slow roll. We describe a program that yields the dynamics in the fast-roll stage while matching to the slow-roll stage in a manner that is independent of the inflationary potentials. Corrections to the power spectra are encoded in a "transfer function" for initial conditions T{sub α}(k), with P{sub α}(k) = P{sup BD}{sub α}(k)T{sub α}(k), implying a modification of the "consistency condition" for the tensor-to-scalar ratio at a pivot scale k{sub 0}: r(k{sub 0}) = -8n{sub T}(k{sub 0})[T{sub T}(k{sub 0})/T{sub R}(k{sub 0})]. We obtain T{sub α}(k) to leading order in a Born approximation valid for modes of observational relevance today. A fit yields T{sub α}(k) = 1 + A{sub α}k{sup -p}cos[2πk/H{sub sr} + φ{sub α}], with 1.5 ≲ p ≲ 2 and H{sub sr} the Hubble scale during slow-roll inflation, where curvature and tensor perturbations feature the same p and φ{sub α} for a wide range of initial conditions. These corrections lead to both a suppression of the quadrupole and oscillatory features in both P{sub R}(k) and r(k{sub 0}), with a period of the order of the Hubble scale during slow-roll inflation. The results are quite general and independent of the specific inflationary potentials, depending solely on the ratio of kinetic to potential energy and the slow-roll parameters ε{sub V}, η{sub V} to leading order in slow roll. For a wide range of this ratio and values of ε{sub V}, η{sub V} corresponding to the upper bounds from Planck, we find that the low quadrupole is consistent with the results from Planck, and the oscillations in r(k{sub 0}) as a function of k{sub 0} could be observable if the modes corresponding to the quadrupole and the pivot scale crossed the Hubble radius very few (2-3) e-folds after the onset of slow roll. We comment on the possible impact on the recent BICEP2 results.

  16. Measuring the effectiveness of infrastructure-level detection of large-scale botnets

    SciTech Connect (OSTI)

    Yan, Guanhua; Eidenbenz, Stephan; Zeng, Yuanyuan; Shin, Kang G

    2010-12-16

    Botnets are one of the most serious security threats to the Internet and its end users. In recent years, utilizing P2P as a Command and Control (C&C) protocol has gained popularity due to its decentralized nature, which can help hide the botmaster's identity. Most bot detection approaches targeting P2P botnets either rely on behavior monitoring or on traffic flow and packet analysis, requiring fine-grained information collected locally. This requirement limits the scale of detection. In this paper, we consider detection of P2P botnets at a high level - the infrastructure level - by exploiting their structural properties from a graph analysis perspective. Using three different P2P overlay structures, we measure the effectiveness of detecting each structure at various locations (the Autonomous System (AS), the Point of Presence (PoP), and the router rendezvous) in the Internet infrastructure.
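    As a rough illustration of the kind of structural signal such graph-level analysis can exploit (a generic sketch, not the paper's detectors; the graph sizes, degrees, and generators are assumptions), the snippet below contrasts simple metrics of a structured P2P-style overlay with a random background graph of similar density:

        # Generic sketch: structural metrics that can separate a structured P2P
        # overlay from random background traffic. All values are illustrative.
        import networkx as nx

        n = 2000
        overlay = nx.random_regular_graph(8, n, seed=1)        # stand-in for a structured P2P overlay
        background = nx.erdos_renyi_graph(n, 8.0 / n, seed=1)  # random graph of similar density

        for name, g in [("overlay", overlay), ("background", background)]:
            degs = [d for _, d in g.degree()]
            mean = sum(degs) / n
            var = sum((d - mean) ** 2 for d in degs) / n
            print(f"{name}: mean degree={mean:.2f}, degree variance={var:.2f}, "
                  f"avg clustering={nx.average_clustering(g):.4f}")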

  17. Transient thermal analysis for radioactive liquid mixing operations in a large-scaled tank

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Lee, S. Y.; Smith, III, F. G.

    2014-07-25

    A transient heat balance model was developed to assess the impact of a Submersible Mixer Pump (SMP) on radioactive liquid temperature during the process of waste mixing and removal for the high-level radioactive materials stored in Savannah River Site (SRS) tanks. The model results will be mainly used to determine the SMP design impacts on the waste tank temperature during operations and to develop a specification for a new SMP design to replace the existing long-shaft mixer pumps used during waste removal. The present model was benchmarked against the test data obtained by tank measurement to examine the quantitative thermal response of the tank and to establish the reference conditions of the operating variables under no SMP operation. The results showed that the model predictions agreed with the test data of the waste temperatures within about 10%.

  18. Transient thermal analysis for radioactive liquid mixing operations in a large-scaled tank

    SciTech Connect (OSTI)

    Lee, S. Y. [Savannah River Site Nuclear Solutions, LLC, Aiken, SC (United States). Savannah River National Lab. (SRNL); Smith, III, F. G. [Savannah River Site Nuclear Solutions, LLC, Aiken, SC (United States). Savannah River National Lab. (SRNL)

    2014-07-25

    A transient heat balance model was developed to assess the impact of a Submersible Mixer Pump (SMP) on radioactive liquid temperature during the process of waste mixing and removal for the high-level radioactive materials stored in Savannah River Site (SRS) tanks. The model results will be mainly used to determine the SMP design impacts on the waste tank temperature during operations and to develop a specification for a new SMP design to replace the existing long-shaft mixer pumps used during waste removal. The present model was benchmarked against the test data obtained by tank measurement to examine the quantitative thermal response of the tank and to establish the reference conditions of the operating variables under no SMP operation. The results showed that the model predictions agreed with the test data of the waste temperatures within about 10%.
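    A minimal lumped-parameter sketch of the general kind of transient heat balance described above: decay heat and mixer-pump power in, losses to ambient out. Every number below (mass, heat capacity, heat loads, loss coefficient) is an assumed placeholder, not SRS tank or SMP data.

        # Toy transient heat balance: m*cp*dT/dt = Q_decay + Q_pump - UA*(T - T_amb)
        cp = 4186.0       # J/(kg K), water-like heat capacity (assumed)
        mass = 3.0e6      # kg of waste (assumed)
        Q_decay = 30.0e3  # W, decay heat (assumed)
        Q_pump = 150.0e3  # W, heat input from a submersible mixer pump (assumed)
        UA = 5.0e3        # W/K, overall heat loss coefficient to ambient (assumed)
        T_amb = 25.0      # deg C, ambient temperature
        T = 35.0          # deg C, initial waste temperature (assumed)

        dt = 3600.0                                  # 1-hour explicit time step
        for _ in range(24 * 30):                     # simulate 30 days of operation
            T += dt * (Q_decay + Q_pump - UA * (T - T_amb)) / (mass * cp)

        print(f"temperature after 30 days: {T:.1f} C; "
              f"steady state: {T_amb + (Q_decay + Q_pump) / UA:.1f} C")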

  19. TESTING OF A FULL-SCALE ROTARY MICROFILTER FOR THE ENHANCED PROCESS FOR RADIONUCLIDES REMOVAL

    SciTech Connect (OSTI)

    Herman, D; David Stefanko, D; Michael Poirier, M; Samuel Fink, S

    2009-01-01

    Savannah River National Laboratory (SRNL) researchers are investigating and developing a rotary microfilter for solid-liquid separation applications in the Department of Energy (DOE) complex. One application involves use in the Enhanced Processes for Radionuclide Removal (EPRR) at the Savannah River Site (SRS). To assess this application, the authors performed rotary filter testing with a full-scale, 25-disk unit manufactured by SpinTek Filtration with 0.5 micron filter media manufactured by Pall Corporation. The filter includes proprietary enhancements by SRNL. The most recent enhancement is replacement of the filter's main shaft seal with a John Crane Type 28LD gas-cooled seal. The feed material was SRS Tank 8F simulated sludge blended with monosodium titanate (MST). Testing examined total insoluble solids concentrations of 0.06 wt % (126 hours of testing) and 5 wt % (82 hours of testing). The following are conclusions from this testing.

  20. Efficient Feature-Driven Visualization of Large-Scale Scientific Data

    SciTech Connect (OSTI)

    Lu, Aidong

    2012-12-12

    Very large, complex scientific data acquired in many research areas creates critical challenges for scientists to understand, analyze, and organize their data. The objective of this project is to expand the feature extraction and analysis capabilities to develop powerful and accurate visualization tools that can assist domain scientists with their requirements in multiple phases of scientific discovery. We have recently developed several feature-driven visualization methods for extracting different data characteristics of volumetric datasets. Our results verify the hypothesis in the proposal and will be used to develop additional prototype systems.

  1. Automated Feature Generation in Large-Scale Geospatial Libraries for Content-Based Indexing.

    SciTech Connect (OSTI)

    Tobin Jr, Kenneth William; Bhaduri, Budhendra L; Bright, Eddie A; Cheriydat, Anil; Karnowski, Thomas Paul; Palathingal, Paul J; Potok, Thomas E; Price, Jeffery R

    2006-05-01

    We describe a method for indexing and retrieving high-resolution image regions in large geospatial data libraries. An automated feature extraction method is used that generates a unique and specific structural description of each segment of a tessellated input image file. These tessellated regions are then merged into similar groups, or sub-regions, and indexed to provide flexible and varied retrieval in a query-by-example environment. The methods of tessellation, feature extraction, sub-region clustering, indexing, and retrieval are described and demonstrated using a geospatial library representing a 153 km2 region of land in East Tennessee at 0.5 m per pixel resolution.
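    As a hedged illustration of the query-by-example idea (not the structural descriptor or index actually used in this work; the tiles, features, and sizes below are synthetic assumptions), each tessellated region can be reduced to a feature vector and indexed for nearest-neighbour retrieval:

        # Toy query-by-example retrieval over tessellated image tiles.
        import numpy as np
        from sklearn.neighbors import NearestNeighbors

        rng = np.random.default_rng(0)
        tiles = rng.random((500, 64, 64))                      # 500 fake 64x64 image tiles

        def describe(tile):
            # placeholder descriptor: mean, std, and an 8-bin intensity histogram
            hist, _ = np.histogram(tile, bins=8, range=(0, 1), density=True)
            return np.concatenate(([tile.mean(), tile.std()], hist))

        features = np.array([describe(t) for t in tiles])
        index = NearestNeighbors(n_neighbors=5).fit(features)  # the "library index"

        query = tiles[42]                                      # query-by-example: pick a tile
        dist, idx = index.kneighbors(describe(query)[None, :])
        print("closest tiles to the example:", idx[0])         # tile 42 should rank first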

  2. RACORO continental boundary layer cloud investigations. Part I: Case study development and ensemble large-scale forcings

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Vogelmann, Andrew M.; Fridlind, Ann M.; Toto, Tami; Endo, Satoshi; Lin, Wuyin; Wang, Jian; Feng, Sha; Zhang, Yunyan; Turner, David D.; Liu, Yangang; et al

    2015-06-19

    Observation-based modeling case studies of continental boundary layer clouds have been developed to study cloudy boundary layers, aerosol influences upon them, and their representation in cloud- and global-scale models. Three 60-hour case study periods span the temporal evolution of cumulus, stratiform, and drizzling boundary layer cloud systems, representing mixed and transitional states rather than idealized or canonical cases. Based on in-situ measurements from the RACORO field campaign and remote-sensing observations, the cases are designed with a modular configuration to simplify use in large-eddy simulations (LES) and single-column models. Aircraft measurements of aerosol number size distribution are fit to lognormal functions for concise representation in models. Values of the aerosol hygroscopicity parameter, κ, are derived from observations to be ~0.10, which are lower than the 0.3 typical over continents and suggestive of a large aerosol organic fraction. Ensemble large-scale forcing datasets are derived from the ARM variational analysis, ECMWF forecasts, and a multi-scale data assimilation system. The forcings are assessed through comparison of measured bulk atmospheric and cloud properties to those computed in 'trial' large-eddy simulations, where more efficient run times are enabled through modest reductions in grid resolution and domain size compared to the full-sized LES grid. Simulations capture many of the general features observed, but the state-of-the-art forcings were limited at representing details of cloud onset, and tight gradients and high-resolution transients of importance. Methods for improving the initial conditions and forcings are discussed. The cases developed are available to the general modeling community for studying continental boundary clouds.

  3. RACORO continental boundary layer cloud investigations. Part I: Case study development and ensemble large-scale forcings

    SciTech Connect (OSTI)

    Vogelmann, Andrew M.; Fridlind, Ann M.; Toto, Tami; Endo, Satoshi; Lin, Wuyin; Wang, Jian; Feng, Sha; Zhang, Yunyan; Turner, David D.; Liu, Yangang; Li, Zhijin; Xie, Shaocheng; Ackerman, Andrew S.; Zhang, Minghua; Khairoutdinov, Marat

    2015-06-19

    Observation-based modeling case studies of continental boundary layer clouds have been developed to study cloudy boundary layers, aerosol influences upon them, and their representation in cloud- and global-scale models. Three 60-hour case study periods span the temporal evolution of cumulus, stratiform, and drizzling boundary layer cloud systems, representing mixed and transitional states rather than idealized or canonical cases. Based on in-situ measurements from the RACORO field campaign and remote-sensing observations, the cases are designed with a modular configuration to simplify use in large-eddy simulations (LES) and single-column models. Aircraft measurements of aerosol number size distribution are fit to lognormal functions for concise representation in models. Values of the aerosol hygroscopicity parameter, κ, are derived from observations to be ~0.10, which are lower than the 0.3 typical over continents and suggestive of a large aerosol organic fraction. Ensemble large-scale forcing datasets are derived from the ARM variational analysis, ECMWF forecasts, and a multi-scale data assimilation system. The forcings are assessed through comparison of measured bulk atmospheric and cloud properties to those computed in 'trial' large-eddy simulations, where more efficient run times are enabled through modest reductions in grid resolution and domain size compared to the full-sized LES grid. Simulations capture many of the general features observed, but the state-of-the-art forcings were limited at representing details of cloud onset, and tight gradients and high-resolution transients of importance. Methods for improving the initial conditions and forcings are discussed. The cases developed are available to the general modeling community for studying continental boundary clouds.

  4. Evaluation of Simple Causal Message Logging for Large-Scale Fault Tolerant HPC Systems

    SciTech Connect (OSTI)

    Bronevetsky, G; Meneses, E; Kale, L V

    2011-02-25

    The era of petascale computing brought machines with hundreds of thousands of processors. The next generation of exascale supercomputers will make available clusters with millions of processors. In those machines, mean time between failures will range from a few minutes to a few tens of minutes, making the crash of a processor the common case, instead of a rarity. Parallel applications running on those large machines will need to simultaneously survive crashes and maintain high productivity. To achieve that, fault tolerance techniques will have to go beyond checkpoint/restart, which requires all processors to roll back in case of a failure. Incorporating some form of message logging will provide a framework where only a subset of processors are rolled back after a crash. In this paper, we discuss why a simple causal message logging protocol seems a promising alternative to provide fault tolerance in large supercomputers. As opposed to pessimistic message logging, it has low latency overhead, especially in collective communication operations. In addition, it saves messages when more than one thread is running per processor. Finally, we demonstrate that a simple causal message logging protocol has faster recovery and a low performance penalty when compared to checkpoint/restart. Running NAS Parallel Benchmarks (CG, MG and BT) on 1024 processors, simple causal message logging has a latency overhead below 5%.

  5. Review and evaluation of literature on testing of chemical additives for scale control in geothermal fluids. Final report

    SciTech Connect (OSTI)

    Crane, C.H.; Kenkeremath, D.C.

    1981-01-01

    A selected group of reported tests of chemical additives in actual geothermal fluids are reviewed and evaluated to summarize the status of chemical scale-control testing and identify information and testing needs. The task distinguishes between scale control in the cooling system of a flash plant and elsewhere in the utilization system due to the essentially different operating environments involved. Additives for non-cooling geothermal fluids are discussed by scale type: silica, carbonate, and sulfide.

  6. Full-Scale Structural and NDI Validation Tests of Bonded Composite Doublers for Commercial Aircraft Applications

    SciTech Connect (OSTI)

    Roach, D.; Walkington, P.

    1999-02-01

    Composite doublers, or repair patches, provide an innovative repair technique which can enhance the way aircraft are maintained. Instead of riveting multiple steel or aluminum plates to facilitate an aircraft repair, it is possible to bond a single Boron-Epoxy composite doubler to the damaged structure. Most of the concerns surrounding composite doubler technology pertain to long-term survivability, especially in the presence of non-optimum installations, and the validation of appropriate inspection procedures. This report focuses on a series of full-scale structural and nondestructive inspection (NDI) tests that were conducted to investigate the performance of Boron-Epoxy composite doublers. Full-scale tests were conducted on fuselage panels cut from retired aircraft. These full-scale tests studied stress reductions, crack mitigation, and load transfer capabilities of composite doublers using simulated flight conditions of cabin pressure and axial stress. Also, structures which modeled key aspects of aircraft structure repairs were subjected to extreme tension, shear and bending loads to examine the composite laminate's resistance to disbond and delamination flaws. Several of the structures were loaded to failure in order to determine doubler design margins. Nondestructive inspections were conducted throughout the test series in order to validate appropriate techniques on actual aircraft structure. The test results showed that a properly designed and installed composite doubler is able to enhance fatigue life, transfer load away from damaged structure, and avoid the introduction of new stress risers (i.e. eliminate global reduction in the fatigue life of the structure). Comparisons with test data obtained prior to the doubler installation revealed that stresses in the parent material can be reduced 30%--60% through the use of the composite doubler. Tests to failure demonstrated that the bondline is able to transfer plastic strains into the doubler and that the parent aluminum skin must experience significant yield strains before any damage to the doubler will occur.

  7. Development of Performance Acceptance Test Guidelines for Large Commercial Parabolic Trough Solar Fields: Preprint

    SciTech Connect (OSTI)

    Kearney, D.; Mehos, M.

    2010-12-01

    Prior to commercial operation, large solar systems in utility-size power plants need to pass a performance acceptance test conducted by the EPC contractor or owners. In the present absence of an engineering code developed for this purpose, NREL has undertaken the development of interim guidelines to provide recommendations for test procedures that can yield results of a high level of accuracy consistent with good engineering knowledge and practice. The fundamental differences between acceptance of a solar power plant and a conventional fossil-fired plant are the transient nature of the energy source and the necessity to utilize an analytical performance model in the acceptance process. These factors bring into play the need to establish methods to measure steady-state performance, potential impacts of transient processes, comparison to performance model results, and the possible requirement to test, or model, multi-day performance within the scope of the acceptance test procedure. The power block and BOP are not within the boundaries of this guideline. The current guideline is restricted to the solar thermal performance of parabolic trough systems and has been critiqued by a broad range of stakeholders in CSP development and technology.
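    For orientation, a steady-state solar-field efficiency check of the general kind such a test relies on compares the thermal power carried by the heat transfer fluid with the direct-normal irradiance incident on the aperture. All numbers in the sketch are invented, and incidence-angle and end-loss corrections are deliberately omitted:

        # Hypothetical steady-state check: field thermal output vs. incident DNI.
        m_dot = 1200.0    # kg/s, HTF mass flow through the field (assumed)
        cp_htf = 2300.0   # J/(kg K), HTF specific heat (assumed)
        T_in, T_out = 295.0, 390.0   # deg C, field inlet/outlet temperatures (assumed)
        dni = 950.0       # W/m2, direct normal irradiance (assumed)
        aperture = 470000.0          # m2, total collector aperture area (assumed)

        q_thermal = m_dot * cp_htf * (T_out - T_in)   # W delivered by the solar field
        q_solar = dni * aperture                      # W incident on the aperture
        print(f"thermal output: {q_thermal/1e6:.0f} MWth, "
              f"gross efficiency: {q_thermal/q_solar:.1%}")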

  8. Development of fine-resolution analyses and expanded large-scale forcing properties. Part I: Methodology and evaluation

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Li, Zhijin; Vogelmann, Andrew M.; Feng, Sha; Liu, Yangang; Lin, Wuyin; Zhang, Minghua; Toto, Tami; Endo, Satoshi

    2015-01-20

    We produce fine-resolution, three-dimensional fields of meteorological and other variables for the U.S. Department of Energy’s Atmospheric Radiation Measurement (ARM) Southern Great Plains site. The Community Gridpoint Statistical Interpolation system is implemented in a multiscale data assimilation (MS-DA) framework that is used within the Weather Research and Forecasting model at a cloud-resolving resolution of 2 km. The MS-DA algorithm uses existing reanalysis products and constrains fine-scale atmospheric properties by assimilating high-resolution observations. A set of experiments show that the data assimilation analysis realistically reproduces the intensity, structure, and time evolution of clouds and precipitation associated with a mesoscale convective system. Evaluations also show that the large-scale forcing derived from the fine-resolution analysis has an overall accuracy comparable to the existing ARM operational product. For enhanced applications, the fine-resolution fields are used to characterize the contribution of subgrid variability to the large-scale forcing and to derive hydrometeor forcing, which are presented in companion papers.

  9. In Situ Decommissioning Sensor Network, Meso-Scale Test Bed - Phase 3 Fluid Injection Test Summary Report

    SciTech Connect (OSTI)

    Serrato, M. G.

    2013-09-27

    The DOE Office of Environmental Management (DOE EM) faces the challenge of decommissioning thousands of excess nuclear facilities, many of which are highly contaminated. A number of these excess facilities are massive and robust concrete structures that are suitable for isolating the contained contamination for hundreds of years, and a permanent decommissioning end state option for these facilities is in situ decommissioning (ISD). The ISD option is feasible for a limited, but meaningful, number of DOE contaminated facilities for which there are substantial incremental environmental, safety, and cost benefits versus alternate actions to demolish and excavate the entire facility and transport the rubble to a radioactive waste landfill. A general description of an ISD project encompasses an entombed facility, in some cases limited to the below-grade portion of a facility. However, monitoring of the ISD structures is needed to demonstrate that the building retains its structural integrity and the contaminants remain entombed within the grout stabilization matrix. The DOE EM Office of Deactivation and Decommissioning and Facility Engineering (EM-13) program goal is to develop a monitoring system to demonstrate long-term performance of closed nuclear facilities using the ISD approach. The Savannah River National Laboratory (SRNL) has designed and implemented the In Situ Decommissioning Sensor Network, Meso-Scale Test Bed (ISDSN-MSTB) to address the feasibility of deploying a long-term monitoring system into an ISD closed nuclear facility. The ISDSN-MSTB goal is to demonstrate the feasibility of installing and operating a remote sensor network to assess cementitious material durability, moisture-fluid flow through the cementitious material, and the resulting transport potential for contaminant mobility in a decommissioned closed nuclear facility. The original ISDSN-MSTB installation and remote sensor network operation were demonstrated in FY 2011-12 at the ISDSN-MSTB test cube located at the Florida International University Applied Research Center, Miami, FL (FIU-ARC). A follow-on fluid injection test was developed to detect fluid and ion migration in a cementitious material/grouted test cube using a limited number of existing embedded sensor systems. This In Situ Decommissioning Sensor Network, Meso-Scale Test Bed (ISDSN-MSTB) - Phase 3 Fluid Injection Test Summary Report summarizes the test implementation, acquired and processed data, and results from the activated embedded sensor systems used during the fluid injection test. The ISDSN-MSTB Phase 3 Fluid Injection Test was conducted from August 27 through September 6, 2013 at the FIU-ARC ISDSN-MSTB test cube. The fluid injection test activated a portion of the existing embedded sensor systems in the ISDSN-MSTB test cube: Electrical Resistivity Tomography-Thermocouple Sensor Arrays, Advance Tensiometer Sensors, and Fiber Loop Ringdown Optical Sensors. These embedded sensor systems were activated 15 months after initial placement. All sensor systems were remotely operated, and data acquisition was completed through the established Sensor Remote Access System (SRAS) hosted on the DOE D&D Knowledge Management Information Tool (D&D DKM-IT) server. The ISDSN-MSTB Phase 3 Fluid Injection Test successfully demonstrated the feasibility of embedding sensor systems to assess moisture-fluid flow and the resulting transport potential for contaminant mobility through a cementitious material/grout monolith.
The ISDSN embedded sensor systems activated for the fluid injection test highlighted the robustness of the sensor systems and the importance of configuring systems in-depth (i.e., complementary sensors and measurements) to alleviate data acquisition gaps.

  10. Probability density function characterization for aggregated large-scale wind power based on Weibull mixtures

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Gomez-Lazaro, Emilio; Bueso, Maria C.; Kessler, Mathieu; Martin-Martinez, Sergio; Zhang, Jie; Hodge, Bri -Mathias; Molina-Garcia, Angel

    2016-02-02

    Here, the Weibull probability distribution has been widely applied to characterize wind speeds for wind energy resources. Wind power generation modeling is different, however, due in particular to power curve limitations, wind turbine control methods, and transmission system operation requirements. These differences are even greater for aggregated wind power generation in power systems with high wind penetration. Consequently, models based on one Weibull component can provide poor characterizations for aggregated wind power generation. With this aim, the present paper focuses on discussing Weibull mixtures to characterize the probability density function (PDF) for aggregated wind power generation. PDFs of wind power data are first classified according to hourly and seasonal patterns. The selection of the number of components in the mixture is analyzed through two well-known criteria: the Akaike information criterion (AIC) and the Bayesian information criterion (BIC). Finally, the optimal number of Weibull components for maximum likelihood is explored for the defined patterns, including the estimated weight, scale, and shape parameters. Results show that multi-Weibull models are more suitable to characterize aggregated wind power data due to the impact of distributed generation, the variety of wind speed values, and wind power curtailment.
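    A compact sketch of the model-selection step described above, fitting a one-component Weibull and a two-component Weibull mixture to a synthetic sample and comparing AIC/BIC; the data, starting values, and bounds are assumptions for illustration only, not the paper's datasets or fitting procedure.

        # Compare a single Weibull fit against a two-component Weibull mixture
        # using AIC and BIC; synthetic data stand in for aggregated wind power.
        import numpy as np
        from scipy import stats, optimize

        data = np.concatenate([
            stats.weibull_min.rvs(1.8, scale=0.3, size=500, random_state=1),
            stats.weibull_min.rvs(4.0, scale=0.8, size=500, random_state=2),
        ])

        # One-component maximum-likelihood fit (location fixed at zero).
        c, _, s = stats.weibull_min.fit(data, floc=0)
        ll1 = np.sum(stats.weibull_min.logpdf(data, c, scale=s))
        aic1, bic1 = 2 * 2 - 2 * ll1, 2 * np.log(data.size) - 2 * ll1

        # Two-component mixture fit by direct likelihood maximization.
        def nll(p, x):
            w, c1, s1, c2, s2 = p
            pdf = (w * stats.weibull_min.pdf(x, c1, scale=s1)
                   + (1 - w) * stats.weibull_min.pdf(x, c2, scale=s2))
            return -np.sum(np.log(pdf + 1e-300))

        res = optimize.minimize(nll, x0=[0.5, 2.0, 0.3, 4.0, 0.8], args=(data,),
                                bounds=[(0.01, 0.99), (0.1, 20), (1e-3, 5), (0.1, 20), (1e-3, 5)])
        ll2 = -res.fun
        aic2, bic2 = 2 * 5 - 2 * ll2, 5 * np.log(data.size) - 2 * ll2
        print(f"1-component: AIC={aic1:.1f}, BIC={bic1:.1f}")
        print(f"2-component: AIC={aic2:.1f}, BIC={bic2:.1f}")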

  11. GraphReduce: Large-Scale Graph Analytics on Accelerator-Based HPC Systems

    SciTech Connect (OSTI)

    Sengupta, Dipanjan; Agarwal, Kapil; Song, Shuaiwen; Schwan, Karsten

    2015-09-30

    Recent work on real-world graph analytics has sought to leverage the massive amount of parallelism offered by GPU devices, but challenges remain due to the inherent irregularity of graph algorithms and limitations in GPU-resident memory for storing large graphs. We present GraphReduce, a highly efficient and scalable GPU-based framework that operates on graphs that exceed the device’s internal memory capacity. GraphReduce adopts a combination of both edge- and vertex-centric implementations of the Gather-Apply-Scatter programming model and operates on multiple asynchronous GPU streams to fully exploit the high degrees of parallelism in GPUs with efficient graph data movement between the host and the device.
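    To make the Gather-Apply-Scatter abstraction concrete, here is a toy CPU-side sweep of PageRank expressed in GAS form. It illustrates only the programming model, not GraphReduce's GPU partitioning, host-device streaming, or hybrid edge-/vertex-centric scheduling, and the graph is an invented example:

        # Toy Gather-Apply-Scatter sweep (one PageRank iteration) on a tiny graph.
        edges = [(0, 1), (0, 2), (1, 2), (2, 0), (3, 2)]   # directed edge list
        n = 4
        rank = [1.0 / n] * n
        out_deg = [0] * n
        for s, _ in edges:
            out_deg[s] += 1

        def gather(v):                       # Gather: sum contributions along in-edges
            return sum(rank[s] / out_deg[s] for s, d in edges if d == v)

        def apply_(v, acc, damping=0.85):    # Apply: update the vertex value
            return (1 - damping) / n + damping * acc

        new_rank = [apply_(v, gather(v)) for v in range(n)]
        rank = new_rank                      # Scatter is implicit here: updated values
        print(rank)                          # feed the gathers of the next sweep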

  12. Large-scale biomass for energy, with considerations and cautions: an editorial comment

    SciTech Connect (OSTI)

    Marland, Gregg; Obersteiner, Michael

    2008-04-01

    Greenhouse gas abatement policies will increase the demand for renewable sources of energy, including bioenergy. In combination with a growing global demand for food, this could lead to a food-fuel competition for bio-productive land. Proponents of bioenergy have suggested that energy crop plantations may be established on less productive land as a way of avoiding this potential food-fuel competition. However, many of these suggestions have been made without any underlying economic analysis. In this paper, we develop a long-term economic optimization model (LUCEA) of the U.S. agricultural and energy system to analyze this possible competition for land and to examine the link between carbon prices, the energy system dynamics, and the effect of the land competition on food prices. Our results indicate that bioenergy plantations will already be competitive on cropland at carbon taxes of about US $20/ton C. As the carbon tax increases, food prices more than double compared to the reference scenario in which there is no climate policy. Further, bioenergy plantations appropriate significant areas of both cropland and grazing land. In model runs where we have limited the amount of grazing land that can be used for bioenergy to what many analysts consider the upper limit, most of the bioenergy plantations are established on cropland. Under the assumption that more grazing land can be used, large areas of bioenergy plantations are established on grazing land, despite the fact that yields are assumed to be much lower (less than half) than on cropland. It should be noted that this allocation on grazing land takes place as a result of a competition between food and bioenergy production and not because of a lack of it. The estimated increase in food prices is largely unaffected by how much grazing land can be used for bioenergy production.

  13. Testing, Manufacturing, and Component Development Projects for Utility-Scale and Distributed Wind Energy, Fiscal Years 2006-2014

    SciTech Connect (OSTI)

    None, None

    2014-04-01

    This report covers the Wind and Water Power Technologies Office's Testing, Manufacturing, and Component Development Projects for Utility-Scale and Distributed Wind Energy from 2006 to 2014.

  14. Economic Impact of Large-Scale Deployment of Offshore Marine and Hydrokinetic Technology in Oregon Coastal Counties

    SciTech Connect (OSTI)

    Jimenez, T.; Tegen, S.; Beiter, P.

    2015-03-01

    To begin understanding the potential economic impacts of large-scale wave energy converter (WEC) technology, the Bureau of Ocean Energy Management (BOEM) commissioned the National Renewable Energy Laboratory (NREL) to conduct an economic impact analysis of large-scale WEC deployment for Oregon coastal counties. This report follows a previously published report by BOEM and NREL on the jobs and economic impacts of WEC technology for the entire state (Jimenez and Tegen 2015). As in Jimenez and Tegen (2015), this analysis examined two deployment scenarios in the 2026-2045 timeframe: the first scenario assumed 13,000 megawatts (MW) of WEC technology deployed during the analysis period, and the second assumed 18,000 MW of WEC technology deployed by 2045. Both scenarios require major technology and cost improvements in the WEC devices. The study focuses on very large-scale deployment so that readers can examine and discuss the potential of a successful and very large WEC industry. The 13,000-MW scenario is used as the basis for the county analysis, as it is the smaller of the two scenarios. Sensitivity studies examined the effects of a robust in-state WEC supply chain. The region of analysis comprises the seven coastal counties in Oregon—Clatsop, Coos, Curry, Douglas, Lane, Lincoln, and Tillamook—so estimates of jobs and other economic impacts are specific to this coastal county area.

  15. Simplified field-in-field technique for a large-scale implementation in breast radiation treatment

    SciTech Connect (OSTI)

    Fournier-Bidoz, Nathalie; Kirova, Youlia M.; Campana, Francois; Dendale, Remi; Fourquet, Alain

    2012-07-01

    We wanted to evaluate a simplified 'field-in-field' technique (SFF) that was implemented in our department of Radiation Oncology for breast treatment. This study evaluated 15 consecutive patients treated with a simplified field-in-field technique after breast-conserving surgery for early-stage breast cancer. Radiotherapy consisted of whole-breast irradiation to a total dose of 50 Gy in 25 fractions, and a boost of 16 Gy in 8 fractions to the tumor bed. We compared dosimetric outcomes of SFF to state-of-the-art electronic surface compensation (ESC) with dynamic leaves. An analysis of early skin toxicity of a population of 15 patients was performed. The median volume receiving at least 95% of the prescribed dose was 763 mL (range, 347-1472) for SFF vs. 779 mL (range, 349-1494) for ESC. The median residual 107% isodose was 0.1 mL (range, 0-63) for SFF and 1.9 mL (range, 0-57) for ESC. Monitor units were on average 25% higher in ESC plans compared with SFF. No patient treated with SFF had acute side effects above grade 1 on the NCI scale. SFF created homogeneous 3D dose distributions equivalent to electronic surface compensation with dynamic leaves. It allowed the integration of a forward-planned concomitant tumor bed boost as an additional multileaf collimator subfield of the tangential fields. Compared with electronic surface compensation with dynamic leaves, shorter treatment times allowed better radiation protection to the patient. Low-grade acute toxicity evaluated weekly during treatment and 2 months after treatment completion justified the pursuit of this technique for all breast patients in our department.

  16. Government commercialization of large scale technology: the United States Breeder Reactor Program 1964-1976

    SciTech Connect (OSTI)

    Stiefel, M.D.

    1981-06-01

    The US Liquid Metal Fast Breeder Reactor program was an attempt by the Atomic Energy Commission to develop, in partnership with industry, a particular nuclear technology. Not only did the AEC provide subsidies and test facilities for the private sector, but the agency attempted to direct which technological options would be developed. The national laboratories, nuclear vendors, and electric utilities were not amenable to government direction. The resulting time delays and cost overruns stalled the program until the anti-nuclear movement arose and undermined the political consensus behind the program. As a result, a breeder demonstration plant has not yet been built in the United States. The analysis of this thesis suggests two conclusions. First, future government directed commercialization programs are unlikely to succeed. Second, breeder development should be slowed down until the political problems in the nuclear industry are solved.

  17. Problems associated with large scale personnel monitoring of photons using lithium-fluoride TLD-100

    SciTech Connect (OSTI)

    Not Available

    1985-01-01

    The dosimetric properties of a large batch of lithium fluoride TLD-100 dosimeters when exposed to photons for total absorbed doses in the region of 0.1-10 mGy (10-100 mr) have been examined in this work. This region is of particular importance because in many operational health physics situations the majority (>90%) of all recorded absorbed doses to personnel lie in this region. With the possibility that occupational radiation dose limits may be reduced in the future, accurate monitoring of individuals in this region will be of prime importance. The purpose of this thesis was to point out several effects which could compromise accurate dosimetric measurements in this region and to suggest some methods to minimize them. These effects include the effect of TLD batch composition, overresponse of the dosimeter to low-energy photons, dose rate effects, the effects of storing the dosimeter before readout, and possible interference from ultraviolet and radiofrequency radiation. Each of these items can cause errors which can range up to 70%, depending on the total absorbed dose and the particulars of the radiation exposure. One effect which is of extreme interest is the induction of a thermoluminescent signal by radiofrequency radiation. Although this effect can cause gross errors in estimating the ionizing dose, it opens the possibility that LiF or another phosphor may have an application as a non-ionizing radiation dosimeter.

  18. Chemical cartography with apogee: Large-scale mean metallicity maps of the Milky Way disk

    SciTech Connect (OSTI)

    Hayden, Michael R.; Holtzman, Jon A.; Lee, Young Sun; Bovy, Jo; Majewski, Steven R.; García Pérez, Ana E.; Johnson, Jennifer A.; Allende Prieto, Carlos; Beers, Timothy C.; Cunha, Katia; Frinchaboy, Peter M.; Girardi, Léo; Hearty, Fred R.; Nidever, David; Schiavon, Ricardo P.; Schlesinger, Katharine J.; Schneider, Donald P.; Schultheis, Mathias E-mail: holtz@nmsu.edu E-mail: feuilldk@nmsu.edu; and others

    2014-05-01

    We present Galactic mean metallicity maps derived from the first year of the SDSS-III APOGEE experiment. Mean abundances in different zones of projected Galactocentric radius (0 < R < 15 kpc) at a range of heights above the plane (0 < |z| < 3 kpc) are derived from a sample of nearly 20,000 giant stars with unprecedented coverage, including stars in the Galactic mid-plane at large distances. We also split the sample into subsamples of stars with low- and high-[α/M] abundance ratios. We assess possible biases in deriving the mean abundances, and find that they are likely to be small except in the inner regions of the Galaxy. A negative radial metallicity gradient exists over much of the Galaxy; however, the gradient appears to flatten for R < 6 kpc, in particular near the Galactic mid-plane and for low-[α/M] stars. At R > 6 kpc, the gradient flattens as one moves off the plane, and is flatter at all heights for high-[α/M] stars than for low-[α/M] stars. Alternatively, these gradients can be described as vertical gradients that flatten at larger Galactocentric radius; these vertical gradients are similar for both low- and high-[α/M] populations. Stars with higher [α/M] appear to have a flatter radial gradient than stars with lower [α/M]. This could suggest that the metallicity gradient has grown steeper with time or, alternatively, that gradients are washed out over time by migration of stars.

  19. Direct wafer bonding technology for large-scale InGaAs-on-insulator transistors

    SciTech Connect (OSTI)

    Kim, SangHyeon E-mail: sh-kim@kist.re.kr; Ikku, Yuki; Takenaka, Mitsuru; Takagi, Shinichi; Yokoyama, Masafumi; Nakane, Ryosho; Li, Jian; Kao, Yung-Chung

    2014-07-28

    Heterogeneous integration of III-V devices on Si wafers has been explored for realizing high device performance as well as merging electrical and photonic applications on the Si platform. Existing methodologies have unavoidable drawbacks such as inferior device quality or high cost in comparison with current Si-based technology. In this paper, we present InGaAs-on-insulator (-OI) fabrication from an InGaAs layer grown on a Si donor wafer with a III-V buffer layer instead of growth on an InP donor wafer. This technology provides wafer-size scalability of III-V-OI layers up to the 300 mm Si wafer size with high film quality and low cost. The high film quality has been confirmed by Raman and photoluminescence spectra. In addition, the fabricated InGaAs-OI transistors exhibit a high electron mobility of 1700 cm{sup 2}/V s and a uniform distribution of the leakage current, indicating high layer quality with low defect density.

  20. LYα FOREST TOMOGRAPHY FROM BACKGROUND GALAXIES: THE FIRST MEGAPARSEC-RESOLUTION LARGE-SCALE STRUCTURE MAP AT z > 2

    SciTech Connect (OSTI)

    Lee, Khee-Gan; Hennawi, Joseph F.; Eilers, Anna-Christina [Max Planck Institute for Astronomy, Königstuhl 17, D-69117 Heidelberg (Germany); Stark, Casey; White, Martin [Department of Astronomy, University of California at Berkeley, B-20 Hearst Field Annex 3411, Berkeley, CA 94720 (United States); Prochaska, J. Xavier [Department of Astronomy and Astrophysics, University of California, 1156 High Street, Santa Cruz, CA 95064 (United States); Schlegel, David J. [University of California Observatories, Lick Observatory, 1156 High Street, Santa Cruz, CA 95064 (United States); Arinyo-i-Prats, Andreu [Institut de Ciències del Cosmos, Universitat de Barcelona (IEEC-UB), Martí i Franquès 1, E-08028 Barcelona (Spain); Suzuki, Nao [Kavli Institute for the Physics and Mathematics of the Universe (IPMU), The University of Tokyo, Kashiwano-ha 5-1-5, Kashiwa-shi, Chiba (Japan); Croft, Rupert A. C. [Department of Physics, Carnegie Mellon University, 5000 Forbes Avenue, Pittsburgh, PA 15213 (United States); Caputi, Karina I. [Kapteyn Astronomical Institute, University of Groningen, P.O. Box 800, 9700-AV Groningen (Netherlands); Cassata, Paolo [Instituto de Fisica y Astronomia, Facultad de Ciencias, Universidad de Valparaiso, Av. Gran Bretaña 1111, Casilla 5030, Valparaiso (Chile); Ilbert, Olivier; Le Brun, Vincent; Le Fèvre, Olivier [Aix Marseille Université, CNRS, LAM (Laboratoire d'Astrophysique de Marseille) UMR 7326, F-13388 Marseille (France); Garilli, Bianca [INAF-IASF, Via Bassini 15, I-20133, Milano (Italy); Koekemoer, Anton M. [Space Telescope Science Institute, 3700 San Martin Drive, Baltimore, MD 21218 (United States); Maccagni, Dario [INAF-Osservatorio Astronomico di Bologna, Via Ranzani,1, I-40127 Bologna (Italy); Nugent, Peter, E-mail: lee@mpia.de [Lawrence Berkeley National Laboratory, 1 Cyclotron Road, Berkeley, CA 94720 (United States); and others

    2014-11-01

    We present the first observations of foreground Lyα forest absorption from high-redshift galaxies, targeting 24 star-forming galaxies (SFGs) with z ≈ 2.3-2.8 within a 5' × 14' region of the COSMOS field. The transverse sightline separation is ~2 h{sup -1} Mpc comoving, allowing us to create a tomographic reconstruction of the three-dimensional (3D) Lyα forest absorption field over the redshift range 2.20 ≤ z ≤ 2.45. The resulting map covers 6 h{sup -1} Mpc × 14 h{sup -1} Mpc in the transverse plane and 230 h{sup -1} Mpc along the line of sight with a spatial resolution of ~3.5 h{sup -1} Mpc, and is the first high-fidelity map of a large-scale structure on ~Mpc scales at z > 2. Our map reveals significant structures with ≳10 h{sup -1} Mpc extent, including several spanning the entire transverse breadth, providing qualitative evidence for the filamentary structures predicted to exist in the high-redshift cosmic web. Simulated reconstructions with the same sightline sampling, spectral resolution, and signal-to-noise ratio recover the salient structures present in the underlying 3D absorption fields. Using data from other surveys, we identified 18 galaxies with known redshifts coeval with our map volume, enabling a direct comparison with our tomographic map. This shows that galaxies preferentially occupy high-density regions, in qualitative agreement with the same comparison applied to simulations. Our results establish the feasibility of the CLAMATO survey, which aims to obtain Lyα forest spectra for ~1000 SFGs over ~1 deg{sup 2} of the COSMOS field, in order to map out the intergalactic medium large-scale structure at ⟨z⟩ ~ 2.3 over a large volume (100 h{sup -1} Mpc){sup 3}.

  1. Hanford Waste Vitrification Plant full-scale feed preparation testing with water and process simulant slurries

    SciTech Connect (OSTI)

    Gaskill, J.R.; Larson, D.E.; Abrigo, G.P.

    1996-03-01

    The Hanford Waste Vitrification Plant was intended to convert selected, pretreated defense high-level waste and transuranic waste from the Hanford Site into a borosilicate glass. A full-scale testing program was conducted with nonradioactive waste simulants to develop information for process and equipment design of the feed-preparation system. The equipment systems tested included the Slurry Receipt and Adjustment Tank, Slurry Mix Evaporator, and Melter-Feed Tank. The areas of data generation included heat transfer (boiling, heating, and cooling), slurry mixing, slurry pumping and transport, slurry sampling, and process chemistry. 13 refs., 129 figs., 68 tabs.

  2. Hardware-in-the-Loop Testing of Utility-Scale Wind Turbine Generators

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Hardware-in-the-Loop Testing of Utility-Scale Wind Turbine Generators Ryan Schkoda, Curtiss Fox, and Ramtin Hadidi Clemson University Vahan Gevorgian, Robb Wallen, and Scott Lambert National Renewable Energy Laboratory Technical Report NREL/TP-5000-64787 January 2016

  3. UCRL-ID-124563 LLNL Small-scale Friction Sensitivity (BAM) Test

    Office of Scientific and Technical Information (OSTI)

    UCRL-ID-124563, LLNL Small-scale Friction Sensitivity (BAM) Test, L. Richard Simpson and M. Frances Foltz, June 1996.

  4. POC-SCALE TESTING OF A DRY TRIBOELECTROSTATIC SEPARATOR FOR FINE COAL CLEANING

    SciTech Connect (OSTI)

    R.-H. Yoon; G.H. Luttrell; B. Luvsansambuu; A.D. Walters

    2000-10-01

    Work continued during the past quarter to improve the performance of the POC-scale unit. For the charging system, a more robust ''turbocharger'' has been fabricated and installed. All of the internal components of the charger have been constructed from the same material (i.e., Plexiglas) to prevent particles from contacting surfaces with different work functions. For the electrode system, a new set of vinyl-coated electrodes has been constructed and tested. The coated electrodes (i) allow higher field strengths to be tested without risk of arcing and (ii) minimize the likelihood of charge reversal caused by particles colliding with the conducting surfaces of the uncoated electrodes. Tests are underway to evaluate these modifications. Several different coal samples were collected for testing during this reporting period. These samples included (i) a ''reject'' material that was collected from the pyrite trap of a pulverizer at a coal-fired power plant, (ii) an ''intermediate'' product that was selectively withdrawn from the grinding chamber of a pulverizer at a power plant, and (iii) a run-of-mine feed coal from an operating coal preparation plant. Tests were conducted with these samples to investigate the effects of several key parameters (e.g., particle size, charger type, sample history, electrode coatings, etc.) on the performance of the bench-scale separator.

  5. High performance graphics processor based computed tomography reconstruction algorithms for nuclear and other large scale applications.

    SciTech Connect (OSTI)

    Jimenez, Edward Steven,

    2013-09-01

    The goal of this work is to develop a fast computed tomography (CT) reconstruction algorithm based on graphics processing units (GPUs) that achieves significant improvement over traditional central processing unit (CPU) based implementations. The main challenge in developing a CT algorithm that is capable of handling very large datasets is parallelizing the algorithm in such a way that data transfer does not hinder performance of the reconstruction algorithm. General Purpose Graphics Processing (GPGPU) is a new technology that the Science and Technology (S&T) community is starting to adopt in many fields where CPU-based computing is the norm. GPGPU programming requires a new approach to algorithm development that utilizes massively multi-threaded environments. Multi-threaded algorithms in general are difficult to optimize since performance bottlenecks, such as memory latencies, occur that are non-existent in single-threaded algorithms. If an efficient GPU-based CT reconstruction algorithm can be developed, computational times could be improved by a factor of 20. Additionally, cost benefits will be realized as commodity graphics hardware could potentially replace expensive supercomputers and high-end workstations. This project will take advantage of the CUDA programming environment and attempt to parallelize the task in such a way that multiple slices of the reconstruction volume are computed simultaneously. This work will also take advantage of the GPU memory by utilizing asynchronous memory transfers, GPU texture memory, and (when possible) pinned host memory so that the memory transfer bottleneck inherent to GPGPU is amortized. Additionally, this work will take advantage of GPU-specific hardware (i.e., fast texture memory, pixel pipelines, hardware interpolators, and varying memory hierarchy) that will allow for additional performance improvements.
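
    The slice-level decomposition described above can be sketched without any GPU-specific machinery. The Python fragment below is a minimal illustration, not the report's CUDA implementation: reconstruct_slice is a placeholder unfiltered backprojection, and the pool size, volume dimensions, and random sinograms are made up purely to show that independent slices map naturally onto parallel workers.

      # Minimal sketch of slice-parallel CT reconstruction (illustrative only; the
      # report describes a CUDA implementation with asynchronous transfers and
      # texture memory, which is not reproduced here).
      from multiprocessing import Pool
      import numpy as np

      def reconstruct_slice(args):
          """Unfiltered backprojection of one axial slice (placeholder algorithm)."""
          sinogram, angles_deg = args                  # sinogram shape: (n_angles, n_det)
          n_det = sinogram.shape[1]
          recon = np.zeros((n_det, n_det))
          xs = np.arange(n_det) - n_det / 2.0          # pixel coordinates about the axis
          X, Y = np.meshgrid(xs, xs)
          for proj, theta in zip(sinogram, np.deg2rad(angles_deg)):
              t = X * np.cos(theta) + Y * np.sin(theta) + n_det / 2.0
              idx = np.clip(t.astype(int), 0, n_det - 1)
              recon += proj[idx]                       # smear this view across the slice
          return recon / len(angles_deg)

      if __name__ == "__main__":
          n_slices, n_angles, n_det = 8, 180, 128
          angles = np.linspace(0.0, 180.0, n_angles, endpoint=False)
          sinograms = [np.random.rand(n_angles, n_det) for _ in range(n_slices)]
          # Slices are independent, so they parallelize trivially -- the same
          # property the GPU implementation exploits across thread blocks.
          with Pool(4) as pool:
              volume = pool.map(reconstruct_slice, [(s, angles) for s in sinograms])
          print(len(volume), volume[0].shape)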

  6. Proteogenomic strategies for identification of aberrant cancer peptides using large-scale Next Generation Sequencing data

    SciTech Connect (OSTI)

    Woo, Sunghee; Cha, Seong Won; Na, Seungjin; Guest, Clark; Liu, Tao; Smith, Richard D.; Rodland, Karin D.; Payne, Samuel H.; Bafna, Vineet

    2014-11-17

    Cancer is driven by the acquisition of somatic DNA lesions. Distinguishing the early driver mutations from subsequent passenger mutations is key to molecular sub-typing of cancers, and the discovery of novel biomarkers. The availability of genomics technologies (mainly whole-genome and exome sequencing, and transcript sampling via RNA-seq, collectively referred to as NGS) has fueled recent studies on somatic mutation discovery. However, the vision is challenged by the complexity, redundancy, and errors in genomic data, and the difficulty of investigating the proteome using only genomic approaches. Recently, combinations of proteomic and genomic technologies have been increasingly employed. However, the complexity and redundancy of NGS data remains a challenge for proteogenomics, and various trade-offs must be made to allow for the searches to take place. This paper provides a discussion of two such trade-offs, relating to large database searches and FDR calculations, and their implications for cancer proteogenomics. Moreover, it extends and develops the idea of a unified genomic variant database that can be searched by any mass spectrometry sample. A total of 879 BAM files downloaded from the TCGA repository were used to create a 4.34 GB unified FASTA database which contained 2,787,062 novel splice junctions, 38,464 deletions, 1105 insertions, and 182,302 substitutions. Proteomic data from a single ovarian carcinoma sample (439,858 spectra) was searched against the database. By applying the most conservative FDR measure, we have identified 524 novel peptides and 65,578 known peptides at a 1% FDR threshold. The novel peptides include interesting examples of doubly mutated peptides, frame-shifts, and non-sample-recruited mutations, which emphasize the strength of our approach.
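
    As a concrete illustration of the FDR bookkeeping discussed above, the sketch below applies a standard target-decoy filter to a list of peptide-spectrum matches and keeps the largest set whose estimated FDR stays under 1%. It is a generic illustration under the usual target-decoy assumptions, not the specific "most conservative" FDR measure used in the study; the scores and labels are fabricated.

      # Generic target-decoy FDR filtering sketch (illustrative; not the exact
      # FDR procedure used in the study).
      def filter_at_fdr(psms, fdr_threshold=0.01):
          """psms: list of (score, is_decoy). Return target PSMs above the most
          permissive score cutoff whose decoy/target ratio stays <= threshold."""
          ranked = sorted(psms, key=lambda p: p[0], reverse=True)  # best score first
          decoys = targets = 0
          cutoff_index = -1
          for i, (score, is_decoy) in enumerate(ranked):
              decoys += is_decoy
              targets += not is_decoy
              if targets and decoys / targets <= fdr_threshold:
                  cutoff_index = i                 # deepest point still under threshold
          return [p for p in ranked[:cutoff_index + 1] if not p[1]]

      # Toy example with fabricated scores and decoy labels.
      example = [(12.1, False), (11.8, False), (10.2, True), (9.9, False), (7.5, True)]
      print(len(filter_at_fdr(example)))           # number of accepted target PSMs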

  7. Neutrino Physics from the Cosmic Microwave Background and Large Scale Structure

    SciTech Connect (OSTI)

    Abazajian, K. N.; Arnold, K.; Austermann, J.; Benson, B. A.; Bischoff, C.; Bock, J.; Bond, J. R.; Borrill, J.; Calabrese, E.; Carlstrom, J. E.; Carvalho, C. S.; Chang, C. L.; Chiang, H. C.; Church, S.; Cooray, A.; Crawford, T. M.; Dawson, K. S.; Das, S.; Devlin, M. J.; Dobbs, M.; Dodelson, S.; Dore, O.; Dunkley, J.; Errard, J.; Fraisse, A.; Gallicchio, J.; Halverson, N. W.; Hanany, S.; Hildebrandt, S. R.; Hincks, A.; Hlozek, R.; Holder, G.; Holzapfel, W. L.; Honscheid, K.; Hu, W.; Hubmayr, J.; Irwin, K.; Jones, W. C.; Kamionkowski, M.; Keating, B.; Keisler, R.; Knox, L.; Komatsu, E.; Kovac, J.; Kuo, C. -L.; Lawrence, C.; Lee, A. T.; Leitch, E.; Linder, E.; Lubin, P.; McMahon, J.; Miller, A.; Newburgh, L.; Niemack, M. D.; Nguyen, H.; Nguyen, H. T.; Page, L.; Pryke, C.; Reichardt, C. L.; Ruhl, J. E.; Sehgal, N.; Seljak, U.; Sievers, J.; Silverstein, E.; Slosar, A.; Smith, K. M.; Spergel, D.; Staggs, S. T.; Stark, A.; Stompor, R.; Wang, G.; Watson, S.; Wollack, E. J.; Wu, W. L.K.; Yoon, K. W.; Zahn, O.

    2014-03-15

    This is a report on the status and prospects of the quantification of neutrino properties through the cosmological neutrino background for the Cosmic Frontier of the Division of Particles and Fields Community Summer Study long-term planning exercise. Experiments planned and underway are prepared to study the cosmological neutrino background in detail via its influence on distance-redshift relations and the growth of structure. The program for the next decade described in this document, including upcoming spectroscopic galaxy surveys eBOSS and DESI and a new Stage-IV CMB polarization experiment CMB-S4, will achieve σ(Σmν) = 16 meV and σ(Neff) = 0.020. Such a mass measurement will produce a high significance detection of non-zero Σmν, whose lower bound derived from atmospheric and solar neutrino oscillation data is about 58 meV. If neutrinos have a minimal normal mass hierarchy, this measurement will definitively rule out the inverted neutrino mass hierarchy, shedding light on one of the most puzzling aspects of the Standard Model of particle physics: the origin of mass. This precise a measurement of Neff will allow for high sensitivity to any light and dark degrees of freedom produced in the big bang and a precision test of the standard cosmological model prediction that Neff = 3.046.

  8. Neutrino physics from the cosmic microwave background and large scale structure

    SciTech Connect (OSTI)

    Abazajian, K. N.; Arnold, K.; Austermann, J. E.; Benson, B. A.; Bischoff, C.; Brock, J.; Bond, J. R.; Borrill, J.; Calabrese, E.; Carlstrom, J. E.; Chang, C. L.

    2015-03-15

    This is a report on the status and prospects of the quantification of neutrino properties through the cosmological neutrino background for the Cosmic Frontier of the Division of Particles and Fields Community Summer Study long-term planning exercise. Experiments planned and underway are prepared to study the cosmological neutrino background in detail via its influence on distance-redshift relations and the growth of structure. The program for the next decade described in this document, including upcoming spectroscopic galaxy surveys eBOSS and DESI and a new Stage-IV CMB polarization experiment CMB-S4, will achieve σ(Σmν) = 16 meV and σ(Neff) = 0.020. Such a mass measurement will produce a high significance detection of non-zero Σmν, whose lower bound derived from atmospheric and solar neutrino oscillation data is about 58 meV. If neutrinos have a minimal normal mass hierarchy, this measurement will definitively rule out the inverted neutrino mass hierarchy, shedding light on one of the most puzzling aspects of the Standard Model of particle physics — the origin of mass. This precise a measurement of Neff will allow for high sensitivity to any light and dark degrees of freedom produced in the big bang and a precision test of the standard cosmological model prediction that Neff = 3.046.

  9. Neutrino Physics from the Cosmic Microwave Background and Large Scale Structure

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Abazajian, K. N.; Arnold, K.; Austermann, J.; Benson, B. A.; Bischoff, C.; Bock, J.; Bond, J. R.; Borrill, J.; Calabrese, E.; Carlstrom, J. E.; et al

    2014-03-15

    This is a report on the status and prospects of the quantification of neutrino properties through the cosmological neutrino background for the Cosmic Frontier of the Division of Particles and Fields Community Summer Study long-term planning exercise. Experiments planned and underway are prepared to study the cosmological neutrino background in detail via its influence on distance-redshift relations and the growth of structure. The program for the next decade described in this document, including upcoming spectroscopic galaxy surveys eBOSS and DESI and a new Stage-IV CMB polarization experiment CMB-S4, will achieve σ(Σmν) = 16 meV and σ(Neff) = 0.020. Such a mass measurement will produce a high significance detection of non-zero Σmν, whose lower bound derived from atmospheric and solar neutrino oscillation data is about 58 meV. If neutrinos have a minimal normal mass hierarchy, this measurement will definitively rule out the inverted neutrino mass hierarchy, shedding light on one of the most puzzling aspects of the Standard Model of particle physics — the origin of mass. This precise a measurement of Neff will allow for high sensitivity to any light and dark degrees of freedom produced in the big bang and a precision test of the standard cosmological model prediction that Neff = 3.046.

  10. Utility-Scale Power Tower Solar Systems: Performance Acceptance Test Guidelines

    SciTech Connect (OSTI)

    Kearney, D.

    2013-03-01

    The purpose of these Guidelines is to provide direction for conducting performance acceptance testing for large power tower solar systems that can yield results of a high level of accuracy consistent with good engineering knowledge and practice. The recommendations have been developed under a National Renewable Energy Laboratory (NREL) subcontract and reviewed by stakeholders representing concerned organizations and interests throughout the concentrating solar power (CSP) community. An earlier NREL report provided similar guidelines for parabolic trough systems. These Guidelines recommend certain methods, instrumentation, equipment operating requirements, and calculation methods. When tests are run in accordance with these Guidelines, we expect that the test results will yield a valid indication of the actual performance of the tested equipment. But these are only recommendations--to be carefully considered by the contractual parties involved in the Acceptance Tests--and we expect that modifications may be required to fit the particular characteristics of a specific project.

  11. Large-scale mapping of landslides in the epicentral area Loma Prieta earthquake of October 17, 1989, Santa Cruz County

    SciTech Connect (OSTI)

    Spittler, T.E.; Sydnor, R.H.; Manson, M.W.; Levine, P.; McKittrick, M.M.

    1990-01-01

    The Loma Prieta earthquake of October 17, 1989 triggered landslides throughout the Santa Cruz Mountains in central California. The California Department of Conservation, Division of Mines and Geology (DMG) responded to a request for assistance from the County of Santa Cruz, Office of Emergency Services to evaluate the geologic hazard from major reactivated large landslides. DMG prepared a set of geologic maps showing the landslide features that resulted from the October 17 earthquake. The principal purpose of large-scale mapping of these landslides is: (1) to provide county officials with regional landslide information that can be used for timely recovery of damaged areas; (2) to identify disturbed ground which is potentially vulnerable to landslide movement during winter rains; (3) to provide county planning officials with timely geologic information that will be used for effective land-use decisions; (4) to document regional landslide features that may not otherwise be available for individual site reconstruction permits and for future development.

  12. Reduced Order Modeling for Prediction and Control of Large-Scale Systems.

    SciTech Connect (OSTI)

    Kalashnikova, Irina; Arunajatesan, Srinivasan; Barone, Matthew Franklin; van Bloemen Waanders, Bart Gustaaf; Fike, Jeffrey A.

    2014-05-01

    This report describes work performed from June 2012 through May 2014 as a part of a Sandia Early Career Laboratory Directed Research and Development (LDRD) project led by the first author. The objective of the project is to investigate methods for building stable and efficient proper orthogonal decomposition (POD)/Galerkin reduced order models (ROMs): models derived from a sequence of high-fidelity simulations but having a much lower computational cost. Since they are, by construction, small and fast, ROMs can enable real-time simulations of complex systems for on-the-spot analysis, control, and decision-making in the presence of uncertainty. Of particular interest to Sandia is the use of ROMs for the quantification of the compressible captive-carry environment, simulated for the design and qualification of nuclear weapons systems. It is an unfortunate reality that many ROM techniques are computationally intractable or lack an a priori stability guarantee for compressible flows. For this reason, this LDRD project focuses on the development of techniques for building provably stable projection-based ROMs. Model reduction approaches based on continuous as well as discrete projection are considered. In the first part of this report, an approach for building energy-stable Galerkin ROMs for linear hyperbolic or incompletely parabolic systems of partial differential equations (PDEs) using continuous projection is developed. The key idea is to apply a transformation induced by the Lyapunov function for the system, and to build the ROM in the transformed variables. It is shown that, for many PDE systems including the linearized compressible Euler and linearized compressible Navier-Stokes equations, the desired transformation is induced by a special inner product, termed the symmetry inner product. Attention is then turned to nonlinear conservation laws. A new transformation and corresponding energy-based inner product for the full nonlinear compressible Navier-Stokes equations is derived, and it is demonstrated that if a Galerkin ROM is constructed in this inner product, the ROM system energy will be bounded in a way that is consistent with the behavior of the exact solution to these PDEs, i.e., the ROM will be energy-stable. The viability of the linear as well as nonlinear continuous projection model reduction approaches developed as a part of this project is evaluated on several test cases, including the cavity configuration of interest in the targeted application area. In the second part of this report, some POD/Galerkin approaches for building stable ROMs using discrete projection are explored. It is shown that, for generic linear time-invariant (LTI) systems, a discrete counterpart of the continuous symmetry inner product is a weighted L2 inner product obtained by solving a Lyapunov equation. This inner product was first proposed by Rowley et al., and is termed herein the Lyapunov inner product. Comparisons between the symmetry inner product and the Lyapunov inner product are made, and the performance of ROMs constructed using these inner products is evaluated on several benchmark test cases. Also in the second part of this report, a new ROM stabilization approach, termed ROM stabilization via optimization-based eigenvalue reassignment, is developed for generic LTI systems. At the heart of this method is a constrained nonlinear least-squares optimization problem that is formulated and solved numerically to ensure accuracy of the stabilized ROM.
Numerical studies reveal that the optimization problem is computationally inexpensive to solve, and that the new stabilization approach delivers ROMs that are stable as well as accurate. Summaries of lessons learned and perspectives for future work motivated by this LDRD project are provided at the end of each of the two main chapters.
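
    The discrete Lyapunov inner product mentioned above can be illustrated in a few lines: for a stable LTI system x' = Ax, a symmetric positive-definite weighting P is obtained from a Lyapunov equation and then used as the inner product in which the Galerkin projection is performed. The sketch below, using SciPy on a small made-up system and a random stand-in for the POD basis, is only meant to show the construction; it is not the report's implementation.

      # Sketch: Lyapunov-weighted Galerkin ROM for a small, stable LTI system
      # x' = A x.  Illustrative only; the matrices are made up.
      import numpy as np
      from scipy.linalg import solve_continuous_lyapunov

      n, k = 6, 2                                   # full order, reduced order
      rng = np.random.default_rng(0)
      A = rng.standard_normal((n, n))
      A = A - (np.abs(np.linalg.eigvals(A)).max() + 1.0) * np.eye(n)  # shift to make A stable

      # Lyapunov inner product: solve A^T P + P A = -Q with Q symmetric positive definite.
      Q = np.eye(n)
      P = solve_continuous_lyapunov(A.T, -Q)        # P is symmetric positive definite

      # Reduced basis (random orthonormal columns as a stand-in for POD modes).
      Phi, _ = np.linalg.qr(rng.standard_normal((n, k)))

      # Galerkin projection in the P-weighted inner product:
      #   reduced dynamics  a' = (Phi^T P Phi)^{-1} Phi^T P A Phi a
      M = Phi.T @ P @ Phi
      A_rom = np.linalg.solve(M, Phi.T @ P @ A @ Phi)
      print(np.linalg.eigvals(A_rom).real.max())    # expected to be negative (stable ROM)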

  13. Selection of components for the IDEALHY preferred cycle for the large scale liquefaction of hydrogen

    SciTech Connect (OSTI)

    Quack, H.; Seemann, I.; Klaus, M.; Haberstroh, Ch.; Berstad, D.; Walnum, H. T.; Neksa, P.; Decker, L.

    2014-01-29

    In a future energy scenario in which storage and transport of liquid hydrogen in large quantities will be used, the efficiency of hydrogen liquefaction will be of utmost importance. The goal of the IDEALHY working party is to identify the most promising process for a 50 t/d plant and to select the components with which such a process can be realized. In the first stage the team compared several processes that have been proposed or realized in the past. Based on this information, a process was selected that is thermodynamically most promising and for which good components can be assumed to already exist or to be developable in the foreseeable future. Main features of the selected process are compression of the feed stream to a relatively high pressure level, o-p conversion inside plate-fin heat exchangers, and expansion turbines in the supercritical region. Precooling to a temperature between 150 and 100 K will be obtained from a mixed-refrigerant cycle similar to the systems used successfully in natural gas liquefaction plants. The final cooling will be produced by two Brayton cycles, both having several expansion turbines in series. The selected overall process still has a number of parameters that can be varied; the optimum, i.e., the final choice, will depend mainly on the quality of the available components. Key components are the expansion turbines of the two Brayton cycles and the main recycle compressor, which may be common to both Brayton cycles. A six-stage turbo-compressor with intercooling between the stages is expected to be the optimum choice here; each stage may consist of several wheels in series. To make such a highly efficient and cost-effective compressor feasible, one has to choose a refrigerant with a higher molecular weight than helium. The present preferred choice is a mixture of helium and neon with a molecular weight of about 8 kg/kmol. Such an expensive refrigerant requires that the whole refrigeration loop be extremely tight.
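
    The quoted mixture molecular weight of about 8 kg/kmol pins down the helium-neon composition. The short calculation below is a sketch using standard molar masses; it simply solves for the neon mole fraction implied by that target value and is not taken from the paper.

      # Mole fraction of neon in a He/Ne refrigerant mixture with a target mean
      # molar mass of ~8 kg/kmol (standard molar masses below).
      M_HE, M_NE = 4.0026, 20.180          # kg/kmol
      M_TARGET = 8.0                       # kg/kmol, as quoted for the IDEALHY cycle

      x_ne = (M_TARGET - M_HE) / (M_NE - M_HE)
      print(f"neon mole fraction ~ {x_ne:.2f}, helium ~ {1 - x_ne:.2f}")
      # -> roughly one quarter neon, three quarters helium by mole.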

  14. Pilot-scale treatability test plan for the 200-BP-5 operable unit

    SciTech Connect (OSTI)

    Not Available

    1994-08-01

    This document presents the treatability test plan for pilot-scale pump and treat testing at the 200-BP-5 Operable Unit. This treatability test plan has been prepared in response to an agreement between the U.S. Department of Energy (DOE), the U.S. Environmental Protection Agency (EPA), and the State of Washington Department of Ecology (Ecology), as documented in Hanford Federal Facility Agreement and Consent Order (Tri-Party Agreement, Ecology et al. 1989a) Change Control Form M-13-93-03 (Ecology et al. 1994) and a recent 200 NPL Agreement Change Control Form (Appendix A). The agreement also requires that, following completion of the activities described in this test plan, a 200-BP-5 Operable Unit Interim Remedial Measure (IRM) Proposed Plan be developed for use in preparing an Interim Action Record of Decision (ROD). The IRM Proposed Plan will be supported by the results of this treatability test plan, as well as by other 200-BP-5 Operable Unit activities (e.g., development of a qualitative risk assessment). Once issued, the Interim Action ROD will specify the interim action(s) for groundwater contamination at the 200-BP-5 Operable Unit. The treatability test approach is to conduct a pilot-scale pump and treat test for each of the two contaminant plumes associated with the 200-BP-5 Operable Unit. Primary contaminants of concern are {sup 99}Tc and {sup 60}Co for groundwater affected by past discharges to the 216-BY Cribs, and {sup 90}Sr, {sup 239/240}Pu, and Cs for groundwater affected by past discharges to the 216-B-5 Reverse Well. The purpose of the pilot-scale treatability testing presented in this test plan is to provide the data basis for preparing an IRM Proposed Plan. To achieve this objective, treatability testing must assess the performance of groundwater pumping with respect to the ability to extract a significant amount of the primary contaminant mass present in the two contaminant plumes.

  15. Report on full-scale horizontal cable tray fire tests, FY 1988

    SciTech Connect (OSTI)

    Riches, W.M.

    1988-09-01

    In recent years, there has been much discussion throughout industry and various governmental and fire protection agencies relative to the flammability and fire propagation characteristics of electrical cables in open cable trays. It has been acknowledged that under actual fire conditions, in the presence of other combustibles, electrical cable insulation can contribute to combustible fire loading and toxicity of smoke generation. Considerable research has been conducted on vertical cable tray fire propagation, mostly under small scale laboratory conditions. In July 1987, the Fermi National Accelerator Laboratory initiated a program of full scale, horizontal cable tray fire tests, in the absence of other building combustible loading, to determine the flammability and rate of horizontal fire propagation in cable tray configurations and cable mixes typical of those existing in underground tunnel enclosures and support buildings at the Laboratory. The series of tests addressed the effects of ventilation rates and cable tray fill, fire fighting techniques, and effectiveness and value of automatic sprinklers, smoke detection and cable coating fire barriers in detecting, controlling or extinguishing a cable tray fire. This report includes a description of the series of fire tests completed in June 1988, as well as conclusions reached from the test results.

  16. Impacts of Array Configuration on Land-Use Requirements for Large-Scale Photovoltaic Deployment in the United States: Preprint

    SciTech Connect (OSTI)

    Denholm, P.; Margolis, R. M.

    2008-05-01

    Land use is often cited as an important issue for renewable energy technologies. In this paper we examine the relationship between land-use requirements for large-scale photovoltaic (PV) deployment in the U.S. and PV-array configuration. We estimate the per capita land requirements for solar PV and find that array configuration is a stronger driver of energy density than regional variations in solar insolation. When deployed horizontally, the PV land area needed to meet 100% of an average U.S. citizen's electricity demand is about 100 m2. This requirement roughly doubles to about 200 m2 when using 1-axis tracking arrays. By comparing these total land-use requirements with other current per capita land uses, we find that land-use requirements of solar photovoltaics are modest, especially when considering the availability of zero impact 'land' on rooftops. Additional work is needed to examine the tradeoffs between array spacing, self-shading losses, and land use, along with possible techniques to mitigate land-use impacts of large-scale PV deployment.
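
    The per capita land figure quoted above follows from a simple energy balance: area equals annual electricity demand divided by the product of annual insolation and system efficiency. The sketch below reproduces that arithmetic with illustrative inputs chosen only to land near the ~100 m2 figure; they are assumptions for demonstration, not the parameters used in the paper.

      # Per capita PV land area from a simple energy balance (illustrative inputs;
      # not the paper's exact assumptions).
      demand_kwh_per_yr = 12_000       # assumed annual electricity use per person (kWh)
      insolation_kwh_m2 = 1_500        # assumed horizontal insolation (kWh/m^2/yr)
      system_efficiency = 0.08         # assumed module + balance-of-system efficiency

      area_horizontal = demand_kwh_per_yr / (insolation_kwh_m2 * system_efficiency)
      area_tracking = 2.0 * area_horizontal   # paper finds roughly 2x land for 1-axis tracking
      print(f"horizontal array: ~{area_horizontal:.0f} m^2 per person")
      print(f"1-axis tracking:  ~{area_tracking:.0f} m^2 per person")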

  17. Large-scale Environmental Variables and Transition to Deep Convection in Cloud Resolving Model Simulations: A Vector Representation

    SciTech Connect (OSTI)

    Hagos, Samson M.; Leung, Lai-Yung R.

    2012-11-01

    Cloud resolving model simulations and vector analysis are used to develop a quantitative method of assessing regional variations in the relationships between various large-scale environmental variables and the transition to deep convection. Results of the CRM simulations from three tropical regions are used to cluster environmental conditions under which transition to deep convection does and does not take place. Projections of the large-scale environmental variables on the difference between these two clusters are used to quantify the roles of these variables in the transition to deep convection. While the transition to deep convection is most sensitive to moisture and vertical velocity perturbations, the details of the profiles of the anomalies vary from region to region. In comparison, the transition to deep convection is found to be much less sensitive to temperature anomalies over all three regions. The vector formulation presented in this study represents a simple general framework for quantifying various aspects of how the transition to deep convection is sensitive to environmental conditions.
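
    The vector formulation amounts to projecting a profile of environmental anomalies onto the difference between the composite "transitioning" and "non-transitioning" states. The numpy fragment below is a minimal sketch of that projection with made-up profiles; the variable names, dimensions, and random values are illustrative, not the CRM output used in the study.

      # Sketch of the vector projection: score an environment profile by its
      # component along the (transition minus no-transition) composite difference.
      import numpy as np

      levels = 20                                       # number of vertical levels (made up)
      rng = np.random.default_rng(1)
      mean_transition = rng.standard_normal(levels)     # composite profile, transition cases
      mean_no_transition = rng.standard_normal(levels)  # composite profile, non-transition cases

      d = mean_transition - mean_no_transition
      d_hat = d / np.linalg.norm(d)                     # unit vector along the cluster difference

      def transition_score(profile):
          """Projection of an anomaly profile onto the cluster-difference direction."""
          return float(np.dot(profile, d_hat))

      sample_anomaly = rng.standard_normal(levels)
      print(transition_score(sample_anomaly))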

  18. North American extreme temperature events and related large scale meteorological patterns: A review of statistical methods, dynamics, modeling, and trends

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Grotjahn, Richard; Black, Robert; Leung, Ruby; Wehner, Michael F.; Barlow, Mathew; Bosilovich, Michael; Gershunov, Alexander; Gutowski, Jr., William J.; Gyakum, John R.; Katz, Richard W.; et al

    2015-05-22

    This paper reviews research approaches and open questions regarding data, statistical analyses, dynamics, modeling efforts, and trends in relation to temperature extremes. Our specific focus is upon extreme events of short duration (roughly less than 5 days) that affect parts of North America. These events are associated with large scale meteorological patterns (LSMPs). Methods used to define extreme event statistics and to identify and connect LSMPs to extreme temperatures are presented. Recent advances in statistical techniques can connect LSMPs to extreme temperatures through appropriately defined covariates that supplement more straightforward analyses. A wide array of LSMPs, ranging from synoptic to planetary scale phenomena, have been implicated as contributors to extreme temperature events. Current knowledge about the physical nature of these contributions and the dynamical mechanisms leading to the implicated LSMPs is incomplete. There is a pressing need for (a) systematic study of the physics of LSMP life cycles and (b) comprehensive model assessment of LSMP-extreme temperature event linkages and LSMP behavior. Generally, climate models capture the observed heat waves and cold air outbreaks with some fidelity. However, they overestimate warm wave frequency and underestimate cold air outbreak frequency, and underestimate the collective influence of low-frequency modes on temperature extremes. Climate models have been used to investigate past changes and project future trends in extreme temperatures. Overall, modeling studies have identified important mechanisms such as the effects of large-scale circulation anomalies and land-atmosphere interactions on changes in extreme temperatures. However, few studies have examined changes in LSMPs more specifically to understand the role of LSMPs on past and future extreme temperature changes. Even though LSMPs are resolvable by global and regional climate models, they are not necessarily well simulated, so more research is needed to understand the limitations of climate models and improve model skill in simulating extreme temperatures and their associated LSMPs. The paper concludes with unresolved issues and research questions.

  19. PILOT SCALE TESTING OF MONOSODIUM TITANATE MIXING FOR THE SRS SMALL COLUMN ION EXCHANGE PROCESS - 11224

    SciTech Connect (OSTI)

    Poirier, M.; Restivo, M.; Williams, M.; Herman, D.; Steeper, T.

    2011-01-25

    The Small Column Ion Exchange (SCIX) process is being developed to remove cesium, strontium, and select actinides from Savannah River Site (SRS) Liquid Waste using an existing waste tank (i.e., Tank 41H) to house the process. Savannah River National Laboratory (SRNL) is conducting pilot-scale mixing tests to determine the pump requirements for suspending monosodium titanate (MST), crystalline silicotitanate (CST), and simulated sludge. The purpose of this pilot scale testing is to determine the requirements for the pumps to suspend the MST particles so that they can contact the strontium and actinides in the liquid and be removed from the tank. The pilot-scale tank is a 1/10.85 linear scaled model of SRS Tank 41H. The tank diameter, tank liquid level, pump nozzle diameter, pump elevation, and cooling coil diameter are all 1/10.85 of their dimensions in Tank 41H. The pump locations correspond to the proposed locations in Tank 41H by the SCIX program (Risers B5 and B2 for two pump configurations and Risers B5, B3, and B1 for three pump configurations). The conclusions from this work follow: (i) Neither two standard slurry pumps nor two quad volute slurry pumps will provide sufficient power to initially suspend MST in an SRS waste tank. (ii) Two Submersible Mixer Pumps (SMPs) will provide sufficient power to initially suspend MST in an SRS waste tank. However, the testing shows the required pump discharge velocity is close to the maximum discharge velocity of the pump (within 12%). (iii) Three SMPs will provide sufficient power to initially suspend MST in an SRS waste tank. The testing shows the required pump discharge velocity is 66% of the maximum discharge velocity of the pump. (iv) Three SMPs are needed to resuspend MST that has settled in a waste tank at nominal 45 C for four weeks. The testing shows the required pump discharge velocity is 77% of the maximum discharge velocity of the pump. Two SMPs are not sufficient to resuspend MST that settled under these conditions.

  20. Post service examination of turbomolecular pumps after stress testing with Kg-scale tritium throughput

    SciTech Connect (OSTI)

    Priester, F.; Roelling, M.

    2015-03-15

    Turbomolecular pumps (TMPs) will be used with large amounts of tritium in future fusion machines like ITER and DEMO and in the KATRIN experiment. In this work, a stress test of a large, magnetically levitated TMP (Leybold MAG W2800) with a tritium throughput of 1.1 kg over 384 days of operation was performed at TLK. After this, the pump was dismantled and the tritium uptake in several parts was determined. The non-metallic parts of the pump, in particular, absorbed large amounts of tritium and are most likely responsible for the observed pollution of the process gas. The total tritium uptake of the TMP was estimated at 0.1-1.1 TBq. No radiation-induced damage was found on the inner parts of the pump. The TMP showed no signs of functional limitations during the 384 days of operation. (authors)

  1. LARGE-SCALE DISTRIBUTION OF ARRIVAL DIRECTIONS OF COSMIC RAYS DETECTED ABOVE 10{sup 18} eV AT THE PIERRE AUGER OBSERVATORY

    SciTech Connect (OSTI)

    Abreu, P.; Andringa, S.; Aglietta, M.; Ahlers, M.; Ahn, E. J.; Albuquerque, I. F. M.; Allard, D.; Allekotte, I.; Allen, J.; Allison, P.; Almela, A.; Alvarez Castillo, J.; Alvarez-Muniz, J.; Alves Batista, R.; Ambrosio, M.; Aramo, C.; Aminaei, A.; Anchordoqui, L.; Antici'c, T.; Arganda, E.; Collaboration: Pierre Auger Collaboration; and others

    2012-12-15

    A thorough search for large-scale anisotropies in the distribution of arrival directions of cosmic rays detected above 10{sup 18} eV at the Pierre Auger Observatory is presented. This search is performed as a function of both declination and right ascension in several energy ranges above 10{sup 18} eV, and reported in terms of dipolar and quadrupolar coefficients. Within the systematic uncertainties, no significant deviation from isotropy is revealed. Assuming that any cosmic-ray anisotropy is dominated by dipole and quadrupole moments in this energy range, upper limits on their amplitudes are derived. These upper limits allow us to test the origin of cosmic rays above 10{sup 18} eV from stationary Galactic sources densely distributed in the Galactic disk and predominantly emitting light particles in all directions.
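
    For readers unfamiliar with the machinery, one common ingredient of such anisotropy searches is a first-harmonic (Rayleigh) analysis in right ascension; the sketch below shows that textbook estimator on fabricated arrival directions. It is only a generic illustration, not the declination-dependent dipole and quadrupole reconstruction actually performed by the Auger Collaboration.

      # Textbook first-harmonic (Rayleigh) analysis in right ascension
      # (generic illustration with fabricated data).
      import numpy as np

      rng = np.random.default_rng(2)
      alpha = rng.uniform(0.0, 2.0 * np.pi, size=5000)     # event right ascensions (rad)

      a = 2.0 / alpha.size * np.cos(alpha).sum()
      b = 2.0 / alpha.size * np.sin(alpha).sum()
      amplitude = np.hypot(a, b)                           # first-harmonic amplitude r
      phase = np.arctan2(b, a)                             # phase of the modulation
      p_chance = np.exp(-alpha.size * amplitude**2 / 4.0)  # approx. probability of >= r from isotropy

      print(f"r = {amplitude:.4f}, phase = {np.degrees(phase):.1f} deg, P = {p_chance:.2f}")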

  2. ISSUANCE 2015-07-27: Energy Conservation Program: Test Procedures for Small, Large, and Very Large Air-Cooled Commercial Package Air Conditioning and Heating Equipment, Notice of Proposed Rulemaking

    Broader source: Energy.gov [DOE]

    Energy Conservation Program: Test Procedures for Small, Large, and Very Large Air-Cooled Commercial Package Air Conditioning and Heating Equipment, Notice of Proposed Rulemaking

  3. Final Report on DOE Project entitled Dynamic Optimized Advanced Scheduling of Bandwidth Demands for Large-Scale Science Applications

    SciTech Connect (OSTI)

    Ramamurthy, Byravamurthy

    2014-05-05

    In this project, we developed scheduling frameworks and algorithms for dynamic bandwidth demands from large-scale science applications. In addition to theoretical approaches such as Integer Linear Programming, Tabu Search, and Genetic Algorithm heuristics, we utilized practical data from the ESnet OSCARS project (from our DOE lab partners) to conduct realistic simulations of our approaches. We disseminated our work through conference presentations, journal papers, and a book chapter. We addressed the problem of scheduling lightpaths over optical wavelength-division multiplexed (WDM) networks and published several conference and journal papers on this topic. We also addressed the problem of joint allocation of computing, storage, and networking resources in Grid/Cloud networks and proposed energy-efficient mechanisms for operating optical WDM networks.
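
    As a toy illustration of advance bandwidth reservation (not the ILP, Tabu Search, or Genetic Algorithm formulations developed in the project), the sketch below admits a request on a single link only if the requested bandwidth fits under the link capacity over the whole requested interval; the slot granularity, capacity, and requests are made up.

      # Toy admission control for advance bandwidth reservations on one link
      # (illustrative only; not the project's scheduling algorithms).
      def admit(reservations, request, capacity):
          """reservations/request: (start, end, bandwidth) tuples with end exclusive."""
          start, end, bw = request
          for t in range(start, end):                      # coarse time slots
              used = sum(b for (s, e, b) in reservations if s <= t < e)
              if used + bw > capacity:
                  return False                             # would exceed capacity at slot t
          reservations.append(request)
          return True

      link = []                                            # accepted reservations
      print(admit(link, (0, 4, 6), capacity=10))           # True
      print(admit(link, (2, 6, 5), capacity=10))           # False: slots 2-3 would need 11
      print(admit(link, (4, 6, 5), capacity=10))           # True: no overlap with the first request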

  4. Scattering of electromagnetic waves by vortex density structures associated with interchange instability: Analytical and large scale plasma simulation results

    SciTech Connect (OSTI)

    Sotnikov, V.; Kim, T.; Lundberg, J.; Paraschiv, I.; Mehlhorn, T. A.

    2014-05-15

    The presence of plasma turbulence can strongly influence propagation properties of electromagnetic signals used for surveillance and communication. In particular, we are interested in the generation of low frequency plasma density irregularities in the form of coherent vortex structures. Interchange or flute type density irregularities in magnetized plasma are associated with Rayleigh-Taylor type instability. These types of density irregularities play an important role in refraction and scattering of high frequency electromagnetic signals propagating in the earth ionosphere, in high energy density physics, and in many other applications. We will discuss scattering of high frequency electromagnetic waves on low frequency density irregularities due to the presence of vortex density structures associated with interchange instability. We will also present particle-in-cell simulation results of electromagnetic scattering on vortex type density structures using the large scale plasma code LSP and compare them with analytical results.

  5. The absorption chiller in large scale solar pond cooling design with condenser heat rejection in the upper convecting zone

    SciTech Connect (OSTI)

    Tsilingiris, P.T.

    1992-07-01

    The possibility of using solar ponds as low-cost solar collectors combined with commercial absorption chillers in large scale solar cooling design is investigated. The analysis is based on the combination of a steady-state solar pond mathematical model with the operational characteristics of a commercial absorption chiller, assuming condenser heat rejection in the upper convecting zone (U.C.Z.). The numerical solution of the nonlinear equations involved leads to results which relate the chiller capacity with pond design and environmental parameters, which are also employed for the investigation of the optimum pond size for a minimum capital cost. The derived cost per cooling kW for a 350 kW chiller ranges from about 300 to 500 $/kW cooling. This is almost an order of magnitude lower than using a solar collector field of evacuated tube type.

  6. Hydrogen atom temperature measured with wavelength-modulated laser absorption spectroscopy in large scale filament arc negative hydrogen ion source

    SciTech Connect (OSTI)

    Nakano, H.; Goto, M.; Tsumori, K.; Kisaki, M.; Ikeda, K.; Nagaoka, K.; Osakabe, M.; Takeiri, Y.; Kaneko, O.; Nishiyama, S.; Sasaki, K.

    2015-04-08

    The velocity distribution function of hydrogen atoms is a useful parameter for understanding particle dynamics from negative hydrogen production to extraction in a negative hydrogen ion source, and the hydrogen atom temperature is one indicator of that distribution. To assess the feasibility of hydrogen atom temperature measurements in a large-scale filament-arc negative hydrogen ion source for fusion, a model calculation of wavelength-modulated laser absorption spectroscopy of the hydrogen Balmer-alpha line was performed. By utilizing a wide-range tunable diode laser, we successfully obtained a hydrogen atom temperature of ~3000 K in the vicinity of the plasma grid electrode. The hydrogen atom temperature increases with the arc power, and it decreases with increasing hydrogen gas filling pressure before leveling off.
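
    Extracting a temperature from such an absorption profile typically rests on the Doppler width of the line. As a hedged sketch (assuming the measured Balmer-alpha broadening is purely thermal, which the abstract does not state explicitly), the standard relation is

      \Delta\lambda_D = \lambda_0 \sqrt{\frac{8 \ln 2 \, k_B T}{m_H c^2}}
      \quad\Longrightarrow\quad
      T = \frac{m_H c^2}{8 \ln 2 \, k_B} \left(\frac{\Delta\lambda_D}{\lambda_0}\right)^2 ,

    where \lambda_0 = 656.28 nm for the Balmer-alpha line, \Delta\lambda_D is the full width at half maximum of the Gaussian (Doppler) component, k_B is the Boltzmann constant, m_H is the hydrogen atom mass, and c is the speed of light.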

  7. Utility-Scale Parabolic Trough Solar Systems: Performance Acceptance Test Guidelines, April 2009 - December 2010

    SciTech Connect (OSTI)

    Kearney, D.

    2011-05-01

    Prior to commercial operation, large solar systems in utility-size power plants need to pass a performance acceptance test conducted by the engineering, procurement, and construction (EPC) contractor or owners. Given the present absence of ASME or other international test codes developed for this purpose, the National Renewable Energy Laboratory has undertaken the development of interim guidelines to provide recommendations for test procedures that can yield results of a high level of accuracy consistent with good engineering knowledge and practice. The Guidelines contained here are specifically written for parabolic trough collector systems with a heat-transport system using a high-temperature synthetic oil, but the basic principles are relevant to other CSP systems.

  8. Acceptance Performance Test Guideline for Utility Scale Parabolic Trough and Other CSP Solar Thermal Systems: Preprint

    SciTech Connect (OSTI)

    Mehos, M. S.; Wagner, M. J.; Kearney, D. W.

    2011-08-01

    Prior to commercial operation, large solar systems in utility-size power plants need to pass a performance acceptance test conducted by the engineering, procurement, and construction (EPC) contractor or owners. Given the present absence of ASME or other international test codes developed for this purpose, the National Renewable Energy Laboratory has undertaken the development of interim guidelines to provide recommendations for test procedures that can yield results of a high level of accuracy consistent with good engineering knowledge and practice. Progress on interim guidelines was presented at SolarPACES 2010. Significant additions and modifications were made to the guidelines since that time, resulting in a final report published by NREL in April 2011. This paper summarizes those changes, which emphasize criteria for assuring thermal equilibrium and steady state conditions within the solar field.

  9. POC-scale testing of a dry triboelectrostatic separator for fine coal cleaning

    SciTech Connect (OSTI)

    R.-H. Yoon; G.H. Luttrell; G.T. Adel; A.D. Walters

    1999-07-01

    The Proof-of-Concept (POC) triboelectrostatic separator (TES) has now been successfully installed at the Virginia Tech pilot-plant. As a result, most of the personnel assigned to this project during the past quarter have been performing work elements associated with the installation and shakedown testing of the electrostatic separator, tribocharger system, product conveying systems and nitrogen purge system (Tasks 4, 5.1 and 5.2). A representative from Carpco also carried out training in the operating features of the unit during the past month. Most of the shakedown test work has now been successfully completed. However, several minor operational problems associated with the pilot-scale equipment are currently in the process of being resolved.

  10. Adsorption and diffusion of Ru adatoms on Ru(0001)-supported graphene: Large-scale first-principles calculations

    SciTech Connect (OSTI)

    Han, Yong; Evans, James W.

    2015-10-27

    Large-scale first-principles density functional theory calculations are performed to investigate the adsorption and diffusion of Ru adatoms on monolayer graphene (G) supported on Ru(0001). The G sheet exhibits a periodic moiré-cell superstructure due to lattice mismatch. Within a moiré cell, there are three distinct regions: fcc, hcp, and mound, in which the C6-ring center is above a fcc site, a hcp site, and a surface Ru atom of Ru(0001), respectively. The adsorption energy of a Ru adatom is evaluated at specific sites in these distinct regions. We find the strongest binding at an adsorption site above a C atom in the fcc region, next strongest in the hcp region, then the fcc-hcp boundary (ridge) between these regions, and the weakest binding in the mound region. Behavior is similar to that observed from small-unit-cell calculations of Habenicht et al. [Top. Catal. 57, 69 (2014)], which differ from previous large-scale calculations. We determine the minimum-energy path for local diffusion near the center of the fcc region and obtain a local diffusion barrier of ~0.48 eV. We also estimate a significantly lower local diffusion barrier in the ridge region. These barriers and information on the adsorption energy variation facilitate development of a realistic model for the global potential energy surface for Ru adatoms. Furthermore, this in turn enables simulation studies elucidating diffusion-mediated directed-assembly of Ru nanoclusters during deposition of Ru on G/Ru(0001).
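
    To connect the quoted diffusion barrier to a timescale, one can use a simple Arrhenius (transition-state) hop-rate estimate. The sketch below takes the ~0.48 eV barrier from the abstract together with an assumed attempt frequency and assumed temperatures, neither of which is given in the abstract.

      # Arrhenius hop-rate estimate from the ~0.48 eV local diffusion barrier
      # (attempt frequency and temperatures are assumed values, not from the paper).
      import math

      K_B_EV = 8.617e-5            # Boltzmann constant in eV/K
      barrier_ev = 0.48            # local diffusion barrier in the fcc region (from the abstract)
      attempt_hz = 1.0e13          # assumed attempt frequency (typical order of magnitude)

      for temperature in (300.0, 600.0):
          rate = attempt_hz * math.exp(-barrier_ev / (K_B_EV * temperature))
          print(f"T = {temperature:.0f} K: hop rate ~ {rate:.2e} s^-1")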

  11. An Inexpensive Aqueous Flow Battery for Large-Scale Electrical Energy Storage Based on Water-Soluble Organic Redox Couples

    SciTech Connect (OSTI)

    Yang, B; Hoober-Burkhardt, L; Wang, F; Prakash, GKS; Narayanan, SR

    2014-05-21

    We introduce a novel Organic Redox Flow Battery (ORBAT) for meeting the demanding requirements of cost, eco-friendliness, and durability for large-scale energy storage. ORBAT employs two different water-soluble organic redox couples on the positive and negative sides of a flow battery. Redox couples such as quinones are particularly attractive for this application. No precious metal catalyst is needed because of the fast proton-coupled electron transfer processes. Furthermore, in acid media, the quinones exhibit good chemical stability. These properties render quinone-based redox couples very attractive for high-efficiency metal-free rechargeable batteries. We demonstrate the rechargeability of ORBAT with anthraquinone-2-sulfonic acid or anthraquinone-2,6-disulfonic acid on the negative side, and 1,2-dihydrobenzoquinone-3,5-disulfonic acid on the positive side. The ORBAT cell uses a membrane-electrode assembly configuration similar to that used in polymer electrolyte fuel cells. Such a battery can be charged and discharged multiple times at high faradaic efficiency without any noticeable degradation of performance. We show that solubility and mass transport properties of the reactants and products are paramount to achieving high current densities and high efficiency. The ORBAT configuration presents a unique opportunity for developing an inexpensive and sustainable metal-free rechargeable battery for large-scale electrical energy storage.
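
    The emphasis on reactant solubility can be made concrete with the standard volumetric charge-capacity estimate below. The concentration and electron count are illustrative assumptions (quinone couples are typically two-electron), not measured parameters from the paper.

      # Volumetric charge capacity of an electrolyte from solubility
      # (illustrative inputs; n = 2 is typical for quinone couples).
      FARADAY = 96_485          # C/mol
      n_electrons = 2           # electrons transferred per molecule (assumed)
      concentration = 1.0       # mol/L of active species (assumed solubility)

      capacity_ah_per_l = n_electrons * FARADAY * concentration / 3600.0
      print(f"~{capacity_ah_per_l:.0f} Ah per liter of electrolyte")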

  12. LARGE-SCALE PERIODIC VARIABILITY OF THE WIND OF THE WOLF-RAYET STAR WR 1 (HD 4004)

    SciTech Connect (OSTI)

    Chene, A.-N.

    2010-06-20

    We present the results of an intensive photometric and spectroscopic monitoring campaign of the WN4 Wolf-Rayet (WR) star WR 1 = HD 4004. Our broadband V photometry covering a timespan of 91 days shows variability with a period of P = 16.9{sup +0.6}{sub -0.3} days. The same period is also found in our spectral data. The light curve is non-sinusoidal with hints of a gradual change in its shape as a function of time. The photometric variations nevertheless remain coherent over several cycles and we estimate that the coherence timescale of the light curve is of the order of 60 days. The spectroscopy shows large-scale line-profile variability which can be interpreted as excess emission peaks moving from one side of the profile to the other on a timescale of several days. Although we cannot unequivocally exclude the unlikely possibility that WR 1 is a binary, we propose that the nature of the variability we have found strongly suggests that it is due to the presence in the wind of the WR star of large-scale structures, most likely corotating interaction regions (CIRs), which are predicted to arise in inherently unstable radiatively driven winds when they are perturbed at their base. We also suggest that variability observed in WR 6, WR 134, and WR 137 is of the same nature. Finally, assuming that the period of CIRs is related to the rotational period, we estimate the rotation rate of the four stars for which sufficient monitoring has been carried out, i.e., v{sub rot} = 6.5, 40, 70, and 275 km s{sup -1} for WR 1, WR 6, WR 134, and WR 137, respectively.
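
    The quoted rotation rates follow from identifying the CIR recurrence period with the rotation period, v{sub rot} = 2(pi)R/P. The short Python sketch below (an illustration, not part of the record) inverts that relation to show the stellar radius implied by the WR 1 numbers.

      import math

      R_SUN_KM = 6.957e5  # solar radius in km

      def implied_radius_rsun(v_rot_km_s, period_days):
          """Stellar radius (in solar radii) implied by v_rot = 2*pi*R / P."""
          period_s = period_days * 86400.0
          radius_km = v_rot_km_s * period_s / (2.0 * math.pi)
          return radius_km / R_SUN_KM

      # WR 1: v_rot = 6.5 km/s and P = 16.9 d, as given in the abstract
      print(f"R ~ {implied_radius_rsun(6.5, 16.9):.1f} R_sun")  # roughly 2 solar radii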

  13. Adsorption and diffusion of Ru adatoms on Ru(0001)-supported graphene: Large-scale first-principles calculations

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Han, Yong; Evans, James W.

    2015-10-27

    Large-scale first-principles density functional theory calculations are performed to investigate the adsorption and diffusion of Ru adatoms on monolayer graphene (G) supported on Ru(0001). The G sheet exhibits a periodic moiré-cell superstructure due to lattice mismatch. Within a moiré cell, there are three distinct regions: fcc, hcp, and mound, in which the C6-ring center is above a fcc site, a hcp site, and a surface Ru atom of Ru(0001), respectively. The adsorption energy of a Ru adatom is evaluated at specific sites in these distinct regions. We find the strongest binding at an adsorption site above a C atom in the fcc region, next strongest in the hcp region, then the fcc-hcp boundary (ridge) between these regions, and the weakest binding in the mound region. Behavior is similar to that observed from small-unit-cell calculations of Habenicht et al. [Top. Catal. 57, 69 (2014)], which differ from previous large-scale calculations. We determine the minimum-energy path for local diffusion near the center of the fcc region and obtain a local diffusion barrier of ~0.48 eV. We also estimate a significantly lower local diffusion barrier in the ridge region. These barriers and information on the adsorption energy variation facilitate development of a realistic model for the global potential energy surface for Ru adatoms. Furthermore, this in turn enables simulation studies elucidating diffusion-mediated directed-assembly of Ru nanoclusters during deposition of Ru on G/Ru(0001).

  14. DISCOVERY OF A LARGE NUMBER OF CANDIDATE PROTOCLUSTERS TRACED BY ~15 Mpc-SCALE GALAXY OVERDENSITIES IN COSMOS

    SciTech Connect (OSTI)

    Chiang, Yi-Kuan; Gebhardt, Karl; Overzier, Roderik

    2014-02-10

    To demonstrate the feasibility of studying the epoch of massive galaxy cluster formation in a more systematic manner using current and future galaxy surveys, we report the discovery of a large sample of protocluster candidates in the 1.62 deg{sup 2} COSMOS/UltraVISTA field traced by optical/infrared selected galaxies using photometric redshifts. By comparing properly smoothed three-dimensional galaxy density maps of the observations and a set of matched simulations incorporating the dominant observational effects (galaxy selection and photometric redshift uncertainties), we first confirm that the observed ~15 comoving Mpc-scale galaxy clustering is consistent with {Lambda}CDM models. Using further the relation between high-z overdensity and the present day cluster mass calibrated in these matched simulations, we found 36 candidate structures at 1.6 < z < 3.1, showing overdensities consistent with the progenitors of M{sub z=0} ~ 10{sup 15} M{sub sun} clusters. Taking into account the significant upward scattering of lower mass structures, the probabilities for the candidates to have at least M{sub z=0} ~ 10{sup 14} M{sub sun} are ~70%. For each structure, about 15%-40% of photometric galaxy candidates are expected to be true protocluster members that will merge into a cluster-scale halo by z = 0. With solely photometric redshifts, we successfully rediscover two spectroscopically confirmed structures in this field, suggesting that our algorithm is robust. This work generates a large sample of uniformly selected protocluster candidates, providing rich targets for spectroscopic follow-up and subsequent studies of cluster formation. Meanwhile, it demonstrates the potential for probing early cluster formation with upcoming redshift surveys such as the Hobby-Eberly Telescope Dark Energy Experiment and the Subaru Prime Focus Spectrograph survey.

  15. ALUMINUM REMOVAL FROM HANFORD WASTE BY LITHIUM HYDROTALCITE PRECIPITATION - LABORATORY SCALE VALIDATION ON WASTE SIMULANTS TEST REPORT

    SciTech Connect (OSTI)

    SAMS T; HAGERTY K

    2011-01-27

    To reduce the need for additional sodium hydroxide and to ease processing of aluminum-bearing sludge, the lithium hydrotalcite (LiHT) process has been invented by AREVA and demonstrated on a laboratory scale to remove alumina and regenerate/recycle sodium hydroxide prior to processing in the WTP. The method uses lithium hydroxide (LiOH) to precipitate sodium aluminate (NaAl(OH){sub 4}) as lithium hydrotalcite (Li{sub 2}CO{sub 3}.4Al(OH){sub 3}.3H{sub 2}O) while generating sodium hydroxide (NaOH). In addition, phosphate substitutes in the reaction to a high degree, also forming a filterable solid. The sodium hydroxide enriched leachate is depleted in aluminum and phosphate, and is recycled to double-shell tanks (DSTs) to leach aluminum-bearing sludges. This method eliminates importing sodium hydroxide to leach alumina sludge and eliminates a large fraction of the total sludge mass to be treated by the WTP. Plugging of process equipment is reduced by removal of both aluminum and phosphate in the tank wastes. Laboratory tests were conducted to verify the efficacy of the process and confirm the results of previous tests. These tests used both single-shell tank (SST) and DST simulants.

  16. HIGH-TEMPERATURE ELECTROLYSIS FOR LARGE-SCALE HYDROGEN AND SYNGAS PRODUCTION FROM NUCLEAR ENERGY - SYSTEM SIMULATION AND ECONOMICS

    SciTech Connect (OSTI)

    J. E. O'Brien; M. G. McKellar; E. A. Harvego; C. M. Stoots

    2009-05-01

    A research and development program is under way at the Idaho National Laboratory (INL) to assess the technological and scale-up issues associated with the implementation of solid-oxide electrolysis cell technology for efficient high-temperature hydrogen production from steam. This work is supported by the US Department of Energy, Office of Nuclear Energy, under the Nuclear Hydrogen Initiative. This paper will provide an overview of large-scale system modeling results and economic analyses that have been completed to date. System analysis results have been obtained using the commercial code UniSim, augmented with a custom high-temperature electrolyzer module. Economic analysis results were based on the DOE H2A analysis methodology. The process flow diagrams for the system simulations include an advanced nuclear reactor as a source of high-temperature process heat, a power cycle and a coupled steam electrolysis loop. Several reactor types and power cycles have been considered, over a range of reactor outlet temperatures. Pure steam electrolysis for hydrogen production as well as coelectrolysis for syngas production from steam/carbon dioxide mixtures have both been considered. In addition, the feasibility of coupling the high-temperature electrolysis process to biomass and coal-based synthetic fuels production has been considered. These simulations demonstrate that the addition of supplementary nuclear hydrogen to synthetic fuels production from any carbon source minimizes emissions of carbon dioxide during the production process.

  17. Scaling Analysis Techniques to Establish Experimental Infrastructure for Component, Subsystem, and Integrated System Testing

    SciTech Connect (OSTI)

    Sabharwall, Piyush; O'Brien, James E.; McKellar, Michael G.; Housley, Gregory K.; Bragg-Sitton, Shannon M.

    2015-03-01

    Hybrid energy system research has the potential to expand the application for nuclear reactor technology beyond electricity. The purpose of this research is to reduce both technical and economic risks associated with energy systems of the future. Nuclear hybrid energy systems (NHES) mitigate the variability of renewable energy sources, provide opportunities to produce revenue from different product streams, and avoid capital inefficiencies by matching electrical output to demand by using excess generation capacity for other purposes when it is available. An essential step in the commercialization and deployment of this advanced technology is scaled testing to demonstrate integrated dynamic performance of advanced systems and components when risks cannot be mitigated adequately by analysis or simulation. Further testing in a prototypical environment is needed for validation and higher confidence. This research supports the development of advanced nuclear reactor technology and NHES, and their adaptation to commercial industrial applications that will potentially advance U.S. energy security, economy, and reliability and further reduce carbon emissions. Experimental infrastructure development for testing and feasibility studies of coupled systems can similarly support other projects having similar developmental needs and can generate data required for validation of models in thermal energy storage and transport, energy, and conversion process development. Experiments performed in the Systems Integration Laboratory will acquire performance data, identify scalability issues, and quantify technology gaps and needs for various hybrid or other energy systems. This report discusses detailed scaling (component and integrated system) and heat transfer figures of merit that will establish the experimental infrastructure for component, subsystem, and integrated system testing to advance the technology readiness of components and systems to the level required for commercial application and demonstration under NHES.

  18. Implementation of a Biaxial Resonant Fatigue Test Method on a Large Wind Turbine Blade

    SciTech Connect (OSTI)

    Snowberg, D.; Dana, S.; Hughes, S.; Berling, P.

    2014-09-01

    A biaxial resonant test method was utilized to simultaneously fatigue test a wind turbine blade in the flap and edge (lead-lag) direction. Biaxial resonant blade fatigue testing is an accelerated life test method utilizing oscillating masses on the blade; each mass is independently oscillated at the respective flap and edge blade resonant frequency. The flap and edge resonant frequency were not controlled, nor were they constant for this demonstrated test method. This biaxial resonant test method presented surmountable challenges in test setup simulation, control and data processing. Biaxial resonant testing has the potential to complete test projects faster than single-axis testing. The load modulation during a biaxial resonant test may necessitate periodic load application above targets or higher applied test cycles.

  19. Using Solr/Lucene for Large-Scale Metagenomics Data Retrieval and Analysis (MICW - Metagenomics Informatics Challenges Workshop: 10K Genomes at a Time)

    ScienceCinema (OSTI)

    Goll, Johannes [JCVI]

    2013-01-22

    JCVI's Johannes Goll on "Using Solr/Lucene for Large-Scale Metagenomics Data Retrieval and Analysis" at the Metagenomics Informatics Challenges Workshop held at the DOE JGI on October 12-13, 2011.

  20. Scales

    ScienceCinema (OSTI)

    Murray Gibson

    2010-01-08

    Musical scales involve notes that, sounded simultaneously (chords), sound good together. The result is the left brain meeting the right brain: a Pythagorean interval of overlapping notes. This synergy would suggest less difference between the working of the right brain and the left brain than common wisdom would dictate. The pleasing sound of harmony comes when two notes share a common harmonic, meaning that their frequencies are in simple integer ratios, such as 3/2 (G/C) or 5/4 (E/C).
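
    The simple integer ratios mentioned above translate directly into frequencies. The Python lines below are only an illustration; the 261.63 Hz reference for middle C is a standard pitch, not a value from the talk.

      MIDDLE_C_HZ = 261.63  # standard reference pitch for middle C

      # Just-intonation ratios cited in the abstract: E/C = 5/4, G/C = 3/2
      for note, ratio in {"C": 1.0, "E": 5.0 / 4.0, "G": 3.0 / 2.0}.items():
          print(f"{note}: {MIDDLE_C_HZ * ratio:.2f} Hz")
      # C: 261.63 Hz, E: 327.04 Hz, G: 392.45 Hz -- notes in these ratios share harmonics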

  1. POC-SCALE TESTING OF OIL AGGLOMERATION TECHNIQUES AND EQUIPMENT FOR FINE COAL PROCESSING

    SciTech Connect (OSTI)

    1998-01-01

    This report covers the technical progress achieved from October 1, 1997 to December 31, 1997 on the POC-Scale Testing of Oil Agglomeration Techniques and Equipment for Fine Coal Processing project. Experimental test procedures and the results related to the processing of coal fines originating from process streams generated at the Shoal Creek Mine preparation plant, owned and operated by the Drummond Company Inc. of Alabama, are described. Two samples of coal fines, namely Cyclone Overflow and Pond Fines, were investigated. The batch test results showed that by applying the Aglofloat technology a significant ash removal might be achieved at a very high combustible matter recovery: for the Cyclone Overflow sample, the ash reduction was in the range of 50 to 55% at a combustible matter recovery of about 98%; for the Pond Fines sample, the ash reduction was up to 48% at a combustible matter recovery of up to 85%. Additional tests were carried out with the Alberta origin Luscar Mine coal, which will be used for the parametric studies of agglomeration equipment at the 250 kg/h pilot plant. The Luscar coal is very similar to the Mary Lee Coal Group (processed at the Shoal Creek Mine preparation plant) in terms of rank and chemical composition.

  2. Recent developments in large-scale finite-element Lagrangian hydrocode technology. [DYNA2D/DYNA3D computer codes]

    SciTech Connect (OSTI)

    Goudreau, G.L.; Hallquist, J.O.

    1981-10-01

    The state of Lagrangian hydrocodes for computing the large-deformation dynamic response of inelastic continua is reviewed in the context of engineering computation at the Lawrence Livermore National Laboratory, USA, and the DYNA2D/DYNA3D finite element codes. The emphasis is on efficiency and computational cost, favoring the simplest elements with explicit time integration: the two-dimensional four-node quadrilateral and the three-dimensional hexahedron with one-point quadrature are advocated as superior to other, more expensive choices. Important auxiliary capabilities are a cheap but effective hourglass control, slidelines/planes with void opening/closure, and rezoning. Strain measures and material formulation are treated as a homogeneous stress-point problem, and a flexible material subroutine interface admits both incremental and total strain formulations, dependent on internal energy or an arbitrary set of other internal variables. Vectorization on Class VI computers such as the CRAY-1 is a simple exercise for optimally organized primitive element formulations. Some examples of large-scale computation are illustrated, including continuous-tone graphic representation.

  3. Large Scale DD Simulation Results for Crystal Plasticity Parameters in Fe-Cr And Fe-Ni Systems

    SciTech Connect (OSTI)

    Zbib, Hussein M.; Li, Dongsheng; Sun, Xin; Khaleel, Mohammad A.

    2012-04-30

    The development of a viable nuclear energy source depends on ensuring structural materials integrity. Structural materials in nuclear reactors will operate in harsh radiation conditions coupled with high levels of hydrogen and helium production, as well as formation of a high density of point defects and defect clusters, and thus will experience severe degradation of mechanical properties. Therefore, the main objective of this work is to develop a capability that predicts aging behavior and in-service lifetime of nuclear reactor components and thus provides an instrumental tool for tailoring materials design and development for application in future nuclear reactor technologies. Toward this end goal, the long-term effort is to develop a physically based multiscale modeling hierarchy, validated and verified, to address outstanding questions regarding the effects of irradiation on materials microstructure and mechanical properties during extended service in fission and fusion environments. The focus of the current investigation is on modern steels for use in nuclear reactors, including high-strength ferritic-martensitic steels (Fe-Cr-Ni alloys). The effort is to develop a predictive capability for the influence of irradiation on mechanical behavior. Irradiation hardening is related to structural information crossing different length scales, such as composition, dislocation, and crystal orientation distribution. To predict effective hardening, the influencing factors along different length scales should be considered. Therefore, a hierarchical upscaling methodology is implemented in this work in which relevant information is passed between models at three scales, namely, from molecular dynamics to dislocation dynamics to dislocation-based crystal plasticity. Molecular dynamics (MD) was used to predict the dislocation mobility in body centered cubic (bcc) Fe and its Ni and Cr alloys. The results are then passed on to dislocation dynamics to predict the critical resolved shear stress (CRSS) from the evolution of local dislocations and defects. In this report the focus is on the results obtained from large-scale dislocation dynamics simulations. The effects of defect density and material structure were investigated, and evolution laws were obtained. These results will form the basis for the development of evolution and hardening laws for a dislocation-based crystal plasticity framework. The hierarchical upscaling method being developed in this project can provide a guidance tool to evaluate performance of structural materials for next-generation nuclear reactors. Combined with other tools developed in the Nuclear Energy Advanced Modeling and Simulation (NEAMS) program, the models developed will have more impact in improving the reliability of current reactors and the affordability of new reactors.
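
    One common way dislocation-density information of this kind enters a crystal-plasticity hardening law is through a Taylor-type relation; the expression below is a standard textbook form, shown only to illustrate the upscaling link described, not the specific evolution law developed in this work:

      \tau_c = \tau_0 + \alpha\,\mu\,b\,\sqrt{\rho}

    where tau_c is the critical resolved shear stress, tau_0 a lattice friction stress, alpha a dimensionless constant of order 0.1 to 0.5, mu the shear modulus, b the magnitude of the Burgers vector, and rho the dislocation density.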

  4. Full-Scale Testing of a Mercury Oxidation Catalyst Upstream of a Wet FGD System

    SciTech Connect (OSTI)

    Gary Blythe; Jennifer Paradis

    2010-06-30

    This document presents and discusses results from Cooperative Agreement DE-FC26-06NT42778, 'Full-scale Testing of a Mercury Oxidation Catalyst Upstream of a Wet FGD System,' which was conducted over the time-period July 24, 2006 through June 30, 2010. The objective of the project was to demonstrate at full scale the use of solid honeycomb catalysts to promote the oxidation of elemental mercury in pulverized-coal-fired flue gas. Oxidized mercury is removed downstream in wet flue gas desulfurization (FGD) absorbers and collected with the byproducts from the FGD system. The project was co-funded by EPRI, the Lower Colorado River Authority (LCRA), who also provided the host site, Great River Energy, Johnson Matthey, Southern Company, Salt River Project (SRP), the Tennessee Valley Authority (TVA), NRG Energy, Ontario Power and Westar. URS Group was the prime contractor and also provided cofunding. The scope of this project included installing and testing a gold-based catalyst upstream of one full-scale wet FGD absorber module (about 200-MW scale) at LCRA's Fayette Power Project (FPP) Unit 3, which fires Powder River Basin coal. Installation of the catalyst involved modifying the ductwork upstream of one of three wet FGD absorbers on Unit 3, Absorber C. The FGD system uses limestone reagent, operates with forced sulfite oxidation, and normally runs with two FGD modules in service and one spare. The full-scale catalyst test was planned for 24 months to provide catalyst life data. Over the test period, data were collected on catalyst pressure drop, elemental mercury oxidation across the catalyst module, and mercury capture by the downstream wet FGD absorber. The demonstration period began on May 6, 2008 with plans for the catalyst to remain in service until May 5, 2010. However, because of continual increases in pressure drop across the catalyst and concerns that further increases would adversely affect Unit 3 operations, LCRA decided to end the demonstration early, during a planned unit outage. On October 2, 2009, Unit 3 was taken out of service for a fall outage and the catalyst upstream of Absorber C was removed. This ended the demonstration after approximately 17 months of the planned 24 months of operation. This report discusses reasons for the pressure drop increase and potential measures to mitigate such problems in any future application of this technology. Mercury oxidation and capture measurements were made on Unit 3 four times during the 17-month demonstration. Measurements were performed across the catalyst and Absorber C and 'baseline' measurements were performed across Absorber A or B, which did not have a catalyst upstream. Results are presented in the report from all four sets of measurements during the demonstration period. These results include elemental mercury oxidation across the catalyst, mercury capture across Absorber C downstream of the catalyst, baseline mercury capture across Absorber A or B, and mercury re-emissions across both absorbers in service. Also presented in the report are estimates of the average mercury control performance of the oxidation catalyst technology over the 17-month demonstration period and the resulting mercury control costs.

  5. 100 Area soil washing: Bench scale tests on 116-F-4 pluto crib soil

    SciTech Connect (OSTI)

    Field, J.G.

    1994-06-10

    The Pacific Northwest Laboratory conducted a bench-scale treatability study on a Pluto Crib soil sample from the 100 Area of the Hanford Site. The objective of this study was to evaluate the use of physical separation (wet sieving), treatment processes (attrition scrubbing, and autogenous surface grinding), and chemical extraction methods as a means of separating radioactively contaminated soil fractions from uncontaminated soil fractions. The soil washing treatability study was conducted on a soil sample from the 116-F-4 Pluto Crib that had been dug up as part of an excavation treatability study. Trace element analyses of this soil showed no elevated concentrations above typically uncontaminated soil background levels. Data on the distribution of radionuclides in various size fractions indicated that the soil-washing tests should be focused on the gravel and sand fractions of the 116-F-4 soil. The radionuclide data also showed that {sup 137}Cs was the only contaminant in this soil that exceeded the test performance goal (TPG). Therefore, the effectiveness of subsequent soil-washing tests for 116-F-4 soil was evaluated on the basis of activity attenuation of {sup 137}Cs in the gravel- and sand-size fractions.

  6. Organo-sulfur molecules enable iron-based battery electrodes to meet the challenges of large-scale electrical energy storage

    SciTech Connect (OSTI)

    Yang, B; Malkhandi, S; Manohar, AK; Prakash, GKS; Narayanan, SR

    2014-07-03

    Rechargeable iron-air and nickel-iron batteries are attractive as sustainable and inexpensive solutions for large-scale electrical energy storage because of the global abundance and eco-friendliness of iron, and the robustness of iron-based batteries to extended cycling. Despite these advantages, the commercial use of iron-based batteries has been limited by their low charging efficiency. This limitation arises from the iron electrodes evolving hydrogen extensively during charging. The total suppression of hydrogen evolution has been a significant challenge. We have found that organo-sulfur compounds with various structural motifs (linear and cyclic thiols, dithiols, thioethers and aromatic thiols) when added in milli-molar concentration to the aqueous alkaline electrolyte, reduce the hydrogen evolution rate by 90%. These organo-sulfur compounds form strongly adsorbed layers on the iron electrode and block the electrochemical process of hydrogen evolution. The charge-transfer resistance and double-layer capacitance of the iron/electrolyte interface confirm that the extent of suppression of hydrogen evolution depends on the degree of surface coverage and the molecular structure of the organo-sulfur compound. An unanticipated electrochemical effect of the adsorption of organo-sulfur molecules is "de-passivation" that allows the iron electrode to be discharged at high current values. The strongly adsorbed organo-sulfur compounds were also found to resist electro-oxidation even at the positive electrode potentials at which oxygen evolution can occur. Through testing on practical rechargeable battery electrodes we have verified the substantial improvements to the efficiency during charging and the increased capability to discharge at high rates. We expect these performance advances to enable the design of efficient, inexpensive and eco-friendly iron-based batteries for large-scale electrical energy storage.

  7. A methodology for understanding the impacts of large-scale penetration of micro-combined heat and power

    SciTech Connect (OSTI)

    Tapia-Ahumada, K.; Pérez-Arriaga, I. J.; Moniz, Ernest J.

    2013-10-01

    Co-generation at small kW-e scale has been stimulated in recent years by governments and energy regulators as one way of increasing energy efficiency and reducing CO2 emissions. If widespread adoption is realized, understanding their effects from a system's point of view is crucial to assessing the contributions of this technology. Based on a methodology that uses long-term capacity planning expansion, this paper explores some of the implications for an electric power system of having a large number of micro-CHPs. Results show that fuel-cell-based micro-CHPs have the best and most consistent performance for different residential demands from the customer and system perspectives. As the penetration increases to significant levels, gas-based technologies - particularly combined cycle units - are displaced in capacity and production, which impacts the operation of the electric system during summer peak hours. Other results suggest that the tariff design impacts the economic efficiency of the system and the operation of micro-CHPs under a price-based strategy. Finally, policies aimed at micro-CHPs should consider the suitability of the technology (in size and heat-to-power ratio) to meet individual demands, the operational complexities of a large penetration, and the adequacy of the economic signals to incentivize an efficient and sustainable operation. Highlights: Capacity displacements and daily operation of an electric power system are explored; Benefits depend on energy mix, prices, and micro-CHP technology and control scheme; Benefits are observed mostly in winter when micro-CHP heat and power are fully used; Micro-CHPs mostly displace installed capacity from natural gas combined cycle units; and, Tariff design impacts economic efficiency of the system and operation of micro-CHPs.

  8. No corrosion caused by coal chlorine found in AFBC pilot scale tests

    SciTech Connect (OSTI)

    Ho, K.; Pan, W.P.; Riley, J.T.; Liu, K.; Smith, S.

    2000-07-01

    Measurements of deposition and corrosion were made in the freeboard of a 3 m inner diameter pilot scale atmospheric fluidized-bed combustor (AFBC) during seven 1,000-hour tests using coals with chlorine (Cl) contents ranging from 0.026% up to 0.47% and sulfur contents ranging from 0.897% to 4.4%. Uncooled coupons of alloys 304, 309, 347 and a cooled tube of A210C medium carbon steel were exposed to the hot flue gases to investigate the effects of different coal compositions on deposition and corrosion behavior, if any. The uncooled coupons were installed at the top of the freeboard to simulate the superheater tube conditions (1,020--1,100 F surface temperature), while the temperature of the cooled A210C test tube was controlled to match the conditions of the evaporator tubes. Specimens were removed for examination after 250, 500, 750, and 1,000 hours of exposure and analyzed for deposit formation and corrosion. No chlorine was found in the corrosion scale or on the metal surfaces after any of the tests. High sulfur contents were found in the outer parts of the deposits, and appeared to be associated with calcium and magnesium, suggesting that the fly ash may react further after being deposited on the surface of the metal. It was concluded that the limestone bed in the AFBC not only can capture the sulfur but also can effectively capture chlorine. This effect helps bring the Cl in the AFBC flue gas down to a level of <50 ppm, which is significantly lower than the 300 to 400 ppm expected from combustion of the coal in the absence of limestone. This reduction in chlorine species in the gas phase has possible implications for decreased corrosion problems not only in the freeboard, but also in the cold end of the boiler. No evidence was found in these tests that metal wastage or corrosion was accelerated, either directly or indirectly, by chlorine in the coal.

  9. Sensitivity analysis for joint inversion of ground-penetrating radar and thermal-hydrological data from a large-scale underground heater test

    SciTech Connect (OSTI)

    Kowalsky, M.B.; Birkholzer, J.; Peterson, J.; Finsterle, S.; Mukhopadhyay, S.; Tsang, Y.T.

    2007-06-25

    We describe a joint inversion approach that combines geophysical and thermal-hydrological data for the estimation of (1) thermal-hydrological parameters (such as permeability, porosity, thermal conductivity, and parameters of the capillary pressure and relative permeability functions) that are necessary for predicting the flow of fluids and heat in fractured porous media, and (2) parameters of the petrophysical function that relates water saturation, porosity and temperature to the dielectric constant. The approach incorporates the coupled simulation of nonisothermal multiphase fluid flow and ground-penetrating radar (GPR) travel times within an optimization framework. We discuss application of the approach to a large-scale in situ heater test which was conducted at Yucca Mountain, Nevada, to better understand the coupled thermal, hydrological, mechanical, and chemical processes that may occur in the fractured rock mass around a geologic repository for high-level radioactive waste. We provide a description of the time-lapse geophysical data (i.e., cross-borehole ground-penetrating radar) and thermal-hydrological data (i.e., temperature and water content data) collected before and during the four-year heating phase of the test, and analyze the sensitivity of the most relevant thermal-hydrological and petrophysical parameters to the available data. To demonstrate feasibility of the approach, and as a first step toward comprehensive inversion of the heater test data, we apply the approach to estimate one parameter, the permeability of the rock matrix.
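
    The abstract does not state which petrophysical function was used; as an illustration only, the Python sketch below uses the widely cited complex refractive index model (CRIM) as a stand-in, mapping porosity and water saturation to a bulk dielectric constant and a GPR velocity. All numerical values are assumed.

      C_LIGHT_M_PER_NS = 0.2998  # speed of light in m/ns

      def crim_dielectric(porosity, saturation, eps_water=81.0, eps_grain=5.0, eps_air=1.0):
          """Bulk dielectric constant from the CRIM volumetric mixing rule."""
          sqrt_eps = (porosity * saturation * eps_water ** 0.5
                      + porosity * (1.0 - saturation) * eps_air ** 0.5
                      + (1.0 - porosity) * eps_grain ** 0.5)
          return sqrt_eps ** 2

      def gpr_velocity_m_per_ns(eps_bulk):
          """Radar wave velocity in a low-loss medium."""
          return C_LIGHT_M_PER_NS / eps_bulk ** 0.5

      # Assumed illustrative values for a partially saturated tuff
      eps = crim_dielectric(porosity=0.11, saturation=0.6)
      print(f"eps_bulk = {eps:.2f}, v = {gpr_velocity_m_per_ns(eps):.3f} m/ns")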

  10. Development and Testing of Industrial Scale Coal Fired Combustion System, Phase 3

    SciTech Connect (OSTI)

    Bert Zauderer

    1998-09-30

    Coal Tech Corp's mission is to develop, license & sell innovative, lowest cost, solid fuel fired power systems & total emission control processes using proprietary and patented technology for domestic and international markets. The present project 'DEVELOPMENT & TESTING OF INDUSTRIAL SCALE, COAL FIRED COMBUSTION SYSTEM, PHASE 3' on DOE Contract DE-AC22-91PC91162 was a key element in achieving this objective. The project consisted of five tasks that were divided into three phases. The first phase, 'Optimization of First Generation 20 MMBtu/hr Air-Cooled Slagging Coal Tech Combustor', consisted of three tasks, which are detailed in Appendix 'A' of this report. They were implemented in 1992 and 1993 at the first generation, 20 MMBtu/hour, combustor-boiler test site in Williamsport, PA. It consisted of substantial combustor modifications and coal-fired tests designed to improve the combustor's wall cooling, slag and ash management, automating of its operation, and correcting severe deficiencies in the coal feeding to the combustor. The need for these changes was indicated during the prior 900-hour test effort on this combustor that was conducted as part of the DOE Clean Coal Program. A combination of combustor changes, auxiliary equipment changes, sophisticated multi-dimensional combustion analysis, computer controlled automation, and series of single and double day shift tests totaling about 300 hours, either resolved these operational issues or indicated that further corrective changes were needed in the combustor design. The key result from both analyses and tests was that the combustor must be substantially lengthened to maximize combustion efficiency and sharply increase slag retention in the combustor. A measure of the success of these modifications was realized in the third phase of this project, consisting of task 5 entitled: 'Site Demonstration with the Second Generation 20 MMBtu/hr Air-Cooled Slagging Coal Tech Combustor'. The details of the task 5 effort are contained in Appendix 'C'. It was implemented between 1994 and 1998 after the entire 20 MMBtu/hr combustor-boiler facility was relocated to Philadelphia, PA in 1994. A new test facility was designed and installed. A substantially longer combustor was fabricated. Although not in the project plan or cost plan, an entire steam turbine-electric power generating plant was designed and the appropriate new and used equipment for continuous operation was specified. Insufficient funds and the lack of a customer for any electric power that the test facility could have generated prevented the installation of the power generating equipment needed for continuous operation. All other task 5 project measures were met and exceeded. 107 days of testing in task 5, which exceeded the 63 days (about 500 hours) in the test plan, were implemented. Compared to the first generation 20 MMBtu/hr combustor in Williamsport, the 2nd generation combustor has a much higher combustion efficiency, the retention of slag inside the combustor doubled to about 75% of the coal ash, and the ash carryover into the boiler, a major problem in the Williamsport combustor was essentially eliminated. In addition, the project goals for coal-fired emissions were exceeded in task 5. SO{sub 2} was reduced by 80% to 0.2 lb/MMBtu in a combination of reagent injection in the combustion and post-combustion zones. NO{sub x} was reduced by 93% to 0.07 lb/MMBtu in a combination of staged combustion in the combustor and post-combustion reagent injection. 
A baghouse was installed that was rated to 0.03 lb/MMBtu stack particle emissions. The initial particle emission test by EPA Method 5 indicated emissions substantially higher than the clear emission plume suggested. These emissions were attributed to steel particles released by wall corrosion in the baghouse, correction of which had no effect on emissions.

  11. Intermediate Scale Laboratory Testing to Understand Mechanisms of Capillary and Dissolution Trapping during Injection and Post-Injection of CO2 in Heterogeneous Geological Formations

    SciTech Connect (OSTI)

    Illangasekare, Tissa; Trevisan, Luca; Agartan, Elif; Mori, Hiroko; Vargas-Johnson, Javier; Gonzalez-Nicolas, Ana; Cihan, Abdullah; Birkholzer, Jens; Zhou, Quanlin

    2015-03-31

    Carbon Capture and Storage (CCS) represents a technology aimed at reducing the atmospheric loading of CO2 from power plants and heavy industries by injecting it into deep geological formations, such as saline aquifers. A number of trapping mechanisms contribute to effective and secure storage of the injected CO2 in supercritical fluid phase (scCO2) in the formation over the long term. The primary trapping mechanisms are structural, residual, dissolution and mineralization. Knowledge gaps exist on how the heterogeneity of the formation, manifested at all scales from the pore scale to the site scale, affects trapping and the parameterization of contributing mechanisms in models. An experimental and modeling study was conducted to fill these knowledge gaps. Experimental investigation of fundamental processes and mechanisms in field settings is not possible, as it is not feasible to fully characterize the geologic heterogeneity at all relevant scales or to gather data on migration, trapping and dissolution of scCO2. Laboratory experiments using scCO2 under ambient conditions are also not feasible, as it is technically challenging and cost prohibitive to develop large, two- or three-dimensional test systems with controlled high pressures to keep the scCO2 as a liquid. Hence, an innovative approach was developed and used in which surrogate fluids took the place of scCO2 and formation brine in multi-scale synthetic aquifer test systems ranging from centimeter to meter scale. New modeling algorithms were developed to capture the processes controlled by the formation heterogeneity, and they were tested using the data from the laboratory test systems. The results and findings are expected to contribute toward better conceptual models, future improvements to DOE numerical codes, more accurate assessment of storage capacities, and optimized placement strategies. This report presents the experimental and modeling methods and research results.

  12. SDSS-III Baryon Oscillation Spectroscopic Survey data release 12: Galaxy target selection and large-scale structure catalogues

    SciTech Connect (OSTI)

    Reid, Beth; Ho, Shirley; Padmanabhan, Nikhil; Percival, Will J.; Tinker, Jeremy; Tojeiro, Rita; White, Martin; Eisenstein, Daniel J.; Maraston, Claudia; Ross, Ashley J.; Sanchez, Ariel G.; Schlegel, David; Sheldon, Erin; Strauss, Michael A.; Thomas, Daniel; Wake, David; Beutler, Florian; Bizyaev, Dmitry; Bolton, Adam S.; Brownstein, Joel R.; Chuang, Chia-Hsun; Dawson, Kyle; Harding, Paul; Kitaura, Francisco-Shu; Leauthaud, Alexie; Masters, Karen; McBride, Cameron K.; More, Surhud; Olmstead, Matthew D.; Oravetz, Daniel; Nuza, Sebastian E.; Pan, Kaike; Parejko, John; Pforr, Janine; Prada, Francisco; Rodriguez-Torres, Sergio; Salazar-Albornoz, Salvador; Samushia, Lado; Schneider, Donald P.; Scoccola, Claudia G.; Simmons, Audrey; Vargas-Magana, Mariana

    2015-11-17

    The Baryon Oscillation Spectroscopic Survey (BOSS), part of the Sloan Digital Sky Survey (SDSS) III project, has provided the largest survey of galaxy redshifts available to date, in terms of both the number of galaxy redshifts measured by a single survey, and the effective cosmological volume covered. Key to analysing the clustering of these data to provide cosmological measurements is understanding the detailed properties of this sample. Potential issues include variations in the target catalogue caused by changes either in the targeting algorithm or properties of the data used, the pattern of spectroscopic observations, the spatial distribution of targets for which redshifts were not obtained, and variations in the target sky density due to observational systematics. We document here the target selection algorithms used to create the galaxy samples that comprise BOSS. We also present the algorithms used to create large-scale structure catalogues for the final Data Release (DR12) samples and the associated random catalogues that quantify the survey mask. The algorithms are an evolution of those used by the BOSS team to construct catalogues from earlier data, and have been designed to accurately quantify the galaxy sample. Furthermore, the code used, designated mksample, is released with this paper.

  13. Large-Scale Compute-Intensive Analysis via a Combined In-situ and Co-scheduling Workflow Approach

    SciTech Connect (OSTI)

    Messer, Bronson; Sewell, Christopher; Heitmann, Katrin; Finkel, Dr. Hal J; Fasel, Patricia; Zagaris, George; Pope, Adrian; Habib, Salman; Parete-Koon, Suzanne T

    2015-01-01

    Large-scale simulations can produce tens of terabytes of data per analysis cycle, complicating and limiting the efficiency of workflows. Traditionally, outputs are stored on the file system and analyzed in post-processing. With the rapidly increasing size and complexity of simulations, this approach faces an uncertain future. Trending techniques consist of performing the analysis in situ, utilizing the same resources as the simulation, and/or off-loading subsets of the data to a compute-intensive analysis system. We introduce an analysis framework developed for HACC, a cosmological N-body code, that uses both in situ and co-scheduling approaches for handling Petabyte-size outputs. An initial in situ step is used to reduce the amount of data to be analyzed, and to separate out the data-intensive tasks handled off-line. The analysis routines are implemented using the PISTON/VTK-m framework, allowing a single implementation of an algorithm that simultaneously targets a variety of GPU, multi-core, and many-core architectures.

  14. Large Scale Duty Cycle (LSDC) Project: Tractive Energy Analysis Methodology and Results from Long-Haul Truck Drive Cycle Evaluations

    SciTech Connect (OSTI)

    LaClair, Tim J

    2011-05-01

    This report addresses the approach that will be used in the Large Scale Duty Cycle (LSDC) project to evaluate the fuel savings potential of various truck efficiency technologies. The methods and equations used for performing the tractive energy evaluations are presented and the calculation approach is described. Several representative results for individual duty cycle segments are presented to demonstrate the approach and the significance of this analysis for the project. The report is divided into four sections, including an initial brief overview of the LSDC project and its current status. In the second section of the report, the concepts that form the basis of the analysis are presented through a discussion of basic principles pertaining to tractive energy and the role of tractive energy in relation to other losses on the vehicle. In the third section, the approach used for the analysis is formalized and the equations used in the analysis are presented. In the fourth section, results from the analysis for a set of individual duty cycle measurements are presented and different types of drive cycles are discussed relative to the fuel savings potential that specific technologies could bring if these drive cycles were representative of the use of a given vehicle or trucking application. Additionally, the calculation of vehicle mass from measured torque and speed data is presented and the accuracy of the approach is demonstrated.
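
    The tractive-energy idea described above amounts to integrating only the positive tractive power over a drive cycle. The Python sketch below uses a generic road-load formulation with assumed, illustrative heavy-truck coefficients; it is not the project's actual equations or data.

      def positive_tractive_energy_mj(speeds_m_s, dt_s, mass_kg, grade=0.0,
                                      c_rr=0.0065, cd_a_m2=6.0, rho_air=1.2):
          """Integrate positive tractive power over a speed trace; returns MJ.

          Road-load terms: inertia, rolling resistance, aerodynamic drag, grade.
          All coefficients here are assumed, illustrative values.
          """
          g = 9.81
          energy_j = 0.0
          for i in range(1, len(speeds_m_s)):
              v = speeds_m_s[i]
              accel = (speeds_m_s[i] - speeds_m_s[i - 1]) / dt_s
              force = (mass_kg * accel
                       + c_rr * mass_kg * g
                       + 0.5 * rho_air * cd_a_m2 * v ** 2
                       + mass_kg * g * grade)
              power = force * v
              if power > 0.0:          # only tractive (propulsive) power counts
                  energy_j += power * dt_s
          return energy_j / 1.0e6

      # Hypothetical 60-second cruise segment at 25 m/s for a 36,000 kg tractor-trailer
      trace = [25.0] * 61
      print(f"{positive_tractive_energy_mj(trace, 1.0, 36000.0):.1f} MJ")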

  15. Using an Energy Performance Based Design-Build Process to Procure a Large Scale Low-Energy Building: Preprint

    SciTech Connect (OSTI)

    Pless, S.; Torcellini, P.; Shelton, D.

    2011-05-01

    This paper will review a procurement, acquisition, and contract process of a large-scale replicable net zero energy (ZEB) office building. The owners developed and implemented an energy performance based design-build process to procure a 220,000 ft2 office building with contractual requirements to meet demand side energy and LEED goals. We will outline the key procurement steps needed to ensure achievement of our energy efficiency and ZEB goals. The development of a clear and comprehensive Request for Proposals (RFP) that includes specific and measurable energy use intensity goals is critical to ensure energy goals are met in a cost effective manner. The RFP includes a contractual requirement to meet an absolute demand side energy use requirement of 25 kBtu/ft2, with specific calculation methods on what loads are included, how to normalize the energy goal based on increased space efficiency and data center allocation, specific plug loads and schedules, and calculation details on how to account for energy used from the campus hot and chilled water supply. Additional advantages of integrating energy requirements into this procurement process include leveraging the voluntary incentive program, which is a financial incentive based on how well the owner feels the design-build team is meeting the RFP goals.
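
    The contractual intensity target converts to an absolute annual budget with a one-line calculation; the figure below is the nominal budget before the space-efficiency and data-center normalizations mentioned in the abstract, and the unit conversion is standard.

      FLOOR_AREA_FT2 = 220_000      # building area from the abstract
      TARGET_KBTU_PER_FT2 = 25      # demand-side energy use intensity target
      KBTU_TO_KWH = 0.29307         # 1 kBtu = 0.29307 kWh

      annual_kbtu = FLOOR_AREA_FT2 * TARGET_KBTU_PER_FT2
      print(f"{annual_kbtu / 1000:.0f} MMBtu/yr (~{annual_kbtu * KBTU_TO_KWH / 1e6:.2f} GWh/yr)")
      # 5500 MMBtu/yr, roughly 1.61 GWh/yr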

  16. SDSS-III Baryon Oscillation Spectroscopic Survey data release 12: Galaxy target selection and large-scale structure catalogues

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Reid, Beth; Ho, Shirley; Padmanabhan, Nikhil; Percival, Will J.; Tinker, Jeremy; Tojeiro, Rita; White, Martin; Eisenstein, Daniel J.; Maraston, Claudia; Ross, Ashley J.; et al

    2015-11-17

    The Baryon Oscillation Spectroscopic Survey (BOSS), part of the Sloan Digital Sky Survey (SDSS) III project, has provided the largest survey of galaxy redshifts available to date, in terms of both the number of galaxy redshifts measured by a single survey, and the effective cosmological volume covered. Key to analysing the clustering of these data to provide cosmological measurements is understanding the detailed properties of this sample. Potential issues include variations in the target catalogue caused by changes either in the targeting algorithm or properties of the data used, the pattern of spectroscopic observations, the spatial distribution of targets for which redshifts were not obtained, and variations in the target sky density due to observational systematics. We document here the target selection algorithms used to create the galaxy samples that comprise BOSS. We also present the algorithms used to create large-scale structure catalogues for the final Data Release (DR12) samples and the associated random catalogues that quantify the survey mask. The algorithms are an evolution of those used by the BOSS team to construct catalogues from earlier data, and have been designed to accurately quantify the galaxy sample. Furthermore, the code used, designated mksample, is released with this paper.

  17. Large-scale purification and crystallization of the endoribonuclease XendoU: troubleshooting with His-tagged proteins

    SciTech Connect (OSTI)

    Renzi, Fabiana; Panetta, Gianna; Vallone, Beatrice; Brunori, Maurizio; Arceci, Massimo; Bozzoni, Irene; Laneve, Pietro; Caffarelli, Elisa

    2006-03-01

    Recombinant His-tagged XendoU, a eukaryotic endoribonuclease, appeared to aggregate in the presence of divalent cations. Monodisperse protein which yielded crystals diffracting to 2.2 Å was obtained by addition of EDTA. XendoU is the first endoribonuclease described in higher eukaryotes as being involved in the endonucleolytic processing of intron-encoded small nucleolar RNAs. It is conserved among eukaryotes and its viral homologue is essential in SARS replication and transcription. The large-scale purification and crystallization of recombinant XendoU are reported. The tendency of the recombinant enzyme to aggregate could be reversed upon the addition of chelating agents (EDTA, imidazole): aggregation is a potential drawback when purifying and crystallizing His-tagged proteins, which are widely used, especially in high-throughput structural studies. Purified monodisperse XendoU crystallized in two different space groups: trigonal P3{sub 1}21, diffracting to low resolution, and monoclinic C2, diffracting to higher resolution.

  18. Bench Scale Development and Testing of a Novel Adsorption Process for Post-Combustion CO₂ Capture

    SciTech Connect (OSTI)

    Jain, Ravi

    2015-09-01

    A physical sorption process to produce dry CO₂ at high purity (>98%) and high recovery (>90%) from the flue gas taken before or after the FGD was demonstrated both in the lab and in the field (one ton per day scale). A CO₂ recovery of over 94% and a CO₂ purity of over 99% were obtained in the field tests. The process has a moisture, SOX, and Hg removal stage followed by a CO₂ adsorption stage. Evaluations based on field testing, process simulation and detailed engineering studies indicate that the process has the potential for more than 40% reduction in the capital and more than 40% reduction in parasitic power for CO₂ capture compared to MEA. The process has the potential to provide CO₂ at a cost (<$40/tonne) and quality (<1 ppm H₂O, <1 ppm SOX, <10 ppm O₂) suitable for EOR applications which can make CO₂ capture profitable even in the absence of climate legislation. The process is applicable to power plants without SOX, Hg and NOX removal equipment.

  19. Determination of Large-Scale Cloud Ice Water Concentration by Combining Surface Radar and Satellite Data in Support of ARM SCM Activities

    SciTech Connect (OSTI)

    Liu, Guosheng

    2013-03-15

    Single-column modeling (SCM) is one of the key elements of Atmospheric Radiation Measurement (ARM) research initiatives for the development and testing of various physical parameterizations to be used in general circulation models (GCMs). The data required for use with an SCM include observed vertical profiles of temperature, water vapor, and condensed water, as well as the large-scale vertical motion and tendencies of temperature, water vapor, and condensed water due to horizontal advection. Surface-based measurements operated at ARM sites and upper-air sounding networks supply most of the required variables for model inputs, but do not provide the horizontal advection term of condensed water. Since surface cloud radar and microwave radiometer observations at ARM sites are single-point measurements, they can provide the amount of condensed water at the location of observation sites, but not a horizontal distribution of condensed water contents. Consequently, observational data for the large-scale advection tendencies of condensed water have not been available to the ARM cloud modeling community based on surface observations alone. This lack of advection data for water condensate could cause large uncertainties in SCM simulations. Additionally, to evaluate GCMs' cloud physical parameterizations, we need to compare GCM results with observed cloud water amounts over a scale that is large enough to be comparable to what a GCM grid represents. To this end, the point measurements at ARM surface sites are again not adequate. Therefore, cloud water observations over a large area are needed. The main goal of this project is to retrieve ice water contents over an area of 10 x 10 deg. surrounding the ARM sites by combining surface and satellite observations. Building on the progress made during previous ARM research, we have conducted retrievals of three-dimensional ice water content by combining surface radar/radiometer and satellite measurements, and have produced 3-D cloud ice water contents in support of cloud modeling activities. The approach of the study is to expand a (surface) point measurement to a (satellite) area measurement. That is, the study takes advantage of the high quality cloud measurements (particularly cloud radar and microwave radiometer measurements) at the ARM sites. We use the cloud ice water characteristics derived from the point measurement to guide/constrain a satellite retrieval algorithm, then use the satellite algorithm to derive the 3-D cloud ice water distributions within a 10° (latitude) x 10° (longitude) area. During the research period, we developed, validated and improved our cloud ice water retrievals, and produced and archived at the ARM website, as a PI product, the 3-D cloud ice water contents derived from combined satellite high-frequency microwave and surface radar observations for the SGP March 2000 IOP and the TWP-ICE 2006 IOP over 10 deg. x 10 deg. areas centered at the ARM SGP central facility and Darwin sites. We also worked on validation of the 3-D ice water product against CloudSat data, on synergy with visible/infrared cloud ice water retrievals for better results at low ice water conditions, and on creating a long-term (several-year) ice water climatology in the 10 x 10 deg. areas of the ARM SGP and TWP sites, which was then compared with GCMs.

  20. SIMULTANEOUS OBSERVATIONS OF A LARGE-SCALE WAVE EVENT IN THE SOLAR ATMOSPHERE: FROM PHOTOSPHERE TO CORONA

    SciTech Connect (OSTI)

    Shen, Yuandeng; Liu, Yu

    2012-06-20

    For the first time, we report a large-scale wave that was observed simultaneously in the photosphere, chromosphere, transition region, and low corona layers of the solar atmosphere. Using the high temporal and high spatial resolution observations taken by the Solar Magnetic Activity Research Telescope at Hida Observatory and the Atmospheric Imaging Assembly (AIA) on board the Solar Dynamics Observatory, we find that the wave evolved synchronously at different heights of the solar atmosphere, and it propagated at a speed of 605 km s{sup -1} and showed a significant deceleration (-424 m s{sup -2}) in the extreme-ultraviolet (EUV) observations. During the initial stage, the wave speed in the EUV observations was 1000 km s{sup -1}, similar to those measured from the AIA 1700 A (967 km s{sup -1}) and 1600 A (893 km s{sup -1}) observations. The wave was reflected by a remote region with open fields, and a slower wave-like feature at a speed of 220 km s{sup -1} was also identified following the primary fast wave. In addition, a type-II radio burst was observed to be associated with the wave. We conclude that this wave should be a fast magnetosonic shock wave, which was first driven by the associated coronal mass ejection and then propagated freely in the corona. As the shock wave propagated, its legs swept the solar surface and thereby resulted in the wave signatures observed in the lower layers of the solar atmosphere. The slower wave-like structure following the primary wave was probably caused by the reconfiguration of the low coronal magnetic fields, as predicted in the field-line stretching model.

  1. Large-scale spatial variability of riverbed temperature gradients in Snake River fall Chinook salmon spawning areas

    SciTech Connect (OSTI)

    Hanrahan, Timothy P.

    2007-02-01

    In the Snake River basin of the Pacific northwestern United States, hydroelectric dam operations are often based on the predicted emergence timing of salmon fry from the riverbed. The spatial variability and complexity of surface water and riverbed temperature gradients result in emergence timing predictions that are likely to have large errors. The objectives of this study were to quantify the thermal heterogeneity between the river and riverbed in fall Chinook salmon spawning areas and to determine the effects of thermal heterogeneity on fall Chinook salmon emergence timing. This study quantified river and riverbed temperatures at 15 fall Chinook salmon spawning sites distributed in two reaches throughout 160 km of the Snake River in Hells Canyon, Idaho, USA, during three different water years. Temperatures were measured during the fall Chinook salmon incubation period with self-contained data loggers placed in the river and at three different depths below the riverbed surface. At all sites temperature increased with depth into the riverbed, including significant differences (p<0.05) in mean water temperature of up to 3.8 C between the river and the riverbed among all the sites. During each of the three water years studied, river and riverbed temperatures varied significantly among all the study sites, among the study sites within each reach, and between sites located in the two reaches. Considerable variability in riverbed temperatures among the sites resulted in fall Chinook salmon emergence timing estimates that varied by as much as 55 days, depending on the source of temperature data used for the estimate. Monitoring of riverbed temperature gradients at a range of spatial scales throughout the Snake River would provide better information for managing hydroelectric dam operations, and would aid in the design and interpretation of future empirical research into the ecological significance of physical riverine processes.
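
    Emergence-timing predictions of this kind are often based on accumulated thermal units (degree-days). The abstract does not give the specific model used, so the Python sketch below, with an assumed 1,000 C-day requirement and constant temperatures, is only an illustration of how a few degrees of river-versus-riverbed difference can shift predicted emergence by weeks.

      def days_to_emergence(mean_daily_temp_c, required_c_days=1000.0):
          """Days of incubation needed to accumulate the assumed thermal-unit total."""
          accumulated, days = 0.0, 0
          while accumulated < required_c_days:
              accumulated += mean_daily_temp_c
              days += 1
          return days

      # Assumed constant temperatures: river water vs. a riverbed 3.8 C warmer
      print(days_to_emergence(8.0))    # 125 days at the cooler river temperature
      print(days_to_emergence(11.8))   # 85 days at the warmer riverbed temperature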

  2. Intercomparison of methods of coupling between convection and large-scale circulation. 1. Comparison over uniform surface conditions

    SciTech Connect (OSTI)

    Daleu, C. L.; Plant, R. S.; Woolnough, S. J.; Sessions, S.; Herman, M. J.; Sobel, A.; Wang, S.; Kim, D.; Cheng, A.; Bellon, G.; Peyrille, P.; Ferry, F.; Siebesma, P.; van Ulft, L.

    2015-10-24

    Here, as part of an international intercomparison project, a set of single-column models (SCMs) and cloud-resolving models (CRMs) are run under the weak-temperature gradient (WTG) method and the damped gravity wave (DGW) method. For each model, the implementation of the WTG or DGW method involves a simulated column that is coupled to a reference state defined with profiles obtained from the same model in radiative-convective equilibrium. The simulated column has the same surface conditions as the reference state and is initialized with profiles from the reference state. We performed a systematic comparison of the behavior of different models under a consistent implementation of the WTG and DGW methods, and a systematic comparison of the two methods across models with different physics and numerics. CRMs and SCMs produce a variety of behaviors under both the WTG and DGW methods. Some of the models reproduce the reference state, while others sustain a large-scale circulation that results in either substantially lower or higher precipitation compared to the value of the reference state. CRMs show a fairly linear relationship between precipitation and circulation strength. SCMs display a wider range of behaviors than CRMs. Some SCMs under the WTG method produce zero precipitation. Within an individual SCM, a DGW simulation and a corresponding WTG simulation can produce circulations of opposite sign. When initialized with a dry troposphere, DGW simulations always result in a precipitating equilibrium state. The greatest sensitivity to the initial moisture conditions occurs in those WTG simulations that exhibit multiple stable equilibria, corresponding to either a dry equilibrium state when initialized dry or a precipitating equilibrium state when initialized moist. Multiple equilibria are seen in more WTG simulations at higher SST. In some models, the existence of multiple equilibria is sensitive to some parameters in the WTG calculations.
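
    For context, the WTG method typically diagnoses the large-scale vertical velocity by requiring that adiabatic cooling relax the simulated column's potential-temperature anomaly toward the reference profile over a fixed time scale; the sketch below assumes the standard relaxation form and is not the exact formulation used by each model in the intercomparison.

      import numpy as np

      def wtg_vertical_velocity(theta, theta_ref, dtheta_dz, tau=3 * 3600.0):
          """
          Diagnose the WTG large-scale vertical velocity w (m/s) on a height grid from
          w * dtheta/dz = (theta - theta_ref) / tau, so warm anomalies drive ascent.
          theta, theta_ref : potential temperature profiles (K)
          dtheta_dz        : static stability (K/m), assumed positive
          tau              : relaxation time scale (s), commonly a few hours
          """
          return (theta - theta_ref) / (tau * np.maximum(dtheta_dz, 1e-6))

      # Example: a 1 K warm anomaly in a layer with 4 K/km stability and tau = 3 h
      w = wtg_vertical_velocity(np.array([301.0]), np.array([300.0]), np.array([4.0e-3]))
      print(w)   # ~0.023 m/s of diagnosed large-scale ascent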

  3. Intercomparison of methods of coupling between convection and large-scale circulation. 1. Comparison over uniform surface conditions

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Daleu, C. L.; Plant, R. S.; Woolnough, S. J.; Sessions, S.; Herman, M. J.; Sobel, A.; Wang, S.; Kim, D.; Cheng, A.; Bellon, G.; et al

    2015-10-24

    Here, as part of an international intercomparison project, a set of single-column models (SCMs) and cloud-resolving models (CRMs) are run under the weak-temperature gradient (WTG) method and the damped gravity wave (DGW) method. For each model, the implementation of the WTG or DGW method involves a simulated column that is coupled to a reference state defined with profiles obtained from the same model in radiative-convective equilibrium. The simulated column has the same surface conditions as the reference state and is initialized with profiles from the reference state. We performed a systematic comparison of the behavior of different models under a consistent implementation of the WTG and DGW methods, and a systematic comparison of the two methods across models with different physics and numerics. CRMs and SCMs produce a variety of behaviors under both the WTG and DGW methods. Some of the models reproduce the reference state, while others sustain a large-scale circulation that results in either substantially lower or higher precipitation compared to the value of the reference state. CRMs show a fairly linear relationship between precipitation and circulation strength. SCMs display a wider range of behaviors than CRMs. Some SCMs under the WTG method produce zero precipitation. Within an individual SCM, a DGW simulation and a corresponding WTG simulation can produce circulations of opposite sign. When initialized with a dry troposphere, DGW simulations always result in a precipitating equilibrium state. The greatest sensitivity to the initial moisture conditions occurs in those WTG simulations that exhibit multiple stable equilibria, corresponding to either a dry equilibrium state when initialized dry or a precipitating equilibrium state when initialized moist. Multiple equilibria are seen in more WTG simulations at higher SST. In some models, the existence of multiple equilibria is sensitive to some parameters in the WTG calculations.

  4. Geomechanical effects on CO{sub 2} leakage through fault zones during large-scale underground injection

    SciTech Connect (OSTI)

    Rinaldi, A.P.; Rutqvist, J.; Cappa, F.

    2013-09-01

    The importance of geomechanics, including the potential for faults to reactivate during large-scale geologic carbon sequestration operations, has recently become more widely recognized. However, notwithstanding the potential for triggering notable (felt) seismic events, the potential for buoyancy-driven CO{sub 2} to reach potable groundwater and the ground surface is actually more important from public safety and storage-efficiency perspectives. In this context, this work extends previous studies on the geomechanical modeling of fault responses during underground carbon dioxide injection, focusing on the short-term integrity of the sealing caprock, and hence on the potential for leakage of either brine or CO{sub 2} to shallow groundwater aquifers during active injection. We consider stress/strain-dependent permeability and study the leakage through the fault zone as its permeability changes during a reactivation that also causes seismicity. We analyze several scenarios related to the volume of CO{sub 2} injected (and hence to the resulting overpressure), involving both minor and major faults, and analyze the leakage risk profiles for different stress/strain-permeability coupling functions. We conclude that, whereas it is very difficult to predict how much fault permeability could change upon reactivation, this process can have a significant impact on the leakage rate. Moreover, our analysis shows that induced seismicity associated with fault reactivation may not necessarily open up a new flow path for leakage. Results show a poor correlation between magnitude and amount of fluid leakage, meaning that a single event is generally not enough to substantially change the permeability along the entire fault length. Consequently, even if some changes in permeability occur, this does not mean that the CO{sub 2} will migrate up along the entire fault, breaking through the caprock to enter the overlying aquifer.
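
    A minimal sketch of one commonly used stress/strain-permeability coupling, an exponential dependence of fault permeability on volumetric strain (the coefficient and the functional form are assumptions for illustration; the specific coupling functions examined in the study may differ):

      import math

      def fault_permeability(k0, eps_v, beta=5.0e2):
          """
          Exponential strain-permeability coupling: k = k0 * exp(beta * eps_v).
          k0     : initial fault permeability (m^2)
          eps_v  : volumetric strain (dilation positive), dimensionless
          beta   : coupling coefficient (assumed value for illustration)
          """
          return k0 * math.exp(beta * eps_v)

      k0 = 1.0e-17                          # a low-permeability fault core (m^2)
      for eps in (0.0, 1e-3, 5e-3):         # increasing dilation upon reactivation
          print(eps, fault_permeability(k0, eps))
      # A few millistrain of dilation changes k by roughly an order of magnitude,
      # which is why the predicted leakage rate is so sensitive to the chosen coupling.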

  5. Evaluation of Flygt Mixers for Application in Savannah River Site Tank 19 Test Results from Phase B: Mid-Scale Testing at PNNL

    SciTech Connect (OSTI)

    Powell, M.R.; Combs, W.H.; Farmer, J.R.; Gladki, H.; Hatchell, B.K.; Johnson, M.A.; Poirier, M.R.; Rodwell, P.O.

    1999-03-30

    Pacific Northwest National Laboratory (PNNL) performed mixer tests using 3-kW (4-hp) Flygt mixers in 1.8- and 5.7-m-diameter tanks at the 336 Building facility in Richland, Washington, to evaluate candidate scaling relationships for Flygt mixers used for sludge mobilization and particle suspension. These tests constituted the second phase of a three-phase test program involving representatives from ITT Flygt Corporation, the Savannah River Site (SRS), the Oak Ridge National Laboratory (ORNL), and PNNL. The results of the first phase of tests, which were conducted at ITT Flygt's facility in a 0.45-m-diameter tank, are documented in Powell et al. (1999). Although some of the Phase B tests were geometrically similar to selected Phase A tests (0.45-m tank), none of the Phase B tests were geometrically, kinematically, and/or dynamically similar to the planned Tank 19 mixing system. Therefore, the mixing observed during the Phase B tests is not directly indicative of the mixing expected in Tank 19, and some extrapolation of the data is required to make predictions for Tank 19 mixing. Of particular concern is the size of the mixer propellers used for the 5.7-m tank tests. These propellers were more than three times larger than required by geometric scaling of the Tank 19 mixers. The implications of the lack of geometric similarity, as well as other factors that complicate interpretation of the test results, are discussed in Section 5.4.

  6. SUPERCRITICAL WATER PARTIAL OXIDATION PHASE I - PILOT-SCALE TESTING / FEASIBILITY STUDIES FINAL REPORT

    SciTech Connect (OSTI)

    SPRITZER,M; HONG,G

    2005-01-01

    Under Cooperative Agreement No. DE-FC36-00GO10529 for the Department of Energy, General Atomics (GA) is developing Supercritical Water Partial Oxidation (SWPO) as a means of producing hydrogen from low-grade biomass and other waste feeds. The Phase I Pilot-scale Testing/Feasibility Studies have been successfully completed, and the results of that effort are described in this report. The key potential advantage of the SWPO process is the use of partial oxidation in situ to rapidly heat the gasification medium, resulting in less char formation and improved hydrogen yield. Another major advantage is that the high-pressure, high-density aqueous environment is ideal for reacting and gasifying organics of all types. The high water content of the medium encourages formation of hydrogen and hydrogen-rich products and is especially compatible with high-water-content feeds such as biomass materials. The high water content of the medium is also effective for gasification of hydrogen-poor materials such as coal. A versatile pilot plant for exploring gasification in supercritical water has been established at GA's facilities in San Diego. The Phase I testing of the SWPO process with wood and ethanol mixtures demonstrated gasification efficiencies of about 90%, comparable to those found in prior laboratory-scale SCW gasification work carried out at the University of Hawaii at Manoa (UHM) as well as other biomass gasification experience with conventional gasifiers. As in the prior work at UHM, a significant amount of the hydrogen found in the gas-phase products is derived from the water/steam matrix. The studies at UHM utilized an indirectly heated gasifier with an activated carbon catalyst. In contrast, the GA studies utilized a directly heated gasifier without catalyst, plus a surrogate waste fuel. Attainment of comparable gasification efficiencies without catalysis is an important advancement for the GA process and opens the way for efficient hydrogen production from low-value, dirty feed materials. The Phase I results indicate that a practical means to overcome limitations on biomass slurry feed concentration and preheat temperature is to co-process an auxiliary high-heating-value material. SWPO co-processing of two high-water-content wastes, partially dewatered sewage sludge and trap grease, yields a scenario for the production of hydrogen at highly competitive prices. It is estimated that there are hundreds if not thousands of potential sites for this technology across the US and worldwide.

  7. Second test of base hydrolysate decomposition in a 0.04 gallon per minute scale reactor

    SciTech Connect (OSTI)

    Cena, R.J.; Thorsness, C.B.; Coburn, T.T.; Watkins, B.E.

    1994-10-11

    LLNL has built and operated a pilot plant for processing oil shale using recirculating hot solids. This pilot plant was adapted in 1993 to demonstrate the feasibility of decomposing base hydrolysate, a mixture of sodium nitrite, sodium formate, and other constituents. This material is the waste stream from the base hydrolysis process for destruction of energetic materials. In the Livermore process, the waste feed is thermally treated in a moving packed bed of ceramic spheres, where constituents in the waste decompose, in the presence of carbon dioxide, to form solid sodium carbonate and a suite of gases including methane, carbon monoxide, oxygen, nitrogen oxides, ammonia, and possibly molecular nitrogen. The ceramic spheres are circulated and heated, providing the energy required for thermal decomposition. The spheres provide a large surface area for evaporation and decomposition to occur, avoiding sticking and agglomeration of the waste. We performed a 2.5-hour test of the solids recirculation system, with continuous injection of approximately 0.04 gal/min of waste. Gases from the packed-bed reactor were directed through the lift pipe, and water was not condensed. Potassium carbonate (0.356 M) was added to the hydrolysate prior to its introduction to the retort. Continuous on-line gas analysis was invaluable in tracking the progress of the experiment and quantifying the decomposition products. Analyses showed that the primary solid product, collected in the lift exit cyclone, was indeed sodium carbonate, as expected. For the reactor condition studied in this test, N{sub 2}O was found to be the primary nitrogen-bearing gas species. In the test, approximately equal quantities of ammonia and nitrogen-bearing oxide gases were produced. Under proper conditions, this ammonia and NO{sub x} can be recombined downstream to form N{sub 2} and O{sub 2} as the primary effluent gases.

  8. Co-gasification of municipal solid waste and material recovery in a large-scale gasification and melting system

    SciTech Connect (OSTI)

    Tanigaki, Nobuhiro; Manako, Kazutaka; Osada, Morihiro

    2012-04-15

    Highlights: This study evaluates the effects of co-gasification of MSW with MSW bottom ash. No significant difference between MSW treatment with and without MSW bottom ash. PCDD/DF yields are significantly low because of the high carbon conversion ratio. Slag quality is significantly stable and slag contains few hazardous heavy metals. The final landfill amount is reduced and materials are recovered by the DMS process. - Abstract: This study evaluates the effects of co-gasification of municipal solid waste with and without municipal solid waste bottom ash using two large-scale commercial operation plants. From the viewpoint of operation data, there is no significant difference between municipal solid waste treatment with and without the bottom ash. The carbon conversion ratios are as high as 91.7% and 95.3%, respectively, and this leads to significantly low PCDD/DF yields via complete syngas combustion. The gross power generation efficiencies are 18.9% with the bottom ash and 23.0% without the bottom ash, respectively. The effects of the equivalence ratio are also evaluated. With increasing equivalence ratio, the carbon monoxide concentration decreases, while the carbon dioxide concentration and the syngas temperature (top gas temperature) increase. The carbon conversion ratio also increases. These tendencies are seen in both modes. Co-gasification using the gasification and melting system (Direct Melting System) has the potential to recover materials effectively. More than 90% of chlorine is distributed in fly ash. Low-boiling-point heavy metals, such as lead and zinc, are distributed in fly ash at rates of 95.2% and 92.0%, respectively. Most high-boiling-point heavy metals, such as iron and copper, are distributed in metal. It is also clarified that the slag is stable and contains few harmful heavy metals such as lead. Compared with the conventional waste management framework, an 85% reduction in the final landfill amount is achieved by co-gasification of municipal solid waste with bottom ash and incombustible residues. These results indicate that the combined production of slag with co-gasification of municipal solid waste with the bottom ash constitutes an ideal approach to environmental conservation and resource recycling.
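
    The quoted conversion and efficiency figures follow from simple mass and energy balances; a minimal sketch with illustrative inputs (the carbon flows, feed rate, and heating value below are assumptions, not plant data beyond the percentages quoted above):

      def carbon_conversion_ratio(c_in_syngas_kg_h, c_in_feed_kg_h):
          """Fraction of feed carbon converted to gas-phase carbon (CO, CO2, CH4, ...)."""
          return c_in_syngas_kg_h / c_in_feed_kg_h

      def gross_power_efficiency(power_out_mw, feed_rate_kg_s, lhv_mj_kg):
          """Gross electrical output divided by the fuel energy input."""
          return power_out_mw / (feed_rate_kg_s * lhv_mj_kg)

      # Illustrative only: 230 of 250 kg/h of feed carbon leaves in the syngas,
      # and a 10 kg/s MSW feed at 9 MJ/kg produces 17 MW(e) gross
      print(carbon_conversion_ratio(230.0, 250.0))      # 0.92
      print(gross_power_efficiency(17.0, 10.0, 9.0))    # ~0.19, i.e. about 19%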

  9. Radiative Heating of the ISCCP Upper Level Cloud Regimes and its Impact on the Large-scale Tropical Circulation

    SciTech Connect (OSTI)

    Li, Wei; Schumacher, Courtney; McFarlane, Sally A.

    2013-01-31

    Radiative heating profiles of the International Satellite Cloud Climatology Project (ISCCP) cloud regimes (or weather states) were estimated by matching ISCCP observations with radiative properties derived from cloud radar and lidar measurements from the Department of Energy (DOE) Atmospheric Radiation Measurement (ARM) sites at Manus, Papua New Guinea, and Darwin, Australia. Focus was placed on the ISCCP cloud regimes containing the majority of upper level clouds in the tropics, i.e., mesoscale convective systems (MCSs), deep cumulonimbus with cirrus, mixed shallow and deep convection, and thin cirrus. At upper levels, these regimes have average maximum cloud occurrences ranging from 30% to 55% near 12 km with variations depending on the location and cloud regime. The resulting radiative heating profiles have maxima of approximately 1 K/day near 12 km, with equal heating contributions from the longwave and shortwave components. Upper level minima occur near 15 km, with the MCS regime showing the strongest cooling of 0.2 K/day and the thin cirrus showing no cooling. The gradient of upper level heating ranges from 0.2 to 0.4 K/(day∙km), with the most convectively active regimes (i.e., MCSs and deep cumulonimbus with cirrus) having the largest gradient. When the above heating profiles were applied to the 25-year ISCCP data set, the tropics-wide average profile has a radiative heating maximum of 0.45 K/day near 250 hPa. Column-integrated radiative heating of upper level cloud accounts for about 20% of the latent heating estimated by the Tropical Rainfall Measuring Mission (TRMM) Precipitation Radar (PR). The ISCCP radiative heating of tropical upper level cloud only slightly modifies the response of an idealized primitive equation model forced with the tropics-wide TRMM PR latent heating, which suggests that the impact of upper level cloud is more important to large-scale tropical circulation variations because of convective feedbacks rather than direct forcing by the cloud radiative heating profiles. However, the height of the radiative heating maxima and gradient of the heating profiles are important to determine the sign and patterns of the horizontal circulation anomaly driven by radiative heating at upper levels.
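
    The quoted upper-level heating gradient can be recovered directly from the profile values in the abstract; a quick check, taking the layer bounds as the stated 12 and 15 km levels:

      # Upper-level heating gradient implied by the values quoted above
      q_max, z_max = 1.0, 12.0                    # K/day near the 12 km heating maximum
      minima = {"MCS": -0.2, "thin cirrus": 0.0}  # K/day near the 15 km minimum
      for regime, q_min in minima.items():
          gradient = (q_max - q_min) / (15.0 - z_max)   # K/(day km)
          print(regime, round(gradient, 2))
      # MCS gives 0.4 K/(day km), the top of the stated 0.2-0.4 range; regimes with
      # weaker upper-level heating maxima fall toward the lower end of that range.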

  10. The large-scale structure of the halo of the Andromeda galaxy. I. Global stellar density, morphology and metallicity properties

    SciTech Connect (OSTI)

    Ibata, Rodrigo A.; Martin, Nicolas F.; Lewis, Geraint F.; McConnachie, Alan W.; Irwin, Michael J.; Ferguson, Annette M. N.; Bernard, Edouard J.; Peñarrubia, Jorge; Babul, Arif; Navarro, Julio; Chapman, Scott C.; Collins, Michelle; Fardal, Mark; Mackey, A. D.; Rich, R. Michael; Tanvir, Nial; Widrow, Lawrence

    2014-01-10

    We present an analysis of the large-scale structure of the halo of the Andromeda galaxy, based on the Pan-Andromeda Archeological Survey (PAndAS), currently the most complete map of resolved stellar populations in any galactic halo. Despite the presence of copious substructures, the global halo populations closely follow power-law profiles that become steeper with increasing metallicity. We divide the sample into stream-like populations and a smooth halo component (defined as the population that cannot be resolved into spatially distinct substructures with PAndAS). Fitting a three-dimensional halo model reveals that the most metal-poor populations ([Fe/H]<−1.7) are distributed approximately spherically (slightly prolate with ellipticity c/a = 1.09 ± 0.03), with only a relatively small fraction residing in discernible stream-like structures (f{sub stream} = 42%). The sphericity of the ancient smooth component strongly hints that the dark matter halo is also approximately spherical. More metal-rich populations contain higher fractions of stars in streams, with f{sub stream} becoming as high as 86% for [Fe/H]>−0.6. The space density of the smooth metal-poor component has a global power-law slope of γ = –3.08 ± 0.07, and a non-parametric fit shows that the slope remains nearly constant from 30 kpc to ∼300 kpc. The total stellar mass in the halo at distances beyond 2° is ∼1.1 × 10{sup 10} M {sub ☉}, while that of the smooth component is ∼3 × 10{sup 9} M {sub ☉}. Extrapolating into the inner galaxy, the total stellar mass of the smooth halo is plausibly ∼8 × 10{sup 9} M {sub ☉}. We detect a substantial metallicity gradient, which declines from ([Fe/H]) = –0.7 at R = 30 kpc to ([Fe/H]) = –1.5 at R = 150 kpc for the full sample, with the smooth halo being ∼0.2 dex more metal poor than the full sample at each radius. While qualitatively in line with expectations from cosmological simulations, these observations are of great importance as they provide a prototype template that such simulations must now be able to reproduce in quantitative detail.
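
    Two of the quoted numbers lend themselves to one-line checks (an illustration using only the values stated above, not part of the record): the mean metallicity gradient implied by the 30 and 150 kpc endpoints, and the drop in density between 30 and 300 kpc for a γ = –3.08 power law.

      # Mean metallicity gradient between 30 and 150 kpc
      grad = (-1.5 - (-0.7)) / (150.0 - 30.0)
      print(f"{grad:.4f} dex/kpc")       # about -0.0067 dex/kpc

      # Density contrast for a power law rho ~ R**gamma with gamma = -3.08
      gamma = -3.08
      print((300.0 / 30.0) ** gamma)     # ~8e-4, a drop of about three orders of magnitude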

  11. SUMMARY PLAN FOR BENCH-SCALE REFORMER AND PRODUCT TESTING TREATABILITY STUDIES USING HANFORD TANK WASTE

    SciTech Connect (OSTI)

    DUNCAN JB

    2010-08-19

    This paper describes the sample selection, sample preparation, environmental, and regulatory considerations for shipment of Hanford radioactive waste samples for treatability studies of the FBSR process at the Savannah River National Laboratory and the Pacific Northwest National Laboratory. The U.S. Department of Energy (DOE) Hanford tank farms contain approximately 57 million gallons of wastes, most of which originated during the reprocessing of spent nuclear fuel to produce plutonium for defense purposes. DOE intends to pre-treat the tank waste to separate it into a high-level fraction that will be vitrified and disposed of in a national repository as high-level waste (HLW), and a low-activity waste (LAW) fraction that will be immobilized for on-site disposal at Hanford. The Hanford Waste Treatment and Immobilization Plant (WTP) is the focal point for the treatment of Hanford tank waste. However, the WTP lacks the capacity to process all of the LAW within the required regulatory timeframe. Consequently, a supplemental LAW immobilization process will be required to immobilize the remainder of the LAW. One promising supplemental technology is Fluidized Bed Steam Reforming (FBSR) to produce a sodium-alumino-silicate (NAS) waste form. The NAS waste form is primarily composed of nepheline (NaAlSiO{sub 4}), sodalite (Na{sub 8}[AlSiO{sub 4}]{sub 6}Cl{sub 2}), and nosean (Na{sub 8}[AlSiO{sub 4}]{sub 6}SO{sub 4}). Semivolatile anions such as pertechnetate (TcO{sub 4}{sup -}) and volatiles such as iodine as iodide (I{sup -}) are expected to be entrapped within the mineral structures, thereby immobilizing them (Janzen 2008). Results from preliminary performance tests using surrogates suggest that the release of the semivolatile radionuclide {sup 99}Tc and volatile {sup 129}I from the granular NAS waste form is limited by nosean solubility. The predicted release of {sup 99}Tc from the NAS waste form at a well 100 meters down-gradient of the Integrated Disposal Facility (IDF) was found to be comparable to that of the immobilized low-activity waste glass waste form in the initial supplemental LAW treatment technology risk assessment (Mann 2003). To confirm this hypothesis, DOE is funding a treatability study in which three actual Hanford tank waste samples (containing both {sup 99}Tc and {sup 125}I) will be processed in Savannah River National Laboratory's (SRNL) Bench-Scale Reformer (BSR) to form the mineral product, similar to the granular NAS waste form, which will then be subjected to a number of waste form qualification tests. In previous tests, SRNL has demonstrated that the BSR product is chemically and physically equivalent to the FBSR product (Janzen 2005). The objective of this paper is to describe the sample selection, sample preparation, and environmental and regulatory considerations for treatability studies of the FBSR process using Hanford tank waste samples at SRNL. SRNL will process the samples in its BSR. These samples will be decontaminated in the 222-S Laboratory to remove undissolved solids and selected radioisotopes to comply with Department of Transportation (DOT) shipping regulations and to ensure worker safety by limiting radiation exposure to As Low As Reasonably Achievable (ALARA). These decontamination levels will also meet the Nuclear Regulatory Commission's (NRC's) definition of low-activity waste (LAW).
After SRNL has processed the tank samples to a granular mineral form, SRNL and Pacific Northwest National Laboratory (PNNL) will conduct waste form testing on both the granular material and monoliths prepared from the granular material. The tests to be performed are outlined in Appendix A.

  12. Adequacy of Power-to-Mass Scaling in Simulating PWR Incident Transient for Reduced-Height, Reduced-Pressure and Full-Height, Full-Pressure Integral System Test Facilities

    SciTech Connect (OSTI)

    Liu, T.-J.; Lee, C.-H

    2004-03-15

    A complete scheme of scaling methods to design the reduced-height, reduced-pressure (RHRP) Institute of Nuclear Energy Research Integral System Test (IIST) facility and to specify test conditions for incident simulation was developed. In order to preserve the core decay power history and coolant mass inventory during a transient, a unique power-to-mass scaling method is proposed and utilized for RHRP and full-height, full-pressure (FHFP) systems. To validate the current scaling method, three counterpart tests done at the IIST facility are compared with FHFP tests of small-break loss-of-coolant, station blackout, and loss-of-feedwater accidents performed at the Large-Scale Test Facility (LSTF) and the BETHSY test facility. Although differences appeared in design, scaling, and operating conditions among the IIST, LSTF, and BETHSY test facilities, the important physical phenomena shown in the facilities are almost the same. The physics involved in the incident transients is captured well, as shown by the common thermal-hydraulic behavior of key parameters and the general consistency of the chronology of events. The results also confirm the adequacy of the power-to-mass scaling methodology.
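
    The essence of power-to-mass scaling is to preserve the specific power, so the model core power follows the coolant-mass ratio; a minimal statement of the idea (the numerical values below are illustrative, not IIST design data):

      # Power-to-mass scaling: keep the specific power P/M the same in model and prototype
      def scaled_core_power(p_prototype_mw, m_prototype_kg, m_model_kg):
          """Model decay power that preserves P/M, and hence the coolant heat-up history."""
          return p_prototype_mw * (m_model_kg / m_prototype_kg)

      # Illustrative numbers only: 20 MW of decay heat and a 1/400 coolant-mass-scale model
      print(scaled_core_power(20.0, 2.0e5, 500.0))   # 0.05 MW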

  13. CyanoGEBA: A Better Understanding of Cyanobacterial Diversity through Large-scale Genomics (JGI Seventh Annual User Meeting 2012: Genomics of Energy and Environment)

    SciTech Connect (OSTI)

    Shih, Patrick [Kerfeld Lab, UC Berkeley and JGI]

    2012-03-22

    Patrick Shih, representing both the University of California, Berkeley and JGI, gives a talk titled "CyanoGEBA: A Better Understanding of Cyanobacterial Diversity through Large-scale Genomics" at the JGI 7th Annual Users Meeting: Genomics of Energy & Environment Meeting on March 22, 2012 in Walnut Creek, California.

  14. CyanoGEBA: A Better Understanding of Cyanobacterial Diversity through Large-scale Genomics (JGI Seventh Annual User Meeting 2012: Genomics of Energy and Environment)

    ScienceCinema (OSTI)

    Shih, Patrick [Kerfeld Lab, UC Berkeley and JGI]

    2013-01-22

    Patrick Shih, representing both the University of California, Berkeley and JGI, gives a talk titled "CyanoGEBA: A Better Understanding of Cyanobacterial Diversity through Large-scale Genomics" at the JGI 7th Annual Users Meeting: Genomics of Energy & Environment Meeting on March 22, 2012 in Walnut Creek, California.

  15. A BRIEF DESCRIPTION OF THE SMALL-SCALE SAFETY TESTING SYSTEMS AT LAWRENCE LIVERMORE NATIONAL LABORATORY

    SciTech Connect (OSTI)

    HSU, P C

    2008-07-31

    Small-scale sensitivity testing is important for determining material response to various stimuli including impact, friction, and static spark. These tests, briefly described below, provide parameters for safety in handling. ERL Type 12 drop hammer equipment at LLNL, shown in Figure 1, was used to determine the impact sensitivity. The equipment includes a 2.5-kg drop weight, a striker (upper anvil; 2.5 kg for solid samples and 1.0 kg for liquid samples), a bottom anvil, a microphone sensor, and a peakmeter. For each drop, a sample (35 mg for solids or 45 microliters for liquids) is placed on the bottom anvil surface and impacted by the drop weight from different heights. Signs of reaction upon impact are observed and recorded. These signs include noises, flashes or sparks, smoke, pressure, gas emissions, temperature rise due to exothermic reaction, color change of the sample, and changes to the anvil surface (noted by inspection). For solid samples, a 'GO' was defined as a microphone sensor (for noise detection) response of {ge} 1.3 V as measured by a peakmeter. The higher the DH{sub 50} value, the lower the impact sensitivity. The method used to calculate DH{sub 50} values is the 'up and down' or Bruceton method. PETN and RDX have impact sensitivities of 15 and 35 cm, respectively. TATB has an impact sensitivity of more than 177 cm. For liquid samples, a 'GO' was determined by the noise level as measured by the peakmeter, the appearance of flashes, temperature rise of the anvil, and visual inspection of the anvil surface. Two liquid samples, TMETN and FEFO, have impact sensitivities of 14 and 32 cm, respectively. Figure 2 shows a 'GO' event observed during the impact sensitivity test; flashes appeared as the drop weight impacted the sample. A BAM friction sensitivity test machine, as shown in Figure 3, was used to determine the frictional sensitivity. The system uses a fixed porcelain pin and a movable porcelain plate that executes a reciprocating motion. Weight affixed to a torsion arm allows for variation of the applied force between 0.5 kg and 36.0 kg. The relative measure of the frictional sensitivity of a material is based upon the smallest load (kg) at which reaction occurs in a 1-in-10 series of attempts. The lower the load value, the higher the frictional sensitivity. PETN has a frictional sensitivity of 6.4 kg. The static spark machine at LLNL is used to evaluate the electrostatic discharge hazards (human ESD) associated with the handling of explosives. The machine was custom-built almost 30 years ago and consists of a capacitor bank (up to 20,000 pF), a voltage meter, and a discharge circuit, as shown in Figure 4. An adjustable resistor of up to 510 ohms (chosen to simulate the human body) is wired into the discharge circuit. A 5-mg sample is placed in a Teflon washer sealed to a steel disc and covered with Mylar tape. A high static voltage (up to 10 kV) is applied and discharged to the sample. Evidence of reaction is judged from the condition of the Mylar tape, smoke, and color change of the sample. The voltage, capacitance, and resistance can be adjusted to achieve the desired static energy. The results are expressed as zero-in-ten or one-in-ten reactions at a specific voltage and energy. One reaction in ten trials at {le} 0.25 joules is considered spark-sensitive. Primary explosives show reaction at 0.1 joule.
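
    Two of the quantities above can be illustrated with short worked examples: the 50% drop height (DH{sub 50}) estimated from a Bruceton up-and-down series, and the capacitor discharge energy E = 0.5 C V{sup 2} used to set the spark stimulus. The sketch below is a simplified, textbook-style illustration with made-up trial data; it is not LLNL's analysis code.

      def bruceton_dh50(heights, reactions, step):
          """
          Dixon-Mood estimate of the 50% drop height from an up-and-down (Bruceton)
          series run on an equally spaced height ladder. Rigorous practice uses a
          log-height ladder; a linear ladder is used here for simplicity.
          heights   : drop height used on each trial (cm)
          reactions : True for a 'GO', False for a 'NO-GO'
          step      : ladder spacing (cm)
          """
          gos = sum(reactions)
          use_go = gos <= len(reactions) - gos                 # analyse the rarer outcome
          sel = [h for h, r in zip(heights, reactions) if r == use_go]
          base = min(sel)
          idx = [round((h - base) / step) for h in sel]
          n, a = len(idx), sum(idx)
          sign = -0.5 if use_go else 0.5                       # minus when based on 'GO's
          return base + step * (a / n + sign)

      def spark_energy_joules(capacitance_f, voltage_v):
          """Energy delivered by the discharge capacitor: E = 0.5 * C * V**2."""
          return 0.5 * capacitance_f * voltage_v ** 2

      # Made-up up-and-down series on a 5-cm ladder (drop after a GO, raise after a NO-GO)
      heights   = [15, 10, 15, 10, 15, 10, 5, 10]
      reactions = [True, False, True, False, True, True, False, True]
      print(round(bruceton_dh50(heights, reactions, 5.0), 1))   # ~10.8 cm for this toy series

      # The full 20,000 pF bank charged to 10 kV delivers about 1 J, well above the
      # 0.25 J screening threshold quoted above.
      print(spark_energy_joules(20_000e-12, 10e3))              # 1.0 J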

  16. Pilot-scale Tests to Vitrify Korean Low-Level Wastes

    SciTech Connect (OSTI)

    Choi, K.; Kim, C.-W.; Park, J. K.; Shin, S. W.; Song, M.-J.; Brunelot, P.; Flament, T.

    2002-02-26

    Korea is preparing its first commercial vitrification plant to handle LLW from its nuclear power plants (NPPs). The waste streams include three categories: combustible dry active waste (DAW), borate concentrates, and spent resin. The combustible DAW in this research contains vinyl bags, paper, protective clothing, and rubber shoes. Loaded resin was used to simulate spent resin from NPPs. As a part of this project, the Nuclear Environment Technology Institute (NETEC) has tested an operation mode utilizing its pilot-scale plant and mixed waste surrogates of resin and DAW. It has also demonstrated, with continuous operation for more than 100 hours, the consistency and operability of the plant, including the cold crucible melter and its off-gas treatment equipment. Resin and combustible DAW were simultaneously fed into the glass bath with periodic addition of various glass frits as additives, achieving a volume reduction factor larger than 70. This paper discusses maintaining the viscosity and electrical conductivity of the glass bath within their operable ranges by adding various glass frits, rather than obtaining a durable glass product. The operating mode starts with a batch of glass in which a titanium ring is buried. When the induced power ignites the ring, the joule heat, together with the oxidation heat of the titanium, melts the surrounding glass frit. As soon as the molten bath is prepared, in the first stage of the mode, the wastes consisting of loaded resin and combustible DAW are fed with no or minimal addition of glass frits. Then, in the second stage, the bath composition is kept as constant as possible. This operation was successful in terms of maintaining the glass bath under operable conditions and produced homogeneous glass. This operation mode could be adopted at the commercial stage.

  17. Development of a pilot-scale kinetic extruder feeder system and test program. Phase II. Verification testing. Final report

    SciTech Connect (OSTI)

    Not Available

    1984-01-12

    This report describes the work done under Phase II, the verification testing of the Kinetic Extruder. The main objective of the test program was to determine failure modes and wear rates. Only minor auxiliary equipment malfunctions were encountered. Wear rates indicate a useful life expectancy of 1 to 5 years for wear-exposed components. Recommendations are made for adapting the equipment for pilot plant and commercial applications. 3 references, 20 figures, 12 tables.

  18. EFRT M-12 Issue Resolution: Caustic-Leach Rate Constants from PEP and Laboratory-Scale Tests

    SciTech Connect (OSTI)

    Mahoney, Lenna A.; Rassat, Scot D.; Eslinger, Paul W.; Aaberg, Rosanne L.; Aker, Pamela M.; Golovich, Elizabeth C.; Hanson, Brady D.; Hausmann, Tom S.; Huckaby, James L.; Kurath, Dean E.; Minette, Michael J.; Sundaram, S. K.; Yokuda, Satoru T.

    2010-01-01

    Pacific Northwest National Laboratory (PNNL) has been tasked by Bechtel National Inc. (BNI) on the River Protection Project-Hanford Tank Waste Treatment and Immobilization Plant (RPP-WTP) project to perform research and development activities to resolve technical issues identified for the Pretreatment Facility (PTF). The Pretreatment Engineering Platform (PEP) was designed, constructed, and operated as part of a plan to respond to issue M12, Undemonstrated Leaching Processes, of the External Flowsheet Review Team (EFRT) issue response plan. The PEP is a 1/4.5-scale test platform designed to simulate the WTP pretreatment caustic leaching, oxidative leaching, ultrafiltration solids concentration, and slurry washing processes. The PEP replicates the WTP leaching processes using prototypic equipment and control strategies. The PEP also includes non-prototypic ancillary equipment to support the core processing. The work described in this report addresses caustic leaching under WTP conditions, based on tests performed with a Hanford waste simulant. Because gibbsite leaching kinetics are rapid (gibbsite is expected to be dissolved by the time the final leach temperature is reached), boehmite leach kinetics are the main focus of the caustic-leach tests. The tests were completed at the laboratory scale and in the PEP, which is a 1/4.5-scale mock-up of key PTF process equipment. Two laboratory-scale caustic-leach tests were performed for each of the PEP runs. For each PEP run, unleached slurry was taken from the PEP caustic-leach vessel for one batch and used as feed for both of the corresponding laboratory-scale tests.
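
    Caustic-leach rate constants of the kind discussed above are typically extracted by fitting a rate law to the fraction of boehmite remaining (or aluminum dissolved) over time; the sketch below uses a generic first-order model with made-up numbers and is not the PEP or laboratory-scale analysis.

      import math

      def boehmite_remaining(k_per_h, t_h, f0=1.0):
          """Generic first-order model: undissolved boehmite fraction f(t) = f0 * exp(-k t)."""
          return f0 * math.exp(-k_per_h * t_h)

      def rate_constant_from_point(t_h, f_remaining):
          """Back out k from a single (time, fraction remaining) observation, with f0 = 1."""
          return -math.log(f_remaining) / t_h

      # Made-up example: 40% of the boehmite remains after 16 h at the leach temperature
      k = rate_constant_from_point(16.0, 0.40)
      print(round(k, 4), "1/h")                       # ~0.0573 1/h
      print(round(boehmite_remaining(k, 24.0), 2))    # ~0.25 of the boehmite left at 24 h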

  19. Using an Energy Performance Based Design-Build Process to Procure a Large Scale Low-Energy Building: Preprint

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)


  20. Large-scale real-space density-functional calculations: Moiré-induced electron localization in graphene

    SciTech Connect (OSTI)

    Oshiyama, Atsushi; Iwata, Jun-Ichi; Uchida, Kazuyuki; Matsushita, Yu-Ichiro

    2015-03-21

    We show that our real-space finite-difference scheme allows us to perform density-functional calculations for nanometer-scale targets containing more than 100,000 atoms. This real-space scheme is applied to twisted bilayer graphene, clarifying that the Moiré pattern induced in slightly twisted bilayer graphene drastically modifies the atomic and electronic structures.
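
    The key ingredient of a real-space finite-difference scheme is a local stencil for the kinetic-energy (Laplacian) operator on a uniform grid, which avoids FFTs and lets the work distribute over grid points; a minimal second-order sketch is shown below (production codes use higher-order stencils, and nothing here is taken from the authors' implementation).

      import numpy as np

      def laplacian_3d(f, h):
          """
          Second-order central-difference Laplacian on a periodic uniform grid of spacing h.
          In a real-space DFT code the kinetic operator -0.5 * Laplacian is applied to each
          orbital with such a local stencil, so the work parallelizes naturally over the grid.
          """
          lap = -6.0 * f
          for axis in range(3):
              lap = lap + np.roll(f, 1, axis=axis) + np.roll(f, -1, axis=axis)
          return lap / h ** 2

      # Sanity check on an analytic case: f = sin(x), whose Laplacian is -sin(x)
      n, box = 64, 2.0 * np.pi
      x = np.linspace(0.0, box, n, endpoint=False)
      f = np.sin(x)[:, None, None] * np.ones((1, n, n))
      error = np.max(np.abs(laplacian_3d(f, box / n) + f))
      print(error)   # small, and shrinks as O(h^2) when the grid is refined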