National Library of Energy BETA

Sample records for large-scale field test

  1. The Effective Field Theory of Cosmological Large Scale Structures...

    Office of Scientific and Technical Information (OSTI)

    The Effective Field Theory of Cosmological Large Scale Structures...

  2. Self-consistency tests of large-scale dynamics parameterizations...

    Office of Scientific and Technical Information (OSTI)

    In self-consistency tests based on radiative-convective equilibrium (RCE; i.e., no large-scale convergence), we find that simulations either weakly coupled or strongly coupled to ...

  3. Large-Scale Industrial CCS Projects Selected for Continued Testing |

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Large-Scale Industrial CCS Projects Selected for Continued Testing June 10, 2010 - 1:00pm Washington, DC - Three Recovery Act funded projects have been selected by the U.S. Department of Energy (DOE) to continue testing large-scale carbon capture and storage (CCS) from industrial sources. The projects - located in Texas, Illinois, and Louisiana - were initially selected for funding in October 2009 as part of a $1.4

  4. Relic vector field and CMB large scale anomalies

    SciTech Connect (OSTI)

    Chen, Xingang; Wang, Yi E-mail: yw366@cam.ac.uk

    2014-10-01

    We study the most general effects of relic vector fields on the inflationary background and density perturbations. Such effects are observable if the number of inflationary e-folds is close to the minimum requirement to solve the horizon problem. We show that this can potentially explain two CMB large scale anomalies: the quadrupole-octopole alignment and the quadrupole power suppression. We discuss its effect on the parity anomaly. We also provide an analytical template for more detailed data comparison.

  5. The effective field theory of cosmological large scale structures

    SciTech Connect (OSTI)

    Carrasco, John Joseph M.; Hertzberg, Mark P.; Senatore, Leonardo

    2012-09-20

    Large scale structure surveys will likely become the next leading cosmological probe. In our universe, matter perturbations are large on short distances and small at long scales, i.e. strongly coupled in the UV and weakly coupled in the IR. To make precise analytical predictions on large scales, we develop an effective field theory formulated in terms of an IR effective fluid characterized by several parameters, such as speed of sound and viscosity. These parameters, determined by the UV physics described by the Boltzmann equation, are measured from N-body simulations. We find that the speed of sound of the effective fluid is c{sub s}{sup 2} ≈ 10{sup -6}c{sup 2} and that the viscosity contributions are of the same order. The fluid describes all the relevant physics at long scales k and permits a manifestly convergent perturbative expansion in the size of the matter perturbations δ(k) for all the observables. As an example, we calculate the correction to the power spectrum at order δ(k){sup 4}. As a result, the predictions of the effective field theory are found to be in much better agreement with observation than standard cosmological perturbation theory, already reaching percent precision at this order up to a relatively short scale k ≈ 0.24h Mpc{sup -1}.

  6. Scalable parallel distance field construction for large-scale applications

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Yu, Hongfeng; Xie, Jinrong; Ma, Kwan-Liu; Kolla, Hemanth; Chen, Jacqueline H.

    2015-10-01

    Computing distance fields is fundamental to many scientific and engineering applications. Distance fields can be used to direct analysis and reduce data. In this paper, we present a highly scalable method for computing 3D distance fields on massively parallel distributed-memory machines. A new distributed spatial data structure, named parallel distance tree, is introduced to manage the level sets of data and facilitate surface tracking over time, resulting in significantly reduced computation and communication costs for calculating the distance to the surface of interest from any spatial location. Our method supports several data types and distance metrics from real-world applications. We demonstrate its efficiency and scalability on state-of-the-art supercomputers using both large-scale volume datasets and surface models. We also demonstrate in-situ distance field computation on dynamic turbulent flame surfaces for a petascale combustion simulation. In conclusion, our work greatly extends the usability of distance fields for demanding applications.
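The core of a grid distance-field computation can be sketched with a serial multi-source breadth-first sweep; this is a minimal stand-in for the paper's parallel distance tree, with an integer (Manhattan) metric and a 2D grid as illustrative assumptions:

```python
from collections import deque

def distance_field(grid):
    """Integer (BFS/Manhattan) distance from every cell to the nearest
    surface cell (value 1) on a 2D grid.  A serial illustrative sketch,
    not the paper's distributed parallel distance tree."""
    rows, cols = len(grid), len(grid[0])
    INF = float("inf")
    dist = [[INF] * cols for _ in range(rows)]
    q = deque()
    for r in range(rows):                      # seed with all surface cells
        for c in range(cols):
            if grid[r][c] == 1:
                dist[r][c] = 0
                q.append((r, c))
    while q:                                   # multi-source BFS sweep
        r, c = q.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and dist[nr][nc] == INF:
                dist[nr][nc] = dist[r][c] + 1
                q.append((nr, nc))
    return dist
```

A distributed version partitions the grid across ranks and exchanges boundary distances until convergence, which is where the paper's communication savings come in.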

  7. Large-Scale Spray Releases: Additional Aerosol Test Results

    SciTech Connect (OSTI)

    Daniel, Richard C.; Gauglitz, Phillip A.; Burns, Carolyn A.; Fountain, Matthew S.; Shimskey, Rick W.; Billing, Justin M.; Bontha, Jagannadha R.; Kurath, Dean E.; Jenks, Jeromy WJ; MacFarlan, Paul J.; Mahoney, Lenna A.

    2013-08-01

    One of the events postulated in the hazard analysis for the Waste Treatment and Immobilization Plant (WTP) and other U.S. Department of Energy (DOE) nuclear facilities is a breach in process piping that produces aerosols with droplet sizes in the respirable range. The current approach for predicting the size and concentration of aerosols produced in a spray leak event involves extrapolating from correlations reported in the literature. These correlations are based on results obtained from small engineered spray nozzles using pure liquids that behave as a Newtonian fluid. The narrow ranges of physical properties on which the correlations are based do not cover the wide range of slurries and viscous materials that will be processed in the WTP and in processing facilities across the DOE complex. To expand the data set upon which the WTP accident and safety analyses were based, an aerosol spray leak testing program was conducted by Pacific Northwest National Laboratory (PNNL). PNNL’s test program addressed two key technical areas to improve the WTP methodology (Larson and Allen 2010). The first technical area was to quantify the role of slurry particles in small breaches where slurry particles may plug the hole and prevent high-pressure sprays. The results from an effort to address this first technical area can be found in Mahoney et al. (2012a). The second technical area was to determine aerosol droplet size distribution and total droplet volume from prototypic breaches and fluids, including sprays from larger breaches and sprays of slurries for which literature data are mostly absent. To address the second technical area, the testing program collected aerosol generation data at two scales, commonly referred to as small-scale and large-scale testing. The small-scale testing and resultant data are described in Mahoney et al. (2012b), and the large-scale testing and resultant data are presented in Schonewill et al. (2012). In tests at both scales, simulants were used

  8. The IR-resummed Effective Field Theory of Large Scale Structures...

    Office of Scientific and Technical Information (OSTI)

    The IR-resummed Effective Field Theory of Large Scale Structures: We present a ...

  9. Large-Scale Spray Releases: Initial Aerosol Test Results

    SciTech Connect (OSTI)

    Schonewill, Philip P.; Gauglitz, Phillip A.; Bontha, Jagannadha R.; Daniel, Richard C.; Kurath, Dean E.; Adkins, Harold E.; Billing, Justin M.; Burns, Carolyn A.; Davis, James M.; Enderlin, Carl W.; Fischer, Christopher M.; Jenks, Jeromy WJ; Lukins, Craig D.; MacFarlan, Paul J.; Shutthanandan, Janani I.; Smith, Dennese M.

    2012-12-01

    One of the events postulated in the hazard analysis at the Waste Treatment and Immobilization Plant (WTP) and other U.S. Department of Energy (DOE) nuclear facilities is a breach in process piping that produces aerosols with droplet sizes in the respirable range. The current approach for predicting the size and concentration of aerosols produced in a spray leak involves extrapolating from correlations reported in the literature. These correlations are based on results obtained from small engineered spray nozzles using pure liquids with Newtonian fluid behavior. The narrow ranges of physical properties on which the correlations are based do not cover the wide range of slurries and viscous materials that will be processed in the WTP and across processing facilities in the DOE complex. Two key technical areas were identified where testing results were needed to improve the technical basis by reducing the uncertainty due to extrapolating existing literature results. The first technical need was to quantify the role of slurry particles in small breaches where the slurry particles may plug and result in substantially reduced, or even negligible, respirable fraction formed by high-pressure sprays. The second technical need was to determine the aerosol droplet size distribution and volume from prototypic breaches and fluids, specifically including sprays from larger breaches with slurries where data from the literature are scarce. To address these technical areas, small- and large-scale test stands were constructed and operated with simulants to determine aerosol release fractions and generation rates from a range of breach sizes and geometries. The properties of the simulants represented the range of properties expected in the WTP process streams and included water, sodium salt solutions, slurries containing boehmite or gibbsite, and a hazardous chemical simulant. The effect of anti-foam agents was assessed with most of the simulants. Orifices included round holes and

  10. PROPERTIES IMPORTANT TO MIXING FOR WTP LARGE SCALE INTEGRATED TESTING

    SciTech Connect (OSTI)

    Koopman, D.; Martino, C.; Poirier, M.

    2012-04-26

    Large Scale Integrated Testing (LSIT) is being planned by Bechtel National, Inc. to address uncertainties in the full scale mixing performance of the Hanford Waste Treatment and Immobilization Plant (WTP). Testing will use simulated waste rather than actual Hanford waste. Therefore, the use of suitable simulants is critical to achieving the goals of the test program. External review boards have raised questions regarding the overall representativeness of simulants used in previous mixing tests. Accordingly, WTP requested the Savannah River National Laboratory (SRNL) to assist with development of simulants for use in LSIT. Among the first tasks assigned to SRNL was to develop a list of waste properties that matter to pulse-jet mixer (PJM) mixing of WTP tanks. This report satisfies Commitment 5.2.3.1 of the Department of Energy Implementation Plan for Defense Nuclear Facilities Safety Board Recommendation 2010-2: physical properties important to mixing and scaling. In support of waste simulant development, the following two objectives are the focus of this report: (1) Assess physical and chemical properties important to the testing and development of mixing scaling relationships; (2) Identify the governing properties and associated ranges for LSIT to achieve the Newtonian and non-Newtonian test objectives. This includes the properties to support testing of sampling and heel management systems. The test objectives for LSIT relate to transfer and pump out of solid particles, prototypic integrated operations, sparger operation, PJM controllability, vessel level/density measurement accuracy, sampling, heel management, PJM restart, design and safety margin, Computational Fluid Dynamics (CFD) Verification and Validation (V and V) and comparison, performance testing and scaling, and high temperature operation. The slurry properties that are most important to Performance Testing and Scaling depend on the test objective and rheological classification of the slurry (i

  11. Single-field consistency relations of large scale structure

    SciTech Connect (OSTI)

    Creminelli, Paolo; Noreña, Jorge; Simonović, Marko; Vernizzi, Filippo E-mail: jorge.norena@icc.ub.edu E-mail: filippo.vernizzi@cea.fr

    2013-12-01

    We derive consistency relations for the late universe (CDM and ΛCDM): relations between an n-point function of the density contrast δ and an (n+1)-point function in the limit in which one of the (n+1) momenta becomes much smaller than the others. These are based on the observation that a long mode, in single-field models of inflation, reduces to a diffeomorphism since its freezing during inflation all the way until the late universe, even when the long mode is inside the horizon (but out of the sound horizon). These results are derived in Newtonian gauge, at first and second order in the small momentum q of the long mode and they are valid non-perturbatively in the short-scale δ. In the non-relativistic limit our results match with [1]. These relations are a consequence of diffeomorphism invariance; they are not satisfied in the presence of extra degrees of freedom during inflation or violation of the Equivalence Principle (extra forces) in the late universe.
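In the Newtonian limit, the squeezed-limit relation the abstract describes takes a familiar schematic form (written from the standard lowest-order statement in the literature, not copied from this paper's equations; D is the linear growth factor and primes denote correlators with the overall momentum delta function stripped):

```latex
\lim_{q \to 0}
\langle \delta_{\mathbf q}(\tau)\,
        \delta_{\mathbf k_1}(\tau_1) \cdots \delta_{\mathbf k_n}(\tau_n)
\rangle'
= - P_\delta(q;\tau)\,
  \sum_{a=1}^{n} \frac{D(\tau_a)}{D(\tau)}\,
  \frac{\mathbf k_a \cdot \mathbf q}{q^2}\,
\langle \delta_{\mathbf k_1}(\tau_1) \cdots \delta_{\mathbf k_n}(\tau_n)\rangle'
```

At equal times the sum vanishes by momentum conservation, which is why these relations constrain unequal-time correlators at this order.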

  12. Large-Scale Field Study of Landfill Covers at Sandia National Laboratories

    SciTech Connect (OSTI)

    Dwyer, S.F.

    1998-09-01

    A large-scale field demonstration comparing final landfill cover designs has been constructed and is currently being monitored at Sandia National Laboratories in Albuquerque, New Mexico. Two conventional designs (a RCRA Subtitle `D' Soil Cover and a RCRA Subtitle `C' Compacted Clay Cover) were constructed side-by-side with four alternative cover test plots designed for dry environments. The demonstration is intended to evaluate the various cover designs based on their respective water balance performance, ease and reliability of construction, and cost. This paper presents an overview of the ongoing demonstration.

  13. Goethite Bench-scale and Large-scale Preparation Tests

    SciTech Connect (OSTI)

    Josephson, Gary B.; Westsik, Joseph H.

    2011-10-23

    The Hanford Waste Treatment and Immobilization Plant (WTP) is the keystone for cleanup of high-level radioactive waste from our nation's nuclear defense program. The WTP will process high-level waste from the Hanford tanks and produce immobilized high-level waste glass for disposal at a national repository, low activity waste (LAW) glass, and liquid effluent from the vitrification off-gas scrubbers. The liquid effluent will be stabilized into a secondary waste form (e.g. grout-like material) and disposed on the Hanford site in the Integrated Disposal Facility (IDF) along with the low-activity waste glass. The major long-term environmental impact at Hanford results from technetium that volatilizes from the WTP melters and finally resides in the secondary waste. Laboratory studies have indicated that pertechnetate ({sup 99}TcO{sub 4}{sup -}) can be reduced and captured into a solid solution of {alpha}-FeOOH, goethite (Um 2010). Goethite is a stable mineral and can significantly retard the release of technetium to the environment from the IDF. The laboratory studies were conducted using reaction times of many days, which is typical of environmental subsurface reactions that were the genesis of this new process. This study was the first step in considering adaptation of the slow laboratory steps to a larger-scale and faster process that could be conducted either within the WTP or within the effluent treatment facility (ETF). Two levels of scale-up tests were conducted (25x and 400x). The largest scale-up produced slurries of Fe-rich precipitates that contained rhenium as a nonradioactive surrogate for {sup 99}Tc. The slurries were used in melter tests at Vitreous State Laboratory (VSL) to determine whether captured rhenium was less volatile in the vitrification process than rhenium in an unmodified feed. 
    A critical step in the technetium immobilization process is to chemically reduce Tc(VII) in the pertechnetate (TcO{sub 4}{sup -}) to Tc(IV) by reaction with the ferrous

  14. Self-consistency tests of large-scale dynamics parameterizations for single-column modeling

    SciTech Connect (OSTI)

    Edman, Jacob P.; Romps, David M.

    2015-03-18

    Large-scale dynamics parameterizations are tested numerically in cloud-resolving simulations, including a new version of the weak-pressure-gradient approximation (WPG) introduced by Edman and Romps (2014), the weak-temperature-gradient approximation (WTG), and a prior implementation of WPG. We perform a series of self-consistency tests with each large-scale dynamics parameterization, in which we compare the result of a cloud-resolving simulation coupled to WTG or WPG with an otherwise identical simulation with prescribed large-scale convergence. In self-consistency tests based on radiative-convective equilibrium (RCE; i.e., no large-scale convergence), we find that simulations either weakly coupled or strongly coupled to either WPG or WTG are self-consistent, but WPG-coupled simulations exhibit a nonmonotonic behavior as the strength of the coupling to WPG is varied. We also perform self-consistency tests based on observed forcings from two observational campaigns: the Tropical Warm Pool International Cloud Experiment (TWP-ICE) and the ARM Southern Great Plains (SGP) Summer 1995 IOP. In these tests, we show that the new version of WPG improves upon prior versions of WPG by eliminating a potentially troublesome gravity-wave resonance.
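The WTG coupling referenced here diagnoses a large-scale vertical velocity that relaxes the column temperature toward a target profile, level by level, as w dθ/dz = (θ - θ_target)/τ. A minimal single-column sketch; the profiles, relaxation time, stability floor, and function name are illustrative assumptions, not the Edman and Romps implementation:

```python
def wtg_vertical_velocity(theta, theta_target, dtheta_dz, tau=6 * 3600.0):
    """Weak-temperature-gradient diagnostic for large-scale vertical
    velocity (m/s) at each model level:  w * dtheta/dz = (theta - theta_target) / tau.
    A warm anomaly (theta > theta_target) yields ascent, which exports the
    buoyancy anomaly -- the essence of the WTG parameterization."""
    return [(t - tt) / (tau * max(s, 1e-6))   # 1e-6 floor avoids division by zero
            for t, tt, s in zip(theta, theta_target, dtheta_dz)]
```

In a cloud-resolving self-consistency test, this w would then advect the prescribed reference profiles into the simulated column each time step.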

  15. Self-consistency tests of large-scale dynamics parameterizations for single-column modeling

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Edman, Jacob P.; Romps, David M.

    2015-03-18

    Large-scale dynamics parameterizations are tested numerically in cloud-resolving simulations, including a new version of the weak-pressure-gradient approximation (WPG) introduced by Edman and Romps (2014), the weak-temperature-gradient approximation (WTG), and a prior implementation of WPG. We perform a series of self-consistency tests with each large-scale dynamics parameterization, in which we compare the result of a cloud-resolving simulation coupled to WTG or WPG with an otherwise identical simulation with prescribed large-scale convergence. In self-consistency tests based on radiative-convective equilibrium (RCE; i.e., no large-scale convergence), we find that simulations either weakly coupled or strongly coupled to either WPG or WTG are self-consistent, but WPG-coupled simulations exhibit a nonmonotonic behavior as the strength of the coupling to WPG is varied. We also perform self-consistency tests based on observed forcings from two observational campaigns: the Tropical Warm Pool International Cloud Experiment (TWP-ICE) and the ARM Southern Great Plains (SGP) Summer 1995 IOP. In these tests, we show that the new version of WPG improves upon prior versions of WPG by eliminating a potentially troublesome gravity-wave resonance.

  16. Development of explosive event scale model testing capability at Sandia's large scale centrifuge facility

    SciTech Connect (OSTI)

    Blanchat, T.K.; Davie, N.T.; Calderone, J.J.

    1998-02-01

    Geotechnical structures such as underground bunkers, tunnels, and building foundations are subjected to stress fields produced by the gravity load on the structure and/or any overlying strata. These stress fields may be reproduced on a scaled model of the structure by proportionally increasing the gravity field through the use of a centrifuge. This technology can then be used to assess the vulnerability of various geotechnical structures to explosive loading. Applications of this technology include assessing the effectiveness of earth penetrating weapons, evaluating the vulnerability of various structures, counter-terrorism, and model validation. This document describes the development of expertise in scale model explosive testing on geotechnical structures using Sandia's large scale centrifuge facility. This study focused on buried structures such as hardened storage bunkers or tunnels. Data from this study was used to evaluate the predictive capabilities of existing hydrocodes and structural dynamics codes developed at Sandia National Laboratories (such as Pronto/SPH, Pronto/CTH, and ALEGRA). 7 refs., 50 figs., 8 tabs.

  17. PARTICLE ACCELERATION BY COLLISIONLESS SHOCKS CONTAINING LARGE-SCALE MAGNETIC-FIELD VARIATIONS

    SciTech Connect (OSTI)

    Guo, F.; Jokipii, J. R.; Kota, J. E-mail: jokipii@lpl.arizona.ed

    2010-12-10

    Diffusive shock acceleration at collisionless shocks is thought to be the source of many of the energetic particles observed in space. Large-scale spatial variations of the magnetic field have been shown to be important in understanding observations. The effects are complex, so here we consider a simple, illustrative model. Here we solve numerically the Parker transport equation for a shock in the presence of large-scale sinusoidal magnetic-field variations. We demonstrate that the familiar planar-shock results can be significantly altered as a consequence of large-scale, meandering magnetic lines of force. Because the perpendicular diffusion coefficient {kappa}{sub perpendicular} is generally much smaller than the parallel diffusion coefficient {kappa}{sub ||}, the energetic charged particles are trapped and preferentially accelerated along the shock front in the regions where the connection points of magnetic field lines intersecting the shock surface converge, and thus create the 'hot spots' of the accelerated particles. For the regions where the connection points separate from each other, the acceleration to high energies will be suppressed. Further, the particles diffuse away from the 'hot spot' regions and modify the spectra of downstream particle distribution. These features are qualitatively similar to the recent Voyager observations in the Heliosheath. These results are potentially important for particle acceleration at shocks propagating in turbulent magnetized plasmas as well as those which contain large-scale nonplanar structures. Examples include anomalous cosmic rays accelerated by the solar wind termination shock, energetic particles observed in propagating heliospheric shocks, galactic cosmic rays accelerated by supernova blast waves, etc.
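The first-order Fermi mechanism underlying diffusive shock acceleration can be illustrated with a toy Monte Carlo at a planar shock, ignoring the large-scale field-line geometry the paper studies; the velocities, per-cycle gain, escape probability, and particle count below are illustrative assumptions:

```python
import random

def dsa_spectrum(n=20000, u1=0.1, u2=0.025, v=1.0, seed=1):
    """Toy Monte Carlo of first-order Fermi (diffusive shock) acceleration.
    Per shock-crossing cycle a particle gains dp/p = (4/3)(u1 - u2)/v and
    escapes downstream with probability 4*u2/v.  Returns final momenta
    (initial p = 1); repeating cycles with a fixed gain and escape
    probability yields the classic power-law spectrum."""
    rng = random.Random(seed)
    gain = 1.0 + (4.0 / 3.0) * (u1 - u2) / v   # momentum multiplier per cycle
    p_esc = 4.0 * u2 / v                       # downstream escape probability
    momenta = []
    for _ in range(n):
        p = 1.0
        while rng.random() > p_esc:            # survive another cycle
            p *= gain
        momenta.append(p)
    return momenta
```

Counting particles above momentum p recovers an integral spectrum close to N(>p) ∝ p^(-3 u2/(u1-u2)); the paper's point is that field-line convergence at the shock modulates this baseline into "hot spots".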

  18. On the possible origin of the large scale cosmic magnetic field

    SciTech Connect (OSTI)

    Coroniti, F. V.

    2014-01-10

    The possibility that the large scale cosmic magnetic field is directly generated at microgauss, equipartition levels during the reionization epoch by collisionless shocks that are forced to satisfy a downstream shear flow boundary condition is investigated through the development of two models: the accretion of an ionized plasma onto a weakly ionized cool galactic disk and onto a cool filament of the cosmic web. The dynamical structure and the physical parameters of the models are synthesized from recent cosmological simulations of the early reionization era after the formation of the first stars. The collisionless shock stands upstream of the disk and filament, and its dissipation is determined by ion inertial length Weibel turbulence. The downstream shear boundary condition is determined by the rotational neutral gas flow in the disk and the inward accretion flow along the filament. The shocked plasma is accelerated to the downstream shear flow velocity by the Weibel turbulence, and the relative shearing motion between the electrons and ions produces a strong, ion inertial scale current sheet that generates an equipartition strength, large scale downstream magnetic field, ~10{sup -6} G for the disk and ~6 x 10{sup -8} G for the filament. By assumption, hydrodynamic turbulence transports the shear-shock generated magnetic flux throughout the disk and filament volume.

  19. Development of an integrated in-situ remediation technology. Topical report for task No. 12 and 13 entitled: Large scale field test of the Lasagna{trademark} process, September 26, 1994--May 25, 1996

    SciTech Connect (OSTI)

    Athmer, C.J.; Ho, Sa V.; Hughes, B.M.

    1997-04-01

    Contamination in low permeability soils poses a significant technical challenge to in-situ remediation efforts. Poor accessibility to the contaminants and difficulty in delivery of treatment reagents have rendered existing in-situ treatments such as bioremediation, vapor extraction, and pump and treat rather ineffective when applied to low permeability soils present at many contaminated sites. This technology is an integrated in-situ treatment in which established geotechnical methods are used to install degradation zones directly in the contaminated soil and electroosmosis is utilized to move the contaminants back and forth through those zones until the treatment is completed. This topical report summarizes the results of the field experiment conducted at the Paducah Gaseous Diffusion Plant in Paducah, KY. The test site covered 15 feet wide by 10 feet across and 15 feet deep with steel panels as electrodes and wick drains containing granular activated carbon as treatment zones. The electrodes and treatment zones were installed utilizing innovative adaptation of existing emplacement technologies. The unit was operated for four months, flushing TCE by electroosmosis from the soil into the treatment zones where it was trapped by the activated carbon. The scale up from laboratory units to this field scale was very successful with respect to electrical parameters as well as electroosmotic flow. Soil samples taken throughout the site before and after the test showed over 98% TCE removal, with most samples showing greater than 99% removal.
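The electroosmotic transport the Lasagna process relies on can be estimated with the classical Helmholtz-Smoluchowski relation, v = -ε ζ E / μ. A minimal sketch; the zeta potential, field strength, and pore-fluid properties below are illustrative values, not data from the Paducah test:

```python
def electroosmotic_velocity(zeta=-0.05, E=100.0, eps_r=78.5, mu=1.0e-3):
    """Helmholtz-Smoluchowski electroosmotic pore velocity (m/s):
        v = -eps_r * eps0 * zeta * E / mu
    zeta : zeta potential of the soil surface, V (negative for most clays)
    E    : applied electric field, V/m
    eps_r: relative permittivity of the pore water
    mu   : pore water viscosity, Pa*s"""
    eps0 = 8.854e-12                      # vacuum permittivity, F/m
    return -eps_r * eps0 * zeta * E / mu  # positive = flow toward cathode

v = electroosmotic_velocity()             # a few micrometers per second
```

At these illustrative values the velocity is of order 10^-6 m/s, i.e. tens of centimeters per day, which is the regime that makes months-long operation practical in tight soils.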

  20. Nonlinear Seismic Correlation Analysis of the JNES/NUPEC Large-Scale Piping System Tests.

    SciTech Connect (OSTI)

    Nie,J.; DeGrassi, G.; Hofmayer, C.; Ali, S.

    2008-06-01

    The Japan Nuclear Energy Safety Organization/Nuclear Power Engineering Corporation (JNES/NUPEC) large-scale piping test program has provided valuable new test data on high level seismic elasto-plastic behavior and failure modes for typical nuclear power plant piping systems. The component and piping system tests demonstrated the strain ratcheting behavior that is expected to occur when a pressurized pipe is subjected to cyclic seismic loading. Under a collaboration agreement between the US and Japan on seismic issues, the US Nuclear Regulatory Commission (NRC)/Brookhaven National Laboratory (BNL) performed a correlation analysis of the large-scale piping system tests using detailed state-of-the-art nonlinear finite element models. Techniques are introduced to develop material models that can closely match the test data. The shaking table motions are examined. The analytical results are assessed in terms of the overall system responses and the strain ratcheting behavior at an elbow. The paper concludes with the insights about the accuracy of the analytical methods for use in performance assessments of highly nonlinear piping systems under large seismic motions.

  1. Testing the big bang: Light elements, neutrinos, dark matter and large-scale structure

    SciTech Connect (OSTI)

    Schramm, D.N. (Fermi National Accelerator Lab., Batavia, IL)

    1991-06-01

    In this series of lectures, several experimental and observational tests of the standard cosmological model are examined. In particular, detailed discussion is presented regarding nucleosynthesis, the light element abundances and neutrino counting; the dark matter problems; and the formation of galaxies and large-scale structure. Comments will also be made on the possible implications of the recent solar neutrino experimental results for cosmology. An appendix briefly discusses the "17 keV thing" and the cosmological and astrophysical constraints on it. 126 refs., 8 figs., 2 tabs.

  2. Aerodynamic force measurement on a large-scale model in a short duration test facility

    SciTech Connect (OSTI)

    Tanno, H.; Kodera, M.; Komuro, T.; Sato, K.; Takahasi, M.; Itoh, K.

    2005-03-01

    A force measurement technique has been developed for large-scale aerodynamic models with a short test time. The technique is based on direct acceleration measurements, with miniature accelerometers mounted on a test model suspended by wires. Measuring acceleration at two different locations, the technique can eliminate oscillations from natural vibration of the model. The technique was used for drag force measurements on a 3 m long supersonic combustor model in the HIEST free-piston driven shock tunnel. A time resolution of 350 {mu}s is guaranteed during measurements, which is sufficient for the millisecond-order test times in HIEST. To evaluate measurement reliability and accuracy, measured values were compared with results from a three-dimensional Navier-Stokes numerical simulation. The difference between measured values and numerical simulation values was less than 5%. We conclude that this measurement technique is sufficiently reliable for measuring aerodynamic force within test durations of 1 ms.
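The two-point measurement can be sketched with rigid-body kinematics: each accelerometer reads the center-of-mass acceleration plus a rotational term, a_i = a_cm + α(x_i - x_cm), so two stations suffice to eliminate the suspension-induced oscillation. The station positions and mass below are illustrative, not the HIEST model's:

```python
def drag_from_two_accels(a1, a2, x1, x2, x_cm, mass):
    """Recover the aerodynamic force on a rigid, wire-suspended model from
    two accelerometer readings (same axis) at axial stations x1, x2:
        a_i = a_cm + alpha * (x_i - x_cm)
    Solving the pair removes the rotational (vibration) contribution."""
    alpha = (a2 - a1) / (x2 - x1)        # angular acceleration, rad/s^2
    a_cm = a1 - alpha * (x1 - x_cm)      # subtract rotation at station 1
    return mass * a_cm                   # Newton's second law
```

With synthetic data generated from a known a_cm and α, the function returns exactly mass * a_cm, which is the cancellation the abstract describes.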

  3. Calculation of large scale relative permeabilities from stochastic properties of the permeability field and fluid properties

    SciTech Connect (OSTI)

    Lenormand, R.; Thiele, M.R.

    1997-08-01

    The paper describes the method and presents preliminary results for the calculation of homogenized relative permeabilities using stochastic properties of the permeability field. In heterogeneous media, the spreading of an injected fluid is mainly due to the permeability heterogeneity and viscosity fingering. At large scale, when the heterogeneous medium is replaced by a homogeneous one, we need to introduce a homogenized (or pseudo) relative permeability to obtain the same spreading. Generally, this pseudo relative permeability is derived by using fine-grid numerical simulations (Kyte and Berry). However, this operation is time consuming and cannot be performed for all the meshes of the reservoir. We propose an alternate method which uses the information given by the stochastic properties of the field without any numerical simulation. The method is based on recent developments on homogenized transport equations (the {open_quotes}MHD{close_quotes} equation, Lenormand SPE 30797). The MHD equation accounts for the three basic mechanisms of spreading of the injected fluid: (1) Dispersive spreading due to small scale randomness, characterized by a macrodispersion coefficient D. (2) Convective spreading due to large scale heterogeneities (layers), characterized by a heterogeneity factor H. (3) Viscous fingering, characterized by an apparent viscosity ratio M. In the paper, we first derive the parameters D and H as functions of the variance and correlation length of the permeability field. The results are shown to be in good agreement with fine-grid simulations. The pseudo relative permeabilities are then derived as a function of D, H, and M. The main result is that this approach leads to a time-dependent pseudo relative permeability. Finally, the calculated pseudo relative permeabilities are compared to the values derived by history matching using fine-grid numerical simulations.
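The two field statistics the method takes as input, the variance and correlation length of the (log-)permeability field, can be estimated directly from a permeability record. A minimal 1D sketch; the integral-scale estimator (summing the autocorrelation down to its first zero crossing) is an assumed convention, not necessarily the paper's:

```python
def field_stats(logk):
    """Sample variance and integral correlation length (in grid cells) of a
    1D log-permeability record -- the two inputs needed to build the
    macrodispersion coefficient D and heterogeneity factor H."""
    n = len(logk)
    mean = sum(logk) / n
    dev = [v - mean for v in logk]
    var = sum(d * d for d in dev) / n
    corr_len = 0.5                          # half a cell for lag zero
    for lag in range(1, n):
        rho = sum(dev[i] * dev[i + lag] for i in range(n - lag)) / ((n - lag) * var)
        if rho <= 0:                        # stop at first zero crossing
            break
        corr_len += rho
    return var, corr_len
```

A strongly correlated (layered) field yields a much larger correlation length than white noise of the same variance, which is exactly what drives the convective-spreading factor H.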

  4. Re-evaluation of the 1995 Hanford Large Scale Drum Fire Test Results

    SciTech Connect (OSTI)

    Yang, J M

    2007-05-02

    A large-scale drum performance test was conducted at the Hanford Site in June 1995, in which over one hundred (100) 55-gal drums in each of two storage configurations were subjected to severe fuel pool fires. The two storage configurations in the test were pallet storage and rack storage. The description and results of the large-scale drum test at the Hanford Site were reported in WHC-SD-WM-TRP-246, "Solid Waste Drum Array Fire Performance," Rev. 0, 1995. This was one of the main references used to develop the analytical methodology to predict drum failures in WHC-SD-SQA-ANAL-501, "Fire Protection Guide for Waste Drum Storage Array," September 1996. Three drum failure modes were observed from the test reported in WHC-SD-WM-TRP-246. They consisted of seal failure, lid warping, and catastrophic lid ejection. There was no discernible failure criterion that distinguished one failure mode from another. Hence, all three failure modes were treated equally for the purpose of determining the number of failed drums. General observations from the results of the test are as follows:
    - Trash expulsion was negligible.
    - Flame impingement was identified as the main cause for failure.
    - The range of drum temperatures at failure was 600 C to 800 C. This is above the yield strength temperature for steel, approximately 540 C (1,000 F).
    - The critical heat flux required for failure is above 45 kW/m{sup 2}.
    - Fire propagation from one drum to the next was not observed.
    The statistical evaluation of the test results using, for example, the student's t-distribution, will demonstrate that the failure criteria for TRU waste drums currently employed at nuclear facilities are very conservative relative to the large-scale test results. Hence, the safety analysis utilizing the general criteria described in the five bullets above will lead to a technically robust and defensible product that bounds the potential consequences from postulated
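The kind of student's-t evaluation described above can be sketched as a one-sided lower confidence bound on the mean failure temperature. The sample values below are hypothetical (chosen inside the reported 600 C to 800 C range), and the critical value must match the chosen confidence level and n-1 degrees of freedom:

```python
import math

def lower_conf_bound(sample, t_crit):
    """One-sided lower confidence bound on the mean:
        mean - t_crit * s / sqrt(n)
    where s is the sample standard deviation.  t_crit is the student's-t
    critical value for the chosen confidence level and n-1 d.o.f."""
    n = len(sample)
    mean = sum(sample) / n
    s = math.sqrt(sum((x - mean) ** 2 for x in sample) / (n - 1))
    return mean - t_crit * s / math.sqrt(n)

# hypothetical drum failure temperatures (deg C) within the reported range
temps = [610.0, 650.0, 700.0, 720.0, 680.0, 760.0, 640.0, 790.0, 670.0, 730.0]
bound_95 = lower_conf_bound(temps, t_crit=1.833)   # t(0.95, df=9)
```

If such a bound sits well above the temperature assumed in a facility's failure criterion, the criterion is conservative relative to the test data, which is the argument the abstract makes.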

  5. Large-scale exploratory tests of sodium/limestone concrete interactions. [LMFBR]

    SciTech Connect (OSTI)

    Randich, E.; Smaardyk, J.E.; Acton, R.U.

    1983-02-01

    Eleven large-scale tests examining the interaction of molten sodium and limestone (calcite) concrete were performed. The tests typically used between 100 and 200 kg of sodium at temperatures between 723 K and 973 K and a total sodium/concrete contact area of approximately 1.0 m{sup 2}. The results show that energetic reactions can occur between sodium and limestone concrete. Delay times of less than 30 minutes were observed before the onset of the energetic phase. Not all tests exhibited energetic reactions, and the results indicate that a sodium temperature threshold of 723 K to 773 K is necessary to initiate the energetic phase. Maximum heat fluxes during the energetic phase were measured at 3.6 x 10{sup 5} J/m{sup 2}-s. Maximum penetration rates were 4 mm/min. Total concrete erosion varied from 1 to 15 cm.

  6. Testing of Large-Scale ICV Glasses with Hanford LAW Simulant

    SciTech Connect (OSTI)

    Hrma, Pavel R.; Kim, Dong-Sang; Vienna, John D.; Matyas, Josef; Smith, Donald E.; Schweiger, Michael J.; Yeager, John D.

    2005-03-01

    Preliminary glass compositions for immobilizing Hanford low-activity waste (LAW) by the in-container vitrification (ICV) process were initially fabricated at crucible- and engineering-scale, including simulants and actual (radioactive) LAW. Glasses were characterized for vapor hydration test (VHT) and product consistency test (PCT) responses and crystallinity (both quenched and slow-cooled samples). Selected glasses were tested for toxicity characteristic leach procedure (TCLP) responses, viscosity, and electrical conductivity. This testing showed that glasses with a LAW loading of 20 mass% can be made readily and meet all product constraints by a wide margin. Glasses with over 22 mass% Na2O can be made to meet all other product quality and process constraints. Large-scale testing was performed at the AMEC GeoMelt Division facility in Richland. Three tests were conducted using simulated LAW with increasing loadings of 12, 17, and 20 mass% Na2O. Glass samples were taken from the test products in a manner chosen to represent the full expected range of product performance. These samples were characterized for composition, density, crystalline and non-crystalline phase assemblage, and durability using the VHT, PCT, and TCLP tests. The results, presented in this report, show that the AMEC ICV product meets all waste form requirements with a large margin. These results provide strong evidence that Hanford LAW can be successfully vitrified by the ICV technology and can meet all the constraints related to product quality. The economic feasibility of the ICV technology can be further enhanced by subsequent optimization.

  7. Primordial Magnetic Field Effects on the CMB and Large-Scale Structure

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Yamazaki, Dai G.; Ichiki, Kiyotomo; Kajino, Toshitaka; Mathews, Grant J.

    2010-01-01

    Magnetic fields are everywhere in nature, and they play an important role in every astronomical environment which involves the formation of plasma and currents. It is natural therefore to suppose that magnetic fields could be present in the turbulent high-temperature environment of the big bang. Such a primordial magnetic field (PMF) would be expected to manifest itself in the cosmic microwave background (CMB) temperature and polarization anisotropies, and also in the formation of large-scale structure. In this paper, we summarize the theoretical framework which we have developed to calculate the PMF power spectrum to high precision. Using this formulation, we summarize calculations of the effects of a PMF which take accurate quantitative account of the time evolution of the cutoff scale. We review the constructed numerical program, which is without approximation, and an improvement over the approach used in a number of previous works for studying the effect of the PMF on the cosmological perturbations. We demonstrate how the PMF is an important cosmological physical process on small scales. We also summarize the current constraints on the PMF amplitude B{sub λ} and the power spectral index n{sub B} which have been deduced from the available CMB observational data by using our computational framework.

  8. Aerosols released during large-scale integral MCCI tests in the ACE Program

    SciTech Connect (OSTI)

    Fink, J.K.; Thompson, D.H.; Spencer, B.W.; Sehgal, B.R.

    1992-04-01

    As part of the internationally sponsored Advanced Containment Experiments (ACE) program, seven large-scale experiments on molten core concrete interactions (MCCIs) have been performed at Argonne National Laboratory. One of the objectives of these experiments is to collect and characterize all the aerosols released from the MCCIs. Aerosols released from experiments using four types of concrete (siliceous, limestone/common sand, serpentine, and limestone/limestone) and a range of metal oxidation for both BWR and PWR reactor core material have been collected and characterized. Release fractions were determined for UO{sub 2}, Zr, the fission products BaO, SrO, La{sub 2}O{sub 3}, CeO{sub 2}, MoO{sub 2}, Te, and Ru, and the control materials Ag, In, and B{sub 4}C. Release fractions of UO{sub 2} and the fission products other than Te were small in all tests. However, release of control materials was significant.

  9. Aerosols released during large-scale integral MCCI tests in the ACE Program

    SciTech Connect (OSTI)

    Fink, J.K.; Thompson, D.H.; Spencer, B.W.; Sehgal, B.R.

    1992-01-01

    As part of the internationally sponsored Advanced Containment Experiments (ACE) program, seven large-scale experiments on molten core concrete interactions (MCCIs) have been performed at Argonne National Laboratory. One of the objectives of these experiments is to collect and characterize all the aerosols released from the MCCIs. Aerosols released from experiments using four types of concrete (siliceous, limestone/common sand, serpentine, and limestone/limestone) and a range of metal oxidation for both BWR and PWR reactor core material have been collected and characterized. Release fractions were determined for UO{sub 2}, Zr, the fission products BaO, SrO, La{sub 2}O{sub 3}, CeO{sub 2}, MoO{sub 2}, Te, and Ru, and the control materials Ag, In, and B{sub 4}C. Release fractions of UO{sub 2} and the fission products other than Te were small in all tests. However, release of control materials was significant.

  10. Anisotropic diffusion across an external magnetic field and large-scale fluctuations in magnetized plasmas

    SciTech Connect (OSTI)

    Holod, I.; Weiland, J.; Zagorodny, A.

    2005-04-01

    The problem of random motion of charged particles in an external magnetic field is studied under the assumption that the Langevin sources produce anisotropic diffusion in velocity space and the friction force is dependent on the direction of particle motion. It is shown that in the case under consideration, the kinetic equation describing particle transitions in phase space is reduced to the equation with a Fokker-Planck collision term in the general form (nonisotropic friction coefficient and nonzero off-diagonal elements of the diffusion tensor in the velocity space). The solution of such an equation has been obtained and the explicit form of the transition probability is found. Using the obtained transition probability, the mean-square particle displacements in configuration and velocity space were calculated and compared with the results of numerical simulations, showing good agreement. The obtained results are used to generalize the theory of large-scale fluctuations in plasmas to the case of anisotropic diffusion across an external magnetic field. Such diffusion is expected to be observed in the case of an anisotropic k spectrum of fluctuations generating random particle motion (for example, in the case of drift-wave turbulence).
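
    The numerical-simulation side of such a comparison can be sketched as a toy Langevin (Euler-Maruyama) integration of charged-particle motion with anisotropic friction and velocity-space diffusion. All coefficients below are illustrative assumptions, not values from the paper:

```python
import math
import random

# Langevin sketch: charged particle in a uniform magnetic field (along z)
# with direction-dependent friction and anisotropic velocity-space diffusion.
# Dimensionless units; the coefficient values are illustrative assumptions.
random.seed(7)
omega_c = 1.0            # cyclotron frequency
nu_x, nu_y = 0.2, 0.4    # anisotropic friction coefficients
d_x, d_y = 0.05, 0.20    # anisotropic velocity-space diffusion coefficients
dt, n_steps, n_particles = 0.01, 2000, 100

msd_vx = msd_vy = 0.0
for _ in range(n_particles):
    vx = vy = 0.0
    for _ in range(n_steps):
        # Euler-Maruyama step: Lorentz rotation + friction + random kick
        ax = omega_c * vy - nu_x * vx
        ay = -omega_c * vx - nu_y * vy
        vx += ax * dt + math.sqrt(2.0 * d_x * dt) * random.gauss(0.0, 1.0)
        vy += ay * dt + math.sqrt(2.0 * d_y * dt) * random.gauss(0.0, 1.0)
    msd_vx += vx * vx
    msd_vy += vy * vy

msd_vx /= n_particles
msd_vy /= n_particles
print(f"stationary <vx^2> = {msd_vx:.3f}, <vy^2> = {msd_vy:.3f}")
```

    Ensemble-averaged velocity variances from such runs are what one would compare against the analytical transition probability.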

  11. The Lagrangian-space Effective Field Theory of large scale structures

    SciTech Connect (OSTI)

    Porto, Rafael A.; Zaldarriaga, Matias; Senatore, Leonardo E-mail: senatore@stanford.edu

    2014-05-01

    We introduce a Lagrangian-space Effective Field Theory (LEFT) formalism for the study of cosmological large scale structures. Unlike the previous Eulerian-space construction, it is naturally formulated as an effective field theory of extended objects in Lagrangian space. In LEFT the resulting finite size effects are described using a multipole expansion parameterized by a set of time dependent coefficients and organized in powers of the ratio of the wavenumber of interest k over the non-linear scale k{sub NL}. The multipoles encode the effects of the short distance modes on the long-wavelength universe and absorb UV divergences when present. There are no IR divergences in LEFT. Some of the parameters that control the perturbative approach are not assumed to be small and can be automatically resummed. We present an illustrative one-loop calculation for a power law universe. We describe the dynamics both at the level of the equations of motion and through an action formalism.

  12. Large-Scale Test of Dynamic Correlation Processors: Implications for Correlation-Based Seismic Pipelines

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Dodge, D. A.; Harris, D. B.

    2016-03-15

    Correlation detectors are of considerable interest to the seismic monitoring communities because they offer reduced detection thresholds and combine detection, location and identification functions into a single operation. They appear to be ideal for applications requiring screening of frequent repeating events. However, questions remain about how broadly empirical correlation methods are applicable. We describe the effectiveness of banks of correlation detectors in a system that combines traditional power detectors with correlation detectors in terms of efficiency, which we define to be the fraction of events detected by the correlators. This paper elaborates and extends the concept of a dynamic correlation detection framework – a system which autonomously creates correlation detectors from event waveforms detected by power detectors; and reports observed performance on a network of arrays in terms of efficiency. We performed a large scale test of dynamic correlation processors on an 11 terabyte global dataset using 25 arrays in the single frequency band 1-3 Hz. The system found over 3.2 million unique signals and produced 459,747 screened detections. A very satisfying result is that, on average, efficiency grows with time and, after nearly 16 years of operation, exceeds 47% for events observed over all distance ranges and approaches 70% for near regional and 90% for local events. This observation suggests that future pipeline architectures should make extensive use of correlation detectors, principally for decluttering observations of local and near-regional events. Our results also suggest that future operations based on correlation detection will require commodity large-scale computing infrastructure, since the numbers of correlators in an autonomous system can grow into the hundreds of thousands.
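
    At its core, each correlator in such a bank slides a normalized cross-correlation of an event template along the continuous data stream and declares a detection above a threshold. The sketch below uses synthetic data and an arbitrary threshold; real pipelines normalize per channel and stack over array elements:

```python
import numpy as np

def correlation_detector(stream, template, threshold=0.7):
    """Return (offset, correlation) pairs where the normalized
    cross-correlation of the template with the stream exceeds threshold.
    Minimal single-channel sketch of the detector concept."""
    m = len(template)
    t = template - template.mean()
    t_norm = np.linalg.norm(t)
    detections = []
    for i in range(len(stream) - m + 1):
        w = stream[i:i + m]
        wc = w - w.mean()
        denom = np.linalg.norm(wc) * t_norm
        if denom == 0.0:
            continue
        cc = float(np.dot(wc, t)) / denom
        if cc >= threshold:
            detections.append((i, cc))
    return detections

# Synthetic example: embed a scaled copy of the template in noise.
rng = np.random.default_rng(0)
template = np.sin(np.linspace(0.0, 6.0 * np.pi, 100))
stream = 0.1 * rng.standard_normal(2000)
stream[500:600] += 0.8 * template   # a "repeating event" at offset 500
hits = correlation_detector(stream, template, threshold=0.7)
print(hits[:3])
```

    A dynamic framework of the kind described above would create such templates automatically from waveforms flagged by power detectors.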

  13. Lotung large-scale seismic test strong motion records. Volume 1, General description: Final report

    SciTech Connect (OSTI)

    Not Available

    1992-03-01

    The Electric Power Research Institute (EPRI), in cooperation with the Taiwan Power Company (TPC), constructed two models (1/4 scale and 1/12 scale) of a nuclear plant concrete containment structure at a seismically active site in Lotung, Taiwan. Extensive instrumentation was deployed to record both structural and ground responses during earthquakes. The experiment, generally referred to as the Lotung Large-Scale Seismic Test (LSST), was used to gather data for soil-structure interaction (SSI) analysis method evaluation and validation as well as for site ground response investigation. A number of earthquakes having local magnitudes ranging from 4.5 to 7.0 have been recorded at the LSST site since the completion of the test facility in September 1985. This report documents the earthquake data, both raw and processed, collected from the LSST experiment. Volume 1 of the report provides general information on site location, instrument types and layout, data acquisition and processing, and data file organization. The recorded data are described chronologically in subsequent volumes of the report.

  14. Test of the CLAS12 RICH large-scale prototype in the direct proximity focusing configuration

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Anefalos Pereira, S.; Baltzell, N.; Barion, L.; Benmokhtar, F.; Brooks, W.; Cisbani, E.; Contalbrigo, M.; El Alaoui, A.; Hafidi, K.; Hoek, M.; et al

    2016-02-11

    A large area ring-imaging Cherenkov detector has been designed to provide clean hadron identification capability in the momentum range from 3 GeV/c up to 8 GeV/c for the CLAS12 experiments at the upgraded 12 GeV continuous electron beam accelerator facility of Jefferson Laboratory. The adopted solution foresees a novel hybrid optics design based on aerogel radiator, composite mirrors and high-packed and high-segmented photon detectors. Cherenkov light will either be imaged directly (forward tracks) or after two mirror reflections (large angle tracks). We report here the results of the tests of a large scale prototype of the RICH detector performed with the hadron beam of the CERN T9 experimental hall for the direct detection configuration. As a result, the tests demonstrated that the proposed design provides the required pion-to-kaon rejection factor of 1:500 in the whole momentum range.

  15. Simplified field-in-field technique for a large-scale implementation in breast radiation treatment

    SciTech Connect (OSTI)

    Fournier-Bidoz, Nathalie; Kirova, Youlia M.; Campana, Francois; Dendale, Remi; Fourquet, Alain

    2012-07-01

    We wanted to evaluate a simplified 'field-in-field' technique (SFF) that was implemented in our department of Radiation Oncology for breast treatment. This study evaluated 15 consecutive patients treated with SFF after breast-conserving surgery for early-stage breast cancer. Radiotherapy consisted of whole-breast irradiation to the total dose of 50 Gy in 25 fractions, and a boost of 16 Gy in 8 fractions to the tumor bed. We compared dosimetric outcomes of SFF to state-of-the-art electronic surface compensation (ESC) with dynamic leaves. An analysis of early skin toxicity of a population of 15 patients was performed. The median volume receiving at least 95% of the prescribed dose was 763 mL (range, 347-1472) for SFF vs. 779 mL (range, 349-1494) for ESC. The median residual 107% isodose was 0.1 mL (range, 0-63) for SFF and 1.9 mL (range, 0-57) for ESC. Monitor units were on average 25% higher in ESC plans compared with SFF. No patient treated with SFF had acute side effects exceeding grade 1 on the NCI scale. SFF created homogenous 3D dose distributions equivalent to electronic surface compensation with dynamic leaves. It allowed the integration of a forward planned concomitant tumor bed boost as an additional multileaf collimator subfield of the tangential fields. Compared with electronic surface compensation with dynamic leaves, shorter treatment times allowed better radiation protection to the patient. Low-grade acute toxicity evaluated weekly during treatment and 2 months after treatment completion justified the pursuit of this technique for all breast patients in our department.

  16. Large scale magnetic fields and coherent structures in nonuniform unmagnetized plasma

    SciTech Connect (OSTI)

    Jucker, Martin; Andrushchenko, Zhanna N.; Pavlenko, Vladimir P.

    2006-07-15

    The properties of streamers and zonal magnetic structures in magnetic electron drift mode turbulence are investigated. The stability of such large scale structures is investigated in the kinetic and the hydrodynamic regime, for which an instability criterion similar to the Lighthill criterion for modulational instability is found. Furthermore, these large scale flows can undergo further nonlinear evolution after initial linear growth, which can lead to the formation of long-lived coherent structures consisting of self-bound wave packets between the surfaces of two different flow velocities with an expected modification of the anomalous electron transport properties.

  17. LyMAS: Predicting large-scale Lyα forest statistics from the dark matter density field

    SciTech Connect (OSTI)

    Peirani, Sébastien; Colombi, Stéphane; Dubois, Yohan; Pichon, Christophe; Weinberg, David H.; Blaizot, Jérémy

    2014-03-20

    We describe the Lyα Mass Association Scheme (LyMAS), a method of predicting clustering statistics in the Lyα forest on large scales from moderate-resolution simulations of the dark matter (DM) distribution, with calibration from high-resolution hydrodynamic simulations of smaller volumes. We use the 'Horizon-MareNostrum' simulation, a 50 h{sup -1} Mpc comoving volume evolved with the adaptive mesh hydrodynamic code RAMSES, to compute the conditional probability distribution P(F{sub s}|δ{sub s}) of the transmitted flux F{sub s}, smoothed (one-dimensionally, 1D) over the spectral resolution scale, on the DM density contrast δ{sub s}, smoothed (three-dimensionally, 3D) over a similar scale. In this study we adopt the spectral resolution of the SDSS-III Baryon Oscillation Spectroscopic Survey (BOSS) at z = 2.5, and we find optimal results for a DM smoothing length σ = 0.3 h{sup -1} Mpc (comoving). In its simplest form, LyMAS draws randomly from the hydro-calibrated P(F{sub s}|δ{sub s}) to convert DM skewers into Lyα forest pseudo-spectra, which are then used to compute cross-sightline flux statistics. In extended form, LyMAS exactly reproduces both the 1D power spectrum and one-point flux distribution of the hydro simulation spectra. Applied to the MareNostrum DM field, LyMAS accurately predicts the two-point conditional flux distribution and flux correlation function of the full hydro simulation for transverse sightline separations as small as 1 h{sup -1} Mpc, including redshift-space distortion effects. It is substantially more accurate than a deterministic density-flux mapping (''Fluctuating Gunn-Peterson Approximation''), often used for large-volume simulations of the forest. With the MareNostrum calibration, we apply LyMAS to 1024{sup 3} N-body simulations of a 300 h{sup -1} Mpc and 1.0 h{sup -1} Gpc cube to produce large, publicly available catalogs of mock BOSS spectra that probe a large comoving volume. LyMAS will be a powerful tool for
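
    The "simplest form" of the scheme, drawing flux randomly from a calibrated conditional distribution given the smoothed DM density, can be sketched with a toy stand-in for that calibration. The FGPA-like functional form and every coefficient below are illustrative assumptions, not the hydro-derived distribution:

```python
import numpy as np

rng = np.random.default_rng(42)

def draw_flux(delta_s, rng):
    """Toy stand-in for the hydro-calibrated conditional distribution
    P(F_s | delta_s): mean transmitted flux decreases with density
    (denser gas absorbs more), with lognormal-like scatter.  The real
    scheme tabulates P(F_s | delta_s) from a hydro simulation."""
    rho = np.clip(1.0 + delta_s, 1e-3, None)       # guard against delta < -1
    tau = 0.3 * rho ** 1.6                          # FGPA-like optical depth (toy)
    scatter = rng.normal(0.0, 0.2, size=np.shape(delta_s))
    return np.clip(np.exp(-tau * np.exp(scatter)), 0.0, 1.0)

# A mock DM skewer: smoothed density contrasts along one line of sight.
delta_skewer = rng.normal(0.0, 0.8, size=500)
flux_skewer = draw_flux(delta_skewer, rng)
print(flux_skewer[:5])
```

    Many such pseudo-spectra, drawn skewer by skewer, are what feed the cross-sightline flux statistics.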

  18. Active and passive acoustic imaging inside a large-scale polyaxial hydraulic fracture test

    SciTech Connect (OSTI)

    Glaser, S.D.; Dudley, J.W. II; Shlyapobersky, J.

    1999-07-01

    An automated laboratory hydraulic fracture experiment has been assembled to determine what rock and treatment parameters are crucial to improving the efficiency and effectiveness of field hydraulic fractures. To this end a large (460 mm cubic sample) polyaxial cell, with servo-controlled X,Y,Z, pore pressure, crack-mouth-opening-displacement, and bottom hole pressure, was built. Active imaging with embedded seismic diffraction arrays images the geometry of the fracture. Preliminary tests indicate fracture extent can be imaged to within 5%. Unique embeddible high-fidelity particle velocity AE sensors were designed and calibrated to allow determination of fracture source kinematics.

  19. A large scale coherent magnetic field: interactions with free streaming particles and limits from the CMB

    SciTech Connect (OSTI)

    Adamek, Julian; Durrer, Ruth; Fenu, Elisa; Vonlanthen, Marc E-mail: ruth.durrer@unige.ch E-mail: marc.vonlanthen@unige.ch

    2011-06-01

    We study a homogeneous and nearly-isotropic Universe permeated by a homogeneous magnetic field. Together with an isotropic fluid, the homogeneous magnetic field, which is the primary source of anisotropy, leads to a plane-symmetric Bianchi I model of the Universe. However, when free-streaming relativistic particles are present, they generate an anisotropic pressure which counteracts the one from the magnetic field such that the Universe becomes isotropized. We show that due to this effect, the CMB temperature anisotropy from a homogeneous magnetic field is significantly suppressed if the neutrino masses are smaller than 0.3 eV.

  20. VP 100: New Facility in Boston to Test Large-Scale Wind Blades

    Broader source: Energy.gov [DOE]

    Thanks in part to funding from the Recovery Act, the Wind Technology Testing Center in Massachusetts will be first in the U.S. to test wind turbine blades up to 300 feet in length -- creating 300 construction jobs and 30 permanent design jobs in the process.

  1. Running Large Scale Jobs

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Running Large Scale Jobs Running Large Scale Jobs Users face various challenges with running and scaling large scale jobs on peta-scale production systems. For example, certain applications may not have enough memory per core, the default environment variables may need to be adjusted, or I/O may dominate run time. This page lists some available programming and run time tuning options and tips users can try on their large scale applications on Hopper for better performance. Try different compilers

  2. Running Large Scale Jobs

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    try on their large scale applications on Hopper for better performance. Try different compilers and compiler options The available compilers on Hopper are PGI, Cray, Intel, GNU,...

  3. LARGE-SCALE MERCURY CONTROL TECHNOLOGY TESTING FOR LIGNITE-FIRED UTILITIES-OXIDATION SYSTEMS FOR WET FGD

    SciTech Connect (OSTI)

    Michael J. Holmes; Steven A. Benson; Jeffrey S. Thompson

    2004-03-01

    The Energy & Environmental Research Center (EERC) is conducting a consortium-based effort directed toward resolving the mercury (Hg) control issues facing the lignite industry. Specifically, the EERC team--the EERC, EPRI, URS, ADA-ES, Babcock & Wilcox, the North Dakota Industrial Commission, SaskPower, and the Mercury Task Force, which includes Basin Electric Power Cooperative, Otter Tail Power Company, Great River Energy, Texas Utilities (TXU), Montana-Dakota Utilities Co., Minnkota Power Cooperative, BNI Coal Ltd., Dakota Westmoreland Corporation, and the North American Coal Company--has undertaken a project to significantly and cost-effectively oxidize elemental mercury in lignite combustion gases, followed by capture in a wet scrubber. This approach will be applicable to virtually every lignite utility in the United States and Canada and potentially impact subbituminous utilities. The oxidation process is proven at the pilot-scale and in short-term full-scale tests. Additional optimization is continuing on oxidation technologies, and this project focuses on longer-term full-scale testing. The lignite industry has been proactive in advancing the understanding of and identifying control options for Hg in lignite combustion flue gases. Approximately 1 year ago, the EERC and EPRI began a series of Hg-related discussions with the Mercury Task Force as well as utilities firing Texas and Saskatchewan lignites. This project is one of three being undertaken by the consortium to perform large-scale Hg control technology testing to address the specific needs and challenges to be met in controlling Hg from lignite-fired power plants. This project involves Hg oxidation upstream of a system equipped with an electrostatic precipitator (ESP) followed by wet flue gas desulfurization (FGD). The team involved in conducting the technical aspects of the project includes the EERC, Babcock & Wilcox, URS, and ADA-ES. The host sites include Minnkota Power Cooperative Milton R. Young

  4. Data Analysis, Pre-Ignition Assessment, and Post-Ignition Modeling of the Large-Scale Annular Cookoff Tests

    SciTech Connect (OSTI)

    G. Terrones; F.J. Souto; R.F. Shea; M.W. Burkett; E.S. Idar

    2005-09-30

    In order to understand the implications that cookoff of plastic-bonded explosive PBX 9501 could have on safety assessments, we analyzed the available data from the large-scale annular cookoff (LSAC) assembly series of experiments. In addition, we examined recent data regarding hypotheses about pre-ignition that may be relevant to post-ignition behavior. Based on the post-ignition data from Shot 6, which had the most complete set of data, we developed an approximate equation of state (EOS) for the gaseous products of deflagration. Implementation of this EOS into the multimaterial hydrodynamics computer program PAGOSA yielded good agreement with the inner-liner collapse sequence for Shot 6 and with other data, such as velocity interferometer system for any reflector (VISAR) and resistance-wire measurements. A metric to establish the degree of symmetry, based on the concept of time of arrival to pin locations, was used to compare numerical simulations with experimental data. Several simulations were performed to elucidate the mode of ignition in the LSAC and to determine the possible compression levels that the metal assembly could have been subjected to during post-ignition.

  5. NONLINEAR FORCE-FREE FIELD EXTRAPOLATION OF A CORONAL MAGNETIC FLUX ROPE SUPPORTING A LARGE-SCALE SOLAR FILAMENT FROM A PHOTOSPHERIC VECTOR MAGNETOGRAM

    SciTech Connect (OSTI)

    Jiang, Chaowei; Wu, S. T.; Hu, Qiang; Feng, Xueshang E-mail: wus@uah.edu E-mail: fengx@spaceweather.ac.cn

    2014-05-10

    Solar filaments are commonly thought to be supported in magnetic dips, in particular, in those of magnetic flux ropes (FRs). In this Letter, based on the observed photospheric vector magnetogram, we implement a nonlinear force-free field (NLFFF) extrapolation of a coronal magnetic FR that supports a large-scale intermediate filament between an active region and a weak polarity region. This result is a first, in the sense that current NLFFF extrapolations including the presence of FRs are limited to relatively small-scale filaments that are close to sunspots and along main polarity inversion lines (PILs) with strong transverse field and magnetic shear, and the existence of an FR is usually predictable. In contrast, the present filament lies along the weak-field region (photospheric field strength ≈100 G), where the PIL is very fragmented due to small parasitic polarities on both sides of the PIL and the transverse field has a low signal-to-noise ratio. Thus, extrapolating a large-scale FR in such a case represents a far more difficult challenge. We demonstrate that our CESE-MHD-NLFFF code is sufficient for the challenge. The numerically reproduced magnetic dips of the extrapolated FR match observations of the filament and its barbs very well, which strongly supports the FR-dip model for filaments. The filament is stably sustained because the FR is weakly twisted and strongly confined by the overlying closed arcades.

  6. Proceedings of the Joint IAEA/CSNI Specialists' Meeting on Fracture Mechanics Verification by Large-Scale Testing held at Pollard Auditorium, Oak Ridge, Tennessee

    SciTech Connect (OSTI)

    Pugh, C.E.; Bass, B.R.; Keeney, J.A.

    1993-10-01

    This report contains 40 papers that were presented at the Joint IAEA/CSNI Specialists' Meeting on Fracture Mechanics Verification by Large-Scale Testing held at the Pollard Auditorium, Oak Ridge, Tennessee, during the week of October 26--29, 1992. The papers are printed in the order of their presentation in each session and describe recent large-scale fracture (brittle and/or ductile) experiments, analyses of these experiments, and comparisons between predictions and experimental results. The goal of the meeting was to allow international experts to examine the fracture behavior of various materials and structures under conditions relevant to nuclear reactor components and operating environments. The emphasis was on the ability of various fracture models and analysis methods to predict the wide range of experimental data now available. The individual papers have been cataloged separately.

  7. Effects of the scatter in sunspot group tilt angles on the large-scale magnetic field at the solar surface

    SciTech Connect (OSTI)

    Jiang, J.; Cameron, R. H.; Schüssler, M.

    2014-08-10

    The tilt angles of sunspot groups represent the poloidal field source in Babcock-Leighton-type models of the solar dynamo and are crucial for the build-up and reversals of the polar fields in surface flux transport (SFT) simulations. The evolution of the polar field is a consequence of Hale's polarity rules, together with the tilt angle distribution which has a systematic component (Joy's law) and a random component (tilt-angle scatter). We determine the scatter using the observed tilt angle data and study the effects of this scatter on the evolution of the solar surface field using SFT simulations with flux input based upon the recorded sunspot groups. The tilt angle scatter is described in our simulations by a random component according to the observed distributions for different ranges of sunspot group size (total umbral area). By performing simulations with a number of different realizations of the scatter we study the effect of the tilt angle scatter on the global magnetic field, especially on the evolution of the axial dipole moment. The average axial dipole moment at the end of cycle 17 (a medium-amplitude cycle) from our simulations was 2.73 G. The tilt angle scatter leads to an uncertainty of 0.78 G (standard deviation). We also considered cycle 14 (a weak cycle) and cycle 19 (a strong cycle) and show that the standard deviation of the axial dipole moment is similar for all three cycles. The uncertainty mainly results from the big sunspot groups which emerge near the equator. In the framework of Babcock-Leighton dynamo models, the tilt angle scatter therefore constitutes a significant random factor in the cycle-to-cycle amplitude variability, which strongly limits the predictability of solar activity.
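
    The way tilt-angle scatter feeds through to a spread in the end-of-cycle axial dipole can be illustrated with a toy Babcock-Leighton Monte Carlo. Every flux value, latitude distribution, and coefficient below is invented for illustration; real surface flux transport simulations evolve the full surface field:

```python
import math
import random

def end_of_cycle_dipole(emergences, tilt_scatter_deg, rng):
    """Toy Babcock-Leighton estimate: each bipolar region contributes to
    the axial dipole in proportion to its flux times sin(tilt), with
    tilt = Joy's law mean + Gaussian scatter.  Illustrative sketch only."""
    d = 0.0
    for flux, latitude_deg in emergences:
        mean_tilt = 32.0 * math.sin(math.radians(latitude_deg))  # Joy's law (toy coefficient)
        tilt = rng.gauss(mean_tilt, tilt_scatter_deg)
        d += flux * math.sin(math.radians(tilt)) * math.cos(math.radians(latitude_deg))
    return d

# Hypothetical cycle: 300 bipolar regions with random fluxes (arbitrary
# units) and emergence latitudes.
rng = random.Random(1)
emergences = [(rng.uniform(0.5, 2.0), rng.uniform(2.0, 30.0)) for _ in range(300)]

# Many realizations of the tilt scatter give the spread in the final dipole.
dipoles = [end_of_cycle_dipole(emergences, tilt_scatter_deg=15.0, rng=rng)
           for _ in range(200)]
mean_d = sum(dipoles) / len(dipoles)
std_d = (sum((x - mean_d) ** 2 for x in dipoles) / (len(dipoles) - 1)) ** 0.5
print(f"axial dipole: mean = {mean_d:.1f}, scatter (std) = {std_d:.1f} (arbitrary units)")
```

    The standard deviation across realizations plays the same role as the 0.78 G uncertainty quoted above for cycle 17.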

  8. Results of Large-Scale Testing on Effects of Anti-Foam Agent on Gas Retention and Release

    SciTech Connect (OSTI)

    Stewart, Charles W.; Guzman-Leong, Consuelo E.; Arm, Stuart T.; Butcher, Mark G.; Golovich, Elizabeth C.; Jagoda, Lynette K.; Park, Walter R.; Slaugh, Ryan W.; Su, Yin-Fong; Wend, Christopher F.; Mahoney, Lenna A.; Alzheimer, James M.; Bailey, Jeffrey A.; Cooley, Scott K.; Hurley, David E.; Johnson, Christian D.; Reid, Larry D.; Smith, Harry D.; Wells, Beric E.; Yokuda, Satoru T.

    2008-01-03

    The U.S. Department of Energy (DOE) Office of River Protection's Waste Treatment Plant (WTP) will process and treat radioactive waste that is stored in tanks at the Hanford Site. The waste treatment process in the pretreatment facility will mix both Newtonian and non-Newtonian slurries in large process tanks. Process vessels mixing non-Newtonian slurries will use pulse jet mixers (PJMs), air sparging, and recirculation pumps. An anti-foam agent (AFA) will be added to the process streams to prevent surface foaming, but may also increase gas holdup and retention within the slurry. The work described in this report addresses gas retention and release in simulants with AFA through testing and analytical studies. Gas holdup and release tests were conducted in a 1/4-scale replica of the lag storage vessel operated in the Pacific Northwest National Laboratory (PNNL) Applied Process Engineering Laboratory using a kaolin/bentonite clay and AZ-101 HLW chemical simulant with non-Newtonian rheological properties representative of actual waste slurries. Additional tests were performed in a small-scale mixing vessel in the PNNL Physical Sciences Building using liquids and slurries representing major components of typical WTP waste streams. Analytical studies were directed at discovering how the effect of AFA might depend on gas composition and predicting the effect of AFA on gas retention and release in the full-scale plant, including the effects of mass transfer to the sparge air. The work at PNNL was part of a larger program that included tests conducted at Savannah River National Laboratory (SRNL), which are being reported separately. SRNL conducted gas holdup tests in a small-scale mixing vessel using the AZ-101 high-level waste (HLW) chemical simulant to investigate the effects of different AFAs, their components, and of adding noble metals. Full-scale, single-sparger mass transfer tests were also conducted at SRNL in water and AZ-101 HLW simulant to provide data for PNNL

  9. The effect of the geomagnetic field on cosmic ray energy estimates and large scale anisotropy searches on data from the Pierre Auger Observatory

    SciTech Connect (OSTI)

    Abreu, P.; Aglietta, M.; Ahn, E.J.; Albuquerque, I.F.M.; Allard, D.; Allekotte, I.; Allen, J.; Allison, P.; Alvarez Castillo, J.; Alvarez-Muniz, J.; Ambrosio, M.; et al.

    2011-11-01

    We present a comprehensive study of the influence of the geomagnetic field on the energy estimation of extensive air showers with a zenith angle smaller than 60°, detected at the Pierre Auger Observatory. The geomagnetic field induces an azimuthal modulation of the estimated energy of cosmic rays up to the ~2% level at large zenith angles. We present a method to account for this modulation of the reconstructed energy. We analyse the effect of the modulation on large scale anisotropy searches in the arrival direction distributions of cosmic rays. At a given energy, the geomagnetic effect is shown to induce a pseudo-dipolar pattern at the percent level in the declination distribution that needs to be accounted for. In this work, we have identified and quantified a systematic uncertainty affecting the energy determination of cosmic rays detected by the surface detector array of the Pierre Auger Observatory. This systematic uncertainty, induced by the influence of the geomagnetic field on the shower development, has a strength which depends on both the zenith and azimuthal angles. Consequently, we have shown that it induces distortions of the estimated cosmic ray event rate at a given energy at the percent level in both the azimuthal and declination distributions, the latter of which mimics an almost dipolar pattern. We have also shown that the induced distortions are already at the level of the statistical uncertainties for a number of events N ≈ 32,000 (we note that the full Auger surface detector array collects about 6500 events per year with energies above 3 EeV). Accounting for these effects is thus essential for the correct interpretation of large scale anisotropy measurements that explicitly exploit the declination distribution.

  10. Effects of band-limited white-noise excitation on liquefaction potential in large-scale tests. Master's thesis

    SciTech Connect (OSTI)

    Jasinski, D.L.

    1987-01-01

    During earthquakes, ground movement can cause soils to lose strength or stiffness, resulting in structures settling and embankments sliding. A phenomenon contributing to this loss in strength and subsequent failures is called soil liquefaction. This term, however, does not refer to a single well-defined event, but rather to a complex set of interrelated phenomena which contribute to the occurrence of damage and failures during an earthquake. Numerous investigators have tried to model and predict the potential and probability of liquefaction occurring in soils. Since the early 1960s considerable attention has been given to the development of laboratory testing procedures to provide improved methods of characterizing the liquefaction properties of soils. Various test apparatuses have been designed or modified in an attempt to provide an accurate representation of the stress state generated in-situ by earthquakes. To this end, a number of experimental devices that provide repeatable representations of in-situ conditions during an actual earthquake, including the cyclic triaxial and cyclic simple shear apparatuses, are discussed.

  11. Large scale tracking algorithms.

    SciTech Connect (OSTI)

    Hansen, Ross L.; Love, Joshua Alan; Melgaard, David Kennett; Karelitz, David B.; Pitts, Todd Alan; Zollweg, Joshua David; Anderson, Dylan Z.; Nandy, Prabal; Whitlow, Gary L.; Bender, Daniel A.; Byrne, Raymond Harry

    2015-01-01

    Low signal-to-noise data processing algorithms for improved detection, tracking, discrimination and situational threat assessment are a key research challenge. As sensor technologies progress, the number of pixels will increase significantly. This will result in increased resolution, which could improve object discrimination, but unfortunately, will also result in a significant increase in the number of potential targets to track. Many tracking techniques, like multi-hypothesis trackers, suffer from a combinatorial explosion as the number of potential targets increases. As the resolution increases, the phenomenology applied towards detection algorithms also changes. For low resolution sensors, "blob" tracking is the norm. For higher resolution data, additional information may be employed in the detection and classification steps. The most challenging scenarios are those where the targets cannot be fully resolved, yet must be tracked and distinguished from neighboring closely spaced objects. Tracking vehicles in an urban environment is an example of such a challenging scenario. This report evaluates several potential tracking algorithms for large-scale tracking in an urban environment.
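
    The combinatorial explosion noted above can be made concrete. The count below is an illustrative back-of-the-envelope model, not an algorithm from the report: it enumerates one-to-one assignments of measurements to tracks (allowing missed detections) for a single scan, which is the hypothesis space a naive multi-hypothesis tracker would have to consider.

```python
from math import comb, factorial

def assignment_hypotheses(n_tracks: int, n_measurements: int) -> int:
    """Count one-to-one track-to-measurement assignments, allowing misses.

    Sum over k = number of tracks that receive a measurement:
    choose k tracks, choose k measurements, then match them in k! ways.
    """
    return sum(
        comb(n_tracks, k) * comb(n_measurements, k) * factorial(k)
        for k in range(min(n_tracks, n_measurements) + 1)
    )

# Hypothesis count for a single scan as the target count grows.
for n in (2, 5, 10, 15):
    print(n, assignment_hypotheses(n, n))
```

Even before considering multiple scans, the single-scan count grows faster than exponentially, which is why practical trackers must prune or gate hypotheses aggressively.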

  12. Large-Scale Testing of Effects of Anti-Foam Agent on Gas Holdup in Process Vessels in the Hanford Waste Treatment Plant - 8280

    SciTech Connect (OSTI)

    Mahoney, Lenna A.; Alzheimer, James M.; Arm, Stuart T.; Guzman-Leong, Consuelo E.; Jagoda, Lynette K.; Stewart, Charles W.; Wells, Beric E.; Yokuda, Satoru T.

    2008-06-03

    The Hanford Waste Treatment Plant (WTP) will vitrify the radioactive wastes stored in underground tanks. These wastes generate and retain hydrogen and other flammable gases that create safety concerns for the vitrification process tanks in the WTP. An anti-foam agent (AFA) will be added to the WTP process streams. Prior testing in a bubble column and a small-scale impeller-mixed vessel indicated that gas holdup in a high-level waste chemical simulant with AFA was up to 10 times that in clay simulant without AFA. This raised a concern that major modifications to the WTP design or qualification of an alternative AFA might be required to satisfy plant safety criteria. However, because the mixing and gas generation mechanisms in the small-scale tests differed from those expected in WTP process vessels, additional tests were performed in a large-scale prototypic mixing system with in situ gas generation. This paper presents the results of this test program. The tests were conducted at Pacific Northwest National Laboratory in a 1/4-scale model of the lag storage process vessel using pulse jet mixers and air spargers. Holdup and release of gas bubbles generated by hydrogen peroxide decomposition were evaluated in waste simulants containing an AFA over a range of Bingham yield stresses and gas generation rates. Results from the 1/4-scale test stand showed that, contrary to the small-scale impeller-mixed tests, gas holdup in clay without AFA is comparable to that in the chemical waste simulant with AFA. The test stand, simulants, scaling and data-analysis methods, and results are described in relation to previous tests and anticipated WTP operating conditions.

  13. Large-Scale Testing of Effects of Anti-Foam Agent on Gas Holdup in Process Vessels in the Hanford Waste Treatment Plant

    SciTech Connect (OSTI)

    Mahoney, L.A.; Alzheimer, J.M.; Arm, S.T.; Guzman-Leong, C.E.; Jagoda, L.K.; Stewart, C.W.; Wells, B.E.; Yokuda, S.T. [Pacific Northwest National Laboratory, Richland, WA (United States)

    2008-07-01

    The Hanford Waste Treatment and Immobilization Plant (WTP) will vitrify the radioactive wastes stored in underground tanks. These wastes generate and retain hydrogen and other flammable gases that create safety concerns for the vitrification process tanks in the WTP. An anti-foam agent (AFA) will be added to the WTP process streams. Previous testing in a bubble column and a small-scale impeller-mixed vessel indicated that gas holdup in a high-level waste chemical simulant with AFA was as much as 10 times higher than in clay simulant without AFA. This raised a concern that major modifications to the WTP design or qualification of an alternative AFA might be required to satisfy plant safety criteria. However, because the mixing and gas generation mechanisms in the small-scale tests differed from those expected in WTP process vessels, additional tests were performed in a large-scale prototypic mixing system with in situ gas generation. This paper presents the results of this test program. The tests were conducted at Pacific Northwest National Laboratory in a 1/4-scale model of the lag storage process vessel using pulse jet mixers and air spargers. Holdup and release of gas bubbles generated by hydrogen peroxide decomposition were evaluated in waste simulants containing an AFA over a range of Bingham yield stresses and gas generation rates. Results from the 1/4-scale test stand showed that, contrary to the small-scale impeller-mixed tests, holdup in the chemical waste simulant with AFA was not so greatly increased compared to gas holdup in clay without AFA. The test stand, simulants, scaling and data-analysis methods, and results are described in relation to previous tests and anticipated WTP operating conditions. (authors)

  14. Performance of powder-filled evacuated panel insulation in a manufactured home roof cavity: Tests in the Large Scale Climate Simulator

    SciTech Connect (OSTI)

    Petrie, T.W.; Kosny, J.; Childs, P.W.

    1996-03-01

    A full-scale section of half the top of a single-wide manufactured home has been studied in the Large Scale Climate Simulator (LSCS) at the Oak Ridge National Laboratory. A small roof cavity with little room for insulation at the eaves is often the case with single-wide units and limits practical ways to improve thermal performance. The purpose of the current tests was to obtain steady-state performance data for the roof cavity of the manufactured home test section when the roof cavity was insulated with fiberglass batts, blown-in rock wool insulation, or combinations of these insulations and powder-filled evacuated panel (PEP) insulation. Four insulation configurations were tested: (A) a configuration with two layers of nominal R_US-7 h·ft²·°F/Btu (R_SI-1.2 m²·K/W) fiberglass batts; (B) a layer of PEPs and one layer of the fiberglass batts; (C) four layers of the fiberglass batts; and (D) an average 4.1 in. (10.4 cm) thick layer of blown-in rock wool at an average density of 2.4 lb/ft³ (38 kg/m³). Effects of additional sheathing were determined for Configurations B and C. With Configuration D over the ceiling, two layers of expanded polystyrene (EPS) boards, each about the same thickness as the PEPs, were installed over the trusses instead of the roof. Aluminum foils facing the attic and over the top layer of EPS were added. The top layer of EPS was then replaced by PEPs.

  15. Large-Scale Computational Fluid Dynamics

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Large-Scale Computational Fluid Dynamics - Sandia Energy

  16. Method and infrastructure for cycle-reproducible simulation on large scale digital circuits on a coordinated set of field-programmable gate arrays (FPGAs)

    DOE Patents [OSTI]

    Asaad, Sameh W; Bellofatto, Ralph E; Brezzo, Bernard; Haymes, Charles L; Kapur, Mohit; Parker, Benjamin D; Roewer, Thomas; Tierno, Jose A

    2014-01-28

    A plurality of target field programmable gate arrays are interconnected in accordance with a connection topology and map portions of a target system. A control module is coupled to the plurality of target field programmable gate arrays. A balanced clock distribution network is configured to distribute a reference clock signal, and a balanced reset distribution network is coupled to the control module and configured to distribute a reset signal to the plurality of target field programmable gate arrays. The control module and the balanced reset distribution network are cooperatively configured to initiate and control a simulation of the target system with the plurality of target field programmable gate arrays. A plurality of local clock control state machines reside in the target field programmable gate arrays. The local clock state machines are configured to generate a set of synchronized free-running and stoppable clocks to maintain cycle-accurate and cycle-reproducible execution of the simulation of the target system. A method is also provided.

  17. Large-Scale Information Systems

    SciTech Connect (OSTI)

    D. M. Nicol; H. R. Ammerlahn; M. E. Goldsby; M. M. Johnson; D. E. Rhodes; A. S. Yoshimura

    2000-12-01

    Large enterprises are ever more dependent on their Large-Scale Information Systems (LSIS), computer systems that are distinguished architecturally by distributed components--data sources, networks, computing engines, simulations, human-in-the-loop control and remote access stations. These systems provide such capabilities as workflow, data fusion and distributed database access. The Nuclear Weapons Complex (NWC) contains many examples of LSIS components, a fact that motivates this research. However, most LSIS in use grew up from collections of separate subsystems that were not designed to be components of an integrated system. For this reason, they are often difficult to analyze and control. The problem is made more difficult by the size of a typical system, its diversity of information sources, and the institutional complexities associated with its geographic distribution across the enterprise. Moreover, there is no integrated approach for analyzing or managing such systems. Indeed, integrated development of LSIS is an active area of academic research. This work developed such an approach by simulating the various components of the LSIS and allowing the simulated components to interact with real LSIS subsystems. This research demonstrated two benefits. First, applying it to a particular LSIS provided a thorough understanding of the interfaces between the system's components. Second, it demonstrated how more rapid and detailed answers could be obtained to questions significant to the enterprise by interacting with the relevant LSIS subsystems through simulated components designed with those questions in mind. In a final, added phase of the project, investigations were made on extending this research to wireless communication networks in support of telemetry applications.

  18. Large-Scale Renewable Energy Guide Webinar

    Broader source: Energy.gov [DOE]

    Webinar introduces the "Large-Scale Renewable Energy Guide." The webinar will provide an overview of this important FEMP guide, which describes FEMP's approach to large-scale renewable energy projects and provides guidance to Federal agencies and the private sector on how to develop a common process for large-scale renewable projects.

  19. Large-Scale First-Principles Molecular Dynamics Simulations on...

    Office of Scientific and Technical Information (OSTI)

    for large-scale parallel platforms such as BlueGene/L. Strong scaling tests for a Materials Science application show an 86% scaling efficiency between 1024 and 32,768 CPUs. ...

  20. Revised Environmental Assessment Large-Scale, Open-Air Explosive

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Environmental Assessment: Large-Scale, Open-Air Explosive Detonation, DIVINE STRAKE, at the Nevada Test Site, May 2006. Prepared by Department of Energy, National Nuclear Security Administration, Nevada Site Office. Table of contents: 1.0 Purpose and Need for Action; 1.1 Introduction and

  1. Investigation of CO2 plume behavior for a large-scale pilot test of geologic carbon storage in a saline formation

    SciTech Connect (OSTI)

    Doughty, C.

    2009-04-01

    The hydrodynamic behavior of carbon dioxide (CO2) injected into a deep saline formation is investigated, focusing on trapping mechanisms that lead to CO2 plume stabilization. A numerical model of the subsurface at a proposed power plant with CO2 capture is developed to simulate a planned pilot test, in which 1,000,000 metric tons of CO2 is injected over a four-year period, and the subsequent evolution of the CO2 plume for hundreds of years. Key measures are plume migration distance and the time evolution of the partitioning of CO2 between dissolved, immobile free-phase, and mobile free-phase forms. Model results indicate that the injected CO2 plume is effectively immobilized at 25 years. At that time, 38% of the CO2 is in dissolved form, 59% is immobile free phase, and 3% is mobile free phase. The plume footprint is roughly elliptical, and extends much farther up-dip of the injection well than down-dip. The pressure increase extends far beyond the plume footprint, but the pressure response decreases rapidly with distance from the injection well, and decays rapidly in time once injection ceases. Sensitivity studies that were carried out to investigate the effect of poorly constrained model parameters (permeability, permeability anisotropy, and residual CO2 saturation) indicate that small changes in properties can have a large impact on plume evolution, causing significant trade-offs between different trapping mechanisms.
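
    As a quick consistency check on the figures quoted above (simple arithmetic on the numbers in the abstract, not part of the cited study), the three partitioning fractions at 25 years close the mass balance on the 1,000,000 t injected:

```python
# CO2 partitioning at 25 years, as quoted in the abstract (percent of injected mass).
injected_tonnes = 1_000_000
partition = {"dissolved": 38, "immobile free phase": 59, "mobile free phase": 3}

# The quoted fractions close the mass balance exactly.
assert sum(partition.values()) == 100

for form, pct in partition.items():
    print(f"{form}: {pct}% -> {injected_tonnes * pct // 100:,} t")
```

Only the 3% mobile free phase (30,000 t) remains capable of further migration, which is why the plume is described as effectively immobilized.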

  2. Accelerated Stress Testing, Qualification Testing, HAST, Field...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Accelerated Stress Testing, Qualification Testing, HAST, Field Experience This presentation, which was the opening session of the NREL 2013 Photovoltaic Module Reliability Workshop ...

  3. Large-Scale PCA for Climate

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Large-Scale PCA for Climate Large-Scale PCA for Climate The most widely used tool for extracting important patterns from the measurements of atmospheric and oceanic variables is the Empirical Orthogonal Function (EOF) technique. EOFs are popular because of their simplicity and their ability to reduce the dimensionality of large nonlinear, high-dimensional systems into fewer dimensions while preserving the most important patterns of variations in the measurements. Because EOFs are a particular
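
    The EOF technique described above is mathematically the same operation as principal component analysis. A minimal sketch on synthetic data (all names, shapes, and values here are illustrative, not from any cited dataset) computes EOFs from the singular value decomposition of the time-anomaly matrix:

```python
import numpy as np

# Synthetic "measurements": rows = time samples, columns = spatial grid points,
# built from one dominant spatial mode plus noise.
rng = np.random.default_rng(0)
t = np.linspace(0, 4 * np.pi, 200)
pattern = np.sin(np.linspace(0, np.pi, 50))          # one dominant spatial mode
field = np.outer(np.sin(t), pattern) + 0.1 * rng.standard_normal((200, 50))

anomalies = field - field.mean(axis=0)               # remove the time mean
u, s, vt = np.linalg.svd(anomalies, full_matrices=False)

eofs = vt                                            # rows are spatial patterns (EOFs)
variance_fraction = s**2 / np.sum(s**2)              # variance explained per EOF
print(f"EOF 1 explains {variance_fraction[0]:.0%} of the variance")
```

Keeping only the leading EOFs reduces the dimensionality of the field while preserving the dominant patterns of variation, which is the property the passage above highlights.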

  4. Large-Scale Liquid Hydrogen Handling Equipment

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    8, 2007 Jerry Gillette. Large-Scale Liquid Hydrogen Handling Equipment. Hydrogen Delivery Analysis Meeting, Argonne National Laboratory. Some delivery pathways will necessitate the use of large-scale liquid hydrogen handling equipment. Potential scenarios include: production plant shutdowns and summer-peak storage. Equipment needs include: storage tanks, liquid pumps, vaporizers, and ancillaries. The concern is that scaling up from small units could significantly underestimate costs of larger

  5. DOE/NNSA Participates in Large-Scale CTBT On-Site Inspection Exercise in

    National Nuclear Security Administration (NNSA)

    Jordan | National Nuclear Security Administration (NNSA). Friday, November 28, 2014 - 9:05am. Experts from U.S. Department of Energy National Laboratories, including Sandia National Laboratories, Los Alamos National Laboratory, Lawrence Livermore National Laboratory, and Pacific Northwest National Laboratory, are participating in the Comprehensive Nuclear-Test-Ban Treaty (CTBT) Integrated Field Exercise 2014 (IFE14), a

  6. Large scale, urban decontamination; developments, historical examples and lessons learned

    SciTech Connect (OSTI)

    Demmer, R.L.

    2007-07-01

    Recent terrorist threats and actions have led to a renewed interest in the technical field of large scale, urban environment decontamination. One of the driving forces for this interest is the prospect for the cleanup and removal of radioactive dispersal device (RDD or 'dirty bomb') residues. In response, the United States Government has spent many millions of dollars investigating RDD contamination and novel decontamination methodologies. The efficiency of RDD cleanup response will be improved with these new developments and a better understanding of the 'old reliable' methodologies. While an RDD is primarily an economic and psychological weapon, the need to clean up and return valuable or culturally significant resources to the public is nonetheless valid. Several private companies, universities and National Laboratories are currently developing novel RDD cleanup technologies. Because of its longstanding association with radioactive facilities, the U.S. Department of Energy National Laboratories are at the forefront in developing and testing new RDD decontamination methods. However, such cleanup technologies are likely to be fairly task specific, while many different contamination mechanisms, substrates and environmental conditions will make actual application more complicated. Some major efforts have also been made to model potential contamination, to evaluate both old and new decontamination techniques and to assess their readiness for use. There are a number of significant lessons that can be gained from a look at previous large scale cleanup projects. Too often we are quick to apply a costly 'package and dispose' method when sound technological cleaning approaches are available. Understanding historical perspectives, advanced planning and constant technology improvement are essential to successful decontamination. (authors)

  7. Interagency Field Test & Evaluation: Field Test 2 Public Fact Sheet

    SciTech Connect (OSTI)

    Brian Connor

    2013-03-30

    This fact sheet summarizes the second field tests of technologies intended to address wind turbine interference with land-based surveillance radar, which took place in Lubbock, TX.

  8. Large-Scale Renewable Energy Guide | Department of Energy

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Large-Scale Renewable Energy Guide Large-Scale Renewable Energy Guide Presentation covers the Large-scale RE Guide: Developing Renewable Energy Projects Larger than 10 MWs at...

  9. The Phoenix series large scale LNG pool fire experiments.

    SciTech Connect (OSTI)

    Simpson, Richard B.; Jensen, Richard Pearson; Demosthenous, Byron; Luketa, Anay Josephine; Ricks, Allen Joseph; Hightower, Marion Michael; Blanchat, Thomas K.; Helmick, Paul H.; Tieszen, Sheldon Robert; Deola, Regina Anne; Mercier, Jeffrey Alan; Suo-Anttila, Jill Marie; Miller, Timothy J.

    2010-12-01

    The increasing demand for natural gas could increase the number and frequency of Liquefied Natural Gas (LNG) tanker deliveries to ports across the United States. Because of the increasing number of shipments and the number of possible new facilities, concerns about the potential harm to the public and property from accidental, and even more importantly intentional, spills have increased. While improvements have been made over the past decade in assessing hazards from LNG spills, the existing experimental data is much smaller in size and scale than many postulated large accidental and intentional spills. Since the physics and hazards from a fire change with fire size, there are concerns about the adequacy of current hazard prediction techniques for large LNG spills and fires. To address these concerns, Congress funded the Department of Energy (DOE) in 2008 to conduct a series of laboratory and large-scale LNG pool fire experiments at Sandia National Laboratories (Sandia) in Albuquerque, New Mexico. This report presents the test data and results of both sets of fire experiments. A series of five reduced-scale (gas burner) tests (yielding 27 sets of data) were conducted in 2007 and 2008 at Sandia's Thermal Test Complex (TTC) to assess flame height to fire diameter ratios as a function of nondimensional heat release rates for extrapolation to large-scale LNG fires. The large-scale LNG pool fire experiments were conducted in a 120 m diameter pond specially designed and constructed in Sandia's Area III large-scale test complex. Two fire tests of LNG spills of 21 and 81 m in diameter were conducted in 2009 to improve the understanding of flame height, smoke production, and burn rate and therefore the physics and hazards of large LNG spills and fires.

  10. Large-Scale Manufacturing of Nanoparticle-Based Lubrication Additives...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Large-Scale Manufacturing of Nanoparticle-Based Lubrication Additives Large-Scale Manufacturing of Nanoparticle-Based Lubrication Additives PDF icon nanoparticulate-basedlubricati...

  11. Creating Large Scale Database Servers (Technical Report) | SciTech...

    Office of Scientific and Technical Information (OSTI)

    Creating Large Scale Database Servers Citation Details In-Document Search Title: Creating Large Scale Database Servers The BaBar experiment at the Stanford Linear Accelerator ...

  12. Rapid Software Prototyping Into Large Scale Control Systems ...

    Office of Scientific and Technical Information (OSTI)

    Rapid Software Prototyping Into Large Scale Control Systems Citation Details In-Document Search Title: Rapid Software Prototyping Into Large Scale Control Systems Authors: Fishler, ...

  13. Determination of Large-Scale Cloud Ice Water Concentration by...

    Office of Scientific and Technical Information (OSTI)

    Technical Report: Determination of Large-Scale Cloud Ice Water Concentration by Combining ... Title: Determination of Large-Scale Cloud Ice Water Concentration by Combining Surface ...

  14. Large-Scale Renewable Energy Guide: Developing Renewable Energy...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Large-Scale Renewable Energy Guide: Developing Renewable Energy Projects Larger Than 10 MWs at Federal Facilities Large-Scale Renewable Energy Guide: Developing Renewable Energy ...

  15. Large Scale Production Computing and Storage Requirements for...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Large Scale Production Computing and Storage Requirements for Fusion Energy Sciences: Target 2017 The NERSC Program Requirements Review "Large Scale Production Computing and ...

  16. Large-Scale Residential Energy Efficiency Programs Based on CFLs...

    Open Energy Info (EERE)

    Large-Scale Residential Energy Efficiency Programs Based on CFLs Jump to: navigation, search Tool Summary LAUNCH TOOL Name: Large-Scale Residential Energy Efficiency Programs Based...

  17. ACCOLADES: A Scalable Workflow Framework for Large-Scale Simulation...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    ACCOLADES: A Scalable Workflow Framework for Large-Scale Simulation and Analyses of Automotive Engines Title ACCOLADES: A Scalable Workflow Framework for Large-Scale Simulation and...

  18. DLFM library tools for large scale dynamic applications

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    DLFM library tools for large scale dynamic applications. Large-scale Python and other dynamic applications may spend significant time at startup. The DLFM library,...

  19. Large Scale Computing and Storage Requirements for Advanced Scientific...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Large Scale Computing and Storage Requirements for Advanced Scientific Computing Research: Target 2014 ASCRFrontcover.png Large Scale Computing and Storage Requirements for ...

  20. Electron drift in a large scale solid xenon

    SciTech Connect (OSTI)

    Yoo, J.; Jaskierny, W. F.

    2015-08-21

    A study of charge drift in a large scale optically transparent solid xenon is reported. A pulsed high power xenon light source is used to liberate electrons from a photocathode. The drift speeds of the electrons are measured using an 8.7 cm long electrode in both the liquid and solid phases of xenon. In the liquid phase (163 K), the drift speed is 0.193 ± 0.003 cm/μs, while the drift speed in the solid phase (157 K) is 0.397 ± 0.006 cm/μs at 900 V/cm over 8.0 cm of uniform electric field. Furthermore, it is demonstrated that the electron drift speed in large scale solid xenon is a factor of two faster than that in the liquid.
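
    The factor-of-two claim can be verified directly from the quoted drift speeds (trivial arithmetic on the numbers in the abstract; the 8.0 cm drift length is the uniform-field distance quoted above):

```python
# Quoted drift speeds from the abstract (cm/us at 900 V/cm).
v_liquid = 0.193   # liquid xenon, 163 K
v_solid = 0.397    # solid xenon, 157 K

ratio = v_solid / v_liquid
drift_length_cm = 8.0  # uniform-field drift distance quoted in the abstract

t_liquid_us = drift_length_cm / v_liquid
t_solid_us = drift_length_cm / v_solid
print(f"solid/liquid speed ratio: {ratio:.2f}")
print(f"drift times: {t_liquid_us:.1f} us (liquid), {t_solid_us:.1f} us (solid)")
```

The ratio works out to about 2.06, consistent with the stated factor of two, and halves the drift time over the same 8.0 cm path.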

  1. Electron drift in a large scale solid xenon

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Yoo, J.; Jaskierny, W. F.

    2015-08-21

    A study of charge drift in a large scale optically transparent solid xenon is reported. A pulsed high power xenon light source is used to liberate electrons from a photocathode. The drift speeds of the electrons are measured using an 8.7 cm long electrode in both the liquid and solid phases of xenon. In the liquid phase (163 K), the drift speed is 0.193 ± 0.003 cm/μs, while the drift speed in the solid phase (157 K) is 0.397 ± 0.006 cm/μs at 900 V/cm over 8.0 cm of uniform electric field. Furthermore, it is demonstrated that the electron drift speed in large scale solid xenon is a factor of two faster than that in the liquid.

  2. Large-Scale PV Integration Study

    SciTech Connect (OSTI)

    Lu, Shuai; Etingov, Pavel V.; Diao, Ruisheng; Ma, Jian; Samaan, Nader A.; Makarov, Yuri V.; Guo, Xinxin; Hafen, Ryan P.; Jin, Chunlian; Kirkham, Harold; Shlatz, Eugene; Frantzis, Lisa; McClive, Timothy; Karlson, Gregory; Acharya, Dhruv; Ellis, Abraham; Stein, Joshua; Hansen, Clifford; Chadliev, Vladimir; Smart, Michael; Salgo, Richard; Sorensen, Rahn; Allen, Barbara; Idelchik, Boris

    2011-07-29

    This research effort evaluates the impact of large-scale photovoltaic (PV) and distributed generation (DG) output on NV Energy’s electric grid system in southern Nevada. It analyzes the ability of NV Energy’s generation to accommodate increasing amounts of utility-scale PV and DG, and the resulting cost of integrating variable renewable resources. The study was jointly funded by the United States Department of Energy and NV Energy, and conducted by a project team comprised of industry experts and research scientists from Navigant Consulting Inc., Sandia National Laboratories, Pacific Northwest National Laboratory and NV Energy.

  3. ARM - Field Campaign - UAV Field Test IOP

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Campaign: UAV Field Test IOP, 1993.10.01 - 1993.10.31. Lead Scientist: John Vitko. For data sets, see below. Abstract: The UAV ...

  4. Batteries for Large Scale Energy Storage

    SciTech Connect (OSTI)

    Soloveichik, Grigorii L.

    2011-07-15

    In recent years, with the deployment of renewable energy sources, advances in electrified transportation, and development in smart grids, the markets for large-scale stationary energy storage have grown rapidly. Electrochemical energy storage methods are strong candidate solutions due to their high energy density, flexibility, and scalability. This review provides an overview of mature and emerging technologies for secondary and redox flow batteries. New developments in the chemistry of secondary and flow batteries as well as regenerative fuel cells are also considered. Advantages and disadvantages of current and prospective electrochemical energy storage options are discussed. The most promising technologies in the short term are high-temperature sodium batteries with β″-alumina electrolyte, lithium-ion batteries, and flow batteries. Regenerative fuel cells and lithium metal batteries with high energy density require further research to become practical.

  5. (Sparsity in large scale scientific computation)

    SciTech Connect (OSTI)

    Ng, E.G.

    1990-08-20

    The traveler attended a conference organized by the 1990 IBM Europe Institute at Oberlech, Austria. The theme of the conference was on sparsity in large scale scientific computation. The conference featured many presentations and other activities of direct interest to ORNL research programs on sparse matrix computations and parallel computing, which are funded by the Applied Mathematical Sciences Subprogram of the DOE Office of Energy Research. The traveler presented a talk on his work at ORNL on the development of efficient algorithms for solving sparse nonsymmetric systems of linear equations. The traveler held numerous technical discussions on issues having direct relevance to the research programs on sparse matrix computations and parallel computing at ORNL.

  6. Supporting large-scale computational science

    SciTech Connect (OSTI)

    Musick, R., LLNL

    1998-02-19

    Business needs have driven the development of commercial database systems since their inception. As a result, there has been a strong focus on supporting many users, minimizing the potential corruption or loss of data, and maximizing performance metrics like transactions per second, or TPC-C and TPC-D results. It turns out that these optimizations have little to do with the needs of the scientific community, and in particular have little impact on improving the management and use of large-scale high-dimensional data. At the same time, there is an unanswered need in the scientific community for many of the benefits offered by a robust DBMS. For example, tying an ad-hoc query language such as SQL together with a visualization toolkit would be a powerful enhancement to current capabilities. Unfortunately, there has been little emphasis or discussion in the VLDB community on this mismatch over the last decade. The goal of the paper is to identify the specific issues that need to be resolved before large-scale scientific applications can make use of DBMS products. This topic is addressed in the context of an evaluation of commercial DBMS technology applied to the exploration of data generated by the Department of Energy's Accelerated Strategic Computing Initiative (ASCI). The paper describes the data being generated for ASCI as well as current capabilities for interacting with and exploring this data. The attraction of applying standard DBMS technology to this domain is discussed, as well as the technical and business issues that currently make this an infeasible solution.
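    The enhancement the paper envisions, an ad-hoc SQL query layer feeding a visualization toolkit, can be sketched with a toy query over hypothetical simulation output. The schema and values below are invented for illustration; the paper evaluates commercial DBMS products, not SQLite.

```python
import sqlite3

# Hypothetical per-timestep simulation summary table (illustrative only).
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE timestep (step INTEGER, region TEXT, max_temp REAL)")
con.executemany(
    "INSERT INTO timestep VALUES (?, ?, ?)",
    [(0, "core", 1200.0), (0, "edge", 300.0), (1, "core", 1350.0), (1, "edge", 310.0)],
)

# An ad-hoc query of the kind that could feed a visualization toolkit.
rows = con.execute(
    "SELECT step, MAX(max_temp) FROM timestep GROUP BY step ORDER BY step"
).fetchall()
print(rows)  # [(0, 1200.0), (1, 1350.0)]
```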

  7. Large-Scale Algal Cultivation, Harvesting and Downstream Processing Workshop

    Office of Energy Efficiency and Renewable Energy (EERE)

    ATP3 (Algae Testbed Public-Private Partnership) is hosting the Large-Scale Algal Cultivation, Harvesting and Downstream Processing Workshop on November 2–6, 2015, at the Arizona Center for Algae Technology and Innovation in Mesa, Arizona. Topics will include practical applications of growing and managing microalgal cultures at production scale (such as methods for handling cultures, screening strains for desirable characteristics, identifying and mitigating contaminants, scaling up cultures for outdoor growth, harvesting and processing technologies, and the analysis of lipids, proteins, and carbohydrates). Related training will include hands-on laboratory and field opportunities.

  8. The North Carolina Field Test

    SciTech Connect (OSTI)

    Sharp, T.R.; Ternes, M.P.

    1990-08-01

    The North Carolina Field Test will test the effectiveness of two weatherization approaches: the current North Carolina Low-Income Weatherization Assistance Program and the North Carolina Field Test Audit. The Field Test Audit will differ from North Carolina's current weatherization program in that it will incorporate new weatherization measures and techniques, a procedure for basing measure selection on the characteristics of the individual house and the cost-effectiveness of the measure, and an added emphasis on cooling energy savings. The field test will determine the differences between the two weatherization approaches in terms of energy savings, cost-effectiveness, and ease of implementation. This Experimental Plan details the steps in performing the field test. The field test will be a group effort by several participating organizations. Pre- and post-weatherization data will be collected over a two-year period (November 1989 through August 1991). The 120 houses included in the test will be divided into a control group and two treatment groups (one for each weatherization procedure) of 40 houses each. Weekly energy use data will be collected for each house representing whole-house electric, space heating and cooling, and water heating energy uses. Corresponding outdoor weather and house indoor temperature data will also be collected. The energy savings of each house will be determined using linear-regression-based models. To account for variations between the pre- and post-weatherization periods, house energy savings will be normalized for differences in outdoor weather conditions and indoor temperatures. Differences between the average energy savings of treatment groups will be identified using an analysis-of-variance approach. Differences between energy savings will be quantified using multiple comparison techniques. 9 refs., 8 figs., 5 tabs.
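    The savings methodology this plan describes, linear-regression models normalized to common weather conditions, can be sketched as follows. The degree-day and energy figures are invented for illustration and are not the field test's actual data.

```python
import numpy as np

# Hypothetical weekly heating energy use (kWh) vs. heating degree-days (HDD),
# before and after weatherization. All numbers are illustrative.
hdd_pre  = np.array([100.0, 140.0, 180.0, 220.0])
use_pre  = np.array([310.0, 430.0, 550.0, 670.0])   # ~10 kWh base + 3 kWh/HDD
hdd_post = np.array([110.0, 150.0, 190.0, 230.0])
use_post = np.array([230.0, 310.0, 390.0, 470.0])   # ~10 kWh base + 2 kWh/HDD

def fit(hdd, use):
    # Linear-regression model: use = base + slope * HDD
    A = np.column_stack([np.ones_like(hdd), hdd])
    base, slope = np.linalg.lstsq(A, use, rcond=None)[0]
    return base, slope

b_pre, s_pre = fit(hdd_pre, use_pre)
b_post, s_post = fit(hdd_post, use_post)

# Normalize both periods to the same reference weather before differencing,
# so the savings estimate is not biased by a milder or harsher post period.
typical_hdd = 160.0
savings = (b_pre + s_pre * typical_hdd) - (b_post + s_post * typical_hdd)
print(round(savings, 1))  # 160.0 kWh/week at 160 HDD
```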

  9. Locations of Smart Grid Demonstration and Large-Scale Energy...

    Office of Environmental Management (EM)

    Locations of Smart Grid Demonstration and Large-Scale Energy Storage Projects Locations of Smart Grid Demonstration and Large-Scale Energy Storage Projects Map of the United States ...

  10. Stimulated forward Raman scattering in large scale-length laser...

    Office of Scientific and Technical Information (OSTI)

    in large scale-length laser-produced plasmas Citation Details In-Document Search Title: Stimulated forward Raman scattering in large scale-length laser-produced plasmas You ...

  11. SimFS: A Large Scale Parallel File System Simulator

    Energy Science and Technology Software Center (OSTI)

    2011-08-30

    The software provides both framework and tools to simulate a large-scale parallel file system such as Lustre.

  12. DLFM library tools for large scale dynamic applications

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    DLFM library tools for large scale dynamic applications. Large-scale Python and other dynamic applications can spend a long time at startup. The DLFM library, developed by Mike Davis at Cray, Inc., is a set of functions that can be incorporated into a dynamically-linked application to provide improved performance during the loading of dynamic libraries when running the application at large scale on Edison. To access this library, do module

  13. Sensitivity technologies for large scale simulation.

    SciTech Connect (OSTI)

    Collis, Samuel Scott; Bartlett, Roscoe Ainsworth; Smith, Thomas Michael; Heinkenschloss, Matthias; Wilcox, Lucas C.; Hill, Judith C.; Ghattas, Omar; Berggren, Martin Olof; Akcelik, Volkan; Ober, Curtis Curry; van Bloemen Waanders, Bart Gustaaf; Keiter, Eric Richard

    2005-01-01

    Sensitivity analysis is critically important to numerous analysis algorithms, including large-scale optimization, uncertainty quantification, reduced-order modeling, and error estimation. Our research focused on developing tools, algorithms, and standard interfaces to facilitate the implementation of sensitivity-type analysis into existing code; equally important, the work focused on ways to increase the visibility of sensitivity analysis. We attempt to accomplish the first objective through the development of hybrid automatic differentiation tools, standard linear algebra interfaces for numerical algorithms, time-domain decomposition algorithms, and two-level Newton methods. We attempt to accomplish the second goal by presenting the results of several case studies in which direct sensitivities and adjoint methods have been effectively applied, in addition to an investigation of h-p adaptivity using adjoint-based a posteriori error estimation. A mathematical overview is provided of direct sensitivities and adjoint methods for both steady-state and transient simulations. Two case studies are presented to demonstrate the utility of these methods. A direct sensitivity method is implemented to solve a source inversion problem for steady-state internal flows subject to convection-diffusion. Real-time performance is achieved using a novel decomposition into offline and online calculations. Adjoint methods are used to reconstruct initial conditions of a contamination event in an external flow. We demonstrate an adjoint-based transient solution. In addition, we investigated time-domain decomposition algorithms in an attempt to improve the efficiency of transient simulations. Because derivative calculations are at the root of sensitivity calculations, we have developed hybrid automatic differentiation methods and implemented this approach for shape optimization for gas dynamics using the Euler equations. The hybrid automatic differentiation method was applied to a first
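    The direct and adjoint sensitivity methods this report surveys can be illustrated on a tiny steady-state linear system A(p) u = b with objective J = cᵀu. The 2x2 matrices below are invented for illustration, not taken from the report's applications; the point is that both methods yield the same gradient dJ/dp.

```python
import numpy as np

# Toy parameterized steady-state system A(p) u = b, objective J = c.T @ u.
p = 2.0
A = np.array([[p, 1.0], [0.0, p]])
dA_dp = np.array([[1.0, 0.0], [0.0, 1.0]])   # dA/dp for this parameterization
b = np.array([1.0, 1.0])
c = np.array([1.0, 2.0])

u = np.linalg.solve(A, b)

# Direct method: differentiate A u = b to get A (du/dp) = -(dA/dp) u,
# then dJ/dp = c.T @ (du/dp). Cost scales with the number of parameters.
du_dp = np.linalg.solve(A, -dA_dp @ u)
dJ_direct = c @ du_dp

# Adjoint method: solve A.T lam = c once, then dJ/dp = -lam.T (dA/dp) u.
# Cost scales with the number of objectives, not parameters.
lam = np.linalg.solve(A.T, c)
dJ_adjoint = -lam @ (dA_dp @ u)

print(np.isclose(dJ_direct, dJ_adjoint))  # True: both give the same gradient
```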

  14. Large-Scale Computational Fluid Dynamics

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Infrastructure Energy Storage Nuclear Power & Engineering Grid Modernization Battery Testing ... Heavy Duty Fuels DISI Combustion HCCISCCI Fundamentals Spray Combustion Modeling ...

  15. 'Sidecars' Pave the Way for Concurrent Analytics of Large-Scale

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Simulations 'Sidecars' Pave the Way for Concurrent Analytics of Large-Scale Simulations Halo Finder Enhancement Puts Supercomputer Users in the Driver's Seat November 2, 2015 Contact: Kathy Kincade, +1 510 495 2124, kkincade@lbl.gov In this Reeber halo finder simulation, the blueish haze is a volume rendering of the density field that Nyx calculates every time step. The light blue and

  16. Large-scale anomalies from primordial dissipation

    SciTech Connect (OSTI)

    D'Amico, Guido; Gobbetti, Roberto; Kleban, Matthew; Schillo, Marjorie E-mail: rg1509@nyu.edu E-mail: mls604@nyu.edu

    2013-11-01

    We analyze an inflationary model in which part of the power in density perturbations arises due to particle production. The amount of particle production is modulated by an auxiliary field. Given an initial gradient for the auxiliary field, this model produces a hemispherical power asymmetry and a suppression of power at low multipoles similar to those observed by WMAP and Planck in the CMB temperature. It also predicts an additive contribution to ΔT with support only at very small l that is aligned with the direction of the power asymmetry and has a definite sign, as well as small oscillations in the power spectrum at all l.

  17. Large-Scale Federal Renewable Energy Projects | Department of Energy

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Large-Scale Federal Renewable Energy Projects Large-Scale Federal Renewable Energy Projects Renewable energy projects larger than 10 megawatts (MW), also known as utility-scale projects, are complex and typically require private-sector financing. The Federal Energy Management Program (FEMP) developed a guide to help federal agencies, and the developers and financiers that work with them, to successfully install these projects at federal facilities. FEMP's Large-Scale Renewable Energy Guide,

  18. Energy Department Applauds Nation's First Large-Scale Industrial Carbon

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Capture and Storage Facility | Department of Energy August 24, 2011 - 6:23pm Washington, D.C. - The U.S. Department of Energy issued the following statement in support of today's groundbreaking for construction of the nation's first large-scale industrial carbon capture and storage (ICCS) facility in Decatur,

  19. Large Scale Computing and Storage Requirements for Advanced Scientific

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Computing Research: Target 2014 Large Scale Computing and Storage Requirements for Advanced Scientific Computing Research: An ASCR / NERSC Review, January 5-6, 2011. Final Report: Large Scale Computing and Storage Requirements for Advanced Scientific Computing Research, Report of the Joint ASCR / NERSC Workshop conducted January 5-6, 2011. Goals This workshop is being

  20. Large Scale Computing and Storage Requirements for Basic Energy Sciences:

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Target 2014. Final Report: Large Scale Computing and Storage Requirements for Basic Energy Sciences, Report of the Joint BES / ASCR / NERSC Workshop conducted February 9-10, 2010. Workshop Agenda: The agenda for this workshop is presented here, including presentation times and speaker information. Workshop Presentations: Large Scale Computing and Storage Requirements for Basic

  1. Large Scale Computing and Storage Requirements for High Energy Physics

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Large Scale Computing and Storage Requirements for High Energy Physics: An HEP / ASCR / NERSC Workshop, November 12-13, 2009. Report: Large Scale Computing and Storage Requirements for High Energy Physics, Report of the Joint HEP / ASCR / NERSC Workshop conducted Nov. 12-13, 2009. https://www.nersc.gov/assets/HPC-Requirements-for-Science/HEPFrontcover.png Goals This workshop was organized by the Department of

  2. Large-Scale Industrial Carbon Capture, Storage Plant Begins Construction |

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Department of Energy Large-Scale Industrial Carbon Capture, Storage Plant Begins Construction August 24, 2011 - 1:00pm Washington, DC - Construction activities have begun at an Illinois ethanol plant that will demonstrate carbon capture and storage. The project, sponsored by the U.S. Department of Energy's Office of Fossil Energy, is the first large-scale integrated carbon capture and storage (CCS) demonstration

  3. Stimulated forward Raman scattering in large scale-length laser...

    Office of Scientific and Technical Information (OSTI)

    Stimulated forward Raman scattering in large scale-length laser-produced plasmas Citation Details In-Document Search Title: Stimulated forward Raman scattering in large ...

  4. Strategies to Finance Large-Scale Deployment of Renewable Energy...

    Open Energy Info (EERE)

    to Finance Large-Scale Deployment of Renewable Energy Projects: An Economic Development and Infrastructure Approach Jump to: navigation, search Tool Summary LAUNCH TOOL Name:...

  5. Understanding large scale HPC systems through scalable monitoring...

    Office of Scientific and Technical Information (OSTI)

    HPC systems through scalable monitoring and analysis. Citation Details In-Document Search Title: Understanding large scale HPC systems through scalable monitoring and analysis. ...

  6. FEMP Helps Federal Facilities Develop Large-Scale Renewable Energy...

    Broader source: Energy.gov (indexed) [DOE]

    jobs, and advancing national goals for energy security. The guide describes the fundamentals of deploying financially attractive, large-scale renewable energy projects and...

  7. Optimizing Cluster Heads for Energy Efficiency in Large-Scale...

    Office of Scientific and Technical Information (OSTI)

    Optimizing Cluster Heads for Energy Efficiency in Large-Scale Heterogeneous Wireless Sensor Networks Gu, Yi; Wu, Qishi; Rao, Nageswara S. V. Hindawi Publishing Corporation None...

  8. Energy Department Applauds Nation's First Large-Scale Industrial...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    ... News Media Contact: 202-586-4940 ... designed National Sequestration Education Center, located at Richland Community ...

  9. Optimizing Cluster Heads for Energy Efficiency in Large-Scale...

    Office of Scientific and Technical Information (OSTI)

    clustering is generally considered as an efficient and scalable way to facilitate the management and operation of such large-scale networks and minimize the total energy...

  10. A Model for Turbulent Combustion Simulation of Large Scale Hydrogen...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    A Model for Turbulent Combustion Simulation of Large Scale Hydrogen Explosions Event Sponsor: Argonne Leadership Computing Facility Seminar Start Date: Oct 6 2015 - 10:00am...

  11. Harvey Wasserman! Large Scale Computing and Storage Requirements...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Large Scale Computing and Storage Requirements for High Energy Physics Research: Target 2017 ...www.nersc.govsciencerequirementsHEP * Mid---morning a

  12. Large Scale Production Computing and Storage Requirements for...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Large Scale Production Computing and Storage Requirements for High Energy Physics: Target 2017 ... Energy's Office of High Energy Physics (HEP), Office of Advanced Scientific ...

  13. Overcoming the Barrier to Achieving Large-Scale Production -...

    Broader source: Energy.gov (indexed) [DOE]

    Semprius Confidential 1 Overcoming the Barriers to Achieving Large-Scale Production - A ... August 31, 2011 Semprius Confidential 2 Semprius Overview Background Company: * Leading ...

  14. Large-Scale Hydropower Basics | Department of Energy

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Renewable Energy » Hydropower » Large-Scale Hydropower Basics August 14, 2013 - 3:11pm Large-scale hydropower plants are generally developed to produce electricity for government or electric utility projects. These plants are more than 30 megawatts (MW) in size, and there is more than 80,000 MW of installed generation capacity in the United States today. Most large-scale hydropower projects use a dam and a reservoir to retain water from a river. When the

  15. Large-scale quasi-geostrophic magnetohydrodynamics

    SciTech Connect (OSTI)

    Balk, Alexander M.

    2014-12-01

    We consider the ideal magnetohydrodynamics (MHD) of a shallow fluid layer on a rapidly rotating planet or star. The presence of a background toroidal magnetic field is assumed, and the 'shallow water' beta-plane approximation is used. We derive a single equation for the slow large length scale dynamics. The range of validity of this equation fits the MHD of the lighter fluid at the top of Earth's outer core. The form of this equation is similar to the quasi-geostrophic (Q-G) equation (for usual ocean or atmosphere), but the parameters are essentially different. Our equation also implies the inverse cascade; but contrary to the usual Q-G situation, the energy cascades to smaller length scales, while the enstrophy cascades to the larger scales. We find the Kolmogorov-type spectrum for the inverse cascade. The spectrum indicates the energy accumulation in larger scales. In addition to the energy and enstrophy, the obtained equation possesses an extra (adiabatic-type) invariant. Its presence implies energy accumulation in the 30° sector around zonal direction. With some special energy input, the extra invariant can lead to the accumulation of energy in zonal magnetic field; this happens if the input of the extra invariant is small, while the energy input is considerable.

  16. Superconducting materials for large scale applications

    SciTech Connect (OSTI)

    Scanlan, Ronald M.; Malozemoff, Alexis P.; Larbalestier, David C.

    2004-05-06

    Significant improvements in the properties of superconducting materials have occurred recently. These improvements are being incorporated into the latest generation of wires, cables, and tapes that are being used in a broad range of prototype devices. These devices include new, high field accelerator and NMR magnets, magnets for fusion power experiments, motors, generators, and power transmission lines. These prototype magnets are joining a wide array of existing applications that utilize the unique capabilities of superconducting magnets: accelerators such as the Large Hadron Collider, fusion experiments such as ITER, 930 MHz NMR, and 4 Tesla MRI. In addition, promising new materials such as MgB2 have been discovered and are being studied in order to assess their potential for new applications. In this paper, we will review the key developments that are leading to these new applications for superconducting materials. In some cases, the key factor is improved understanding or development of materials with significantly improved properties. An example of the former is the development of Nb3Sn for use in high field magnets for accelerators. In other cases, the development is being driven by the application. The aggressive effort to develop HTS tapes is being driven primarily by the need for materials that can operate at temperatures of 50 K and higher. The implications of these two drivers for further developments will be discussed. Finally, we will discuss the areas where further improvements are needed in order for new applications to be realized.

  17. Superconductivity for Large Scale Wind Turbines

    SciTech Connect (OSTI)

    R. Fair; W. Stautner; M. Douglass; R. Rajput-Ghoshal; M. Moscinski; P. Riley; D. Wagner; J. Kim; S. Hou; F. Lopez; K. Haran; J. Bray; T. Laskaris; J. Rochford; R. Duckworth

    2012-10-12

    A conceptual design has been completed for a 10MW superconducting direct drive wind turbine generator employing low temperature superconductors for the field winding. Key technology building blocks from the GE Wind and GE Healthcare businesses have been transferred across to the design of this concept machine. Wherever possible, conventional technology and production techniques have been used in order to support the case for commercialization of such a machine. Appendices A and B provide further details of the layout of the machine and the complete specification table for the concept design. Phase 1 of the program has allowed us to understand the trade-offs between the various sub-systems of such a generator and its integration with a wind turbine. A Failure Modes and Effects Analysis (FMEA) and a Technology Readiness Level (TRL) analysis have been completed resulting in the identification of high risk components within the design. The design has been analyzed from a commercial and economic point of view and Cost of Energy (COE) calculations have been carried out with the potential to reduce COE by up to 18% when compared with a permanent magnet direct drive 5MW baseline machine, resulting in a potential COE of 0.075 $/kWh. Finally, a top-level commercialization plan has been proposed to enable this technology to be transitioned to full volume production. The main body of this report will present the design processes employed and the main findings and conclusions.
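    A quick arithmetic check of the cost-of-energy claim above: an up-to-18% reduction yielding 0.075 $/kWh implies a baseline near 0.091 $/kWh for the 5MW permanent magnet machine. The baseline figure is inferred from the stated numbers, not quoted from the report.

```python
# Back out the implied baseline COE from the report's stated figures.
coe_superconducting = 0.075   # $/kWh, 10MW superconducting concept (stated)
reduction = 0.18              # up to 18% reduction vs. baseline (stated)
baseline = coe_superconducting / (1.0 - reduction)
print(round(baseline, 4))     # 0.0915 $/kWh (inferred, not quoted)
```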

  18. GenASiS Basics: Object-oriented utilitarian functionality for large-scale physics simulations

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Cardall, Christian Y.; Budiardja, Reuben D.

    2015-06-11

    Aside from numerical algorithms and problem setup, large-scale physics simulations on distributed-memory supercomputers require more basic utilitarian functionality, such as physical units and constants; display to the screen or standard output device; message passing; I/O to disk; and runtime parameter management and usage statistics. Here we describe and make available Fortran 2003 classes furnishing extensible object-oriented implementations of this sort of rudimentary functionality, along with individual `unit test' programs and larger example problems demonstrating their use. Lastly, these classes compose the Basics division of our developing astrophysics simulation code GenASiS (General Astrophysical Simulation System), but their fundamental nature makes them useful for physics simulations in many fields.

  19. GenASiS Basics: Object-oriented utilitarian functionality for large-scale physics simulations

    SciTech Connect (OSTI)

    Cardall, Christian Y.; Budiardja, Reuben D.

    2015-06-11

    Aside from numerical algorithms and problem setup, large-scale physics simulations on distributed-memory supercomputers require more basic utilitarian functionality, such as physical units and constants; display to the screen or standard output device; message passing; I/O to disk; and runtime parameter management and usage statistics. Here we describe and make available Fortran 2003 classes furnishing extensible object-oriented implementations of this sort of rudimentary functionality, along with individual `unit test' programs and larger example problems demonstrating their use. Lastly, these classes compose the Basics division of our developing astrophysics simulation code GenASiS (General Astrophysical Simulation System), but their fundamental nature makes them useful for physics simulations in many fields.

  20. Test Proposal Document for Phased Field Thermal Testing in Salt |

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Department of Energy Test Proposal Document for Phased Field Thermal Testing in Salt Test Proposal Document for Phased Field Thermal Testing in Salt The document summarizes how a new round of staged thermal field testing will help to augment the safety case for disposal of heat generating nuclear waste in salt. The objectives of the proposed test plan are to: (1) address features, events, and processes (FEPs), (2) build scientific and public confidence, (3) foster international

  1. E3T Emerging Technology Field Tests

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Field Test, February 5, 2015 Brown Bag. Mira Vowles, BPA; Wesley Saway, BPA. BPA is seeking utilities to participate in an ET Field Test that will fully fund up to 30 retrofits of...

  2. Longwall dust control field tests

    SciTech Connect (OSTI)

    Not Available

    1984-06-14

    Following are highlights of the observations and conclusions drawn from these field tests: The use of Kaiser's wing curtain significantly reduced the average air velocity around the face corner, particularly when the gob curtain was removed (Kaiser's existing procedure). The velocity reduction will result in a lessened tendency for headgate cutout dust to be blown directly into the walkway, negatively impacting the operator's exposure. Dust will be more effectively channeled around the face corner and downstream past the shearer body. At shield No. 10, average air velocities were within 15 percent of each other over all four of the curtain configurations tested. This indicated that curtain usage had a minimal effect on airflow levels along the face for Kaiser's given condition of gob consolidation. The high air velocity (spot) readings occurring just downstream of the installed wing curtain were due to leakage through gaps in the curtain around the stageloader. The volume of leakage was not significant, but proved to be helpful in sweeping the downstream headgate region clean of contaminants and preventing recirculation. Kaiser's curtain system design has proven to be very rugged, practical, and effective and would be suitable for use in many coal mines. Used in conjunction with the semipermanent stageloader curtain, the wing curtain can remain in position for nearly 30 to 40 feet of face advance before needing to be repositioned. The stageloader curtain itself could be mounted directly to the stageloader for automatic advance with headgate equipment. This would only be feasible if the headgate roof horizon was consistent. Kaiser currently hangs the curtain independently and repositions it as required.

  3. Testing coupled dark energy with large scale structure observation

    SciTech Connect (OSTI)

    Yang, Weiqiang; Xu, Lixin, E-mail: d11102004@mail.dlut.edu.cn, E-mail: lxxu@dlut.edu.cn [Institute of Theoretical Physics, School of Physics and Optoelectronic Technology, Dalian University of Technology, Dalian, 116024 (China)

    2014-08-01

    The coupling between the dark components provides a new approach to mitigate the coincidence problem of the cosmological standard model. In this paper, dark energy is treated as a fluid with a constant equation of state, whose coupling with dark matter is Q̄ = 3Hξ_x ρ̄_x. In the frame of dark energy, we derive the evolution equations for the density and velocity perturbations. Using the Markov Chain Monte Carlo method, we constrain the model with currently available cosmic observations, which include cosmic microwave background radiation, baryon acoustic oscillation, type Ia supernovae, and fσ_8(z) data points from redshift-space distortion. The results give the interaction rate ξ_x = 0.00328^{+0.000736+0.00549+0.00816}_{-0.00328-0.00328-0.00328} (1σ, 2σ, 3σ regions), which means that the recent cosmic observations favor a small interaction rate, up to the order of 10^{-2}; meanwhile, the measurement of redshift-space distortion could rule out a large interaction rate in the 3σ region.
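    The Markov Chain Monte Carlo constraint described above can be sketched with a toy Metropolis sampler over a single parameter. The Gaussian "likelihood", its center and width, and the positivity prior below are illustrative stand-ins, not the paper's actual CMB/BAO/SNe/fσ_8 likelihoods.

```python
import math
import random

random.seed(0)

def log_like(xi):
    # Toy stand-in likelihood: pretend the data prefer xi ~ 0.003 with width
    # 0.005, with a hard prior xi >= 0 (all numbers are illustrative).
    if xi < 0:
        return float("-inf")
    return -0.5 * ((xi - 0.003) / 0.005) ** 2

xi = 0.01          # starting point of the chain
chain = []
for _ in range(20000):
    prop = xi + random.gauss(0.0, 0.002)        # symmetric proposal
    diff = log_like(prop) - log_like(xi)
    if random.random() < math.exp(min(0.0, diff)):  # Metropolis accept rule
        xi = prop
    chain.append(xi)

burned = chain[5000:]                            # discard burn-in
mean = sum(burned) / len(burned)
print(round(mean, 4))  # posterior mean of the toy chain, near the preferred 0.003
```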

  4. ARM - Evaluation Product - Vertical Air Motion during Large-Scale...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Products Vertical Air Motion during Large-Scale Stratiform Rain ARM Data Discovery Browse ... Evaluation Product : Vertical Air ...

  5. Towards a Large-Scale Recording System: Demonstration of Polymer...

    Office of Scientific and Technical Information (OSTI)

    of Polymer-Based Penetrating Array for Chronic Neural Recording Citation Details In-Document Search Title: Towards a Large-Scale Recording System: Demonstration of Polymer-Based ...

  6. COLLOQUIUM: Liquid Metal Batteries for Large-scale Energy Storage...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    June 22, 2016, 4:15pm to 5:30pm Colloquia MBG Auditorium, PPPL (284 cap.) COLLOQUIUM: Liquid Metal Batteries for Large-scale Energy Storage Dr. Hojong Kim Pennsylvania State ...

  7. How Three Retail Buyers Source Large-Scale Solar Electricity

    Broader source: Energy.gov [DOE]

    Large-scale, non-utility solar power purchase agreements (PPAs) are still a rarity despite the growing popularity of PPAs across the country. In this webinar, participants will learn more about how...

  8. Cosmological Simulations for Large-Scale Sky Surveys | Argonne...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    The focus of cosmology today is on its two mysterious pillars, dark matter and dark energy. Large-scale sky surveys are the current drivers of precision cosmology and have been ...

  9. Cosmological Simulations for Large-Scale Sky Surveys | Argonne...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    The focus of cosmology today revolves around two mysterious pillars, dark matter and dark energy. Large-scale sky surveys are the current drivers of precision cosmology and have ...

  10. Breakthrough Large-Scale Industrial Project Begins Carbon Capture and

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Utilization | Department of Energy Breakthrough Large-Scale Industrial Project Begins Carbon Capture and Utilization Breakthrough Large-Scale Industrial Project Begins Carbon Capture and Utilization January 25, 2013 - 12:00pm Addthis Washington, DC - A breakthrough carbon capture, utilization, and storage (CCUS) project in Texas has begun capturing carbon dioxide (CO2) and piping it to an oilfield for use in enhanced oil recovery (EOR). Read the project factsheet The project at Air Products

  11. The Cielo Petascale Capability Supercomputer: Providing Large-Scale

    Office of Scientific and Technical Information (OSTI)

    Computing for Stockpile Stewardship (Conference) | SciTech Connect Conference: The Cielo Petascale Capability Supercomputer: Providing Large-Scale Computing for Stockpile Stewardship Citation Details In-Document Search Title: The Cielo Petascale Capability Supercomputer: Providing Large-Scale Computing for Stockpile Stewardship Authors: Vigil, Benny Manuel [1] ; Doerfler, Douglas W. [1] + Show Author Affiliations Los Alamos National Laboratory Publication Date: 2013-03-11 OSTI Identifier:

  12. Large Scale Computing and Storage Requirements for Biological and

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Environmental Research: Target 2014 Large Scale Computing and Storage Requirements for Biological and Environmental Research: Target 2014 BERFrontcover.png A BER / ASCR / NERSC Workshop May 7-8, 2009 Final Report Large Scale Computing and Storage Requirements for Biological and Environmental Research, Report of the Joint BER / NERSC Workshop Conducted May 7-8, 2009 Rockville, MD Goals This workshop was jointly organized by the Department of Energy's Office of Biological & Environmental

  13. Large Scale Computing and Storage Requirements for Nuclear Physics: Target

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    2014 Large Scale Computing and Storage Requirements for Nuclear Physics: Target 2014 NPFrontcover.png May 26-27, 2011 Hyatt Regency Bethesda One Bethesda Metro Center (7400 Wisconsin Ave) Bethesda, Maryland, USA 20814 Final Report Large Scale Computing and Storage Requirements for Nuclear Physics Research, Report of the Joint NP / NERSC Workshop Conducted May 26-27, 2011 Bethesda, MD Sponsored by the U.S. Department of Energy Office of Science, Office of Advanced Scientific Computing

  14. Large Scale Production Computing and Storage Requirements for Fusion Energy

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Sciences: Target 2017 Large Scale Production Computing and Storage Requirements for Fusion Energy Sciences: Target 2017 The NERSC Program Requirements Review "Large Scale Production Computing and Storage Requirements for Fusion Energy Sciences" is organized by the Department of Energy's Office of Fusion Energy Sciences (FES), Office of Advanced Scientific Computing Research (ASCR), and the National Energy Research Scientific Computing Center (NERSC). The review's goal is to

  15. Large Scale Production Computing and Storage Requirements for High Energy

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Physics: Target 2017 Large Scale Production Computing and Storage Requirements for High Energy Physics: Target 2017 HEPlogo.jpg The NERSC Program Requirements Review "Large Scale Computing and Storage Requirements for High Energy Physics" is organized by the Department of Energy's Office of High Energy Physics (HEP), Office of Advanced Scientific Computing Research (ASCR), and the National Energy Research Scientific Computing Center (NERSC). The review's goal is to characterize

  16. Cosmological Simulations for Large-Scale Sky Surveys | Argonne Leadership

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Computing Facility Cosmological Simulations for Large-Scale Sky Surveys PI Name: Salman Habib PI Email: habib@anl.gov Institution: Argonne National Laboratory Allocation Program: INCITE Allocation Hours at ALCF: 100 Million Year: 2014 Research Domain: Physics The next generation of large-scale sky surveys aims to establish a new regime of cosmic discovery through fundamental measurements of the universe's geometry and the growth of structure. The aim of this project is to accurately

  17. COLLOQUIUM: Large Scale Superconducting Magnets for Variety of Applications

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    | Princeton Plasma Physics Lab October 15, 2014, 4:00pm to 5:30pm Colloquia MBG Auditorium COLLOQUIUM: Large Scale Superconducting Magnets for Variety of Applications Professor Joseph Minervini Massachusetts Institute of Technology Presentation: PDF icon Superconducting_Magnet_Technology_for_Fusion_and_Large_Scale_Applications.pdf Over the past several decades the U. S. magnetic confinement fusion program, working in collaboration with international partners, has developed superconductor and

  18. Ferroelectric opening switches for large-scale pulsed power drivers.

    SciTech Connect (OSTI)

    Brennecka, Geoffrey L.; Rudys, Joseph Matthew; Reed, Kim Warren; Pena, Gary Edward; Tuttle, Bruce Andrew; Glover, Steven Frank

    2009-11-01

    Fast electrical energy storage, or Voltage-Driven Technology (VDT), has dominated fast, high-voltage pulsed power systems for the past six decades. Fast magnetic energy storage, or Current-Driven Technology (CDT), is characterized by 10,000× higher energy density than VDT and has a great number of other substantial advantages, yet it has been all but neglected for those decades. The uniform explanation for this neglect of CDT is invariably that the industry has never been able to make an effective opening switch, which is essential for the use of CDT. Most approaches to opening switches have involved plasma of one sort or another. On a large scale, gaseous plasmas have been used as a conductor bridging the switch electrodes, providing an opening function when the current wave front propagates through to the output end of the plasma and fully magnetizes it; this is called a Plasma Opening Switch (POS). Opening can be triggered in a POS using a magnetic field to push the plasma out of the A-K gap; this is called a Magnetically Controlled Plasma Opening Switch (MCPOS). On a small scale, depletion of electron plasmas in semiconductor devices is used to effect opening-switch behavior, but these devices operate at relatively low voltage and current compared to the hundreds of kilovolts and tens of kiloamperes of interest to pulsed power. This work is an investigation into an entirely new approach to opening-switch technology that utilizes new materials in new ways. The new materials are ferroelectrics, and using them as an opening switch is a stark contrast to their traditional applications in optics and transducers. Emphasis is on the use of high-performance ferroelectrics, with the objective of developing an opening switch suitable for large-scale pulsed power applications. Over the course of exploring this new ground, we have discovered behaviors and properties of these materials that were heretofore unknown. Some of
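    The energy-density gap between electrostatic (VDT) and magnetic (CDT) storage can be sketched from the standard field energy densities, u_E = ½ε₀ε_r E² and u_B = B²/2μ₀. The field values below are assumed round numbers for illustration, not figures from the report.

```python
import math

eps0 = 8.854e-12        # vacuum permittivity, F/m
mu0 = 4.0e-7 * math.pi  # vacuum permeability, H/m
E = 2.0e7               # electric field near practical dielectric limits, V/m (assumed)
eps_r = 3.0             # relative permittivity of a typical capacitor dielectric (assumed)
B = 5.0                 # magnetic flux density, T (assumed)

u_E = 0.5 * eps0 * eps_r * E**2   # electrostatic energy density, J/m^3
u_B = B**2 / (2.0 * mu0)          # magnetic energy density, J/m^3
print(f"u_B / u_E = {u_B / u_E:.0f}")  # magnetic storage wins by orders of magnitude
```

    The exact ratio depends strongly on the assumed fields; the point is only that magnetic storage at technologically reachable fields sits orders of magnitude above dielectric storage, which is the motivation the abstract gives for pursuing CDT.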

  19. ANALYSIS OF TURBULENT MIXING JETS IN LARGE SCALE TANK

    SciTech Connect (OSTI)

    Lee, S.; Dimenna, R.; Leishear, R.; Stefanko, D.

    2007-03-28

    Flow evolution models were developed to evaluate the performance of the new advanced design mixer pump for sludge mixing and removal operations with high-velocity liquid jets in one of the large-scale Savannah River Site waste tanks, Tank 18. This paper describes the computational model, the flow measurements used to provide validation data in the region far from the jet nozzle, the extension of the computational results to real tank conditions through the use of existing sludge suspension data, and finally, the sludge removal results from actual Tank 18 operations. A computational fluid dynamics approach was used to simulate the sludge removal operations. The models employed a three-dimensional representation of the tank with a two-equation turbulence model. Both the computational approach and the models were validated with onsite test data reported here and literature data. The model was then extended to actual conditions in Tank 18 through a velocity criterion to predict the ability of the new pump design to suspend settled sludge. A qualitative comparison with sludge removal operations in Tank 18 showed a reasonably good comparison with final results subject to significant uncertainties in actual sludge properties.
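    The velocity criterion described above can be sketched with the textbook far-field decay law for a round turbulent jet, u_c(x) ≈ K·u₀·D/x. The decay constant, nozzle parameters, and critical suspension velocity below are illustrative assumptions, not Tank 18 values.

```python
K = 6.0        # round-jet centerline decay constant (textbook value, approximate)
u0 = 18.0      # nozzle exit velocity, m/s (assumed)
D = 0.1        # nozzle diameter, m (assumed)
u_crit = 0.5   # critical sludge-suspension velocity, m/s (assumed)

def centerline_velocity(x):
    """Jet centerline velocity (m/s) at axial distance x (m), beyond the potential core."""
    return K * u0 * D / x

# Effective cleaning radius: the largest x at which u_c(x) still meets u_crit.
x_max = K * u0 * D / u_crit
print(f"predicted cleaning radius: {x_max:.1f} m")
```

    A CFD model like the one in the report refines this by resolving the full 3-D flow and turbulence, but the suspension prediction still reduces to comparing a local velocity against a critical value.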

  20. Ground movements associated with large-scale underground coal gasification

    SciTech Connect (OSTI)

    Siriwardane, H.J.; Layne, A.W.

    1989-09-01

    The primary objective of this work was to predict the surface and underground movement associated with large-scale multiwell burn sites in the Illinois Basin and Appalachian Basin by using the subsidence/thermomechanical model UCG/HEAT. This code is based on the finite element method. In particular, it can be used to compute (1) the temperature field around an underground cavity when the temperature variation of the cavity boundary is known, and (2) displacements and stresses associated with body forces (gravitational forces) and a temperature field. It is hypothesized that large Underground Coal Gasification (UCG) cavities generated during the line-drive process will be similar to those generated by longwall mining. If that is the case, then as a UCG process continues, the roof of the cavity becomes unstable and collapses. In the UCG/HEAT computer code, roof collapse is modeled using a simplified failure criterion (Lee 1985). It is anticipated that roof collapse would occur behind the burn front; therefore, forward combustion can be continued. As the gasification front propagates, the length of the cavity would become much larger than its width. Because of this large length-to-width ratio in the cavity, ground response behavior could be analyzed by considering a plane-strain idealization. In a plane-strain idealization of the UCG cavity, a cross-section perpendicular to the axis of propagation could be considered, and a thermomechanical analysis performed using a modified version of the two-dimensional finite element code UCG/HEAT. 15 refs., 9 figs., 3 tabs.
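    The first of the two computations the abstract assigns to UCG/HEAT, the temperature field around a cavity whose boundary temperature is known, can be sketched with a minimal 1-D explicit finite-difference scheme (the code itself uses the finite element method). All material numbers below are illustrative.

```python
alpha = 1.0e-6          # thermal diffusivity of rock, m^2/s (assumed)
dx, dt = 0.1, 3600.0    # grid spacing (m) and time step (s)
r = alpha * dt / dx**2  # must stay below 0.5 for explicit-scheme stability
assert r < 0.5

n = 50
T = [20.0] * n          # initial rock temperature, deg C (assumed)
T[0] = 900.0            # known (fixed) cavity-boundary temperature (assumed)

for _ in range(24 * 30):  # one step per hour for ~30 days
    # interior update: T_i <- T_i + r * (T_{i+1} - 2 T_i + T_{i-1})
    T = [T[0]] + [T[i] + r * (T[i+1] - 2.0*T[i] + T[i-1])
                  for i in range(1, n - 1)] + [T[-1]]

print(f"temperature 1 m into the rock after 30 days: {T[10]:.1f} C")
```

    The second task, displacements and stresses from gravity plus this temperature field, is the thermomechanical step that requires the finite element machinery of UCG/HEAT proper.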

  1. EINSTEIN'S SIGNATURE IN COSMOLOGICAL LARGE-SCALE STRUCTURE

    SciTech Connect (OSTI)

    Bruni, Marco; Hidalgo, Juan Carlos; Wands, David

    2014-10-10

    We show how the nonlinearity of general relativity generates a characteristic non-Gaussian signal in cosmological large-scale structure that we calculate at all perturbative orders in a large-scale limit. Newtonian gravity and general relativity provide complementary theoretical frameworks for modeling large-scale structure in ΛCDM cosmology; a relativistic approach is essential to determine initial conditions, which can then be used in Newtonian simulations studying the nonlinear evolution of the matter density. Most inflationary models in the very early universe predict an almost Gaussian distribution for the primordial metric perturbation, ζ. However, we argue that it is the Ricci curvature of comoving-orthogonal spatial hypersurfaces, R, that drives structure formation at large scales. We show how the nonlinear relation between the spatial curvature, R, and the metric perturbation, ζ, translates into a specific non-Gaussian contribution to the initial comoving matter density that we calculate for the simple case of an initially Gaussian ζ. Our analysis shows the nonlinear signature of Einstein's gravity in large-scale structure.

  2. Large-Scale Sequencing: The Future of Genomic Sciences Colloquium

    SciTech Connect (OSTI)

    Margaret Riley; Merry Buckley

    2009-01-01

    Genetic sequencing and the various molecular techniques it has enabled have revolutionized the field of microbiology. Examining and comparing the genetic sequences borne by microbes - including bacteria, archaea, viruses, and microbial eukaryotes - provides researchers with insights into the processes microbes carry out, their pathogenic traits, and new ways to use microorganisms in medicine and manufacturing. Until recently, sequencing entire microbial genomes has been laborious and expensive, and the decision to sequence the genome of an organism was made on a case-by-case basis by individual researchers and funding agencies. Now, thanks to new technologies, the cost and effort of sequencing are within reach for even the smallest facilities, and the ability to sequence the genomes of a significant fraction of microbial life may be possible. The availability of numerous microbial genomes will enable unprecedented insights into microbial evolution, function, and physiology. However, the current ad hoc approach to gathering sequence data has resulted in an unbalanced and highly biased sampling of microbial diversity. A well-coordinated, large-scale effort to target the breadth and depth of microbial diversity would result in the greatest impact. The American Academy of Microbiology convened a colloquium to discuss the scientific benefits of engaging in a large-scale, taxonomically based sequencing project. A group of individuals with expertise in microbiology, genomics, informatics, ecology, and evolution deliberated on the issues inherent in such an effort and generated a set of specific recommendations for how best to proceed. The vast majority of microbes are presently uncultured and, thus, pose significant challenges to such a taxonomically based approach to sampling genome diversity. However, we have yet to even scratch the surface of the genomic diversity among cultured microbes. A coordinated sequencing effort of cultured organisms is an appropriate place to begin.

  3. Initial field testing definition of subsurface sealing and backfilling tests in unsaturated tuff; Yucca Mountain Site Characterization Project

    SciTech Connect (OSTI)

    Fernandez, J.A.; Case, J.B.; Tyburski, J.R.

    1993-05-01

    This report contains an initial definition of the field tests proposed for the Yucca Mountain Project repository sealing program. The tests are intended to resolve various performance and emplacement concerns. Examples of concerns to be addressed include achieving selected hydrologic and structural requirements for seals, removing portions of the shaft liner, excavating keyways, emplacing cementitious and earthen seals, reducing the impact of fines on the hydraulic conductivity of fractures, efficient grouting of fracture zones, sealing of exploratory boreholes, and controlling the flow of water by using engineered designs. Ten discrete tests are proposed to address these and other concerns. These tests are divided into two groups: seal component tests and performance confirmation tests. The seal component tests are the small-scale in situ tests, the intermediate-scale borehole seal tests, the fracture grouting tests, the surface backfill tests, and the grouted rock mass tests. The seal system tests are the seepage control tests, the backfill tests, the bulkhead test in the Calico Hills unit, the large-scale shaft seal and shaft fill tests, and the remote borehole sealing tests. The tests are proposed to be performed in six discrete areas, including welded and non-welded environments, primarily located outside the potential repository area. The final selection of sealing tests will depend on the nature of the geologic and hydrologic conditions encountered during the development of the Exploratory Studies Facility and detailed numerical analyses. Tests are likely to be performed both before and after License Application.

  4. DOE Completes Large-Scale Carbon Sequestration Project Awards | Department

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    of Energy Large-Scale Carbon Sequestration Project Awards DOE Completes Large-Scale Carbon Sequestration Project Awards November 17, 2008 - 4:58pm Addthis Regional Partner to Demonstrate Safe and Permanent Storage of 2 Million Tons of CO2 at Wyoming Site WASHINGTON, DC - Completing a series of awards through its Regional Carbon Sequestration Partnership Program, the U.S. Department of Energy (DOE) today awarded $66.9 million to the Big Sky Regional Carbon Sequestration Partnership for the

  5. Report of the Workshop on Petascale Systems Integration for LargeScale Facilities

    SciTech Connect (OSTI)

    Kramer, William T.C.; Walter, Howard; New, Gary; Engle, Tom; Pennington, Rob; Comes, Brad; Bland, Buddy; Tomlison, Bob; Kasdorf, Jim; Skinner, David; Regimbal, Kevin

    2007-10-01

    There are significant issues regarding large-scale system integration that are not being addressed in other forums such as current research portfolios or vendor user groups. Unfortunately, issues in the area of large-scale system integration often fall into a netherworld: not research, not facilities, not procurement, not operations, not user services. Taken together, these issues, along with the impact of sub-optimal integration technology, mean that the time required to deploy, integrate, and stabilize a large-scale system may consume up to 20 percent of its useful life. Improving the state of the art for large-scale system integration has the potential to increase the scientific productivity of these systems. Sites have significant expertise, but there are no easy ways to leverage it among them. Many issues inhibit the sharing of information, including available time and effort, as well as issues with sharing proprietary information. Vendors also benefit in the long run from the solutions to issues detected during site testing and integration. There is a great deal of enthusiasm for making large-scale system integration a full-fledged partner along with the other major thrusts supported by funding agencies in the definition, design, and use of petascale systems. Integration technology and issues should have a full 'seat at the table' as petascale and exascale initiatives and programs are planned. The workshop attendees identified a wide range of issues and suggested paths forward. Pursuing these with funding opportunities and innovation offers the opportunity to dramatically improve the state of large-scale system integration.

  6. Lessons from Large-Scale Renewable Energy Integration Studies: Preprint

    SciTech Connect (OSTI)

    Bird, L.; Milligan, M.

    2012-06-01

    In general, large-scale integration studies in Europe and the United States find that high penetrations of renewable generation are technically feasible with operational changes and increased access to transmission. This paper describes other key findings such as the need for fast markets, large balancing areas, system flexibility, and the use of advanced forecasting.

  7. Cosmological implications of the CMB large-scale structure

    SciTech Connect (OSTI)

    Melia, Fulvio

    2015-01-01

    The Wilkinson Microwave Anisotropy Probe (WMAP) and Planck may have uncovered several anomalies in the full cosmic microwave background (CMB) sky that could indicate possible new physics driving the growth of density fluctuations in the early universe. These include an unusually low power at the largest scales and an apparent alignment of the quadrupole and octopole moments. In a ΛCDM model where the CMB is described by a Gaussian Random Field, the quadrupole and octopole moments should be statistically independent. The emergence of these low-probability features may simply be due to posterior selections from many such possible effects, whose occurrence would therefore not be as unlikely as one might naively infer. If this is not the case, however, and if these features are not due to effects such as foreground contamination, their combined statistical significance would be equal to the product of their individual significances. In the absence of such extraneous factors, and ignoring the biasing due to posterior selection, the missing large-angle correlations would have a probability as low as ~0.1% and the low-l multipole alignment would be unlikely at the ~4.9% level; under the least favorable conditions, their simultaneous observation in the context of the standard model could then be likely at only the ~0.005% level. In this paper, we explore the possibility that these features are indeed anomalous, and show that the corresponding probability of CMB multipole alignment in the R{sub h}=ct universe would then be ~7-10%, depending on the number of large-scale Sachs-Wolfe induced fluctuations. Since the low power at the largest spatial scales is reproduced in this cosmology without the need to invoke cosmic variance, the overall likelihood of observing both of these features in the CMB is ~7%, much more likely than in ΛCDM, if the anomalies are real. The key physical ingredient responsible for this difference is the existence in the former of a maximum fluctuation
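    The combined-significance arithmetic in the abstract is simply the product of the individual probabilities of the (assumed independent) anomalies, using the quoted ΛCDM numbers:

```python
# Quoted individual probabilities (standard-model context, per the abstract)
p_missing_correlations = 0.001   # ~0.1%: missing large-angle correlations
p_alignment = 0.049              # ~4.9%: low-multipole quadrupole-octopole alignment

# For statistically independent features, significances multiply
p_combined = p_missing_correlations * p_alignment
print(f"combined probability: {p_combined:.4%}")  # ~0.005%, as quoted
```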

  8. GAIA: A WINDOW TO LARGE-SCALE MOTIONS

    SciTech Connect (OSTI)

    Nusser, Adi; Branchini, Enzo; Davis, Marc E-mail: branchin@fis.uniroma3.it

    2012-08-10

    Using redshifts as a proxy for galaxy distances, estimates of the two-dimensional (2D) transverse peculiar velocities of distant galaxies could be obtained from future measurements of proper motions. We provide the mathematical framework for analyzing 2D transverse motions and show that they offer several advantages over traditional probes of large-scale motions. They are completely independent of any intrinsic relations between galaxy properties; hence, they are essentially free of selection biases. They are free from homogeneous and inhomogeneous Malmquist biases that typically plague distance indicator catalogs. They provide additional information to traditional probes that yield line-of-sight peculiar velocities only. Further, because of their 2D nature, fundamental questions regarding vorticity of large-scale flows can be addressed. Gaia, for example, is expected to provide proper motions of at least bright galaxies with high central surface brightness, making proper motions a likely contender for traditional probes based on current and future distance indicator measurements.
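    The conversion underlying such 2D transverse velocities is the standard proper-motion relation v_t [km/s] = 4.74 × μ [arcsec/yr] × d [pc], where 4.74 km/s corresponds to 1 AU/yr. The numbers below are illustrative, not Gaia specifications.

```python
def transverse_velocity(mu_mas_per_yr, distance_mpc):
    """Transverse velocity (km/s) from proper motion (mas/yr) and distance (Mpc)."""
    mu_arcsec_per_yr = mu_mas_per_yr * 1.0e-3   # mas/yr -> arcsec/yr
    d_pc = distance_mpc * 1.0e6                  # Mpc -> pc
    return 4.74 * mu_arcsec_per_yr * d_pc

# A 10 micro-arcsecond/yr proper motion at 10 Mpc (illustrative numbers):
v = transverse_velocity(0.01, 10.0)
print(f"v_t = {v:.0f} km/s")
```

    The relation makes the observational challenge plain: cosmologically interesting velocities of a few hundred km/s at tens of Mpc correspond to proper motions of only micro-arcseconds per year.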

  9. The workshop on iterative methods for large scale nonlinear problems

    SciTech Connect (OSTI)

    Walker, H.F.; Pernice, M.

    1995-12-01

    The aim of the workshop was to bring together researchers working on large scale applications with numerical specialists of various kinds. Applications that were addressed included reactive flows (combustion and other chemically reacting flows, tokamak modeling), porous media flows, cardiac modeling, chemical vapor deposition, image restoration, macromolecular modeling, and population dynamics. Numerical areas included Newton iterative (truncated Newton) methods, Krylov subspace methods, domain decomposition and other preconditioning methods, large scale optimization and optimal control, and parallel implementations and software. This report offers a brief summary of workshop activities and information about the participants. Interested readers are encouraged to look into an online proceedings available at http://www.usi.utah.edu/logan.proceedings. There, the material offered here is augmented with hypertext abstracts that include links to locations such as speakers' home pages, PostScript copies of talks and papers, cross-references to related talks, and other information about topics addressed at the workshop.
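    A minimal sketch of the Newton-iterative idea the workshop addressed: solve F(x) = 0 by repeated linearization. The 2×2 system below is invented, and its exact linear solve (Cramer's rule) stands in for the inexact Krylov solve used in truncated-Newton / Newton-Krylov methods at scale.

```python
def F(x, y):
    # Invented test system: circle of radius 2 intersected with the line x = y
    return (x*x + y*y - 4.0, x - y)

def jacobian(x, y):
    return ((2.0*x, 2.0*y), (1.0, -1.0))

def newton(x, y, tol=1e-12, max_it=50):
    """Newton's method: at each step solve J * delta = -F, then update."""
    for _ in range(max_it):
        f1, f2 = F(x, y)
        if abs(f1) + abs(f2) < tol:
            break
        (a, b), (c, d) = jacobian(x, y)
        det = a*d - b*c
        dx = (-f1*d + b*f2) / det    # Cramer's rule for J [dx, dy]^T = [-f1, -f2]^T
        dy = (-a*f2 + c*f1) / det
        x, y = x + dx, y + dy
    return x, y

x, y = newton(1.0, 2.0)
print(x, y)   # converges to (sqrt(2), sqrt(2))
```

    In the large-scale setting the Jacobian is never formed or factored; a Krylov method (e.g. GMRES) approximately solves each linear system using only Jacobian-vector products, which is what makes these methods viable for the application problems listed above.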

  10. LARGE-SCALE MOTIONS IN THE PERSEUS GALAXY CLUSTER

    SciTech Connect (OSTI)

    Simionescu, A.; Werner, N.; Urban, O.; Allen, S. W.; Fabian, A. C.; Sanders, J. S.; Mantz, A.; Nulsen, P. E. J.; Takei, Y.

    2012-10-01

    By combining large-scale mosaics of ROSAT PSPC, XMM-Newton, and Suzaku X-ray observations, we present evidence for large-scale motions in the intracluster medium of the nearby, X-ray bright Perseus Cluster. These motions are suggested by several alternating and interleaved X-ray bright, low-temperature, low-entropy arcs located along the east-west axis, at radii ranging from ~10 kpc to over a Mpc. Thermodynamic features qualitatively similar to these have previously been observed in the centers of cool-core clusters, and were successfully modeled as a consequence of the gas sloshing/swirling motions induced by minor mergers. Our observations indicate that such sloshing/swirling can extend out to larger radii than previously thought, on scales approaching the virial radius.

  11. Robust, Multifunctional Joint for Large Scale Power Production Stacks -

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Energy Innovation Portal Robust, Multifunctional Joint for Large Scale Power Production Stacks Lawrence Berkeley National Laboratory Contact LBL About This Technology DIAGRAM OF BERKELEY LAB'S MULTIFUNCTIONAL JOINT DIAGRAM OF BERKELEY LAB'S MULTIFUNCTIONAL JOINT Technology Marketing SummaryBerkeley Lab scientists have developed a multifunctional joint for metal supported, tubular SOFCs that divides various joint functions so that materials and methods optimizing each function can be chosen

  12. Large-scale ab initio configuration interaction calculations for light

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    nuclei | Argonne Leadership Computing Facility Large-scale ab initio configuration interaction calculations for light nuclei Authors: Pieter Maris, H Metin Aktulga, Mark A Caprio, Ümit V Çatalyürek, Esmond G Ng, Dossay Oryspayev, Hugh Potter, Erik Saule, Masha Sosonkina, James P Vary, Chao Yang, Zheng Zhou In ab initio Configuration Interaction calculations, the nuclear wavefunction is expanded in Slater determinants of single-nucleon wavefunctions and the many-body Schrödinger equation

  13. Large-Scale Optimization for Bayesian Inference in Complex Systems

    SciTech Connect (OSTI)

    Willcox, Karen; Marzouk, Youssef

    2013-11-12

    The SAGUARO (Scalable Algorithms for Groundwater Uncertainty Analysis and Robust Optimization) Project focused on the development of scalable numerical algorithms for large-scale Bayesian inversion in complex systems that capitalize on advances in large-scale simulation-based optimization and inversion methods. The project was a collaborative effort among MIT, the University of Texas at Austin, Georgia Institute of Technology, and Sandia National Laboratories. The research was directed in three complementary areas: efficient approximations of the Hessian operator, reductions in complexity of forward simulations via stochastic spectral approximations and model reduction, and employing large-scale optimization concepts to accelerate sampling. The MIT-Sandia component of the SAGUARO Project addressed the intractability of conventional sampling methods for large-scale statistical inverse problems by devising reduced-order models that are faithful to the full-order model over a wide range of parameter values; sampling then employs the reduced model rather than the full model, resulting in very large computational savings. Results indicate little effect on the computed posterior distribution. On the other hand, in the Texas-Georgia Tech component of the project, we retain the full-order model, but exploit inverse problem structure (adjoint-based gradients and partial Hessian information of the parameter-to-observation map) to implicitly extract lower dimensional information on the posterior distribution; this greatly speeds up sampling methods, so that fewer sampling points are needed. We can think of these two approaches as "reduce then sample" and "sample then reduce." In fact, these two approaches are complementary, and can be used in conjunction with each other. Moreover, they both exploit deterministic inverse problem structure, in the form of adjoint-based gradient and Hessian information of the underlying parameter-to-observation map, to achieve their
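    The "reduce then sample" strategy can be sketched in miniature: fit a cheap reduced model to a (notionally expensive) forward model once, offline, and let the sampler afterwards call only the reduced model. The models, basis, and training grid below are invented stand-ins, not SAGUARO's actual formulation.

```python
import math

def full_model(theta):
    # Stand-in for an expensive forward simulation (e.g., a PDE solve)
    return math.sin(theta) + 0.1 * theta**3

# Offline stage: least-squares fit of a cheap odd-cubic reduced model
# c1*t + c3*t^3 on a coarse training grid (2x2 normal equations).
grid = [i * 0.1 for i in range(-20, 21)]
s11 = sum(t**2 for t in grid)
s13 = sum(t**4 for t in grid)
s33 = sum(t**6 for t in grid)
b1 = sum(t * full_model(t) for t in grid)
b3 = sum(t**3 * full_model(t) for t in grid)
det = s11 * s33 - s13 * s13
c1 = (b1 * s33 - s13 * b3) / det
c3 = (s11 * b3 - s13 * b1) / det

def reduced_model(theta):
    # Online stage: an MCMC sampler would evaluate only this cheap surrogate
    return c1 * theta + c3 * theta**3

max_err = max(abs(full_model(t) - reduced_model(t)) for t in grid)
print(f"max surrogate error on the training range: {max_err:.4f}")
```

    As the abstract notes, the savings come with a fidelity question: the reduced model must stay faithful across the whole parameter range the sampler explores, or the computed posterior will be biased.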

  14. UNIVERSITY OF CALIFORNIA The Future of Large Scale Visual Data

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    CALIFORNIA The Future of Large Scale Visual Data Analysis Joint Facilities User Forum on Data Intensive Computing Oakland, CA E. Wes Bethel Lawrence Berkeley National Laboratory 16 June 2014 16 June 2014 The World that Was: Computational Architectures * Machine architectures - Single CPU, single core - Vector, then single-core MPPs - "Large" SMP platforms - Relatively well balanced: memory, FLOPS,I/O 16 June 2014 The World that Was: Software Architecture * Data Analysis and

  15. Large Scale Computing and Storage Requirements for Fusion Energy Sciences:

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Target 2014 High Energy Physics (HEP) Nuclear Physics (NP) Overview Published Reports Case Study FAQs NERSC HPC Achievement Awards Share Your Research User Submitted Research Citations NERSC Citations Home » Science at NERSC » HPC Requirements Reviews » Requirements Reviews: Target 2014 » Fusion Energy Sciences (FES) Large Scale Computing and Storage Requirements for Fusion Energy Sciences: Target 2014 FESFrontcover.png An FES / ASCR / NERSC Workshop August 3-4, 2010 Final Report Large

  16. Large Scale Production Computing and Storage Requirements for Advanced

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Scientific Computing Research: Target 2017 Large Scale Production Computing and Storage Requirements for Advanced Scientific Computing Research: Target 2017 ASCRLogo.png This is an invitation-only review organized by the Department of Energy's Office of Advanced Scientific Computing Research (ASCR) and NERSC. The general goal is to determine production high-performance computing, storage, and services that will be needed for ASCR to achieve its science goals through 2017. A specific focus

  17. Large Scale Production Computing and Storage Requirements for Basic Energy

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Sciences: Target 2017 Large Scale Production Computing and Storage Requirements for Basic Energy Sciences: Target 2017 BES-Montage.png This is an invitation-only review organized by the Department of Energy's Office of Basic Energy Sciences (BES), Office of Advanced Scientific Computing Research (ASCR), and the National Energy Research Scientific Computing Center (NERSC). The goal is to determine production high-performance computing, storage, and services that will be needed for BES to

  18. Large Scale Production Computing and Storage Requirements for Biological

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    and Environmental Research: Target 2017 Large Scale Production Computing and Storage Requirements for Biological and Environmental Research: Target 2017 BERmontage.gif September 11-12, 2012 Hilton Rockville Hotel and Executive Meeting Center 1750 Rockville Pike Rockville, MD, 20852-1699 TEL: 1-301-468-1100 Sponsored by: U.S. Department of Energy Office of Science Office of Advanced Scientific Computing Research (ASCR) Office of Biological and Environmental Research (BER) National Energy

  19. Large Scale Production Computing and Storage Requirements for Nuclear

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Physics: Target 2017 Large Scale Production Computing and Storage Requirements for Nuclear Physics: Target 2017 NPicon.png This invitation-only review is organized by the Department of Energy's Offices of Nuclear Physics (NP) and Advanced Scientific Computing Research (ASCR) and by NERSC. The goal is to determine production high-performance computing, storage, and services that will be needed for NP to achieve its science goals through 2017. The review brings together DOE Program Managers,

  20. Computational Fluid Dynamics & Large-Scale Uncertainty Quantification for

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Wind Energy Fluid Dynamics & Large-Scale Uncertainty Quantification for Wind Energy - Sandia Energy

  1. Geospatial Optimization of Siting Large-Scale Solar Projects

    SciTech Connect (OSTI)

    Macknick, J.; Quinby, T.; Caulfield, E.; Gerritsen, M.; Diffendorfer, J.; Haines, S.

    2014-03-01

    Recent policy and economic conditions have encouraged a renewed interest in developing large-scale solar projects in the U.S. Southwest. However, siting large-scale solar projects is complex. In addition to the quality of the solar resource, solar developers must take into consideration many environmental, social, and economic factors when evaluating a potential site. This report describes a proof-of-concept, Web-based Geographical Information Systems (GIS) tool that evaluates multiple user-defined criteria in an optimization algorithm to inform discussions and decisions regarding the locations of utility-scale solar projects. Existing siting recommendations for large-scale solar projects from governmental and non-governmental organizations are not consistent with each other, are often not transparent in methods, and do not take into consideration the differing priorities of stakeholders. The siting assistance GIS tool we have developed improves upon the existing siting guidelines by being user-driven, transparent, interactive, capable of incorporating multiple criteria, and flexible. This work provides the foundation for a dynamic siting assistance tool that can greatly facilitate siting decisions among multiple stakeholders.
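    The weighted multi-criteria idea behind such a siting tool can be sketched in a few lines. The criteria names, weights, and site values below are purely illustrative, not the tool's actual inputs.

```python
# Toy weighted-criteria site scoring, a sketch of the user-driven
# multi-criteria idea (criteria names and weights are hypothetical).

def score_site(site, weights):
    """Weighted sum of normalized criteria (all in [0, 1], higher = better)."""
    return sum(weights[k] * site[k] for k in weights)

sites = {
    "A": {"solar_resource": 0.9, "grid_proximity": 0.4, "env_sensitivity": 0.7},
    "B": {"solar_resource": 0.7, "grid_proximity": 0.9, "env_sensitivity": 0.9},
}
# Stakeholder-chosen weights sum to 1; env_sensitivity of 1 means low impact.
weights = {"solar_resource": 0.5, "grid_proximity": 0.2, "env_sensitivity": 0.3}

ranked = sorted(sites, key=lambda s: score_site(sites[s], weights), reverse=True)
```

    Stakeholders with different priorities simply supply different weight vectors, which is the user-driven flexibility the report emphasizes.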

  2. Measuring and tuning energy efficiency on large scale high performance computing platforms.

    SciTech Connect (OSTI)

    Laros, James H., III

    2011-08-01

    Recognition of the importance of power in the field of High Performance Computing, whether it be as an obstacle, expense or design consideration, has never been greater and more pervasive. While research has been conducted on many related aspects, there is a stark absence of work focused on large scale High Performance Computing. Part of the reason is the lack of measurement capability currently available on small or large platforms. Typically, research is conducted using coarse methods of measurement such as inserting a power meter between the power source and the platform, or fine grained measurements using custom instrumented boards (with obvious limitations in scale). To collect the measurements necessary to analyze real scientific computing applications at large scale, an in-situ measurement capability must exist on a large scale capability class platform. In response to this challenge, we exploit the unique power measurement capabilities of the Cray XT architecture to gain an understanding of power use and the effects of tuning. We apply these capabilities at the operating system level by deterministically halting cores when idle. At the application level, we gain an understanding of the power requirements of a range of important DOE/NNSA production scientific computing applications running at large scale (thousands of nodes), while simultaneously collecting current and voltage measurements on the hosting nodes. We examine the effects of both CPU and network bandwidth tuning and demonstrate energy savings opportunities of up to 39% with little or no impact on run-time performance. Capturing scale effects in our experimental results was key. Our results provide strong evidence that next generation large-scale platforms should not only approach CPU frequency scaling differently, but could also benefit from the capability to tune other platform components, such as the network, to achieve energy efficient performance.
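    The in-situ measurement idea reduces to integrating sampled power over time. A toy sketch with made-up per-node samples, not actual Cray XT telemetry:

```python
# Rectangle-rule sum of sampled node power: P[i] = V[i] * I[i], E = sum(P) * dt.
# The sample values and the 1 s sampling interval are illustrative.

def node_energy_joules(volts, amps, dt_s):
    """Total energy (J) from paired voltage/current samples at interval dt_s."""
    return sum(v * a for v, a in zip(volts, amps)) * dt_s

volts = [12.0, 12.0, 11.9, 12.1]   # volts, one sample per second
amps  = [8.0, 7.5, 7.8, 8.1]       # amps on the same node
energy = node_energy_joules(volts, amps, 1.0)
```

    Comparing this integral across tuned and untuned runs of the same application is how per-component savings like the reported 39% would be quantified.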

  3. Robust large-scale parallel nonlinear solvers for simulations.

    SciTech Connect (OSTI)

    Bader, Brett William; Pawlowski, Roger Patrick; Kolda, Tamara Gibson

    2005-11-01

    This report documents research to develop robust and efficient solution techniques for solving large-scale systems of nonlinear equations. The most widely used method for solving systems of nonlinear equations is Newton's method. While much research has been devoted to augmenting Newton-based solvers (usually with globalization techniques), little has been devoted to exploring the application of different models. Our research has been directed at evaluating techniques using different models than Newton's method: a lower order model, Broyden's method, and a higher order model, the tensor method. We have developed large-scale versions of each of these models and have demonstrated their use in important applications at Sandia. Broyden's method replaces the Jacobian with an approximation, allowing codes that cannot evaluate a Jacobian or have an inaccurate Jacobian to converge to a solution. Limited-memory methods, which have been successful in optimization, allow us to extend this approach to large-scale problems. We compare the robustness and efficiency of Newton's method, modified Newton's method, Jacobian-free Newton-Krylov method, and our limited-memory Broyden method. Comparisons are carried out for large-scale applications of fluid flow simulations and electronic circuit simulations. Results show that, in cases where the Jacobian was inaccurate or could not be computed, Broyden's method converged in some cases where Newton's method failed to converge. We identify conditions where Broyden's method can be more efficient than Newton's method. We also present modifications to a large-scale tensor method, originally proposed by Bouaricha, for greater efficiency, better robustness, and wider applicability. Tensor methods are an alternative to Newton-based methods and are based on computing a step based on a local quadratic model rather than a linear model. The advantage of Bouaricha's method is that it can use any existing linear solver, which makes it simple to write
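    Broyden's secant update, the core of the lower order model discussed above, can be illustrated with a small dense implementation (the report's limited-memory variant avoids storing the full matrix; this sketch does not):

```python
# Minimal dense Broyden's method for a 2x2 system: the Jacobian is replaced
# by a secant-updated approximation B, starting from the identity.

def solve2(A, b):
    """Solve a 2x2 linear system by Cramer's rule."""
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [(b[0] * A[1][1] - b[1] * A[0][1]) / det,
            (A[0][0] * b[1] - A[1][0] * b[0]) / det]

def broyden(f, x, tol=1e-10, max_iter=50):
    B = [[1.0, 0.0], [0.0, 1.0]]           # identity as initial Jacobian guess
    fx = f(x)
    for _ in range(max_iter):
        if max(abs(v) for v in fx) < tol:
            break
        dx = solve2(B, [-v for v in fx])    # Newton-like step with B ~ J
        x_new = [x[i] + dx[i] for i in range(2)]
        f_new = f(x_new)
        df = [f_new[i] - fx[i] for i in range(2)]
        denom = sum(d * d for d in dx)
        # Rank-one secant update: B += (df - B dx) dx^T / (dx^T dx)
        Bdx = [B[i][0] * dx[0] + B[i][1] * dx[1] for i in range(2)]
        for i in range(2):
            for j in range(2):
                B[i][j] += (df[i] - Bdx[i]) * dx[j] / denom
        x, fx = x_new, f_new
    return x

# Linear smoke test (Broyden terminates finitely on nonsingular linear
# systems): f(x, y) = (x + y - 3, x - y - 1), root at (2, 1).
root = broyden(lambda p: [p[0] + p[1] - 3.0, p[0] - p[1] - 1.0], [0.0, 0.0])
```

    No Jacobian evaluation ever occurs, which is exactly why Broyden's method helps codes that cannot form one.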

  4. Towards physics responsible for large-scale Lyman-α forest bias parameters

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Cieplak, Agnieszka M.; Slosar, Anze

    2016-03-08

    Using a series of carefully constructed numerical experiments based on hydrodynamic cosmological SPH simulations, we attempt to build an intuition for the relevant physics behind the large scale density (bδ) and velocity gradient (bη) biases of the Lyman-α forest. Starting with the fluctuating Gunn-Peterson approximation applied to the smoothed total density field in real-space, and progressing through redshift-space with no thermal broadening, redshift-space with thermal broadening and hydrodynamically simulated baryon fields, we investigate how approximations found in the literature fare. We find that Seljak's 2012 analytical formulae for these bias parameters work surprisingly well in the limit of no thermal broadening and linear redshift-space distortions. We also show that his bη formula is exact in the limit of no thermal broadening. Since introduction of thermal broadening significantly affects its value, we speculate that a combination of large-scale measurements of bη and the small scale flux PDF might be a sensitive probe of the thermal state of the IGM. Lastly, we find that large-scale biases derived from the smoothed total matter field are within 10–20% of those based on hydrodynamical quantities, in line with other measurements in the literature.
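    The real-space, no-thermal-broadening starting point of this analysis can be imitated with a toy Monte Carlo estimate of the flux bias. The FGPA parameters and the lognormal small-scale density model below are illustrative stand-ins for the SPH simulations, not the paper's setup:

```python
import math, random

# Toy estimate of the large-scale flux bias b_delta under the fluctuating
# Gunn-Peterson approximation F = exp(-A * (1 + delta)^BETA), in real space
# with no thermal broadening. A, BETA, SIGMA, and the lognormal density
# model are illustrative assumptions.

random.seed(1)
A, BETA, SIGMA = 0.3, 1.6, 0.8

def mean_flux(delta_L, n=200_000):
    """Monte Carlo <F> in a region with large-scale overdensity delta_L."""
    total = 0.0
    for _ in range(n):
        # lognormal small-scale density, modulated by the large-scale mode
        delta_s = math.exp(random.gauss(-SIGMA**2 / 2, SIGMA)) - 1.0
        rho = (1.0 + delta_L) * (1.0 + delta_s)
        total += math.exp(-A * rho**BETA)
    return total / n

# b_delta ~ d ln<F> / d delta_L, estimated by a central difference
eps = 0.01
b_delta = (math.log(mean_flux(eps)) - math.log(mean_flux(-eps))) / (2 * eps)
```

    The sign comes out negative: a large-scale overdensity raises the optical depth and suppresses the mean transmitted flux.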

  5. HyLights -- Tools to Prepare the Large-Scale European Demonstration...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    HYLIGHTS - TOOLS TO PREPARE THE LARGE-SCALE EUROPEAN DEMONSTRATION PROJECTS ON HYDROGEN ... Assist the European Commission and European industry to plan the large-scale demonstration ...

  6. Reducing Data Center Loads for a Large-scale, Low Energy Office...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Data Center Loads for a Large- scale, Low-energy Office Building: NREL's Research Support ... National Renewable Energy Laboratory Reducing Data Center Loads for a Large-Scale, ...

  7. Trip Report-Produced-Water Field Testing

    SciTech Connect (OSTI)

    Sullivan, Enid J.

    2012-05-25

    Los Alamos National Laboratory (LANL) conducted field testing of a produced-water pretreatment apparatus with assistance from faculty at the Texas A&M University (TAMU) protein separation sciences laboratory located on the TAMU main campus. The following report details all of the logistics surrounding the testing. The purpose of the test was to use a new, commercially-available filter media housing containing modified zeolite (surfactant-modified zeolite or SMZ) porous medium for use in pretreatment of oil and gas produced water (PW) and frac-flowback waters. The SMZ was tested previously in October, 2010 in a lab-constructed configuration ('old multicolumn system'), and performed well for removal of benzene, toluene, ethylbenzene, and xylenes (BTEX) from PW. However, a less-expensive, modular configuration is needed for field use. A modular system will allow the field operator to add or subtract SMZ filters as needed to accommodate site specific conditions, and to swap out used filters easily in a multi-unit system. This test demonstrated the use of a commercial filter housing with a simple flow modification and packed with SMZ for removing BTEX from a PW source in College Station, Texas. The system will be tested in June 2012 at a field site in Pennsylvania for treating frac-flowback waters. The goals of this test are: (1) to determine sorption efficiency of BTEX in the new configuration; and (2) to observe the range of flow rates, backpressures, and total volume treated at a given flow rate.
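    Goal (1), sorption efficiency, reduces to fractional removal across the filter. A sketch with hypothetical influent/effluent BTEX concentrations (the test's actual measured values are not reported here):

```python
# Removal fraction per analyte: (C_in - C_out) / C_in.
# Concentrations below (mg/L) are hypothetical examples.

def sorption_efficiency(c_in, c_out):
    """Fraction of an analyte removed across the SMZ filter."""
    return (c_in - c_out) / c_in

btex_in  = {"benzene": 2.0, "toluene": 1.5, "ethylbenzene": 0.6, "xylenes": 1.8}
btex_out = {"benzene": 0.2, "toluene": 0.3, "ethylbenzene": 0.06, "xylenes": 0.45}

removal = {k: sorption_efficiency(btex_in[k], btex_out[k]) for k in btex_in}
```

    Tracking this fraction against cumulative treated volume at a fixed flow rate would address goal (2) as well.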

  8. Large-Scale Urban Decontamination; Developments, Historical Examples and Lessons Learned

    SciTech Connect (OSTI)

    Rick Demmer

    2007-02-01

    Recent terrorist threats and actual events have led to a renewed interest in the technical field of large scale, urban environment decontamination. One of the driving forces for this interest is the real potential for the cleanup and removal of radioactive dispersal device (RDD or dirty bomb) residues. In response, the U. S. Government has spent many millions of dollars investigating RDD contamination and novel decontamination methodologies. Interest in chemical and biological (CB) cleanup has also peaked with the threat of terrorist action like the anthrax attack at the Hart Senate Office Building and with catastrophic natural events such as Hurricane Katrina. The efficiency of cleanup response will be improved with these new developments and a better understanding of the old reliable methodologies. Perhaps the most interesting area of investigation for large area decontamination is that of the RDD. While an RDD is primarily an economic and psychological weapon, the need to clean up and return valuable or culturally significant resources to the public is nonetheless valid. Several private companies, universities and National Laboratories are currently developing novel RDD cleanup technologies. Because of their longstanding association with radioactive facilities, the U. S. Department of Energy National Laboratories are at the forefront in developing and testing new RDD decontamination methods. However, such cleanup technologies are likely to be fairly task specific, and the many different contamination mechanisms, substrates and environmental conditions will make actual application more complicated. Some major efforts have also been made to model potential contamination, to evaluate both old and new decontamination techniques and to assess their readiness for use. Non-radioactive, CB threats each have unique decontamination challenges and recent events have provided some examples. The U. S. Environmental Protection Agency (EPA), as lead agency for these emergency cleanup

  9. High Fidelity Simulations of Large-Scale Wireless Networks (Plus-Up)

    SciTech Connect (OSTI)

    Onunkwo, Uzoma

    2015-11-01

    Sandia has built a strong reputation in scalable network simulation and emulation for cyber security studies to protect our nation’s critical information infrastructures. Georgia Tech has a preeminent reputation in academia for excellence in scalable discrete event simulations, with a strong emphasis on simulating cyber networks. Many of the experts in this field, such as Dr. Richard Fujimoto, Dr. George Riley, and Dr. Chris Carothers, have strong affiliations with Georgia Tech. The collaborative relationship that we intend to immediately pursue is in high fidelity simulations of practical large-scale wireless networks using the ns-3 simulator via Dr. George Riley. This project will have mutual benefits in bolstering both institutions’ expertise and reputation in the field of scalable simulation for cyber-security studies. This project promises to address high fidelity simulations of large-scale wireless networks. This proposed collaboration is directly in line with Georgia Tech’s goals for developing and expanding the Communications Systems Center, the Georgia Tech Broadband Institute, and the Georgia Tech Information Security Center along with its yearly Emerging Cyber Threats Report. At Sandia, this work benefits the defense systems and assessment area with promise for large-scale assessment of cyber security needs and vulnerabilities of our nation’s critical cyber infrastructures exposed to wireless communications.

  10. Large scale obscuration and related climate effects open literature bibliography

    SciTech Connect (OSTI)

    Russell, N.A.; Geitgey, J.; Behl, Y.K.; Zak, B.D.

    1994-05-01

    Large scale obscuration and related climate effects of nuclear detonations first became a matter of concern in connection with the so-called "Nuclear Winter Controversy" in the early 1980s. Since then, the world has changed. Nevertheless, concern remains about the atmospheric effects of nuclear detonations, but the source of concern has shifted. Now it focuses less on global, and more on regional effects and their resulting impacts on the performance of electro-optical and other defense-related systems. This bibliography reflects the modified interest.

  11. Accelerated Stress Testing, Qualification Testing, HAST, Field Experience

    Broader source: Energy.gov [DOE]

    This presentation, which was the opening session of the NREL 2013 Photovoltaic Module Reliability Workshop held on February 26, 2013 in Golden, CO, was presented by John Wohlgemuth. Entitled "Accelerated Stress Testing, Qualification Testing, HAST, Field Experience -- What Do They All Mean?" the presentation details efforts to develop accelerated stress tests beyond the qualification test levels, which are necessary to predict PV module wear-out. The commercial success of PVs is ultimately based on the long-term reliability and safety of the deployed PV modules.

  12. SMART Wind Turbine Rotor: Design and Field Test | Department...

    Office of Environmental Management (EM)

    Design and Field Test SMART Wind Turbine Rotor: Design and Field Test This report documents the design, fabrication, and testing of the SMART Wind Turbine Rotor. This work ...

  13. Combustion Safety Simplified Test Protocol Field Study

    SciTech Connect (OSTI)

    Brand, L.; Cautley, D.; Bohac, D.; Francisco, P.; Shen, L.; Gloss, S.

    2015-11-01

    Combustion safety is an important step in the process of upgrading homes for energy efficiency. There are several approaches used by field practitioners, but researchers have indicated that the test procedures in use are complex to implement and produce too many false positives. Field failures often mean that the house is not upgraded until after remediation, or not at all if it is not included in the program. In this report the PARR and NorthernSTAR DOE Building America Teams provide a simplified test procedure that is easier to implement and should produce fewer false positives. A survey of state weatherization agencies on combustion safety issues, details of a field data collection instrumentation package, a summary of data collected over seven months, data analysis, and results are included. The project team collected field data on 11 houses in 2015.

  14. Large-scale anisotropy in stably stratified rotating flows

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Marino, R.; Mininni, P. D.; Rosenberg, D. L.; Pouquet, A.

    2014-08-28

    We present results from direct numerical simulations of the Boussinesq equations in the presence of rotation and/or stratification, both in the vertical direction. The runs are forced isotropically and randomly at small scales and have spatial resolutions of up to $1024^3$ grid points and Reynolds numbers of $\approx 1000$. We first show that solutions with negative energy flux and inverse cascades develop in rotating turbulence, whether or not stratification is present. However, the purely stratified case is characterized instead by an early-time, highly anisotropic transfer to large scales with almost zero net isotropic energy flux. This is consistent with previous studies that observed the development of vertically sheared horizontal winds, although only at substantially later times. However, and unlike previous works, when sufficient scale separation is allowed between the forcing scale and the domain size, the total energy displays a perpendicular (horizontal) spectrum with power law behavior compatible with $\sim k_\perp^{-5/3}$, including in the absence of rotation. In this latter purely stratified case, such a spectrum is the result of a direct cascade of the energy contained in the large-scale horizontal wind, as is evidenced by a strong positive flux of energy in the parallel direction at all scales including the largest resolved scales.

  15. Detecting differential protein expression in large-scale population proteomics

    SciTech Connect (OSTI)

    Ryu, Soyoung; Qian, Weijun; Camp, David G.; Smith, Richard D.; Tompkins, Ronald G.; Davis, Ronald W.; Xiao, Wenzhong

    2014-06-17

    Mass spectrometry-based high-throughput quantitative proteomics shows great potential in clinical biomarker studies, identifying and quantifying thousands of proteins in biological samples. However, methods are needed to appropriately handle issues unique to mass spectrometry data in order to detect as many biomarker proteins as possible. One issue is that different mass spectrometry experiments generate quite different total numbers of quantified peptides, which can result in more missing peptide abundances in an experiment with a smaller total number of quantified peptides. Another issue is that the quantification of peptides is sometimes absent, especially for less abundant peptides, and such missing values carry information about the peptide abundance. Here, we propose a Significance Analysis for Large-scale Proteomics Studies (SALPS) that handles missing peptide intensity values caused by the two mechanisms mentioned above. Our model has a robust performance in both simulated data and proteomics data from a large clinical study. Because varying patients’ sample qualities and deviating instrument performances are not avoidable for clinical studies performed over the course of several years, we believe that our approach will be useful to analyze large-scale clinical proteomics data.

  16. NREL: Performance and Reliability R&D - Field Testing

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Field Testing Photo of an aerial view of the Outdoor Test Facility and array field. The Outdoor Test Facility forms the backbone of our field-testing capabilities. Photo of some of ...

  17. Numerical simulations of capillary barrier field tests

    SciTech Connect (OSTI)

    Morris, C.E.; Stormont, J.C.

    1997-12-31

    Numerical simulations of two capillary barrier systems tested in the field were conducted to determine if an unsaturated flow model could accurately represent the observed results. The field data was collected from two 7-m long, 1.2-m thick capillary barriers built on a 10% grade that were being tested to investigate their ability to laterally divert water downslope. One system had a homogeneous fine layer, while the fine soil of the second barrier was layered to increase its ability to laterally divert infiltrating moisture. The barriers were subjected first to constant infiltration while minimizing evaporative losses and then were exposed to ambient conditions. The continuous infiltration period of the field tests for the two barrier systems was modelled to determine the ability of an existing code to accurately represent capillary barrier behavior embodied in these two designs. Differences between the field test and the model data were found, but in general the simulations appeared to adequately reproduce the response of the test systems. Accounting for moisture retention hysteresis in the layered system will potentially lead to more accurate modelling results and is likely to be important when developing reasonable predictions of capillary barrier behavior.

  18. Presentation on the Large-Scale Renewable Energy Guide | Department of

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Energy Presentation on the Large-Scale Renewable Energy Guide Presentation on the Large-Scale Renewable Energy Guide Presentation covers the Large-Scale RE Guide: Developing Renewable Energy Projects Larger than 10 MWs at Federal Facilities for the FUPWG Spring meeting, held on May 22, 2013, in San Francisco, California. Download FEMP's Large-Scale Renewable Energy Guide - Presented by Brad Gustafson (1.75 MB)

  19. Nuclear-pumped lasers for large-scale applications

    SciTech Connect (OSTI)

    Anderson, R.E.; Leonard, E.M.; Shea, R.F.; Berggren, R.R.

    1989-05-01

    Efficient initiation of large-volume chemical lasers may be achieved by neutron induced reactions which produce charged particles in the final state. When a burst mode nuclear reactor is used as the neutron source, both a sufficiently intense neutron flux and a sufficiently short initiation pulse may be possible. Proof-of-principle experiments are planned to demonstrate lasing in a direct nuclear-pumped large-volume system; to study the effects of various neutron absorbing materials on laser performance; to study the effects of long initiation pulse lengths; to demonstrate the performance of large-scale optics and the beam quality that may be obtained; and to assess the performance of alternative designs of burst systems that increase the neutron output and burst repetition rate. 21 refs., 8 figs., 5 tabs.

  20. Nuclear-pumped lasers for large-scale applications

    SciTech Connect (OSTI)

    Anderson, R.E.; Leonard, E.M.; Shea, R.E.; Berggren, R.R.

    1988-01-01

    Efficient initiation of large-volume chemical lasers may be achieved by neutron induced reactions which produce charged particles in the final state. When a burst mode nuclear reactor is used as the neutron source, both a sufficiently intense neutron flux and a sufficiently short initiation pulse may be possible. Proof-of-principle experiments are planned to demonstrate lasing in a direct nuclear-pumped large-volume system; to study the effects of various neutron absorbing materials on laser performance; to study the effects of long initiation pulse lengths; to determine the performance of large-scale optics and the beam quality that may be obtained; and to assess the performance of alternative designs of burst systems that increase the neutron output and burst repetition rate. 21 refs., 7 figs., 5 tabs.

  1. Performance Health Monitoring of Large-Scale Systems

    SciTech Connect (OSTI)

    Rajamony, Ram

    2014-11-20

    This report details the progress made on the ASCR-funded project Performance Health Monitoring for Large Scale Systems. A large-scale application may not achieve its full performance potential due to degraded performance of even a single subsystem. Detecting performance faults, isolating them, and taking remedial action is critical for the scale of systems on the horizon. PHM aims to develop techniques and tools that can be used to identify and mitigate such performance problems. We accomplish this through two main aspects. The PHM framework encompasses diagnostics, system monitoring, fault isolation, and performance evaluation capabilities that indicate when a performance fault has been detected, either due to an anomaly present in the system itself or due to contention for shared resources between concurrently executing jobs. Software components called the PHM Control system then build upon the capabilities provided by the PHM framework to mitigate degradation caused by performance problems.
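    One detection primitive of the kind such a framework needs can be sketched as a peer-comparison rule; the 3-sigma threshold and per-node iteration times below are illustrative assumptions, not the project's actual diagnostics:

```python
import math

# Flag nodes whose metric deviates strongly from their peers, e.g. a node
# slowed by contention. The z-score rule and threshold are illustrative.

def flag_outliers(samples, threshold=3.0):
    """Return indices whose value lies more than `threshold` standard
    deviations from the mean of all samples."""
    n = len(samples)
    mean = sum(samples) / n
    std = math.sqrt(sum((x - mean) ** 2 for x in samples) / n)
    return [i for i, x in enumerate(samples) if abs(x - mean) > threshold * std]

# 31 healthy nodes near 1.0 s/iteration, one degraded node at 2.4 s
times = [1.0 + 0.01 * (i % 5) for i in range(31)] + [2.4]
slow_nodes = flag_outliers(times)
```

    Isolating the flagged node, and deciding whether the cause is internal or shared-resource contention, is the harder step the framework addresses.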

  2. Large-Scale All-Dielectric Metamaterial Perfect Reflectors

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Moitra, Parikshit; Slovick, Brian A.; Li, Wei; Kravchencko, Ivan I.; Briggs, Dayrl P.; Krishnamurthy, S.; Valentine, Jason

    2015-05-08

    All-dielectric metamaterials offer a potential low-loss alternative to plasmonic metamaterials at optical frequencies. In this paper, we take advantage of the low absorption loss as well as the simple unit cell geometry to demonstrate large-scale (centimeter-sized) all-dielectric metamaterial perfect reflectors made from silicon cylinder resonators. These perfect reflectors, operating in the telecommunications band, were fabricated using self-assembly based nanosphere lithography. In spite of the disorder originating from the self-assembly process, the average reflectance of the metamaterial perfect reflectors is 99.7% at 1530 nm, surpassing the reflectance of metallic mirrors. Moreover, the spectral separation of the electric and magnetic resonances can be chosen to achieve the required reflection bandwidth while maintaining a high tolerance to disorder. Finally, the scalability of this design could lead to new avenues of manipulating light for low-loss and large-area photonic applications.

  3. Planning under uncertainty solving large-scale stochastic linear programs

    SciTech Connect (OSTI)

    Infanger, G. (Dept. of Operations Research; Technische Univ. Vienna, Inst. fuer Energiewirtschaft)

    1992-12-01

    For many practical problems, solutions obtained from deterministic models are unsatisfactory because they fail to hedge against certain contingencies that may occur in the future. Stochastic models address this shortcoming, but up to recently seemed to be intractable due to their size. Recent advances both in solution algorithms and in computer technology now allow us to solve important and general classes of practical stochastic problems. We show how large-scale stochastic linear programs can be efficiently solved by combining classical decomposition and Monte Carlo (importance) sampling techniques. We discuss the methodology for solving two-stage stochastic linear programs with recourse, present numerical results of large problems with numerous stochastic parameters, show how to efficiently implement the methodology on a parallel multi-computer and derive the theory for solving a general class of multi-stage problems with dependency of the stochastic parameters within a stage and between different stages.
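    The Monte Carlo sampling idea can be shown on the smallest two-stage problem with recourse, a newsvendor-style capacity choice. The costs and demand distribution are illustrative, and the brute-force search stands in for the decomposition the paper actually combines with sampling:

```python
import random

# Two-stage stochastic program sketch: choose capacity x now (first stage),
# pay shortage/holding recourse once demand is revealed (second stage).
# Costs and the Gaussian demand model are illustrative assumptions.

random.seed(0)
COST, SHORTAGE, HOLDING = 1.0, 4.0, 0.5

def expected_cost(x, demands):
    """Sample-average of first-stage plus recourse cost at decision x."""
    total = 0.0
    for d in demands:
        recourse = SHORTAGE * max(d - x, 0.0) + HOLDING * max(x - d, 0.0)
        total += COST * x + recourse
    return total / len(demands)

demands = [random.gauss(100.0, 20.0) for _ in range(5000)]
# Crude enumeration over candidate decisions; a real solver would use
# decomposition (e.g. Benders cuts) instead of a grid search.
best_x = min(range(60, 161), key=lambda x: expected_cost(float(x), demands))
```

    Replacing the grid search with cuts built from sampled subproblem duals is, in spirit, the combination of decomposition and importance sampling the abstract describes.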

  4. High Fidelity Simulations of Large-Scale Wireless Networks

    SciTech Connect (OSTI)

    Onunkwo, Uzoma; Benz, Zachary

    2015-11-01

    The worldwide proliferation of wireless connected devices continues to accelerate. There are 10s of billions of wireless links across the planet, with an additional explosion of new wireless usage anticipated as the Internet of Things develops. Wireless technologies not only provide convenience for mobile applications, but are also extremely cost-effective to deploy. Thus, this trend towards wireless connectivity will only continue, and Sandia must develop the necessary simulation technology to proactively analyze the associated emerging vulnerabilities. Wireless networks are marked by mobility and proximity-based connectivity. The de facto standard for exploratory studies of wireless networks is discrete event simulation (DES). However, the simulation of large-scale wireless networks is extremely difficult due to prohibitively large turnaround times. A path forward is to expedite simulations with parallel discrete event simulation (PDES) techniques. The mobility and distance-based connectivity associated with wireless simulations, however, typically doom PDES, which fails to scale (e.g., in the OPNET and ns-3 simulators). We propose a PDES-based tool aimed at reducing the communication overhead between processors. The proposed solution will use light-weight processes to dynamically distribute computation workload while mitigating communication overhead associated with synchronizations. This work is vital to the analytics and validation capabilities of simulation and emulation at Sandia. We have years of experience in Sandia’s simulation and emulation projects (e.g., MINIMEGA and FIREWHEEL). Sandia’s current highly-regarded capabilities in large-scale emulations have focused on wired networks, where two assumptions prevent scalable wireless studies: (a) the connections between objects are mostly static and (b) the nodes have fixed locations.
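    The DES approach mentioned above reduces to a timestamp-ordered future-event list. A minimal sequential sketch with an illustrative multi-hop transmit model (not ns-3 or the proposed Sandia tool):

```python
import heapq

# Minimal discrete event simulation: a heap of (time, seq, callback) events
# processed in timestamp order. The two-node-hop "transmit" model and the
# fixed 1.5 ms link delay are illustrative.

class Simulator:
    def __init__(self):
        self.now = 0.0
        self._events = []       # heap of (time, seq, callback)
        self._seq = 0           # tie-breaker for equal timestamps

    def schedule(self, delay, callback):
        heapq.heappush(self._events, (self.now + delay, self._seq, callback))
        self._seq += 1

    def run(self):
        while self._events:
            self.now, _, callback = heapq.heappop(self._events)
            callback()

sim = Simulator()
received = []

def transmit(node, hops_left):
    received.append((sim.now, node))
    if hops_left:               # forward to the next node after the link delay
        sim.schedule(1.5, lambda: transmit(node + 1, hops_left - 1))

sim.schedule(0.0, lambda: transmit(0, 3))
sim.run()
```

    PDES partitions this single event list across processors; the difficulty the abstract highlights is that mobility and proximity-based links make the resulting synchronization overhead severe.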

  5. Physical control oriented model of large scale refrigerators to synthesize advanced control schemes. Design, validation, and first control results

    SciTech Connect (OSTI)

    Bonne, François; Bonnay, Patrick

    2014-01-29

    In this paper, a physical method to obtain control-oriented dynamical models of large scale cryogenic refrigerators is proposed, in order to synthesize model-based advanced control schemes. These schemes aim to replace classical approaches designed from user experience, usually based on many independent PI controllers. This is particularly useful in the case where cryoplants are submitted to large pulsed thermal loads, expected to take place in the cryogenic cooling systems of future fusion reactors such as the International Thermonuclear Experimental Reactor (ITER) or the Japan Torus-60 Super Advanced Fusion Experiment (JT-60SA). Advanced control schemes lead to better perturbation immunity and rejection, offering safer utilization of cryoplants. The paper gives details on how basic components used in the field of large scale helium refrigeration (especially those present on the 400W @1.8K helium test facility at CEA-Grenoble) are modeled and assembled to obtain the complete dynamic description of controllable subsystems of the refrigerator (controllable subsystems are namely the Joule-Thomson Cycle, the Brayton Cycle, the Liquid Nitrogen Precooling Unit and the Warm Compression Station). The complete 400W @1.8K (in the 400W @4.4K configuration) helium test facility model is then validated against experimental data, and the optimal control of both the Joule-Thomson valve and the turbine valve is proposed to stabilize the plant under highly variable thermal loads. This work is partially supported through the European Fusion Development Agreement (EFDA) Goal Oriented Training Program, task agreement WP10-GOT-GIRO.
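    The classical single-loop approach these model-based schemes aim to replace is the PI controller. A minimal discrete-time sketch on a generic first-order plant (the gains, plant time constant, and the 1.8 K-style setpoint are illustrative, not facility parameters):

```python
# Discrete-time PI control of a first-order plant dy/dt = (-y + u) / tau,
# integrated with forward Euler. All constants are illustrative.

def simulate_pi(kp, ki, setpoint, steps=400, dt=0.1):
    """Drive the plant output toward `setpoint` with u = kp*e + ki*integral(e)."""
    tau, y, integral = 2.0, 0.0, 0.0
    for _ in range(steps):
        error = setpoint - y
        integral += error * dt          # integral action removes steady-state error
        u = kp * error + ki * integral
        y += dt * (-y + u) / tau        # forward-Euler plant update
    return y

final = simulate_pi(kp=2.0, ki=0.5, setpoint=1.8)   # e.g. a 1.8 K-style target
```

    A plant-wide refrigerator runs many such loops independently; the model-based schemes coordinate them instead, which is what improves rejection of large pulsed loads.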

  6. INTERAGENCY FIELD TEST & EVALUATION OF WIND TURBINE - RADAR INTERFEREN...

    Office of Environmental Management (EM)

    INTERAGENCY FIELD TEST & EVALUATION OF WIND TURBINE - RADAR INTERFERENCE MITIGATION TECHNOLOGIES INTERAGENCY FIELD TEST & EVALUATION OF WIND TURBINE - RADAR INTERFERENCE MITIGATION ...

  7. Design and Installation of a Disposal Cell Cover Field Test ...

    Office of Environmental Management (EM)

    Design and Installation of a Disposal Cell Cover Field Test Design and Installation of a Disposal Cell Cover Field Test Paper presented at the Waste Management 2011 Conference. ...

  8. DOE-Sponsored Field Test Demonstrates Viability of Simultaneous...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Field Test Demonstrates Viability of Simultaneous CO2 Storage and Enhanced Oil Recovery in Carbonate Reservoirs DOE-Sponsored Field Test Demonstrates Viability of Simultaneous CO2 ...

  9. Text-Alternative Version of Building America Webinar: Field Test...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Field Test Best Practices, BEopt, and the National Residential Efficiency Measures Database Text-Alternative Version of Building America Webinar: Field Test Best Practices, BEopt, ...

  10. Project Impact Assessments … Building America FY14 Field Test...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Impact Assessments - Building America FY14 Field Test Technical Support 2014 Building ... 9302014 Key Milestones : 1. Launch Field Test Best Practices web-based facilitated ...

  11. Project Impact Assessments: Building America FY14 Field Test...

    Energy Savers [EERE]

    Project Impact Assessments: Building America FY14 Field Test Technical Support - 2014 BTO Peer Review Project Impact Assessments: Building America FY14 Field Test Technical Support ...

  12. Field Testing of Pre-Production Prototype Residential Heat Pump...

    Energy Savers [EERE]

    Field Testing of Pre-Production Prototype Residential Heat Pump Water Heaters Field Testing of Pre-Production Prototype Residential Heat Pump Water Heaters Provides an overview of ...

  13. Upcoming Funding Opportunity to Develop and Field Test Wind Energy...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    and Field Test Wind Energy Bat Impact Minimization Technologies Upcoming Funding Opportunity to Develop and Field Test Wind Energy Bat Impact Minimization Technologies October 6, ...

  14. Combustion Safety Simplified Test Protocol Field Study

    SciTech Connect (OSTI)

    Brand, L; Cautley, D.; Bohac, D.; Francisco, P.; Shen, L.; Gloss, S.

    2015-11-05

    "9Combustions safety is an important step in the process of upgrading homes for energy efficiency. There are several approaches used by field practitioners, but researchers have indicated that the test procedures in use are complex to implement and provide too many false positives. Field failures often mean that the house is not upgraded until after remediation or not at all, if not include in the program. In this report the PARR and NorthernSTAR DOE Building America Teams provide a simplified test procedure that is easier to implement and should produce fewer false positives. A survey of state weatherization agencies on combustion safety issues, details of a field data collection instrumentation package, summary of data collected over seven months, data analysis and results are included. The project provides several key results. State weatherization agencies do not generally track combustion safety failures, the data from those that do suggest that there is little actual evidence that combustion safety failures due to spillage from non-dryer exhaust are common and that only a very small number of homes are subject to the failures. The project team collected field data on 11 houses in 2015. Of these homes, two houses that demonstrated prolonged and excessive spillage were also the only two with venting systems out of compliance with the National Fuel Gas Code. The remaining homes experienced spillage that only occasionally extended beyond the first minute of operation. Combustion zone depressurization, outdoor temperature, and operation of individual fans all provide statistically significant predictors of spillage.

  15. Design advanced for large-scale, economic, floating LNG plant

    SciTech Connect (OSTI)

    Naklie, M.M.

    1997-06-30

    A floating LNG plant design has been developed which is technically feasible, economical, safe, and reliable. This technology will allow monetization of small marginal fields and improve the economics of large fields. Mobil's world-scale plant design has a capacity of 6 million tons/year of LNG and up to 55,000 b/d of condensate produced from 1 bcfd of feed gas. The plant would be located on a large, secure, concrete barge with a central moonpool. LNG storage is provided for 250,000 cu m and condensate storage for 650,000 bbl, and both products are off-loaded from the barge. Model tests have verified the stability of the barge structure: barge motions are low enough to permit the plant to continue operation in a 100-year storm in the Pacific Rim. Moreover, the barge is spread-moored, eliminating the need for a turret and swivel. Because the design is generic, the plant can process a wide variety of feed gases and operate in different environments, should the plant be relocated. This capability potentially gives the plant investment a much longer project life because its use is not limited to the life of only one producing area.

  16. Comparison of prestellar core elongations and large-scale molecular cloud structures in the Lupus I region

    SciTech Connect (OSTI)

    Poidevin, Frédérick; Ade, Peter A. R.; Hargrave, Peter C.; Nutter, David; Angile, Francesco E.; Devlin, Mark J.; Klein, Jeffrey; Benton, Steven J.; Netterfield, Calvin B.; Chapin, Edward L.; Fissel, Laura M.; Gandilo, Natalie N.; Fukui, Yasuo; Gundersen, Joshua O.; Korotkov, Andrei L.; Matthews, Tristan G.; Novak, Giles; Moncelsi, Lorenzo; Mroczkowski, Tony K.; Olmi, Luca; and others

    2014-08-10

    Turbulence and magnetic fields are expected to be important for regulating molecular cloud formation and evolution. However, their effects on sub-parsec to 100 parsec scales, leading to the formation of starless cores, are not well understood. We investigate the prestellar core structure morphologies obtained from analysis of the Herschel-SPIRE 350 μm maps of the Lupus I cloud. This distribution is first compared on a statistical basis to the large-scale shape of the main filament. We find the distribution of the elongation position angles of the cores to be consistent with a random distribution, meaning that no specific orientation of the core morphology is observed with respect to the mean orientation of the large-scale filament in Lupus I, nor relative to a large-scale bent-filament model. This distribution is also compared to the mean orientation of the large-scale magnetic fields probed at 350 μm with the Balloon-borne Large Aperture Telescope for Polarimetry during its 2010 campaign. Here again we do not find any correlation between the core morphology distribution and the average orientation of the magnetic fields on parsec scales. Our main conclusion is that the local filament dynamics (including secondary filaments that often run orthogonally to the primary filament) and possibly small-scale variations in the local magnetic field direction could be the dominant factors explaining the final orientation of each core.

  17. Large scale electromechanical transistor with application in mass sensing

    SciTech Connect (OSTI)

    Jin, Leisheng; Li, Lijie

    2014-12-07

    The nanomechanical transistor (NMT) evolved from the single-electron transistor, a device that operates by shuttling electrons with a self-excited central conductor. The unfavoured aspects of the NMT are the complexity of the fabrication process and of its signal processing unit, which could potentially be overcome by designing much larger devices. This paper reports a new design of large-scale electromechanical transistor (LSEMT), still taking advantage of the principle of shuttling electrons. However, because of the large size, nonlinear electrostatic forces induced by the transistor itself are not sufficient to drive the mechanical member into vibration; an external force has to be used. In this paper, an LSEMT device is modelled, and its new application in mass sensing is postulated using two coupled mechanical cantilevers, with one of them embedded in the transistor. The sensor is capable of detecting added mass using an eigenstate-shift method, reading the change of electrical current from the transistor, which has much higher sensitivity than the conventional eigenfrequency-shift approach used in classical cantilever-based mass sensors. Numerical simulations are conducted to investigate the performance of the mass sensor.
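    The conventional eigenfrequency-shift readout that the abstract compares against follows from the lumped spring-mass model f = sqrt(k/m)/(2π): an added mass Δm ≪ m shifts the resonance by approximately -(f/2)(Δm/m). A quick numerical check with invented parameters (not the paper's device values):

```python
import math

def cantilever_freq(k, m):
    """Fundamental resonance f = sqrt(k/m) / (2*pi) of a lumped
    spring-mass model of a cantilever."""
    return math.sqrt(k / m) / (2.0 * math.pi)

# Illustrative numbers only: 50 N/m stiffness, 1 mg effective mass,
# 1 microgram of added mass.
k, m, dm = 50.0, 1.0e-6, 1.0e-9
f0 = cantilever_freq(k, m)
df_exact = cantilever_freq(k, m + dm) - f0
df_approx = -0.5 * f0 * dm / m   # first-order eigenfrequency shift
```

    The shift scales with Δm/m, which is why small relative mass changes produce only small frequency changes; the paper's eigenstate-shift readout is proposed precisely to improve on this sensitivity limit.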

  18. Parallel Index and Query for Large Scale Data Analysis

    SciTech Connect (OSTI)

    Chou, Jerry; Wu, Kesheng; Ruebel, Oliver; Howison, Mark; Qiang, Ji; Prabhat,; Austin, Brian; Bethel, E. Wes; Ryne, Rob D.; Shoshani, Arie

    2011-07-18

    Modern scientific datasets present numerous data management and analysis challenges. State-of-the-art index and query technologies are critical for facilitating interactive exploration of large datasets, but numerous challenges remain in designing a system for processing general scientific datasets. The system needs to be able to run on distributed multi-core platforms, efficiently utilize the underlying I/O infrastructure, and scale to massive datasets. We present FastQuery, a novel software framework that addresses these challenges. FastQuery utilizes a state-of-the-art index and query technology (FastBit) and is designed to process massive datasets on modern supercomputing platforms. We apply FastQuery to the processing of a massive 50TB dataset generated by a large-scale accelerator modeling code. We demonstrate the scalability of the tool to 11,520 cores. Motivated by the scientific need to search for interesting particles in this dataset, we use our framework to reduce search time from hours to tens of seconds.
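    The bitmap-index idea underlying FastBit can be illustrated in miniature: build one bitmap per attribute bin, so a range query reduces to bitwise ORs of bin bitmaps instead of a scan over all records. This sketch uses Python integers as bitmaps and an invented "energy" attribute; it shows the principle only, not FastQuery's actual API:

```python
def build_bitmap_index(values, bins):
    """One bitmap (a Python int) per bin; bit i is set when record i
    falls in that bin. A toy version of a binned bitmap index."""
    index = {b: 0 for b in bins}
    for i, v in enumerate(values):
        for (lo, hi) in bins:
            if lo <= v < hi:
                index[(lo, hi)] |= 1 << i
    return index

def query(index, lo, hi):
    """Range query [lo, hi) = OR of the bitmaps for bins it covers."""
    result = 0
    for (blo, bhi), bm in index.items():
        if lo <= blo and bhi <= hi:
            result |= bm
    return [i for i in range(result.bit_length()) if result >> i & 1]

energies = [0.1, 2.5, 7.9, 3.3, 9.0]        # invented record attribute
bins = [(0, 2), (2, 4), (4, 6), (6, 8), (8, 10)]
idx = build_bitmap_index(energies, bins)
hits = query(idx, 2, 8)                     # records with 2 <= energy < 8
```

    On real hardware the ANDs/ORs run over compressed bitmaps in parallel, which is what lets a search over billions of particles finish in seconds rather than hours.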

  19. Large-scale BAO signatures of the smallest galaxies

    SciTech Connect (OSTI)

    Dalal, Neal; Pen, Ue-Li; Seljak, Uros E-mail: pen@cita.utoronto.ca

    2010-11-01

    Recent work has shown that at high redshift, the relative velocity between dark matter and baryonic gas is typically supersonic. This relative velocity suppresses the formation of the earliest baryonic structures like minihalos, and the suppression is modulated on large scales. This effect imprints a characteristic shape in the clustering power spectrum of the earliest structures, with significant power on ∼ 100 Mpc scales featuring highly pronounced baryon acoustic oscillations. The amplitude of these oscillations is orders of magnitude larger at z ∼ 20 than previously expected. This characteristic signature can allow us to distinguish the effects of minihalos on intergalactic gas at times preceding and during reionization. We illustrate this effect with the example of 21 cm emission and absorption from redshifts during and before reionization. This effect can potentially allow us to probe physics on kpc scales using observations on 100 Mpc scales. We present sensitivity forecasts for FAST and Arecibo. Depending on parameters, this enhanced structure may be detectable by Arecibo at z ∼ 15−20, and with appropriate instrumentation FAST could measure the BAO power spectrum with high precision. In principle, this effect could also pose a serious challenge for efforts to constrain dark energy using observations of the BAO feature at low redshift.

  20. LARGE SCALE METHOD FOR THE PRODUCTION AND PURIFICATION OF CURIUM

    DOE Patents [OSTI]

    Higgins, G.H.; Crane, W.W.T.

    1959-05-19

    A large-scale process for the production and purification of Cm-242 is described. Aluminum slugs containing Am are irradiated and declad in a NaOH--NaNO3 solution at 85 to 100 deg C. The resulting slurry is filtered and washed with NaOH, NH4OH, and H2O. Recovery of Cm from the filtrate and washings is effected by an Fe(OH)3 precipitation. The precipitates are then combined and dissolved in HCl, and refractory oxides are centrifuged out. These oxides are then fused with Na2CO3 and dissolved in HCl. The solution is evaporated and LiCl solution added. The Cm, rare earths, and anionic impurities are adsorbed on a strong-base anion exchange resin. Impurities are eluted with LiCl--HCl solution; rare earths and Cm are eluted by HCl. Other ion exchange steps further purify the Cm. The Cm is then precipitated as fluoride and used in this form or further purified and processed. (T.R.H.)

  1. Large Scale Obscuration and Related Climate Effects Workshop: Proceedings

    SciTech Connect (OSTI)

    Zak, B.D.; Russell, N.A.; Church, H.W.; Einfeld, W.; Yoon, D.; Behl, Y.K.

    1994-05-01

    A Workshop on Large Scale Obscuration and Related Climate Effects was held 29--31 January 1992, in Albuquerque, New Mexico. The objectives of the workshop were: to determine, through the use of expert judgement, the current state of understanding of regional and global obscuration and related climate effects associated with nuclear weapons detonations; to estimate how large the uncertainties are in the parameters associated with these phenomena (given specific scenarios); to evaluate the impact of these uncertainties on obscuration predictions; and to develop an approach for the prioritization of further work on newly available data sets to reduce the uncertainties. The workshop consisted of formal presentations by the 35 participants and subsequent topical working sessions on: the source term; aerosol optical properties; atmospheric processes; and electro-optical systems performance and climatic impacts. Summaries of the conclusions reached in the working sessions are presented in the body of the report. Copies of the transparencies shown as part of each formal presentation are contained in the appendices (microfiche).

  2. Comparison of the effects in the rock mass of large-scale chemical...

    Office of Scientific and Technical Information (OSTI)

    Comparison of the effects in the rock mass of large-scale chemical and nuclear explosions. ... Title: Comparison of the effects in the rock mass of large-scale chemical and nuclear ...

  3. I/O Performance of a Large-Scale, Interpreter-Driven Laser-Plasma...

    Office of Scientific and Technical Information (OSTI)

    Conference: IO Performance of a Large-Scale, Interpreter-Driven Laser-Plasma Interaction Code Citation Details In-Document Search Title: IO Performance of a Large-Scale, ...

  4. Energy Department Awards $66.7 Million for Large-Scale Carbon...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    66.7 Million for Large-Scale Carbon Sequestration Project Energy Department Awards 66.7 Million for Large-Scale Carbon Sequestration Project December 18, 2007 - 4:58pm Addthis ...

  5. Large-Scale Deep Learning on the YFCC100M Dataset (Conference...

    Office of Scientific and Technical Information (OSTI)

    Conference: Large-Scale Deep Learning on the YFCC100M Dataset Citation Details In-Document Search Title: Large-Scale Deep Learning on the YFCC100M Dataset Authors: Ni, K ; Boakye, ...

  6. Parallel I/O Software Infrastructure for Large-Scale Systems

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Parallel IO Software Infrastructure for Large-Scale Systems Parallel IO Software Infrastructure for Large-Scale Systems Choudhary.png An illustration of how MPI---IO file domain...

  7. EERE Success Story-FEMP Helps Federal Facilities Develop Large-Scale

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Renewable Energy Projects | Department of Energy FEMP Helps Federal Facilities Develop Large-Scale Renewable Energy Projects EERE Success Story-FEMP Helps Federal Facilities Develop Large-Scale Renewable Energy Projects August 21, 2013 - 12:00am Addthis EERE's Federal Energy Management Program issued a new resource that provides best practices and helpful guidance for federal agencies developing large-scale renewable energy projects. The resource, Large-Scale Renewable Energy Guide:

  8. Large Scale Computing and Storage Requirements for Nuclear Physics Research

    SciTech Connect (OSTI)

    Gerber, Richard A.; Wasserman, Harvey J.

    2012-03-02

    The National Energy Research Scientific Computing Center (NERSC) is the primary computing center for the DOE Office of Science, serving approximately 4,000 users and hosting some 550 projects that involve nearly 700 codes across a wide variety of scientific disciplines. In addition to large-scale computing resources, NERSC provides critical staff support and expertise to help scientists make the most efficient use of these resources to advance the scientific mission of the Office of Science. In May 2011, NERSC, DOE’s Office of Advanced Scientific Computing Research (ASCR) and DOE’s Office of Nuclear Physics (NP) held a workshop to characterize HPC requirements for NP research over the next three to five years. The effort is part of NERSC’s continuing involvement in anticipating future user needs and deploying necessary resources to meet these demands. The workshop revealed several key requirements, in addition to achieving its goal of characterizing NP computing. The key requirements include: 1. Larger allocations of computational resources at NERSC; 2. Visualization and analytics support; and 3. Support at NERSC for the unique needs of experimental nuclear physicists. This report expands upon these key points and adds others. The results are based upon representative samples, called “case studies,” of the needs of science teams within NP. The case studies were prepared by NP workshop participants and contain a summary of science goals, methods of solution, current and future computing requirements, and special software and support needs. Participants were also asked to describe their strategy for computing in the highly parallel, “multi-core” environment that is expected to dominate HPC architectures over the next few years. The report also includes a section with NERSC responses to the workshop findings. NERSC has many initiatives already underway that address key workshop findings, and all of the action items are aligned with NERSC strategic plans.

  9. Large-Scale Data Challenges in Future Power Grids

    SciTech Connect (OSTI)

    Yin, Jian; Sharma, Poorva; Gorton, Ian; Akyol, Bora A.

    2013-03-25

    This paper describes technical challenges in supporting large-scale real-time data analysis for future power grid systems and discusses various design options to address these challenges. Even though the existing U.S. power grid has served the nation remarkably well over the last 120 years, big changes are on the horizon. The widespread deployment of renewable generation, smart grid controls, energy storage, plug-in hybrids, and new conducting materials will require fundamental changes in the operational concepts and principal components. The whole system becomes highly dynamic and needs constant adjustment based on real-time data. Even though millions of sensors such as phasor measurement units (PMUs) and smart meters are being widely deployed, a data layer that can support this amount of data in real time is still needed. Unlike the data fabric in cloud services, the data layer for smart grids must address some unique challenges. This layer must be scalable to support millions of sensors and a large number of diverse applications and still provide real-time guarantees. Moreover, the system needs to be highly reliable and highly secure because the power grid is a critical piece of infrastructure. No existing system can satisfy all these requirements at the same time. We examine various design options. In particular, we explore the special characteristics of power grid data to meet both scalability and quality-of-service requirements. Our initial prototype can improve performance by orders of magnitude over existing general-purpose systems. The prototype was demonstrated with several use cases from PNNL’s FPGI and was shown to be able to integrate huge amounts of data from a large number of sensors and a diverse set of applications.

  10. Primordial non-Gaussianity in the bispectra of large-scale structure

    SciTech Connect (OSTI)

    Tasinato, Gianmassimo; Tellarini, Matteo; Ross, Ashley J.; Wands, David E-mail: matteo.tellarini@port.ac.uk E-mail: david.wands@port.ac.uk

    2014-03-01

    The statistics of large-scale structure in the Universe can be used to probe non-Gaussianity of the primordial density field, complementary to existing constraints from the cosmic microwave background. In particular, the scale dependence of halo bias, which affects the halo distribution at large scales, represents a promising tool for analyzing primordial non-Gaussianity of the local form. Future observations, for example, may be able to constrain the trispectrum parameter g_NL, which is difficult to study and constrain using the CMB alone. We investigate how galaxy and matter bispectra can distinguish between the two non-Gaussian parameters f_NL and g_NL, whose effects give nearly degenerate contributions to the power spectra. We use a generalization of the univariate bias approach, making the hypothesis that the number density of halos forming at a given position is a function of the local matter density contrast and of its local higher-order statistics. Using this approach, we calculate the halo-matter bispectra and analyze their properties. We determine a connection between the sign of the halo bispectrum on large scales and the parameter g_NL. We also construct a combination of halo and matter bispectra that is sensitive to f_NL, with little contamination from g_NL. We study both the case of single and multiple sources to the primordial gravitational potential, discussing how to extend the concept of stochastic halo bias to the case of bispectra. We use a specific halo mass function to calculate numerically the bispectra in appropriate squeezed limits, confirming our theoretical findings.

  11. IN SITU FIELD TESTING OF PROCESSES

    SciTech Connect (OSTI)

    J.S.Y. YANG

    2004-11-08

    The purpose of this scientific analysis report is to update and document the data and subsequent analyses from ambient field-testing activities performed in underground drifts and surface-based boreholes through unsaturated zone (UZ) tuff rock units. In situ testing, monitoring, and associated laboratory studies are conducted to directly assess and evaluate the waste emplacement environment and the natural barriers to radionuclide transport at Yucca Mountain. This scientific analysis report supports and provides data to UZ flow and transport model reports, which in turn contribute to the Total System Performance Assessment (TSPA) of Yucca Mountain, an important document for the license application (LA). The objectives of ambient field-testing activities are described in Section 1.1. This report is the third revision (REV 03), which supersedes REV 02. The scientific analysis of data for inputs to model calibration and validation as documented in REV 02 was developed in accordance with the Technical Work Plan (TWP) ''Technical Work Plan for: Performance Assessment Unsaturated Zone'' (BSC 2004 [DIRS 167969]). This revision was developed in accordance with the ''Technical Work Plan for: Unsaturated Zone Flow Analysis and Model Report Integration'' (BSC 2004 [DIRS 169654], Section 1.2.4) for better integrated, consistent, transparent, traceable, and more complete documentation in this scientific analysis report and associated UZ flow and transport model reports. No additional testing or analyses were performed as part of this revision. The list of relevant acceptance criteria is provided by ''Technical Work Plan for: Unsaturated Zone Flow Analysis and Model Report Integration'' (BSC 2004 [DIRS 169654]), Table 3-1. Additional deviations from the TWP regarding the features, events, and processes (FEPs) list are discussed in Section 1.3. Documentation in this report includes descriptions of how, and under what conditions, the tests were conducted. The descriptions and

  12. CO2 Storage and Enhanced Oil Recovery: Bald Unit Test Site, Mumford Hills Oil Field, Posey County, Indiana

    SciTech Connect (OSTI)

    Frailey, Scott M.; Krapac, Ivan G.; Damico, James R.; Okwen, Roland T.; McKaskle, Ray W.

    2012-03-30

    The Midwest Geological Sequestration Consortium (MGSC) carried out a small-scale carbon dioxide (CO2) injection test in a sandstone within the Clore Formation (Mississippian System, Chesterian Series) in order to gauge the large-scale CO2 storage that might be realized from enhanced oil recovery (EOR) of mature Illinois Basin oil fields via miscible liquid CO2 flooding.

  13. Large Scale Computing and Storage Requirements for High Energy Physics

    SciTech Connect (OSTI)

    Gerber, Richard A.; Wasserman, Harvey

    2010-11-24

    The National Energy Research Scientific Computing Center (NERSC) is the leading scientific computing facility for the Department of Energy's Office of Science, providing high-performance computing (HPC) resources to more than 3,000 researchers working on about 400 projects. NERSC provides large-scale computing resources and, crucially, the support and expertise needed for scientists to make effective use of them. In November 2009, NERSC, DOE's Office of Advanced Scientific Computing Research (ASCR), and DOE's Office of High Energy Physics (HEP) held a workshop to characterize the HPC resources needed at NERSC to support HEP research through the next three to five years. The effort is part of NERSC's legacy of anticipating users' needs and deploying resources to meet those demands. The workshop revealed several key points, in addition to achieving its goal of collecting and characterizing computing requirements. The chief findings: (1) Science teams need access to a significant increase in computational resources to meet their research goals; (2) Research teams need to be able to read, write, transfer, store online, archive, analyze, and share huge volumes of data; (3) Science teams need guidance and support to implement their codes on future architectures; and (4) Projects need predictable, rapid turnaround of their computational jobs to meet mission-critical time constraints. This report expands upon these key points and includes others. It also presents a number of case studies as representative of the research conducted within HEP. Workshop participants were asked to codify their requirements in this case study format, summarizing their science goals, methods of solution, current and three-to-five year computing requirements, and software and support needs. Participants were also asked to describe their strategy for computing in the highly parallel, multi-core environment that is expected to dominate HPC architectures over the next few years.
The report includes

  14. Large-scale seismic waveform quality metric calculation using Hadoop

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Magana-Zook, Steven; Gaylord, Jessie M.; Knapp, Douglas R.; Dodge, Douglas A.; Ruppert, Stanley D.

    2016-05-27

    In this work we investigated the suitability of Hadoop MapReduce and Apache Spark for large-scale computation of seismic waveform quality metrics by comparing their performance with that of a traditional distributed implementation. The Incorporated Research Institutions for Seismology (IRIS) Data Management Center (DMC) provided 43 terabytes of broadband waveform data, of which 5.1 TB were processed with the traditional architecture, and the full 43 TB were processed using MapReduce and Spark. Maximum performance of ~0.56 terabytes per hour was achieved using all 5 nodes of the traditional implementation. We noted that I/O dominated processing, and that I/O performance was deteriorating with the addition of the 5th node. Data collected from this experiment provided the baseline against which the Hadoop results were compared. Next, we processed the full 43 TB dataset using both MapReduce and Apache Spark on our 18-node Hadoop cluster. We conducted these experiments multiple times with various subsets of the data so that we could build models to predict performance as a function of dataset size. We found that both MapReduce and Spark significantly outperformed the traditional reference implementation. At a dataset size of 5.1 terabytes, both Spark and MapReduce were about 15 times faster than the reference implementation. Furthermore, our performance models predict that for a dataset of 350 terabytes, Spark running on a 100-node cluster would be about 265 times faster than the reference implementation. We do not expect that the reference implementation deployed on a 100-node cluster would perform significantly better than on the 5-node cluster because the I/O performance cannot be made to scale.
Finally, we note that although Big Data technologies clearly provide a way to process seismic waveform datasets in a high-performance and scalable manner, the technology is still rapidly changing, requires a high degree of investment in personnel, and will
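    The MapReduce formulation used above can be sketched in-process: map each waveform record to a (station, metric) pair, group by station, and reduce each group to a summary statistic. The station codes and the "gap seconds" metric below are illustrative stand-ins for the paper's actual quality metrics:

```python
from collections import defaultdict
from statistics import mean

def map_phase(records):
    """Map: emit a (station, metric) pair per waveform segment."""
    for station, gap_seconds in records:
        yield station, gap_seconds

def reduce_phase(pairs):
    """Shuffle by key, then reduce each group to its mean metric."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return {k: mean(v) for k, v in groups.items()}

# Illustrative records: (station code, data-gap seconds per segment).
records = [("ANMO", 0.0), ("ANMO", 2.0), ("COLA", 5.0)]
metrics = reduce_phase(map_phase(records))
```

    In Hadoop or Spark the map and reduce phases run on many nodes, with the framework handling the shuffle; the independence of per-record map work is what makes the 43 TB computation scale.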

  15. FIELD TEST OF THE FLAME QUALITY INDICATOR

    SciTech Connect (OSTI)

    Andrew M. Rudin; Thomas Butcher; Henry Troost

    2003-02-04

    The flame quality indicator concept was developed at BNL specifically to monitor the brightness of the flame in a small oil burner and to provide a ''call for service'' notification when the brightness has changed from its setpoint, either high or low. In prior development work BNL explored the response of this system to operational upsets such as excess air changes, fouled atomizer nozzles, poor fuel quality, etc. Insight Technologies, Inc. and Honeywell, Inc. have licensed this technology from the U.S. Department of Energy and have been cooperating to develop product offerings that meet industry needs with an optimal combination of function and price. Honeywell has recently completed the development of the Flame Quality Monitor (FQM, or Honeywell QS7100F). This is a small module that connects via a serial cable to the burner's primary operating control. The primary advantages of this approach are simplicity, cost, and ease of installation. Call-for-service conditions are output in the form of front-panel indicator lights and a contact closure that can trigger a range of external communication options. Under this project a field test of the FQM was conducted in cooperation with service organizations in Virginia, Pennsylvania, New Jersey, New York, and Connecticut. A total of 83 field sites were included. At each site the FQM was installed in parallel with another embodiment of this concept, the Insight AFQI. The AFQI incorporates a modem and provides detailed information on the trends in flame quality over the course of the two-year test period. The test site population comprised 79.5% boilers, 13.7% warm air furnaces, and 6.8% water heaters. Nearly all were of residential size, with firing rates ranging from 0.6 to 1.25 gallons of oil per hour.
During the course of the test program the monitoring equipment successfully identified problems including: plugged fuel lines, fouled nozzles, collapsed combustion chambers, and poor fuel

  16. Multilevel method for modeling large-scale networks.

    SciTech Connect (OSTI)

    Safro, I. M.

    2012-02-24

    Understanding the behavior of real complex networks is of great theoretical and practical significance. It includes developing accurate artificial models whose topological properties are similar to those of real networks, generating artificial networks at different scales under special conditions, investigating network dynamics, reconstructing missing data, predicting network response, detecting anomalies, and other tasks. Network generation, reconstruction, and prediction of future topology are central issues of this field. In this project, we address questions related to understanding network modeling, investigating network structure and properties, and generating artificial networks. Most modern network generation methods are based either on various random graph models (reinforced by a set of properties such as a power-law distribution of node degrees, graph diameter, and number of triangles) or on the principle of replicating an existing model with elements of randomization, such as the R-MAT generator and Kronecker product modeling. Hierarchical models operate at different levels of the network hierarchy but with the same finest elements of the network. However, in many cases methods that apply randomization and replication to the finest relationships between network nodes, and modeling that addresses the problem of preserving a set of simplified properties, do not fit the real networks accurately enough. Among the unsatisfactory features are numerically inadequate results, instability of algorithms on real (artificial) data when they have been tested on artificial (real) data, and incorrect behavior at different scales. One reason is that randomization and replication of existing structures can create conflicts between fine and coarse scales of the real network geometry. Moreover, randomizing and satisfying some attribute at the same time can abolish those topological attributes that have been undefined or hidden from
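The R-MAT generator mentioned in the abstract builds a graph by recursively choosing adjacency-matrix quadrants with fixed probabilities. A minimal sketch, with illustrative parameter values rather than anything from the report:

```python
import random

# Minimal sketch of an R-MAT recursive edge generator: each edge picks
# one quadrant of the adjacency matrix per recursion level, with
# probabilities (a, b, c, d) that sum to 1. Parameter values below are
# illustrative defaults, not values from the report.

def rmat_edge(scale, a=0.57, b=0.19, c=0.19, d=0.05, rng=random.random):
    src = dst = 0
    for level in range(scale):
        r = rng()
        half = 1 << (scale - level - 1)   # size of the current quadrant
        if r < a:                  # top-left quadrant: no offset
            pass
        elif r < a + b:            # top-right: offset destination
            dst += half
        elif r < a + b + c:        # bottom-left: offset source
            src += half
        else:                      # bottom-right: offset both
            src += half
            dst += half
    return src, dst

def rmat_graph(scale, n_edges, **kw):
    return [rmat_edge(scale, **kw) for _ in range(n_edges)]

edges = rmat_graph(scale=10, n_edges=1000)   # 1024-node graph, 1000 edges
```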

  17. Ward identities and consistency relations for the large scale...

    Office of Scientific and Technical Information (OSTI)

    distribution of dark matter halos relative to that of the underlying dark matter field. ... Country of input: International Atomic Energy Agency (IAEA) Country of Publication: ...

  18. FIELD INVESTIGATION AT THE FAULTLESS SITE CENTRAL NEVADA TEST

    Office of Legacy Management (LM)

    FIELD INVESTIGATION AT THE FAULTLESS SITE CENTRAL NEVADA TEST AREA DOEINV10845--T3 DE93 ... at the Faultless Site Central Nevada Test Area An evaluation of groundwater ...

  19. FIELD TEST AND EVALUATION OF RESIDENTIAL GROUND SOURCE HEAT PUMP...

    Office of Scientific and Technical Information (OSTI)

    TEST AND EVALUATION OF RESIDENTIAL GROUND SOURCE HEAT PUMP SYSTEMS USING ALTERNATIVE VERTICAL-BORE GROUND HEAT EXCHANGERS Citation Details In-Document Search Title: FIELD TEST AND ...

  20. Interagency Field Test Evaluates Co-operation of Turbines and...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Interagency Field Test Evaluates Co-operation of Turbines and Radar Interagency Field Test Evaluates Co-operation of Turbines and Radar May 1, 2012 - 2:56pm Addthis The Department ...

  1. Small-Scale Carbon Sequestration Field Test Yields Significant...

    Office of Environmental Management (EM)

    Small-Scale Carbon Sequestration Field Test Yields Significant Lessons Learned Small-Scale Carbon Sequestration Field Test Yields Significant Lessons Learned May 20, 2009 - 1:00pm ...

  2. Large Scale Computing Requirements for Basic Energy Sciences...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    ... Acoustic Waves: (1/v²) ∂²p(x, y, z, t)/∂t² = (∂²/∂x² + ∂²/∂y² + ∂²/∂z²) p + s(t) ... Starting Models - Test Different Noise Assumptions * Scale Problem Up to Ever ...

  3. DOE Awards First Three Large-Scale Carbon Sequestration Projects...

    Broader source: Energy.gov (indexed) [DOE]

    ... The CO2 for this project will come from a post-combustion capture facility located at a coal-fired power plant in the region. A second test will be conducted in northwestern ...

  4. Large-Scale First-Principles Molecular Dynamics Simulations with Electrostatic Embedding: Application to Acetylcholinesterase Catalysis

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Fattebert, Jean-Luc; Lau, Edmond Y.; Bennion, Brian J.; Huang, Patrick; Lightstone, Felice C.

    2015-10-22

    Enzymes are complicated solvated systems that typically require many atoms to simulate their function with any degree of accuracy. We have recently developed numerical techniques for large-scale first-principles molecular dynamics simulations and applied them to study the enzymatic reaction catalyzed by acetylcholinesterase. We carried out density functional theory calculations for a quantum mechanical (QM) subsystem consisting of 612 atoms with an O(N) complexity finite-difference approach. The QM subsystem is embedded inside an external potential field representing the electrostatic effect due to the environment. We obtained finite temperature sampling by first-principles molecular dynamics for the acylation reaction of acetylcholine catalyzed by acetylcholinesterase. Our calculations show two energy barriers along the reaction coordinate for the enzyme-catalyzed acylation of acetylcholine. The second barrier (8.5 kcal/mole) is rate-limiting for the acylation reaction and is in good agreement with experiment.

  5. Large-Scale First-Principles Molecular Dynamics Simulations with Electrostatic Embedding: Application to Acetylcholinesterase Catalysis

    SciTech Connect (OSTI)

    Fattebert, Jean-Luc; Lau, Edmond Y.; Bennion, Brian J.; Huang, Patrick; Lightstone, Felice C.

    2015-10-22

    Enzymes are complicated solvated systems that typically require many atoms to simulate their function with any degree of accuracy. We have recently developed numerical techniques for large-scale first-principles molecular dynamics simulations and applied them to study the enzymatic reaction catalyzed by acetylcholinesterase. We carried out density functional theory calculations for a quantum mechanical (QM) subsystem consisting of 612 atoms with an O(N) complexity finite-difference approach. The QM subsystem is embedded inside an external potential field representing the electrostatic effect due to the environment. We obtained finite temperature sampling by first-principles molecular dynamics for the acylation reaction of acetylcholine catalyzed by acetylcholinesterase. Our calculations show two energy barriers along the reaction coordinate for the enzyme-catalyzed acylation of acetylcholine. The second barrier (8.5 kcal/mole) is rate-limiting for the acylation reaction and is in good agreement with experiment.
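As a back-of-envelope illustration (not part of the study above), the reported 8.5 kcal/mole rate-limiting barrier can be converted to an approximate rate constant with the standard Eyring transition-state expression, assuming T = 298.15 K and a transmission coefficient of 1:

```python
import math

# Eyring (transition-state theory) estimate for the 8.5 kcal/mole
# rate-limiting barrier quoted in the abstract. This illustration is
# not from the original study; it assumes T = 298.15 K and a
# transmission coefficient of 1.

KB = 1.380649e-23      # Boltzmann constant, J/K
H = 6.62607015e-34     # Planck constant, J*s
R = 8.314462618        # gas constant, J/(mol*K)

def eyring_rate(barrier_kcal_per_mol: float, temp_k: float = 298.15) -> float:
    """Rate constant k = (kB*T/h) * exp(-dG/(R*T)), in 1/s."""
    dg = barrier_kcal_per_mol * 4184.0   # kcal/mol -> J/mol
    return (KB * temp_k / H) * math.exp(-dg / (R * temp_k))

k = eyring_rate(8.5)   # on the order of 1e6 per second
```

An 8.5 kcal/mole barrier thus corresponds to a fast reaction (roughly a few million turnovers per second under these idealized assumptions), consistent with acetylcholinesterase being a very efficient enzyme.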

  6. Field Testing of Environmentally Friendly Drilling System

    SciTech Connect (OSTI)

    David Burnett

    2009-05-31

    The Environmentally Friendly Drilling (EFD) program addresses new low-impact technology that reduces the footprint of drilling activities, integrates lightweight drilling rigs with reduced-emission engine packages, addresses on-site waste management, optimizes systems to fit the needs of specific development sites, and provides stewardship of the environment. In addition, the program includes industry, the public, environmental organizations, and elected officials in a collaboration that addresses concerns about development of unconventional natural gas resources in environmentally sensitive areas. The EFD program provides the fundamentals for greater access, reasonable regulatory controls, lower development cost, and a reduced environmental footprint for unconventional natural gas operations. Industry sponsors have supported the program with significant financial and technical support. This final report compendium is organized into segments corresponding directly with the DOE-approved scope of work for the term 2005-2009 (10 sections). Each specific project is defined by (a) its goals, (b) its deliverable, and (c) its future direction. A web site has been established that contains all of the detailed engineering reports produced under these efforts. The goals of the project are to (1) identify critical enabling technologies for a prototype low-impact drilling system, (2) test the prototype systems in field laboratories, and (3) demonstrate the advanced technology to show how these practices would benefit the environment.

  7. Results of the fourth Hanna field test

    SciTech Connect (OSTI)

    Covell, J. R.; Wojdac, L. F.; Barbour, F. A.; Gardner, G. W.; Glass, R.; Hommert, P. J.

    1980-01-01

    The second phase (Hanna IVB) of a coal gasification experiment near Hanna, Wyoming, was completed in September 1979. The experiment attempted to link and gasify coal between process wells spaced 34.3 meters apart. Intermediate wells were positioned between the process wells so that the link could be relayed over shorter distances. Reverse combustion linking was attempted over a 22.9-meter and an 11.4-meter distance of the total well spacing. Thermal activity was generally noted in the upper 3 meters of the coal seam during the link. Two attempts to gasify over the 34.3-meter distance resulted in propagation of the burn front at the coal-overburden interface. Post-burn evaluation indicates fractures as major influencing factors on the combustion process. The Hanna IVB field test provided much insight into the influence that geologic features have on in situ coal combustion. The influence of these faults, permeable zones, and cleats on the air flow patterns can drastically change the overall results of a gasification experiment and should be studied further. The overall results of Hanna IVB were discouraging because of the rapid decline in the heating values of the production gas and the limited amount of coal gasified. With more complete geologic characterization prior to experimentation and proper well completions, it is believed that most of the subsurface operational problems encountered during Hanna IV could have been avoided.

  8. PATHWAYS OF LARGE-SCALE MAGNETIC COUPLINGS BETWEEN SOLAR CORONAL EVENTS

    SciTech Connect (OSTI)

    Schrijver, Carolus J.; Title, Alan M.; DeRosa, Marc L.; Yeates, Anthony R.

    2013-08-20

    The high-cadence, comprehensive view of the solar corona by SDO/AIA shows many events that are widely separated in space while occurring close together in time. In some cases, sets of coronal events are evidently causally related, while in many other instances indirect evidence can be found. We present case studies to highlight a variety of coupling processes involved in coronal events. We find that physical linkages between events do occur, but concur with earlier studies that these couplings appear to be crucial to understanding the initiation of major eruptive or explosive phenomena only relatively infrequently. We note that the post-eruption reconfiguration timescale of the large-scale corona, estimated from the extreme-ultraviolet afterglow, is on average longer than the mean time between coronal mass ejections (CMEs), so that many CMEs originate from a corona that is still adjusting from a previous event. We argue that the coronal field is intrinsically global: current systems build up over days to months, the relaxation after eruptions continues over many hours, and evolving connections easily span much of a hemisphere. This needs to be reflected in our modeling of the connections from the solar surface into the heliosphere to properly model the solar wind, its perturbations, and the generation and propagation of solar energetic particles. However, the large-scale field cannot be constructed reliably by currently available observational resources. We assess the potential of high-quality observations from beyond Earth's perspective and advanced global modeling to understand the couplings between coronal events in the context of CMEs and solar energetic particle events.

  9. Locations of Smart Grid Demonstration and Large-Scale Energy Storage

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Projects | Department of Energy. Map of the United States showing the location of all projects created with funding from the Smart Grid Demonstration and Energy Storage Project, funded through the American Recovery and Reinvestment Act.

  10. Asynchronous Two-Level Checkpointing Scheme for Large-Scale Adjoints...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Adjoints are an important computational tool for large-scale sensitivity evaluation, uncertainty quantification, and derivative-based...

  11. Development of fine-resolution analyses and expanded large-scale...

    Office of Scientific and Technical Information (OSTI)

    II: Scale-awareness and application to single-column model experiments Title: Development of fine-resolution analyses and expanded large-scale forcing properties. Part II: ...

  12. Development of fine-resolution analyses and expanded large-scale...

    Office of Scientific and Technical Information (OSTI)

    I: Methodology and evaluation Citation Details In-Document Search Title: Development of fine-resolution analyses and expanded large-scale forcing properties. Part I: Methodology ...

  13. A Large-Scale, High-Resolution Hydrological Model Parameter Data...

    Office of Scientific and Technical Information (OSTI)

    Large-Scale, High-Resolution Hydrological Model Parameter Data Set for Climate Change Impact Assessment for the Conterminous US Citation Details In-Document Search Title: A ...

  14. Large-scale Offshore Wind Power in the United States. Assessment of Opportunities and Barriers

    SciTech Connect (OSTI)

    Musial, Walter; Ram, Bonnie

    2010-09-01

    This report describes the benefits of and barriers to large-scale deployment of offshore wind energy systems in U.S. waters.

  15. HyLights -- Tools to Prepare the Large-Scale European Demonstration...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Projects on Hydrogen for Transport HyLights -- Tools to Prepare the Large-Scale European Demonstration Projects on Hydrogen for Transport Presented at Refueling ...

  16. DOE's Office of Science Seeks Proposals for Expanded Large-Scale...

    Office of Environmental Management (EM)

    Seeks Proposals for Expanded Large-Scale Scientific Computing DOE's Office of Science ... Successful proposers will be given the use of substantial computer time and data storage ...

  17. Large-scale delamination of multi-layers transition metal carbides...

    Office of Scientific and Technical Information (OSTI)

    Citation Details In-Document Search Title: Large-scale ... Herein we report on a general approach to delaminate ... Type: Accepted Manuscript Journal Name: Dalton Transactions ...

  18. A review of large-scale LNG spills : experiment and modeling.

    SciTech Connect (OSTI)

    Luketa-Hanlin, Anay Josephine

    2005-04-01

    The prediction of the possible hazards associated with the storage and transportation of liquefied natural gas (LNG) by ship has motivated a substantial number of experimental and analytical studies. This paper reviews the experimental and analytical work performed to date on large-scale spills of LNG. Specifically, experiments on the dispersion of LNG, as well as experiments of LNG fires from spills on water and land are reviewed. Explosion, pool boiling, and rapid phase transition (RPT) explosion studies are described and discussed, as well as models used to predict dispersion and thermal hazard distances. Although there have been significant advances in understanding the behavior of LNG spills, technical knowledge gaps to improve hazard prediction are identified. Some of these gaps can be addressed with current modeling and testing capabilities. A discussion of the state of knowledge and recommendations to further improve the understanding of the behavior of LNG spills on water is provided.

  19. Large-scale computations in analysis of structures

    SciTech Connect (OSTI)

    McCallen, D.B.; Goudreau, G.L.

    1993-09-01

    Computer hardware and numerical analysis algorithms have progressed to a point where many engineering organizations and universities can perform nonlinear analyses on a routine basis. Though much remains to be done in terms of advancement of nonlinear analysis techniques and characterization of nonlinear material constitutive behavior, the technology exists today to perform useful nonlinear analysis for many structural systems. In the current paper, a survey of nonlinear analysis technologies developed and employed for many years on programmatic defense work at the Lawrence Livermore National Laboratory is provided, and ongoing nonlinear numerical simulation projects relevant to the civil engineering field are described.

  20. Large scale condensed matter and fluid dynamics simulations | Argonne

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Leadership Computing Facility , (a) Snapshots of the vorticity field of a UPO located in weakly turbulent flow with Re=371 and period equal to 26864 LB time steps. The quantity shown is the magnitude of vorticity above a given cut-off level. Red corresponds to large negative vorticity (clockwise rotation), and blue to large positive vorticity (counter-clockwise rotation). (b) Initial structure of the large LDH-nucleic acid models, (a) System, at the start of the simulation. For clarity, water

  1. Large-scale soil bioremediation using white-rot fungi

    SciTech Connect (OSTI)

    Holroyd, M.L.; Caunt, P.

    1995-12-31

    Some organic pollutant compounds are considered resistant to conventional bioremediation because of their structure or behavior in soil. This phenomenon, together with the increasing need to reach lower target levels in shorter time periods, has shown the need for improved or alternative biological processes. It has been known for some time that the white-rot fungi, particularly the species Phanerochaete chrysosporium, have potentially useful abilities to rapidly degrade pollutant molecules. The use of white-rot fungi at the field scale presents a number of challenges, and this paper outlines the use of a process incorporating Phanerochaete to successfully bioremediate over 6,000 m{sup 3} of chlorophenol-contaminated soil at a site in Finland. Moreover, the method developed is very cost-effective and proved capable of reaching the very low target levels within the contracted time span.

  2. Zero discharge and large-scale DCS are plant highlights

    SciTech Connect (OSTI)

    Solar, R.

    1995-04-01

    This article reports that the Mulberry cogeneration facility has several features that make it notable in the power field. A zero-discharge wastewater system, an inlet-air chilling system, a secondary boiler, and an extensive distributed-control system (DCS) for overall plant operation are examples. The ability to meet the two-stage NO{sub x}-emission limits -- 25 ppm during the first three years and 15 ppm thereafter -- is a unique challenge. The plant design allows the lower limit to be met now, and retrofit with different burners is possible if NO{sub x}-emission limits are tightened later. The facility, near Bartow in Polk County, Fla., is owned by Polk Power Partners LP, whose members include Central and South West Energy Inc (CSW) of Dallas and ARK Energy of Laguna Hills, Calif. The operating company, CSW Operations, is a subsidiary of CSW. The heart of the plant is a single gas-turbine (GT)/HRSG/steam-turbine combined cycle, providing electric power to Tampa Electric Co and Florida Power Corp, with up to 25,000 lb/hr of process steam for an adjacent ethanol plant which was developed by the facility's partnership. Commercial operation of Mulberry began on Sept 2, 1994.

  3. Large-Scale Continuous Subgraph Queries on Streams

    SciTech Connect (OSTI)

    Choudhury, Sutanay; Holder, Larry; Chin, George; Feo, John T.

    2011-11-30

    Graph pattern matching involves finding exact or approximate matches for a query subgraph in a larger graph. It has been studied extensively and has strong applications in domains such as computer vision, computational biology, social networks, security and finance. The problem of exact graph pattern matching is often described in terms of subgraph isomorphism which is NP-complete. The exponential growth in streaming data from online social networks, news and video streams and the continual need for situational awareness motivates a solution for finding patterns in streaming updates. This is also the prime driver for the real-time analytics market. Development of incremental algorithms for graph pattern matching on streaming inputs to a continually evolving graph is a nascent area of research. Some of the challenges associated with this problem are the same as found in continuous query (CQ) evaluation on streaming databases. This paper reviews some of the representative work from the exhaustively researched field of CQ systems and identifies important semantics, constraints and architectural features that are also appropriate for HPC systems performing real-time graph analytics. For each of these features we present a brief discussion of the challenge encountered in the database realm, the approach to the solution and state their relevance in a high-performance, streaming graph processing framework.
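A toy illustration of the incremental idea described above, using the simplest cyclic pattern (a triangle) as the continuous query: each arriving edge closes one new match per common neighbor of its endpoints, so matches are maintained per update instead of being recomputed from scratch. This sketch is not the authors' system:

```python
from collections import defaultdict

# Incremental pattern matching on a streaming graph, in miniature.
# For the triangle pattern, an arriving edge (u, v) completes exactly
# one new match per common neighbor of u and v.

class StreamingTriangleCounter:
    def __init__(self):
        self.adj = defaultdict(set)   # undirected adjacency sets
        self.triangles = 0            # running count of matches

    def add_edge(self, u, v):
        # New triangles closed by this edge = common neighbors of u and v.
        self.triangles += len(self.adj[u] & self.adj[v])
        self.adj[u].add(v)
        self.adj[v].add(u)
        return self.triangles

counter = StreamingTriangleCounter()
for edge in [(1, 2), (2, 3), (1, 3), (3, 4), (2, 4)]:
    total = counter.add_edge(*edge)
# After the stream: triangles (1,2,3) and (2,3,4), so total == 2.
```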

  4. AUTOMATED PARAMETRIC EXECUTION AND DOCUMENTATION FOR LARGE-SCALE SIMULATIONS

    SciTech Connect (OSTI)

    R. L. KELSEY; ET AL

    2001-03-01

    A language has been created to facilitate the automatic execution of simulations for purposes of enabling parametric study and test and evaluation. Its function is similar in nature to that of a job-control language, but more capability is provided in that the language extends the notion of literate programming to job control. Interwoven markup tags self-document and define the job control process. The language works in tandem with another language used to describe physical systems. Both languages are implemented in the Extensible Markup Language (XML). A user describes a physical system for simulation and then creates a set of instructions for automatic execution of the simulation. Support routines merge the instructions with the physical-system description, execute the simulation the specified number of times, gather the output data, and document the process and output for the user. The language enables the guided exploration of a parameter space and can be used for simulations that must determine optimal solutions to particular problems. It is generalized enough that it can be used with any simulation input files that are described using XML. XML is shown to be useful as a description language, an interchange language, and a self-documenting language.
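A miniature of the described approach, sketched with invented tag names (the report does not specify its schema): an XML instruction document lists parameter values, and a driver expands the Cartesian product into one run configuration per combination:

```python
import xml.etree.ElementTree as ET
from itertools import product

# Hypothetical miniature of XML-driven parametric execution. The
# <parametric-study> / <parameter> tags and attributes are invented
# for illustration; they are not the language from the report.

INSTRUCTIONS = """
<parametric-study simulation="burn_model">
  <parameter name="pressure" values="1.0 2.0"/>
  <parameter name="temperature" values="300 350 400"/>
</parametric-study>
"""

def expand_runs(xml_text: str):
    root = ET.fromstring(xml_text)
    names, value_lists = [], []
    for p in root.findall("parameter"):
        names.append(p.get("name"))
        value_lists.append(p.get("values").split())
    # One dict of parameter settings per point in the parameter space.
    return [dict(zip(names, combo)) for combo in product(*value_lists)]

runs = expand_runs(INSTRUCTIONS)   # 2 pressures x 3 temperatures = 6 runs
```

A real driver would then merge each settings dict into the physical-system description, launch the simulation once per dict, and collect the outputs.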

  5. Geothermal Testing Facilities in an Oil Field

    Broader source: Energy.gov [DOE]

    Engineered Geothermal Systems, Low Temp, Exploration Demonstration. The proposed project is to develop a long-term testing facility and test geothermal power units for the evaluation of electrical power generation from low-temperature and co-produced fluids. The facility will provide the ability to conduct both long- and short-term testing of different power generation configurations to determine reliability and efficiency and to provide economic evaluation data.

  6. Geothermal Testing Facilities in an Oil Field - Rocky Mountain Oil Field

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Testing Center; 2010 Geothermal Technology Program Peer Review Report | Department of Energy. DOE 2010 Geothermal Technologies Program Peer Review.

  7. Video: Appliance Standards Testing Ensures Level Playing Field | Department

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    of Energy Video: Appliance Standards Testing Ensures Level Playing Field. August 3, 2016 - 11:45am. The Intertek testing laboratory in Cortland, NY tests products for the U.S. Department of Energy to ensure that consumers are getting the savings promised by its federal Appliance and Equipment Standards Program. Mike Mueller, Senior Digital Content Strategist, EERE Communications.

  8. IFT&E Field Test 2 Public Fact Sheet

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Second Test Results for the Interagency Field Test & Evaluation of Wind Turbine - Radar Interference Mitigation Technologies PUBLIC RELEASE ASR-11 Campaign at Abilene, TX Test: October 18-28, 2012. Report: March 2013 Distribution Statement A: Approved for Public Release (April 30, 2013). Purpose of this Fact Sheet: This document includes background information and a summary of the second of three tests on the effectiveness of

  9. Steam atmosphere dryer project: System development and field test. Final report

    SciTech Connect (OSTI)

    NONE

    1999-02-01

    The objective of this project was to develop and demonstrate the use of a superheated steam atmosphere dryer as a highly improved alternative to conventional hot-air drying systems, the present industrial standard method for drying various wet feedstocks. The development program plan consisted of three major activities. The first was engineering analysis and testing of a small-scale laboratory superheated steam dryer. This dryer provided the basic engineering heat transfer data necessary to design a large-scale system. The second major activity consisted of the design, fabrication, and laboratory checkout testing of the field-site prototype superheated steam dryer system. The third major activity consisted of the installation and testing of the complete 250-lb/hr evaporation rate dryer and a 30-kW cogeneration system in conjunction with an anaerobic digester facility at the Village of Bergen, NY. Feedstock for the digester was waste residue from a nearby commercial food processing plant. The superheated steam dryer system was placed into operation in August 1996 and operated successfully through March 1997. During this period, the dryer processed all the material from the digester to a powdered consistency usable as a high-nitrogen-based fertilizer.

  10. NREL: Wind Research - Field Test Sites

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Utility-scale turbines tested at the NWTC include those manufactured by Siemens, GE, Gamesa, and Alstom. For more information, contact: David Simms, 303-384-6942.

  11. Optimization and large scale computation of an entropy-based moment closure

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Hauck, Cory D.; Hill, Judith C.; Garrett, C. Kristopher

    2015-09-10

    We present computational advances and results in the implementation of an entropy-based moment closure, MN, in the context of linear kinetic equations, with an emphasis on heterogeneous and large-scale computing platforms. Entropy-based closures are known in several cases to yield more accurate results than closures based on standard spectral approximations, such as PN, but the computational cost is generally much higher and often prohibitive. Several optimizations are introduced to improve the performance of entropy-based algorithms over previous implementations. These optimizations include the use of GPU acceleration and the exploitation of the mathematical properties of spherical harmonics, which are used as test functions in the moment formulation. To test the emerging high-performance computing paradigm of communication-bound simulations, we present timing results at the largest computational scales currently available. Lastly, these results show, in particular, load balancing issues in scaling the MN algorithm that do not appear for the PN algorithm. We also observe that in weak scaling tests, the ratio in time to solution of MN to PN decreases.

  12. Optimization and large scale computation of an entropy-based moment closure

    SciTech Connect (OSTI)

    Hauck, Cory D.; Hill, Judith C.; Garrett, C. Kristopher

    2015-09-10

    We present computational advances and results in the implementation of an entropy-based moment closure, MN, in the context of linear kinetic equations, with an emphasis on heterogeneous and large-scale computing platforms. Entropy-based closures are known in several cases to yield more accurate results than closures based on standard spectral approximations, such as PN, but the computational cost is generally much higher and often prohibitive. Several optimizations are introduced to improve the performance of entropy-based algorithms over previous implementations. These optimizations include the use of GPU acceleration and the exploitation of the mathematical properties of spherical harmonics, which are used as test functions in the moment formulation. To test the emerging high-performance computing paradigm of communication bound simulations, we present timing results at the largest computational scales currently available. Lastly, these results show, in particular, load balancing issues in scaling the MN algorithm that do not appear for the PN algorithm. We also observe that in weak scaling tests, the ratio in time to solution of MN to PN decreases.
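The per-cell optimization that makes entropy-based closures expensive can be illustrated in miniature (this is an independent sketch, not the authors' implementation): given moments u = (u0, u1) of a nonnegative distribution on mu in [-1, 1], Newton's method on the convex dual recovers multipliers (a0, a1) so that the ansatz exp(a0 + a1*mu) reproduces u:

```python
import math

# Miniature of the dual optimization behind an entropy-based moment
# closure. Quadrature is a plain midpoint rule for brevity; a real
# implementation would use better quadrature and a damped Newton step.

NODES = [(-1.0 + (i + 0.5) * 2.0 / 200, 2.0 / 200) for i in range(200)]

def moments(a0, a1):
    m0 = m1 = m2 = 0.0
    for mu, w in NODES:
        f = math.exp(a0 + a1 * mu) * w
        m0 += f; m1 += mu * f; m2 += mu * mu * f
    return m0, m1, m2        # m2 doubles as a Hessian entry

def solve_multipliers(u0, u1, iters=50):
    a0 = a1 = 0.0
    for _ in range(iters):
        m0, m1, m2 = moments(a0, a1)
        g0, g1 = m0 - u0, m1 - u1          # gradient of the dual
        det = m0 * m2 - m1 * m1            # Hessian determinant
        a0 -= ( m2 * g0 - m1 * g1) / det   # Newton step
        a1 -= (-m1 * g0 + m0 * g1) / det
    return a0, a1

# Reproduce the moments of the isotropic distribution f = 1/2:
# u0 = 1, u1 = 0 should give a1 = 0 and exp(a0) = 1/2.
a0, a1 = solve_multipliers(1.0, 0.0)
```

Solving one such convex problem per spatial cell per time step is what drives the cost gap between MN and the linear PN closure described in the abstract.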

  13. DOE Approves Field Test for Promising Carbon Capture Technology |

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Department of Energy. November 20, 2012 - 12:00pm. Washington, DC - A promising post-combustion membrane technology that can separate and capture 90 percent of the carbon dioxide (CO2) from a pulverized coal plant has been successfully demonstrated and received Department of Energy (DOE) approval to advance to a larger-scale field test. In an $18.75 million

  14. Tank Manufacturing, Testing, Deployment and Field Performance | Department

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    of Energy. These slides were presented at the International Hydrogen Fuel and Pressure Vessel Forum on September 27 - 29, 2010, in Beijing, China.

  15. NWTC Researchers Field-Test Advanced Control Turbine Systems...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Researchers Field-Test Advanced Control Turbine Systems to Increase Performance, Decrease ... damage that increase maintenance costs and shorten the life of a turbine or wind plant. ...

  16. Controller Field Tests on the NREL CART2 Turbine

    SciTech Connect (OSTI)

    Bossanyi, E.; Wright, A.; Fleming, P.

    2010-12-01

    This document presents the results of the field tests carried out on the CART2 turbine at NREL to validate individual pitch control and active tower damping.

  17. SMART Wind Turbine Rotor: Design and Field Test

    Broader source: Energy.gov [DOE]

    Design and field test results from the SMART Rotor project, a wind turbine rotor with integrated trailing-edge flaps designed for active control of the rotor aerodynamics.

  18. SNL Begins Field Testing on First SMART Blades | Department of...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    began field testing a set of wind turbine blades with active load control capabilities. ... control during peak loads experienced by the turbine blades and drivetrain components. ...

  19. Plasma turbulence driven by transversely large-scale standing shear Alfvén waves

    SciTech Connect (OSTI)

    Singh, Nagendra; Rao, Sathyanarayan

    2012-12-15

    Using two-dimensional particle-in-cell simulations, we study generation of turbulence consisting of transversely small-scale dispersive Alfvén and electrostatic waves when plasma is driven by a large-scale standing shear Alfvén wave (LS-SAW). The standing wave is set up by reflecting a propagating LS-SAW. The ponderomotive force of the standing wave generates transversely large-scale density modifications consisting of density cavities and enhancements. The drifts of the charged particles driven by the ponderomotive force and those directly caused by the fields of the standing LS-SAW generate non-thermal features in the plasma. Parametric instabilities driven by the inherent plasma nonlinearities associated with the LS-SAW in combination with the non-thermal features generate small-scale electromagnetic and electrostatic waves, yielding a broad frequency spectrum ranging from below the source frequency of the LS-SAW to ion cyclotron and lower hybrid frequencies and beyond. The power spectrum of the turbulence has peaks at distinct perpendicular wave numbers (k⊥) lying in the range d_e^{-1} to 6 d_e^{-1}, d_e being the electron inertial length, suggesting non-local parametric decay from small to large k⊥. The turbulence spectrum encompassing both electromagnetic and electrostatic fluctuations is also broadband in parallel wave number (k∥). In a standing-wave supported density cavity, the ratio of the perpendicular electric to magnetic field amplitude is R(k⊥) = |E⊥(k⊥)|/|B⊥(k⊥)| ≪ V_A for k⊥ d_e < 0.5, where V_A is the Alfvén velocity. The characteristic features of the broadband plasma turbulence are compared with those available from satellite observations in space plasmas.

  20. Demonstrating Strong Electric Fields in Liquid Helium for Tests...

    Office of Science (SC) Website

    Image courtesy of Los Alamos National Laboratory The Medium Scale High Voltage Test apparatus in TA-53 Building 10 allowed scientists to test electric fields in liquid helium, a ...

  1. Field Test Best Practices Website | Department of Energy

    Broader source: Energy.gov (indexed) [DOE]

    The Field Test Best Practices website is a start-to-finish best practice guide for building science researchers ...

  2. A Semi-Analytical Solution for Large-Scale Injection-Induced...

    Office of Scientific and Technical Information (OSTI)

    Journal Article: A Semi-Analytical Solution for Large-Scale Injection-Induced Pressure Perturbation and Leakage in a Laterally Bounded Aquifer-Aquitard System Citation Details ...

  3. Development of fine-resolution analyses and expanded large-scale...

    Office of Scientific and Technical Information (OSTI)

    II: Scale-awareness and application to single-column model experiments Citation Details In-Document Search Title: Development of fine-resolution analyses and expanded large-scale ...

  4. FEMP Helps Federal Facilities Develop Large-Scale Renewable Energy Projects

    Broader source: Energy.gov [DOE]

    FEMP developed a guide to help federal agencies, as well as the developers and financiers that work with them, to successfully install large-scale renewable energy projects at federal facilities.

  5. DOE's Office of Science Seeks Proposals for Expanded Large-Scale Scientific Computing

    Broader source: Energy.gov [DOE]

    WASHINGTON, D.C. -- Secretary of Energy Samuel W. Bodman announced today that DOE’s Office of Science is seeking proposals to support innovative, large-scale computational science projects to...

  6. Transport Induced by Large Scale Convective Structures in a Dipole-Confined Plasma

    SciTech Connect (OSTI)

    Grierson, B. A.; Mauel, M. E.; Worstell, M. W.; Klassen, M.

    2010-11-12

    Convective structures characterized by E×B motion are observed in a dipole-confined plasma. Particle transport rates are calculated from density dynamics obtained from multipoint measurements and the reconstructed electrostatic potential. The calculated transport rates determined from the large-scale dynamics and local probe measurements agree in magnitude, show intermittency, and indicate that the particle transport is dominated by large-scale convective structures.
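
    The convective transport described here rests on the E×B drift, which is independent of particle charge and mass. A minimal sketch (illustrative only; the function names are hypothetical, not from the paper) of the drift velocity and the fluctuation-driven flux estimate it enters:

```python
import numpy as np

def exb_drift(E, B):
    """E x B drift velocity v = (E x B) / |B|^2, in SI units."""
    E, B = np.asarray(E, float), np.asarray(B, float)
    return np.cross(E, B) / np.dot(B, B)

def turbulent_flux(n_tilde, v_tilde):
    """Time-averaged fluctuation-driven particle flux <n~ v~> from
    density and radial-drift-velocity fluctuation time series."""
    return np.mean(np.asarray(n_tilde) * np.asarray(v_tilde))
```

    For example, with E = (1, 0, 0) V/m and B = (0, 0, 2) T the drift is (0, -0.5, 0) m/s; a net flux appears only when the density and velocity fluctuations are correlated, which is what the multipoint measurements above test.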

  7. HyLights -- Tools to Prepare the Large-Scale European Demonstration Projects on Hydrogen for Transport | Department of Energy

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Presented at the Refueling Infrastructure for Alternative Fuel Vehicles: Lessons Learned for Hydrogen Conference, April 2-3, 2008, Sacramento, California. buenger.pdf (1.96 MB) More Documents & Publications: Santa Clara Valley

  8. Reducing Data Center Loads for a Large-Scale, Net Zero Office Building | Department of Energy

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Document describes the design, implementation strategies, and continuous performance monitoring of the National Renewable Energy Laboratory's Research Support Facility data center. Download the case study. (3.03 MB) More Documents & Publications: Top ECMs for Labs and Data Centers; Best Practices Guide for Energy-Efficient Data

  9. A First Step towards Large-Scale Plants to Plastics Engineering | Department of Energy

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    November 9, 2010 - 1:56pm. Brookhaven National Laboratory researches making plastics from plants. Niketa Kumar, Public Affairs Specialist, Office of Public Affairs. What does this mean for me? By optimizing the accumulation of particular fatty acids, a Brookhaven team of scientists is developing a method suitable for

  10. Large Scale GSHP as Alternative Energy for American Farmers | Department of Energy

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Project objectives: 100% replacement of on-site fossil fuel in the poultry farm; reduce heating cost by 70% through bar efficiency improvement, GSHP, and solar applications; reduce mortality by 4% through the cooling effect of GSHP in summer. gshp_xu_gshp_farmers.pdf (276.4 KB) More Documents & Publications: Analysis of Energy, Environmental and Life Cycle Cost

  11. UPS multifuel stratified charge engine development program - Field test

    SciTech Connect (OSTI)

    Lewis, J.M.

    1986-01-01

    The multifuel, stratified charge engine program launched by United Parcel Service in 1978 has progressed through two years of field tests. The mechanical and electronic experience with the field test engine is covered in detail, with problems and causes identified and solutions described. Also included are reports on research initiated as a consequence of problems that appeared in the field test engines. All aspects of engine performance are covered, including fuel economy, multifuel experience, emissions testing and tuning, maintenance expectations and driver reactions. The original 350-engine field test was run with many components newly designed or modified, and relatively untested. Component and reliability problems identified in the field test have prompted modifications, and the engines are being reworked for the start of a new 200-engine field test. Research studies conducted on the field test engine have produced very encouraging emissions data, which suggests that the low-load hydrocarbon problem historically associated with this technology is not a barrier to commercial application. The engine appears capable of passing the heavy duty gasoline engine transient test.

  12. DGDFT: A massively parallel method for large scale density functional theory calculations

    SciTech Connect (OSTI)

    Hu, Wei; Yang, Chao; Lin, Lin

    2015-09-28

    We describe a massively parallel implementation of the recently developed discontinuous Galerkin density functional theory (DGDFT) method for efficient large-scale Kohn-Sham DFT based electronic structure calculations. The DGDFT method uses adaptive local basis (ALB) functions generated on-the-fly during the self-consistent field iteration to represent the solution to the Kohn-Sham equations. The use of the ALB set provides a systematic way to improve the accuracy of the approximation. By using the pole expansion and selected inversion technique to compute electron density, energy, and atomic forces, we can make the computational complexity of DGDFT scale at most quadratically with respect to the number of electrons for both insulating and metallic systems. We show that for the two-dimensional (2D) phosphorene systems studied here, using 37 basis functions per atom allows us to reach an accuracy level of 1.3 × 10⁻⁴ Hartree/atom in terms of the error of energy and 6.2 × 10⁻⁴ Hartree/bohr in terms of the error of atomic force, respectively. DGDFT can achieve 80% parallel efficiency on 128,000 high performance computing cores when it is used to study the electronic structure of 2D phosphorene systems with 3500-14,000 atoms. This high parallel efficiency results from a two-level parallelization scheme that we will describe in detail.

  13. LARGE-SCALE EXTREME-ULTRAVIOLET DISTURBANCES ASSOCIATED WITH A LIMB CORONAL MASS EJECTION

    SciTech Connect (OSTI)

    Dai, Y.; Auchere, F.; Vial, J.-C.; Tang, Y. H.; Zong, W. G.

    2010-01-10

    We present composite observations of a coronal mass ejection (CME) and the associated large-scale extreme-ultraviolet (EUV) disturbances on 2007 December 31 by the Extreme-ultraviolet Imager (EUVI) and COR1 coronagraph on board the recent Solar Terrestrial Relations Observatory mission. For this limb event, the EUV disturbances exhibit some typical characteristics of EUV Imaging Telescope waves: (1) in the 195 Å bandpass, diffuse brightenings are observed propagating oppositely away from the flare site with a velocity of ≈260 km s⁻¹, leaving dimmings behind; (2) when the brightenings encounter the boundary of a polar coronal hole, they stop there to form a stationary front. Multi-temperature analysis of the propagating EUV disturbances favors a heating process over a density enhancement in the disturbance region. Furthermore, the EUVI-COR1 composite display shows unambiguously that the propagation of the diffuse brightenings coincides with a large lateral expansion of the CME, which consequently results in a double-loop-structured CME leading edge. Based on these observational facts, we suggest that the wave-like EUV disturbances are a result of magnetic reconfiguration related to the CME liftoff rather than true waves in the corona. Reconnections between the expanding CME magnetic field lines and surrounding quiet-Sun magnetic loops account for the propagating diffuse brightenings; dimmings appear behind them as a consequence of volume expansion. X-ray and radio data provide us with complementary evidence.

  14. Induced core formation time in subcritical magnetic clouds by large-scale trans-Alfvénic flows

    SciTech Connect (OSTI)

    Kudoh, Takahiro; Basu, Shantanu E-mail: basu@uwo.ca

    2014-10-20

    We clarify the mechanism of accelerated core formation by large-scale nonlinear flows in subcritical magnetic clouds by finding a semi-analytical formula for the core formation time and describing the physical processes that lead to it. Recent numerical simulations show that nonlinear flows induce rapid ambipolar diffusion that leads to localized supercritical regions that can collapse. Here, we employ non-ideal magnetohydrodynamic simulations including ambipolar diffusion for gravitationally stratified sheets threaded by vertical magnetic fields. One of the horizontal dimensions is eliminated, resulting in a simpler two-dimensional simulation that can clarify the basic process of accelerated core formation. A parameter study of simulations shows that the core formation time is inversely proportional to the square of the flow speed when the flow speed is greater than the Alfvén speed. We find a semi-analytical formula that explains this numerical result. The formula also predicts that the core formation time is about three times shorter than that with no turbulence, when the turbulent speed is comparable to the Alfvén speed.
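
    Schematically, the two numerical findings quoted above can be combined into a single scaling form (an illustrative interpolation only, not the paper's semi-analytical formula):

```latex
% Illustrative scaling, assuming flow speed v_0 \gtrsim V_A:
% t_core falls off as v_0^{-2}, normalized so that t_core(V_A) = t_0/3,
% where t_0 is the core formation time with no turbulence.
t_{\mathrm{core}}(v_0) \;\approx\; \frac{t_0}{3}\left(\frac{V_A}{v_0}\right)^{2},
\qquad v_0 \gtrsim V_A .
```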

  15. Using calibrated engineering models to predict energy savings in large-scale geothermal heat pump projects

    SciTech Connect (OSTI)

    Shonder, J.A.; Hughes, P.J.; Thornton, J.W.

    1998-10-01

    Energy savings performance contracting (ESPC) is now receiving greater attention as a means of implementing large-scale energy conservation projects in housing. Opportunities for such projects exist for military housing, federally subsidized low-income housing, and planned communities (condominiums, townhomes, senior centers), to name a few. Accurate prior (to construction) estimates of the energy savings in these projects reduce risk, decrease financing costs, and help avoid post-construction disputes over performance contract baseline adjustments. This paper demonstrates an improved method of estimating energy savings before construction takes place. Using an engineering model calibrated to pre-construction energy-use data collected in the field, this method is able to predict actual energy savings to a high degree of accuracy. This is verified with post-construction energy-use data from a geothermal heat pump ESPC at Fort Polk, Louisiana. This method also allows determination of the relative impact of the various energy conservation measures installed in a comprehensive energy conservation project. As an example, the breakout of savings at Fort Polk for the geothermal heat pumps, desuperheaters, lighting retrofits, and low-flow hot water outlets is provided.

  17. Fluorescence enhancement in large-scale self-assembled gold nanoparticle double arrays

    SciTech Connect (OSTI)

    Chekini, M.; Bierwagen, J.; Cunningham, A.; Bürgi, T.; Filter, R.; Rockstuhl, C.

    2015-12-21

    Localized surface plasmon resonances excited in metallic nanoparticles confine and enhance electromagnetic fields at the nanoscale. This is particularly pronounced in dimers made from two closely spaced nanoparticles. When quantum emitters, such as dyes, are placed in the gap of those dimers, their absorption and emission characteristics can be modified. Both processes have to be considered when aiming to enhance the fluorescence from the quantum emitters. This is particularly challenging for dimers, since the electromagnetic properties and the enhanced fluorescence sensitively depend on the distance between the nanoparticles. Here, we use a layer-by-layer method to precisely control the distances in such systems. We consider a dye layer deposited on top of an array of gold nanoparticles or integrated into a central position of a double array of gold nanoparticles. We study the effect of the spatial arrangement and the average distance on the plasmon-enhanced fluorescence. We found a maximum of a 99-fold increase in the fluorescence intensity of the dye layer sandwiched between two gold nanoparticle arrays. The interaction of the dye layer with the plasmonic system also causes a spectral shift in the emission wavelengths and a shortening of the fluorescence lifetimes. Our work paves the way for large-scale, high throughput, and low-cost self-assembled functionalized plasmonic systems that can be used as efficient light sources.

  18. Kinematic morphology of large-scale structure: evolution from potential to rotational flow

    SciTech Connect (OSTI)

    Wang, Xin; Szalay, Alex; Aragón-Calvo, Miguel A.; Neyrinck, Mark C.; Eyink, Gregory L.

    2014-09-20

    As an alternative way to describe the cosmological velocity field, we discuss the evolution of rotational invariants constructed from the velocity gradient tensor. Compared with the traditional divergence-vorticity decomposition, these invariants, defined as coefficients of the characteristic equation of the velocity gradient tensor, enable a complete classification of all possible flow patterns in the dark-matter comoving frame, including both potential and vortical flows. We show that this tool, first introduced in turbulence two decades ago, is very useful for understanding the evolution of the cosmic web structure, and in classifying its morphology. Before shell crossing, different categories of potential flow are highly associated with the cosmic web structure because of the coherent evolution of density and velocity. This correspondence is even preserved at some level when vorticity is generated after shell crossing. The evolution from the potential to vortical flow can be traced continuously by these invariants. With the help of this tool, we show that the vorticity is generated in a particular way that is highly correlated with the large-scale structure. This includes a distinct spatial distribution and different types of alignment between the cosmic web and vorticity direction for various vortical flows. Incorporating shell crossing into closed dynamical systems is highly non-trivial, but we propose a possible statistical explanation for some of the phenomena relating to the internal structure of the three-dimensional invariant space.
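
    The invariants described above are standard: for a 3×3 velocity gradient tensor A they are the coefficients of the characteristic polynomial det(A − λI) = −λ³ + I₁λ² − I₂λ + I₃. A small sketch (hypothetical helper names, not the paper's code) computing them alongside the traditional divergence-vorticity decomposition:

```python
import numpy as np

def velocity_gradient_invariants(A):
    """Rotational invariants of the velocity gradient tensor A_ij = dv_i/dx_j,
    read off as coefficients of det(A - l*I) = -l^3 + I1*l^2 - I2*l + I3.
    I1 = tr A is the velocity divergence; I3 = det A."""
    A = np.asarray(A, dtype=float)
    I1 = np.trace(A)
    I2 = 0.5 * (I1**2 - np.trace(A @ A))
    I3 = np.linalg.det(A)
    return I1, I2, I3

def divergence_vorticity(A):
    """Traditional decomposition for comparison: divergence and the
    vorticity vector omega = curl(v) built from the antisymmetric part."""
    A = np.asarray(A, dtype=float)
    omega = np.array([A[2, 1] - A[1, 2],
                      A[0, 2] - A[2, 0],
                      A[1, 0] - A[0, 1]])
    return np.trace(A), omega
```

    A purely potential (irrotational) flow has zero vorticity vector, while a rigid rotation about z, A = [[0,-1,0],[1,0,0],[0,0,0]], has I₁ = I₃ = 0 but I₂ > 0; it is this extra information that lets the invariants classify vortical flow patterns that divergence alone misses.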

  19. Analysis of ground response data at Lotung large-scale soil-structure interaction experiment site. Final report

    SciTech Connect (OSTI)

    Chang, C.Y.; Mok, C.M.; Power, M.S.

    1991-12-01

    The Electric Power Research Institute (EPRI), in cooperation with the Taiwan Power Company (TPC), constructed two models (1/4-scale and 1/2-scale) of a nuclear plant containment structure at a site in Lotung (Tang, 1987), a seismically active region in northeast Taiwan. The models were constructed to gather data for the evaluation and validation of soil-structure interaction (SSI) analysis methodologies. Extensive instrumentation was deployed to record both structural and ground responses at the site during earthquakes. The experiment is generally referred to as the Lotung Large-Scale Seismic Test (LSST). As part of the LSST, two downhole arrays were installed at the site to record ground motions at depth as well as at the ground surface. Structural response and ground response have been recorded for a number of earthquakes (a total of 18 earthquakes in the period of October 1985 through November 1986) at the LSST site since the completion of the installation of the downhole instruments in October 1985. These data include those from earthquakes having magnitudes ranging from M_L 4.5 to M_L 7.0 and epicentral distances ranging from 4.7 km to 77.7 km. Peak ground surface accelerations range from 0.03 g to 0.21 g for the horizontal component and from 0.01 g to 0.20 g for the vertical component. The objectives of the study were: (1) to obtain empirical data on variations of earthquake ground motion with depth; (2) to examine field evidence of nonlinear soil response due to earthquake shaking and to determine the degree of soil nonlinearity; (3) to assess the ability of ground response analysis techniques, including techniques to approximate nonlinear soil response, to estimate ground motions due to earthquake shaking; and (4) to analyze earth pressures recorded beneath the basemat and on the side wall of the 1/4-scale model structure during selected earthquakes.

  20. On the rejection-based algorithm for simulation and analysis of large-scale reaction networks

    SciTech Connect (OSTI)

    Thanh, Vo Hong; Zunino, Roberto; Priami, Corrado

    2015-06-28

    Stochastic simulation for in silico studies of large biochemical networks requires a great amount of computational time. We recently proposed a new exact simulation algorithm, called the rejection-based stochastic simulation algorithm (RSSA) [Thanh et al., J. Chem. Phys. 141(13), 134116 (2014)], which improves simulation performance by postponing and collapsing propensity updates as much as possible. In this paper, we analyze the performance of this algorithm in detail and improve it for simulating large-scale biochemical reaction networks. We also present a new algorithm, called simultaneous RSSA (SRSSA), which generates many independent trajectories simultaneously for the analysis of biochemical behavior. SRSSA improves simulation performance by utilizing a single data structure across simulations to select reaction firings and form trajectories. The memory requirement for building and storing the data structure is thus independent of the number of trajectories. The updating of the data structure, when needed, is performed collectively in a single operation across the simulations. The trajectories generated by SRSSA are exact and independent of each other by virtue of the rejection-based mechanism. We test our improvements on real biological systems with a wide range of reaction networks to demonstrate applicability and efficiency.
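
    The rejection idea can be sketched as follows. This is a much-simplified illustration of the mechanism only, not the published implementation: it assumes propensities that are nondecreasing in every species count (so upper bounds can be evaluated at the upper corner of a state interval), and all names and the interval-width parameter are hypothetical:

```python
import random

def rssa(x0, reactions, width, t_end, seed=0):
    """Minimal rejection-based SSA sketch. `reactions` is a list of
    (propensity_fn, stoichiometry_dict) pairs. Propensity upper bounds are
    computed once on a state interval [lo, hi]; candidate reactions are drawn
    from the bounds and accepted with probability a_j(x)/ub_j, so exact
    propensities are re-evaluated only at acceptance tests, and the bounds
    only when the state leaves its interval."""
    rng = random.Random(seed)
    x, t = dict(x0), 0.0

    def make_bounds():
        lo = {s: max(0, int(v * (1 - width))) for s, v in x.items()}
        hi = {s: int(v * (1 + width)) + 1 for s, v in x.items()}
        # assumes monotone (e.g. mass-action) propensities: max at upper corner
        return lo, hi, [a(hi) for a, _ in reactions]

    lo, hi, ub = make_bounds()
    while t < t_end:
        a0 = sum(ub)
        if a0 == 0:
            break
        t += rng.expovariate(a0)            # thinning: candidate times at rate a0
        r, j = rng.random() * a0, 0         # pick candidate j with prob ub[j]/a0
        while r >= ub[j]:
            r -= ub[j]
            j += 1
        a_fn, stoich = reactions[j]
        if rng.random() * ub[j] <= a_fn(x):     # rejection test against exact a_j
            for s, d in stoich.items():
                x[s] += d
            if any(x[s] < lo[s] or x[s] > hi[s] for s in x):
                lo, hi, ub = make_bounds()      # state escaped its interval
    return x
```

    For a birth-death process with birth propensity 5.0 and death propensity 0.1·X, most steps touch only the cached bounds; the exact propensity is evaluated once per candidate, which is the source of the savings the abstract describes.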

  1. Large-scale simulation of methane dissociation along the West Spitsbergen Margin

    SciTech Connect (OSTI)

    Reagan, M.T.; Moridis, G.J.

    2009-07-15

    Vast quantities of methane are trapped in oceanic hydrate deposits, and there is concern that a rise in the ocean temperature will induce dissociation of these hydrate accumulations, potentially releasing large amounts of methane into the atmosphere. The recent discovery of active methane gas venting along the landward limit of the gas hydrate stability zone (GHSZ) on the shallow continental slope west of Spitsbergen could be an indication of this process, if the source of the methane can be confidently attributed to dissociating hydrates. In the first large-scale simulation study of its kind, we simulate shallow hydrate dissociation in conditions representative of the West Spitsbergen margin to test the hypothesis that the observed gas release originated from hydrates. The simulation results are consistent with this hypothesis, and are in remarkable agreement with the recently published observations. They show that shallow, low-saturation hydrate deposits, when subjected to temperature increases at the seafloor, can release significant quantities of methane, and that the releases will be localized near the landward limit of the top of the GHSZ. These results indicate the possibility that hydrate dissociation and methane release may be both a consequence and a cause of climate change.

  2. Thermal Performance Evaluation of Attic Radiant Barrier Systems Using the Large Scale Climate Simulator (LSCS)

    SciTech Connect (OSTI)

    Shrestha, Som S; Miller, William A; Desjarlais, Andre Omer

    2013-01-01

    Application of radiant barriers and low-emittance surface coatings in residential building attics can significantly reduce conditioning loads from heat flow through attic floors. The roofing industry has been developing and using various radiant barrier systems and low-emittance surface coatings to increase energy efficiency in buildings; however, minimal data are available that quantify the effectiveness of these technologies. This study evaluates performance of various attic radiant barrier systems under simulated summer daytime conditions and nighttime or low solar gain daytime winter conditions using the Large Scale Climate Simulator (LSCS). The four attic configurations that were evaluated are 1) no radiant barrier (control), 2) perforated low-e foil laminated oriented strand board (OSB) deck, 3) low-e foil stapled on rafters, and 4) liquid-applied low-emittance coating on roof deck and rafters. All test attics used nominal R-US 13 h·ft²·°F/Btu (R-SI 2.29 m²·K/W) fiberglass batt insulation on the attic floor. Results indicate that the three systems with radiant barriers had heat flows through the attic floor during summer daytime conditions that were 33%, 50%, and 19% lower than the control, respectively.

  3. IFT&E Field Test 1 Public Fact Sheet

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Fact Sheet on the First Test Results of Interagency Field Test & Evaluation of Wind-Radar Mitigation Technologies. PUBLIC RELEASE. CARSR Campaign at Tyler, MN. Test Period: April 23-May 4, 2012. Report: October 2012. Distribution Statement A: Approved for Public Release, October 31, 2012. IFT&E Field Test Report 1: CARSR. Purpose of this Fact Sheet: This document includes background information and a summary of the first of three tests on the effectiveness of mature

  4. Large-Scale Renewable Energy Guide: Developing Renewable Energy Projects Larger Than 10 MWs at Federal Facilities

    Broader source: Energy.gov [DOE]

    The Large-Scale Renewable Energy Guide: Developing Renewable Energy Projects Larger Than 10 MWs at Federal Facilities provides best practices and other helpful guidance for federal agencies developing large-scale renewable energy projects.

  5. Type 4 Tank Testing, Certification and Field Performance Data | Department of Energy

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    These slides were presented at the International Hydrogen Fuel and Pressure Vessel Forum on September 27 - 29, 2010, in Beijing, China. ihfpv_wong.pdf (2.29 MB) More Documents & Publications: Hydrogen Tank Testing R&D; CNG and Hydrogen Tank Safety, R&D, and Testing; Developing SAE Safety Standards for Hydrogen and Fuel Cell Vehicles (FCVs)

  6. Large scale validation of the M5L lung CAD on heterogeneous CT datasets

    SciTech Connect (OSTI)

    Lopez Torres, E.; Fiorina, E.; Pennazio, F.; Peroni, C.; Saletta, M.; Cerello, P. E-mail: cerello@to.infn.it; Camarlinghi, N.; Fantacci, M. E.

    2015-04-15

    Purpose: M5L, a fully automated computer-aided detection (CAD) system for the detection and segmentation of lung nodules in thoracic computed tomography (CT), is presented and validated on several image datasets. Methods: M5L is the combination of two independent subsystems, based on the Channeler Ant Model as a segmentation tool [lung channeler ant model (lungCAM)] and on the voxel-based neural approach. The lungCAM was upgraded with a scan equalization module and a new procedure to recover the nodules connected to other lung structures; its classification module, which makes use of a feed-forward neural network, is based on a small number of features (13), so as to minimize the risk of poor generalization given the large difference between the sizes of the training and testing datasets, which contain 94 and 1019 CTs, respectively. The lungCAM (standalone) and M5L (combined) performance was extensively tested on 1043 CT scans from three independent datasets, including a detailed analysis of the full Lung Image Database Consortium/Image Database Resource Initiative database, not previously reported in the literature. Results: The lungCAM and M5L performance is consistent across the databases, with sensitivities of about 70% and 80%, respectively, at eight false positive findings per scan, despite the variable annotation criteria and acquisition and reconstruction conditions. Reduced sensitivity is found for subtle nodules and ground glass opacity (GGO) structures. A comparison with other CAD systems is also presented. Conclusions: The M5L performance on a large and heterogeneous dataset is stable and satisfactory, although the development of a dedicated module for GGO detection could further improve it, as could an iterative optimization of the training procedure. The main aim of the present study was accomplished: M5L results do not deteriorate when increasing the dataset size, making it a candidate for supporting radiologists on large

  7. Accelerated Stress Testing, Qualification Testing, HAST, Field Experience - What Do They All Mean? (Presentation)

    SciTech Connect (OSTI)

    Wohlgemuth, J.

    2013-05-01

    This presentation discusses the need for a set of tests for modules that would predict their long-term field performance.

  8. Field Testing Research at the NWTC (Fact Sheet)

    SciTech Connect (OSTI)

    Not Available

    2015-02-01

    The National Wind Technology Center (NWTC) at the National Renewable Energy Laboratory (NREL) has extensive field testing capabilities that have been used in collaboration with the wind industry to accelerate wind technology development and deployment for more than 30 years.

  9. Project Impact Assessments: Building America FY14 Field Test Technical Support - 2014 BTO Peer Review | Department of Energy

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Presenter: Lieko Earle, National Renewable Energy Laboratory. The goal of this project is for the National Renewable Energy Laboratory to provide extensive, hands-on technical support to Building America teams in the areas of experiment

  10. Solar Energy Education. Home economics: teacher's guide. Field test

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    edition. [Includes glossary] (Technical Report) | SciTech Connect

  11. Solar Energy Education. Industrial arts: student activities. Field test

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    edition (Technical Report) | SciTech Connect

  12. Solar Energy Education. Industrial arts: teacher's guide. Field test

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    edition. [Includes glossary] (Technical Report) | SciTech Connect

  13. Development of fine-resolution analyses and expanded large-scale forcing properties. Part I: Methodology and evaluation

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Li, Zhijin; Vogelmann, Andrew M.; Feng, Sha; Liu, Yangang; Lin, Wuyin; Zhang, Minghua; Toto, Tami; Endo, Satoshi

    2015-01-20

    We produce fine-resolution, three-dimensional fields of meteorological and other variables for the U.S. Department of Energy’s Atmospheric Radiation Measurement (ARM) Southern Great Plains site. The Community Gridpoint Statistical Interpolation system is implemented in a multiscale data assimilation (MS-DA) framework that is used within the Weather Research and Forecasting model at a cloud-resolving resolution of 2 km. The MS-DA algorithm uses existing reanalysis products and constrains fine-scale atmospheric properties by assimilating high-resolution observations. A set of experiments show that the data assimilation analysis realistically reproduces the intensity, structure, and time evolution of clouds and precipitation associated with a mesoscale convective system. Evaluations also show that the large-scale forcing derived from the fine-resolution analysis has an overall accuracy comparable to the existing ARM operational product. For enhanced applications, the fine-resolution fields are used to characterize the contribution of subgrid variability to the large-scale forcing and to derive hydrometeor forcing, which are presented in companion papers.

  14. What Will the Neighbors Think? Building Large-Scale Science Projects Around the World

    ScienceCinema (OSTI)

    Jones, Craig; Mrotzek, Christian; Toge, Nobu; Sarno, Doug

    2010-01-08

    Public participation is an essential ingredient for turning the International Linear Collider into a reality. Wherever the proposed particle accelerator is sited in the world, its neighbors -- in any country -- will have something to say about hosting a 35-kilometer-long collider in their backyards. When it comes to building large-scale physics projects, almost every laboratory has a story to tell. Three case studies from Japan, Germany and the US will be presented to examine how community relations are handled in different parts of the world. How do particle physics laboratories interact with their local communities? How do neighbors react to building large-scale projects in each region? How can the lessons learned from past experiences help in building the next big project? These and other questions will be discussed to engage the audience in an active dialogue about how a large-scale project like the ILC can be a good neighbor.

  15. Copy of Using Emulation and Simulation to Understand the Large-Scale Behavior of the Internet.

    SciTech Connect (OSTI)

    Adalsteinsson, Helgi; Armstrong, Robert C.; Chiang, Ken; Gentile, Ann C.; Lloyd, Levi; Minnich, Ronald G.; Vanderveen, Keith; Van Randwyk, Jamie A; Rudish, Don W.

    2008-10-01

    We report on the work done in the late-start LDRD "Using Emulation and Simulation to Understand the Large-Scale Behavior of the Internet." We describe the creation of a research platform that emulates many thousands of machines to be used for the study of large-scale internet behavior. We describe a proof-of-concept simple attack we performed in this environment. We describe the successful capture of a Storm bot and, from the study of the bot and further literature search, establish large-scale aspects we seek to understand via emulation of Storm on our research platform in possible follow-on work. Finally, we discuss possible future work.

  16. Variability of Load and Net Load in Case of Large Scale Distributed Wind Power

    SciTech Connect (OSTI)

    Holttinen, H.; Kiviluoma, J.; Estanqueiro, A.; Gomez-Lazaro, E.; Rawn, B.; Dobschinski, J.; Meibom, P.; Lannoye, E.; Aigner, T.; Wan, Y. H.; Milligan, M.

    2011-01-01

    Large-scale wind power production and its variability are major inputs to wind integration studies. This paper analyses measured data from large-scale wind power production. Comparisons of variability are made across several variables: time scale (10-60 minute ramp rates), number of wind farms, and simulated vs. modeled data. Ramp rates for wind power production, load (total system load), and net load (load minus wind power production) demonstrate how wind power increases net-load variability. Wind power will also change the timing of daily ramps.
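
    The net-load bookkeeping and ramp-rate comparison described in this abstract can be illustrated with a short Python sketch. The load and wind values below are hypothetical sample data, not the paper's measured datasets:

    ```python
    # Net load is system load minus wind power production; a ramp rate is the
    # change in a series over a fixed window (here one step = one time interval).
    # The load/wind values below are hypothetical, not the paper's measured data.
    def net_load(load, wind):
        return [l - w for l, w in zip(load, wind)]

    def ramp_rates(series, window=1):
        """Differences of `series` taken `window` steps apart."""
        return [series[i + window] - series[i] for i in range(len(series) - window)]

    load = [100, 105, 112, 120, 118, 110]   # total system load, MW
    wind = [20, 30, 25, 15, 35, 40]         # wind production, MW
    nl = net_load(load, wind)               # [80, 75, 87, 105, 83, 70]

    print(max(ramp_rates(load)))  # 8  (largest load-only ramp)
    print(max(ramp_rates(nl)))    # 18 (largest net-load ramp)
    ```

    Even in this toy series, the largest net-load ramp exceeds the largest load-only ramp, which is the effect the paper quantifies from measured data.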

  17. ARM - PI Product - Large Scale Ice Water Path and 3-D Ice Water Content

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Large Scale Ice Water Path and 3-D Ice Water Content. Cloud ice water concentration is one of the most important, yet poorly observed, cloud properties. Developing physical parameterizations used in general circulation models through single-column modeling is one of the key foci of the ARM

  18. Energy Department Loan Guarantee Would Support Large-Scale Rooftop Solar

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Power for U.S. Military Housing. September 7, 2011 - 2:10pm. Washington D.C. - U.S. Energy Secretary Steven Chu today announced the offer of a conditional commitment for a partial guarantee of a $344 million loan that will support the SolarStrong Project, which is expected

  19. DOE Awards $126.6 Million for Two More Large-Scale Carbon Sequestration

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Projects. May 6, 2008 - 11:30am. Projects in California and Ohio Join Four Others in Effort to Drastically Reduce Greenhouse Gas Emissions. WASHINGTON, DC - The U.S. Department of Energy (DOE) today announced awards of more than $126.6 million to the West Coast Regional Carbon Sequestration Partnership (WESTCARB) and

  20. Building America Technology Solutions Case Study: Field Testing an Unvented

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Roof with Asphalt Shingles in a Cold Climate. In this project, Building America team Building Science Corporation devised an experiment to build and instrument unvented test roofs using air-permeable insulation (dense-pack cellulose and fiberglass) in a cold climate (Chicago, Illinois

  1. Fabrication and testing of gas-filled targets for large-scale plasma experiments on nova

    SciTech Connect (OSTI)

    Stone, G.F.; Rivers, C.J.; Spragge, M.R.; Wallace, R.J.

    1996-06-01

    The proposed next-generation ICF facility, the National Ignition Facility (NIF), is designed to produce energy gain from x-ray-heated "indirect-drive" fuel capsules. For indirect-drive targets, laser light heats the inside of the Au hohlraum wall and produces x rays which in turn heat and implode the capsule to produce fusion conditions in the fuel. Unlike Nova targets, in NIF-scale targets laser light will propagate through several millimeters of gas, producing a plasma, before impinging upon the Au hohlraum wall. The purpose of the gas-produced plasma is to provide sufficient pressure to keep the radiating Au surface from expanding excessively into the hohlraum cavity. Excessive expansion of the Au wall interacts with the laser pulse and degrades the drive symmetry of the capsule implosion. The authors have begun an experimental campaign on the Nova laser to study the effect of hohlraum gas on both laser-plasma interaction and implosion symmetry. In their current NIF target design, the calculated plasma electron temperature is Te ≈ 3 keV and the electron density is Ne ≈ 10²¹ cm⁻³.

  2. Analysis of large scale tests for AP-600 passive containment cooling system

    SciTech Connect (OSTI)

    Sha, W.T.; Chien, T.H.; Sun, J.G.; Chao, B.T.

    1997-07-01

    One unique feature of the AP-600 is its passive containment cooling system (PCCS), which is designed to maintain containment pressure below the design limit for 72 hours without action by the reactor operator. During a design-basis accident, i.e., either a loss-of-coolant or a main steam-line break accident, steam escapes and comes in contact with the much cooler containment vessel wall. Heat is transferred to the inside surface of the steel containment wall by convection and condensation of steam, and through the containment steel wall by conduction. Heat is then transferred from the outside of the containment surface by heating and evaporation of a thin liquid film that is formed by applying water at the top of the containment vessel dome. Air in the annular space is heated by both convection and injection of steam from the evaporating liquid film. The heated air and vapor rise as a result of natural circulation and exit the shield building through the outlets above the containment shell. All of the analytical models that are developed for and used in the COMMIX-ID code for predicting performance of the PCCS will be described. These models cover governing conservation equations for multicomponent single-phase flow, transport equations for the κ-ε two-equation turbulence model, auxiliary equations, a liquid-film tracking model for both the inside (condensate) and outside (evaporating liquid film) surfaces of the containment vessel wall, thermal coupling between flow domains inside and outside the containment vessel, and heat and mass transfer models. Various key parameters of the COMMIX-ID results and corresponding AP-600 PCCS experimental data are compared, and the agreement is good. Significant findings from this study are summarized.

  3. Field Test Report: Preliminary Aquifer Test Characterization Results for Well 299-W15-225: Supporting Phase I of the 200-ZP-1 Groundwater Operable Unit Remedial Design

    SciTech Connect (OSTI)

    Spane, Frank A.; Newcomer, Darrell R.

    2009-09-23

    This report examines the hydrologic test results for both local vertical profile characterization and large-scale hydrologic tests associated with a new extraction well (well 299-W15-225) that was constructed during FY2009 for inclusion within the future 200-West Area Groundwater Treatment System that is scheduled to go on-line at the end of FY2011. To facilitate the analysis of the large-scale hydrologic test performed at newly constructed extraction well 299-W15-225 (C7017; also referred to as EW-1 in some planning documents), the existing 200-ZP-1 interim pump-and-treat system was completely shut down approximately one month before the performance of the large-scale hydrologic test. Specifically, this report 1) applies recently developed methods for removing barometric pressure fluctuations from well water-level measurements to enhance the detection of hydrologic test and pump-and-treat system effects at selected monitor wells, 2) analyzes the barometric-corrected well water-level responses for a preliminary determination of large-scale hydraulic properties, and 3) provides an assessment of the vertical distribution of hydraulic conductivity in the vicinity of newly constructed extraction well 299-W15-225. The hydrologic characterization approach presented in this report is expected to have universal application for meeting the characterization needs at other remedial action sites located within unconfined and confined aquifer systems.
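
    The barometric correction mentioned in the abstract amounts to removing the component of well water-level change that tracks barometric pressure. The sketch below is a minimal illustration of that idea only; it is not the report's actual procedure, and all data are synthetic:

    ```python
    # Minimal illustration of a barometric correction: estimate the barometric
    # efficiency BE as a (no-intercept) least-squares slope relating water-level
    # changes to barometric-pressure changes, then remove that component.
    # All data below are synthetic; this is not the report's actual procedure.
    def barometric_efficiency(dW, dB):
        """BE such that dW ~ -BE * dB (water level falls as pressure rises)."""
        num = sum(w * p for w, p in zip(dW, dB))
        den = sum(p * p for p in dB)
        return -num / den

    dB = [0.10, -0.05, 0.20, -0.10]     # barometric changes, m of water
    dW = [-0.05, 0.025, -0.10, 0.05]    # observed water-level changes, m

    BE = barometric_efficiency(dW, dB)                # 0.5 for this toy data
    corrected = [w + BE * p for w, p in zip(dW, dB)]  # residual hydrologic signal
    ```

    In this synthetic case the water level is purely barometric, so the corrected series is essentially zero; with field data, the residual is what reveals pumping-test responses.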

  4. Collaborative Visualization for Large-Scale Accelerator Electromagnetic Modeling (Final Report)

    SciTech Connect (OSTI)

    William J. Schroeder

    2011-11-13

    This report contains the comprehensive summary of the work performed on the SBIR Phase II, Collaborative Visualization for Large-Scale Accelerator Electromagnetic Modeling, at Kitware Inc. in collaboration with the Stanford Linear Accelerator Center (SLAC). The goal of the work was to develop collaborative visualization tools for large-scale data. The solutions we proposed address the typical problems faced by geographically and organizationally separated research and engineering teams, who produce large data (either through simulation or experimental measurement) and wish to work together to analyze and understand their data. Because the data is large, we expect that it cannot be easily transported to each team member's work site, and that the visualization server must reside near the data. Further, we also expect that each work site has heterogeneous resources: some with large computing clients, tiled (or large) displays, and high bandwidth; other sites as simple as a team member on a laptop computer. Our solution is based on the open-source, widely used ParaView large-data visualization application. We extended this tool to support multiple collaborative clients who may locally visualize data, and then periodically rejoin and synchronize with the group to discuss their findings. Options for managing session control, adding annotation, and defining the visualization pipeline, among others, were incorporated. We also developed and deployed a Web visualization framework based on ParaView that enables the Web browser to act as a participating client in a collaborative session. The ParaView Web Visualization framework leverages various Web technologies including WebGL, JavaScript, Java, and Flash to enable interactive 3D visualization over the web using ParaView as the visualization server. We steered the development of this technology by teaming with the SLAC National Accelerator Laboratory. SLAC has a computationally-intensive problem

  5. Large-scale delamination of multi-layers transition metal carbides and carbonitrides “MXenes”

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Naguib, Michael; Unocic, Raymond R.; Armstrong, Beth L.; Nanda, Jagjit

    2015-04-17

    Herein we report on a general approach to delaminate multi-layered MXenes using an organic base to induce swelling that in turn weakens the bonds between the MX layers. Simple agitation or mild sonication of the swollen MXene in water resulted in the large-scale delamination of the MXene layers. The delamination method is demonstrated for vanadium carbide and titanium carbonitride MXenes.

  6. Large-Scale Delamination of Multi-Layers Transition Metal Carbides and Carbonitrides MXenes

    SciTech Connect (OSTI)

    Abdelmalak, Michael Naguib; Unocic, Raymond R; Armstrong, Beth L; Nanda, Jagjit

    2015-01-01

    Herein we report on a general approach to delaminate multi-layered MXenes using an organic base to induce swelling that in turn weakens the bonds between the MX layers. Simple agitation or mild sonication of the swollen MXene in water resulted in the large-scale delamination of the MXene layers. The delamination method is demonstrated for vanadium carbide and titanium carbonitride MXenes.

  7. First U.S. Large-Scale CO2 Storage Project Advances

    Broader source: Energy.gov [DOE]

    Drilling nears completion for the first large-scale carbon dioxide injection well in the United States for CO2 sequestration. This project will be used to demonstrate that CO2 emitted from industrial sources - such as coal-fired power plants - can be stored in deep geologic formations to mitigate large quantities of greenhouse gas emissions.

  8. DOE Field Operations Program EV and HEV Testing

    SciTech Connect (OSTI)

    Francfort, James Edward; Slezak, L. A.

    2001-10-01

    The United States Department of Energy’s (DOE) Field Operations Program tests advanced technology vehicles (ATVs) and disseminates the testing results to provide fleet managers and other potential ATV users with accurate and unbiased information on vehicle performance. The ATVs (including electric, hybrid, and other alternative fuel vehicles) are tested using one or more methods - Baseline Performance Testing (EVAmerica and Pomona Loop), Accelerated Reliability Testing, and Fleet Testing. The Program (http://ev.inel.gov/sop) and its nine industry testing partners have tested over 30 full-size electric vehicle (EV) models and they have accumulated over 4 million miles of EV testing experience since 1994. In conjunction with several original equipment manufacturers, the Program has developed testing procedures for the new classes of hybrid, urban, and neighborhood EVs. The testing of these vehicles started during 2001. The EVS 18 presentation will include (1) EV and hybrid electric vehicle (HEV) test results, (2) operating experience with and performance trends of various EV and HEV models, and (3) experience with operating hydrogen-fueled vehicles. Data presented for EVs will include vehicle efficiency (km/kWh), average distance driven per charge, and range testing results. The HEV data will include operating considerations, fuel use rates, and range testing results.
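
    The summary metrics named in this abstract (vehicle efficiency in km/kWh, average distance driven per charge) reduce to simple ratios. A short sketch with hypothetical figures, not Program test results:

    ```python
    # Fleet-test summary metrics as simple ratios. The figures below are
    # hypothetical examples, not Field Operations Program results.
    def efficiency_km_per_kwh(km_driven, kwh_used):
        """Vehicle efficiency: distance covered per unit of energy drawn."""
        return km_driven / kwh_used

    def avg_distance_per_charge(total_km, charge_events):
        """Average distance driven between charges over a test period."""
        return total_km / charge_events

    print(efficiency_km_per_kwh(300, 60))       # 5.0 km/kWh
    print(avg_distance_per_charge(12000, 150))  # 80.0 km per charge
    ```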

  9. Horizontal-well pilot waterflood tests shallow, abandoned field

    SciTech Connect (OSTI)

    McAlpine, J.L.; Joshi, S.D.

    1991-08-05

    This paper reports on the suitability of using horizontal wells in a waterflood of shallow, partially depleted sands which will be tested in the Jennings field in Oklahoma. The vertical wells drilled in the Jennings field intersect several well-known formations such as Red Fork, Misner, and Bartlesville sand. Most of these formations have been produced over a number of years, and presently no wells are producing in the field. In the 1940s, 1950s, and 1960s, wells were drilled on 10-acre spacing, and the last well was plugged in 1961. The field was produced only on primary production and produced approximately 1 million bbl of oil. Because the field was not waterflooded, a large potential exists to produce from the field using secondary methods. To improve the economics for the secondary process, a combination of horizontal and vertical wells was considered.

  10. LYα FOREST TOMOGRAPHY FROM BACKGROUND GALAXIES: THE FIRST MEGAPARSEC-RESOLUTION LARGE-SCALE STRUCTURE MAP AT z > 2

    SciTech Connect (OSTI)

    Lee, Khee-Gan; Hennawi, Joseph F.; Eilers, Anna-Christina [Max Planck Institute for Astronomy, Königstuhl 17, D-69117 Heidelberg (Germany); Stark, Casey; White, Martin [Department of Astronomy, University of California at Berkeley, B-20 Hearst Field Annex 3411, Berkeley, CA 94720 (United States); Prochaska, J. Xavier [Department of Astronomy and Astrophysics, University of California, 1156 High Street, Santa Cruz, CA 95064 (United States); Schlegel, David J. [University of California Observatories, Lick Observatory, 1156 High Street, Santa Cruz, CA 95064 (United States); Arinyo-i-Prats, Andreu [Institut de Ciències del Cosmos, Universitat de Barcelona (IEEC-UB), Martí Franquès 1, E-08028 Barcelona (Spain); Suzuki, Nao [Kavli Institute for the Physics and Mathematics of the Universe (IPMU), The University of Tokyo, Kashiwano-ha 5-1-5, Kashiwa-shi, Chiba (Japan); Croft, Rupert A. C. [Department of Physics, Carnegie Mellon University, 5000 Forbes Avenue, Pittsburgh, PA 15213 (United States); Caputi, Karina I. [Kapteyn Astronomical Institute, University of Groningen, P.O. Box 800, 9700-AV Groningen (Netherlands); Cassata, Paolo [Instituto de Fisica y Astronomia, Facultad de Ciencias, Universidad de Valparaiso, Av. Gran Bretana 1111, Casilla 5030, Valparaiso (Chile); Ilbert, Olivier; Le Brun, Vincent; Le Fèvre, Olivier [Aix Marseille Université, CNRS, LAM (Laboratoire d'Astrophysique de Marseille) UMR 7326, F-13388 Marseille (France); Garilli, Bianca [INAF-IASF, Via Bassini 15, I-20133, Milano (Italy); Koekemoer, Anton M. [Space Telescope Science Institute, 3700 San Martin Drive, Baltimore, MD 21218 (United States); Maccagni, Dario [INAF-Osservatorio Astronomico di Bologna, Via Ranzani 1, I-40127 Bologna (Italy); Nugent, Peter, E-mail: lee@mpia.de [Lawrence Berkeley National Laboratory, 1 Cyclotron Road, Berkeley, CA 94720 (United States); and others

    2014-11-01

    We present the first observations of foreground Lyα forest absorption from high-redshift galaxies, targeting 24 star-forming galaxies (SFGs) with z ≈ 2.3-2.8 within a 5' × 14' region of the COSMOS field. The transverse sightline separation is ~2 h⁻¹ Mpc comoving, allowing us to create a tomographic reconstruction of the three-dimensional (3D) Lyα forest absorption field over the redshift range 2.20 ≤ z ≤ 2.45. The resulting map covers 6 h⁻¹ Mpc × 14 h⁻¹ Mpc in the transverse plane and 230 h⁻¹ Mpc along the line of sight with a spatial resolution of ≈3.5 h⁻¹ Mpc, and is the first high-fidelity map of a large-scale structure on ~Mpc scales at z > 2. Our map reveals significant structures with ≳10 h⁻¹ Mpc extent, including several spanning the entire transverse breadth, providing qualitative evidence for the filamentary structures predicted to exist in the high-redshift cosmic web. Simulated reconstructions with the same sightline sampling, spectral resolution, and signal-to-noise ratio recover the salient structures present in the underlying 3D absorption fields. Using data from other surveys, we identified 18 galaxies with known redshifts coeval with our map volume, enabling a direct comparison with our tomographic map. This shows that galaxies preferentially occupy high-density regions, in qualitative agreement with the same comparison applied to simulations. Our results establish the feasibility of the CLAMATO survey, which aims to obtain Lyα forest spectra for ~1000 SFGs over ~1 deg² of the COSMOS field, in order to map out the intergalactic medium large-scale structure at ⟨z⟩ ≈ 2.3 over a large volume (100 h⁻¹ Mpc)³.

  11. Field Testing of Activated Carbon Injection Options for Mercury Control at TXU's Big Brown Station

    SciTech Connect (OSTI)

    John Pavlish; Jeffrey Thompson; Christopher Martin; Mark Musich; Lucinda Hamre

    2009-01-07

    time that enhanced AC was injected, the average mercury removal for the month-long test was approximately 74% across the test baghouse module. ACI was interrupted frequently during the month-long test because the test baghouse module was bypassed frequently to relieve differential pressure. The high air-to-cloth ratio of operations at this unit results in significant differential pressure, and thus there was little operating margin before encountering differential pressure limits, especially at high loads. This limited the use of sorbent injection, as the added material contributes to the overall differential pressure. This finding limits sustainable injection of AC without appropriate modifications to the plant or its operations. Handling and storage issues were observed for the TOXECON ash-AC mixture. Malfunctioning equipment led to baghouse dust hopper plugging, and storage of the stagnant material at flue gas temperatures resulted in self-heating and ignition of the AC in the ash. In the hoppers that worked properly, no such problems were reported. Economics of mercury control at Big Brown were estimated for as-tested scenarios and for scenarios incorporating changes to allow sustainable operation. This project was funded under the U.S. Department of Energy National Energy Technology Laboratory project entitled 'Large-Scale Mercury Control Technology Field Testing Program--Phase II'.

  12. Large-scale optimization-based non-negative computational framework for diffusion equations: Parallel implementation and performance studies

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Chang, Justin; Karra, Satish; Nakshatrala, Kalyana B.

    2016-07-26

    It is well known that the standard Galerkin formulation, which is often the formulation of choice under the finite element method for solving self-adjoint diffusion equations, does not satisfy maximum principles and the non-negative constraint for anisotropic diffusion equations. Recently, optimization-based methodologies that satisfy maximum principles and the non-negative constraint for steady-state and transient diffusion-type equations have been proposed. To date, these methodologies have been tested only on small-scale academic problems. The purpose of this paper is to systematically study the performance of the non-negative methodology in the context of high performance computing (HPC). PETSc and TAO libraries are, respectively, used for the parallel environment and optimization solvers. For large-scale problems, it is important for computational scientists to understand the computational performance of current algorithms available in these scientific libraries. The numerical experiments are conducted on state-of-the-art HPC systems, and a single-core performance model is used to better characterize the efficiency of the solvers. Furthermore, our studies indicate that the proposed non-negative computational framework for diffusion-type equations exhibits excellent strong scaling for real-world large-scale problems.
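
    The failure mode this abstract describes, an unconstrained solve producing negative (unphysical) values where a bound-constrained solve does not, can be seen in a toy 2×2 example. This is a minimal sketch of the general idea using projected gradient descent on a hypothetical system, not the paper's PETSc/TAO implementation:

    ```python
    # A small, poorly conditioned symmetric system standing in for a
    # discretized anisotropic diffusion operator (hypothetical numbers).
    A = [[2.0, 1.9], [1.9, 2.0]]
    b = [1.0, 3.0]

    # Unconstrained solve via Cramer's rule: the first component comes out
    # negative, i.e. an unphysical "concentration".
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    x_unconstrained = [(b[0] * A[1][1] - A[0][1] * b[1]) / det,
                       (A[0][0] * b[1] - b[0] * A[1][0]) / det]

    def nnls_pg(A, b, iters=5000, lr=0.02):
        """Minimize ||Ax - b||^2 subject to x >= 0 by projected gradient."""
        x = [0.0, 0.0]
        for _ in range(iters):
            r = [A[0][0] * x[0] + A[0][1] * x[1] - b[0],
                 A[1][0] * x[0] + A[1][1] * x[1] - b[1]]
            g = [A[0][0] * r[0] + A[1][0] * r[1],   # A^T r
                 A[0][1] * r[0] + A[1][1] * r[1]]
            x = [max(x[0] - lr * g[0], 0.0),        # project onto x >= 0
                 max(x[1] - lr * g[1], 0.0)]
        return x

    x_nonneg = nnls_pg(A, b)
    print(x_unconstrained)  # first entry negative
    print(x_nonneg)         # both entries >= 0
    ```

    The constrained solution pins the offending component at zero and adjusts the other, which is the qualitative behavior the optimization-based framework enforces at scale.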

  13. Field test of the Rapid Transuranic Monitoring Laboratory

    SciTech Connect (OSTI)

    McIsaac, C.V.; Sill, C.W.; Gehrke, R.J.; Killian, E.W.; Watts, K.D.; Amaro, C.R.

    1993-12-01

    A field test of the Rapid Transuranic Monitoring Laboratory (RTML) developed at the Idaho National Engineering Laboratory (INEL) was conducted as part of a demonstration sponsored by the Buried Waste Integrated Demonstration (BWID). The RTML is a mobile, field-deployable laboratory developed for use at buried radioactive waste remediation sites to allow onsite preparation and analysis of soil, smear, and air filter samples for alpha and gamma-emitting contaminants. Analytical instruments installed in the RTML include an extended range, germanium photon analysis spectrometer with an automatic sample changer, two large-area ionization chamber alpha spectrometers, and four alpha continuous air monitors. The performance of the RTML was tested at the Test Reactor Area and Cold Test Pit near the Radioactive Waste Management Complex at the INEL. Objectives, experimental procedures, and an evaluation of the performance of the RTML are presented.

  14. Sludge formation in engine testing and field service

    SciTech Connect (OSTI)

    Graf, R.T.; Copan, W.G.; Kornbrekke, R.E.; Murphy, J.P.

    1988-01-01

    The relationship of engine test sludge to field sludge was investigated by a variety of analytical techniques. Engine oil drains and sludges are suspensions of aggregated, resinous particles in oil. The sludges, in particular, contain large particle networks that are readily broken under shear. The resinous phase itself contains highly oxidized fuel fragments and is enriched in aromatics, acidic species, and additive elements relative to the bulk oil. Field sludge and drain oil samples from the U.S., Europe, and the Far East are shown to be chemically similar to sequence VE engine test sludge and drain oil. Fleet test drain oils from vehicles powered by the Daimler Benz M102E engine are shown to be chemically similar to M102E engine test drain oils.

  15. Gas characterization system 241-AN-105 field acceptance test procedure

    SciTech Connect (OSTI)

    Schneider, T.C.

    1996-03-01

    This document details the field Acceptance Testing of a gas characterization system being installed on waste tank 241-AN-105. The gas characterization systems will be used to monitor the vapor spaces of waste tanks known to contain measurable concentrations of flammable gases.

  16. Gas characterization system 241-AW-101 field acceptance test procedure

    SciTech Connect (OSTI)

    Schneider, T.C.

    1996-03-01

    This document details the field Acceptance Testing of a gas characterization system being installed on waste tank 241-AW-101. The gas characterization systems will be used to monitor the vapor spaces of waste tanks known to contain measurable concentrations of flammable gases.

  17. APPLICATIONS OF CFD METHOD TO GAS MIXING ANALYSIS IN A LARGE-SCALED TANK

    SciTech Connect (OSTI)

    Lee, S; Richard Dimenna, R

    2007-03-19

    The computational fluid dynamics (CFD) modeling technique was applied to the estimation of maximum benzene concentration for the vapor space inside a large-scale, high-level radioactive waste tank at the Savannah River Site (SRS). The objective of the work was to perform calculations for the benzene mixing behavior in the vapor space of Tank 48 and its impact on the local concentration of benzene. The calculations were used to evaluate the degree to which purge air mixes with benzene evolving from the liquid surface and its ability to prevent an unacceptable concentration of benzene from forming. The analysis was focused on changing the tank operating conditions to establish internal recirculation and changing the benzene evolution rate from the liquid surface. The model used a three-dimensional momentum formulation coupled with multi-species transport. The calculations included potential operating conditions for air inlet and exhaust flows, recirculation flow rate, and benzene evolution rate with prototypic tank geometry. The flow conditions are assumed to be fully turbulent, since Reynolds numbers for typical operating conditions are in the range of 20,000 to 70,000 based on the inlet conditions of the air purge system. A standard two-equation turbulence model was used. The model was benchmarked against typical gas mixing problems available in the literature, and the benchmarking results showed that the predictions are in good agreement with the analytical solutions and literature data. Additional sensitivity calculations included a reduced benzene evolution rate, reduced air inlet and exhaust flow, and forced internal recirculation. The modeling results showed that the vapor space was fairly well mixed and that benzene concentrations were relatively low when forced recirculation and 72 cfm ventilation air through the tank boundary were imposed. 
For the same 72 cfm air inlet flow but without forced recirculation
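The quoted Reynolds-number range can be reproduced from a simple pipe-flow relation. A minimal sketch, in which the duct diameter (0.15 m) and inlet velocities (2-7 m/s) are purely illustrative assumptions chosen to land in the quoted range, not values from the SRS report:

```python
# Hedged sketch: Reynolds number Re = rho * v * D / mu for the purge-air
# inlet. Diameter and velocities below are illustrative assumptions.

def reynolds(velocity_m_s, diameter_m, rho=1.2, mu=1.8e-5):
    """Re = rho * v * D / mu, with air properties near 20 C."""
    return rho * velocity_m_s * diameter_m / mu

# Spans roughly Re = 20,000-70,000, i.e. fully turbulent pipe flow.
for v in (2.0, 7.0):
    print(f"v = {v} m/s -> Re = {reynolds(v, 0.15):.0f}")
```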

  18. Analysis and experimental study on formation conditions of large-scale barrier-free diffuse atmospheric pressure air plasmas in repetitive pulse mode

    SciTech Connect (OSTI)

    Li, Lee; Liu, Lun; Liu, Yun-Long; Bin, Yu; Ge, Ya-Feng; Lin, Fo-Chang

    2014-01-14

    Atmospheric air diffuse plasmas have enormous application potential in various fields of science and technology. Without a dielectric barrier, generating large-scale air diffuse plasmas remains a challenging problem. This paper discusses and analyses the formation mechanism of cold homogeneous plasma. It is proposed that generating stable diffuse atmospheric plasmas in open air must meet three conditions: high transient power with low average power, excitation at low average E-field with a locally high E-field region, and multiple overlapping electron avalanches. Accordingly, an experimental configuration for generating large-scale barrier-free diffuse air plasmas is designed. Based on runaway electron theory, a low duty-ratio, high-voltage repetitive nanosecond pulse generator is chosen as the discharge excitation source. Using wire electrodes with small curvature radius, gaps with highly non-uniform E-fields are constructed. Experimental results show that volume-scalable, barrier-free, homogeneous non-thermal air plasmas are obtained in the gaps between the copper-wire electrodes. The area of the cold air plasma reaches hundreds of square centimeters. The proposed formation conditions for large-scale barrier-free diffuse air plasmas are thus shown to be reasonable and feasible.

  19. The absorption chiller in large scale solar pond cooling design with condenser heat rejection in the upper convecting zone

    SciTech Connect (OSTI)

    Tsilingiris, P.T.

    1992-07-01

    The possibility of using solar ponds as low-cost solar collectors combined with commercial absorption chillers in large-scale solar cooling design is investigated. The analysis is based on combining a steady-state solar pond mathematical model with the operational characteristics of a commercial absorption chiller, assuming condenser heat rejection in the upper convecting zone (U.C.Z.). The numerical solution of the nonlinear equations involved yields results that relate chiller capacity to pond design and environmental parameters; these results are also employed to investigate the optimum pond size for minimum capital cost. The derived cost per cooling kW for a 350 kW chiller ranges from about 300 to 500 $/kW of cooling. This is almost an order of magnitude lower than using a solar collector field of the evacuated-tube type.

  20. Simultaneous effect of modified gravity and primordial non-Gaussianity in large scale structure observations

    SciTech Connect (OSTI)

    Mirzatuny, Nareg; Khosravi, Shahram; Baghram, Shant; Moshafi, Hossein E-mail: khosravi@mail.ipm.ir E-mail: hosseinmoshafi@iasbs.ac.ir

    2014-01-01

    In this work we study the simultaneous effect of primordial non-Gaussianity and the modification of gravity in the f(R) framework on large-scale structure observations. We show that non-Gaussianity and modified gravity introduce scale-dependent bias and growth-rate functions. The deviation from ΛCDM in the case of primordial non-Gaussian models occurs at large scales, while the growth rate deviates from ΛCDM at small scales for modified gravity theories. We show that redshift-space distortion can be used to distinguish positive and negative f{sub NL} in a standard background, while in f(R) theories they are not easily distinguishable. The galaxy power spectrum is generally enhanced in the presence of non-Gaussianity and modified gravity. We also obtain the scale dependence of this enhancement. Finally, we define galaxy growth rate and galaxy growth rate bias as new observational parameters to constrain cosmology.

  1. A PRACTICAL ONTOLOGY FOR THE LARGE-SCALE MODELING OF SCHOLARLY ARTIFACTS AND THEIR USAGE

    SciTech Connect (OSTI)

    Rodriguez, Marko A.; Bollen, Johan; Van de Sompel, Herbert

    2007-01-30

    The large-scale analysis of scholarly artifact usage is constrained primarily by current practices in usage data archiving, privacy issues concerned with the dissemination of usage data, and the lack of a practical ontology for modeling the usage domain. As a remedy to the third constraint, this article presents a scholarly ontology that was engineered to represent those classes for which large-scale bibliographic and usage data exist, supports usage research, and whose instantiation is scalable to the order of 50 million articles along with their associated artifacts (e.g. authors and journals) and an accompanying 1 billion usage events. The real-world instantiation of the presented abstract ontology is a semantic network model of the scholarly community that lends the scholarly process to statistical analysis and computational support. We present the ontology, discuss its instantiation, and provide some example inference rules for calculating various scholarly artifact metrics.

  2. Optimization of large-scale heterogeneous system-of-systems models.

    SciTech Connect (OSTI)

    Parekh, Ojas; Watson, Jean-Paul; Phillips, Cynthia Ann; Siirola, John; Swiler, Laura Painton; Hough, Patricia Diane; Lee, Herbert K. H.; Hart, William Eugene; Gray, Genetha Anne; Woodruff, David L.

    2012-01-01

    Decision makers increasingly rely on large-scale computational models to simulate and analyze complex man-made systems. For example, computational models of national infrastructures are being used to inform government policy, assess economic and national security risks, evaluate infrastructure interdependencies, and plan for the growth and evolution of infrastructure capabilities. A major challenge for decision makers is the analysis of national-scale models that are composed of interacting systems: effective integration of system models is difficult, there are many parameters to analyze in these systems, and fundamental modeling uncertainties complicate analysis. This project is developing optimization methods to effectively represent and analyze large-scale heterogeneous system of systems (HSoS) models, which have emerged as a promising approach for describing such complex man-made systems. These optimization methods enable decision makers to predict future system behavior, manage system risk, assess tradeoffs between system criteria, and identify critical modeling uncertainties.

  3. Panel 1, Towards Sustainable Energy Systems: The Role of Large-Scale Hydrogen Storage in Germany

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Hanno Butsch | Head of International Cooperation NOW GmbH National Organization Hydrogen and Fuel Cell Technology Towards sustainable energy systems - The role of large scale hydrogen storage in Germany May 14th, 2014 | Sacramento Political background for the transition to renewable energies * Climate protection: Global responsibility for the next generation. * Energy security: More independence from fossil fuels. * Securing the economy: Creating new markets and jobs through innovations. Three

  4. Final Report: Large-Scale Optimization for Bayesian Inference in Complex Systems

    SciTech Connect (OSTI)

    Ghattas, Omar

    2013-10-15

    The SAGUARO (Scalable Algorithms for Groundwater Uncertainty Analysis and Robust Optimization) Project focuses on the development of scalable numerical algorithms for large-scale Bayesian inversion in complex systems that capitalize on advances in large-scale simulation-based optimization and inversion methods. Our research is directed in three complementary areas: efficient approximations of the Hessian operator, reductions in complexity of forward simulations via stochastic spectral approximations and model reduction, and employing large-scale optimization concepts to accelerate sampling. Our efforts are integrated in the context of a challenging testbed problem that considers subsurface reacting flow and transport. The MIT component of the SAGUARO Project addresses the intractability of conventional sampling methods for large-scale statistical inverse problems by devising reduced-order models that are faithful to the full-order model over a wide range of parameter values; sampling then employs the reduced model rather than the full model, resulting in very large computational savings. Results indicate little effect on the computed posterior distribution. On the other hand, in the Texas-Georgia Tech component of the project, we retain the full-order model, but exploit inverse problem structure (adjoint-based gradients and partial Hessian information of the parameter-to-observation map) to implicitly extract lower dimensional information on the posterior distribution; this greatly speeds up sampling methods, so that fewer sampling points are needed. We can think of these two approaches as "reduce then sample" and "sample then reduce." In fact, these two approaches are complementary, and can be used in conjunction with each other. Moreover, they both exploit deterministic inverse problem structure, in the form of adjoint-based gradient and Hessian information of the underlying parameter-to-observation map, to achieve their speedups.
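The "reduce then sample" strategy can be sketched minimally: run a Metropolis sampler that evaluates a cheap surrogate in place of the expensive forward model. The toy models, data value, and noise level below are invented for illustration; they are not SAGUARO's actual groundwater models.

```python
import math
import random

# Hedged sketch of "reduce then sample": Metropolis sampling through a
# reduced-order surrogate, avoiding calls to the expensive forward model.

def full_model(x):
    # Stand-in for an expensive forward simulation (e.g., a PDE solve).
    return math.sin(x) + 0.5 * x

def surrogate(x):
    # Reduced-order stand-in: a low-order Taylor expansion of full_model.
    return x - x**3 / 6 + 0.5 * x

def log_posterior(x, data=1.0, sigma=0.5):
    # Gaussian misfit evaluated through the surrogate, not the full model.
    return -0.5 * ((surrogate(x) - data) / sigma) ** 2

def metropolis(logp, x0=0.0, steps=2000, step=0.5, seed=1):
    random.seed(seed)
    x, lp, chain = x0, logp(x0), []
    for _ in range(steps):
        cand = x + random.gauss(0, step)
        lp_cand = logp(cand)
        if random.random() < math.exp(min(0.0, lp_cand - lp)):  # accept
            x, lp = cand, lp_cand
        chain.append(x)
    return chain

chain = metropolis(log_posterior)   # no full_model calls during sampling
mean = sum(chain) / len(chain)
```

The "sample then reduce" alternative would keep `full_model` in the loop and instead accelerate each evaluation with gradient and Hessian information.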

  5. Topology-Aware Mappings for Large-Scale Eigenvalue Problems | Argonne

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Leadership Computing Facility Topology-Aware Mappings for Large-Scale Eigenvalue Problems Authors: Aktulga, H.M., Yang, C., Ng, E.G., Maris, P., Vary, J.P. Obtaining highly accurate predictions for properties of light atomic nuclei using the Configuration Interaction (CI) approach requires computing the lowest eigenvalues and associated eigenvectors of a large many-body nuclear Hamiltonian matrix, Ĥ. Since Ĥ is a large sparse matrix, a parallel iterative eigensolver designed for

  6. Large-Scale Simulation of Brain Tissue: Blue Brain Project, EPFL | Argonne

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Leadership Computing Facility Digital reconstruction of pyramidal cells. Blue Brain Project, Ecole Polytechnique Federale de Lausanne Large-Scale Simulation of Brain Tissue: Blue Brain Project, EPFL PI Name: Fabien Delalondre PI Email: fabien.delalondre@epfl.ch Institution: Ecole Polytechnique Federale de Lausanne Allocation Program: ESP Year: 2015 Research Domain: Biological Sciences Tier 1 Science Project Science This ESP project will be used to

  7. Metal Catalyzed sp2 Bonded Carbon - Large-scale Graphene Synthesis and

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Beyond | MIT-Harvard Center for Excitonics Metal Catalyzed sp2 Bonded Carbon - Large-scale Graphene Synthesis and Beyond December 1, 2009 at 3pm/36-428 Peter Sutter Center for Functional Nanomaterials sutter abstract: Carbon honeycomb lattices have shown a number of remarkable properties. When wrapped up into fullerenes, for instance, superconductivity with high transition temperatures can be induced by alkali intercalation. Rolling carbon sheets up into 1-dimensional nanotubes generates the

  8. Large-Scale Production of Marine Microalgae for Fuel and Feeds

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Bioenergy Technologies Office (BETO) 2015 Project Peer Review Large-Scale Production of Marine Microalgae for Fuel and Feeds March 24, 2015 Algae Platform Review Mark Huntley Cornell Marine Algal Biofuels Consortium This presentation does not contain any proprietary, confidential, or otherwise restricted information Goal Statement: BETO MYPP Goals (3) Demonstrate 1. Performance against clear cost goals and technical targets (Q4 2013) 2. Productivity of 1,500 gal/acre/yr algal oil (Q4 2014)

  9. High performance CLSM field mixing and pumping test results

    SciTech Connect (OSTI)

    Rajendran, N.; Langton, C.A.

    1997-05-14

    An improved low-bleed-water CLSM mix was field tested on May 13, 1997 at the Throop portable auger batching plant. Production and pumping tests were very successful. The four cubic yards of material were pumped into a plywood form, where the slurry flowed 48 feet (the entire length of the form). The CLSM slurry was very uniform, self-leveling, cohesive, showed no segregation, and had no bleed water. Properties of the High Performance CLSM were the same for material collected at the auger and at the end of the pipeline, except for the air content, which was 5.5% at the auger and 3.2% at the end of the pipeline. This is exactly what was expected and indicates that this CLSM is easy to mix and pump in the Throop/BSRI equipment. CLSM Mix TW-10 is recommended for Tank Closure based on the field batching and pumping tests.

  10. A method of orbital analysis for large-scale first-principles simulations

    SciTech Connect (OSTI)

    Ohwaki, Tsukuru; Otani, Minoru; Ozaki, Taisuke

    2014-06-28

    An efficient method of calculating the natural bond orbitals (NBOs) based on a truncation of the entire density matrix of a whole system is presented for large-scale density functional theory calculations. The method recovers an orbital picture for O(N) electronic structure methods which directly evaluate the density matrix without using Kohn-Sham orbitals, thus enabling quantitative analysis of chemical reactions in large-scale systems in the language of localized Lewis-type chemical bonds. With the density matrix calculated by either an exact diagonalization or O(N) method, the computational cost is O(1) for the calculation of NBOs associated with a local region where a chemical reaction takes place. As an illustration of the method, we demonstrate how an electronic structure in a local region of interest can be analyzed by NBOs in a large-scale first-principles molecular dynamics simulation for a liquid electrolyte bulk model (propylene carbonate + LiBF{sub 4}).
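The truncation idea, stripped to its essentials, can be sketched as follows: diagonalize only the block of the density matrix belonging to a local region, so the cost is independent of total system size. The random matrix is a placeholder for a real DFT density matrix, and this omits the overlap matrix and the bond-search steps of a genuine NBO analysis.

```python
import numpy as np

# Hedged sketch of the O(1) truncation step only, not the full NBO method.

rng = np.random.default_rng(0)
n = 500                                  # basis functions in the whole system
A = rng.standard_normal((n, n))
P = A @ A.T                              # symmetric positive stand-in matrix
P *= n / np.trace(P)                     # normalize trace to n "electrons"

region = slice(40, 60)                   # basis functions on a local fragment
P_loc = P[region, region]                # fixed-size block extraction: O(1)
occ, orbs = np.linalg.eigh(P_loc)        # occupations and local orbitals
```

The cost of the `eigh` call depends only on the fragment size (20 here), however large `n` grows, which is the point of the truncation.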

  11. Field Testing of a Portable Radiation Detector and Mapping System

    SciTech Connect (OSTI)

    Hofstetter, K.J.; Hayes, D.W.; Eakle, R.F.

    1998-03-01

    Researchers at the Savannah River Site (SRS) have developed a man-portable radiation detector and mapping system (RADMAPS) which integrates the accumulation of radiation information with precise ground locations. RADMAPS provides field personnel with the ability to detect, locate, and characterize nuclear material at a site or facility by analyzing the gamma or neutron spectra and correlating them with position. The man-portable field unit records gamma or neutron count rate information and its location, along with date and time, using an embedded Global Positioning System (GPS). RADMAPS is an advancement in data fusion, integrating several off-the-shelf technologies with new computer software, resulting in a system that is simple to deploy and provides information useful to field personnel in an easily understandable form. Decisions on subsequent actions can be made in the field to use available field resources efficiently. The technologies employed in this system include: recording GPS, radiation detection (typically scintillation detectors), pulse height analysis, analog-to-digital converters, removable solid-state (Flash or SRAM) memory cards, Geographic Information System (GIS) software, and personal computers with CD-ROM supporting digital base maps. RADMAPS includes several field-deployable data acquisition systems designed to simultaneously record radiation and geographic positions. This paper summarizes the capabilities of RADMAPS and some of the results of field tests performed with the system.

  12. SMART wind turbine rotor. Design and field test

    SciTech Connect (OSTI)

    Berg, Jonathan Charles; Resor, Brian Ray; Paquette, Joshua A.; White, Jonathan Randall

    2014-01-01

    The Wind Energy Technologies department at Sandia National Laboratories has developed and field tested a wind turbine rotor with integrated trailing-edge flaps designed for active control of rotor aerodynamics. The SMART Rotor project was funded by the Wind and Water Power Technologies Office of the U.S. Department of Energy (DOE) and was conducted to demonstrate active rotor control and evaluate simulation tools available for active control research. This report documents the design, fabrication, and testing of the SMART Rotor. The report begins with an overview of active control research at Sandia and the objectives of this project. The SMART blade, based on the DOE/SNL 9-meter CX-100 blade design, is then documented, including all modifications necessary to integrate the trailing-edge flaps, sensors incorporated into the system, and the fabrication processes that were utilized. Finally, the test site and test campaign are described.

  13. Field Testing: Independent, Accredited Testing and Validation for the Wind Industry (Fact Sheet)

    SciTech Connect (OSTI)

    Not Available

    2011-11-01

    This fact sheet describes the field testing capabilities at the National Wind Technology Center (NWTC). NREL's specialized facilities and personnel at the NWTC provide the U.S. wind industry with scientific and engineering support that has proven critical to the development of wind energy for U.S. energy needs. The NWTC's specialized field-testing capabilities have evolved over 30 years of continuous support by the U.S. Department of Energy Wind and Hydropower Technologies Program and long-standing industry partnerships. The NWTC provides wind industry manufacturers, developers, and operators with turbine and component testing all in one convenient location. Although industry utilizes sophisticated modeling tools to design and optimize turbine configurations, there are always limitations in modeling capabilities, and testing is a necessity to ensure performance and reliability. Designs require validation, and testing is the only way to determine if there are flaws. Prototype testing is especially important in capturing manufacturing flaws that might require fleet-wide retrofits. The NWTC works with its industry partners to verify the performance and reliability of wind turbines that range in size from 400 watts to 3 megawatts. Engineers conduct tests on components and full-scale turbines in laboratory environments and in the field. Test data produced from these tests can be used to validate turbine design codes and simulations that further advance turbine designs.

  14. Wind Program Announces $2 Million to Develop and Field Test Wind...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Announces 2 Million to Develop and Field Test Wind Energy Bat Impact Minimization Technologies Wind Program Announces 2 Million to Develop and Field Test Wind Energy Bat Impact ...

  15. Mathematical model of testing of pipeline integrity by thermal fields

    SciTech Connect (OSTI)

    Vaganova, Nataliia

    2014-11-18

    Testing of thermal fields at the ground surface above a pipeline is considered. One method to obtain and investigate an ideal thermal field in different environments is direct numerical simulation of the heat transfer processes, taking into account the most important physical factors. In the paper, a mathematical model of heat propagation from an underground source is described, accounting for physical factors such as filtration of water in soil and solar radiation. Thermal processes are considered in a 3D domain in which the heat source is a pipeline at constant temperature with a non-uniform insulated shell (with 'damages'). This problem leads to the solution of the heat diffusion equation with nonlinear boundary conditions. Approaches to the analysis of thermal fields are considered to detect damage.
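The forward problem can be sketched with a 2-D explicit finite-difference model: heat diffuses from a buried constant-temperature pipe whose shell has a "damaged" (hotter) segment, producing a surface anomaly above the damage. Grid size, diffusivity, and temperatures are invented; the paper's model is 3-D and also includes soil water filtration and solar radiation, omitted here.

```python
import numpy as np

# Hedged 2-D sketch of heat propagation from a buried pipe with a damaged
# insulating shell; all parameter values are illustrative assumptions.

nx, ny = 50, 30                  # columns (horizontal) x rows (depth)
T = np.zeros((ny, nx))           # temperature above ambient
alpha_dt = 0.2                   # alpha*dt/dx^2, below 0.25 for stability
pipe_row = ny - 5                # burial depth of the pipe

src = np.full(nx, 20.0)          # effective temperature through intact shell
src[22:28] = 80.0                # hotter segment where the shell is damaged

for _ in range(2000):
    T[pipe_row, :] = src                         # Dirichlet pipe condition
    T[1:-1, 1:-1] += alpha_dt * (
        T[2:, 1:-1] + T[:-2, 1:-1] + T[1:-1, 2:] + T[1:-1, :-2]
        - 4.0 * T[1:-1, 1:-1])                   # explicit Laplacian update
    T[0, :] = 0.0                                # surface held at ambient

near_surface = T[1, :]                 # warmth just below the surface
hot_col = int(near_surface.argmax())   # peaks above the damaged segment
```

Scanning `near_surface` for its maximum mimics locating the shell damage from a surface thermal survey.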

  16. Creation of the dam for the No. 2 Kambaratinskaya HPP by large-scale blasting: analysis of planning experience and lessons learned

    SciTech Connect (OSTI)

    Shuifer, M. I.; Argal, E. S.

    2012-05-15

    Results of complex instrument observations and videotaping during the large-scale blasts (LSB) detonated to create the dam at the No. 2 Kambaratinskaya HPP on the Naryn River in the Kyrgyz Republic are analyzed. The energy effectiveness of the explosives is evaluated, characteristics of LSB manifestations in seismic and air waves are revealed, and the shaping and movement of the rock mass are examined. A methodological analysis of the planning and production of the LSB is given.

  17. Deep Borehole Field Test Requirements and Controlled Assumptions.

    SciTech Connect (OSTI)

    Hardin, Ernest

    2015-07-01

    This document presents design requirements and controlled assumptions intended for use in the engineering development and testing of: 1) prototype packages for radioactive waste disposal in deep boreholes; 2) a waste package surface handling system; and 3) a subsurface system for emplacing and retrieving packages in deep boreholes. Engineering development and testing is being performed as part of the Deep Borehole Field Test (DBFT; SNL 2014a). This document presents parallel sets of requirements for a waste disposal system and for the DBFT, showing the close relationship. In addition to design, it will also inform planning for drilling, construction, and scientific characterization activities for the DBFT. The information presented here follows typical preparations for engineering design. It includes functional and operating requirements for handling and emplacement/retrieval equipment, waste package design and emplacement requirements, borehole construction requirements, sealing requirements, and performance criteria. Assumptions are included where they could impact engineering design. Design solutions are avoided in the requirements discussion. This set of requirements and assumptions has benefited greatly from reviews by Gordon Appel, Geoff Freeze, Kris Kuhlman, Bob MacKinnon, Steve Pye, David Sassani, Dave Sevougian, and Jiann Su.

  18. Field test comparison of natural gas engine exhaust valves

    SciTech Connect (OSTI)

    Bicknell, W.B.; Hay, S.C.; Shade, W.N.; Statler, G.R.

    1996-12-31

    As part of a product improvement program, an extensive spark-ignited, turbocharged, natural gas engine exhaust valve test program was conducted using laboratory and field engines. Program objectives were to identify a valve and seat insert combination that increased mean time between overhauls (MTBO) while reducing the risk of premature valve cracking and failure. Following a thorough design review, a large number of valve and seat insert configurations were tested in a popular 900 RPM, 166 BHP (0.123 MW) per cylinder industrial gas engine series. Material, head geometry, seat angle and other parameters were compared. Careful in-place measurements and post-test inspections compared various configurations and identified optimal exhaust valving for deployment in new units and upgrades of existing engines.

  19. Networks of silicon nanowires: A large-scale atomistic electronic structure analysis

    SciTech Connect (OSTI)

    Keleş, Ümit; Bulutay, Ceyhun; Liedke, Bartosz; Heinig, Karl-Heinz

    2013-11-11

    Networks of silicon nanowires possess intriguing electronic properties surpassing the predictions based on quantum confinement of individual nanowires. Employing large-scale atomistic pseudopotential computations, as yet unexplored branched nanostructures are investigated at the subsystem level as well as in full assembly. The end product is a simple but versatile expression for the bandgap and band edge alignments of multiply-crossing Si nanowires for various diameters, numbers of crossings, and wire orientations. Further progress along this line can potentially take the bottom-up approach for Si nanowire networks to a top-down design, starting with functionality and leading to an enabling structure.

  20. NREL Offers an Open-Source Solution for Large-Scale Energy Data Collection

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    and Analysis - News Releases | NREL NREL Offers an Open-Source Solution for Large-Scale Energy Data Collection and Analysis June 18, 2013 The Energy Department's National Renewable Energy Laboratory (NREL) is launching an open-source system for storing, integrating, and aligning energy-related time-series data. NREL's Energy DataBus is used for tracking and analyzing energy use on its own campus. The system is applicable to other facilities - including anything from a single building to a

  1. Materials Science and Materials Chemistry for Large Scale Electrochemical Energy Storage: From Transportation to Electrical Grid

    SciTech Connect (OSTI)

    Liu, Jun; Zhang, Jiguang; Yang, Zhenguo; Lemmon, John P.; Imhoff, Carl H.; Graff, Gordon L.; Li, Liyu; Hu, Jian Z.; Wang, Chong M.; Xiao, Jie; Xia, Guanguang; Viswanathan, Vilayanur V.; Baskaran, Suresh; Sprenkle, Vincent L.; Li, Xiaolin; Shao, Yuyan; Schwenzer, Birgit

    2013-02-15

    Large-scale electrical energy storage has become more important than ever for reducing fossil energy consumption in transportation and for the widespread deployment of intermittent renewable energy in the electric grid. However, significant challenges exist for its application. Here, the status and challenges are reviewed from the perspective of materials science and materials chemistry in electrochemical energy storage technologies, such as Li-ion batteries, sodium (sulfur and metal halide) batteries, the Pb-acid battery, redox flow batteries, and supercapacitors. Perspectives and approaches are introduced for emerging battery designs and new chemistry combinations to reduce the cost of energy storage devices.

  2. Pretest Calculations of Temperature Changes for Field Thermal Conductivity Tests

    SciTech Connect (OSTI)

    N.S. Brodsky

    2002-07-17

    A large volume fraction of the potential monitored geologic repository at Yucca Mountain may reside in the Tptpll (Tertiary, Paintbrush Group, Topopah Spring Tuff, crystal poor, lower lithophysal) lithostratigraphic unit. This unit is characterized by voids, or lithophysae, which range in size from centimeters to meters. A series of thermal conductivity field tests are planned in the Enhanced Characterization of the Repository Block (ECRB) Cross Drift. The objective of the pretest calculation described in this document is to predict changes in temperature in the surrounding rock for these tests, for a given heater power and a set of thermal transport properties. The calculation can be extended, as described in this document, to obtain thermal conductivity, thermal capacitance (density x heat capacity, J {center_dot} m{sup -3} {center_dot} K{sup -1}), and thermal diffusivity from the field data. The work has been conducted under the ''Technical Work Plan For: Testing and Monitoring'' (BSC 2001). One of the outcomes of this analysis is to determine the initial output of the heater. This output must be high enough to provide results within a reasonably short period (several weeks to a month) and to produce a temperature increase detectable by the instruments employed in the test. The test will be conducted in stages, and heater output will be step-increased as the test progresses. If the initial temperature is set too high, the experiment will have fewer steps and thus fewer thermal conductivity data points will result.
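The three quantities named above are tied together by a single relation: thermal diffusivity equals conductivity divided by thermal capacitance. A one-line worked example, where the tuff values are round illustrative numbers rather than measured data:

```python
# Hedged sketch of the property relation the field data reduce to:
# alpha = k / (rho * c). The values below are illustrative assumptions.

def thermal_diffusivity(k, rho_c):
    """alpha = k / (rho*c): k in W/(m K), rho_c in J/(m^3 K), alpha in m^2/s."""
    return k / rho_c

alpha = thermal_diffusivity(1.8, 2.0e6)   # 9e-7 m^2/s for the assumed values
```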

  3. Large-scale structure in brane-induced gravity. I. Perturbation theory

    SciTech Connect (OSTI)

    Scoccimarro, Roman

    2009-11-15

    We study the growth of subhorizon perturbations in brane-induced gravity using perturbation theory. We solve for the linear evolution of perturbations taking advantage of the symmetry under gauge transformations along the extra-dimension to decouple the bulk equations in the quasistatic approximation, which we argue may be a better approximation at large scales than thought before. We then study the nonlinearities in the bulk and brane equations, concentrating on the workings of the Vainshtein mechanism by which the theory becomes general relativity (GR) at small scales. We show that at the level of the power spectrum, to a good approximation, the effect of nonlinearities in the modified gravity sector may be absorbed into a renormalization of the gravitational constant. Since the relation between the lensing potential and density perturbations is entirely unaffected by the extra physics in these theories, the modified gravity can be described in this approximation by a single function, an effective gravitational constant for nonrelativistic motion that depends on space and time. We develop a resummation scheme to calculate it, and provide predictions for the nonlinear power spectrum. At the level of the large-scale bispectrum, the leading order corrections are obtained by standard perturbation theory techniques, and show that the suppression of the brane-bending mode leads to characteristic signatures in the non-Gaussianity generated by gravity, generic to models that become GR at small scales through second-derivative interactions. We compare the predictions in this work to numerical simulations in a companion paper.

  4. Technical and economical aspects of large-scale CO{sub 2} storage in deep oceans

    SciTech Connect (OSTI)

    Sarv, H.; John, J.

    2000-07-01

    The authors examined the technical and economic feasibility of two options for large-scale transportation and ocean sequestration of captured CO{sub 2} at depths of 3000 meters or greater. In one case, CO{sub 2} was pumped from a land-based collection center through six parallel-laid subsea pipelines. Another case considered oceanic tanker transport of liquid carbon dioxide to an offshore floating platform or a barge for vertical injection through a large-diameter pipe to the ocean floor. Based on the preliminary technical and economic analyses, tanker transportation and offshore injection through a large-diameter, 3,000-meter vertical pipeline from a floating structure appears to be the best method for delivering liquid CO{sub 2} to deep ocean floor depressions for distances greater than 400 km. Other benefits of offshore injection are high payload capability and ease of relocation. For shorter distances (less than 400 km), CO{sub 2} delivery by subsea pipelines is more cost-effective. Estimated costs for 500-km transport and storage at a depth of 3000 meters by subsea pipelines or tankers were under 2 dollars per ton of stored CO{sub 2}. Their analysis also indicates that large-scale sequestration of captured CO{sub 2} in oceans is technologically feasible and has many commonalities with other strategies for deep-sea natural gas and oil exploration installations.
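The break-even logic in the abstract can be sketched with two linear cost curves: pipelines cheaper below roughly 400 km, tankers beyond. The curves below are invented placeholders calibrated only to the two quoted figures (a ~400 km crossover and under $2/t at 500 km), not the authors' engineering-economic model.

```python
# Hedged sketch of the distance-based mode choice; cost curves are assumed.

def cheaper_option(distance_km):
    pipeline = 0.5 + 0.003 * distance_km    # $/t CO2, assumed
    tanker = 1.3 + 0.001 * distance_km      # $/t CO2, assumed
    if pipeline <= tanker:
        return "pipeline", pipeline
    return "tanker", tanker

# Crossover: 0.5 + 0.003 d = 1.3 + 0.001 d  ->  d = 400 km
print(cheaper_option(200))   # pipeline wins at short range
print(cheaper_option(500))   # tanker wins, still under $2/t
```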

  6. Large Scale Ice Water Path and 3-D Ice Water Content

    DOE Data Explorer [Office of Scientific and Technical Information (OSTI)]

    Liu, Guosheng

    2008-01-15

    Cloud ice water concentration is one of the most important, yet poorly observed, cloud properties. Developing physical parameterizations used in general circulation models through single-column modeling is one of the key foci of the ARM program. In addition to the vertical profiles of temperature, water vapor, and condensed water at the model grids, large-scale horizontal advective tendencies of these variables are also required as forcing terms in single-column models. Observed horizontal advection of condensed water has not been available because the radar/lidar/radiometer observations at the ARM site are single-point measurements and therefore do not provide the horizontal distribution of condensed water. The intention of this product is to provide the large-scale distribution of cloud ice water by merging available surface and satellite measurements. The satellite cloud ice water algorithm uses ARM ground-based measurements as a baseline and produces datasets of 3-D cloud ice water distributions in a 10 deg x 10 deg area near the ARM site. The approach of the study is to expand a (surface) point measurement to a (satellite) areal measurement. That is, the study takes advantage of the high-quality cloud measurements at the ARM site, using the cloud characteristics derived from the point measurement to guide and constrain the satellite retrieval, and then using the satellite algorithm to derive the cloud ice water distributions within a 10 deg x 10 deg area centered at the ARM site.
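
    The merging idea can be sketched as follows. This is a hypothetical simplification, not the product's actual algorithm: a gridded satellite ice water path (IWP) field over the analysis area is rescaled so that it agrees with the ground-based retrieval at the ARM site's grid cell. The function name, the toy 2x2 grid, and the simple ratio correction are all illustrative assumptions.

```python
# Hypothetical sketch: anchor a satellite ice-water-path (IWP) field to a
# ground-based (ARM) point retrieval via a multiplicative bias correction.

def merge_point_with_field(satellite_iwp, arm_cell, arm_ground_iwp):
    """Scale a gridded satellite IWP field so that it matches the
    ground-based retrieval at the ARM site's grid cell."""
    i, j = arm_cell
    sat_at_arm = satellite_iwp[i][j]
    if sat_at_arm == 0:
        raise ValueError("satellite field is zero at the ARM cell")
    factor = arm_ground_iwp / sat_at_arm
    return [[v * factor for v in row] for row in satellite_iwp]

field = [[120.0, 150.0], [100.0, 90.0]]   # g/m^2, toy 2x2 "area"
merged = merge_point_with_field(field, (0, 0), 132.0)
```

    A real retrieval would constrain algorithm parameters rather than apply a single scalar, but the point-to-area logic is the same.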

  7. Environmental performance evaluation of large-scale municipal solid waste incinerators using data envelopment analysis

    SciTech Connect (OSTI)

    Chen, H.-W.; Chang, N.-B.; Chen, J.-C.; Tsai, S.-J.

    2010-07-15

    Owing to limited land resources, incinerators are considered in many countries, such as Japan and Germany, the major technology in waste management schemes that must cope with the increasing demand for municipal and industrial solid waste treatment in urban regions. The evaluation of these municipal incinerators in terms of secondary pollution potential, cost-effectiveness, and operational efficiency has become a new focus in the highly interdisciplinary area of production economics, systems analysis, and waste management. This paper demonstrates the application of data envelopment analysis (DEA), a production economics tool, to evaluate performance-based efficiencies of 19 large-scale municipal incinerators in Taiwan operating under different conditions. A 4-year operational data set from 2002 to 2005 was collected in support of DEA modeling, using Monte Carlo simulation to outline the possibility distributions of the operational efficiency of these incinerators. Uncertainty analysis using Monte Carlo simulation balances the simplifications of the analysis against the need to capture the essential random features that complicate solid waste management systems. To cope with future challenges, efforts in DEA modeling, systems analysis, and prediction of the performance of large-scale municipal solid waste incinerators under normal operation and special conditions were directed toward generating a compromise assessment procedure. The research findings will eventually lead to the identification of optimal management strategies for promoting the quality of solid waste incineration, not only in Taiwan but also elsewhere in the world.
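
    The core DEA idea can be illustrated in its simplest reduction. With one input and one output, a unit's CCR efficiency is just its output/input ratio relative to the best ratio in the sample; the paper's actual model (19 incinerators, multiple inputs and outputs, Monte Carlo uncertainty) requires solving a linear program per unit. The data below are invented for the sketch.

```python
# Minimal single-input/single-output reduction of the CCR ratio idea
# behind DEA: efficiency of each decision-making unit is its
# output-per-input ratio normalized by the best ratio observed.

def dea_single_ratio(inputs, outputs):
    ratios = [o / i for i, o in zip(inputs, outputs)]
    best = max(ratios)
    return [r / best for r in ratios]

# Toy data: tons of waste burned (input) vs. MWh of energy recovered (output)
eff = dea_single_ratio([100.0, 80.0, 120.0], [50.0, 48.0, 48.0])
# the second incinerator has the best ratio, so its efficiency is 1.0
```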

  8. A High-Performance Rechargeable Iron Electrode for Large-Scale Battery-Based Energy Storage

    SciTech Connect (OSTI)

    Manohar, AK; Malkhandi, S; Yang, B; Yang, C; Prakash, GKS; Narayanan, SR

    2012-01-01

    Inexpensive, robust and efficient large-scale electrical energy storage systems are vital to the utilization of electricity generated from solar and wind resources. In this regard, the low cost, robustness, and eco-friendliness of aqueous iron-based rechargeable batteries are particularly attractive and compelling. However, wasteful evolution of hydrogen during charging and the inability to discharge at high rates have limited the deployment of iron-based aqueous batteries. We report here new chemical formulations of the rechargeable iron battery electrode to achieve a ten-fold reduction in the hydrogen evolution rate, an unprecedented charging efficiency of 96%, a high specific capacity of 0.3 Ah/g, and a twenty-fold increase in discharge rate capability. We show that modifying high-purity carbonyl iron by in situ electro-deposition of bismuth leads to substantial inhibition of the kinetics of the hydrogen evolution reaction. The in situ formation of conductive iron sulfides mitigates the passivation by iron hydroxide thereby allowing high discharge rates and high specific capacity to be simultaneously achieved. These major performance improvements are crucial to advancing the prospect of a sustainable large-scale energy storage solution based on aqueous iron-based rechargeable batteries. (C) 2012 The Electrochemical Society. [DOI: 10.1149/2.034208jes] All rights reserved.

  9. Large-scale structure evolution in axisymmetric, compressible free-shear layers

    SciTech Connect (OSTI)

    Aeschliman, D.P.; Baty, R.S.

    1997-05-01

    This paper is a description of work in progress. It describes Sandia's program to study the basic fluid mechanics of large-scale mixing in unbounded, compressible, turbulent flows, specifically the turbulent mixing of an axisymmetric compressible helium jet in a parallel, coflowing compressible air freestream. Both jet and freestream velocities are variable over a broad range, providing a wide range of mixing-layer Reynolds numbers. Although the convective Mach number, M{sub c}, is currently limited by the present nozzle design to values of 0.6 and below, straightforward nozzle design changes would permit a wide range of convective Mach numbers, to well in excess of 1.0. The use of helium allows simulation of a hot jet due to the large density difference and also aids optical flow visualization via schlieren due to the large density gradient in the mixing layer. The work comprises a blend of analysis, experiment, and direct numerical simulation (DNS). Here the authors discuss only the analytical and experimental efforts to observe and describe the evolution of the large-scale structures. The DNS work, used to compute local two-point velocity correlation data, will be discussed elsewhere.
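
    For readers unfamiliar with the quantity, the convective Mach number is commonly defined (for streams with equal specific-heat ratios) as Mc = (U1 - U2)/(a1 + a2), the Mach number of the large-scale structures in a frame convecting with them. The velocities and sound speeds below are illustrative values for a helium jet in air, not numbers from the experiment.

```python
# Convective Mach number of a compressible shear layer, using the common
# equal-gamma definition Mc = (U1 - U2) / (a1 + a2).

def convective_mach(u_jet, u_free, a_jet, a_free):
    return (u_jet - u_free) / (a_jet + a_free)

# helium jet: high sound speed (~1000 m/s); air freestream (~340 m/s)
mc = convective_mach(900.0, 300.0, 1000.0, 340.0)
# this illustrative case falls below the 0.6 limit quoted in the abstract
```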

  10. Field Testing of a Wet FGD Additive for Enhanced Mercury Control - Pilot-Scale Test Results

    SciTech Connect (OSTI)

    Gary M. Blythe

    2006-03-01

    This Topical Report summarizes progress on Cooperative Agreement DE-FC26-04NT42309, ''Field Testing of a Wet FGD Additive.'' The objective of the project is to demonstrate the use of a flue gas desulfurization (FGD) additive, Degussa Corporation's TMT-15, to prevent the re-emission of elemental mercury (Hg{sup 0}) in flue gas exiting wet FGD systems on coal-fired boilers. Furthermore, the project intends to demonstrate that the additive can be used to precipitate most of the mercury (Hg) removed in the wet FGD system as a fine TMT salt that can be separated from the FGD liquor and bulk solid byproducts for separate disposal. The project will conduct pilot- and full-scale tests of the TMT-15 additive in wet FGD absorbers. The tests are intended to determine the additive dosage required to prevent Hg{sup 0} re-emission and to separate mercury from the normal FGD byproducts for three coal types: a Texas lignite/Powder River Basin (PRB) coal blend, high-sulfur Eastern bituminous coal, and low-sulfur Eastern bituminous coal. The project team consists of URS Group, Inc., EPRI, TXU Generation Company LP, Southern Company, and Degussa Corporation. TXU Generation has provided the Texas lignite/PRB co-fired test site for pilot FGD tests, Monticello Steam Electric Station Unit 3. Southern Company is providing the low-sulfur Eastern bituminous coal host site for wet scrubbing tests, as well as the pilot- and full-scale jet bubbling reactor (JBR) FGD systems to be tested. A third utility, to be named later, will provide the high-sulfur Eastern bituminous coal full-scale FGD test site. Degussa Corporation is providing the TMT-15 additive and technical support to the test program. The project is being conducted in six tasks. Of the six project tasks, Task 1 involves project planning and Task 6 involves management and reporting. The other four tasks involve field testing on FGD systems, either at pilot or full scale. The four tasks include: Task 2 - Pilot Additive Testing in

  11. FERMI RULES OUT THE INVERSE COMPTON/CMB MODEL FOR THE LARGE-SCALE JET X-RAY EMISSION OF 3C 273

    SciTech Connect (OSTI)

    Meyer, Eileen T.; Georganopoulos, Markos

    2014-01-10

    The X-ray emission mechanism in large-scale jets of powerful radio quasars has been a source of debate in recent years, with two competing interpretations: either the X-rays are of synchrotron origin, arising from a different electron energy distribution than the one producing the radio to optical synchrotron component, or they are due to inverse Compton scattering of cosmic microwave background photons (IC/CMB) by relativistic electrons in a powerful relativistic jet with bulk Lorentz factor Γ ∼ 10-20. These two models imply radically different conditions in the large-scale jet in terms of jet speed, kinetic power, and maximum energy of the particle acceleration mechanism, with important implications for the impact of the jet on the large-scale environment. A large part of the X-ray origin debate has centered on the well-studied source 3C 273. Here we present new observations from Fermi which put an upper limit on the gamma-ray flux from the large-scale jet of 3C 273 that violates, at a confidence greater than 99.9%, the flux expected from the IC/CMB X-ray model found by extrapolation of the UV to X-ray spectrum of knot A, thus ruling out the IC/CMB interpretation entirely for this source when combined with previous work. Further, this upper limit from Fermi puts a limit on the Doppler beaming factor of δ < 9, assuming equipartition fields, and possibly as low as δ < 5, assuming no major deceleration of the jet from knots A through D1.

  12. Field Testing of Nano-PCM Enhanced Building Envelope Components

    SciTech Connect (OSTI)

    Biswas, Kaushik; Childs, Phillip W; Atchley, Jerald Allen

    2013-08-01

    The U.S. Department of Energy's (DOE) Building Technologies Program's goal of developing high-performance, energy-efficient buildings will require more cost-effective, durable, energy-efficient building envelopes. Forty-eight percent of residential end-use energy consumption is spent on space heating and air conditioning. Reducing envelope-generated heating and cooling loads through application of phase change material (PCM)-enhanced envelope components can help maximize the energy efficiency of buildings. Field testing of prototype envelope components is an important step in estimating their energy benefits. An innovative phase change material (nano-PCM) was developed with PCM encapsulated in expanded graphite (interconnected) nanosheets, which is highly conducive to enhanced thermal storage and energy distribution, and is shape-stable for convenient incorporation into lightweight building components. During 2012, two test walls with cellulose cavity insulation and prototype PCM-enhanced interior wallboards were installed in a natural exposure test (NET) facility at Charleston, SC. The first test wall was divided into four sections, separated by wood studs and thin layers of foam insulation. Two sections contained nano-PCM-enhanced wallboards: one was a three-layer structure, in which nano-PCM was sandwiched between two gypsum boards, and the other had PCM dispersed homogeneously throughout graphite nanosheet-enhanced gypsum board. The second test wall also contained two sections with interior PCM wallboards; one contained nano-PCM dispersed homogeneously in gypsum, and the other was gypsum board containing a commercial microencapsulated PCM (MEPCM) for comparison. Each test wall contained a section covered with gypsum board on the interior side, which served as a control or baseline for evaluation of the PCM wallboards. The walls were instrumented with arrays of thermocouples and heat flux transducers. Further, numerical modeling of

  13. Measurement of the large-scale anisotropy of the cosmic background radiation at 3mm

    SciTech Connect (OSTI)

    Epstein, G.L.

    1983-12-01

    A balloon-borne differential radiometer has measured the large-scale anisotropy of the cosmic background radiation (CBR) with high sensitivity. The antenna temperature dipole anisotropy at 90 GHz (3 mm wavelength) is 2.82 ± 0.19 mK, corresponding to a thermodynamic anisotropy of 3.48 mK for a 2.7 K blackbody CBR. The dipole direction, 11.3 ± 0.1 hours right ascension and -5.7° ± 1.8° declination, agrees well with measurements at other frequencies. Calibration error dominates the magnitude uncertainty, with statistical errors on the dipole terms being under 0.1 mK. No significant quadrupole power is found, placing a 90% confidence-level upper limit of 0.27 mK on the RMS thermodynamic quadrupole anisotropy. 22 figures, 17 tables.
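
    The two quoted amplitudes are consistent with the standard Planck correction relating antenna temperature to thermodynamic temperature for a blackbody, dT_thermo = dT_ant (e^x - 1)^2 / (x^2 e^x) with x = h*nu/(k*T). The following back-of-envelope check uses only values from the abstract and physical constants.

```python
import math

# Antenna-to-thermodynamic temperature conversion for a blackbody CBR:
# dT_thermo = dT_ant * (e^x - 1)^2 / (x^2 * e^x),  x = h*nu / (k*T)

h = 6.62607015e-34   # Planck constant, J s
k = 1.380649e-23     # Boltzmann constant, J/K
nu = 90e9            # Hz (3 mm wavelength)
T = 2.7              # K, blackbody CBR temperature from the abstract
dT_antenna = 2.82    # mK, antenna temperature dipole from the abstract

x = h * nu / (k * T)
dT_thermo = dT_antenna * (math.expm1(x) ** 2) / (x ** 2 * math.exp(x))
# dT_thermo comes out near the 3.48 mK quoted in the abstract
```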

  14. Detecting and mitigating abnormal events in large scale networks: budget constrained placement on smart grids

    SciTech Connect (OSTI)

    Santhi, Nandakishore; Pan, Feng

    2010-10-19

    Several scenarios exist in the modern interconnected world which call for an efficient network interdiction algorithm. Applications are varied, including various monitoring and load shedding applications on large smart energy grids, computer network security, preventing the spread of Internet worms and malware, policing international smuggling networks, and controlling the spread of diseases. In this paper we consider some natural network optimization questions related to the budget constrained interdiction problem over general graphs, specifically focusing on the sensor/switch placement problem for large-scale energy grids. Many of these questions turn out to be computationally hard to tackle. We present a particular form of the interdiction question which is practically relevant and which we show as computationally tractable. A polynomial-time algorithm will be presented for solving this problem.
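
    The record does not reproduce the authors' tractable formulation, so as an illustrative stand-in here is a greedy budget-constrained sensor placement: repeatedly place a sensor at the node that monitors the most not-yet-monitored edges until the budget runs out. Greedy selection is a standard heuristic for such coverage problems, not the algorithm presented in the paper.

```python
# Greedy budget-constrained placement sketch: each sensor placed at a
# node "covers" all edges incident to that node.

def greedy_placement(edges, budget):
    uncovered = set(edges)
    placed = []
    while budget > 0 and uncovered:
        # count uncovered edges incident to each node
        gain = {}
        for u, v in uncovered:
            gain[u] = gain.get(u, 0) + 1
            gain[v] = gain.get(v, 0) + 1
        node = max(gain, key=lambda n: gain[n])
        placed.append(node)
        uncovered = {e for e in uncovered if node not in e}
        budget -= 1
    return placed, uncovered

# toy grid-like graph: node 1 is the natural first pick (3 incident edges)
edges = [(1, 2), (1, 3), (1, 4), (2, 3), (4, 5)]
placed, left = greedy_placement(edges, budget=2)
```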

  15. Infrared spectroscopy of large scale single layer graphene on self assembled organic monolayer

    SciTech Connect (OSTI)

    Woo Kim, Nak; Youn Kim, Joo; Lee, Chul; Choi, E. J.; Jin Kim, Sang; Hee Hong, Byung

    2014-01-27

    We study the effect of a self-assembled monolayer (SAM) organic molecule substrate on large-scale single-layer graphene using infrared transmission measurements on Graphene/SAM/SiO{sub 2}/Si composite samples. From the Drude weights of the chemically inert CH{sub 3}-SAM, the electron-donating NH{sub 2}-SAM, and the SAM-less graphene, we separately determine the carrier density doped into graphene by three sources: the SiO{sub 2} substrate, gas adsorption, and the functional group of the SAMs. The SAM treatment leads to a low carrier density, N ∼ 4 × 10{sup 11} cm{sup −2}, by blocking the dominant SiO{sub 2}-driven doping. Carrier scattering increases, rather than decreases, with the SAM treatment; nevertheless, the transport mobility is improved due to the reduced carrier doping.

  16. Efficient preconditioning of the electronic structure problem in large scale ab initio molecular dynamics simulations

    SciTech Connect (OSTI)

    Schiffmann, Florian; VandeVondele, Joost

    2015-06-28

    We present an improved preconditioning scheme for electronic structure calculations based on the orbital transformation method. First, a preconditioner is developed which includes information from the full Kohn-Sham matrix but avoids computationally demanding diagonalisation steps in its construction. This reduces the computational cost of its construction, eliminating a bottleneck in large scale simulations, while maintaining rapid convergence. In addition, a modified form of Hotelling’s iterative inversion is introduced to replace the exact inversion of the preconditioner matrix. This method is highly effective during molecular dynamics (MD), as the solution obtained in earlier MD steps is a suitable initial guess. Filtering small elements during sparse matrix multiplication leads to linear scaling inversion, while retaining robustness, already for relatively small systems. For system sizes ranging from a few hundred to a few thousand atoms, which are typical for many practical applications, the improvements to the algorithm lead to a 2-5 fold speedup per MD step.
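
    The modified Hotelling inversion the abstract refers to is, in its classical form, the Newton-Schulz iteration X_{k+1} = X_k (2I - A X_k), which converges quadratically to A^{-1} whenever ||I - A X_0|| < 1. The dense sketch below uses the standard scaled-transpose starting guess; in the authors' MD setting the previous step's preconditioner inverse would serve as the initial guess, and sparse multiplications with filtering would replace the dense products.

```python
import numpy as np

# Hotelling / Newton-Schulz iterative matrix inversion:
#   X_{k+1} = X_k (2I - A X_k)
# The starting guess X0 = A^T / (||A||_1 * ||A||_inf) guarantees convergence.

def hotelling_inverse(A, iters=30):
    n = A.shape[0]
    X = A.T / (np.linalg.norm(A, 1) * np.linalg.norm(A, np.inf))
    I = np.eye(n)
    for _ in range(iters):
        X = X @ (2 * I - A @ X)
    return X

A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 5.0]])
X = hotelling_inverse(A)
err = np.max(np.abs(A @ X - np.eye(3)))   # should be near machine precision
```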

  17. Large-scale production of anhydrous nitric acid and nitric acid solutions of dinitrogen pentoxide

    DOE Patents [OSTI]

    Harrar, Jackson E.; Quong, Roland; Rigdon, Lester P.; McGuire, Raymond R.

    2001-01-01

    A method and apparatus are disclosed for large-scale, electrochemical production of anhydrous nitric acid and N{sub 2}O{sub 5}. The method includes oxidizing a solution of N{sub 2}O{sub 4}/aqueous HNO{sub 3} at the anode, while reducing aqueous HNO{sub 3} at the cathode, in a flow electrolyzer constructed of special materials. N{sub 2}O{sub 4} is produced at the cathode and may be separated and recycled as a feedstock for use in the anolyte. The process is controlled by regulating the electrolysis current until the desired products are obtained. The chemical compositions of the anolyte and catholyte are monitored by measurement of the solution density and the concentrations of N{sub 2}O{sub 4}.

  18. Large-Scale Computational Screening of Zeolites for Ethane/Ethene Separation

    SciTech Connect (OSTI)

    Kim, J; Lin, LC; Martin, RL; Swisher, JA; Haranczyk, M; Smit, B

    2012-08-14

    A large-scale computational screening of thirty thousand zeolite structures was conducted to find optimal structures for the separation of ethane/ethene mixtures. Efficient grand canonical Monte Carlo (GCMC) simulations were performed with graphics processing units (GPUs) to obtain pure-component adsorption isotherms for both ethane and ethene. We have utilized the ideal adsorbed solution theory (IAST) to obtain the mixture isotherms, which were used to evaluate the performance of each zeolite structure based on its working capacity and selectivity. In our analysis, we have determined that specific arrangements of zeolite framework atoms create sites for the preferential adsorption of ethane over ethene. The majority of optimal separation materials can be identified by utilizing this knowledge, and screening structures for the presence of this feature will enable the efficient selection of promising candidate materials for ethane/ethene separation prior to performing molecular simulations.
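
    The two screening metrics named in the abstract can be illustrated from single-site Langmuir fits of the pure-component isotherms: a working capacity (adsorption-pressure loading minus desorption-pressure loading) and an ideal Henry-regime selectivity. The Langmuir parameters below are invented for the sketch; the study derived its isotherms from GPU-accelerated GCMC and combined them with IAST rather than this simple ratio.

```python
# Screening metrics from single-site Langmuir isotherm fits (illustrative).

def langmuir(p, q_sat, b):
    """Loading at pressure p (bar) for a single-site Langmuir isotherm."""
    return q_sat * b * p / (1.0 + b * p)

# Hypothetical fitted parameters (q_sat in mol/kg, b in 1/bar) for one zeolite
ethane = dict(q_sat=2.0, b=4.0)
ethene = dict(q_sat=2.1, b=1.5)

p_ads, p_des = 1.0, 0.1   # adsorption / desorption pressures, bar
working_capacity = langmuir(p_ads, **ethane) - langmuir(p_des, **ethane)
# ideal (Henry-regime) selectivity for ethane over ethene
selectivity = ethane["q_sat"] * ethane["b"] / (ethene["q_sat"] * ethene["b"])
```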

  19. Molecular Dynamics Simulations from SNL's Large-scale Atomic/Molecular Massively Parallel Simulator (LAMMPS)

    DOE Data Explorer [Office of Scientific and Technical Information (OSTI)]

    Plimpton, Steve; Thompson, Aidan; Crozier, Paul

    LAMMPS (http://lammps.sandia.gov/index.html) stands for Large-scale Atomic/Molecular Massively Parallel Simulator and is a code that can be used to model atoms or, as the LAMMPS website says, to act as a parallel particle simulator at the atomic, meso, or continuum scale. This Sandia-based website provides a long list of animations from large simulations. These were created using different visualization packages to read LAMMPS output, and each one provides the name of the PI and a brief description of the work done or the visualization package used. See also the static images produced from simulations at http://lammps.sandia.gov/pictures.html. The foundation paper for LAMMPS is: S. Plimpton, "Fast Parallel Algorithms for Short-Range Molecular Dynamics," J Comp Phys, 117, 1-19 (1995), but the website also lists other papers describing contributions to LAMMPS over the years.

  20. Overview of large scale experiments performed within the LBB project in the Czech Republic

    SciTech Connect (OSTI)

    Kadecka, P.; Lauerova, D.

    1997-04-01

    During several recent years, NRI Rez has been performing LBB analyses of safety-significant primary circuit pipings of NPPs in the Czech and Slovak Republics. The analyses covered NPPs with WWER 440 Type 230 and 213 reactors and WWER 1000 Type 320 reactors. Within the relevant LBB projects, undertaken with the aim of proving that the LBB requirements are fulfilled, a series of large-scale experiments was performed. The goal of these experiments was to verify the properties of the selected components and to prove the quality and/or conservatism of the assessments used in the LBB analyses. In this poster, a brief overview of the experiments performed in the Czech Republic under the guidance of NRI Rez is presented.

  1. Solving Large Scale Nonlinear Eigenvalue Problem in Next-Generation Accelerator Design

    SciTech Connect (OSTI)

    Liao, Ben-Shan; Bai, Zhaojun; Lee, Lie-Quan; Ko, Kwok; /SLAC

    2006-09-28

    A number of numerical methods, including inverse iteration, the method of successive linear problems, and the nonlinear Arnoldi algorithm, are studied in this paper to solve a large-scale nonlinear eigenvalue problem arising from the finite element analysis of resonant frequencies and external Q{sub e} values of a waveguide-loaded cavity in next-generation accelerator design. The authors present a nonlinear Rayleigh-Ritz iterative projection algorithm, NRRIT for short, and demonstrate that it is the most promising approach for a model-scale cavity design. The NRRIT algorithm is an extension of the nonlinear Arnoldi algorithm due to Voss. Computational challenges of solving such a nonlinear eigenvalue problem for a full-scale cavity design are outlined.
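
    A nonlinear eigenvalue problem seeks lambda and x with T(lambda) x = 0, where the matrix itself depends on the eigenvalue. The toy sketch below, with invented matrices, solves a tiny quadratic pencil T(lam) = K - lam*M + lam^2*C by running the secant method on det T(lam); real cavity problems are far too large for determinants and use projection methods such as NRRIT.

```python
import numpy as np

# Toy nonlinear eigenvalue problem T(lam) x = 0 with a quadratic pencil,
# solved by root-finding on det T(lam). Matrices are invented.

K = np.array([[2.0, -1.0], [-1.0, 2.0]])
M = np.eye(2)
C = 0.1 * np.eye(2)

def det_T(lam):
    return np.linalg.det(K - lam * M + lam ** 2 * C)

def secant(f, x0, x1, tol=1e-12, maxit=100):
    for _ in range(maxit):
        f0, f1 = f(x0), f(x1)
        if f1 == f0:
            break
        x0, x1 = x1, x1 - f1 * (x1 - x0) / (f1 - f0)
        if abs(x1 - x0) < tol:
            break
    return x1

lam = secant(det_T, 0.5, 1.5)
# analytically, the smallest real root is (1 - sqrt(0.6)) / 0.2 ~= 1.1270
```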

  2. Drivers and barriers to e-invoicing adoption in Greek large scale manufacturing industries

    SciTech Connect (OSTI)

    Marinagi, Catherine E-mail: ptrivel@yahoo.com Trivellas, Panagiotis E-mail: ptrivel@yahoo.com Reklitis, Panagiotis E-mail: ptrivel@yahoo.com; Skourlas, Christos

    2015-02-09

    This paper investigates the drivers and barriers that large-scale Greek manufacturing industries experience in adopting electronic invoices (e-invoices), based on three case studies of organizations with an international presence in many countries. The study focuses on the drivers that may increase the adoption and use of e-invoicing, including customers' demand for e-invoices and sufficient know-how and adoption of e-invoicing within organizations. In addition, the study reveals important barriers that prevent the expansion of e-invoicing, such as suppliers' reluctance to implement e-invoicing and incompatibilities in IT infrastructure. Other issues examined by this study include the observed benefits of e-invoicing implementation and the financial priorities of the organizations assumed to be supported by e-invoicing.

  3. Survey and analysis of selected jointly owned large-scale electric utility storage projects

    SciTech Connect (OSTI)

    Not Available

    1982-05-01

    The objective of this study was to examine and document the issues surrounding the curtailment in commercialization of large-scale electric storage projects. It was sensed that if these issues could be uncovered, then efforts might be directed toward clearing away these barriers and allowing these technologies to penetrate the market to their maximum potential. Joint ownership of these projects was seen as a possible solution to overcoming the major barriers, particularly economic barriers, of commercialization. Therefore, discussions with partners involved in four pumped storage projects took place to identify the difficulties and advantages of joint-ownership agreements. The four plants surveyed included Yards Creek (Public Service Electric and Gas and Jersey Central Power and Light); Seneca (Pennsylvania Electric and Cleveland Electric Illuminating Company); Ludington (Consumers Power and Detroit Edison); and Bath County (Virginia Electric Power Company and Allegheny Power System, Inc.). Also investigated were several pumped storage projects which were never completed. These included Blue Ridge (American Electric Power); Cornwall (Consolidated Edison); Davis (Allegheny Power System, Inc.); and Kittatinny Mountain (General Public Utilities). Institutional, regulatory, technical, environmental, economic, and special issues at each project were investigated, and the conclusions relative to each issue are presented. The major barriers preventing the growth of energy storage are the high cost of these systems in times of extremely high cost of capital, diminishing load growth, and regulatory influences which will not allow the building of large-scale storage systems due to environmental objections or other reasons. However, the future for energy storage looks viable despite difficult economic times for the utility industry. Joint ownership can ease some of the economic hardships for utilities which demonstrate a need for energy storage.

  4. LARGE-SCALE HYDROGEN PRODUCTION FROM NUCLEAR ENERGY USING HIGH TEMPERATURE ELECTROLYSIS

    SciTech Connect (OSTI)

    James E. O'Brien

    2010-08-01

    Hydrogen can be produced from water splitting with relatively high efficiency using high-temperature electrolysis. This technology makes use of solid-oxide cells, running in the electrolysis mode to produce hydrogen from steam, while consuming electricity and high-temperature process heat. When coupled to an advanced high temperature nuclear reactor, the overall thermal-to-hydrogen efficiency for high-temperature electrolysis can be as high as 50%, which is about double the overall efficiency of conventional low-temperature electrolysis. Current large-scale hydrogen production is based almost exclusively on steam reforming of methane, a method that consumes a precious fossil fuel while emitting carbon dioxide to the atmosphere. Demand for hydrogen is increasing rapidly for refining of increasingly low-grade petroleum resources, such as the Athabasca oil sands and for ammonia-based fertilizer production. Large quantities of hydrogen are also required for carbon-efficient conversion of biomass to liquid fuels. With supplemental nuclear hydrogen, almost all of the carbon in the biomass can be converted to liquid fuels in a nearly carbon-neutral fashion. Ultimately, hydrogen may be employed as a direct transportation fuel in a hydrogen economy. The large quantity of hydrogen that would be required for this concept should be produced without consuming fossil fuels or emitting greenhouse gases. An overview of the high-temperature electrolysis technology will be presented, including basic theory, modeling, and experimental activities. Modeling activities include both computational fluid dynamics and large-scale systems analysis. We have also demonstrated high-temperature electrolysis in our laboratory at the 15 kW scale, achieving a hydrogen production rate in excess of 5500 L/hr.
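
    The laboratory figures can be checked with back-of-envelope arithmetic under stated assumptions: the 5500 L/hr of hydrogen is counted at 0 degrees C and 1 atm (22.414 L/mol), and the chemical energy is taken as hydrogen's lower heating value (241.8 kJ/mol). On that basis the chemical power stored in the product hydrogen slightly exceeds the 15 kW electrical input, which is exactly the point of high-temperature electrolysis: part of the energy enters the stack as process heat rather than as electricity.

```python
# Back-of-envelope check of the 15 kW / 5500 L/hr laboratory figures.
# Assumptions (not stated in the abstract): hydrogen volume at STP
# (0 C, 1 atm) and the lower heating value of H2.

MOLAR_VOLUME = 22.414      # L/mol at 0 C, 1 atm (assumed basis)
LHV_H2 = 241.8             # kJ/mol, lower heating value of hydrogen

rate_mol_per_s = 5500.0 / MOLAR_VOLUME / 3600.0
chemical_power_kw = rate_mol_per_s * LHV_H2   # kJ/s = kW
electric_power_kw = 15.0
ratio = chemical_power_kw / electric_power_kw  # > 1 thanks to heat input
```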

  5. Observed large-scale structures and diabatic heating and drying profiles during TWP-ICE

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Xie, Shaocheng; Hume, Timothy; Jakob, Christian; Klein, Stephen A.; McCoy, Renata B.; Zhang, Minghua

    2010-01-01

    This study documents the characteristics of the large-scale structures and diabatic heating and drying profiles observed during the Tropical Warm Pool–International Cloud Experiment (TWP-ICE), which was conducted in January–February 2006 in Darwin during the northern Australian monsoon season. The examined profiles exhibit significant variations between four distinct synoptic regimes that were observed during the experiment. The active monsoon period is characterized by strong upward motion and large advective cooling and moistening throughout the entire troposphere, while the suppressed and clear periods are dominated by moderate midlevel subsidence and significant low- to midlevel drying through horizontal advection. The midlevel subsidence and horizontal dry advection are largely responsible for the dry midtroposphere observed during the suppressed period and limit the growth of clouds to low levels. During the break period, upward motion and advective cooling and moistening located primarily at midlevels dominate together with weak advective warming and drying (mainly from horizontal advection) at low levels. The variations of the diabatic heating and drying profiles with the different regimes are closely associated with differences in the large-scale structures, cloud types, and rainfall rates between the regimes. Strong diabatic heating and drying are seen throughout the troposphere during the active monsoon period while they are moderate and only occur above 700 hPa during the break period. The diabatic heating and drying tend to have their maxima at low levels during the suppressed periods. Furthermore, the diurnal variations of these structures between monsoon systems, continental/coastal, and tropical inland-initiated convective systems are also examined.

  6. LARGE-SCALE MAGNETIC HELICITY FLUXES ESTIMATED FROM MDI MAGNETIC SYNOPTIC CHARTS OVER THE SOLAR CYCLE 23

    SciTech Connect (OSTI)

    Yang Shangbin; Zhang Hongqi

    2012-10-10

    To investigate the characteristics of large-scale and long-term evolution of magnetic helicity with solar cycles, we use the method of Local Correlation Tracking to estimate the magnetic helicity evolution over solar cycle 23 from 1996 to 2009 using 795 MDI magnetic synoptic charts. The main results are as follows: the hemispheric helicity rule still holds in general, i.e., the large-scale negative (positive) magnetic helicity dominates the northern (southern) hemisphere. However, the large-scale magnetic helicity fluxes show the same sign in both hemispheres around 2001 and 2005. The global, large-scale magnetic helicity flux over the solar disk changes from a negative value at the beginning of solar cycle 23 to a positive value at the end of the cycle, while the net accumulated magnetic helicity is negative in the period between 1996 and 2009.

  7. EPRI-DOE Joint Report on Fossil Fleet Transition with Fuel Changes and Large Scale Variable Renewable Integration Now Available

    Broader source: Energy.gov [DOE]

    A new report “Fossil Fleet Transition with Fuel Changes and Large Scale Variable Renewable Integration” from the Electric Power Research Institute (EPRI) and jointly funded by the Offices of...

  8. ARRA-Multi-Level Energy Storage and Controls for Large-Scale Wind Energy Integration

    SciTech Connect (OSTI)

    David Wenzhong Gao

    2012-09-30

An intelligent controller that increases battery life within hybrid energy storage systems for wind applications was developed. Comprehensive studies have been conducted and simulation results are analyzed. A permanent magnet synchronous generator, coupled with a variable-speed wind turbine, is connected to a power grid (14-bus system). A rectifier, a DC-DC converter, and an inverter are used to provide a complete model of the wind system. An Energy Storage System (ESS) is connected to a DC-link through a DC-DC converter. An intelligent controller is applied to the DC-DC converter to help the Voltage Source Inverter (VSI) regulate output power and to control the operation of the battery and supercapacitor. This ensures a longer lifetime for the batteries. The detailed model is simulated in PSCAD/EMTP. Additionally, an economic analysis has been performed for different methods that can reduce wind power output fluctuation: wind power curtailment, dump loads, a battery energy storage system (BESS), and a hybrid energy storage system (HESS). The results show that application of a single advanced HESS can save more money for wind turbine owners. Generally, the income would be the same for most methods, because the wind does not change and maximum power point tracking can be applied to most systems; the cost is therefore the key point. For short-term applications and small wind turbines, the BESS is the cheapest applicable method, while for large-scale wind turbines and wind farms the advanced HESS is the best method to reduce power fluctuation. The key outcomes of this project include a new intelligent controller that can reduce the energy exchanged between the battery and the DC-link, reduce charging/discharging cycles, reduce depth of discharge, increase the time interval between charges/discharges, and lower battery temperature. This improves the overall lifetime of battery energy storage. 
Additionally, a new design method based on probability helps optimize the
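The short-horizon versus long-horizon trade-off described in this abstract can be sketched with a toy lifetime-cost comparison. Every capital and annual cost below is a hypothetical placeholder, not a figure from the project:

```python
# Toy cost comparison of fluctuation-reduction methods (all values invented).
options = {
    "curtailment":   {"capital": 0.0,   "annual": 120.0},  # lost revenue, $k/yr
    "dump_load":     {"capital": 50.0,  "annual": 100.0},
    "BESS":          {"capital": 120.0, "annual": 40.0},   # battery wear/replacement
    "advanced_HESS": {"capital": 320.0, "annual": 15.0},   # fewer, shallower cycles
}

def total_cost(option: dict, years: int) -> float:
    """Undiscounted lifetime cost in $k: capital plus recurring annual cost."""
    return option["capital"] + option["annual"] * years

# With income roughly fixed across methods, the cheapest method wins:
# a capital-light BESS over a short horizon, a low-operating-cost HESS
# over a long one, matching the abstract's qualitative conclusion.
best_short = min(options, key=lambda name: total_cost(options[name], 3))
best_long = min(options, key=lambda name: total_cost(options[name], 20))
print(best_short, best_long)
```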

  9. Near-field modeling in Frenchman Flat, Nevada Test Site

    SciTech Connect (OSTI)

    Pohlmann, K.; Shirley, C.; Andricevic, R.

    1996-12-01

The US Department of Energy (DOE) is investigating the effects of nuclear testing in underground test areas (the UGTA program) at the Nevada Test Site. The principal focus of the UGTA program is to better understand and define subsurface radionuclide migration. The study described in this report focuses on the development of tools for generating maps of hydrogeologic characteristics of subsurface Tertiary volcanic units at the Frenchman Flat Corrective Action Unit (CAU). The process includes three steps. The first step generates three-dimensional maps of the geologic structure of subsurface volcanic units using geophysical logs to distinguish between two classes: densely welded tuff and nonwelded tuff. The second step generates three-dimensional maps of hydraulic conductivity utilizing the spatial distribution of the two geologic classes obtained in the first step. Each class is described by a correlation structure based on existing hydraulic conductivity data and conditioned on the generated spatial location of each class. The final step demonstrates the use of the hydraulic conductivity maps for modeling groundwater flow and radionuclide transport in volcanic tuffs from an underground nuclear test at the Frenchman Flat CAU. The results indicate that the majority of groundwater flow through the volcanic section occurs through zones of densely welded tuff, where connected fractures provide the transport pathway. Migration rates range from near zero to approximately 4 m/yr, with a mean rate of 0.68 m/yr. This report presents the results of work under the FY96 Near-Field Modeling task of the UGTA program.
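The three-step mapping workflow can be caricatured in a few lines of code. The class field, lognormal conductivity parameters, hydraulic gradient, and porosity below are illustrative assumptions, not the report's calibrated Frenchman Flat values:

```python
import numpy as np

rng = np.random.default_rng(0)

# Step 1 (sketch): a categorical field of two classes along a column,
# 0 = nonwelded tuff, 1 = densely welded tuff. In the study these come
# from geophysical logs; here they are random placeholders.
n_cells = 1000
geology = rng.integers(0, 2, size=n_cells)

# Step 2 (sketch): assign hydraulic conductivity (m/d) per class, each
# class drawn from its own lognormal distribution (invented parameters).
log_mean = {0: -4.0, 1: -1.0}   # nonwelded vs. fractured welded tuff
log_std = {0: 0.5, 1: 0.8}
K = np.where(
    geology == 1,
    10.0 ** rng.normal(log_mean[1], log_std[1], n_cells),
    10.0 ** rng.normal(log_mean[0], log_std[0], n_cells),
)

# Step 3 (sketch): a crude Darcy velocity per cell, v = K * i / n_e,
# with an assumed gradient i and effective porosity n_e.
gradient, porosity = 1e-3, 0.01
velocity = K * gradient / porosity   # m/d

print(f"mean velocity (welded): {velocity[geology == 1].mean():.3g} m/d")
print(f"mean velocity (nonwelded): {velocity[geology == 0].mean():.3g} m/d")
```

The welded-tuff cells dominate the flow in this sketch for the same reason the report cites: their conductivity distribution sits orders of magnitude higher.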

  10. Lithium bromide absorption chiller passes gas conditioning field test

    SciTech Connect (OSTI)

    Lane, M.J.; Huey, M.A.

    1995-07-31

A lithium bromide absorption chiller has been successfully used to provide refrigeration for field conditioning of natural gas. The intent of the study was to identify a process that could provide the moderate level of refrigeration necessary to meet the quality restrictions imposed by natural-gas transmission companies, minimize the initial investment risk, and reduce operating expenses. The technology in the test proved less expensive to operate than a propane refrigeration plant. Volatile product prices and changes in natural-gas transmission requirements have created the need for an alternative to conventional methods of natural-gas processing. The paper describes the problems with the accumulation of condensed liquids in pipelines, gas conditioning, the lithium bromide absorption cycle, economics, performance, and operating and maintenance costs.

  11. Advanced Utility Mercury-Sorbent Field-Testing Program

    SciTech Connect (OSTI)

    Ronald Landreth

    2007-12-31

This report summarizes the work conducted from September 1, 2003 through December 31, 2007 on the project entitled Advanced Utility Mercury-Sorbent Field-Testing Program. The project covers testing at the Detroit Edison St. Clair Plant and the Duke Power Cliffside and Buck Stations. The St. Clair Plant burned a blend of subbituminous and bituminous coal and controlled particulate emissions by means of a cold-side ESP. The Duke Power Stations burned bituminous coals and controlled their particulate emissions by means of hot-side ESPs. The testing at the Detroit Edison St. Clair Plant demonstrated that mercury sorbents can achieve high mercury removal rates at low injection rates in facilities that burn subbituminous coal. A mercury removal rate of 94% was achieved at an injection rate of 3 lb/MMacf over the thirty-day long-term test. Prior to this test, it was believed that mercury in flue gas of this type would be the most difficult to capture; this proved not to be the case. The testing at the two Duke Power Stations proved that carbon-based mercury sorbents can be used to control mercury emissions from boilers with hot-side ESPs. It was known that plain PACs have no mercury capacity at elevated temperatures but that brominated B-PAC does. The mercury removal rate varies with operation, but it appears that removal rates equal to or greater than 50% are achievable in facilities equipped with hot-side ESPs. As part of the program, both sorbent injection equipment and sorbent production equipment were acquired and operated; this equipment performed very well throughout the program. In addition, mercury instruments were acquired for the program. These instruments worked well in the flue gas at the St. Clair Plant but not as well in the flue gas at the Duke Power Stations; it is believed that the larger amount of oxidized mercury at Duke Power explains the difference in instrument performance. Much of the equipment was

  12. Vadose Zone Transport Field Study: Detailed Test Plan for Simulated Leak Tests

    SciTech Connect (OSTI)

    Ward, Anderson L.; Gee, Glendon W.

    2000-06-23

This report describes controlled transport experiments to be conducted at well-instrumented field sites during FY 2000 in support of DOE's Vadose Zone Transport Field Study (VZTFS). The VZTFS supports the Groundwater/Vadose Zone Integration Project Science and Technology Initiative. The field tests will improve understanding of field-scale transport and lead to the development or identification of efficient and cost-effective characterization methods capable of capturing the extent of contaminant plumes using existing steel-cased boreholes. Specific objectives are to 1) identify mechanisms controlling transport processes in soils typical of the hydrogeologic conditions of Hanford's waste disposal sites; 2) reduce uncertainty in conceptual models; 3) develop a detailed and accurate database of hydraulic and transport parameters for validation of three-dimensional numerical models; and 4) identify and evaluate advanced, cost-effective characterization methods with the potential to assess changing conditions in the vadose zone, particularly as surrogates of currently undetectable high-risk contaminants. Pacific Northwest National Laboratory (PNNL) manages the VZTFS for DOE.

  13. SIMULTANEOUS OBSERVATIONS OF A LARGE-SCALE WAVE EVENT IN THE SOLAR ATMOSPHERE: FROM PHOTOSPHERE TO CORONA

    SciTech Connect (OSTI)

    Shen, Yuandeng; Liu, Yu

    2012-06-20

For the first time, we report a large-scale wave that was observed simultaneously in the photosphere, chromosphere, transition region, and low corona layers of the solar atmosphere. Using the high temporal and spatial resolution observations taken by the Solar Magnetic Activity Research Telescope at Hida Observatory and the Atmospheric Imaging Assembly (AIA) on board the Solar Dynamics Observatory, we find that the wave evolved synchronously at different heights of the solar atmosphere; it propagated at a speed of 605 km s⁻¹ and showed a significant deceleration (-424 m s⁻²) in the extreme-ultraviolet (EUV) observations. During the initial stage, the wave speed in the EUV observations was 1000 km s⁻¹, similar to those measured from the AIA 1700 Å (967 km s⁻¹) and 1600 Å (893 km s⁻¹) observations. The wave was reflected by a remote region with open fields, and a slower wave-like feature at a speed of 220 km s⁻¹ was also identified following the primary fast wave. In addition, a type-II radio burst was observed to be associated with the wave. We conclude that this wave was a fast magnetosonic shock wave, first driven by the associated coronal mass ejection and then propagating freely in the corona. As the shock wave propagated, its legs swept the solar surface and thereby produced the wave signatures observed in the lower layers of the solar atmosphere. The slower wave-like structure following the primary wave was probably caused by the reconfiguration of the low coronal magnetic fields, as predicted in the field-line stretching model.
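Initial speed and constant deceleration of this kind are typically obtained by fitting a quadratic to the wave front's distance-time track. A minimal sketch of such a fit on synthetic, noiseless data built from the values quoted above (1000 km/s initial speed, -424 m/s² deceleration):

```python
import numpy as np

# Synthetic front-distance track: d(t) = v0*t + 0.5*a*t^2, with the
# quoted kinematics assumed as ground truth (illustration only).
t = np.linspace(0, 600, 25)        # s
v0, a = 1000.0, -0.424             # km/s, km/s^2 (i.e., -424 m/s^2)
d = v0 * t + 0.5 * a * t**2        # km

# Quadratic fit d(t) = c2*t^2 + c1*t + c0 recovers the kinematics:
# initial speed = c1, acceleration = 2*c2.
c2, c1, c0 = np.polyfit(t, d, 2)
print(f"fitted initial speed: {c1:.1f} km/s")
print(f"fitted deceleration: {2 * c2 * 1000:.0f} m/s^2")
```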

  14. Results of field testing of waste forms using lysimeters

    SciTech Connect (OSTI)

    McConnell, J.W., Jr.; Rogers, R.D.

    1988-01-01

The purpose of the field testing task, using lysimeter arrays, is to expose samples of solidified resin waste to the actual physical, chemical, and microbiological conditions of a disposal environment. Wastes used in the experiment include a mixture of synthetic organic ion exchange resins and a mixture of organic exchange resins and an inorganic zeolite. Solidification agents used to produce the 4.8- by 7.6-cm cylindrical waste forms used in the study were Portland Type I-II cement and Dow vinyl ester-styrene. Seven of these waste forms were stacked end-to-end and inserted into each lysimeter to provide a 1-L volume. There are 10 lysimeters, 5 at ORNL and 5 at ANL-E. Lysimeters used in this study were designed to be self-contained units that will be disposed of at the termination of the 20-year study. Each is a 0.91- by 3.12-m right-circular cylinder divided into an upper compartment, which contains fill material, waste forms, and instrumentation, and an empty lower compartment, which collects leachate. Four lysimeters at each site are filled with soil, while a fifth (used as a control) is filled with inert silica sand. Instrumentation within each lysimeter includes porous-cup soil-water samplers and soil moisture/temperature probes. The probes are connected to an on-site data acquisition and storage system (DAS), which also collects data from a field meteorological station located at each site. 9 refs.

  15. Field Lysimeter Test Facility status report IV: FY 1993

    SciTech Connect (OSTI)

    Gee, G.W.; Felmy, D.G.; Ritter, J.C.; Campbell, M.D.; Downs, J.L.; Fayer, M.J.; Kirkham, R.R.; Link, S.O.

    1993-10-01

At the U.S. Department of Energy's Hanford Site near Richland, Washington, a unique facility, the Field Lysimeter Test Facility (FLTF), is used to measure drainage from and water storage in soil covers. Drainage has ranged from near zero to more than 50% of the applied water, with the amount depending on vegetative cover and soil type. Drainage occurred from lysimeters with coarse soils and gravel covers, but did not occur from capillary-barrier-type lysimeters (1.5 m of silt loam soil over coarse sands and gravels) except under the most extreme condition tested. For capillary barriers that were irrigated and kept vegetation-free (bare surface), no drainage occurred in 5 of the past 6 years. However, this past year (1992-1993) a record snowfall of 1,425 mm occurred, water storage in the irrigated, bare-surfaced capillary barriers exceeded 500 mm, and more than 30 mm drained from these barriers. In contrast, capillary barriers covered with native vegetation (i.e., shrubs and grasses) did not drain under any climatic condition (with or without irrigation). In FY 1994, the FLTF treatments will be increased from 11 to 17 with the addition of materials that will simulate portions of a prototype barrier planned for construction in 1994 at the Hanford Site. The 17 FLTF treatments are designed to test the expected range of surface soil, vegetation, and climatic conditions encountered at the Hanford Site and will assist in evaluating final surface barrier designs for a waste disposal facility.
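Drainage figures like these rest on simple water-balance bookkeeping, which a lysimeter makes directly measurable. A minimal sketch with invented annual totals (not FLTF measurements):

```python
# Annual water balance for a lysimeter: drainage D = P + I - ET - dS.
# All values below are made-up illustrative numbers in mm/yr.
precip = 160.0               # natural precipitation
irrigation = 320.0           # applied irrigation water
evapotranspiration = 410.0   # measured/estimated ET
storage_change = 40.0        # increase in stored soil water

drainage = precip + irrigation - evapotranspiration - storage_change
applied = precip + irrigation
print(f"drainage: {drainage:.0f} mm/yr "
      f"({100 * drainage / applied:.1f}% of applied water)")
```

Because the lysimeter captures drainage directly in its lower compartment, any one unknown term (often ET) can be solved for from the others.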

  16. Field testing advanced geothermal turbodrill (AGT). Phase 1 final report

    SciTech Connect (OSTI)

    Maurer, W.C.; Cohen, J.H.

    1999-06-01

Maurer Engineering developed special high-temperature geothermal turbodrills for LANL in the 1970s to overcome motor temperature limitations. These turbodrills were used to drill the directional portions of LANL's Hot Dry Rock geothermal wells at Fenton Hill, New Mexico. The Hot Dry Rock concept is to drill parallel inclined wells (35-degree inclination), hydraulically fracture between them, and then circulate cold water down one well and through the fractures, producing hot water from the second well. At the time LANL drilled the Fenton Hill wells, the LANL turbodrill was the only motor in the world that could drill at the high temperatures encountered in these wells. It was difficult to operate the turbodrills continuously at low speed due to their low torque output: they stalled frequently and could only be restarted by lifting the bit off bottom, which allowed the bit to rotate at very high speeds and, as a result, caused excessive wear in the bearings and on the gauge of insert roller bits. In 1998, Maurer Engineering developed an Advanced Geothermal Turbodrill (AGT) for the National Advanced Drilling and Excavation Technology (NADET) program at MIT by adding a planetary speed reducer to the LANL turbodrill to increase its torque and reduce its rotary speed. Drilling tests were conducted with the AGT using 12 1/2-inch insert roller bits in Texas Pink Granite. The tests were very successful: the AGT drilled 94 ft/hr in Texas Pink Granite compared to 45 ft/hr with the LANL turbodrill and 42 ft/hr with a rotary drill. Field tests are currently being planned in Mexico and in geothermal wells in California to demonstrate the ability of the AGT to increase drilling rates and reduce drilling costs.

  17. Impact of Large Scale Energy Efficiency Programs On Consumer Tariffs and Utility Finances in India

    SciTech Connect (OSTI)

    Abhyankar, Nikit; Phadke, Amol

    2011-01-20

Large-scale EE programs would modestly increase tariffs but significantly reduce consumers' electricity bills. However, the primary benefit of EE programs is a significant reduction in power shortages, which might make these programs politically acceptable even if tariffs increase. To increase political support, utilities could pursue programs that would result in minimal tariff increases. This can be achieved in four ways: (a) focus only on low-cost programs (such as replacing electric water heaters with gas water heaters); (b) sell power conserved through the EE program to the market at a price higher than the cost of peak power purchases; (c) focus on programs where a partial utility subsidy of incremental capital cost might work; and (d) increase the number of participating consumers by offering a basket of EE programs to fit all consumer subcategories and tariff tiers. Large-scale EE programs can result in consistently negative cash flows and significantly erode the utility's overall profitability. If the utility is facing shortages, the cash flow is very sensitive to the marginal tariff of the unmet demand. This will have an important bearing on the choice of EE programs in Indian states where low-paying rural and agricultural consumers form the majority of the unmet demand. These findings clearly call for a flexible, sustainable solution to the cash-flow management issue. One option is to include a mechanism like FAC in the utility incentive mechanism. Another sustainable solution might be to build the net program cost and revenue loss into the utility's revenue requirement, and thus into consumer tariffs, up front; however, this approach requires institutionalization of EE as a resource. Utility incentive mechanisms would be able to address the utility disincentive of forgone long-run return but have a minor impact on consumer benefits. 
Fundamentally, providing incentives for EE programs to make them comparable to supply-side investments is a way
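The negative-cash-flow problem described above can be made concrete with a back-of-envelope sketch; all figures are hypothetical, not values from the study:

```python
# Toy annual utility cash flow for an EE program: the utility forgoes
# retail revenue on saved energy and bears part of the program cost,
# while avoiding its marginal power-purchase cost. Invented numbers.
saved_mwh = 100_000.0        # annual energy savings from the program
retail_tariff = 60.0         # $/MWh of retail revenue forgone per saved MWh
power_cost = 45.0            # $/MWh of avoided power purchases
program_cost = 1_200_000.0   # $/yr of utility-borne program cost

revenue_loss = saved_mwh * retail_tariff
avoided_cost = saved_mwh * power_cost
net_cash_flow = avoided_cost - revenue_loss - program_cost
print(f"net utility cash flow: ${net_cash_flow:,.0f}/yr")
```

Whenever the retail margin forgone exceeds the avoided purchase cost, the cash flow is negative, which is why the abstract argues for recovering net program cost through the revenue requirement or an FAC-like mechanism.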

  18. X6.9-CLASS FLARE-INDUCED VERTICAL KINK OSCILLATIONS IN A LARGE-SCALE PLASMA CURTAIN AS OBSERVED BY THE SOLAR DYNAMICS OBSERVATORY/ATMOSPHERIC IMAGING ASSEMBLY

    SciTech Connect (OSTI)

    Srivastava, A. K. [Aryabhatta Research Institute of Observational Sciences (ARIES), Manora Peak, Nainital 263 002 (India); Goossens, M. [Centre for Mathematical Plasma Astrophysics, Department of Mathematics, KU Leuven, Celestijnenlaan 200B, B-3001 Leuven (Belgium)

    2013-11-01

We present rare observational evidence of vertical kink oscillations in a laminar and diffused large-scale plasma curtain as observed by the Atmospheric Imaging Assembly on board the Solar Dynamics Observatory. The X6.9-class flare in active region 11263 on 2011 August 9 induces a global large-scale disturbance that propagates in a narrow lane above the plasma curtain and creates a low density region that appears as a dimming in the observational image data. This large-scale propagating disturbance acts as a non-periodic driver that interacts asymmetrically and obliquely with the top of the plasma curtain and triggers the observed oscillations. In the deeper layers of the curtain, we find evidence of vertical kink oscillations with two periods (795 s and 530 s). On the magnetic surface of the curtain where the density is inhomogeneous due to coronal dimming, non-decaying vertical oscillations are also observed (period ≈ 763-896 s). We infer that the global large-scale disturbance triggers vertical kink oscillations in the deeper layers as well as on the surface of the large-scale plasma curtain. The properties of the excited waves strongly depend on the local plasma and magnetic field conditions.

  19. Carbon Molecular Sieve Membrane as a True One Box Unit for Large Scale Hydrogen Production

    SciTech Connect (OSTI)

    Liu, Paul

    2012-05-01

fabricated a full-scale CMS membrane and module for the proposed application. This full-scale membrane element is 3 inches in diameter and 30 inches long, composed of ~85 single CMS membrane tubes. The membrane tubes and bundles have demonstrated satisfactory thermal, hydrothermal, thermal cycling and chemical stabilities under an environment simulating the temperature, pressure and contaminant levels encountered in our proposed process. More importantly, the membrane module packed with the CMS bundle was tested for over 30 pressure cycles between ambient pressure and >300-600 psi at 200 to 300°C without mechanical degradation. Finally, internal baffles have been designed and installed to improve flow distribution within the module, which delivered 90% separation efficiency in comparison with the efficiency achieved with single membrane tubes. In summary, the full-scale CMS membrane element and module have been successfully developed and tested satisfactorily for our proposed one-box application; a test quantity of elements/modules has been fabricated for field testing. Multiple field tests have been performed under this project at the National Carbon Capture Center (NCCC). The separation efficiency and performance stability of our full-scale membrane elements have been verified in testing conducted for times ranging from 100 to >250 hours of continuous exposure to coal/biomass gasifier off-gas for hydrogen enrichment, with no gas pre-treatment for contaminant removal. In particular, "tar-like" contaminants were effectively rejected by the membrane with no evidence of fouling. In addition, testing was conducted using a hybrid membrane system, i.e., the CMS membrane in conjunction with the palladium membrane, to demonstrate that 99+% H₂ purity and a high degree of CO₂ capture could be achieved. 
In summary, the stability and performance of the full-scale hydrogen selective CMS membrane/module has been verified in multiple field tests in the presence of coal/biomass gasifier off

  20. Development of fine-resolution analyses and expanded large-scale forcing properties. Part II: Scale-awareness and application to single-column model experiments

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Feng, Sha; Vogelmann, Andrew M.; Li, Zhijin; Liu, Yangang; Lin, Wuyin; Zhang, Minghua; Toto, Tami; Endo, Satoshi

    2015-01-20

Fine-resolution three-dimensional fields have been produced using the Community Gridpoint Statistical Interpolation (GSI) data assimilation system for the U.S. Department of Energy's Atmospheric Radiation Measurement Program (ARM) Southern Great Plains region. The GSI system is implemented in a multi-scale data assimilation framework using the Weather Research and Forecasting model at a cloud-resolving resolution of 2 km. From the fine-resolution three-dimensional fields, large-scale forcing is derived explicitly at grid-scale resolution; a subgrid-scale dynamic component is derived separately, representing subgrid-scale horizontal dynamic processes. Analyses show that the subgrid-scale dynamic component is often a major component relative to the large-scale forcing for grid scales larger than 200 km. The single-column model (SCM) of the Community Atmospheric Model version 5 (CAM5) is used to examine the impact of the grid-scale and subgrid-scale dynamic components on simulated precipitation and cloud fields associated with a mesoscale convective system. It is found that grid-scale size impacts simulated precipitation, resulting in an overestimation for grid scales of about 200 km but an underestimation for smaller grids. The subgrid-scale dynamic component has an appreciable impact on the simulations, suggesting that both grid-scale and subgrid-scale dynamic components should be considered in the interpretation of SCM simulations.
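The grid-scale/subgrid-scale decomposition can be illustrated by block-averaging a fine-resolution field and taking the residual. The field and block size here are synthetic placeholders, not the 2-km GSI analyses:

```python
import numpy as np

# Separate a fine field into a coarse "grid-scale" part and a "subgrid"
# residual by exact block averaging (a caricature of deriving forcing at
# a chosen analysis grid size from a cloud-resolving field).
rng = np.random.default_rng(1)
fine = rng.normal(size=(128, 128))   # stand-in for a 2-km analysis field
block = 16                           # fine cells per coarse grid box

# Coarse-grain: average each block x block patch.
coarse = fine.reshape(128 // block, block, 128 // block, block).mean(axis=(1, 3))

# Broadcast the coarse field back to fine resolution; the residual is the
# subgrid-scale component, which averages to zero within each block.
coarse_full = np.repeat(np.repeat(coarse, block, axis=0), block, axis=1)
subgrid = fine - coarse_full

# The decomposition is exact: fine = grid-scale + subgrid.
print(np.allclose(coarse_full + subgrid, fine))
```

The larger the coarse box, the more variance lands in the subgrid residual, which is the qualitative point the abstract makes for grid scales above 200 km.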

  1. Advanced Rooftop Control (ARC) Retrofit: Field-Test Results

    SciTech Connect (OSTI)

    Wang, Weimin; Katipamula, Srinivas; Ngo, Hung; Underhill, Ronald M.; Taasevigen, Danny J.; Lutes, Robert G.

    2013-07-31

This multi-year research study was initiated to find solutions that improve packaged equipment operating efficiency in the field. Pacific Northwest National Laboratory (PNNL), with funding from the U.S. Department of Energy's (DOE's) Building Technologies Office (BTO) and the Bonneville Power Administration (BPA), conducted this research, development and demonstration (RD&D) study. Packaged equipment with constant-speed supply fans is designed to provide ventilation at the design rate at all times when the fan is operating, as required by building code. Although there are a number of hours during the day when a building may not be fully occupied or the need for ventilation is lower than designed, the ventilation rate cannot be adjusted easily with a constant-speed fan. Therefore, modulating the supply fan in conjunction with demand-controlled ventilation (DCV) will reduce not only the coil energy but also the fan energy. The objective of this multi-year research, development and demonstration project was to determine the magnitude of energy savings achievable by retrofitting existing packaged rooftop air conditioners with advanced control strategies not ordinarily used for packaged units. First, through detailed simulation analysis, it was shown that significant energy savings (between 24% and 35%) and cost savings (38%) in fan, cooling and heating energy consumption could be realized when packaged air conditioning units with gas furnaces are retrofitted with advanced control packages (combining multi-speed fan control, integrated economizer controls and DCV). The simulation analysis also showed significant savings for heat pumps (between 20% and 60%). The simulation analysis was followed by an extensive field test of a retrofittable advanced rooftop unit (RTU) controller.
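The fan-energy portion of such savings follows from the fan affinity laws: fan power scales roughly with the cube of speed. A toy estimate with an invented duty cycle (the hours and speed fraction are assumptions, not figures from the study):

```python
# Fan affinity-law sketch: power ~ speed^3, so running a multi-speed fan
# at reduced speed during low-demand hours cuts fan energy sharply.
# Duty cycle and speed fraction below are hypothetical.
hours_full = 1000.0      # annual hours at full (design) speed
hours_low = 3000.0       # annual hours that could run at reduced speed
low_speed = 0.75         # fraction of design speed in low mode

# Energy in units of (rated kW x hours): constant-speed baseline vs.
# multi-speed retrofit with cubic power scaling at low speed.
baseline = hours_full + hours_low
retrofit = hours_full + hours_low * low_speed**3
savings = 1 - retrofit / baseline
print(f"fan energy savings: {savings:.0%}")
```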

  2. NREL Gearbox Reliability Collaborative: Comparing In-Field Gearbox Response to Different Dynamometer Test Conditions: Preprint

    SciTech Connect (OSTI)

    LaCava, W.; van Dam, J.; Wallen, R.; McNiff, B.

    2011-08-01

    This paper presents the results of NREL's Gearbox Reliability Collaborative comparison of dynamometer tests conducted on a 750-kW gearbox to field testing.

  3. Large-Scale Modeling of Epileptic Seizures: Scaling Properties of Two Parallel Neuronal Network Simulation Algorithms

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Pesce, Lorenzo L.; Lee, Hyong C.; Hereld, Mark; Visser, Sid; Stevens, Rick L.; Wildeman, Albert; van Drongelen, Wim

    2013-01-01

Our limited understanding of the relationship between the behavior of individual neurons and large neuronal networks is an important limitation in current epilepsy research and may be one of the main causes of our inadequate ability to treat it. Addressing this problem directly via experiments is impossibly complex; thus, we have been developing and studying medium-large-scale simulations of detailed neuronal networks to guide us. Flexibility in the connection schemas and a complete description of the cortical tissue seem necessary for this purpose. In this paper we examine some of the basic issues encountered in these multiscale simulations. We have determined the detailed behavior of two such simulators on parallel computer systems. The observed memory and computation-time scaling behavior for a distributed memory implementation were very good over the range studied, both in terms of network sizes (2,000 to 400,000 neurons) and processor pool sizes (1 to 256 processors). Our simulations required between a few megabytes and about 150 gigabytes of RAM and lasted between a few minutes and about a week, well within the capability of most multinode clusters. Therefore, simulations of epileptic seizures on networks with millions of cells should be feasible on current supercomputers.

  4. BactoGeNIE: A large-scale comparative genome visualization for big displays

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Aurisano, Jillian; Reda, Khairi; Johnson, Andrew; Marai, Elisabeta G.; Leigh, Jason

    2015-08-13

The volume of complete bacterial genome sequence data available to comparative genomics researchers is rapidly increasing. However, visualizations in comparative genomics--which aim to enable analysis tasks across collections of genomes--suffer from visual scalability issues. While large, multi-tiled and high-resolution displays have the potential to address scalability issues, new approaches are needed to take advantage of such environments, in order to enable the effective visual analysis of large genomics datasets. In this paper, we present Bacterial Gene Neighborhood Investigation Environment, or BactoGeNIE, a novel and visually scalable design for comparative gene neighborhood analysis on large display environments. We evaluate BactoGeNIE through a case study on close to 700 draft Escherichia coli genomes, and present lessons learned from our design process. In conclusion, BactoGeNIE accommodates comparative tasks over substantially larger collections of neighborhoods than existing tools and explicitly addresses visual scalability. Given current trends in data generation, scalable designs of this type may inform visualization design for large-scale comparative research problems in genomics.

  5. BREEDER: a microcomputer program for financial analysis of a large-scale prototype breeder reactor

    SciTech Connect (OSTI)

    Giese, R.F.

    1984-04-01

    This report describes a microcomputer-based, single-project financial analysis program: BREEDER. BREEDER is a user-friendly model designed to facilitate frequent and rapid analyses of the financial implications associated with alternative design and financing strategies for electric generating plants and large-scale prototype breeder (LSPB) reactors in particular. The model has proved to be a useful tool in establishing cost goals for LSPB reactors. The program is available on floppy disks for use on an IBM personal computer (or IBM look-a-like) running under PC-DOS or a Kaypro II transportable computer running under CP/M (and many other CP/M machines). The report documents version 1.5 of BREEDER and contains a user's guide. The report also includes a general overview of BREEDER, a summary of hardware requirements, a definition of all required program inputs, a description of all algorithms used in performing the construction-period and operation-period analyses, and a summary of all available reports. The appendixes contain a complete source-code listing, a cross-reference table, a sample interactive session, several sample runs, and additional documentation of the net-equity program option.
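A single-project financing model of this kind ultimately reduces to capitalizing construction outlays with interest through the construction period. A minimal sketch with invented inputs and a simple mid-year convention (this is not BREEDER's documented algorithm):

```python
# Construction-period financing sketch: carry forward the accumulated
# balance at the cost of funds, with each year's new outlay accruing a
# half-year of interest (mid-year convention). All inputs are invented.
outlays = [100.0, 300.0, 400.0, 200.0]   # $M spent per construction year
interest = 0.08                          # annual cost of construction funds

capitalized = 0.0
for spend in outlays:
    # prior balance accrues a full year; new spend accrues half a year
    capitalized = capitalized * (1 + interest) + spend * (1 + interest / 2)

print(f"capitalized cost at commercial operation: ${capitalized:.1f}M")
```

The gap between the $1,000M of raw outlays and the capitalized total is the allowance for funds used during construction, one of the quantities a model like this lets users explore across financing strategies.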

  6. A large-scale structure at redshift 1.71 in the Lockman Hole

    SciTech Connect (OSTI)

    Henry, J. Patrick; Hasinger, Günther; Suh, Hyewon; Aoki, Kentaro; Finoguenov, Alexis; Fotopoulou, Sotiria; Salvato, Mara; Tanaka, Masayuki

    2014-01-01

    We previously identified LH146, a diffuse X-ray source in the Lockman Hole, as a galaxy cluster at redshift 1.753. The redshift was based on one spectroscopic value, buttressed by seven additional photometric redshifts. We confirm here the previous spectroscopic redshift and present concordant spectroscopic redshifts for an additional eight galaxies. The average of these nine redshifts is 1.714 ± 0.012 (error on the mean). Scrutiny of the galaxy distribution in redshift space and the plane of the sky shows that there are two concentrations of galaxies near the X-ray source. In addition, there are three diffuse X-ray sources spread along the axis connecting the galaxy concentrations. LH146 is one of these three and lies approximately at the center of the two galaxy concentrations and the outer two diffuse X-ray sources. We thus conclude that LH146 is at the redshift initially reported but it is not a single virialized galaxy cluster, as previously assumed. Rather, it appears to mark the approximate center of a larger region containing more objects. For brevity, we refer to all these objects and their alignments as a large-scale structure. The exact nature of LH146 itself remains unclear.

  7. NV Energy Large-Scale Photovoltaic Integration Study: Intra-Hour Dispatch and AGC Simulation

    SciTech Connect (OSTI)

    Lu, Shuai; Etingov, Pavel V.; Meng, Da; Guo, Xinxin; Jin, Chunlian; Samaan, Nader A.

    2013-01-02

    The uncertainty and variability with photovoltaic (PV) generation make it very challenging to balance power system generation and load, especially under high penetration cases. Higher reserve requirements and more cycling of conventional generators are generally anticipated for large-scale PV integration. However, whether the existing generation fleet is flexible enough to handle the variations and how well the system can maintain its control performance are difficult to predict. The goal of this project is to develop a software program that can perform intra-hour dispatch and automatic generation control (AGC) simulation, by which the balancing operations of a system can be simulated to answer the questions posed above. The simulator, named Electric System Intra-Hour Operation Simulator (ESIOS), uses the NV Energy southern system as a study case, and models the system’s generator configurations, AGC functions, and operator actions to balance system generation and load. Actual dispatch of AGC generators and control performance under various PV penetration levels can be predicted by running ESIOS. With data about the load, generation, and generator characteristics, ESIOS can perform similar simulations and assess variable generation integration impacts for other systems as well. This report describes the design of the simulator and presents the study results showing the PV impacts on NV Energy real-time operations.
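The balancing loop such a simulator runs can be caricatured in a few lines: at each AGC cycle, regulating units move toward the residual imbalance subject to a ramp limit. The cycle length, ramp limit, and imbalance statistics below are invented placeholders, not ESIOS parameters:

```python
import numpy as np

# Toy intra-hour balancing: one hour of 4-second AGC cycles tracking a
# random-walk load/PV imbalance with a ramp-limited AGC fleet.
rng = np.random.default_rng(2)
steps = 900                          # 4-s cycles in one hour
ramp_limit = 0.5                     # MW the fleet can move per cycle
imbalance = np.cumsum(rng.normal(0, 0.2, steps))   # load - generation, MW

agc_output = 0.0
ace_history = []
for miss in imbalance:
    ace = miss - agc_output          # residual area control error (ACE)
    # move AGC output toward the ACE, clipped to the ramp capability
    agc_output += np.clip(ace, -ramp_limit, ramp_limit)
    ace_history.append(ace)

ace_history = np.array(ace_history)
print(f"max |ACE|: {np.abs(ace_history).max():.2f} MW")
```

Raising the PV-driven variability or tightening the ramp limit in this sketch degrades the ACE statistics, which is the kind of control-performance question the simulator is built to answer at full fidelity.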

  8. Public attitudes regarding large-scale solar energy development in the U.S.

    SciTech Connect (OSTI)

    Carlisle, Juliet E.; Kane, Stephanie L.; Solan, David; Bowman, Madelaine; Joe, Jeffrey C.

    2015-08-01

Using data collected from both a national sample and an oversample in the U.S. Southwest, we examine public attitudes toward the construction of utility-scale solar facilities in the U.S. as well as development in one's own county. Our multivariate analyses assess demographic and sociopsychological factors as well as context, in terms of the proximity of the proposed project, by considering the effect of predictors for respondents living in the Southwest versus those from the national sample. We find that the predictors, and the impact of the predictors, related to support for and opposition to solar development vary in terms of psychological and physical distance. Overall, for respondents living in the U.S. Southwest we find that environmentalism, belief that developers receive too many incentives, and trust in project developers are significantly related to support for and opposition to solar development in general. When Southwest respondents consider large-scale solar development in their own county, the influence of these variables changes such that only property value, race, and age yield influence. Differential effects occur for respondents in our national sample. We believe our findings to be relevant for those outside the U.S. due to the considerable growth PV solar has experienced in the last decade, especially in China, Japan, Germany, and the U.S.

  9. Large-scale atomistic simulations of helium-3 bubble growth in complex palladium alloys

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Hale, Lucas M.; Zimmerman, Jonathan A.; Wong, Bryan M.

    2016-05-18

    Palladium is an attractive material for hydrogen and hydrogen-isotope storage applications due to its large storage density and the high diffusivity of lattice hydrogen. When considering tritium storage, the material's structural and mechanical integrity is threatened both by the embrittlement effect of hydrogen and by the creation and evolution of additional crystal defects (e.g., dislocations, stacking faults) caused by the formation and growth of helium-3 bubbles. Using recently developed interatomic potentials for the palladium-silver-hydrogen system, we perform large-scale atomistic simulations to examine the defect-mediated mechanisms that govern helium bubble growth. Our simulations show the evolution of a distribution of material defects, and we compare the observed material behavior with expectations from experiment and theory. In conclusion, we also present density functional theory calculations that characterize ideal tensile and shear strengths for these materials, which help explain how and why our developed potentials either meet or confound these expectations.

  10. Galaxy evolution and large-scale structure in the far-infrared. I. IRAS pointed observations

    SciTech Connect (OSTI)

    Lonsdale, C.J.; Hacking, P.B.

    1989-04-01

    Redshifts for 66 galaxies were obtained from a sample of 93 60-micron sources detected serendipitously in 22 IRAS deep pointed observations covering a total area of 18.4 sq deg. The flux density limit of this survey is 150 mJy, 4 times fainter than the IRAS Point Source Catalog (PSC). The luminosity function is similar in shape to those previously published for samples selected from the PSC, with a median redshift of 0.048 for the fainter sample, but shifted to higher space densities. There is evidence that some of the excess number counts in the deeper sample can be explained in terms of a large-scale density enhancement beyond the Pavo-Indus supercluster. In addition, the faintest counts in the new sample confirm the result of Hacking et al. (1989) that faint IRAS 60-micron source counts lie significantly in excess of an extrapolation of the PSC counts assuming no luminosity or density evolution. 81 refs.

  11. A Report on Simulation-Driven Reliability and Failure Analysis of Large-Scale Storage Systems

    SciTech Connect (OSTI)

    Wan, Lipeng; Wang, Feiyi; Oral, H. Sarp; Vazhkudai, Sudharshan S.; Cao, Qing

    2014-11-01

    High-performance computing (HPC) storage systems provide data availability and reliability using various hardware and software fault-tolerance techniques. Usually, reliability and availability are calculated at the subsystem or component level using limited metrics such as mean time to failure (MTTF) or mean time to data loss (MTTDL). This often means settling on simple and disconnected failure models (such as an exponential failure rate) to achieve tractable, closed-form solutions. However, such models have been shown to be insufficient in assessing end-to-end storage system reliability and availability. We propose a generic simulation framework aimed at analyzing the reliability and availability of storage systems at scale and investigating what-if scenarios. The framework is designed for an end-to-end storage system, accommodating the various components and subsystems, their interconnections, and failure patterns and propagation, and it performs dependency analysis to capture a wide range of failure cases. We evaluate the framework against a large-scale storage system in production and analyze its failure projections toward and beyond the end of its lifecycle. We also examine the potential operational impact by studying how different types of components affect overall system reliability and availability, and present preliminary results.
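The kind of simulation-driven reliability estimate the abstract describes can be illustrated with a small Monte Carlo sketch: draw component lifetimes, propagate failures through a simple redundancy structure, and estimate time to data loss. The RAID-like single-parity grouping, the disk MTTF, and the omission of repairs are illustrative assumptions, not the paper's actual model.

```python
# Hedged sketch of a Monte Carlo storage-reliability simulation.
# Grouping, parameters, and the no-repair simplification are illustrative.
import random

def system_lifetime(n_groups=10, disks_per_group=8, mttf_disk=1.0e5, rng=random):
    """Time (hours) to first data loss: a group loses data when any two of
    its disks fail (single-parity tolerance, repairs ignored)."""
    worst = float("inf")
    for _ in range(n_groups):
        # exponential lifetimes; a Weibull would capture infant mortality/wear-out
        times = sorted(rng.expovariate(1.0 / mttf_disk)
                       for _ in range(disks_per_group))
        worst = min(worst, times[1])  # second failure in a group loses data
    return worst

random.seed(42)
samples = [system_lifetime() for _ in range(2000)]
mttdl_estimate = sum(samples) / len(samples)
print(f"estimated MTTDL: {mttdl_estimate:.0f} hours")
```

Swapping `expovariate` for a Weibull draw is a one-line change, which is exactly the flexibility a simulation framework buys over closed-form MTTDL formulas.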

  12. Combined Climate and Carbon-Cycle Effects of Large-Scale Deforestation

    SciTech Connect (OSTI)

    Bala, G; Caldeira, K; Wickett, M; Phillips, T J; Lobell, D B; Delire, C; Mirin, A

    2006-10-17

    The prevention of deforestation and promotion of afforestation have often been cited as strategies to slow global warming. Deforestation releases CO{sub 2} to the atmosphere, which exerts a warming influence on Earth's climate. However, biophysical effects of deforestation, which include changes in land surface albedo, evapotranspiration, and cloud cover, also affect climate. Here we present results from several large-scale deforestation experiments performed with a three-dimensional coupled global carbon-cycle and climate model. These are the first such simulations performed using a fully three-dimensional model representing physical and biogeochemical interactions among land, atmosphere, and ocean. We find that global-scale deforestation has a net cooling influence on Earth's climate, since the warming carbon-cycle effects of deforestation are overwhelmed by the net cooling associated with changes in albedo and evapotranspiration. Latitude-specific deforestation experiments indicate that afforestation projects in the tropics would be clearly beneficial in mitigating global-scale warming, but would be counterproductive if implemented at high latitudes and would offer only marginal benefits in temperate regions. While these results question the efficacy of mid- and high-latitude afforestation projects for climate mitigation, forests remain environmentally valuable resources for many reasons unrelated to climate.

  13. Public attitudes regarding large-scale solar energy development in the U.S.

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Carlisle, Juliet E.; Kane, Stephanie L.; Solan, David; Bowman, Madelaine; Joe, Jeffrey C.

    2015-08-01

    Using data collected from both a national sample as well as an oversample in the U.S. Southwest, we examine public attitudes toward the construction of utility-scale solar facilities in the U.S. as well as development in one's own county. Our multivariate analyses assess demographic and sociopsychological factors as well as context, in terms of proximity of the proposed project, by considering the effect of predictors for respondents living in the Southwest versus those from the national sample. We find that the predictors, and the impact of the predictors, related to support and opposition to solar development vary in terms of psychological and physical distance. Overall, for respondents living in the U.S. Southwest we find environmentalism, belief that developers receive too many incentives, and trust in project developers to be significantly related to support and opposition to solar development in general. When Southwest respondents consider large-scale solar development in their own county, the influence of these variables changes so that only property value, race, and age exert influence. Differential effects occur for respondents of our national sample. We believe our findings are relevant for those outside the U.S. because of the considerable growth PV solar has experienced in the last decade, especially in China, Japan, Germany, and the U.S.

  14. Public attitudes regarding large-scale solar energy development in the U.S.

    SciTech Connect (OSTI)

    Carlisle, Juliet E.; Kane, Stephanie L.; Solan, David; Bowman, Madelaine; Joe, Jeffrey C.

    2015-08-01

    Using data collected from both a national sample as well as an oversample in the U.S. Southwest, we examine public attitudes toward the construction of utility-scale solar facilities in the U.S. as well as development in one's own county. Our multivariate analyses assess demographic and sociopsychological factors as well as context, in terms of proximity of the proposed project, by considering the effect of predictors for respondents living in the Southwest versus those from the national sample. We find that the predictors, and the impact of the predictors, related to support and opposition to solar development vary in terms of psychological and physical distance. Overall, for respondents living in the U.S. Southwest we find environmentalism, belief that developers receive too many incentives, and trust in project developers to be significantly related to support and opposition to solar development in general. When Southwest respondents consider large-scale solar development in their own county, the influence of these variables changes so that only property value, race, and age exert influence. Differential effects occur for respondents of our national sample. We believe our findings are relevant for those outside the U.S. because of the considerable growth PV solar has experienced in the last decade, especially in China, Japan, Germany, and the U.S.

  15. Optimizing Cluster Heads for Energy Efficiency in Large-Scale Heterogeneous Wireless Sensor Networks

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Gu, Yi; Wu, Qishi; Rao, Nageswara S. V.

    2010-01-01

    Many complex sensor network applications require deploying a large number of inexpensive and small sensors over a vast geographical region to achieve quality through quantity. Hierarchical clustering is generally considered an efficient and scalable way to facilitate the management and operation of such large-scale networks and minimize the total energy consumption for prolonged lifetime. Judicious selection of cluster heads for data integration and communication is critical to the success of applications based on hierarchical sensor networks organized as layered clusters. We investigate the problem of selecting sensor nodes in a predeployed sensor network to be the cluster heads so as to minimize the total energy needed for data gathering. We rigorously derive an analytical formula to optimize the number of cluster heads in sensor networks under uniform node distribution, and propose a Distance-based Crowdedness Clustering algorithm to determine the cluster heads under general node distribution. The results from an extensive set of experiments on a large number of simulated sensor networks illustrate the performance superiority of the proposed solution over clustering schemes based on the k-means algorithm.
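An analytical optimum for the number of cluster heads under uniform node distribution can be derived by minimizing total energy per round under a first-order radio model. The sketch below follows the classic LEACH-style derivation; it illustrates the approach and is not necessarily the formula this paper derives, and all parameter values are assumptions.

```python
# LEACH-style optimal cluster-head count under a first-order radio model
# (illustrative derivation, not necessarily this paper's formula).
import math

def optimal_cluster_heads(n_nodes, field_side, d_to_bs,
                          eps_fs=10e-12, eps_mp=0.0013e-12):
    """k_opt = sqrt(N / (2*pi)) * sqrt(eps_fs / eps_mp) * M / d_bs^2,
    where eps_fs (J/bit/m^2) and eps_mp (J/bit/m^4) are the free-space
    and multipath amplifier energies, M is the field side length, and
    d_bs is the distance to the base station."""
    return (math.sqrt(n_nodes / (2 * math.pi))
            * math.sqrt(eps_fs / eps_mp)
            * field_side / d_to_bs**2)

# 100 nodes on a 100 m x 100 m field, base station 100 m away
k = optimal_cluster_heads(n_nodes=100, field_side=100.0, d_to_bs=100.0)
print(f"optimal cluster heads: {k:.2f}")
```

The intuition: more cluster heads shorten intra-cluster hops (cheap, free-space regime) but add more long multipath transmissions to the base station; the minimum of the total-energy expression balances the two.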

  16. Building a Large Scale Climate Data System in Support of HPC Environment

    SciTech Connect (OSTI)

    Wang, Feiyi; Harney, John F; Shipman, Galen M

    2011-01-01

    The Earth System Grid Federation (ESG) is a large-scale, multi-institutional, interdisciplinary project that aims to provide climate scientists and impact policy makers worldwide a web-based and client-based platform to publish, disseminate, compare, and analyze ever-increasing volumes of climate-related data. This paper describes our practical experience in the design, development, and operation of such a system. In particular, we focus on support of the data lifecycle from a high performance computing (HPC) perspective that is critical to the end-to-end scientific discovery process. We discuss three subjects that interconnect the consumers and producers of scientific datasets: (1) the motivations, complexities, and solutions of deep storage access and sharing in a tightly controlled environment; (2) the importance of scalable and flexible data publication/population; and (3) high-performance indexing and search of data with geospatial properties. These seemingly corner issues collectively contribute to the overall user experience and proved to be as important as any other architectural design consideration. Although the requirements and challenges are rooted in and discussed from a climate science domain context, we believe the architectural problems, ideas, and solutions discussed in this paper are generally useful and applicable in a larger scope.

  17. Implementation of a multi-threaded framework for large-scale scientific applications

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Sexton-Kennedy, E.; Gartung, Patrick; Jones, C. D.; Lange, David

    2015-05-22

    The CMS experiment has recently completed the development of a multi-threaded capable application framework. In this paper, we discuss the design, implementation, and application of this framework to production applications in CMS. For the 2015 LHC run, this functionality is particularly critical for both our online and offline production applications, which depend on faster turn-around times and a reduced memory footprint relative to previous runs. These applications are complex codes, each including a large number of physics-driven algorithms. While the framework is capable of running a mix of thread-safe and 'legacy' modules, algorithms running in our production applications need to be thread-safe for optimal use of this multi-threaded framework at a large scale. Towards this end, we discuss the types of changes that were necessary for our algorithms to achieve good performance in a full-scale multithreaded application. Lastly, performance numbers for what has been achieved for the 2015 run are presented.

  18. Fingerprints of anomalous primordial Universe on the abundance of large scale structures

    SciTech Connect (OSTI)

    Baghram, Shant; Abolhasani, Ali Akbar; Firouzjahi, Hassan; Namjoo, Mohammad Hossein E-mail: abolhasani@ipm.ir E-mail: MohammadHossein.Namjoo@utdallas.edu

    2014-12-01

    We study the predictions of anomalous inflationary models for the abundance of structures in large scale structure observations. The anomalous features encoded in the primordial curvature perturbation power spectrum are (a) a localized feature in momentum space, (b) hemispherical asymmetry, and (c) statistical anisotropies. We present a model-independent expression relating the number density of structures to changes in the matter density variance. Models with a localized feature can alleviate the tension between observations and numerical simulations of cold dark matter structures on galactic scales as a possible solution to the missing satellite problem. In models with hemispherical asymmetry we show that the abundance of structures becomes asymmetric depending on the direction of observation on the sky. In addition, we study the effects of a scale-dependent dipole amplitude on the abundance of structures. Using the quasar data and adopting the power-law scaling k{sup n{sub A}-1} for the amplitude of the dipole we find the upper bound n{sub A}<0.6 for the spectral index of the dipole asymmetry. In all cases there is a critical mass scale M{sub c} such that for M<M{sub c} (M>M{sub c}) the enhancement in variance induced by the anomalous feature decreases (increases) the abundance of dark matter structures in the Universe.
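The scale-dependent dipole modulation referred to above is conventionally parametrized as follows; this is the standard form used in the dipole-asymmetry literature, and the pivot scale k_0 and sign conventions are assumptions of the illustration rather than details taken from the paper:

```latex
% Dipole-modulated primordial curvature power spectrum (standard parametrization)
\mathcal{P}_{\mathcal{R}}(k,\hat{n}) \;=\;
  \mathcal{P}_{\mathcal{R}}(k)\,\bigl[\,1 + 2A(k)\,\hat{p}\cdot\hat{n}\,\bigr],
\qquad
A(k) \;=\; A_0 \left(\frac{k}{k_0}\right)^{n_A-1},
```

where \(\hat{p}\) is the preferred direction and \(\hat{n}\) the line of sight. The bound n_A < 0.6 quoted in the abstract constrains the spectral index of A(k) in this power-law scaling.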

  19. The Linearly Scaling 3D Fragment Method for Large Scale Electronic Structure Calculations

    SciTech Connect (OSTI)

    Zhao, Zhengji; Meza, Juan; Lee, Byounghak; Shan, Hongzhang; Strohmaier, Erich; Bailey, David; Wang, Lin-Wang

    2009-06-26

    The Linearly Scaling three-dimensional fragment (LS3DF) method is an O(N) ab initio electronic structure method for large-scale nano material simulations. It is a divide-and-conquer approach with a novel patching scheme that effectively cancels out the artificial boundary effects, which exist in all divide-and-conquer schemes. This method has made ab initio simulations of thousand-atom nanosystems feasible in a couple of hours, while retaining essentially the same accuracy as the direct calculation methods. The LS3DF method won the 2008 ACM Gordon Bell Prize for algorithm innovation. Our code has reached 442 Tflop/s running on 147,456 processors on the Cray XT5 (Jaguar) at OLCF, and has been run on 163,840 processors on the Blue Gene/P (Intrepid) at ALCF, and has been applied to a system containing 36,000 atoms. In this paper, we will present the recent parallel performance results of this code, and will apply the method to asymmetric CdSe/CdS core/shell nanorods, which have potential applications in electronic devices and solar cells.
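The boundary-cancellation idea behind the LS3DF patching scheme can be shown with a one-dimensional toy. Here each isolated fragment calculation carries an artificial energy error from its two exposed boundaries, and summing overlapping fragments with alternating signs cancels those errors; the constant per-boundary error model is an illustrative assumption (in the real method the cancellation is approximate, not exact).

```python
# Toy 1D illustration of LS3DF-style fragment patching: boundary errors
# cancel when overlapping fragments are combined with alternating signs.
DELTA = 0.37  # artificial energy error per exposed fragment boundary (assumed constant)

def fragment_energy(cells):
    """Energy of an isolated fragment: true cell energies + 2 boundary errors."""
    return sum(cells) + 2 * DELTA

cell_energies = [1.0, 2.5, -0.5, 3.0, 1.5]  # periodic 1D system
n = len(cell_energies)

# Patched sum: E ~ sum_i [ E(F_i U F_{i+1}) - E(F_{i+1}) ]; each pair of
# fragment calculations exposes the same boundaries, so the errors cancel.
patched = sum(
    fragment_energy([cell_energies[i], cell_energies[(i + 1) % n]])
    - fragment_energy([cell_energies[(i + 1) % n]])
    for i in range(n)
)
print(patched)
```

The same bookkeeping in three dimensions uses eight fragment sizes per corner with signs chosen so every artificial surface, edge, and corner term appears an equal number of times with each sign, which is what makes the divide-and-conquer sum O(N).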

  20. The linearly scaling 3D fragment method for large scale electronic structure calculations

    SciTech Connect (OSTI)

    Zhao, Zhengji; Meza, Juan; Lee, Byounghak; Shan, Hongzhang; Strohmaier, Erich; Bailey, David; Wang, Lin-Wang

    2009-07-28

    The Linearly Scaling three-dimensional fragment (LS3DF) method is an O(N) ab initio electronic structure method for large-scale nano material simulations. It is a divide-and-conquer approach with a novel patching scheme that effectively cancels out the artificial boundary effects, which exist in all divide-and-conquer schemes. This method has made ab initio simulations of thousand-atom nanosystems feasible in a couple of hours, while retaining essentially the same accuracy as the direct calculation methods. The LS3DF method won the 2008 ACM Gordon Bell Prize for algorithm innovation. Our code has reached 442 Tflop/s running on 147,456 processors on the Cray XT5 (Jaguar) at OLCF, and has been run on 163,840 processors on the Blue Gene/P (Intrepid) at ALCF, and has been applied to a system containing 36,000 atoms. In this paper, we will present the recent parallel performance results of this code, and will apply the method to asymmetric CdSe/CdS core/shell nanorods, which have potential applications in electronic devices and solar cells.

  1. Field Scale Test and Verification of CHP System at the Ritz Carlton...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Field Scale Test and Verification of CHP System at the Ritz Carlton, San Francisco, August 2007 Field Scale Test and Verification of CHP System at the Ritz Carlton, San Francisco, ...

  2. Field Test Best Practices: A Dynamic Web Tool for Practical Guidance

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    operated by the Alliance for Sustainable Energy, LLC. Field Test Best Practices ... Difficult to find general guidelines and examples of well-designed field test plans No ...

  3. High-Voltage Broadband-Over-Powerline (HV-BPL) Field Test Report...

    Open Energy Info (EERE)

    Voltage Broadband-Over-Powerline (HV-BPL) Field Test Report Jump to: navigation, search Tool Summary LAUNCH TOOL Name: High-Voltage Broadband-Over-Powerline (HV-BPL) Field Test...

  4. Interagency Field Test Evaluates Co-operation of Turbines and Radar |

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Department of Energy Interagency Field Test Evaluates Co-operation of Turbines and Radar Interagency Field Test Evaluates Co-operation of Turbines and Radar May 1, 2012 - 2:56pm Addthis The Department of Energy and federal agency partners recently completed the first in a series of three radar technology field tests and demonstrations. The Interagency Field Test and Evaluation of Wind-Radar Mitigation Technologies is an $8 million demonstration initiative co-funded by the Energy Department,

  5. Concept Testing and Development at the Raft River Geothermal Field, Idaho |

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Department of Energy Concept Testing and Development at the Raft River Geothermal Field, Idaho Concept Testing and Development at the Raft River Geothermal Field, Idaho Concept Testing and Development at the Raft River Geothermal Field, Idaho presentation at the April 2013 peer review meeting held in Denver, Colorado. raft_river_peer2013.pdf (3.68 MB) More Documents & Publications Concept Testing and Development at the Raft River Geothermal Field, Idaho track 4: enhanced geothermal

  6. Field test of a post-closure radiation monitor

    SciTech Connect (OSTI)

    Reed, S.E.; Christy, C.E.; Heath, R.E.

    1995-10-01

    The DOE is conducting remedial actions at many sites contaminated with radioactive materials. After closure of these sites, long-term subsurface monitoring is typically required by law. This monitoring is generally labor intensive and expensive using conventional sampling and analysis techniques. The U.S. Department of Energy's Morgantown Energy Technology Center (METC) has contracted with Babcock and Wilcox to develop a Long-Term Post-Closure Radiation Monitoring System (LPRMS) to reduce these monitoring costs. The system designed in Phase I of this development program monitors gamma radiation using a subsurface cesium iodide scintillator coupled to above-ground detection electronics by optical waveguide. The radiation probe can be installed to depths of up to 50 meters using cone penetrometer techniques, and requires no downhole electrical power. Multiplexing, data logging, and analysis are performed at a central location. A prototype LPRMS probe was built, and B&W and FERMCO field tested this monitoring probe at the Fernald Environmental Management Project in the fall of 1994 with funding from the DOE's Office of Technology Development (EM-50) through METC. The system was used to measure soil and water with known uranium contamination levels, both in drums and in situ at depths up to 3 meters. For comparison purposes, measurements were also performed using a more conventional survey probe with a sodium iodide scintillator directly butt-coupled to detection electronics.

  7. Model based multivariable controller for large scale compression stations. Design and experimental validation on the LHC 18KW cryorefrigerator

    SciTech Connect (OSTI)

    Bonne, François; Bonnay, Patrick; Bradu, Benjamin

    2014-01-29

    In this paper, a multivariable model-based non-linear controller for Warm Compression Stations (WCS) is proposed. The strategy is to replace all the PID loops controlling the WCS with an optimally designed model-based multivariable loop. This new strategy provides high stability and fast rejection of disturbances such as those induced by a turbine or compressor stop, a key aspect in large-scale cryogenic refrigeration. The proposed control scheme can be used to obtain precise control of every pressure in normal operation, or to stabilize and control the cryoplant under large variations of thermal load (such as the pulsed heat loads expected in the cryogenic cooling systems of future fusion reactors like the International Thermonuclear Experimental Reactor, ITER, or the Japan Torus-60 Super Advanced fusion experiment, JT-60SA). The paper details how to set up the WCS model to synthesize the Linear Quadratic optimal feedback gain and how to use it. After preliminary tuning at CEA-Grenoble on the 400W@1.8K helium test facility, the controller was implemented on a Schneider PLC and fully tested, first on CERN's real-time simulator, and then experimentally validated on a real CERN cryoplant. The efficiency of the solution is experimentally assessed using a realistic operating scenario of starts and stops of compressors and cryogenic turbines. This work is partially supported through the European Fusion Development Agreement (EFDA) Goal Oriented Training Program, task agreement WP10-GOT-GIRO.
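Synthesizing a Linear Quadratic optimal feedback gain, as the paper does for the WCS model, amounts to solving a discrete algebraic Riccati equation. A minimal sketch for a toy two-state model follows; the dynamics matrices and weights are illustrative placeholders, not the paper's compression-station model.

```python
# Sketch of discrete LQR gain synthesis via Riccati value iteration.
# A, B, Q, R are toy placeholders, not the WCS model from the paper.
import numpy as np

def dlqr_gain(A, B, Q, R, iters=500):
    """Iterate the discrete Riccati recursion to convergence and return K,
    where u = -K x minimizes sum(x'Qx + u'Ru)."""
    P = Q.copy()
    for _ in range(iters):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
    return K

A = np.array([[1.0, 0.1], [0.0, 0.9]])  # toy coupled pressure/flow dynamics
B = np.array([[0.0], [0.1]])            # one actuator (e.g., a bypass valve)
Q = np.eye(2)                           # state weighting
R = np.array([[0.1]])                   # control-effort weighting
K = dlqr_gain(A, B, Q, R)

# Closed-loop matrix A - B K should be stable (spectral radius < 1).
eigs = np.linalg.eigvals(A - B @ K)
print(max(abs(eigs)) < 1.0)
```

Tuning then happens through Q and R (state regulation versus actuator effort) rather than per-loop PID gains, which is what lets one multivariable loop replace the set of PIDs.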

  8. Field Testing of the Advanced Worker Protection System

    Office of Scientific and Technical Information (OSTI)

    ... simulate actual decontamination activities. ... to shake down the new IUOE test facilities. ... words about the implications of the test results to DoE missions. Acknowledgments ...

  9. Autonomous UAV-Based Mapping of Large-Scale Urban Firefights

    SciTech Connect (OSTI)

    Snarski, S; Scheibner, K F; Shaw, S; Roberts, R S; LaRow, A; Oakley, D; Lupo, J; Neilsen, D; Judge, B; Forren, J

    2006-03-09

    This paper describes experimental results from a live-fire data collect designed to demonstrate the ability of IR and acoustic sensing systems to detect and map high-volume gunfire events from tactical UAVs. The data collect supports an exploratory study of the FightSight concept, in which an autonomous UAV-based sensor exploitation and decision support capability is being proposed to provide dynamic situational awareness for large-scale battalion-level firefights in cluttered urban environments. FightSight integrates IR imagery, acoustic data, and 3D scene context data with prior time information in a multi-level, multi-step probabilistic fusion process to reliably locate and map the array of urban firing events, firepower movements, and trends associated with the evolving urban battlefield situation. Described here are sensor results from live-fire experiments involving simultaneous firing of multiple sub/super-sonic weapons (2 AK47, 2 M16, 1 Beretta, 1 mortar, 1 rocket) with high optical and acoustic clutter at ranges up to 400 m. Sensor-shooter-target configurations and clutter were designed to simulate UAV sensing conditions for a high-intensity firefight in an urban environment. The sensor systems evaluated were an IR bullet tracking system by Lawrence Livermore National Laboratory (LLNL) and an acoustic gunshot detection system by Planning Systems, Inc. (PSI). The results convincingly demonstrate the ability of the LLNL and PSI sensor systems to accurately detect, separate, and localize multiple shooters and the associated shot directions during a high-intensity firefight (77 rounds in 5 sec) in a high acoustic and optical clutter environment with no false alarms. Preliminary fusion processing was also examined and demonstrated an ability to distinguish co-located shooters (shooter density), determine range to <0.5 m accuracy at 400 m, and identify weapon type.

  10. Biomass Energy for Transport and Electricity: Large scale utilization under low CO2 concentration scenarios

    SciTech Connect (OSTI)

    Luckow, Patrick; Wise, Marshall A.; Dooley, James J.; Kim, Son H.

    2010-01-25

    This paper examines the potential role of large scale, dedicated commercial biomass energy systems under global climate policies designed to stabilize atmospheric concentrations of CO2 at 400ppm and 450ppm. We use an integrated assessment model of energy and agriculture systems to show that, given a climate policy in which terrestrial carbon is appropriately valued equally with carbon emitted from the energy system, biomass energy has the potential to be a major component of achieving these low concentration targets. The costs of processing and transporting biomass energy at much larger scales than current experience are also incorporated into the modeling. From the scenario results, 120-160 EJ/year of biomass energy is produced by midcentury and 200-250 EJ/year by the end of this century. In the first half of the century, much of this biomass is from agricultural and forest residues, but after 2050 dedicated cellulosic biomass crops become the dominant source. A key finding of this paper is the role that carbon dioxide capture and storage (CCS) technologies coupled with commercial biomass energy can play in meeting stringent emissions targets. Despite the higher technology costs of CCS, the resulting negative emissions used in combination with biomass are a very important tool in controlling the cost of meeting a target, offsetting the venting of CO2 from sectors of the energy system that may be more expensive to mitigate, such as oil use in transportation. The paper also discusses the role of cellulosic ethanol and Fischer-Tropsch biomass derived transportation fuels and shows that both technologies are important contributors to liquid fuels production, with unique costs and emissions characteristics. Through application of the GCAM integrated assessment model, it becomes clear that, given CCS availability, bioenergy will be used both in electricity and transportation.

  11. Large scale synthesis of nanostructured zirconia-based compounds from freeze-dried precursors

    SciTech Connect (OSTI)

    Gomez, A.; Villanueva, R.; Vie, D.; Murcia-Mascaros, S.; Martinez, E.; Beltran, A.; Sapina, F.; Vicent, M.; Sanchez, E.

    2013-01-15

    Nanocrystalline zirconia powders have been obtained at the multigram scale by thermal decomposition of precursors resulting from the freeze-drying of aqueous acetic solutions. This technique has also made it possible to synthesize a variety of nanostructured yttria- or scandia-doped zirconia compositions. SEM images, as well as analysis of the XRD patterns, show the nanoparticulate character of the solids obtained at low temperature, with typical particle sizes in the 10-15 nm range when prepared at 673 K. The presence of the monoclinic phase, the tetragonal phase, or both depends on the temperature of the thermal treatment, the doping concentration, and the nature of the dopant. In addition, Rietveld refinement of the XRD profiles of selected samples reveals the coexistence of the tetragonal and cubic phases at high doping concentrations and high thermal treatment temperatures. Raman experiments suggest the presence of both phases also at relatively low treatment temperatures. - Graphical abstract: Zr{sub 1-x}A{sub x}O{sub 2-x/2} (A=Y, Sc; 0{<=}x{<=}0.12) solid solutions have been prepared as nanostructured powders by thermal decomposition of precursors obtained by freeze-drying, and this synthetic procedure has been scaled up to the 100 g scale. Highlights: (1) Zr{sub 1-x}A{sub x}O{sub 2-x/2} (A=Y, Sc; 0{<=}x{<=}0.12) solid solutions have been prepared as nanostructured powders. (2) The synthetic method involves the thermal decomposition of precursors obtained by freeze-drying. (3) The temperature of the thermal treatment controls particle size. (4) The preparation procedure has been scaled up to the 100 g scale. (5) This method is appropriate for the large-scale industrial preparation of multimetallic systems.

  12. FY results for the Los Alamos large scale demonstration and deployment project

    SciTech Connect (OSTI)

    Stallings, E.; McFee, J.

    2000-11-01

    The Los Alamos Large Scale Demonstration and Deployment Project (LSDDP), in support of the US Department of Energy (DOE) Deactivation and Decommissioning Focus Area (DDFA), is identifying and demonstrating technologies to reduce the cost and risk of managing transuranic-contaminated large metal objects, i.e., gloveboxes. DOE must dispose of hundreds of gloveboxes from Rocky Flats, Los Alamos, and other DOE sites. Current practice for removal, decontamination, and size reduction of large metal objects translates to a DOE system-wide cost in excess of $800 million, without disposal costs. In FY99 and FY00 the Los Alamos LSDDP performed several demonstrations of cost- and risk-saving technologies. Commercial air pallets were demonstrated for movement and positioning of oversized crates in neutron counting equipment. The air pallets can cost-effectively address the complete waste management inventory, whereas the baseline wheeled carts could address only 25% of the inventory, with higher manpower costs. A gamma interrogation radiography technology, originally developed for radiography of trucks for identification of contraband, was demonstrated to support characterization of the crates. The radiographs were extremely useful in guiding the selection and method for opening very large crated metal objects. The cost of the radiography was small and the operating benefit is high. Another demonstration compared a Blade Cutting Plunger with a reciprocating saw for removal of glovebox legs and appurtenances. The cost comparison showed that the Blade Cutting Plunger costs were comparable, and a significant safety advantage was reported. A second radiography demonstration was conducted to evaluate a technology based on WIPP-type x-ray characterization of large boxes. This technology provides considerable detail of the contents of the crates. The technology identified details as small as the fasteners in the crates, an unpunctured aerosol can, and a vessel

  13. LUCI: A facility at DUSEL for large-scale experimental study of geologic carbon sequestration

    SciTech Connect (OSTI)

    Peters, C. A.; Dobson, P.F.; Oldenburg, C.M.; Wang, J. S. Y.; Onstott, T.C.; Scherer, G.W.; Freifeld, B.M.; Ramakrishnan, T.S.; Stabinski, E.L.; Liang, K.; Verma, S.

    2010-10-01

    LUCI, the Laboratory for Underground CO{sub 2} Investigations, is an experimental facility being planned for the DUSEL underground laboratory in South Dakota, USA. It is designed to study vertical flow of CO{sub 2} in porous media over length scales representative of leakage scenarios in geologic carbon sequestration. The plan for LUCI is a set of three vertical column pressure vessels, each of which is {approx}500 m long and {approx}1 m in diameter. The vessels will be filled with brine and sand or sedimentary rock. Each vessel will have an inner column to simulate a well for deployment of down-hole logging tools. The experiments are configured to simulate CO{sub 2} leakage by releasing CO{sub 2} into the bottoms of the columns. The scale of the LUCI facility will permit measurements to study CO{sub 2} flow over pressure and temperature variations that span supercritical to subcritical gas conditions. It will enable observation or inference of a variety of relevant processes such as buoyancy-driven flow in porous media, Joule-Thomson cooling, thermal exchange, viscous fingering, residual trapping, and CO{sub 2} dissolution. Experiments are also planned for reactive flow of CO{sub 2} and acidified brines in caprock sediments and well cements, and for CO{sub 2}-enhanced methanogenesis in organic-rich shales. A comprehensive suite of geophysical logging instruments will be deployed to monitor experimental conditions as well as provide data to quantify vertical resolution of sensor technologies. The experimental observations from LUCI will generate fundamental new understanding of the processes governing CO{sub 2} trapping and vertical migration, and will provide valuable data to calibrate and validate large-scale model simulations.
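As a rough illustration of the pressure regimes described above, the hydrostatic head in a brine-filled 500 m column can be compared against the CO2 critical pressure. This is a sketch only: the brine density and vessel head pressures below are assumed values, not LUCI design parameters.

```python
# Assumed values for illustration only -- not LUCI design parameters.
RHO_BRINE = 1100.0    # brine density, kg/m^3 (assumed)
G = 9.81              # gravity, m/s^2
P_CRIT_CO2 = 7.38e6   # CO2 critical pressure, Pa

def pressure_at_depth(z, p_head):
    """Hydrostatic pressure (Pa) at depth z (m) below the column top."""
    return p_head + RHO_BRINE * G * z

# With a modest head pressure, the top of a 500 m column sits below the
# CO2 critical pressure (subcritical/gas conditions) while the bottom
# sits above it (supercritical), so one vessel spans both regimes.
p_top = pressure_at_depth(0.0, 3.0e6)
p_bottom = pressure_at_depth(500.0, 3.0e6)
```

Varying the head pressure shifts where in the column the supercritical-to-subcritical transition occurs, which is the kind of pressure/temperature span the abstract says the facility is designed to cover.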

  14. Macroscopic x-ray fluorescence capability for large-scale elemental mapping

    SciTech Connect (OSTI)

    Volz, Heather M; Havrilla, George J; Aikin, Jr., Robert M; Montoya, Velma M

    2010-01-01

A non-destructive method of determining segregation of constituent elements over large length scales is desired. Compositional information at moderate resolution over centimeters will be powerful not only for validating casting models but also for understanding large-scale phenomena during solidification. To this end, they have rebuilt their XRF capability in conjunction with IXRF Systems, Inc. (Houston, TX) to accommodate samples much larger than those that typically fit into an XRF instrument chamber (up to 70 cm x 70 cm x 25 cm). The system uses a rhodium tube with maximum settings of 35 kV and 100 {mu}A, the detector is a liquid-nitrogen-cooled lithium-drifted silicon detector, and the smallest spot size is approximately 0.4 mm. Reference standard specimens will enable quantitative elemental mapping and analysis. Challenges to modifying the equipment are described. Non-uniformities in the Inconel 718 system will be shown and discussed. As another example, segregation of niobium or molybdenum in depleted uranium (DU) castings has been known to occur based on wet chemical analysis (ICP-MS), but this destructive and time-consuming measurement is not practical for routine inspection of ingots. The U-Nb system is complicated by overlap of the Nb K-alpha line with the U L-beta line. Preliminary quantitative results are included on the distribution of Nb across slices from DU castings with different cooling rates. They expect this macro-XRF elemental mapping capability to prove invaluable to many in the materials processing industry.
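The Nb K-alpha / U L-beta overlap mentioned above is, at its core, a peak-unmixing problem. A minimal sketch, assuming Gaussian peaks with known centers and a fixed detector width; the energies and width below are approximate, illustrative assumptions, not instrument specifications:

```python
import numpy as np

# Approximate line energies (keV); with a detector width of this order
# the two peaks overlap. All three values are illustrative assumptions.
E_NB_KA, E_U_LB, SIGMA = 16.6, 17.2, 0.13

def gauss(E, center):
    """Unit-amplitude Gaussian peak at the given energy."""
    return np.exp(-(E - center) ** 2 / (2 * SIGMA ** 2))

def unmix(E, spectrum):
    """Linear least-squares split of the overlapped doublet into its
    two component amplitudes, holding centers and width fixed."""
    A = np.column_stack([gauss(E, E_NB_KA), gauss(E, E_U_LB)])
    coef, *_ = np.linalg.lstsq(A, spectrum, rcond=None)
    return coef
```

Because the amplitudes enter linearly once the peak shapes are fixed, ordinary least squares separates the doublet without a nonlinear fit; real XRF deconvolution additionally handles background, escape peaks, and varying resolution.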

  15. Large Scale Computing and Storage Requirements for Basic Energy Sciences Research

    SciTech Connect (OSTI)

    Gerber, Richard; Wasserman, Harvey

    2011-03-31

The National Energy Research Scientific Computing Center (NERSC) is the leading scientific computing facility supporting research within the Department of Energy's Office of Science. NERSC provides high-performance computing (HPC) resources to approximately 4,000 researchers working on about 400 projects. In addition to hosting large-scale computing facilities, NERSC provides the support and expertise scientists need to use HPC systems effectively and efficiently. In February 2010, NERSC, DOE's Office of Advanced Scientific Computing Research (ASCR), and DOE's Office of Basic Energy Sciences (BES) held a workshop to characterize HPC requirements for BES research through 2013. The workshop was part of NERSC's legacy of anticipating users' future needs and deploying the necessary resources to meet those demands. Workshop participants reached a consensus on several key findings, in addition to achieving the workshop's goal of collecting and characterizing computing requirements. The key requirements for scientists conducting research in BES are: (1) larger allocations of computational resources; (2) continued support for standard application software packages; (3) adequate job turnaround time and throughput; and (4) guidance and support for using future computer architectures. This report expands upon these key points and presents others. Several 'case studies' are included as significant representative samples of the needs of science teams within BES. These case studies summarize research teams' scientific goals, computational methods of solution, current and 2013 computing requirements, and special software and support needs. Also included are researchers' strategies for computing in the highly parallel, 'multi-core' environment that is expected to dominate HPC architectures over the next few years. NERSC has strategic plans and initiatives already underway that address key workshop findings. This report includes a brief summary of those relevant to issues

  16. Risk of large-scale fires in boreal forests of Finland under changing climate

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Lehtonen, I.; Venäläinen, A.; Kamarainen, M.; Peltola, H.; Gregow, H.

    2016-01-21

Here, the target of this work was to assess the impact of projected climate change on forest-fire activity in Finland, with special emphasis on large-scale fires. In addition, we were particularly interested in examining the inter-model variability of the projected change in fire danger. For this purpose, we utilized fire statistics covering the period 1996-2014 and consisting of almost 20,000 forest fires, as well as daily meteorological data from five global climate models under the representative concentration pathway RCP4.5 and RCP8.5 scenarios. The model data were statistically downscaled onto a high-resolution grid using the quantile-mapping method before performing the analysis. In examining the relationship between weather and fire danger, we applied the Canadian fire weather index (FWI) system. Our results suggest that the number of large forest fires may double or even triple during the present century. This would increase the risk that some of the fires could develop into real conflagrations, which have become almost extinct in Finland due to active and efficient fire suppression. However, the results reveal substantial inter-model variability in the rate of the projected increase of forest-fire danger, emphasizing the large uncertainty related to the climate change signal in fire activity. We moreover showed that the majority of large fires in Finland occur within a relatively short period in May and June due to human activities, and that FWI correlates more poorly with fire activity during this time of year than later in summer, when lightning is a more important cause of fires.
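The quantile-mapping downscaling step mentioned above can be sketched in its simplest empirical form: each model value is mapped through the observed distribution over a common historical period. This is a toy one-variable version under those assumptions, not the authors' implementation:

```python
import numpy as np

def quantile_map(model_hist, model_future, obs_hist, n_q=99):
    """Empirical quantile mapping: transfer model values to the observed
    distribution by matching quantiles over a shared historical period."""
    q = np.linspace(0.01, 0.99, n_q)
    model_q = np.quantile(model_hist, q)   # model climatology quantiles
    obs_q = np.quantile(obs_hist, q)       # observed climatology quantiles
    # Piecewise-linear transfer function from model space to obs space
    return np.interp(model_future, model_q, obs_q)
```

For example, a model whose historical climatology runs uniformly warm by 2 units has that bias removed from its future projections, since every model quantile is pulled onto the corresponding observed quantile.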

  17. Vadose zone transport field study: Detailed test plan for simulated leak tests

    SciTech Connect (OSTI)

    AL Ward; GW Gee

    2000-06-23

The objectives of this field study are to: identify mechanisms controlling transport processes in soils typical of the hydrogeologic conditions of Hanford's waste disposal sites; reduce uncertainty in conceptual models; develop a detailed and accurate database of hydraulic and transport parameters for validation of three-dimensional numerical models; and identify and evaluate advanced, cost-effective characterization methods with the potential to assess changing conditions in the vadose zone, particularly as surrogates of currently undetectable high-risk contaminants. This plan provides details for conducting field tests during FY 2000 to accomplish these objectives. Details of additional testing during FY 2001 and FY 2002 will be developed as part of the work planning process implemented by the Integration Project.

  18. RACORO continental boundary layer cloud investigations. Part I: Case study development and ensemble large-scale forcings

    SciTech Connect (OSTI)

    Vogelmann, Andrew M.; Fridlind, Ann M.; Toto, Tami; Endo, Satoshi; Lin, Wuyin; Wang, Jian; Feng, Sha; Zhang, Yunyan; Turner, David D.; Liu, Yangang; Li, Zhijin; Xie, Shaocheng; Ackerman, Andrew S.; Zhang, Minghua; Khairoutdinov, Marat

    2015-06-19

Observation-based modeling case studies of continental boundary layer clouds have been developed to study cloudy boundary layers, aerosol influences upon them, and their representation in cloud- and global-scale models. Three 60-hour case study periods span the temporal evolution of cumulus, stratiform, and drizzling boundary layer cloud systems, representing mixed and transitional states rather than idealized or canonical cases. Based on in-situ measurements from the RACORO field campaign and remote-sensing observations, the cases are designed with a modular configuration to simplify use in large-eddy simulations (LES) and single-column models. Aircraft measurements of aerosol number size distribution are fit to lognormal functions for concise representation in models. Values of the aerosol hygroscopicity parameter, κ, are derived from observations to be ~0.10, which are lower than the 0.3 typical over continents and suggestive of a large aerosol organic fraction. Ensemble large-scale forcing datasets are derived from the ARM variational analysis, ECMWF forecasts, and a multi-scale data assimilation system. The forcings are assessed through comparison of measured bulk atmospheric and cloud properties to those computed in 'trial' large-eddy simulations, where more efficient run times are enabled through modest reductions in grid resolution and domain size compared to the full-sized LES grid. Simulations capture many of the general features observed, but the state-of-the-art forcings were limited at representing details of cloud onset, and tight gradients and high-resolution transients of importance. Methods for improving the initial conditions and forcings are discussed. The cases developed are available to the general modeling community for studying continental boundary layer clouds.
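The lognormal representation of the aerosol number size distribution can be illustrated with a simple moment-based fit. The function names and the moment-matching approach below are illustrative assumptions, not the RACORO processing code:

```python
import numpy as np

def lognormal_dNdlnD(D, N_tot, D_g, sigma_g):
    """Single lognormal mode of an aerosol number size distribution,
    expressed as dN/dlnD for diameters D, total number N_tot, geometric
    mean diameter D_g, and geometric standard deviation sigma_g."""
    s = np.log(sigma_g)
    return (N_tot / (np.sqrt(2.0 * np.pi) * s)
            * np.exp(-(np.log(D / D_g)) ** 2 / (2.0 * s ** 2)))

def fit_moments(D, dNdlnD):
    """Estimate (N_tot, D_g, sigma_g) from a binned distribution using
    number-weighted geometric moments."""
    dlnD = np.gradient(np.log(D))
    w = dNdlnD * dlnD                        # particle number per bin
    N_tot = w.sum()
    mu = np.sum(w * np.log(D)) / N_tot       # mean of ln D
    var = np.sum(w * (np.log(D) - mu) ** 2) / N_tot
    return N_tot, np.exp(mu), np.exp(np.sqrt(var))
```

The appeal of the lognormal form for model input is exactly this compactness: three numbers per mode reproduce the whole measured distribution.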

  20. RACORO continental boundary layer cloud investigations. Part I: Case study development and ensemble large-scale forcings

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Vogelmann, Andrew M.; Fridlind, Ann M.; Toto, Tami; Endo, Satoshi; Lin, Wuyin; Wang, Jian; Feng, Sha; Zhang, Yunyan; Turner, David D.; Liu, Yangang; et al

    2015-06-19

Observation-based modeling case studies of continental boundary layer clouds have been developed to study cloudy boundary layers, aerosol influences upon them, and their representation in cloud- and global-scale models. Three 60-hour case study periods span the temporal evolution of cumulus, stratiform, and drizzling boundary layer cloud systems, representing mixed and transitional states rather than idealized or canonical cases. Based on in-situ measurements from the RACORO field campaign and remote-sensing observations, the cases are designed with a modular configuration to simplify use in large-eddy simulations (LES) and single-column models. Aircraft measurements of aerosol number size distribution are fit to lognormal functions for concise representation in models. Values of the aerosol hygroscopicity parameter, κ, are derived from observations to be ~0.10, which are lower than the 0.3 typical over continents and suggestive of a large aerosol organic fraction. Ensemble large-scale forcing datasets are derived from the ARM variational analysis, ECMWF forecasts, and a multi-scale data assimilation system. The forcings are assessed through comparison of measured bulk atmospheric and cloud properties to those computed in 'trial' large-eddy simulations, where more efficient run times are enabled through modest reductions in grid resolution and domain size compared to the full-sized LES grid. Simulations capture many of the general features observed, but the state-of-the-art forcings were limited at representing details of cloud onset, and tight gradients and high-resolution transients of importance. Methods for improving the initial conditions and forcings are discussed. The cases developed are available to the general modeling community for studying continental boundary layer clouds.

  1. Smart Infrared Inspection System Field Operational Test Final Report

    SciTech Connect (OSTI)

    Siekmann, Adam; Capps, Gary J; Franzese, Oscar; Lascurain, Mary Beth

    2011-06-01

The Smart InfraRed Inspection System (SIRIS) is a tool designed to assist inspectors in determining which vehicles passing through the system need further inspection, by measuring thermal data from the wheel components. As a vehicle enters the system, infrared cameras at road level measure the temperatures of the brakes, tires, and wheel bearings on both wheel ends of commercial motor vehicles (CMVs) in motion. This thermal data is then presented to enforcement personnel inside the inspection station on a user-friendly interface, and vehicles suspected of having a violation are automatically flagged for the enforcement staff. The main goal of the SIRIS field operational test (FOT) was to collect data to evaluate the performance of the prototype system and determine the viability of such a system for commercial motor vehicle enforcement. From March 2010 to September 2010, ORNL facilitated the SIRIS FOT at the Greene County Inspection Station (IS) in Greeneville, Tennessee. During the course of the FOT, 413 CMVs were given a North American Standard (NAS) Level-1 inspection. Of those 413 CMVs, 384 were subjected to a SIRIS screening. A total of 36 (9.38%) of the vehicles were flagged by SIRIS as having one or more thermal issues, with brake issues accounting for 33 (91.67%) of those. Of the 36 vehicles flagged as having thermal issues, 31 (86.11%) were found to have a violation, and 30 (83.33%) of those vehicles were placed out-of-service (OOS). Overall, the enforcement personnel who used SIRIS for screening had positive feedback on its potential. With improvements in detection algorithms and stability, the system will benefit the CMV enforcement community and increase overall trooper productivity by accurately identifying a higher percentage of CMVs to be placed OOS with minimal error. No further evaluation of SIRIS has been deemed necessary, and specifications for a production system will soon be drafted.
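The screening percentages quoted in the abstract are internally consistent, as a quick arithmetic check shows:

```python
# Recomputing the SIRIS FOT screening statistics quoted above
# (a consistency check, not part of the SIRIS software).
screened = 384        # CMVs subjected to SIRIS screening
flagged = 36          # flagged with one or more thermal issues
brake_flags = 33      # of the flags, those involving brakes
with_violation = 31   # flagged vehicles found to have a violation
placed_oos = 30       # flagged vehicles placed out-of-service

assert round(100 * flagged / screened, 2) == 9.38
assert round(100 * brake_flags / flagged, 2) == 91.67
assert round(100 * with_violation / flagged, 2) == 86.11
assert round(100 * placed_oos / flagged, 2) == 83.33
```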

  2. Neutrino Physics from the Cosmic Microwave Background and Large Scale Structure

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Abazajian, K. N.; Arnold, K.; Austermann, J.; Benson, B. A.; Bischoff, C.; Bock, J.; Bond, J. R.; Borrill, J.; Calabrese, E.; Carlstrom, J. E.; et al

    2014-03-15

This is a report on the status and prospects of the quantification of neutrino properties through the cosmological neutrino background for the Cosmic Frontier of the Division of Particles and Fields Community Summer Study long-term planning exercise. Experiments planned and underway are prepared to study the cosmological neutrino background in detail via its influence on distance-redshift relations and the growth of structure. The program for the next decade described in this document, including upcoming spectroscopic galaxy surveys eBOSS and DESI and a new Stage-IV CMB polarization experiment CMB-S4, will achieve σ(Σmν) = 16 meV and σ(Neff) = 0.020. Such a mass measurement will produce a high-significance detection of non-zero Σmν, whose lower bound derived from atmospheric and solar neutrino oscillation data is about 58 meV. If neutrinos have a minimal normal mass hierarchy, this measurement will definitively rule out the inverted neutrino mass hierarchy, shedding light on one of the most puzzling aspects of the Standard Model of particle physics — the origin of mass. Such a precise measurement of Neff will allow for high sensitivity to any light and dark degrees of freedom produced in the big bang and a precision test of the standard cosmological model prediction that Neff = 3.046.
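The quoted numbers directly imply the minimum significance of the forecast mass detection: the oscillation lower bound on the summed mass divided by the forecast uncertainty.

```python
# Oscillation lower bound on the summed neutrino mass vs. the forecast
# measurement uncertainty quoted above.
sum_mnu_min_mev = 58.0   # meV, lower bound (normal hierarchy)
sigma_mev = 16.0         # meV, forecast sigma on the summed mass
n_sigma = sum_mnu_min_mev / sigma_mev
print(f"minimum detection significance: {n_sigma:.1f} sigma")
# prints: minimum detection significance: 3.6 sigma
```

This is why the report can promise a "high significance detection" even in the worst case: if the summed mass takes its smallest allowed value, it is still measured at about 3.6 sigma, and any larger mass is detected more strongly.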

  3. Neutrino physics from the cosmic microwave background and large scale structure

    SciTech Connect (OSTI)

    Abazajian, K. N.; Arnold, K.; Austermann, J. E.; Benson, B. A.; Bischoff, C.; Brock, J.; Bond, J. R.; Borrill, J.; Calabrese, E.; Carlstrom, J. E.; Chang, C. L.

    2015-03-15

This is a report on the status and prospects of the quantification of neutrino properties through the cosmological neutrino background for the Cosmic Frontier of the Division of Particles and Fields Community Summer Study long-term planning exercise. Experiments planned and underway are prepared to study the cosmological neutrino background in detail via its influence on distance-redshift relations and the growth of structure. The program for the next decade described in this document, including upcoming spectroscopic galaxy surveys eBOSS and DESI and a new Stage-IV CMB polarization experiment CMB-S4, will achieve σ(Σmν) = 16 meV and σ(Neff) = 0.020. Such a mass measurement will produce a high-significance detection of non-zero Σmν, whose lower bound derived from atmospheric and solar neutrino oscillation data is about 58 meV. If neutrinos have a minimal normal mass hierarchy, this measurement will definitively rule out the inverted neutrino mass hierarchy, shedding light on one of the most puzzling aspects of the Standard Model of particle physics — the origin of mass. Such a precise measurement of Neff will allow for high sensitivity to any light and dark degrees of freedom produced in the big bang and a precision test of the standard cosmological model prediction that Neff = 3.046.

  4. Neutrino Physics from the Cosmic Microwave Background and Large Scale Structure

    SciTech Connect (OSTI)

    Abazajian, K. N.; Arnold, K.; Austermann, J.; Benson, B. A.; Bischoff, C.; Bock, J.; Bond, J. R.; Borrill, J.; Calabrese, E.; Carlstrom, J. E.; Carvalho, C. S.; Chang, C. L.; Chiang, H. C.; Church, S.; Cooray, A.; Crawford, T. M.; Dawson, K. S.; Das, S.; Devlin, M. J.; Dobbs, M.; Dodelson, S.; Dore, O.; Dunkley, J.; Errard, J.; Fraisse, A.; Gallicchio, J.; Halverson, N. W.; Hanany, S.; Hildebrandt, S. R.; Hincks, A.; Hlozek, R.; Holder, G.; Holzapfel, W. L.; Honscheid, K.; Hu, W.; Hubmayr, J.; Irwin, K.; Jones, W. C.; Kamionkowski, M.; Keating, B.; Keisler, R.; Knox, L.; Komatsu, E.; Kovac, J.; Kuo, C. -L.; Lawrence, C.; Lee, A. T.; Leitch, E.; Linder, E.; Lubin, P.; McMahon, J.; Miller, A.; Newburgh, L.; Niemack, M. D.; Nguyen, H.; Nguyen, H. T.; Page, L.; Pryke, C.; Reichardt, C. L.; Ruhl, J. E.; Sehgal, N.; Seljak, U.; Sievers, J.; Silverstein, E.; Slosar, A.; Smith, K. M.; Spergel, D.; Staggs, S. T.; Stark, A.; Stompor, R.; Wang, G.; Watson, S.; Wollack, E. J.; Wu, W. L.K.; Yoon, K. W.; Zahn, O.

    2014-03-15

This is a report on the status and prospects of the quantification of neutrino properties through the cosmological neutrino background for the Cosmic Frontier of the Division of Particles and Fields Community Summer Study long-term planning exercise. Experiments planned and underway are prepared to study the cosmological neutrino background in detail via its influence on distance-redshift relations and the growth of structure. The program for the next decade described in this document, including upcoming spectroscopic galaxy surveys eBOSS and DESI and a new Stage-IV CMB polarization experiment CMB-S4, will achieve σ(Σmν) = 16 meV and σ(Neff) = 0.020. Such a mass measurement will produce a high-significance detection of non-zero Σmν, whose lower bound derived from atmospheric and solar neutrino oscillation data is about 58 meV. If neutrinos have a minimal normal mass hierarchy, this measurement will definitively rule out the inverted neutrino mass hierarchy, shedding light on one of the most puzzling aspects of the Standard Model of particle physics: the origin of mass. Such a precise measurement of Neff will allow for high sensitivity to any light and dark degrees of freedom produced in the big bang and a precision test of the standard cosmological model prediction that Neff = 3.046.

  5. Large-Scale Transport Model Uncertainty and Sensitivity Analysis: Distributed Sources in Complex Hydrogeologic Systems

    SciTech Connect (OSTI)

Sig Drellack; Lance Prothro

    2007-12-01

The Underground Test Area (UGTA) Project of the U.S. Department of Energy, National Nuclear Security Administration Nevada Site Office is in the process of assessing and developing regulatory decision options based on modeling predictions of contaminant transport from underground testing of nuclear weapons at the Nevada Test Site (NTS). The UGTA Project is attempting to develop an effective modeling strategy that addresses and quantifies multiple components of uncertainty, including natural variability, parameter uncertainty, conceptual/model uncertainty, and decision uncertainty in translating model results into regulatory requirements. The modeling task presents multiple unique challenges to the hydrological sciences as a result of the complex fractured and faulted hydrostratigraphy, the distributed locations of sources, the suite of reactive and non-reactive radionuclides, and uncertainty in conceptual models. Characterization of the hydrogeologic system is difficult and expensive because of deep groundwater in the arid desert setting and the large spatial setting of the NTS. Therefore, conceptual model uncertainty is partially addressed through the development of multiple alternative conceptual models of the hydrostratigraphic framework and multiple alternative models of recharge and discharge. Uncertainty in boundary conditions is assessed through development of alternative groundwater fluxes through multiple simulations using the regional groundwater flow model. Calibration of alternative models to heads and measured or inferred fluxes has not proven to provide clear measures of model quality. Therefore, model screening by comparison to independently derived natural geochemical mixing targets through cluster analysis has also been invoked to evaluate differences between alternative conceptual models. With multiple alternative flow models advanced, the sensitivity of transport predictions to parameter uncertainty is assessed through Monte Carlo simulations.
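The Monte Carlo treatment of parameter uncertainty described above can be sketched with a deliberately simplified one-dimensional travel-time model. Everything here is a hypothetical illustration of the technique: the model, parameter names, and distributions are not calibrated NTS/UGTA values.

```python
import math
import random
import statistics

def travel_time(porosity, gradient, conductivity, path_length=1000.0):
    """Toy 1-D advective travel time (years) along a flow path. The real
    UGTA models are 3-D simulations; this only illustrates propagating
    parameter uncertainty by Monte Carlo sampling."""
    seepage_velocity = conductivity * gradient / porosity   # m/yr
    return path_length / seepage_velocity

random.seed(42)
times = []
for _ in range(10_000):
    # Hypothetical parameter distributions, chosen for illustration
    phi = random.uniform(0.05, 0.30)                 # porosity
    grad = random.uniform(1e-3, 1e-2)                # hydraulic gradient
    K = random.lognormvariate(math.log(50.0), 1.0)   # conductivity, m/yr
    times.append(travel_time(phi, grad, K))

# The spread of outcomes summarizes sensitivity to parameter uncertainty
q = statistics.quantiles(times, n=20)
p5, p50, p95 = q[0], q[9], q[18]
```

The wide gap between the 5th and 95th percentile travel times is exactly the kind of output that uncertainty analyses carry forward into regulatory decision making.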

  6. Analysis of long-term flows resulting from large-scale sodium-water reactions in an LMFBR secondary system

    SciTech Connect (OSTI)

    Shin, Y.W.; Chung, H.; Choi, U.S.; Wiedermann, A.H.; Ockert, C.E.

    1984-07-01

    Leaks in LMFBR steam generators cannot entirely be prevented; thus the steam generators and the intermediate heat transport system (IHTS) of an LMFBR must be designed to withstand the effects of the leaks. A large-scale leak which might result from a sudden break of a steam generator tube, and the resulting sodium-water reaction (SWR) can generate large pressure pulses that propagate through the IHTS and exert large forces on the piping supports. This paper discusses computer programs for analyzing long-term flow and thermal effects in an LMFBR secondary system resulting from large-scale steam generator leaks, and the status of the development of the codes.

  7. Modeling of a Parabolic Trough Solar Field for Acceptance Testing...

    Office of Scientific and Technical Information (OSTI)

    ... Country of Publication: United States Language: English Subject: 14 SOLAR ENERGY; 30 ... LABORATORY; PERFORMANCE; PERFORMANCE TESTING; RECOMMENDATIONS; SIMULATION; ...

  8. Field Testing of Pre-Production Prototype Residential Heat Pump Water

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

Heaters | Department of Energy Field Testing of Pre-Production Prototype Residential Heat Pump Water Heaters Field Testing of Pre-Production Prototype Residential Heat Pump Water Heaters Provides an overview of field testing of 18 pre-production prototype residential heat pump water heaters heat_pump_water_heater_testing.pdf (565.45 KB) More Documents & Publications Building America Technology Solutions for New and Existing Homes: Performance of a Heat Pump Water Heater in the Hot-Humid

  9. Techno-economic Modeling of the Integration of 20% Wind and Large-scale Energy Storage in ERCOT by 2030

    SciTech Connect (OSTI)

    Baldick, Ross; Webber, Michael; King, Carey; Garrison, Jared; Cohen, Stuart; Lee, Duehee

    2012-12-21

This study's objective is to examine interrelated technical and economic avenues for the Electric Reliability Council of Texas (ERCOT) grid to incorporate up to and over 20% wind generation by 2030. Our specific interest is in the factors that will affect the implementation of both a high level of wind power penetration (>20% of generation) and the installation of large-scale storage.

  10. Reducing Plug and Process Loads for a Large Scale, Low Energy Office Building: NREL's Research Support Facility; Preprint

    SciTech Connect (OSTI)

    Lobato, C.; Pless, S.; Sheppy, M.; Torcellini, P.

    2011-02-01

This paper documents the design and operational plug and process load energy efficiency measures needed to allow a large-scale office building to reach ultra-high-efficiency building goals. The appendices of this document contain a wealth of documentation pertaining to plug and process load design in the RSF, including a list of the equipment selected for use.

  11. Hierarchical chlorine-doped rutile TiO{sub 2} spherical clusters of nanorods: Large-scale synthesis and high photocatalytic activity

    SciTech Connect (OSTI)

Xu Hua; Zheng Zhi; Zhang Lizhi; Zhang Hailu; Deng Feng

    2008-09-15

In this study, we report the large-scale synthesis of a hierarchical chlorine-doped rutile TiO{sub 2} spherical-clusters-of-nanorods photocatalyst via a soft interface approach. This catalyst showed much higher photocatalytic activity than the well-known commercial titania (Degussa P25) under visible light ({lambda}>420 nm). The resulting sample was characterized by X-ray diffraction (XRD), scanning electron microscopy (SEM), transmission electron microscopy (TEM), high-resolution TEM (HRTEM), nitrogen adsorption, X-ray photoelectron spectroscopy (XPS), UV-vis diffuse reflectance spectroscopy, {sup 1}H solid magic-angle spinning nuclear magnetic resonance (MAS-NMR), and photoluminescence spectroscopy. On the basis of the characterization results, we found that the doping of chlorine resulted in a red shift of absorption, higher surface acidity, and crystal defects in the photocatalyst, which account for its high photocatalytic activity under visible light ({lambda}>420 nm). These hierarchical chlorine-doped rutile TiO{sub 2} spherical clusters of nanorods are very attractive in the fields of environmental pollutant removal and solar cells because of their easy separation and high activity. - Graphical abstract: Hierarchical chlorine-doped rutile TiO{sub 2} spherical-clusters-of-nanorods photocatalyst was synthesized on a large scale via a soft interface approach. This catalyst showed much higher photocatalytic activity than the well-known commercial titania (Degussa P25) under visible light ({lambda}>420 nm).

  12. Intercomparison of methods of coupling between convection and large-scale circulation: 2. Comparison over nonuniform surface conditions

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Daleu, C. L.; Plant, R. S.; Woolnough, S. J.; Sessions, S.; Herman, M. J.; Sobel, A.; Wang, S.; Kim, D.; Cheng, A.; Bellon, G.; et al

    2016-03-18

As part of an international intercomparison project, the weak temperature gradient (WTG) and damped gravity wave (DGW) methods are used to parameterize large-scale dynamics in a set of cloud-resolving models (CRMs) and single column models (SCMs). The WTG or DGW method is implemented using a configuration that couples a model to a reference state defined with profiles obtained from the same model in radiative-convective equilibrium. We investigated the sensitivity of each model to changes in SST, given a fixed reference state. We performed a systematic comparison of the WTG and DGW methods in different models, and a systematic comparison of the behavior of those models using the WTG method and the DGW method. The sensitivity to the SST depends on both the large-scale parameterization method and the choice of the cloud model. In general, SCMs display a wider range of behaviors than CRMs. All CRMs using either the WTG or DGW method show an increase of precipitation with SST, while SCMs show sensitivities which are not always monotonic. CRMs using either the WTG or DGW method show a similar relationship between mean precipitation rate and column-relative humidity, while SCMs exhibit a much wider range of behaviors. DGW simulations produce large-scale velocity profiles which are smoother and less top-heavy compared to those produced by the WTG simulations. Lastly, these large-scale parameterization methods provide a useful tool to identify the impact of parameterization differences on model behavior in the presence of two-way feedback between convection and the large-scale circulation.
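A minimal sketch of the WTG closure used in the intercomparison: diagnose the large-scale vertical velocity that relaxes a column's potential temperature toward the reference profile. The simple relaxation form and the timescale below are common illustrative choices, not any particular model's implementation.

```python
def wtg_vertical_velocity(theta, theta_ref, dtheta_dz, tau=3.0 * 3600.0):
    """Diagnose the large-scale vertical velocity (m/s) at one level:
    adiabatic ascent/descent removes the potential temperature anomaly
    (theta - theta_ref, in K) over timescale tau (s), given the local
    stratification dtheta_dz (K/m)."""
    return (theta - theta_ref) / (tau * dtheta_dz)

# A +1 K warm anomaly with 5 K/km stratification and a 3 h timescale
# implies gentle large-scale ascent of roughly 2 cm/s.
w = wtg_vertical_velocity(301.0, 300.0, 5.0e-3)
```

The diagnosed w then advects temperature and moisture in the column model, which is the two-way convection/large-scale feedback the abstract refers to; the DGW method replaces this relaxation with a damped gravity-wave momentum equation.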

  13. The power of event-driven analytics in Large Scale Data Processing

    SciTech Connect (OSTI)

    2011-02-24

    FeedZai is a software company specialized in creating high-throughput, low-latency data processing solutions. FeedZai develops a product called "FeedZai Pulse" for continuous event-driven analytics that makes application development easier for end users. It automatically calculates key performance indicators and baselines, showing how current performance differs from previous history and creating timely business intelligence updated to the second. The tool performs predictive analytics and trend analysis, displaying data on real-time web-based graphics. In 2010 FeedZai won the European EBN Smart Entrepreneurship Competition in the Digital Models category, being named one of the "top-20 smart companies in Europe". The main objective of this seminar/workshop is to explore the topic of large-scale data processing using Complex Event Processing and, in particular, the possible uses of Pulse for the data processing needs of CERN. Pulse is available as open source and can be licensed for both non-commercial and commercial applications. FeedZai is interested in exploring possible synergies with CERN in high-volume, low-latency data processing applications. The seminar is structured in two sessions, the first aimed at presenting the general scope of FeedZai's activities and the second focused on Pulse itself: 10:00-11:00 FeedZai and Large Scale Data Processing - Introduction to FeedZai; FeedZai Pulse and Complex Event Processing; Demonstration; Use Cases and Applications; Conclusion and Q&A. 11:00-11:15 Coffee break. 11:15-12:30 FeedZai Pulse Under the Hood - A First FeedZai Pulse Application; PulseQL Overview; Defining KPIs and Baselines; Conclusion and Q&A. About the speakers: Nuno Sebastião is the CEO of FeedZai. Having worked for many years for the European Space Agency (ESA), he was responsible for the overall design and development of the agency's Satellite Simulation Infrastructure. Having left ESA to found FeedZai, Nuno is

  14. Large-scale clustering of Lymanα emission intensity from SDSS/BOSS

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Croft, Rupert A. C.; Miralda-Escudé, Jordi; Zheng, Zheng; Bolton, Adam; Dawson, Kyle S.; Peterson, Jeffrey B.; York, Donald G.; Eisenstein, Daniel; Brinkmann, Jon; Brownstein, Joel; et al

    2016-01-27

    Here we present a tentative detection of the large-scale structure of Lyα emission in the Universe at redshifts z = 2–3.5 by measuring the cross-correlation of Lyα surface brightness with quasars in the Sloan Digital Sky Survey/Baryon Oscillation Spectroscopic Survey. We use a million spectra targeting luminous red galaxies at z < 0.8, after subtracting a best-fitting model galaxy spectrum from each one, as an estimate of the high-redshift Lyα surface brightness. The quasar–Lyα emission cross-correlation is detected on scales of 1–15 h{sup -1} Mpc, with a shape consistent with a ΛCDM model with Ω{sub m} = 0.30{sup +0.10}{sub -0.07}. The predicted amplitude of this cross-correlation is proportional to the product of the mean Lyα surface brightness, ⟨μ{sub α}⟩, the amplitude of mass density fluctuations, and the quasar and Lyα emission bias factors. Using published cosmological observations to constrain the amplitude of mass fluctuations and the quasar bias factor, we infer the value of the product ⟨μ{sub α}⟩(b{sub α}/3) = (3.9±0.9)×10{sup -21} erg s{sup -1} cm{sup -2} Å{sup -1} arcsec{sup -2}, where b{sub α} is the Lyα emission linear bias factor. If the dominant sources of the Lyα emission we measure are star-forming galaxies, we infer a total mean star formation rate density of ρ{sub SFR} = (0.28 ± 0.07)(3/b{sub α}) yr{sup -1} Mpc{sup -3} at z = 2–3.5. For b{sub α} = 3, this value is a factor of 21–35 above previous estimates relying on individually detected Lyα emitters, although it is consistent with the total star formation density derived from dust-corrected, continuum UV surveys. Our observations therefore imply that 97% of the Lyα emission in the Universe at these redshifts is undetected in previous surveys of Lyα emitters. Our detected Lyα emission is also much greater, by at least an order of magnitude, than that measured from stacking analyses of faint halos surrounding previously detected Lyα emitters, but we speculate that it arises from similar low surface brightness Lyα halos surrounding all

  15. The power of event-driven analytics in Large Scale Data Processing

    ScienceCinema (OSTI)

    None

    2011-04-25

    FeedZai is a software company specialized in creating high-throughput, low-latency data processing solutions. FeedZai develops a product called "FeedZai Pulse" for continuous event-driven analytics that makes application development easier for end users. It automatically calculates key performance indicators and baselines, showing how current performance differs from previous history and creating timely business intelligence updated to the second. The tool performs predictive analytics and trend analysis, displaying data on real-time web-based graphics. In 2010 FeedZai won the European EBN Smart Entrepreneurship Competition in the Digital Models category, being named one of the "top-20 smart companies in Europe". The main objective of this seminar/workshop is to explore the topic of large-scale data processing using Complex Event Processing and, in particular, the possible uses of Pulse for the data processing needs of CERN. Pulse is available as open source and can be licensed for both non-commercial and commercial applications. FeedZai is interested in exploring possible synergies with CERN in high-volume, low-latency data processing applications. The seminar is structured in two sessions, the first aimed at presenting the general scope of FeedZai's activities and the second focused on Pulse itself: 10:00-11:00 FeedZai and Large Scale Data Processing - Introduction to FeedZai; FeedZai Pulse and Complex Event Processing; Demonstration; Use Cases and Applications; Conclusion and Q&A. 11:00-11:15 Coffee break. 11:15-12:30 FeedZai Pulse Under the Hood - A First FeedZai Pulse Application; PulseQL Overview; Defining KPIs and Baselines; Conclusion and Q&A. About the speakers: Nuno Sebastião is the CEO of FeedZai. Having worked for many years for the European Space Agency (ESA), he was responsible for the overall design and development of the agency's Satellite Simulation Infrastructure. Having left ESA to found FeedZai, Nuno is

  16. SMART Wind Turbine Rotor: Design and Field Test

    SciTech Connect (OSTI)

    Berg, Jonathan C.; Resor, Brian R.; Paquette, Joshua A.; White, Jonathan R.

    2014-01-29

    This report documents the design, fabrication, and testing of the SMART Rotor. This work established hypothetical approaches for integrating active aerodynamic devices (AADs) into the wind turbine structure and controllers.

  17. Field Testing Research at the NWTC (Fact Sheet), NREL (National...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    performance to IEC 61400-12-1 and MEASNET * Mechanical loads to IEC 61400-13 * Power quality to IEC 61400-21 and MEASNET * Duration testing to IEC 61400-2 * Safety and function ...

  18. Building America Technology Solutions Case Study: Field Testing...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    test roofs using air-permeable insulation (dense-pack cellulose and fiberglass) in a cold climate (Chicago, Illinois area, zone 5A) and to analyze the moisture effects over time. ...

  19. Large scale two-dimensional arrays of magnesium diboride superconducting quantum interference devices

    SciTech Connect (OSTI)

    Cybart, Shane A. Dynes, R. C.; Wong, T. J.; Cho, E. Y.; Beeman, J. W.; Yung, C. S.; Moeckly, B. H.

    2014-05-05

    Magnetic field sensors based on two-dimensional arrays of superconducting quantum interference devices were constructed from magnesium diboride thin films. Each array contained over 30,000 Josephson junctions fabricated by ion damage of 30 nm weak links through an implant mask defined by nano-lithography. Current-biased devices exhibited very large voltage modulation as a function of magnetic field, with amplitudes as high as 8 mV.

  20. Text-Alternative Version of Building America Webinar: Field Test Best

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Practices, BEopt, and the National Residential Efficiency Measures Database | Department of Energy. Building America Research Tools: Field Test Best Practices, BEopt, and the National Residential Efficiency Measures Database. March 18, 2015. Lieko Earle, Senior

  1. Upcoming Funding Opportunity to Develop and Field Test Wind Energy Bat

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Impact Minimization Technologies | Department of Energy. October 6, 2014 - 1:33pm. On October 6, EERE's Wind Program announced a Notice of Intent to issue a funding opportunity entitled "Wind Energy Bat Impact Minimization Technologies and Field Testing Opportunities." This funding would help address

  2. Design and Installation of a Disposal Cell Cover Field Test

    SciTech Connect (OSTI)

    Benson, C.H.; Waugh, W.J.; Albright, W.H.; Smith, G.M.; Bush, R.P.

    2011-02-27

    The U.S. Department of Energy’s Office of Legacy Management (LM) initiated a cover assessment project in September 2007 to evaluate an inexpensive approach to enhancing the hydrological performance of final covers for disposal cells. The objective is to accelerate and enhance the natural processes that are transforming existing conventional covers, which rely on low-conductivity earthen barriers, into water balance covers that store water in the soil and release it through soil evaporation and plant transpiration. A low-conductivity cover could be modified by deliberately blending the upper layers of the cover profile and planting native shrubs. A test facility was constructed at the Grand Junction, Colorado, Disposal Site to evaluate the proposed methodology. The test cover was constructed in two identical sections, each including a large drainage lysimeter. The test cover was constructed with the same design and materials as the existing disposal cell in order to allow a direct comparison of performance. One test section will be renovated using the proposed method; the other is a control. LM is using the lysimeters to evaluate the effectiveness of the renovation treatment by monitoring hydrologic conditions within the cover profile as well as all water entering and leaving the system. This paper describes the historical experience of final covers employing earthen barrier layers, the design and operation of the lysimeter test facility, testing conducted to characterize the as-built engineering and edaphic properties of the lysimeter soils, the calibration of instruments installed at the test facility, and monitoring data collected since the lysimeters were constructed.

  3. Field Testing of Compartmentalization Methods for Multifamily Construction

    SciTech Connect (OSTI)

    Ueno, K.; Lstiburek, J.

    2015-03-01

    The 2012 IECC has an airtightness requirement of 3 air changes per hour at 50 Pascals test pressure for both single-family and multifamily construction in Climate Zones 3-8. Other programs (LEED, ASHRAE 189, ASHRAE 62.2) have similar or tighter compartmentalization requirements, driving the need for easier and more effective methods of compartmentalization in multifamily buildings. Builders and practitioners have found that fire-resistance rated wall assemblies are a major source of difficulty in air sealing/compartmentalization, particularly in townhouse construction. This problem is exacerbated when garages are “tucked in” to the units and living space is located over the garages. In this project, Building Science Corporation examined the taping of exterior sheathing details to improve air sealing results in townhouse and multifamily construction, when coupled with a better understanding of air leakage pathways. Current approaches are cumbersome, expensive, time consuming, and ineffective; these details were proposed as a more effective and efficient method. The effectiveness of these air sealing methods was tested with blower door testing, including “nulled” or “guarded” testing (adjacent units run at equal test pressure to null out inter-unit air leakage, or “pressure neutralization”). Pressure diagnostics were used to evaluate unit-to-unit connections and series leakage pathways (i.e., air leakage from exterior, into the fire-resistance rated wall assembly, and to the interior).
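    The 3 ACH50 threshold cited above ties blower-door flow to building volume: ACH50 = CFM50 × 60 / volume. As a hedged illustration of that arithmetic (the function name and sample numbers are mine, not from the report):

    ```python
    def ach50(cfm50: float, volume_ft3: float) -> float:
        """Air changes per hour at 50 Pa test pressure.

        cfm50      -- blower-door flow at 50 Pa, in cubic feet per minute
        volume_ft3 -- conditioned volume of the unit, in cubic feet
        """
        # Convert ft^3/min to ft^3/h, then normalize by the unit's volume.
        return cfm50 * 60.0 / volume_ft3

    # Illustrative example: a 12,000 ft^3 townhouse unit leaking 600 CFM50
    # sits exactly at the 2012 IECC limit of 3 ACH50.
    print(ach50(600.0, 12000.0))
    ```

    A guarded (nulled) test changes what CFM50 measures, not this conversion: with adjacent units held at equal pressure, inter-unit leakage is excluded from the flow reading.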

  4. Rooftop unit embedded diagnostics: Automated fault detection and diagnostics (AFDD) development, field testing and validation

    SciTech Connect (OSTI)

    Katipamula, Srinivas; Kim, Woohyun; Lutes, Robert G.; Underhill, Ronald M.

    2015-09-30

    This report documents the development, testing and field validation of the integrated AFDD and advanced rooftop unit (RTU) controls using a single controller in buildings.

  5. Field Test of a DHW Distribution System: Temperature and Flow Analyses (Presentation)

    SciTech Connect (OSTI)

    Barley, C. D.; Hendron, B.; Magnusson, L.

    2010-05-13

    This presentation discusses a field test of a DHW distribution system in an occupied townhome. It includes measured fixture flows and temperatures, a tested recirculation system, evaluated disaggregation of flow by measured temperatures, Aquacraft Trace Wizard analysis, and comparison.

  6. A Test Of The Transiel Method On The Travale Geothermal Field...

    Open Energy Info (EERE)

    Test Of The Transiel Method On The Travale Geothermal Field. OpenEI Reference Library. Journal Article: A Test Of The Transiel Method On The...

  7. DOE Awards $126.6 Million for Two More Large-Scale Carbon Sequestratio...

    Office of Environmental Management (EM)

    May 6, 2008 - 11:30am. Projects in California and Ohio Join Four Others in Effort ... will conduct large volume tests in California and Ohio to demonstrate the ability of ...

  8. Large-Scale Mercury Control Technology Testing for Lignite-Fired Utilities - Oxidation Systems for Wet FGD

    SciTech Connect (OSTI)

    Steven A. Benson; Michael J. Holmes; Donald P. McCollor; Jill M. Mackenzie; Charlene R. Crocker; Lingbu Kong; Kevin C. Galbreath

    2007-03-31

    Mercury (Hg) control technologies were evaluated at Minnkota Power Cooperative's Milton R. Young (MRY) Station Unit 2, a 450-MW lignite-fired cyclone unit near Center, North Dakota, and TXU Energy's Monticello Steam Electric Station (MoSES) Unit 3, a 793-MW lignite--Powder River Basin (PRB) subbituminous coal-fired unit near Mt. Pleasant, Texas. A cold-side electrostatic precipitator (ESP) and wet flue gas desulfurization (FGD) scrubber are used at MRY and MoSES for controlling particulate and sulfur dioxide (SO{sub 2}) emissions, respectively. Several approaches for significantly and cost-effectively oxidizing elemental mercury (Hg{sup 0}) in lignite combustion flue gases, followed by capture in an ESP and/or FGD scrubber were evaluated. The project team involved in performing the technical aspects of the project included Babcock & Wilcox, the Energy & Environmental Research Center (EERC), the Electric Power Research Institute, and URS Corporation. Calcium bromide (CaBr{sub 2}), calcium chloride (CaCl{sub 2}), magnesium chloride (MgCl{sub 2}), and a proprietary sorbent enhancement additive (SEA), hereafter referred to as SEA2, were added to the lignite feeds to enhance Hg capture in the ESP and/or wet FGD. In addition, powdered activated carbon (PAC) was injected upstream of the ESP at MRY Unit 2. The work involved establishing Hg concentrations and removal rates across existing ESP and FGD units, determining costs associated with a given Hg removal efficiency, quantifying the balance-of-plant impacts of the control technologies, and facilitating technology commercialization. The primary project goal was to achieve ESP-FGD Hg removal efficiencies of {ge}55% at MRY and MoSES for about a month.

  9. Placement of the dam for the no. 2 kambaratinskaya HPP by large-scale blasting: some observations

    SciTech Connect (OSTI)

    Shuifer, M. I.; Argal, E. S.

    2011-11-15

    Results of complex instrument observations of large-scale blasting during construction of the dam for the No. 2 Kambaratinskaya HPP on the Naryn River in the Republic of Kirgizia are analyzed. The purpose of these observations was to determine the actual parameters of the seismic process, evaluate the effect of air and acoustic shock waves, and investigate the kinematics of the surface formed by the blast in its core region within the mass of fractured rocks.

  10. Microsoft Word - NRAP-TRS-III-002-2012_Modeling the Performance of Large Scale CO2 Storage_20121024.docx

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Modeling the Performance of Large-Scale CO{sub 2} Storage Systems: A Comparison of Different Sensitivity Analysis Methods. 24 October 2012. Office of Fossil Energy. NRAP-TRS-III-002-2012

  11. Microsoft PowerPoint - 2-A-3-OK-Real-Time Data Infrastructure for Large Scale Wind Fleets.pptx

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Real-Time Data Infrastructure for Large-Scale Wind Fleets - Return on Investment vs. Fundamental Business Requirements. Value now. Value over time. © Copyright 2011, OSIsoft, LLC. All Rights Reserved. Reliability - 4 Ws and an H * What is reliability? - Uptime, OEE, profitable wind plants? (OEE = Availability % * Production % * Quality %) * Why should money be spent to
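    The OEE expression quoted in the slide above is a simple product of three fractions. A minimal sketch of that arithmetic (function name and sample figures are illustrative, not from the presentation):

    ```python
    def oee(availability: float, production: float, quality: float) -> float:
        """Overall Equipment Effectiveness as a fraction in [0, 1].

        OEE = Availability * Production (performance) * Quality,
        each expressed as a fraction rather than a percentage.
        """
        return availability * production * quality

    # Illustrative wind-plant figures: 90% available, 95% of rated
    # production when running, 99% of output meeting quality criteria.
    print(round(oee(0.90, 0.95, 0.99), 5))
    ```

    Because the three factors multiply, a modest shortfall in each compounds: three "pretty good" 90% factors yield an OEE of only about 73%.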

  12. Microsoft Word - The_Advanced_Networks_and_Services_Underpinning_Modern,Large-Scale_Science.SciDAC.v5.doc

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    ESnet4: Advanced Networking and Services Supporting the Science Mission of DOE's Office of Science William E. Johnston ESnet Dept. Head and Senior Scientist Lawrence Berkeley National Laboratory May, 2007 1 Introduction In many ways, the dramatic achievements in scientific discovery through advanced computing and the discoveries of the increasingly large-scale instruments with their enormous data handling and remote collaboration requirements, have been made possible by accompanying

  13. QCD Thermodynamics at High Temperature Peter Petreczky Large Scale Computing and Storage Requirements for Nuclear Physics (NP),

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    QCD Thermodynamics at High Temperature Peter Petreczky Large Scale Computing and Storage Requirements for Nuclear Physics (NP), Bethesda MD, April 29-30, 2014 NY Center for Computational Science 2 Defining questions of nuclear physics research in US: Nuclear Science Advisory Committee (NSAC) "The Frontiers of Nuclear Science", 2007 Long Range Plan "What are the phases of strongly interacting matter and what roles do they play in the cosmos ?" "What does QCD predict for

  14. Coordinated Multi-layer Multi-domain Optical Network (COMMON) for Large-Scale Science Applications (COMMON)

    SciTech Connect (OSTI)

    Vokkarane, Vinod

    2013-09-01

    We intend to implement a Coordinated Multi-layer Multi-domain Optical Network (COMMON) Framework for Large-scale Science Applications. In the COMMON project, specific problems to be addressed include 1) anycast/multicast/manycast request provisioning, 2) deployable OSCARS enhancements, 3) multi-layer, multi-domain quality of service (QoS), and 4) multi-layer, multidomain path survivability. In what follows, we outline the progress in the above categories (Year 1, 2, and 3 deliverables).

  15. A membrane-free lithium/polysulfide semi-liquid battery for large-scale energy storage

    SciTech Connect (OSTI)

    Yang, Yuan; Zheng, Guangyuan; Cui, Yi

    2013-01-01

    Large-scale energy storage represents a key challenge for renewable energy and new systems with low cost, high energy density and long cycle life are desired. In this article, we develop a new lithium/polysulfide (Li/PS) semi-liquid battery for large-scale energy storage, with lithium polysulfide (Li{sub 2}S{sub 8}) in ether solvent as a catholyte and metallic lithium as an anode. Unlike previous work on Li/S batteries with discharge products such as solid state Li{sub 2}S{sub 2} and Li{sub 2}S, the catholyte is designed to cycle only in the range between sulfur and Li{sub 2}S{sub 4}. Consequently all detrimental effects due to the formation and volume expansion of solid Li{sub 2}S{sub 2}/Li{sub 2}S are avoided. This novel strategy results in excellent cycle life and compatibility with flow battery design. The proof-of-concept Li/PS battery could reach a high energy density of 170 W h kg{sup -1} and 190 W h L{sup -1} for large scale storage at the solubility limit, while keeping the advantages of hybrid flow batteries. We demonstrated that, with a 5 M Li{sub 2}S{sub 8} catholyte, energy densities of 97 W h kg{sup -1} and 108 W h L{sup -1} can be achieved. As the lithium surface is well passivated by LiNO{sub 3} additive in ether solvent, internal shuttle effect is largely eliminated and thus excellent performance over 2000 cycles is achieved with a constant capacity of 200 mA h g{sup -1}. This new system can operate without the expensive ion-selective membrane, and it is attractive for large-scale energy storage.
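    As a rough aid to reading the W h kg{sup -1} figures in the abstract above: gravimetric energy density is approximately specific capacity times average cell voltage, and mAh/g is numerically equal to Ah/kg. A hedged back-of-envelope sketch (not the authors' calculation; the example numbers are illustrative only):

    ```python
    def gravimetric_energy_density(capacity_mah_per_g: float,
                                   avg_voltage_v: float) -> float:
        """Approximate cell-level energy density in Wh/kg.

        capacity_mah_per_g -- specific capacity (mAh/g == Ah/kg)
        avg_voltage_v      -- average discharge voltage (V)
        """
        # Wh/kg = (Ah/kg) * V
        return capacity_mah_per_g * avg_voltage_v

    # Illustrative: a 100 mAh/g cathode-limited cell at an average 2.0 V
    print(gravimetric_energy_density(100.0, 2.0))
    ```

    Real cell numbers are lower than such estimates once electrolyte, current collectors, and packaging mass are included, which is why catholyte concentration matters so much for the semi-liquid design.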

  16. CX-100 and TX-100 blade field tests.

    SciTech Connect (OSTI)

    Holman, Adam (USDA-Agriculture Research Service, Bushland, TX); Jones, Perry L.; Zayas, Jose R.

    2005-12-01

    In support of the DOE Low Wind Speed Turbine (LWST) program two of the three Micon 65/13M wind turbines at the USDA Agricultural Research Service (ARS) center in Bushland, Texas will be used to test two sets of experimental blades, the CX-100 and TX-100. The blade aerodynamic and structural characterization, meteorological inflow and wind turbine structural response will be monitored with an array of 75 instruments: 33 to characterize the blades, 15 to characterize the inflow, and 27 to characterize the time-varying state of the turbine. For both tests, data will be sampled at a rate of 30 Hz using the ATLAS II (Accurate GPS Time-Linked Data Acquisition System) data acquisition system. The system features a time-synchronized continuous data stream and telemetered data from the turbine rotor. This paper documents the instruments and infrastructure that have been developed to monitor these blades, turbines and inflow.

  17. Large-Scale Condensed Matter and Fluid Dynamics Simulations in Three

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Diverse Areas: Whole Brain Blood Flow Simulations | Argonne Leadership Computing Facility A snapshot of blood flow simulated and visualized within a digitally reconstructed patient-specific middle cerebral artery aneurysm. The figure depicts a snapshot of blood flow simulated and visualized using the parallel LB code HemeLB within a digitally reconstructed patient-specific middle cerebral artery aneurysm. The top-left and top-right images show the volume rendering of the velocity field and

  18. Field Testing of Compartmentalization Methods for Multifamily Construction

    SciTech Connect (OSTI)

    Ueno, K.; Lstiburek, J. W.

    2015-03-01

    The 2012 International Energy Conservation Code (IECC) has an airtightness requirement of 3 air changes per hour at 50 Pascals test pressure (3 ACH50) for single-family and multifamily construction (in climate zones 3–8). The Leadership in Energy & Environmental Design certification program and ASHRAE Standard 189 have comparable compartmentalization requirements. ASHRAE Standard 62.2 will soon be responsible for all multifamily ventilation requirements (low rise and high rise); it has an exceptionally stringent compartmentalization requirement. These code and program requirements are driving the need for easier and more effective methods of compartmentalization in multifamily buildings.

  19. Exploration 3-D Seismic Field Test/Native Tribes Initiative

    SciTech Connect (OSTI)

    Carroll, Herbert B.; Chen, K.C.; Guo, Genliang; Johnson, W.I.; Reeves, T.K.; Sharma, Bijon

    1999-04-27

    To determine current acquisition procedures and costs, and to further the goals of the President's Initiative for Native Tribes, a seismic-survey project is to be conducted on Osage tribal lands. The goals of the program are to demonstrate the capabilities, costs, and effectiveness of 3-D seismic work in a small-operator setting and to determine the economics of such a survey. For these purposes, typical small-scale independent-operator practices are being followed and a shallow target chosen in an area with a high concentration of independent operators. The results will be analyzed in detail to determine whether there are improvements and/or innovations which can be easily introduced in field-acquisition procedures, in processing, or in data manipulation and interpretation to further reduce operating costs and to make the system still more attractive to the small-scale operator.

  20. INTERAGENCY FIELD TEST & EVALUATION OF WIND TURBINE – RADAR INTERFERENCE MITIGATION TECHNOLOGIES

    Broader source: Energy.gov [DOE]

    These documents include a final report on the Interagency Field Test & Evaluation (IFT&E) program and summaries of three field tests designed to measure the impact of wind turbines on current air surveillance radars and the effectiveness of private sector technologies in mitigating that interference.

  1. LARGE-SCALE DISTRIBUTION OF ARRIVAL DIRECTIONS OF COSMIC RAYS DETECTED ABOVE 10{sup 18} eV AT THE PIERRE AUGER OBSERVATORY

    SciTech Connect (OSTI)

    Abreu, P.; Andringa, S.; Aglietta, M.; Ahlers, M.; Ahn, E. J.; Albuquerque, I. F. M.; Allard, D.; Allekotte, I.; Allen, J.; Allison, P.; Almela, A.; Alvarez Castillo, J.; Alvarez-Muniz, J.; Alves Batista, R.; Ambrosio, M.; Aramo, C.; Aminaei, A.; Anchordoqui, L.; Antičić, T.; Arganda, E.; Collaboration: Pierre Auger Collaboration; and others

    2012-12-15

    A thorough search for large-scale anisotropies in the distribution of arrival directions of cosmic rays detected above 10{sup 18} eV at the Pierre Auger Observatory is presented. This search is performed as a function of both declination and right ascension in several energy ranges above 10{sup 18} eV, and reported in terms of dipolar and quadrupolar coefficients. Within the systematic uncertainties, no significant deviation from isotropy is revealed. Assuming that any cosmic-ray anisotropy is dominated by dipole and quadrupole moments in this energy range, upper limits on their amplitudes are derived. These upper limits allow us to test the origin of cosmic rays above 10{sup 18} eV from stationary Galactic sources densely distributed in the Galactic disk and predominantly emitting light particles in all directions.
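    The dipolar coefficients mentioned above are conventionally estimated with a first-harmonic (Rayleigh) analysis in right ascension. The sketch below shows only that standard textbook formula, applied to a plain list of angles; it is not the Auger collaboration's analysis, which additionally weights events by exposure and works in several energy bins:

    ```python
    import math

    def first_harmonic_amplitude(right_ascensions_deg: list[float]) -> float:
        """Rayleigh first-harmonic amplitude r of a set of arrival directions.

        r = sqrt(a^2 + b^2) with
        a = (2/N) * sum(cos(alpha_i)),  b = (2/N) * sum(sin(alpha_i)).
        An isotropic sky gives r near 0; a strong dipole gives r of order 1.
        """
        n = len(right_ascensions_deg)
        a = 2.0 / n * sum(math.cos(math.radians(x)) for x in right_ascensions_deg)
        b = 2.0 / n * sum(math.sin(math.radians(x)) for x in right_ascensions_deg)
        return math.hypot(a, b)
    ```

    For an isotropic sample the amplitude fluctuates around sqrt(pi/N), which is what makes upper limits like those reported above possible even without a detection.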

  2. Transient thermal analysis for radioactive liquid mixing operations in a large-scaled tank

    SciTech Connect (OSTI)

    Lee, S. Y. [Savannah River Site Nuclear Solutions, LLC, Aiken, SC (United States). Savannah River National Lab. (SRNL); Smith, III, F. G. [Savannah River Site Nuclear Solutions, LLC, Aiken, SC (United States). Savannah River National Lab. (SRNL)

    2014-07-25

    A transient heat balance model was developed to assess the impact of a Submersible Mixer Pump (SMP) on radioactive liquid temperature during the process of waste mixing and removal for the high-level radioactive materials stored in Savannah River Site (SRS) tanks. The model results will be mainly used to determine the SMP design impacts on the waste tank temperature during operations and to develop a specification for a new SMP design to replace existing longshaft mixer pumps used during waste removal. The present model was benchmarked against the test data obtained by the tank measurement to examine the quantitative thermal response of the tank and to establish the reference conditions of the operating variables under no SMP operation. The results showed that the model predictions agreed with the test data of the waste temperatures within about 10%.
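    In its simplest lumped form, a transient heat balance like the one described above reduces to m·c·dT/dt = Q_in − UA·(T − T_amb). The sketch below integrates that equation with forward Euler; all names and numbers are illustrative and are not SRNL's model, which resolves the tank in far more detail:

    ```python
    def tank_temperature(t_hours: float, q_pump_w: float, ua_w_per_k: float,
                         t_amb_c: float, t0_c: float, mc_j_per_k: float,
                         dt_s: float = 60.0) -> float:
        """Lumped tank temperature after t_hours of mixer-pump operation.

        Integrates  m*c*dT/dt = Q_pump - UA*(T - T_amb)  with forward Euler.
        q_pump_w   -- heat input from the pump (W)
        ua_w_per_k -- overall loss coefficient times area (W/K)
        mc_j_per_k -- tank thermal mass, mass times specific heat (J/K)
        """
        t = t0_c
        for _ in range(int(t_hours * 3600.0 / dt_s)):
            t += (q_pump_w - ua_w_per_k * (t - t_amb_c)) * dt_s / mc_j_per_k
        return t
    ```

    With the pump off and the tank at ambient, the model correctly predicts no change; adding pump heat drives the temperature toward the steady state T_amb + Q_pump/UA, which is the kind of bound a pump-design specification needs.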

  3. Transient thermal analysis for radioactive liquid mixing operations in a large-scaled tank

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Lee, S. Y.; Smith, III, F. G.

    2014-07-25

    A transient heat balance model was developed to assess the impact of a Submersible Mixer Pump (SMP) on radioactive liquid temperature during the process of waste mixing and removal for the high-level radioactive materials stored in Savannah River Site (SRS) tanks. The model results will be mainly used to determine the SMP design impacts on the waste tank temperature during operations and to develop a specification for a new SMP design to replace existing longshaft mixer pumps used during waste removal. The present model was benchmarked against the test data obtained by the tank measurement to examine the quantitative thermal response of the tank and to establish the reference conditions of the operating variables under no SMP operation. The results showed that the model predictions agreed with the test data of the waste temperatures within about 10%.

  4. Field Testing of Thermoplastic Encapsulants in High-Temperature Installations

    SciTech Connect (OSTI)

    Kempe, Michael D.; Miller, David C.; Wohlgemuth, John H.; Kurtz, Sarah R.; Moseley, John M.; Shah, Qurat A.; Tamizhmani, Govindasamy; Sakurai, Keiichiro; Inoue, Masanao; Doi, Takuya; Masuda, Atsushi; Samuels, Sam L.; Vanderpan, Crystal E.

    2015-11-01

    Recently there has been increased interest in using thermoplastic encapsulant materials in photovoltaic modules, but concerns have been raised about whether these would be mechanically stable at high temperatures in the field. This has become a significant topic of discussion in the development of IEC 61730 and IEC 61215. We constructed eight pairs of crystalline-silicon modules and eight pairs of glass/encapsulation/glass thin-film mock modules using different encapsulant materials, of which only two were formulated to chemically crosslink. One module set was exposed outdoors with thermal insulation on the back side in Mesa, Arizona, in the summer (hot-dry), and an identical module set was exposed in environmental chambers. High-precision creep measurements (±20 μm) and electrical performance measurements indicate that despite many of these polymeric materials operating in the melt or rubbery state during outdoor deployment, no significant creep was seen because of their high viscosity, lower operating temperature at the edges, and/or the formation of chemical crosslinks in many of the encapsulants with age despite the absence of a crosslinking agent. Only an ethylene-vinyl acetate (EVA) encapsulant formulated without a peroxide crosslinking agent crept significantly. In the crystalline-silicon modules, the physical restraint of the backsheet reduced creep further; creep was not detectable even for the EVA without peroxide. Because of the propensity of some polymeric materials to crosslink as they age, typical thermoplastic encapsulants would be unlikely to result in creep in the vast majority of installations.

  5. Field Testing of Thermoplastic Encapsulants in High-Temperature Installations

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Kempe, Michael D.; Miller, David C.; Wohlgemuth, John H.; Kurtz, Sarah R.; Moseley, John M.; Shah, Qurat A.; Tamizhmani, Govindasamy; Sakurai, Keiichiro; Inoue, Masanao; Doi, Takuya; et al

    2015-11-01

Recently there has been increased interest in using thermoplastic encapsulant materials in photovoltaic modules, but concerns have been raised about whether these would be mechanically stable at high temperatures in the field. This has become a significant topic of discussion in the development of IEC 61730 and IEC 61215. We constructed eight pairs of crystalline-silicon modules and eight pairs of glass/encapsulation/glass thin-film mock modules using different encapsulant materials, of which only two were formulated to chemically crosslink. One module set was exposed outdoors with thermal insulation on the back side in Mesa, Arizona, in the summer (hot-dry), and an identical module set was exposed in environmental chambers. High-precision creep measurements (±20 μm) and electrical performance measurements indicate that despite many of these polymeric materials operating in the melt or rubbery state during outdoor deployment, no significant creep was seen because of their high viscosity, lower operating temperature at the edges, and/or the formation of chemical crosslinks in many of the encapsulants with age despite the absence of a crosslinking agent. Only an ethylene-vinyl acetate (EVA) encapsulant formulated without a peroxide crosslinking agent crept significantly. In the crystalline-silicon modules, the physical restraint of the backsheet further reduced creep, which was not detectable even for the EVA without peroxide. Because of the propensity of some polymeric materials to crosslink as they age, typical thermoplastic encapsulants would be unlikely to result in creep in the vast majority of installations.

  6. Testing a Stakeholder Participation Framework for Fielding Bioremediation Technologies

    SciTech Connect (OSTI)

    Anex, Robert P.; Focht, Will

    2004-03-17

This research is investigating stakeholder attitudes about the use of bioremediation technologies with the objective of reducing conflict among stakeholders. The research protocol includes four closely related components. First, we are testing a framework for stakeholder participation that prescribes appropriate stakeholder involvement strategies based on stakeholders' trust of the other parties involved in technology deployment decision-making. Second, we are assessing conflict among stakeholders regarding the acceptability of in situ bioremediation as a means to reduce risks posed by radionuclides and metals in the environment. Third, we are assessing the role that awareness of risk exposure plays in the willingness of stakeholders to engage in problem-solving and making risk tradeoffs. Fourth, we are assessing the potential of using the results of these first three components to forge consensus among stakeholders regarding the use and oversight of bioremediation technologies and stakeholder involvement in the decision process. This poster presents preliminary results of a Q methodological survey of stakeholders who are familiar with radionuclide and heavy metal contamination and DOE efforts to remediate that contamination at the Los Alamos, Oak Ridge, and Hanford reservations. The Q study allows the research team to diagnose conflict among stakeholders and discover opportunities for consensus.

  7. Note: Design and development of wireless controlled aerosol sampling network for large scale aerosol dispersion experiments

    SciTech Connect (OSTI)

    Gopalakrishnan, V.; Subramanian, V.; Baskaran, R.; Venkatraman, B.

    2015-07-15

A wireless-based, custom-built aerosol sampling network has been designed, developed, and implemented for environmental aerosol sampling. These aerosol sampling systems were used in a field measurement campaign in which sodium aerosol dispersion experiments were conducted as part of environmental impact studies related to the sodium-cooled fast reactor. The sampling network contains 40 aerosol sampling units, each containing a custom-built sampling head and wireless control networking designed with a Programmable System on Chip (PSoC™) and XBee Pro RF modules. The base station control is designed using the graphical programming language LabVIEW. The sampling network is programmed to operate at a preset time, and the running status of the samplers in the network is visualized from the base station. The system is developed in such a way that it can be used for any other environmental sampling system deployed over a wide area and uneven terrain, where manual operation is difficult due to the requirement of simultaneous operation and status logging.

  8. Characterization of Pliocene and Miocene Formations in the Wilmington Graben, Offshore Los Angeles, for Large-Scale Geologic Storage of CO₂

    SciTech Connect (OSTI)

    Bruno, Michael

    2014-12-08

Geomechanics Technologies has completed a detailed characterization study of the Wilmington Graben offshore Southern California area for large-scale CO₂ storage. This effort has included an evaluation of existing wells in both State and Federal waters, field acquisition of about 175 km (109 mi) of new seismic data, new well drilling, and development of integrated 3D geologic, geomechanics, and fluid flow models for the area. The geologic analysis indicates that more than 796 MMt of storage capacity is available within the Pliocene and Miocene formations in the Graben for midrange geologic estimates (P50). Geomechanical analyses indicate that injection can be conducted without significant risk of surface deformation, induced stress, or fault activation. Numerical analysis of fluid migration indicates that injection into the Pliocene Formation at depths of 1525 m (5000 ft) would lead to undesirable vertical migration of the CO₂ plume. Recent well drilling, however, indicates that deeper sands are present at depths exceeding 2135 m (7000 ft), which could be viable for large-volume storage. For vertical containment, injection would need to be limited to about 250,000 metric tons per year per well, would need to be placed at depths greater than 2135 m (7000 ft), and would need to be placed in new wells located at least 1 mile from any existing offset wells. As a practical matter, this would likely limit storage operations in the Wilmington Graben to about 1 million tons per year or less. A quantitative risk analysis for the Wilmington Graben indicates that such large-scale CO₂ storage in the area would represent higher risk than other similar-size projects in the US and overseas.

  9. Low-Risk and Cost-Effective Prior Savings Estimates for Large-Scale Energy Conservation Projects in Housing: Learning from the Fort Polk GHP Project

    SciTech Connect (OSTI)

    Shonder, John A; Hughes, Patrick; Thornton, Jeff W.

    1997-08-01

Many opportunities exist for large-scale energy conservation projects in housing: military housing, federally-subsidized low-income housing, and planned communities (condominiums, townhomes, senior centers) to name a few. Energy savings performance contracting (ESPC) is now receiving greater attention as a means to implement such projects. This paper proposes an improved method for prior (to construction) savings estimates for these projects. More accurate prior estimates reduce project risk, decrease financing costs, and help avoid post-construction legal disputes over performance contract baseline adjustments. The proposed approach to prior estimates is verified against data from Fort Polk, LA. In the course of evaluating the ESPC at Fort Polk, Louisiana, we have collected energy use data - both at the electrical feeder level and at the level of individual residences - which allowed us to develop calibrated engineering models which accurately predict pre-retrofit energy consumption. We believe that such calibrated models could be used to provide much more accurate estimates of energy savings in retrofit projects, particularly in cases where the energy consumption of large populations of housing can be captured on one or a few meters. The improved savings estimating approach described here is based on an engineering model calibrated to field-collected data from the pre-retrofit period. A dynamic model of pre-retrofit energy use was developed for all housing and non-housing loads on a complete electrical feeder at Fort Polk. The feeder serves 46 buildings containing a total of 200 individual apartments. Of the 46 buildings, there are three unique types, and among these types the only difference is compass orientation. The model included the heat transfer characteristics of the buildings, the pre-retrofit air source heat pump, a hot water consumption model and a profile for electrical use by lights and other appliances. Energy consumption for all 200 apartments was
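The calibrated-model idea in this abstract can be illustrated with a toy fit. Everything below (the degree-day model form, the coefficient values, and the synthetic metered data) is an assumption for illustration, not the Fort Polk model:

```python
import numpy as np

# Synthetic daily feeder data: energy is modeled as a base load plus a
# heating term proportional to heating degree-days, then the two
# coefficients are recovered by ordinary least squares ("calibration").
rng = np.random.default_rng(1)
t_out = rng.uniform(-5, 25, 200)                 # daily mean outdoor temp, deg C
hdd = np.maximum(18.0 - t_out, 0.0)              # heating degree-days (base 18 C)
true_base, true_slope = 40.0, 2.5                # assumed "true" coefficients
energy = true_base + true_slope * hdd + rng.normal(0, 1.0, 200)  # MWh/day

# Least-squares fit of [base, slope] against the metered data
X = np.column_stack([np.ones_like(hdd), hdd])
(base, slope), *_ = np.linalg.lstsq(X, energy, rcond=None)
```

With good pre-retrofit data, the fitted coefficients land close to the true ones; the same fitted model can then serve as the baseline against which post-retrofit savings are measured.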

  10. Government commercialization of large scale technology: the United States Breeder Reactor Program 1964-1976

    SciTech Connect (OSTI)

    Stiefel, M.D.

    1981-06-01

The US Liquid Metal Fast Breeder Reactor program was an attempt by the Atomic Energy Commission to develop, in partnership with industry, a particular nuclear technology. Not only did the AEC provide subsidies and test facilities for the private sector, but the agency attempted to direct which technological options would be developed. The national laboratories, nuclear vendors, and electric utilities were not amenable to government direction. The resulting time delays and cost overruns stalled the program until the anti-nuclear movement arose and undermined the political consensus behind the program. As a result, a breeder demonstration plant has not yet been built in the United States. The analysis of this thesis suggests two conclusions. First, future government-directed commercialization programs are unlikely to succeed. Second, breeder development should be slowed down until the political problems in the nuclear industry are solved.

  11. Combined Experiment Phase 1. [Horizontal axis wind turbines: wind tunnel testing versus field testing

    SciTech Connect (OSTI)

    Butterfield, C.P.; Musial, W.P.; Simms, D.A.

    1992-10-01

How does wind tunnel airfoil data differ from the airfoil performance on an operating horizontal axis wind turbine (HAWT)? The National Renewable Energy Laboratory has been conducting a comprehensive test program focused on answering this question and understanding the basic fluid mechanics of rotating HAWT stall aerodynamics. The basic approach was to instrument a wind rotor, using an airfoil that was well documented by wind tunnel tests, and measure operating pressure distributions on the rotating blade. Based on the integrated values of the pressure data, airfoil performance coefficients were obtained, and comparisons were made between the rotating data and the wind tunnel data. Care was taken to account for the aerodynamic and geometric differences between the rotating and the wind tunnel models. This is the first of two reports describing the Combined Experiment Program and its results. This Phase I report covers background information such as test setup and instrumentation. It also includes wind tunnel test results and roughness testing.
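The step from measured pressure distributions to performance coefficients can be sketched as a chordwise integration. The pressure distributions below are invented for illustration; the report's actual data came from pressure taps on the rotating blade:

```python
import numpy as np

# Hypothetical chordwise pressure-tap data: x/c positions and pressure
# coefficients on the upper (suction) and lower (pressure) surfaces.
x = np.linspace(0.0, 1.0, 50)
cp_upper = -1.2 * (1 - x) * np.exp(-3 * x)   # toy suction-side distribution
cp_lower = 0.4 * (1 - x)                     # toy pressure-side distribution

# Normal-force coefficient: integrate the pressure difference over the
# chord, c_n = integral of (Cp_lower - Cp_upper) d(x/c), trapezoid rule.
d = cp_lower - cp_upper
c_n = float(np.sum(0.5 * (d[1:] + d[:-1]) * np.diff(x)))
```

Repeating this at each radial station and angle of attack yields the rotating-blade coefficient curves that are then compared against the wind tunnel data.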

  12. Organo-sulfur molecules enable iron-based battery electrodes to meet the challenges of large-scale electrical energy storage

    SciTech Connect (OSTI)

    Yang, B; Malkhandi, S; Manohar, AK; Prakash, GKS; Narayanan, SR

    2014-07-03

Rechargeable iron-air and nickel-iron batteries are attractive as sustainable and inexpensive solutions for large-scale electrical energy storage because of the global abundance and eco-friendliness of iron, and the robustness of iron-based batteries to extended cycling. Despite these advantages, the commercial use of iron-based batteries has been limited by their low charging efficiency. This limitation arises from the iron electrodes evolving hydrogen extensively during charging. The total suppression of hydrogen evolution has been a significant challenge. We have found that organo-sulfur compounds with various structural motifs (linear and cyclic thiols, dithiols, thioethers and aromatic thiols), when added at millimolar concentrations to the aqueous alkaline electrolyte, reduce the hydrogen evolution rate by 90%. These organo-sulfur compounds form strongly adsorbed layers on the iron electrode and block the electrochemical process of hydrogen evolution. The charge-transfer resistance and double-layer capacitance of the iron/electrolyte interface confirm that the extent of suppression of hydrogen evolution depends on the degree of surface coverage and the molecular structure of the organo-sulfur compound. An unanticipated electrochemical effect of the adsorption of organo-sulfur molecules is "de-passivation" that allows the iron electrode to be discharged at high current values. The strongly adsorbed organo-sulfur compounds were also found to resist electro-oxidation even at the positive electrode potentials at which oxygen evolution can occur. Through testing on practical rechargeable battery electrodes we have verified the substantial improvements to the efficiency during charging and the increased capability to discharge at high rates. We expect these performance advances to enable the design of efficient, inexpensive and eco-friendly iron-based batteries for large-scale electrical energy storage.

  13. Method for large-scale fabrication of atomic-scale structures on material surfaces using surface vacancies

    DOE Patents [OSTI]

    Lim, Chong Wee; Ohmori, Kenji; Petrov, Ivan Georgiev; Greene, Joseph E.

    2004-07-13

    A method for forming atomic-scale structures on a surface of a substrate on a large-scale includes creating a predetermined amount of surface vacancies on the surface of the substrate by removing an amount of atoms on the surface of the material corresponding to the predetermined amount of the surface vacancies. Once the surface vacancies have been created, atoms of a desired structure material are deposited on the surface of the substrate to enable the surface vacancies and the atoms of the structure material to interact. The interaction causes the atoms of the structure material to form the atomic-scale structures.

  14. High performance graphics processor based computed tomography reconstruction algorithms for nuclear and other large scale applications.

    SciTech Connect (OSTI)

Jimenez, Edward Steven

    2013-09-01

The goal of this work is to develop a fast computed tomography (CT) reconstruction algorithm based on graphics processing units (GPU) that achieves significant improvement over traditional central processing unit (CPU) based implementations. The main challenge in developing a CT algorithm that is capable of handling very large datasets is parallelizing the algorithm in such a way that data transfer does not hinder performance of the reconstruction algorithm. General Purpose Graphics Processing (GPGPU) is a new technology that the Science and Technology (S&T) community is starting to adopt in many fields where CPU-based computing is the norm. GPGPU programming requires a new approach to algorithm development that utilizes massively multi-threaded environments. Multi-threaded algorithms in general are difficult to optimize since performance bottlenecks occur that are non-existent in single-threaded algorithms, such as memory latencies. If an efficient GPU-based CT reconstruction algorithm can be developed, computational times could be improved by a factor of 20. Additionally, cost benefits will be realized as commodity graphics hardware could potentially replace expensive supercomputers and high-end workstations. This project will take advantage of the CUDA programming environment and attempt to parallelize the task in such a way that multiple slices of the reconstruction volume are computed simultaneously. This work will also take advantage of the GPU memory by utilizing asynchronous memory transfers, GPU texture memory, and (when possible) pinned host memory so that the memory transfer bottleneck inherent to GPGPU is amortized. Additionally, this work will take advantage of GPU-specific hardware (i.e., fast texture memory, pixel-pipelines, hardware interpolators, and varying memory hierarchy) that will allow for additional performance improvements.
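The slice-level parallelism described here can be sketched on a CPU, with a thread pool standing in for the GPU's parallel hardware. The naive unfiltered backprojection and the synthetic sinograms below are illustrative assumptions, not the project's algorithm:

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def backproject(sinogram, angles, size=32):
    """Naive unfiltered backprojection of one slice (illustration only)."""
    recon = np.zeros((size, size))
    ys, xs = np.mgrid[0:size, 0:size] - size / 2.0
    for a, proj in zip(angles, sinogram):
        # Detector coordinate of each image pixel for this view angle
        t = xs * np.cos(a) + ys * np.sin(a) + size / 2.0
        idx = np.clip(t.astype(int), 0, proj.size - 1)
        recon += proj[idx]
    return recon / len(angles)

# Hypothetical stack: 8 slices, each a sinogram of 16 views x 32 detector bins
angles = np.linspace(0, np.pi, 16, endpoint=False)
sinograms = [np.random.rand(16, 32) for _ in range(8)]

# Reconstruct all slices concurrently, mirroring the "multiple slices of the
# reconstruction volume computed simultaneously" strategy described above.
with ThreadPoolExecutor() as pool:
    slices = list(pool.map(lambda s: backproject(s, angles), sinograms))
```

Because each slice depends only on its own sinogram, the work partitions cleanly; on a GPU the same independence lets slices map onto separate thread blocks while memory transfers overlap with compute.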

  15. Laboratory measurements of large-scale carbon sequestration flows in saline reservoirs

    SciTech Connect (OSTI)

    Backhaus, Scott N

    2010-01-01

Brine saturated with CO{sub 2} is slightly denser than the original brine causing it to sink to the bottom of a saline reservoir where the CO{sub 2} is safely sequestered. However, the buoyancy of pure CO{sub 2} relative to brine drives it to the top of the reservoir where it collects underneath the cap rock as a separate phase of supercritical fluid. Without additional processes to mix the brine and CO{sub 2}, diffusion in this geometry is slow and would require an unacceptably long time to consume the pure CO{sub 2}. However, gravity and diffusion-driven convective instabilities have been hypothesized that generate enhanced CO{sub 2}-brine mixing promoting dissolution of CO{sub 2} into the brine on a time scale of a hundred years. These flows involve a class of hydrodynamic problems that are notoriously difficult to simulate; the simultaneous flow of multiple fluids (CO{sub 2} and brine) in porous media (rock or sediment). The hope for direct experimental confirmation of simulations is dim due to the difficulty of obtaining high resolution data from the subsurface and the high pressures ({approx}100 bar), long length scales ({approx}100 meters), and long time scales ({approx}100 years) that are characteristic of these flows. We have performed imaging and mass transfer measurements in similitude-scaled laboratory experiments that provide benchmarks to test reservoir simulation codes and enhance their predictive power.
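The claim that diffusion alone is "unacceptably slow" follows from a one-line scaling estimate, t ~ L²/D. The numbers below are assumed order-of-magnitude values, not measurements from the experiments:

```python
# Diffusive mixing time over a reservoir-scale length L scales as L**2 / D.
L = 100.0                 # m, characteristic reservoir length scale (assumed)
D = 2e-9                  # m^2/s, assumed diffusivity of dissolved CO2 in brine
SECONDS_PER_YEAR = 3.15e7

t_years = L**2 / D / SECONDS_PER_YEAR   # on the order of 10^5 years
```

The diffusive estimate comes out around a hundred thousand years, roughly a thousand times longer than the hundred-year scale attributed to convective mixing, which is why the convective instabilities matter.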

  16. Reduced Order Modeling for Prediction and Control of Large-Scale Systems.

    SciTech Connect (OSTI)

    Kalashnikova, Irina; Arunajatesan, Srinivasan; Barone, Matthew Franklin; van Bloemen Waanders, Bart Gustaaf; Fike, Jeffrey A.

    2014-05-01

    -Stokes equations is derived, and it is demonstrated that if a Galerkin ROM is constructed in this inner product, the ROM system energy will be bounded in a way that is consistent with the behavior of the exact solution to these PDEs, i.e., the ROM will be energy-stable. The viability of the linear as well as nonlinear continuous projection model reduction approaches developed as a part of this project is evaluated on several test cases, including the cavity configuration of interest in the targeted application area. In the second part of this report, some POD/Galerkin approaches for building stable ROMs using discrete projection are explored. It is shown that, for generic linear time-invariant (LTI) systems, a discrete counterpart of the continuous symmetry inner product is a weighted L2 inner product obtained by solving a Lyapunov equation. This inner product was first proposed by Rowley et al., and is termed herein the Lyapunov inner product. Comparisons between the symmetry inner product and the Lyapunov inner product are made, and the performance of ROMs constructed using these inner products is evaluated on several benchmark test cases. Also in the second part of this report, a new ROM stabilization approach, termed ROM stabilization via optimization-based eigenvalue reassignment, is developed for generic LTI systems. At the heart of this method is a constrained nonlinear least-squares optimization problem that is formulated and solved numerically to ensure accuracy of the stabilized ROM. Numerical studies reveal that the optimization problem is computationally inexpensive to solve, and that the new stabilization approach delivers ROMs that are stable as well as accurate. Summaries of lessons learned and perspectives for future work motivated by this LDRD project are provided at the end of each of the two main chapters.

  17. Large-scale Nanostructure Simulations from X-ray Scattering Data On Graphics Processor Clusters

    SciTech Connect (OSTI)

    Sarje, Abhinav; Pien, Jack; Li, Xiaoye; Chan, Elaine; Chourou, Slim; Hexemer, Alexander; Scholz, Arthur; Kramer, Edward

    2012-01-15

X-ray scattering is a valuable tool for measuring the structural properties of materials used in the design and fabrication of energy-relevant nanodevices (e.g., photovoltaic, energy storage, battery, fuel, and carbon capture and sequestration devices) that are key to the reduction of carbon emissions. Although today's ultra-fast X-ray scattering detectors can provide tremendous information on the structural properties of materials, a primary challenge remains in the analyses of the resulting data. We are developing novel high-performance computing algorithms, codes, and software tools for the analyses of X-ray scattering data. In this paper we describe two such HPC algorithm advances. Firstly, we have implemented a flexible and highly efficient Grazing Incidence Small Angle Scattering (GISAXS) simulation code based on the Distorted Wave Born Approximation (DWBA) theory with C++/CUDA/MPI on a cluster of GPUs. Our code can compute the scattered light intensity from any given sample in all directions of space, thus allowing full construction of the GISAXS pattern. Preliminary tests on a single GPU show speedups over 125x compared to the sequential code, and almost linear speedup when executing across a GPU cluster with 42 nodes, resulting in an additional 40x speedup compared to using one GPU node. Secondly, for the structural fitting problems in inverse modeling, we have implemented a Reverse Monte Carlo simulation algorithm with C++/CUDA using one GPU. Since there are large numbers of parameters for fitting in the X-ray scattering simulation model, the earlier single-CPU code required weeks of runtime. Deploying the AccelerEyes Jacket/Matlab wrapper to use the GPU gave around 100x speedup over the pure CPU code. Our further C++/CUDA optimization delivered an additional 9x speedup.

  18. Tensor to scalar ratio and large scale power suppression from pre-slow roll initial conditions

    SciTech Connect (OSTI)

    Lello, Louis; Boyanovsky, Daniel, E-mail: lal81@pitt.edu, E-mail: boyan@pitt.edu [Department of Physics and Astronomy, University of Pittsburgh, 3941 O'Hara St, Pittsburgh, PA 15260 (United States)

    2014-05-01

We study the corrections to the power spectra of curvature and tensor perturbations and the tensor-to-scalar ratio r in single field slow roll inflation with standard kinetic term due to initial conditions imprinted by a ''fast-roll'' stage prior to slow roll. For a wide range of initial inflaton kinetic energy, this stage lasts only a few e-folds and merges smoothly with slow-roll thereby leading to non-Bunch-Davies initial conditions for modes that exit the Hubble radius during slow roll. We describe a program that yields the dynamics in the fast-roll stage while matching to the slow roll stage in a manner that is independent of the inflationary potentials. Corrections to the power spectra are encoded in a ''transfer function'' for initial conditions T{sub α}(k), P{sub α}(k) = P{sup BD}{sub α}(k)T{sub α}(k), implying a modification of the ''consistency condition'' for the tensor to scalar ratio at a pivot scale k{sub 0}: r(k{sub 0}) = -8n{sub T}(k{sub 0})[T{sub T}(k{sub 0})/T{sub R}(k{sub 0})]. We obtain T{sub α}(k) to leading order in a Born approximation valid for modes of observational relevance today. A fit yields T{sub α}(k) = 1+A{sub α}k{sup -p}cos [2π?k/H{sub sr}+φ{sub α}], with 1.5?
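The two relations quoted in this abstract read more cleanly in standard notation (α = R, T is assumed here as the generic label for curvature and tensor perturbations, and BD denotes the Bunch-Davies result):

```latex
P_\alpha(k) = P^{\mathrm{BD}}_\alpha(k)\, T_\alpha(k), \qquad
r(k_0) = -8\, n_T(k_0)\, \frac{T_T(k_0)}{T_R(k_0)}
```

The second relation reduces to the standard slow-roll consistency condition r = -8 n_T when the two transfer functions coincide, so their ratio directly measures the imprint of the pre-slow-roll stage.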

  19. Factors Affecting the Rate of Penetration of Large-Scale Electricity Technologies: The Case of Carbon Sequestration

    SciTech Connect (OSTI)

    James R. McFarland; Howard J. Herzog

    2007-05-14

    This project falls under the Technology Innovation and Diffusion topic of the Integrated Assessment of Climate Change Research Program. The objective was to better understand the critical variables that affect the rate of penetration of large-scale electricity technologies in order to improve their representation in integrated assessment models. We conducted this research in six integrated tasks. In our first two tasks, we identified potential factors that affect penetration rates through discussions with modeling groups and through case studies of historical precedent. In the next three tasks, we investigated in detail three potential sets of critical factors: industrial conditions, resource conditions, and regulatory/environmental considerations. Research to assess the significance and relative importance of these factors involved the development of a microeconomic, system dynamics model of the US electric power sector. Finally, we implemented the penetration rate models in an integrated assessment model. While the focus of this effort is on carbon capture and sequestration technologies, much of the work will be applicable to other large-scale energy conversion technologies.

  20. Large-scale Environmental Variables and Transition to Deep Convection in Cloud Resolving Model Simulations: A Vector Representation

    SciTech Connect (OSTI)

    Hagos, Samson M.; Leung, Lai-Yung R.

    2012-11-01

    Cloud resolving model simulations and vector analysis are used to develop a quantitative method of assessing regional variations in the relationships between various large-scale environmental variables and the transition to deep convection. Results of the CRM simulations from three tropical regions are used to cluster environmental conditions under which transition to deep convection does and does not take place. Projections of the large-scale environmental variables on the difference between these two clusters are used to quantify the roles of these variables in the transition to deep convection. While the transition to deep convection is most sensitive to moisture and vertical velocity perturbations, the details of the profiles of the anomalies vary from region to region. In comparison, the transition to deep convection is found to be much less sensitive to temperature anomalies over all three regions. The vector formulation presented in this study represents a simple general framework for quantifying various aspects of how the transition to deep convection is sensitive to environmental conditions.
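The projection step of the vector method can be sketched as follows. The synthetic "profiles" and cluster statistics below are assumptions for illustration, not CRM output:

```python
import numpy as np

# Environmental profiles are clustered into transitioning and
# non-transitioning cases; each profile is then scored by its projection
# onto the normalized difference between the two cluster means.
rng = np.random.default_rng(0)
levels = 20                                            # model levels per profile
transition = rng.normal(1.0, 0.5, (100, levels))       # e.g. moisture anomalies
no_transition = rng.normal(0.0, 0.5, (120, levels))

diff = transition.mean(axis=0) - no_transition.mean(axis=0)
unit = diff / np.linalg.norm(diff)     # unit vector separating the clusters

def transition_score(profile):
    """Projection of one environmental profile onto the cluster-difference
    direction: large values indicate conditions favoring transition."""
    return float(profile @ unit)
```

The components of `unit` play the role of the per-variable, per-level sensitivities: a large component for a moisture level means anomalies there strongly discriminate transitioning from non-transitioning environments.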

  1. THE DETECTION OF THE LARGE-SCALE ALIGNMENT OF MASSIVE GALAXIES AT z {approx} 0.6

    SciTech Connect (OSTI)

    Li Cheng [Partner Group of the Max Planck Institute for Astrophysics at the Shanghai Astronomical Observatory and Key Laboratory for Research in Galaxies and Cosmology of Chinese Academy of Sciences, Nandan Road 80, Shanghai 200030 (China); Jing, Y. P. [Center for Astronomy and Astrophysics, Department of Physics, Shanghai Jiao Tong University, Shanghai 200240 (China); Faltenbacher, A. [School of Physics, University of the Witwatersrand, P.O. Box Wits, Johannesburg 2050 (South Africa); Wang Jie, E-mail: leech@shao.ac.cn [National Astronomical Observatories, Chinese Academy of Sciences, Beijing 100012 (China)

    2013-06-10

We report on the detection of the alignment between galaxies and large-scale structure at z {approx} 0.6 based on the CMASS galaxy sample from the Baryon Oscillation Spectroscopic Survey Data Release 9. We use two statistics to quantify the alignment signal: (1) the alignment two-point correlation function that probes the dependence of galaxy clustering at a given separation in redshift space on the projected angle ({theta}{sub p}) between the orientation of galaxies and the line connecting to other galaxies, and (2) the cos (2{theta})-statistic that estimates the average of cos (2{theta}{sub p}) for all correlated pairs at a given separation s. We find a significant alignment signal out to about 70 h {sup -1} Mpc in both statistics. Applications of the same statistics to dark matter halos of mass above 10{sup 12} h {sup -1} M{sub Sun} in a large cosmological simulation show scale-dependent alignment signals similar to the observation, but with higher amplitudes at all scales probed. We show that this discrepancy may be partially explained by a misalignment angle between central galaxies and their host halos, though detailed modeling is needed in order to better understand the link between the orientations of galaxies and host halos. In addition, we find systematic trends of the alignment statistics with the stellar mass of the CMASS galaxies, in the sense that more massive galaxies are more strongly aligned with the large-scale structure.
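The cos(2θ)-statistic is simple to compute once the pair angles are in hand. The toy angle distributions below are assumptions chosen to contrast an unaligned population (mean near zero) with an aligned one (positive mean):

```python
import numpy as np

# For each correlated galaxy pair, theta_p is the projected angle between one
# galaxy's orientation and the line joining the pair; the statistic is the
# average of cos(2*theta_p) over all pairs at a given separation.
rng = np.random.default_rng(2)
theta_random = rng.uniform(0, np.pi, 10000)    # unaligned pairs: isotropic angles
theta_aligned = rng.normal(0, 0.3, 10000)      # pairs preferentially aligned

stat_random = float(np.mean(np.cos(2 * theta_random)))
stat_aligned = float(np.mean(np.cos(2 * theta_aligned)))
```

Isotropic orientations average to zero by symmetry, so any statistically significant positive value, as found here out to ~70 h^-1 Mpc, signals alignment with the large-scale structure.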

  2. Development of Residential Prototype Building Models and Analysis System for Large-Scale Energy Efficiency Studies Using EnergyPlus

    SciTech Connect (OSTI)

    Mendon, Vrushali V.; Taylor, Zachary T.

    2014-09-10

Recent advances in residential building energy efficiency and codes have resulted in increased interest in detailed residential building energy models using the latest energy simulation software. One of the challenges of developing residential building models to characterize new residential building stock is to allow for flexibility to address variability in house features such as geometry, configuration, and HVAC systems. Researchers solved this problem in a novel way by creating a simulation structure capable of creating fully functional EnergyPlus batch runs using a completely scalable residential EnergyPlus template system. This system was used to create a set of thirty-two residential prototype building models covering single- and multifamily buildings, four common foundation types and four common heating system types found in the United States (US). A weighting scheme with detailed state-wise and national weighting factors was designed to supplement the residential prototype models. The complete set is designed to represent a majority of new residential construction stock. The entire structure consists of a system of utility programs developed around the core EnergyPlus simulation engine to automate the creation and management of large-scale simulation studies with minimal human effort. The simulation structure and the residential prototype building models have been used for numerous large-scale studies, one of which is briefly discussed in this paper.
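The weighting idea can be illustrated with a minimal sketch. The prototype names, energy-use values, and weights below are hypothetical, not values from the study:

```python
# Simulated results for each prototype are combined with construction-share
# weights to estimate a stock-wide average, as in the report's weighting scheme.
prototype_eui = {            # simulated energy use intensity, kWh/m2/yr (assumed)
    "single_family_slab_gas": 120.0,
    "single_family_basement_heatpump": 105.0,
    "multifamily_slab_electric": 90.0,
}
weights = {                  # assumed shares of new construction (sum to 1)
    "single_family_slab_gas": 0.5,
    "single_family_basement_heatpump": 0.3,
    "multifamily_slab_electric": 0.2,
}

stock_average_eui = sum(prototype_eui[k] * weights[k] for k in prototype_eui)
```

Swapping in state-level weight tables turns the same one-line aggregation into the state-wise estimates the report describes, without rerunning any simulations.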

  3. Economic Impact of Large-Scale Deployment of Offshore Marine and Hydrokinetic Technology in Oregon Coastal Counties

    SciTech Connect (OSTI)

    Jimenez, T.; Tegen, S.; Beiter, P.

    2015-03-01

To begin understanding the potential economic impacts of large-scale wave energy converter (WEC) technology, the Bureau of Ocean Energy Management (BOEM) commissioned the National Renewable Energy Laboratory (NREL) to conduct an economic impact analysis of large-scale WEC deployment for Oregon coastal counties. This report follows a previously published report by BOEM and NREL on the jobs and economic impacts of WEC technology for the entire state (Jimenez and Tegen 2015). As in Jimenez and Tegen (2015), this analysis examined two deployment scenarios in the 2026-2045 timeframe: the first scenario assumed 13,000 megawatts (MW) of WEC technology deployed during the analysis period, and the second assumed 18,000 MW of WEC technology deployed by 2045. Both scenarios require major technology and cost improvements in the WEC devices. The study considers very large-scale deployment so that readers can examine and discuss the potential of a successful, very large WEC industry. The 13,000-MW scenario is used as the basis for the county analysis because it is the smaller of the two. Sensitivity studies examined the effects of a robust in-state WEC supply chain. The region of analysis comprises the seven coastal counties in Oregon (Clatsop, Coos, Curry, Douglas, Lane, Lincoln, and Tillamook), so estimates of jobs and other economic impacts are specific to this coastal county area.

  4. Impacts of Array Configuration on Land-Use Requirements for Large-Scale Photovoltaic Deployment in the United States: Preprint

    SciTech Connect (OSTI)

    Denholm, P.; Margolis, R. M.

    2008-05-01

    Land use is often cited as an important issue for renewable energy technologies. In this paper we examine the relationship between land-use requirements for large-scale photovoltaic (PV) deployment in the U.S. and PV-array configuration. We estimate the per capita land requirements for solar PV and find that array configuration is a stronger driver of energy density than regional variations in solar insolation. When deployed horizontally, the PV land area needed to meet 100% of an average U.S. citizen's electricity demand is about 100 m2. This requirement roughly doubles to about 200 m2 when using 1-axis tracking arrays. By comparing these total land-use requirements with other current per capita land uses, we find that the land-use requirements of solar photovoltaics are modest, especially when considering the availability of zero-impact 'land' on rooftops. Additional work is needed to examine the tradeoffs between array spacing, self-shading losses, and land use, along with possible techniques to mitigate the land-use impacts of large-scale PV deployment.
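
    The ~100 m2 per capita figure can be reproduced to order of magnitude with a back-of-the-envelope calculation. The input numbers below (demand, insolation, efficiency, derate) are illustrative assumptions, not the paper's actual inputs.

```python
# Back-of-the-envelope version of the per-capita PV land-use estimate.
# All input values are rough illustrative assumptions.
DEMAND_KWH_YR = 13_000        # approximate U.S. per-capita electricity use
INSOLATION_KWH_M2_YR = 1_800  # horizontal global insolation, mid-U.S.
MODULE_EFF = 0.13             # crystalline-Si module efficiency (2008-era)
SYSTEM_DERATE = 0.77          # inverter, wiring, soiling losses

def land_area_m2(packing_factor=1.0):
    """Per-capita land area; packing_factor > 1 models the extra row
    spacing of tracking arrays, which roughly doubles the footprint."""
    yield_kwh_per_m2 = INSOLATION_KWH_M2_YR * MODULE_EFF * SYSTEM_DERATE
    return packing_factor * DEMAND_KWH_YR / yield_kwh_per_m2

horizontal = land_area_m2(1.0)  # same order as the paper's ~100 m2
tracking = land_area_m2(2.0)    # spacing roughly doubles the land use
```

    With these assumptions the horizontal case comes out in the tens-of-square-metres range per person, consistent with the paper's conclusion that array spacing, not regional insolation, dominates the land requirement.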

  5. Field Scale Test and Verification of CHP System at the Ritz Carlton, San Francisco, August 2007

    Broader source: Energy.gov [DOE]

    ITP Industrial Distributed Energy: National Account Energy Alliance Final Report for the Field Scale Test and Verification of a PureComfort® 240M Combined Heat and Power System at the Ritz Carlton, San Francisco

  6. DOE-Sponsored Field Test Finds Potential for Permanent Storage of CO2 in Lignite Seams

    Broader source: Energy.gov [DOE]

    A field test sponsored by the U.S. Department of Energy has demonstrated that opportunities to permanently store carbon in unmineable seams of lignite may be more widespread than previously documented.

  7. Field Testing of a Wet FGD Additive for Enhanced Mercury Control - Task 3 Full-scale Test Results

    SciTech Connect (OSTI)

    Gary Blythe

    2007-05-01

    This Topical Report summarizes progress on Cooperative Agreement DE-FC26-04NT42309, 'Field Testing of a Wet FGD Additive'. The objective of the project is to demonstrate the use of a flue gas desulfurization (FGD) additive, Degussa Corporation's TMT-15, to prevent the reemission of elemental mercury (Hg{sup 0}) in flue gas exiting wet FGD systems on coal-fired boilers. Furthermore, the project intends to demonstrate whether the additive can be used to precipitate most of the mercury (Hg) removed in the wet FGD system as a fine TMT salt that can be separated from the FGD liquor and bulk solid byproducts for separate disposal. The project is conducting pilot- and full-scale tests of the TMT-15 additive in wet FGD absorbers. The tests are intended to determine the additive dosages required to prevent Hg{sup 0} reemissions and to separate mercury from the normal FGD byproducts for three coal types: Texas lignite/Powder River Basin (PRB) coal blend, high-sulfur Eastern bituminous coal, and low-sulfur Eastern bituminous coal. The project team consists of URS Group, Inc., EPRI, TXU Generation Company LP, Southern Company, and Degussa Corporation. TXU Generation has provided the Texas lignite/PRB cofired test site for pilot FGD tests, Monticello Steam Electric Station Unit 3. Southern Company is providing the low-sulfur Eastern bituminous coal host site for wet scrubbing tests, as well as the pilot- and full-scale jet bubbling reactor (JBR) FGD systems to be tested. IPL, an AES company, provided the high-sulfur Eastern bituminous coal full-scale FGD test site and cost sharing. Degussa Corporation is providing the TMT-15 additive and technical support to the test program as cost sharing. The project is being conducted in six tasks. Task 1 involves project planning and Task 6 involves management and reporting; the other four tasks involve field testing on FGD systems, either at pilot or full scale.
The four tasks include: Task 2 - Pilot Additive Testing

  8. Fireside corrosion testing of candidate superheater tube alloys, coatings, and claddings -- Phase 2 field testing

    SciTech Connect (OSTI)

    Blough, J.L.

    1996-08-01

    In Phase 1 of this project, a variety of developmental and commercial tubing alloys and claddings was exposed to laboratory fireside corrosion testing simulating a superheater or reheater in a coal-fired boiler. Phase 2 (in situ testing) has exposed samples of 347, RA85H, HR3C, 253MA, Fe{sub 3}Al + 5Cr, 310 modified, NF 709, 690 clad, and 671 clad for over 10,000 hours to the actual operating conditions of a 250-MW coal-fired boiler. The samples were mounted on air-cooled, retractable corrosion probes installed in the reheater cavity and controlled to the operating metal temperatures of an existing and an advanced-cycle coal-fired boiler. Samples of each alloy are being exposed for 4,000, 12,000, and 16,000 hours of operation. The present results are for the metallurgical examination of the corrosion probe samples after approximately 4,400 hours of exposure.

  9. Autonomie Large Scale Deployment

    Broader source: Energy.gov [DOE]

    2011 DOE Hydrogen and Fuel Cells Program, and Vehicle Technologies Program Annual Merit Review and Peer Evaluation

  10. Running Large Scale Jobs

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    peta-scale production systems. For example, certain applications may not have enough memory per core, the default environment variables may need to be adjusted, or IO dominates...

  11. Engineering test plan for field radionuclide migration experiments in climax granite

    SciTech Connect (OSTI)

    Isherwood, D.; Raber, E.; Stone, R.; Lord, D.; Rector, N.; Failor, R.

    1982-05-01

    This Engineering Test Plan (ETP) describes field studies of radionuclide migration in fractured rock designed for the Climax granite at the Nevada Test Site. The purpose of the ETP is to provide a detailed written document of the method of accomplishing these studies. The ETP contains the experimental test plans, an instrumentation plan, system schematics, a description of the test facility, and a brief outline of the laboratory support studies needed to understand the chemistry of the rock/water/radionuclide interactions. Results of our initial hydrologic investigations are presented along with pretest predictions based on the hydrologic test results.

  12. Field Testing of Low-Cost Bio-Based Phase Change Material

    SciTech Connect (OSTI)

    Biswas, Kaushik; Childs, Phillip W; Atchley, Jerald Allen

    2013-03-01

    A test wall built with phase change material (PCM)-enhanced loose-fill cavity insulation was monitored for a period of about a year in the warm-humid climate of Charleston, South Carolina. The test wall was divided into various sections, one of which contained only loose-fill insulation and served as a control for comparing and evaluating the wall sections with the PCM-enhanced insulation. This report summarizes the findings of the field test.

  13. Hanford 100-D Area Biostimulation Soluble Substrate Field Test: Interim Data Summary for the Substrate Injection and Process Monitoring Phases of the Field Test

    SciTech Connect (OSTI)

    Truex, Michael J.; Vermeul, Vincent R.; Mackley, Rob D.; Fritz, Brad G.; Mendoza, Donaldo P.; Johnson, Christian D.; Elmore, Rebecca P.; Brockman, Fred J.; Bilskis, Christina L.

    2008-06-01

    Pacific Northwest National Laboratory is conducting a treatability test designed to demonstrate that in situ biostimulation can be applied to help meet cleanup goals in the Hanford Site 100-D Area. The in situ biostimulation technology is intended to provide supplemental treatment upgradient of the In Situ Redox Manipulation (ISRM) barrier by reducing the concentration of the primary oxidizing species in groundwater (i.e., nitrate and dissolved oxygen) and chromate, thereby increasing the longevity of the ISRM barrier. This report summarizes the initial results from field testing of an in situ biological treatment zone implemented through injection of a soluble substrate. The field test is divided into operational phases that include substrate injection, process monitoring, and performance monitoring. The results summarized herein are for the substrate injection and process monitoring phases, encompassing approximately the first three months of field testing. Performance monitoring was ongoing at the time this report was prepared and is planned to extend over approximately 18 months. As such, this report is an interim data summary report for the field test. The treatability testing has multiple objectives focused on evaluating the performance of biostimulation as a reducing barrier for nitrate, oxygen, and chromate. The following conclusions related to these objectives are supported by the data provided in this report. Substrate was successfully distributed to a radius of about 15 m (50 ft) from the injection well. Monitoring data indicate that microbial growth initiated rapidly, and this rapid growth would limit the ability to inject substrate into significantly larger zones from a single injection well. As would be expected, the uniformity of substrate distribution was impacted by subsurface heterogeneity. However, subsequent microbial activity and the ability to reduce the targeted species were observed throughout the monitored zone during the process monitoring phase.
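
    The ~15 m distribution radius can be related to injected volume with a simple cylindrical mixing estimate. This is an illustrative sketch only; the aquifer thickness, effective porosity, and injected volume below are assumed values, not parameters reported for the 100-D Area test.

```python
import math

# Idealized cylindrical injection zone around a single well:
#   V = pi * r^2 * b * n   =>   r = sqrt(V / (pi * b * n))
# where b is saturated thickness and n is effective porosity.
# All numbers here are assumptions for illustration.

def injection_radius_m(volume_m3, thickness_m, porosity):
    """Radius of an idealized cylindrical zone swept by volume_m3."""
    return math.sqrt(volume_m3 / (math.pi * thickness_m * porosity))

# Example: with an assumed b = 5 m and n = 0.2, reaching the ~15 m
# radius reported in the test implies on the order of 700 m3 injected.
r = injection_radius_m(700.0, 5.0, 0.2)
```

    Real substrate distribution is less uniform than this cylinder, as the abstract notes for subsurface heterogeneity, but the estimate shows why distribution radius grows only with the square root of injected volume, limiting single-well treatment zones.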

  14. Fireside corrosion testing of candidate superheater tube alloys, coatings, and claddings -- Phase 2 field testing

    SciTech Connect (OSTI)

    Blough, J.L.; Seitz, W.W.; Girshik, A.

    1998-06-01

    In Phase 1 of this project, laboratory experiments were performed on a variety of developmental and commercial tubing alloys and claddings by exposing them to fireside corrosion tests which simulated a superheater or reheater in a coal-fired boiler. Phase 2 (in situ testing) has exposed samples of 347, RA85H, HR3C, RA253MA, Fe{sub 3}Al + 5Cr, Ta-modified 310, NF 709, 690 clad, 671 clad, and 800HT for up to approximately 16,000 hours to the actual operating conditions of a 250-MW, coal-fired boiler. The samples were installed on air-cooled, retractable corrosion probes, installed in the reheater cavity, and controlled to the operating metal temperatures of an existing and advanced-cycle, coal-fired boiler. Samples of each alloy were exposed for 4,483, 11,348, and 15,883 hours of operation. The present results are for the metallurgical examination of the corrosion probe samples after the full 15,883 hours of exposure. A previous topical report has been issued for the 4,483 hours of exposure.

  15. Scattering of electromagnetic waves by vortex density structures associated with interchange instability: Analytical and large scale plasma simulation results

    SciTech Connect (OSTI)

    Sotnikov, V.; Kim, T.; Lundberg, J.; Paraschiv, I.; Mehlhorn, T. A.

    2014-05-15

    The presence of plasma turbulence can strongly influence the propagation properties of electromagnetic signals used for surveillance and communication. In particular, we are interested in the generation of low frequency plasma density irregularities in the form of coherent vortex structures. Interchange or flute type density irregularities in magnetized plasma are associated with Rayleigh-Taylor type instability. These types of density irregularities play an important role in the refraction and scattering of high frequency electromagnetic signals propagating in the Earth's ionosphere, in high energy density physics, and in many other applications. We will discuss scattering of high frequency electromagnetic waves on low frequency density irregularities due to the presence of vortex density structures associated with interchange instability. We will also present particle-in-cell simulation results of electromagnetic scattering on vortex type density structures using the large scale plasma code LSP and compare them with analytical results.

  16. Hydrogen atom temperature measured with wavelength-modulated laser absorption spectroscopy in large scale filament arc negative hydrogen ion source

    SciTech Connect (OSTI)

    Nakano, H.; Goto, M.; Tsumori, K.; Kisaki, M.; Ikeda, K.; Nagaoka, K.; Osakabe, M.; Takeiri, Y.; Kaneko, O.; Nishiyama, S.; Sasaki, K.

    2015-04-08

    The velocity distribution function of hydrogen atoms is one of the useful parameters for understanding particle dynamics, from negative hydrogen production to extraction, in a negative hydrogen ion source. The hydrogen atom temperature is one indicator of this velocity distribution function. To assess the feasibility of hydrogen atom temperature measurement in a large scale filament arc negative hydrogen ion source for fusion, a model calculation of wavelength-modulated laser absorption spectroscopy of the hydrogen Balmer alpha line was performed. By utilizing a widely tunable diode laser, we successfully obtained a hydrogen atom temperature of ∼3000 K in the vicinity of the plasma grid electrode. The hydrogen atom temperature increases with arc power, and decreases with increasing hydrogen gas filling pressure before becoming constant.
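
    The temperature follows from the Doppler width of the absorption line via the standard Gaussian-FWHM relation. The sketch below inverts that relation; the ~25.6 pm linewidth is chosen to reproduce the ~3000 K quoted above and is not a measured value from the paper.

```python
import math

# Doppler broadening of an absorption line (Gaussian FWHM):
#   dlambda = lambda0 * sqrt(8 * k * T * ln2 / (m * c^2))
# inverted here to get T from a measured FWHM. SI units throughout.
K_B = 1.380649e-23         # Boltzmann constant, J/K
C = 2.99792458e8           # speed of light, m/s
M_H = 1.6735575e-27        # hydrogen atom mass, kg
LAMBDA_HALPHA = 656.28e-9  # Balmer-alpha wavelength, m

def doppler_temperature_K(fwhm_m, lambda0=LAMBDA_HALPHA, mass=M_H):
    """Atom temperature implied by a Doppler-broadened FWHM (metres)."""
    return (fwhm_m / lambda0) ** 2 * mass * C ** 2 / (8 * K_B * math.log(2))

# An illustrative ~25.6 pm width corresponds to roughly 3000 K.
T = doppler_temperature_K(25.6e-12)
```

    In practice, instrumental and pressure broadening must be deconvolved from the measured profile before applying this relation; the sketch assumes a purely Doppler-broadened line.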

  17. Research project on CO2 geological storage and groundwater resources: Large-scale hydrological evaluation and modeling of impact on groundwater systems

    SciTech Connect (OSTI)

    Birkholzer, Jens; Zhou, Quanlin; Rutqvist, Jonny; Jordan, Preston; Zhang, K.; Tsang, Chin-Fu

    2007-10-24

    If carbon dioxide capture and storage (CCS) technologies are implemented on a large scale, the amounts of CO2 injected and sequestered underground could be extremely large. The stored CO2 then replaces large volumes of native brine, which can cause considerable pressure perturbation and brine migration in the deep saline formations. If hydraulically communicating, either directly via updipping formations or through interlayer pathways such as faults or imperfect seals, these perturbations may impact shallow groundwater or even surface water resources used for domestic or commercial water supply. Possible environmental concerns include changes in pressure and water table, changes in discharge and recharge zones, as well as changes in water quality. In compartmentalized formations, issues related to large-scale pressure buildup and brine displacement may also cause storage capacity problems, because significant pressure buildup can be produced. To address these issues, a three-year research project was initiated in October 2006, the first part of which is summarized in this annual report.
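
    The scale of the brine displacement concern can be made concrete with a one-line volumetric estimate: injected CO2 displaces roughly its own reservoir-condition volume of brine. The injected mass and CO2 density below are illustrative assumptions, not figures from the project.

```python
# Displaced brine volume is of the same order as the injected CO2 volume
# at reservoir conditions: V = M / rho_CO2. Numbers are illustrative.

def displaced_brine_volume_m3(co2_mass_kg, co2_density_kg_m3=700.0):
    """Reservoir-condition volume of injected CO2 (~brine displaced).
    700 kg/m3 is a typical supercritical CO2 density at storage depth."""
    return co2_mass_kg / co2_density_kg_m3

# One gigatonne of CO2 (1e12 kg) occupies on the order of 1.4 km^3,
# which suggests why far-field pressure perturbation is a concern.
v = displaced_brine_volume_m3(1.0e12)
```

    Where that displaced volume goes, and how far the pressure perturbation propagates, depends on the hydraulic communication pathways the abstract describes, which is what the project's large-scale modeling addresses.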

  18. Large-scale mapping of landslides in the epicentral area of the Loma Prieta earthquake of October 17, 1989, Santa Cruz County

    SciTech Connect (OSTI)

    Spittler, T.E.; Sydnor, R.H.; Manson, M.W.; Levine, P.; McKittrick, M.M.

    1990-01-01

    The Loma Prieta earthquake of October 17, 1989 triggered landslides throughout the Santa Cruz Mountains in central California. The California Department of Conservation, Division of Mines and Geology (DMG) responded to a request for assistance from the County of Santa Cruz, Office of Emergency Services to evaluate the geologic hazard from major reactivated large landslides. DMG prepared a set of geologic maps showing the landslide features that resulted from the October 17 earthquake. The principal purpose of large-scale mapping of these landslides is: (1) to provide county officials with regional landslide information that can be used for timely recovery of damaged areas; (2) to identify disturbed ground which is potentially vulnerable to landslide movement during winter rains; (3) to provide county planning officials with timely geologic information that will be used for effective land-use decisions; (4) to document regional landslide features that may not otherwise be available for individual site reconstruction permits and for future development.

  19. Final Report on DOE Project entitled Dynamic Optimized Advanced Scheduling of Bandwidth Demands for Large-Scale Science Applications

    SciTech Connect (OSTI)

    Ramamurthy, Byravamurthy

    2014-05-05

    In this project, we developed scheduling frameworks and algorithms for the dynamic bandwidth demands of large-scale science applications. Apart from theoretical approaches such as Integer Linear Programming, Tabu Search, and Genetic Algorithm heuristics, we utilized practical data from the ESnet OSCARS project (from our DOE lab partners) to conduct realistic simulations of our approaches. We disseminated our work through conference paper presentations, journal papers, and a book chapter. We addressed the problem of scheduling lightpaths over optical wavelength division multiplexed (WDM) networks and published several conference and journal papers on this topic. We also addressed the problems of joint allocation of computing, storage, and networking resources in Grid/Cloud networks and proposed energy-efficient mechanisms for operating optical WDM networks.
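
    The core admission question behind advance bandwidth scheduling can be sketched in a few lines: does a new time-windowed reservation fit under link capacity alongside existing ones? This is an illustrative sketch, not the project's ILP or heuristic algorithms, and the reservation values are made up.

```python
# Minimal admission check for advance bandwidth reservations on one link.
# Illustrative only; real schedulers (e.g., the project's ILP/Tabu/GA
# approaches) also choose paths, wavelengths, and start times.

def fits(existing, new, capacity):
    """existing: list of (start, end, bandwidth); new: (start, end, bandwidth).
    Return True if adding `new` never exceeds `capacity` at any instant."""
    s, e, bw = new
    # Total load only changes at interval endpoints, so checking the
    # endpoints that fall inside [s, e) is sufficient.
    points = {s} | {t for (a, b, _) in existing for t in (a, b) if s <= t < e}
    for t in points:
        load = bw + sum(w for (a, b, w) in existing if a <= t < b)
        if load > capacity:
            return False
    return True

reservations = [(0, 10, 40), (5, 15, 30)]          # (start, end, Gb/s)
ok = fits(reservations, (8, 12, 30), capacity=100)  # 40+30+30 = 100 at t=8
```

    An ILP formulation generalizes this check into constraints (capacity at every event point) plus an objective such as maximizing admitted demand, which is where the heuristics mentioned above come in.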

  20. Sensitivity analysis for joint inversion of ground-penetrating radar and thermal-hydrological data from a large-scale underground heater test

    SciTech Connect (OSTI)

    Kowalsky, M.B.; Birkholzer, J.; Peterson, J.; Finsterle, S.; Mukhopadhyay, S.; Tsang, Y.T.

    2007-06-25

    We describe a joint inversion approach that combines geophysical and thermal-hydrological data for the estimation of (1) thermal-hydrological parameters (such as permeability, porosity, thermal conductivity, and parameters of the capillary pressure and relative permeability functions) that are necessary for predicting the flow of fluids and heat in fractured porous media, and (2) parameters of the petrophysical function that relates water saturation, porosity, and temperature to the dielectric constant. The approach incorporates the coupled simulation of nonisothermal multiphase fluid flow and ground-penetrating radar (GPR) travel times within an optimization framework. We discuss application of the approach to a large-scale in situ heater test which was conducted at Yucca Mountain, Nevada, to better understand the coupled thermal, hydrological, mechanical, and chemical processes that may occur in the fractured rock mass around a geologic repository for high-level radioactive waste. We provide a description of the time-lapse geophysical data (i.e., cross-borehole ground-penetrating radar) and thermal-hydrological data (i.e., temperature and water content data) collected before and during the four-year heating phase of the test, and analyze the sensitivity of the most relevant thermal-hydrological and petrophysical parameters to the available data. To demonstrate feasibility of the approach, and as a first step toward comprehensive inversion of the heater test data, we apply the approach to estimate one parameter, the permeability of the rock matrix.
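
    The petrophysical link between saturation, porosity, and GPR travel time can be illustrated with a common volumetric mixing law (CRIM). The paper estimates parameters of its own petrophysical function; CRIM and the permittivity values below are only an illustrative stand-in.

```python
import math

# CRIM (complex refractive index model) mixing law, a common choice of
# petrophysical function relating porosity and water saturation to the
# bulk dielectric constant:
#   sqrt(eps) = (1-phi)*sqrt(eps_grain) + phi*Sw*sqrt(eps_water)
#               + phi*(1-Sw)*sqrt(eps_air)
# Used here only as an illustrative stand-in for the paper's function.

C = 2.99792458e8  # speed of light, m/s

def crim_permittivity(porosity, sw, eps_grain=5.0, eps_water=81.0, eps_air=1.0):
    root = ((1 - porosity) * math.sqrt(eps_grain)
            + porosity * sw * math.sqrt(eps_water)
            + porosity * (1 - sw) * math.sqrt(eps_air))
    return root ** 2

def gpr_velocity(eps):
    """GPR wave speed in a low-loss medium (m/s)."""
    return C / math.sqrt(eps)

# Wetter rock -> higher permittivity -> slower GPR waves, which is why
# cross-borehole travel times are sensitive to saturation changes during
# heating and cooling of the rock mass.
eps_dry = crim_permittivity(0.10, 0.2)
eps_wet = crim_permittivity(0.10, 0.9)
```

    The joint inversion exploits exactly this sensitivity: travel-time changes constrain the saturation field, which in turn constrains the thermal-hydrological parameters driving the drying and rewetting around the heaters.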