National Library of Energy BETA

Sample records for large scale geologic

  1. LUCI: A facility at DUSEL for large-scale experimental study of geologic carbon sequestration

    SciTech Connect (OSTI)

    Peters, C. A.; Dobson, P.F.; Oldenburg, C.M.; Wang, J. S. Y.; Onstott, T.C.; Scherer, G.W.; Freifeld, B.M.; Ramakrishnan, T.S.; Stabinski, E.L.; Liang, K.; Verma, S.

    2010-10-01

    LUCI, the Laboratory for Underground CO{sub 2} Investigations, is an experimental facility being planned for the DUSEL underground laboratory in South Dakota, USA. It is designed to study vertical flow of CO{sub 2} in porous media over length scales representative of leakage scenarios in geologic carbon sequestration. The plan for LUCI is a set of three vertical column pressure vessels, each of which is {approx}500 m long and {approx}1 m in diameter. The vessels will be filled with brine and sand or sedimentary rock. Each vessel will have an inner column to simulate a well for deployment of down-hole logging tools. The experiments are configured to simulate CO{sub 2} leakage by releasing CO{sub 2} into the bottoms of the columns. The scale of the LUCI facility will permit measurements to study CO{sub 2} flow over pressure and temperature variations that span supercritical to subcritical gas conditions. It will enable observation or inference of a variety of relevant processes such as buoyancy-driven flow in porous media, Joule-Thomson cooling, thermal exchange, viscous fingering, residual trapping, and CO{sub 2} dissolution. Experiments are also planned for reactive flow of CO{sub 2} and acidified brines in caprock sediments and well cements, and for CO{sub 2}-enhanced methanogenesis in organic-rich shales. A comprehensive suite of geophysical logging instruments will be deployed to monitor experimental conditions as well as provide data to quantify vertical resolution of sensor technologies. The experimental observations from LUCI will generate fundamental new understanding of the processes governing CO{sub 2} trapping and vertical migration, and will provide valuable data to calibrate and validate large-scale model simulations.
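
    As context for the Joule-Thomson cooling listed among the processes above, the temperature drop accompanying isenthalpic expansion is governed by the Joule-Thomson coefficient. The definition below is standard thermodynamics, offered as background rather than as material from the record itself:

    ```latex
    % Joule-Thomson coefficient: temperature change per unit pressure change
    % at constant enthalpy (isenthalpic expansion). For CO2 at the pressures
    % and temperatures relevant to leakage, \mu_{JT} > 0, so expanding CO2
    % cools; one of the effects the LUCI columns are instrumented to observe.
    \mu_{JT} = \left( \frac{\partial T}{\partial p} \right)_{H}
    ```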

  2. On scale and magnitude of pressure build-up induced by large-scale geologic storage of CO2

    SciTech Connect (OSTI)

    Zhou, Q.; Birkholzer, J. T.

    2011-05-01

    The scale and magnitude of pressure perturbation and brine migration induced by geologic carbon sequestration is discussed assuming a full-scale deployment scenario in which enough CO{sub 2} is captured and stored to make relevant contributions to global climate change mitigation. In this scenario, the volumetric rates and cumulative volumes of CO{sub 2} injection would be comparable to or higher than those related to existing deep-subsurface injection and extraction activities, such as oil production. Large-scale pressure build-up in response to the injection may limit the dynamic storage capacity of suitable formations, because over-pressurization may fracture the caprock, may drive CO{sub 2}/brine leakage through localized pathways, and may cause induced seismicity. On the other hand, laterally extensive sedimentary basins may be less affected by such limitations because (i) local pressure effects are moderated by pressure propagation and brine displacement into regions far away from the CO{sub 2} storage domain; and (ii) diffuse and/or localized brine migration into overlying and underlying formations allows for pressure bleed-off in the vertical direction. A quick analytical estimate of the extent of pressure build-up induced by industrial-scale CO{sub 2} storage projects is presented. Also discussed are pressure perturbation and attenuation effects simulated for two representative sedimentary basins in the USA: the laterally extensive Illinois Basin and the partially compartmentalized southern San Joaquin Basin in California. These studies show that the limiting effect of pressure build-up on dynamic storage capacity is not as significant as suggested by Ehlig-Economides and Economides, who considered closed systems without any attenuation effects.
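
    The "quick analytical estimate" mentioned in this record is not reproduced here; as background, a standard single-phase radial-flow approximation of the same general kind is the Theis/Cooper-Jacob solution, shown below as an illustrative sketch rather than the authors' exact formulation:

    ```latex
    % Cooper-Jacob (late-time Theis) approximation for pressure build-up
    % at radius r and time t around a well injecting at volumetric rate Q
    % into a formation of thickness b, permeability k, porosity \phi,
    % total compressibility c_t, and brine viscosity \mu.
    \Delta p(r,t) \approx \frac{Q\mu}{4\pi k b}
      \ln\!\left(\frac{2.25\,k\,t}{\phi\,\mu\,c_t\,r^{2}}\right)
    ```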

  3. Research project on CO2 geological storage and groundwater resources: Large-scale hydrological evaluation and modeling of impact on groundwater systems

    SciTech Connect (OSTI)

    Birkholzer, Jens; Zhou, Quanlin; Rutqvist, Jonny; Jordan, Preston; Zhang, K.; Tsang, Chin-Fu

    2007-10-24

    If carbon dioxide capture and storage (CCS) technologies are implemented on a large scale, the amounts of CO2 injected and sequestered underground could be extremely large. The stored CO2 then replaces large volumes of native brine, which can cause considerable pressure perturbation and brine migration in the deep saline formations. If hydraulically communicating, either directly via updipping formations or through interlayer pathways such as faults or imperfect seals, these perturbations may impact shallow groundwater or even surface water resources used for domestic or commercial water supply. Possible environmental concerns include changes in pressure and water table, changes in discharge and recharge zones, as well as changes in water quality. In compartmentalized formations, issues related to large-scale pressure buildup and brine displacement may also cause storage capacity problems, because significant pressure buildup can be produced. To address these issues, a three-year research project was initiated in October 2006, the first part of which is summarized in this annual report.

  4. Constructing a large-scale 3D Geologic Model for Analysis of the Non-Proliferation Experiment

    SciTech Connect (OSTI)

    Wagoner, J; Myers, S

    2008-04-09

    We have constructed a regional 3D geologic model of the southern Great Basin, in support of a seismic wave propagation investigation of the 1993 Nonproliferation Experiment (NPE) at the Nevada Test Site (NTS). The model is centered on the NPE and spans longitude -119.5{sup o} to -112.6{sup o} and latitude 34.5{sup o} to 39.8{sup o}; the depth ranges from the topographic surface to 150 km below sea level. The model includes the southern half of Nevada, as well as parts of eastern California, western Utah, and a portion of northwestern Arizona. The upper crust is constrained by both geologic and geophysical studies, while the lower crust and upper mantle are constrained by geophysical studies. The mapped upper crustal geologic units are Quaternary basin fill, Tertiary deposits, pre-Tertiary deposits, intrusive rocks of all ages, and calderas. The lower crust and upper mantle are parameterized with 5 layers, including the Moho. Detailed geologic data, including surface maps, borehole data, and geophysical surveys, were used to define the geology at the NTS. Digital geologic outcrop data were available for both Nevada and Arizona, whereas geologic maps for California and Utah were scanned and hand-digitized. Published gravity data (2 km spacing) were used to determine the thickness of the Cenozoic deposits and thus estimate the depth of the basins. The free surface is based on a 10 m lateral resolution DEM at the NTS and a 90 m lateral resolution DEM elsewhere. Variations in crustal thickness are based on receiver function analysis and a framework compilation of reflection/refraction studies. We used Earthvision (Dynamic Graphics, Inc.) to integrate the geologic and geophysical information into a model of x, y, z, p nodes, where p is a unique integer index representing the geologic unit. For seismic studies, the geologic units are mapped to specific seismic velocities. The gross geophysical structure of the crust and upper mantle is taken from regional surface-wave studies. For regional seismic simulations we convert this realistic geologic model into elastic parameters. Upper crustal units are treated as seismically homogeneous, while the lower crust and upper mantle are parameterized by a smoothly varying velocity profile. In order to mitigate spurious reflections, the lower crust and upper mantle are treated as velocity gradients as a function of depth.
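
    The abstract describes the model as x, y, z, p nodes, where the integer index p is later mapped to seismic velocities. A minimal sketch of that lookup step is given below; the unit indices and velocity values are hypothetical placeholders, not values from the Wagoner and Myers model:

    ```python
    import numpy as np

    # Hypothetical table mapping geologic-unit index p to elastic parameters
    # (Vp in km/s, Vs in km/s, density in g/cm^3). The real model assigns its
    # own indices (basin fill, Tertiary, pre-Tertiary, intrusives, calderas)
    # and velocities constrained by geologic and geophysical studies.
    UNIT_PROPS = {
        1: (1.8, 0.6, 1.9),   # e.g., Quaternary basin fill
        2: (3.2, 1.6, 2.3),   # e.g., Tertiary deposits
        3: (5.5, 3.1, 2.7),   # e.g., pre-Tertiary deposits
        4: (6.0, 3.4, 2.75),  # e.g., intrusive rocks
    }

    def to_elastic(p_index):
        """Map an integer array of unit indices to an (..., 3) array of
        (Vp, Vs, rho), one triple per model node."""
        out = np.empty(p_index.shape + (3,))
        for p, props in UNIT_PROPS.items():
            out[p_index == p] = props  # fill all nodes of this unit at once
        return out

    # The p column of four (x, y, z, p) nodes:
    print(to_elastic(np.array([1, 1, 3, 4])))
    ```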

  5. Characterization of Pliocene and Miocene Formations in the Wilmington Graben, Offshore Los Angeles, for Large-Scale Geologic Storage of CO₂

    SciTech Connect (OSTI)

    Bruno, Michael

    2014-12-08

    Geomechanics Technologies has completed a detailed characterization study of the Wilmington Graben offshore Southern California area for large-scale CO₂ storage. This effort has included: an evaluation of existing wells in both State and Federal waters; field acquisition of about 175 km (109 mi) of new seismic data; new well drilling; and development of integrated 3D geologic, geomechanics, and fluid flow models for the area. The geologic analysis indicates that more than 796 MMt of storage capacity is available within the Pliocene and Miocene formations in the Graben for midrange geologic estimates (P50). Geomechanical analyses indicate that injection can be conducted without significant risk of surface deformation, induced stresses, or fault activation. Numerical analysis of fluid migration indicates that injection into the Pliocene Formation at depths of 1525 m (5000 ft) would lead to undesirable vertical migration of the CO₂ plume. Recent well drilling, however, indicates that deeper sand is present at depths exceeding 2135 m (7000 ft), which could be viable for large-volume storage. For vertical containment, injection would need to be limited to about 250,000 metric tons per year per well, would need to be placed at depths greater than 2135 m (7000 ft), and would need to be placed in new wells located at least 1 mile from any existing offset wells. As a practical matter, this would likely limit storage operations in the Wilmington Graben to about 1 million tons per year or less. A quantitative risk analysis for the Wilmington Graben indicates that such large-scale CO₂ storage in the area would represent higher risk than other similar-size projects in the US and overseas.
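
    Midrange (P50) capacity figures like the 796 MMt quoted above are commonly derived with a volumetric method; the DOE-NETL form below is included as general background, since this record does not state which estimator was used:

    ```latex
    % DOE-NETL volumetric estimate of prospective CO2 storage mass in a
    % saline formation: area A, gross thickness h, total porosity \phi,
    % CO2 density \rho at reservoir conditions, and storage efficiency
    % factor E (fraction of the pore volume CO2 can actually occupy).
    G_{\mathrm{CO_2}} = A\, h\, \phi\, \rho_{\mathrm{CO_2}}\, E
    ```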

  6. Running Large Scale Jobs

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Running Large Scale Jobs Running Large Scale Jobs Users face various challenges with running and scaling large scale jobs on peta-scale production systems. For example, certain applications may not have enough memory per core, the default environment variables may need to be adjusted, or I/O dominates run time. This page lists some available programming and run time tuning options and tips users can try on their large scale applications on Hopper for better performance. Try different compilers

  7. Running Large Scale Jobs

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    try on their large scale applications on Hopper for better performance. Try different compilers and compiler options The available compilers on Hopper are PGI, Cray, Intel, GNU,...

  8. large-scale conveyance

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    large-scale conveyance - Sandia Energy

  9. Large scale tracking algorithms.

    SciTech Connect (OSTI)

    Hansen, Ross L.; Love, Joshua Alan; Melgaard, David Kennett; Karelitz, David B.; Pitts, Todd Alan; Zollweg, Joshua David; Anderson, Dylan Z.; Nandy, Prabal; Whitlow, Gary L.; Bender, Daniel A.; Byrne, Raymond Harry

    2015-01-01

    Low signal-to-noise data processing algorithms for improved detection, tracking, discrimination and situational threat assessment are a key research challenge. As sensor technologies progress, the number of pixels will increase significantly. This will result in increased resolution, which could improve object discrimination, but unfortunately will also result in a significant increase in the number of potential targets to track. Many tracking techniques, like multi-hypothesis trackers, suffer from a combinatorial explosion as the number of potential targets increases. As the resolution increases, the phenomenology applied towards detection algorithms also changes. For low resolution sensors, "blob" tracking is the norm. For higher resolution data, additional information may be employed in the detection and classification steps. The most challenging scenarios are those where the targets cannot be fully resolved, yet must be tracked and distinguished from neighboring closely spaced objects. Tracking vehicles in an urban environment is an example of such a challenging scenario. This report evaluates several potential tracking algorithms for large-scale tracking in an urban environment.
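
    The combinatorial explosion noted in this abstract can be made concrete. With n tracks and m new measurements, where each measurement is assigned to at most one track or declared a false alarm and each track receives at most one measurement, the number of single-scan association hypotheses is the sum over k of C(n,k) C(m,k) k!. The short sketch below (illustrative only, not taken from the report) evaluates that count:

    ```python
    from math import comb, factorial

    def association_hypotheses(n_tracks, n_meas):
        """Count single-scan data-association hypotheses: each measurement
        maps to at most one track (else false alarm) and each track takes
        at most one measurement (else missed detection)."""
        return sum(
            comb(n_tracks, k) * comb(n_meas, k) * factorial(k)
            for k in range(min(n_tracks, n_meas) + 1)
        )

    # Growth is factorial-like: 10 tracks and 10 measurements already
    # produce roughly 2.3e8 hypotheses for a single scan.
    for n in (2, 5, 10):
        print(n, association_hypotheses(n, n))
    ```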

  10. Running Large Scale Jobs

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    peta-scale production systems. For example, certain applications may not have enough memory per core, the default environment variables may need to be adjusted, or IO dominates...

  11. The Potential for Increased Atmospheric CO2 Emissions and Accelerated Consumption of Deep Geologic CO2 Storage Resources Resulting from the Large-Scale Deployment of a CCS-Enabled Unconventional Fossil Fuels Industry in the U.S.

    SciTech Connect (OSTI)

    Dooley, James J.; Dahowski, Robert T.; Davidson, Casie L.

    2009-11-02

    Desires to enhance the energy security of the United States have spurred significant interest in the development of abundant domestic heavy hydrocarbon resources, including oil shale and coal, to produce unconventional liquid fuels to supplement conventional oil supplies. However, the production processes for these unconventional fossil fuels create large quantities of carbon dioxide (CO2), and this remains one of the key arguments against such development. Carbon dioxide capture and storage (CCS) technologies could reduce these emissions, and preliminary analysis of regional CO2 storage capacity in locations where such facilities might be sited within the U.S. indicates that there appears to be sufficient storage capacity, primarily in deep saline formations, to accommodate the CO2 from these industries. Nevertheless, even assuming wide-scale availability of cost-effective CO2 capture and geologic storage resources, the emergence of a domestic U.S. oil shale or coal-to-liquids (CTL) industry would be responsible for significant increases in CO2 emissions to the atmosphere. The authors present modeling results of two future hypothetical climate policy scenarios indicating that the oil shale production facilities required to produce 3 MMB/d from the Eocene Green River Formation of the western U.S. using an in situ retorting process would result in net emissions to the atmosphere of between 3000 and 7000 MtCO2, in addition to storing potentially 900-5000 MtCO2 in regional deep geologic formations via CCS in the period up to 2050. A similarly sized but geographically more dispersed domestic CTL industry could result in 4000-5000 MtCO2 emitted to the atmosphere in addition to potentially 21,000-22,000 MtCO2 stored in regional deep geologic formations over the same period. While this analysis shows that there is likely adequate CO2 storage capacity in the regions where these technologies are likely to deploy, the reliance by these industries on large-scale CCS could result in an accelerated rate of utilization of the nation's CO2 storage resource, leaving less high-quality storage capacity for other carbon-producing industries, including electric power generation.

  12. Large-Scale Information Systems

    SciTech Connect (OSTI)

    D. M. Nicol; H. R. Ammerlahn; M. E. Goldsby; M. M. Johnson; D. E. Rhodes; A. S. Yoshimura

    2000-12-01

    Large enterprises are ever more dependent on their Large-Scale Information Systems (LSIS), computer systems that are distinguished architecturally by distributed components--data sources, networks, computing engines, simulations, human-in-the-loop control and remote access stations. These systems provide such capabilities as workflow, data fusion and distributed database access. The Nuclear Weapons Complex (NWC) contains many examples of LSIS components, a fact that motivates this research. However, most LSIS in use grew up from collections of separate subsystems that were not designed to be components of an integrated system. For this reason, they are often difficult to analyze and control. The problem is made more difficult by the size of a typical system, its diversity of information sources, and the institutional complexities associated with its geographic distribution across the enterprise. Moreover, there is no integrated approach for analyzing or managing such systems. Indeed, integrated development of LSIS is an active area of academic research. This work developed such an approach by simulating the various components of the LSIS and allowing the simulated components to interact with real LSIS subsystems. This research demonstrated two benefits. First, applying it to a particular LSIS provided a thorough understanding of the interfaces between the system's components. Second, it demonstrated how more rapid and detailed answers could be obtained to questions significant to the enterprise by interacting with the relevant LSIS subsystems through simulated components designed with those questions in mind. In a final, added phase of the project, investigations were made on extending this research to wireless communication networks in support of telemetry applications.

  13. Large-Scale Renewable Energy Guide Webinar

    Broader source: Energy.gov [DOE]

    Webinar introduces the Large-Scale Renewable Energy Guide. The webinar will provide an overview of this important FEMP guide, which describes FEMP's approach to large-scale renewable energy projects and provides guidance to Federal agencies and the private sector on how to develop a common process for large-scale renewable projects.

  14. Testing the suitability of geologic frameworks for extrapolating hydraulic properties across regional scales

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Mirus, Benjamin B.; Halford, Keith J.; Sweetkind, Donald; Fenelon, Joseph M.

    2016-02-18

    The suitability of geologic frameworks for extrapolating hydraulic conductivity (K) to length scales commensurate with hydraulic data is difficult to assess. A novel method is presented for evaluating assumed relations between K and geologic interpretations for regional-scale groundwater modeling. The approach relies on simultaneous interpretation of multiple aquifer tests using alternative geologic frameworks of variable complexity, where each framework is incorporated as prior information that assumes homogeneous K within each model unit. This approach is tested at Pahute Mesa within the Nevada National Security Site (USA), where observed drawdowns from eight aquifer tests in complex, highly faulted volcanic rocks provide the necessary hydraulic constraints. The investigated volume encompasses 40 mi³ (167 km³) where drawdowns traversed major fault structures and were detected more than 2 mi (3.2 km) from pumping wells. Complexity of the five frameworks assessed ranges from an undifferentiated mass of rock with a single unit to 14 distinct geologic units. Results show that only four geologic units can be justified as hydraulically unique for this location. The approach qualitatively evaluates the consistency of hydraulic property estimates within extents of investigation and effects of geologic frameworks on extrapolation. Distributions of transmissivity are similar within the investigated extents irrespective of the geologic framework. In contrast, the extrapolation of hydraulic properties beyond the volume investigated with interfering aquifer tests is strongly affected by the complexity of a given framework. As a result, testing at Pahute Mesa illustrates how this method can be employed to determine the appropriate level of geologic complexity for large-scale groundwater modeling.

  15. Autonomie Large Scale Deployment | Department of Energy

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Autonomie Large Scale Deployment: 2011 DOE Hydrogen and Fuel Cells Program and Vehicle Technologies Program Annual Merit Review and Peer Evaluation (vss009_rousseau_2011_o.pdf)

  16. Large-Scale PV Integration Study

    SciTech Connect (OSTI)

    Lu, Shuai; Etingov, Pavel V.; Diao, Ruisheng; Ma, Jian; Samaan, Nader A.; Makarov, Yuri V.; Guo, Xinxin; Hafen, Ryan P.; Jin, Chunlian; Kirkham, Harold; Shlatz, Eugene; Frantzis, Lisa; McClive, Timothy; Karlson, Gregory; Acharya, Dhruv; Ellis, Abraham; Stein, Joshua; Hansen, Clifford; Chadliev, Vladimir; Smart, Michael; Salgo, Richard; Sorensen, Rahn; Allen, Barbara; Idelchik, Boris

    2011-07-29

    This research effort evaluates the impact of large-scale photovoltaic (PV) and distributed generation (DG) output on NV Energy’s electric grid system in southern Nevada. It analyzes the ability of NV Energy’s generation to accommodate increasing amounts of utility-scale PV and DG, and the resulting cost of integrating variable renewable resources. The study was jointly funded by the United States Department of Energy and NV Energy, and conducted by a project team composed of industry experts and research scientists from Navigant Consulting Inc., Sandia National Laboratories, Pacific Northwest National Laboratory and NV Energy.

  17. Large-Scale Manufacturing of Nanoparticle-Based Lubrication Additives...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Large-Scale Manufacturing of Nanoparticle-Based Lubrication Additives (PDF: nanoparticulate-basedlubricati...)

  18. The Effective Field Theory of Cosmological Large Scale Structures...

    Office of Scientific and Technical Information (OSTI)

    The Effective Field Theory of Cosmological Large Scale Structures

  19. Large Scale Computing and Storage Requirements for Advanced Scientific...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Large Scale Computing and Storage Requirements for Advanced Scientific Computing Research: Target 2014

  20. Large-Scale Renewable Energy Guide: Developing Renewable Energy...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Large-Scale Renewable Energy Guide: Developing Renewable Energy Projects Larger Than 10 MWs at Federal Facilities

  1. Large-Scale Residential Energy Efficiency Programs Based on CFLs...

    Open Energy Info (EERE)

    Large-Scale Residential Energy Efficiency Programs Based on CFLs

  2. ACCOLADES: A Scalable Workflow Framework for Large-Scale Simulation...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    ACCOLADES: A Scalable Workflow Framework for Large-Scale Simulation and Analyses of Automotive Engines

  3. DLFM library tools for large scale dynamic applications

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Large scale Python and other dynamic applications may spend significant time at startup. The DLFM library, ...

  4. Large-scale quasi-geostrophic magnetohydrodynamics

    SciTech Connect (OSTI)

    Balk, Alexander M.

    2014-12-01

    We consider the ideal magnetohydrodynamics (MHD) of a shallow fluid layer on a rapidly rotating planet or star. The presence of a background toroidal magnetic field is assumed, and the 'shallow water' beta-plane approximation is used. We derive a single equation for the slow large length scale dynamics. The range of validity of this equation fits the MHD of the lighter fluid at the top of Earth's outer core. The form of this equation is similar to the quasi-geostrophic (Q-G) equation (for usual ocean or atmosphere), but the parameters are essentially different. Our equation also implies the inverse cascade; but contrary to the usual Q-G situation, the energy cascades to smaller length scales, while the enstrophy cascades to the larger scales. We find a Kolmogorov-type spectrum for the inverse cascade. The spectrum indicates energy accumulation at larger scales. In addition to the energy and enstrophy, the obtained equation possesses an extra (adiabatic-type) invariant. Its presence implies energy accumulation in the 30° sector around the zonal direction. With some special energy input, the extra invariant can lead to the accumulation of energy in the zonal magnetic field; this happens if the input of the extra invariant is small, while the energy input is considerable.
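
    For comparison with the magnetohydrodynamic equation derived in this paper, the usual hydrodynamic quasi-geostrophic (barotropic beta-plane) vorticity equation is shown below; this is standard background only, and the paper's MHD analogue has essentially different parameters and an extra invariant:

    ```latex
    % Barotropic quasi-geostrophic vorticity equation on a beta-plane:
    % streamfunction \psi, Jacobian J(a,b) = a_x b_y - a_y b_x, and
    % planetary vorticity gradient \beta.
    \frac{\partial}{\partial t}\nabla^{2}\psi
      + J\left(\psi, \nabla^{2}\psi\right)
      + \beta\,\frac{\partial\psi}{\partial x} = 0
    ```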

  5. First U.S. Large-Scale CO2 Storage Project Advances | Department of Energy

    Office of Environmental Management (EM)

    April 6, 2009 - Washington, DC - Drilling nears completion for the first large-scale carbon dioxide (CO2) injection well in the United States for CO2 sequestration. This project will be used to demonstrate that CO2 emitted from industrial sources - such as coal-fired power plants - can be stored in deep geologic formations to mitigate large quantities of greenhouse gas ...

  6. Large-Scale Renewable Energy Guide | Department of Energy

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Presentation covers the Large-Scale RE Guide: Developing Renewable Energy Projects Larger than 10 MWs at Federal Facilities for the FUPWG Spring meeting, held on May 22, 2013 in San Francisco, California. Presented by Brad Gustafson.

  7. Batteries for Large Scale Energy Storage

    SciTech Connect (OSTI)

    Soloveichik, Grigorii L.

    2011-07-15

    In recent years, with the deployment of renewable energy sources, advances in electrified transportation, and development in smart grids, the markets for large-scale stationary energy storage have grown rapidly. Electrochemical energy storage methods are strong candidate solutions due to their high energy density, flexibility, and scalability. This review provides an overview of mature and emerging technologies for secondary and redox flow batteries. New developments in the chemistry of secondary and flow batteries as well as regenerative fuel cells are also considered. Advantages and disadvantages of current and prospective electrochemical energy storage options are discussed. The most promising technologies in the short term are high-temperature sodium batteries with β-alumina electrolyte, lithium-ion batteries, and flow batteries. Regenerative fuel cells and lithium metal batteries with high energy density require further research to become practical.

  8. Supporting large-scale computational science

    SciTech Connect (OSTI)

    Musick, R., LLNL

    1998-02-19

    Business needs have driven the development of commercial database systems since their inception. As a result, there has been a strong focus on supporting many users, minimizing the potential corruption or loss of data, and maximizing performance metrics like transactions per second, or TPC-C and TPC-D results. It turns out that these optimizations have little to do with the needs of the scientific community, and in particular have little impact on improving the management and use of large-scale high-dimensional data. At the same time, there is an unanswered need in the scientific community for many of the benefits offered by a robust DBMS. For example, tying an ad-hoc query language such as SQL together with a visualization toolkit would be a powerful enhancement to current capabilities. Unfortunately, there has been little emphasis or discussion in the VLDB community on this mismatch over the last decade. The goal of the paper is to identify the specific issues that need to be resolved before large-scale scientific applications can make use of DBMS products. This topic is addressed in the context of an evaluation of commercial DBMS technology applied to the exploration of data generated by the Department of Energy's Accelerated Strategic Computing Initiative (ASCI). The paper describes the data being generated for ASCI as well as current capabilities for interacting with and exploring this data. The attraction of applying standard DBMS technology to this domain is discussed, as well as the technical and business issues that currently make this an infeasible solution.

  9. Modeling and Risk Assessment of CO2 Sequestration at the Geologic-basin Scale

    SciTech Connect (OSTI)

    Juanes, Ruben

    2013-11-30

    The overall objective of this proposal was to develop tools for better understanding, modeling and risk assessment of CO2 permanence in geologic formations at the geologic basin scale.

  10. Energy Department Loan Guarantee Would Support Large-Scale Rooftop...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Energy Department Loan Guarantee Would Support Large-Scale Rooftop Solar Power for U.S. Military Housing

  11. Large-Scale First-Principles Molecular Dynamics Simulations on...

    Office of Scientific and Technical Information (OSTI)

    Conference: Large-Scale First-Principles Molecular Dynamics Simulations on the BlueGene/L Platform using the Qbox Code

  12. Locations of Smart Grid Demonstration and Large-Scale Energy...

    Office of Environmental Management (EM)

    Map of the United States ...

  13. SimFS: A Large Scale Parallel File System Simulator

    Energy Science and Technology Software Center (OSTI)

    2011-08-30

    The software provides both framework and tools to simulate a large-scale parallel file system such as Lustre.

  14. DLFM library tools for large scale dynamic applications

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Large scale Python and other dynamic applications may spend significant time at startup. The DLFM library, developed by Mike Davis at Cray, Inc., is a set of functions that can be incorporated into a dynamically-linked application to provide improved performance during the loading of dynamic libraries when running the application at large scale on Edison. To access this library, do module ...

  15. Sensitivity technologies for large scale simulation.

    SciTech Connect (OSTI)

    Collis, Samuel Scott; Bartlett, Roscoe Ainsworth; Smith, Thomas Michael; Heinkenschloss, Matthias; Wilcox, Lucas C.; Hill, Judith C.; Ghattas, Omar; Berggren, Martin Olof; Akcelik, Volkan; Ober, Curtis Curry; van Bloemen Waanders, Bart Gustaaf; Keiter, Eric Richard

    2005-01-01

    Sensitivity analysis is critically important to numerous analysis algorithms, including large-scale optimization, uncertainty quantification, reduced-order modeling, and error estimation. Our research focused on developing tools, algorithms, and standard interfaces to facilitate the implementation of sensitivity-type analysis into existing codes and, equally important, on ways to increase the visibility of sensitivity analysis. We attempt to accomplish the first objective through the development of hybrid automatic differentiation tools, standard linear algebra interfaces for numerical algorithms, time-domain decomposition algorithms, and two-level Newton methods. We attempt to accomplish the second goal by presenting the results of several case studies in which direct sensitivities and adjoint methods have been effectively applied, in addition to an investigation of h-p adaptivity using adjoint-based a posteriori error estimation. A mathematical overview is provided of direct sensitivities and adjoint methods for both steady-state and transient simulations. Two case studies are presented to demonstrate the utility of these methods. A direct sensitivity method is implemented to solve a source inversion problem for steady-state internal flows subject to convection-diffusion; real-time performance is achieved using a novel decomposition into offline and online calculations. Adjoint methods are used to reconstruct initial conditions of a contamination event in an external flow, and we demonstrate an adjoint-based transient solution. In addition, we investigated time-domain decomposition algorithms in an attempt to improve the efficiency of transient simulations. Because derivative calculations are at the root of sensitivity calculations, we developed hybrid automatic differentiation methods and implemented this approach for shape optimization for gas dynamics using the Euler equations. The hybrid automatic differentiation method was applied to a first-order approximation of the Euler equations and used as a preconditioner; in comparison to other methods, the AD preconditioner showed better convergence behavior. Our ultimate target is to perform shape optimization and h-p adaptivity using adjoint formulations in the Premo compressible fluid flow simulator. A mathematical formulation for mixed-level simulation algorithms has been developed in which different physics interact at potentially different spatial resolutions in a single domain. To minimize the implementation effort, explicit solution methods can be considered; however, implicit methods are preferred if computational efficiency is of high priority. We present the use of a partial-elimination nonlinear solver technique to solve these mixed-level problems and show how these formulations are closely coupled to intrusive optimization approaches and sensitivity analyses. Production codes are typically not designed for sensitivity analysis or large-scale optimization, and implementing our optimization libraries into multiple production simulation codes, each with its own linear algebra interface, becomes an intractable problem. In an attempt to streamline this task, we have developed a standard interface between the numerical algorithm (such as optimization) and the underlying linear algebra. These interfaces (TSFCore and TSFCoreNonlin) have been adopted by the Trilinos framework, and the goal is to promote the use of these interfaces, especially with new developments.
    Finally, an adjoint-based a posteriori error estimator has been developed for discontinuous Galerkin discretization of Poisson's equation. The goal is to investigate other ways to leverage the adjoint calculations, and we show how the convergence of the forward problem can be improved by adapting the grid using adjoint-based error estimates. Error estimation is usually conducted with continuous adjoints, but if discrete adjoints are available it may be possible to reuse the discrete version for error estimation. We investigate the advantages and disadvantages of continuous and discrete adjoints.
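
    A key property of the adjoint methods surveyed above is that the cost of a gradient is essentially independent of the number of parameters. The standard discrete statement below is general background, not a formula quoted from the report:

    ```latex
    % For a state equation R(u,p) = 0 with objective J(u,p), one adjoint
    % solve for \lambda yields the full parameter gradient dJ/dp,
    % regardless of the dimension of p:
    \left(\frac{\partial R}{\partial u}\right)^{T}\lambda
      = \left(\frac{\partial J}{\partial u}\right)^{T},
    \qquad
    \frac{dJ}{dp} = \frac{\partial J}{\partial p}
      - \lambda^{T}\frac{\partial R}{\partial p}
    ```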

  16. Large-Scale Federal Renewable Energy Projects | Department of Energy

    Office of Environmental Management (EM)

    Renewable energy projects larger than 10 megawatts (MW), also known as utility-scale projects, are complex and typically require private-sector financing. The Federal Energy Management Program (FEMP) developed a guide to help federal agencies, and the developers and financiers that work with them, to successfully install these projects at federal facilities. FEMP's Large-Scale Renewable Energy Guide, ...

  17. Large-Scale Wind Training Program

    SciTech Connect (OSTI)

    Porter, Richard L.

    2013-07-01

    Project objective is to develop a credit-bearing wind technician program and a non-credit safety training program, train faculty, and purchase/install large wind training equipment.

  18. Large-Scale First-Principles Molecular Dynamics Simulations on...

    Office of Scientific and Technical Information (OSTI)

    Qbox is an FPMD implementation specifically designed for large-scale parallel platforms such as BlueGene/L. Strong scaling tests for a Materials Science application show an 86% ...

  19. Large-Scale Renewable Energy Guide: Developing Renewable Energy Projects

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    The Large-Scale Renewable Energy Guide: Developing Renewable Energy Projects Larger Than 10 MWs at Federal Facilities provides best practices and other helpful guidance for federal agencies developing ...

  20. Energy Department Applauds Nation's First Large-Scale Industrial Carbon Capture and Storage Facility

    Energy Savers [EERE]

    August 24, 2011 - Washington, D.C. - The U.S. Department of Energy issued the following statement in support of today's groundbreaking for construction of the nation's first large-scale industrial carbon capture and storage (ICCS) facility in Decatur, ...

  1. Large Scale Computing and Storage Requirements for Advanced Scientific Computing Research: Target 2014

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    An ASCR / NERSC Review, January 5-6, 2011. Final Report: Large Scale Computing and Storage Requirements for Advanced Scientific Computing Research, Report of the Joint ASCR / NERSC Workshop conducted January 5-6, 2011.

  2. Large Scale Computing and Storage Requirements for Basic Energy Sciences: Target 2014

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Final Report: Large Scale Computing and Storage Requirements for Basic Energy Sciences, Report of the Joint BES / ASCR / NERSC Workshop conducted February 9-10, 2010. The workshop agenda, including presentation times and speaker information, is presented along with the workshop presentations.

  3. Large Scale Computing and Storage Requirements for High Energy Physics

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    An HEP / ASCR / NERSC Workshop, November 12-13, 2009. Report: Large Scale Computing and Storage Requirements for High Energy Physics, Report of the Joint HEP / ASCR / NERSC Workshop conducted Nov. 12-13, 2009.

  4. Large-Scale Industrial Carbon Capture, Storage Plant Begins Construction

    Office of Environmental Management (EM)

    August 24, 2011 - Washington, DC - Construction activities have begun at an Illinois ethanol plant that will demonstrate carbon capture and storage. The project, sponsored by the U.S. Department of Energy's Office of Fossil Energy, is the first large-scale integrated carbon capture and storage (CCS) demonstration ...

  5. Sandia Energy - Large-Scale Computational Fluid Dynamics

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    that were originally designed for nuclear-weapons-related problems for use in coal and biomass energy applications. These tools allow large-scale simulations of turbulent...

  6. Large Scale Production Computing and Storage Requirements for...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    2013, Hilton Washington DC/Rockville Hotel and Executive Meeting Center, 1750 Rockville Pike, Rockville, MD 20852-1699. Final Report: Large Scale Computing and Storage Requirements ...

  7. Optimizing Cluster Heads for Energy Efficiency in Large-Scale...

    Office of Scientific and Technical Information (OSTI)

    Optimizing Cluster Heads for Energy Efficiency in Large-Scale Heterogeneous Wireless Sensor Networks. Gu, Yi; Wu, Qishi; Rao, Nageswara S. V. Hindawi Publishing Corporation.

  8. Computational Fluid Dynamics & Large-Scale Uncertainty Quantification...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Computational Fluid Dynamics & Large-Scale Uncertainty Quantification for Wind Energy - Sandia Energy

  9. Optimizing Cluster Heads for Energy Efficiency in Large-Scale...

    Office of Scientific and Technical Information (OSTI)

    clustering is generally considered as an efficient and scalable way to facilitate the management and operation of such large-scale networks and minimize the total energy...

  10. Strategies to Finance Large-Scale Deployment of Renewable Energy...

    Open Energy Info (EERE)

    Strategies to Finance Large-Scale Deployment of Renewable Energy Projects: An Economic Development and Infrastructure Approach

  11. MEASURING LENSING MAGNIFICATION OF QUASARS BY LARGE SCALE STRUCTURE...

    Office of Scientific and Technical Information (OSTI)

    MEASURING LENSING MAGNIFICATION OF QUASARS BY LARGE SCALE STRUCTURE USING THE VARIABILITY-LUMINOSITY RELATION

  12. Effects of Volcanism, Crustal Thickness, and Large Scale Faulting...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Effects of Volcanism, Crustal Thickness, and Large Scale Faulting on the Development and Evolution of Geothermal Systems: Collaborative Project in Chile

  13. FEMP Helps Federal Facilities Develop Large-Scale Renewable Energy...

    Broader source: Energy.gov (indexed) [DOE]

    jobs, and advancing national goals for energy security. The guide describes the fundamentals of deploying financially attractive, large-scale renewable energy projects and...

  14. Stimulated forward Raman scattering in large scale-length laser...

    Office of Scientific and Technical Information (OSTI)

    Journal Article: Stimulated forward Raman scattering in large scale-length laser-produced plasmas

  15. A Model for Turbulent Combustion Simulation of Large Scale Hydrogen...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    A Model for Turbulent Combustion Simulation of Large Scale Hydrogen Explosions Event Sponsor: Argonne Leadership Computing Facility Seminar Start Date: Oct 6 2015 - 10:00am...

  16. Overcoming the Barrier to Achieving Large-Scale Production -...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    This presentation summarizes the information given by Semprius during the Photovoltaic Validation and ...

  17. Large-Scale Hydropower Basics | Department of Energy

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Large-scale hydropower plants are generally developed to produce electricity for government or electric utility projects. These plants are more than 30 megawatts (MW) in size, and there is more than 80,000 MW of installed generation capacity in the United States today. Most large-scale hydropower projects use a dam and a reservoir to retain water from a river. When the ...

  18. Goethite Bench-scale and Large-scale Preparation Tests

    SciTech Connect (OSTI)

    Josephson, Gary B.; Westsik, Joseph H.

    2011-10-23

    The Hanford Waste Treatment and Immobilization Plant (WTP) is the keystone for cleanup of high-level radioactive waste from our nation's nuclear defense program. The WTP will process high-level waste from the Hanford tanks and produce immobilized high-level waste glass for disposal at a national repository, low activity waste (LAW) glass, and liquid effluent from the vitrification off-gas scrubbers. The liquid effluent will be stabilized into a secondary waste form (e.g. grout-like material) and disposed on the Hanford site in the Integrated Disposal Facility (IDF) along with the low-activity waste glass. The major long-term environmental impact at Hanford results from technetium that volatilizes from the WTP melters and finally resides in the secondary waste. Laboratory studies have indicated that pertechnetate ({sup 99}TcO{sub 4}{sup -}) can be reduced and captured into a solid solution of {alpha}-FeOOH, goethite (Um 2010). Goethite is a stable mineral and can significantly retard the release of technetium to the environment from the IDF. The laboratory studies were conducted using reaction times of many days, which is typical of the environmental subsurface reactions that were the genesis of this new process. This study was the first step in considering adaptation of the slow laboratory steps to a larger-scale and faster process that could be conducted either within the WTP or within the effluent treatment facility (ETF). Two levels of scale-up tests were conducted (25x and 400x). The largest scale-up produced slurries of Fe-rich precipitates that contained rhenium as a nonradioactive surrogate for {sup 99}Tc. The slurries were used in melter tests at Vitreous State Laboratory (VSL) to determine whether captured rhenium was less volatile in the vitrification process than rhenium in an unmodified feed. A critical step in the technetium immobilization process is to chemically reduce Tc(VII) in the pertechnetate (TcO{sub 4}{sup -}) to Tc(IV) by reaction with the ferrous ion, Fe{sup 2+} (Fe{sup 2+} is oxidized to Fe{sup 3+}), in the presence of goethite seed particles. Rhenium does not mimic that process; it is not a strong enough reducing agent to duplicate the TcO{sub 4}{sup -}/Fe{sup 2+} redox reactions. Laboratory tests conducted in parallel with these scaled tests identified modifications to the liquid chemistry necessary to reduce ReO{sub 4}{sup -} and capture rhenium in the solids at levels similar to those achieved by Um (2010) for inclusion of Tc into goethite. By implementing these changes, Re was incorporated into Fe-rich solids for testing at VSL. The changes also altered the phase of iron in the slurry product: rather than forming goethite ({alpha}-FeOOH), the process produced magnetite (Fe{sub 3}O{sub 4}). Magnetite was considered by Pacific Northwest National Laboratory (PNNL) and VSL to probably be a better product to improve Re retention in the melter because it decomposes at a higher temperature than goethite (1538 C vs. 136 C). The feasibility tests at VSL were conducted using Re-rich magnetite. The tests did not indicate improved retention of Re in the glass during vitrification, but they did indicate an improved melting rate (+60%), which could have a significant impact on HLW processing. It is still to be shown whether the Re is a solid solution in the magnetite, as {sup 99}Tc was determined to be in goethite.
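
    The Tc(VII)-to-Tc(IV) reduction step described above can be summarized by an overall stoichiometry of the following form. This balanced equation is an illustration consistent with the abstract's description, not one quoted from the report; in the actual process the Tc(IV) is captured into the goethite structure rather than precipitated as a discrete TcO2 phase:

    ```latex
    % Pertechnetate reduced by ferrous iron: three Fe(II) supply the
    % three electrons required for Tc(VII) -> Tc(IV).
    \mathrm{TcO_4^{-}} + 3\,\mathrm{Fe^{2+}} + 4\,\mathrm{H^{+}}
      \longrightarrow
      \mathrm{TcO_2} + 3\,\mathrm{Fe^{3+}} + 2\,\mathrm{H_2O}
    ```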

  19. Large-scale mapping of landslides in the epicentral area Loma Prieta earthquake of October 17, 1989, Santa Cruz County

    SciTech Connect (OSTI)

    Spittler, T.E.; Sydnor, R.H.; Manson, M.W.; Levine, P.; McKittrick, M.M.

    1990-01-01

    The Loma Prieta earthquake of October 17, 1989 triggered landslides throughout the Santa Cruz Mountains in central California. The California Department of Conservation, Division of Mines and Geology (DMG) responded to a request for assistance from the County of Santa Cruz, Office of Emergency Services to evaluate the geologic hazard from major reactivated large landslides. DMG prepared a set of geologic maps showing the landslide features that resulted from the October 17 earthquake. The principal purposes of large-scale mapping of these landslides are: (1) to provide county officials with regional landslide information that can be used for timely recovery of damaged areas; (2) to identify disturbed ground which is potentially vulnerable to landslide movement during winter rains; (3) to provide county planning officials with timely geologic information that will be used for effective land-use decisions; and (4) to document regional landslide features that may not otherwise be available for individual site reconstruction permits and for future development.

  20. ARM - Evaluation Product - Vertical Air Motion during Large-Scale Stratiform Rain

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    The Vertical Air Motion during Large-Scale Stratiform Rain (VERVELSR) value-added product (VAP) uses the unique ...

  1. Pore scale modeling of reactive transport involved in geologic CO2 sequestration

    SciTech Connect (OSTI)

    Kang, Qinjin; Lichtner, Peter C; Viswanathan, Hari S; Abdel-fattah, Amr I

    2009-01-01

    We apply a multi-component reactive transport lattice Boltzmann model developed in previous studies to modeling the injection of a CO2-saturated brine into various porous media structures at temperatures T=25 and 80 C. The porous media originally consist of calcite. A chemical system consisting of Na+, Ca2+, Mg2+, H+, CO2(aq), and Cl- is considered. The fluid flow, advection and diffusion of aqueous species, homogeneous reactions occurring in the bulk fluid, as well as the dissolution of calcite and precipitation of dolomite are simulated at the pore scale. The effects of porous media structure on reactive transport are investigated. The results are compared with continuum-scale modeling and the agreement and discrepancy are discussed. This work may shed some light on the fundamental physics occurring at the pore scale for reactive transport involved in geologic CO2 sequestration.
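
    The lattice Boltzmann method referenced above advances particle distribution functions on a discrete velocity lattice. The single-relaxation-time (BGK) update below is the generic scheme, shown as background; the authors' multi-component reactive model layers species transport and reaction on top of it:

    ```latex
    % BGK lattice Boltzmann update for the distribution f_i along discrete
    % velocity e_i, relaxing toward the local equilibrium f_i^{eq} (which
    % encodes the macroscopic density and velocity) over time \tau:
    f_i(\mathbf{x} + \mathbf{e}_i\,\Delta t,\; t + \Delta t)
      = f_i(\mathbf{x}, t)
      - \frac{1}{\tau}\left[f_i(\mathbf{x}, t) - f_i^{eq}(\mathbf{x}, t)\right]
    ```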

  2. How Three Retail Buyers Source Large-Scale Solar Electricity

    Broader source: Energy.gov [DOE]

    Large-scale, non-utility solar power purchase agreements (PPAs) are still a rarity despite the growing popularity of PPAs across the country. In this webinar, participants will learn more about how...

  3. Energy Department Applauds Nation's First Large-Scale Industrial...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    in Phase 1 of its ICCS program, aimed at testing large-scale industrial CCS technologies. ... Find out more about DOE's support of research, development and deployment of CCS ...

  4. Revised Environmental Assessment Large-Scale, Open-Air Explosive

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Revised Environmental Assessment: Large-Scale, Open-Air Explosive Detonation, DIVINE STRAKE, at the Nevada Test Site, May 2006. Prepared by Department of Energy, National Nuclear Security Administration, Nevada Site Office.

  5. MEASURING LENSING MAGNIFICATION OF QUASARS BY LARGE SCALE STRUCTURE USING THE VARIABILITY-LUMINOSITY RELATION

    Office of Scientific and Technical Information (OSTI)

    We introduce a technique to measure gravitational lensing magnification using the variability of type I quasars. Quasars' variability amplitudes and luminosities ...

  6. The Cielo Petascale Capability Supercomputer: Providing Large-Scale Computing for Stockpile Stewardship

    Office of Scientific and Technical Information (OSTI)


  7. COMMENTS OF THE LARGE-SCALE SOLAR ASSOCIATION TO DEPARTMENT OF ENERGY'S RAPID RESPONSE TEAM FOR TRANSMISSION'S REQUEST FOR INFORMATION

    Broader source: Energy.gov (indexed) [DOE]

    Submitted by electronic mail to: Lamont.Jackson@hq.doe.gov. The Large-scale Solar Association appreciates this opportunity to respond to the Department of Energy's (DOE) Rapid Response Team for Transmission's (RRTT) Request for Information. We applaud the DOE for creating the RRTT and continuing to advance the efforts already made under the Memorandum of ...

  8. Large Scale Production Computing and Storage Requirements for Fusion Energy Sciences: Target 2017

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    The NERSC Program Requirements Review "Large Scale Production Computing and Storage Requirements for Fusion Energy Sciences" is organized by the Department of Energy's Office of Fusion Energy Sciences (FES), Office of Advanced Scientific Computing Research (ASCR), and the National Energy Research Scientific Computing Center (NERSC). The review's goal is to ...

  9. Large Scale Production Computing and Storage Requirements for High Energy Physics: Target 2017

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Physics: Target 2017. The NERSC Program Requirements Review "Large Scale Computing and Storage Requirements for High Energy Physics" is organized by the Department of Energy's Office of High Energy Physics (HEP), Office of Advanced Scientific Computing Research (ASCR), and the National Energy Research Scientific Computing Center (NERSC). The review's goal is to characterize

  10. Cosmological Simulations for Large-Scale Sky Surveys | Argonne Leadership

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Computing Facility. PI Name: Salman Habib; PI Email: habib@anl.gov; Institution: Argonne National Laboratory; Allocation Program: INCITE; Allocation Hours at ALCF: 80 Million; Year: 2016; Research Domain: Physics. The focus of cosmology today is on its two mysterious pillars, dark matter and dark energy. Large-scale sky surveys are the current drivers of precision cosmology and have been instrumental in making fundamental discoveries in these

  11. The Cielo Petascale Capability Supercomputer: Providing Large-Scale

    Office of Scientific and Technical Information (OSTI)

    Computing for Stockpile Stewardship (Conference) | SciTech Connect. Authors: Vigil, Benny Manuel; Doerfler, Douglas W. (Los Alamos National Laboratory). Publication Date: 2013-03-11

  12. Understanding large scale HPC systems through scalable monitoring and

    Office of Scientific and Technical Information (OSTI)

    analysis. (Conference) | SciTech Connect. As HPC systems grow in size and complexity, diagnosing problems and understanding system behavior, including failure modes, becomes increasingly difficult and time consuming. At Sandia National Laboratories we have developed a tool, OVIS, to facilitate

  13. Breakthrough Large-Scale Industrial Project Begins Carbon Capture and

    Energy Savers [EERE]

    Utilization | Department of Energy. January 25, 2013 - 12:00pm. Washington, DC - A breakthrough carbon capture, utilization, and storage (CCUS) project in Texas has begun capturing carbon dioxide (CO2) and piping it to an oilfield for use in enhanced oil recovery (EOR). Read the project factsheet. The project at Air Products

  14. Lessons from Large-Scale Renewable Energy Integration Studies: Preprint

    SciTech Connect (OSTI)

    Bird, L.; Milligan, M.

    2012-06-01

    In general, large-scale integration studies in Europe and the United States find that high penetrations of renewable generation are technically feasible with operational changes and increased access to transmission. This paper describes other key findings such as the need for fast markets, large balancing areas, system flexibility, and the use of advanced forecasting.

  15. EINSTEIN'S SIGNATURE IN COSMOLOGICAL LARGE-SCALE STRUCTURE

    SciTech Connect (OSTI)

    Bruni, Marco; Hidalgo, Juan Carlos; Wands, David

    2014-10-10

    We show how the nonlinearity of general relativity generates a characteristic non-Gaussian signal in cosmological large-scale structure that we calculate at all perturbative orders in a large-scale limit. Newtonian gravity and general relativity provide complementary theoretical frameworks for modeling large-scale structure in ΛCDM cosmology; a relativistic approach is essential to determine initial conditions, which can then be used in Newtonian simulations studying the nonlinear evolution of the matter density. Most inflationary models in the very early universe predict an almost Gaussian distribution for the primordial metric perturbation, ζ. However, we argue that it is the Ricci curvature of comoving-orthogonal spatial hypersurfaces, R, that drives structure formation at large scales. We show how the nonlinear relation between the spatial curvature, R, and the metric perturbation, ζ, translates into a specific non-Gaussian contribution to the initial comoving matter density that we calculate for the simple case of an initially Gaussian ζ. Our analysis shows the nonlinear signature of Einstein's gravity in large-scale structure.
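
    The pivotal step, the nonlinear relation between R and ζ, can be sketched schematically. The leading-order form below is a standard result of relativistic perturbation theory written from general knowledge, not quoted from this record:

        % 3-Ricci curvature of comoving-orthogonal hypersurfaces in terms of
        % the primordial metric perturbation \zeta: linear in the Laplacian,
        % plus quadratic terms that source the non-Gaussian contribution even
        % for an initially Gaussian \zeta (schematic; coefficients omitted).
        R \;=\; -\frac{4}{a^{2}}\nabla^{2}\zeta
          \;+\; \mathcal{O}\!\left(\zeta\,\nabla^{2}\zeta,\ (\nabla\zeta)^{2}\right)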

  16. The Phoenix series large scale LNG pool fire experiments.

    SciTech Connect (OSTI)

    Simpson, Richard B.; Jensen, Richard Pearson; Demosthenous, Byron; Luketa, Anay Josephine; Ricks, Allen Joseph; Hightower, Marion Michael; Blanchat, Thomas K.; Helmick, Paul H.; Tieszen, Sheldon Robert; Deola, Regina Anne; Mercier, Jeffrey Alan; Suo-Anttila, Jill Marie; Miller, Timothy J.

    2010-12-01

    The increasing demand for natural gas could increase the number and frequency of Liquefied Natural Gas (LNG) tanker deliveries to ports across the United States. Because of the increasing number of shipments and the number of possible new facilities, concerns about the safety of the public and property from accidental, and even more importantly intentional, spills have increased. While improvements have been made over the past decade in assessing hazards from LNG spills, the existing experimental data is much smaller in size and scale than many postulated large accidental and intentional spills. Since the physics and hazards from a fire change with fire size, there are concerns about the adequacy of current hazard prediction techniques for large LNG spills and fires. To address these concerns, Congress funded the Department of Energy (DOE) in 2008 to conduct a series of laboratory and large-scale LNG pool fire experiments at Sandia National Laboratories (Sandia) in Albuquerque, New Mexico. This report presents the test data and results of both sets of fire experiments. A series of five reduced-scale (gas burner) tests (yielding 27 sets of data) were conducted in 2007 and 2008 at Sandia's Thermal Test Complex (TTC) to assess flame height to fire diameter ratios as a function of nondimensional heat release rates for extrapolation to large-scale LNG fires. The large-scale LNG pool fire experiments were conducted in a 120 m diameter pond specially designed and constructed in Sandia's Area III large-scale test complex. Two fire tests of LNG spills of 21 and 81 m in diameter were conducted in 2009 to improve the understanding of flame height, smoke production, and burn rate and therefore the physics and hazards of large LNG spills and fires.
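
    As a hedged illustration of the flame height to fire diameter scaling the gas-burner tests were designed to probe, the sketch below uses Heskestad's widely cited pool-fire correlation; this is an assumption made here for illustration, not necessarily the correlation adopted in the Phoenix analysis, and the heat release value is hypothetical:

        import math

        def q_star(Q_dot, D, rho=1.2, cp=1.0, T_amb=293.0, g=9.81):
            """Nondimensional heat release rate Q* of a pool fire.
            Q_dot in kW, D in m, ambient density rho in kg/m3,
            cp in kJ/(kg K), ambient temperature T_amb in K."""
            return Q_dot / (rho * cp * T_amb * math.sqrt(g * D) * D ** 2)

        def flame_height_ratio(Q_dot, D):
            """Mean flame height over diameter, L/D = 3.7 Q*^(2/5) - 1.02
            (Heskestad's correlation, assumed here for illustration)."""
            return 3.7 * q_star(Q_dot, D) ** 0.4 - 1.02

        # Hypothetical numbers loosely inspired by the 21 m test diameter.
        print(flame_height_ratio(Q_dot=2.0e6, D=21.0))  # roughly 2.5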

  17. Large-Scale Industrial CCS Projects Selected for Continued Testing |

    Office of Environmental Management (EM)

    Department of Energy. June 10, 2010 - 1:00pm. Washington, DC - Three Recovery Act funded projects have been selected by the U.S. Department of Energy (DOE) to continue testing large-scale carbon capture and storage (CCS) from industrial sources. The projects - located in Texas, Illinois, and Louisiana - were initially selected for funding in October 2009 as part of a $1.4

  18. Large-Scale Algal Cultivation, Harvesting and Downstream Processing Workshop

    Broader source: Energy.gov [DOE]

    ATP3 (Algae Testbed Public-Private Partnership) is hosting the Large-Scale Algal Cultivation, Harvesting and Downstream Processing Workshop on November 2–6, 2015, at the Arizona Center for Algae Technology and Innovation in Mesa, Arizona. Topics will include practical applications of growing and managing microalgal cultures at production scale (such as methods for handling cultures, screening strains for desirable characteristics, identifying and mitigating contaminants, scaling up cultures for outdoor growth, harvesting and processing technologies, and the analysis of lipids, proteins, and carbohydrates). Related training will include hands-on laboratory and field opportunities.

  19. Large-scale anisotropy in stably stratified rotating flows

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Marino, R.; Mininni, P. D.; Rosenberg, D. L.; Pouquet, A.

    2014-08-28

    We present results from direct numerical simulations of the Boussinesq equations in the presence of rotation and/or stratification, both in the vertical direction. The runs are forced isotropically and randomly at small scales and have spatial resolutions of up to 1024{sup 3} grid points and Reynolds numbers of {approx}1000. We first show that solutions with negative energy flux and inverse cascades develop in rotating turbulence, whether or not stratification is present. However, the purely stratified case is characterized instead by an early-time, highly anisotropic transfer to large scales with almost zero net isotropic energy flux. This is consistent with previous studies that observed the development of vertically sheared horizontal winds, although only at substantially later times. However, and unlike previous works, when sufficient scale separation is allowed between the forcing scale and the domain size, the total energy displays a perpendicular (horizontal) spectrum with power law behavior compatible with k{sub perp}{sup -5/3}, including in the absence of rotation. In this latter purely stratified case, such a spectrum is the result of a direct cascade of the energy contained in the large-scale horizontal wind, as is evidenced by a strong positive flux of energy in the parallel direction at all scales including the largest resolved scales.

  20. Large-Scale Manufacturing of Nanoparticle-Based Lubrication Additives

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Large-Scale Manufacturing of Nanoparticulate-Based Lubrication Additives: Development of Boron-Based Nanolubrication Additives for Improved Energy Efficiency and Reduced Emissions. Lubricants play a vital role in machine life and performance, reducing friction and wear and preventing component failure. Poor lubricant performance can cause significant energy and material losses. The already large global demand for lubricants is expected to continue growing in the future. Engine oils account for

  1. Large Scale Computing and Storage Requirements for Fusion Energy Sciences:

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Target 2014. An FES / ASCR / NERSC Workshop, August 3-4, 2010. Final Report.

  2. The effective field theory of cosmological large scale structures

    SciTech Connect (OSTI)

    Carrasco, John Joseph M.; Hertzberg, Mark P.; Senatore, Leonardo

    2012-09-20

    Large scale structure surveys will likely become the next leading cosmological probe. In our universe, matter perturbations are large on short distances and small at long scales, i.e. strongly coupled in the UV and weakly coupled in the IR. To make precise analytical predictions on large scales, we develop an effective field theory formulated in terms of an IR effective fluid characterized by several parameters, such as speed of sound and viscosity. These parameters, determined by the UV physics described by the Boltzmann equation, are measured from N-body simulations. We find that the speed of sound of the effective fluid is c{sub s}{sup 2} {approx} 10{sup -6}c{sup 2} and that the viscosity contributions are of the same order. The fluid describes all the relevant physics at long scales k and permits a manifestly convergent perturbative expansion in the size of the matter perturbations δ(k) for all the observables. As an example, we calculate the correction to the power spectrum at order δ(k){sup 4}. As a result, the predictions of the effective field theory are found to be in much better agreement with observation than standard cosmological perturbation theory, already reaching percent precision at this order up to a relatively short scale k {approx} 0.24h Mpc{sup -1}.
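
    A schematic of the effective-fluid equations meant here, written from the general EFT-of-large-scale-structure literature rather than from this record, so conventions and coefficients may differ:

        % Continuity and Euler equations for the IR effective fluid; the
        % effective stress contributes a sound-speed term c_s^2 and a
        % viscous term, both fixed by matching to N-body simulations.
        \partial_{\tau}\delta + \nabla\cdot\left[(1+\delta)\,\mathbf{v}\right] = 0,
        \qquad
        \partial_{\tau}\mathbf{v} + \mathcal{H}\,\mathbf{v}
          + (\mathbf{v}\cdot\nabla)\mathbf{v}
          = -\nabla\Phi - c_{s}^{2}\,\nabla\delta + \nu\,\nabla^{2}\mathbf{v} + \dots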

  3. LARGE-SCALE MOTIONS IN THE PERSEUS GALAXY CLUSTER

    SciTech Connect (OSTI)

    Simionescu, A.; Werner, N.; Urban, O.; Allen, S. W.; Fabian, A. C.; Sanders, J. S.; Mantz, A.; Nulsen, P. E. J.; Takei, Y.

    2012-10-01

    By combining large-scale mosaics of ROSAT PSPC, XMM-Newton, and Suzaku X-ray observations, we present evidence for large-scale motions in the intracluster medium of the nearby, X-ray bright Perseus Cluster. These motions are suggested by several alternating and interleaved X-ray bright, low-temperature, low-entropy arcs located along the east-west axis, at radii ranging from {approx}10 kpc to over a Mpc. Thermodynamic features qualitatively similar to these have previously been observed in the centers of cool-core clusters, and were successfully modeled as a consequence of the gas sloshing/swirling motions induced by minor mergers. Our observations indicate that such sloshing/swirling can extend out to larger radii than previously thought, on scales approaching the virial radius.

  4. Performance Health Monitoring of Large-Scale Systems

    SciTech Connect (OSTI)

    Rajamony, Ram

    2014-11-20

    This report details the progress made on the ASCR funded project Performance Health Monitoring for Large Scale Systems. A large-scale application may not achieve its full performance potential due to degraded performance of even a single subsystem. Detecting performance faults, isolating them, and taking remedial action is critical for the scale of systems on the horizon. PHM aims to develop techniques and tools that can be used to identify and mitigate such performance problems. We accomplish this through two main aspects. The PHM framework encompasses diagnostics, system monitoring, fault isolation, and performance evaluation capabilities that indicate when a performance fault has been detected, either due to an anomaly present in the system itself or due to contention for shared resources between concurrently executing jobs. Software components called the PHM Control system then build upon the capabilities provided by the PHM framework to mitigate degradation caused by performance problems.

  5. Geospatial Optimization of Siting Large-Scale Solar Projects

    SciTech Connect (OSTI)

    Macknick, J.; Quinby, T.; Caulfield, E.; Gerritsen, M.; Diffendorfer, J.; Haines, S.

    2014-03-01

    Recent policy and economic conditions have encouraged a renewed interest in developing large-scale solar projects in the U.S. Southwest. However, siting large-scale solar projects is complex. In addition to the quality of the solar resource, solar developers must take into consideration many environmental, social, and economic factors when evaluating a potential site. This report describes a proof-of-concept, Web-based Geographical Information Systems (GIS) tool that evaluates multiple user-defined criteria in an optimization algorithm to inform discussions and decisions regarding the locations of utility-scale solar projects. Existing siting recommendations for large-scale solar projects from governmental and non-governmental organizations are not consistent with each other, are often not transparent in methods, and do not take into consideration the differing priorities of stakeholders. The siting assistance GIS tool we have developed improves upon the existing siting guidelines by being user-driven, transparent, interactive, capable of incorporating multiple criteria, and flexible. This work provides the foundation for a dynamic siting assistance tool that can greatly facilitate siting decisions among multiple stakeholders.
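
    A minimal sketch of the user-weighted, multi-criteria scoring idea (hypothetical layers, weights, and exclusion mask; the tool's actual optimization algorithm is not reproduced here):

        import numpy as np

        # Hypothetical normalized criteria rasters, 0 (worst) to 1 (best):
        # solar resource, proximity to transmission, slope suitability.
        rng = np.random.default_rng(0)
        solar, transmission, slope = (rng.random((100, 100)) for _ in range(3))

        # User-defined weights reflecting stakeholder priorities (sum to 1).
        score = 0.5 * solar + 0.3 * transmission + 0.2 * slope

        # Exclude protected or otherwise unavailable land, then rank cells.
        excluded = rng.random((100, 100)) < 0.1
        score[excluded] = -np.inf
        row, col = np.unravel_index(np.argmax(score), score.shape)
        print("highest-scoring cell:", (row, col))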

  6. Feasibility of Large-Scale Ocean CO2 Sequestration

    SciTech Connect (OSTI)

    Peter Brewer

    2008-08-31

    Scientific knowledge of natural clathrate hydrates has grown enormously over the past decade, with spectacular new findings of large exposures of complex hydrates on the sea floor, the development of new tools for examining the solid phase in situ, significant progress in modeling natural hydrate systems, and the discovery of exotic hydrates associated with sea floor venting of liquid CO{sub 2}. Major unresolved questions remain about the role of hydrates in response to climate change today, and correlations between the hydrate reservoir of Earth and the stable isotopic evidence of massive hydrate dissociation in the geologic past. The examination of hydrates as a possible energy resource is proceeding apace for the subpermafrost accumulations in the Arctic, but serious questions remain about the viability of marine hydrates as an economic resource. New and energetic explorations by nations such as India and China are quickly uncovering large hydrate findings on their continental shelves. In this report we detail research carried out in the period October 1, 2007 through September 30, 2008. The primary body of work is contained in a formal publication attached as Appendix 1 to this report. In brief we have surveyed the recent literature with respect to the natural occurrence of clathrate hydrates (with a special emphasis on methane hydrates), the tools used to investigate them and their potential as a new source of natural gas for energy production.

  7. The workshop on iterative methods for large scale nonlinear problems

    SciTech Connect (OSTI)

    Walker, H.F.; Pernice, M.

    1995-12-01

    The aim of the workshop was to bring together researchers working on large scale applications with numerical specialists of various kinds. Applications that were addressed included reactive flows (combustion and other chemically reacting flows, tokamak modeling), porous media flows, cardiac modeling, chemical vapor deposition, image restoration, macromolecular modeling, and population dynamics. Numerical areas included Newton iterative (truncated Newton) methods, Krylov subspace methods, domain decomposition and other preconditioning methods, large scale optimization and optimal control, and parallel implementations and software. This report offers a brief summary of workshop activities and information about the participants. Interested readers are encouraged to look into an online proceedings available at http://www.usi.utah.edu/logan.proceedings. There, the material offered here is augmented with hypertext abstracts that include links to locations such as speakers' home pages, PostScript copies of talks and papers, cross-references to related talks, and other information about topics addressed at the workshop.

  8. Relic vector field and CMB large scale anomalies

    SciTech Connect (OSTI)

    Chen, Xingang; Wang, Yi E-mail: yw366@cam.ac.uk

    2014-10-01

    We study the most general effects of relic vector fields on the inflationary background and density perturbations. Such effects are observable if the number of inflationary e-folds is close to the minimum requirement to solve the horizon problem. We show that this can potentially explain two CMB large scale anomalies: the quadrupole-octopole alignment and the quadrupole power suppression. We discuss its effect on the parity anomaly. We also provide an analytical template for more detailed data comparison.

  9. Large Scale Production Computing and Storage Requirements for Advanced

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Scientific Computing Research: Target 2017. This is an invitation-only review organized by the Department of Energy's Office of Advanced Scientific Computing Research (ASCR) and NERSC. The general goal is to determine production high-performance computing, storage, and services that will be needed for ASCR to achieve its science goals through 2017. A specific focus

  10. Large Scale Production Computing and Storage Requirements for Basic Energy

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Sciences: Target 2017. This is an invitation-only review organized by the Department of Energy's Office of Basic Energy Sciences (BES), Office of Advanced Scientific Computing Research (ASCR), and the National Energy Research Scientific Computing Center (NERSC). The goal is to determine production high-performance computing, storage, and services that will be needed for BES to

  11. Large Scale Production Computing and Storage Requirements for Biological

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    and Environmental Research: Target 2017. September 11-12, 2012, Hilton Rockville Hotel and Executive Meeting Center, 1750 Rockville Pike, Rockville, MD, 20852-1699, TEL: 1-301-468-1100. Sponsored by: U.S. Department of Energy Office of Science, Office of Advanced Scientific Computing Research (ASCR), Office of Biological and Environmental Research (BER), National Energy

  12. Large Scale Production Computing and Storage Requirements for Nuclear

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Physics: Target 2017. This invitation-only review is organized by the Department of Energy's Offices of Nuclear Physics (NP) and Advanced Scientific Computing Research (ASCR) and by NERSC. The goal is to determine production high-performance computing, storage, and services that will be needed for NP to achieve its science goals through 2017. The review brings together DOE Program Managers,

  13. Computational Fluid Dynamics & Large-Scale Uncertainty Quantification for

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Wind Energy - Sandia Energy.

  14. Economical Large Scale Advanced Membrane and Sorbent Strategies

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Demand growth for chemical commodities, plus the high energy intensity of separations used in commodity production, present opportunities. William J. Koros, Georgia Institute of Technology. Economical Large Scale Advanced Membrane & Sorbent Strategies. Membranes and sorbents, offering up to 10X reductions in process energy intensity and CO2 emissions, enable many opportunities. An approach is outlined to pursue these opportunities and to provide competitive advantages and environmental

  15. Electrochemical cells for medium- and large-scale energy storage

    SciTech Connect (OSTI)

    Wang, Wei; Wei, Xiaoliang; Choi, Daiwon; Lu, Xiaochuan; Yang, G.; Sun, C.

    2014-12-12

    This is one of the chapters in the book titled “Advances in batteries for large- and medium-scale energy storage: Applications in power systems and electric vehicles” that will be published by Woodhead Publishing Limited. The chapter discusses the basic electrochemical fundamentals of electrochemical energy storage devices with a focus on rechargeable batteries. Several practical secondary battery systems are also discussed as examples.

  16. Robust large-scale parallel nonlinear solvers for simulations.

    SciTech Connect (OSTI)

    Bader, Brett William; Pawlowski, Roger Patrick; Kolda, Tamara Gibson

    2005-11-01

    This report documents research to develop robust and efficient solution techniques for solving large-scale systems of nonlinear equations. The most widely used method for solving systems of nonlinear equations is Newton's method. While much research has been devoted to augmenting Newton-based solvers (usually with globalization techniques), little has been devoted to exploring the application of different models. Our research has been directed at evaluating techniques using different models than Newton's method: a lower order model, Broyden's method, and a higher order model, the tensor method. We have developed large-scale versions of each of these models and have demonstrated their use in important applications at Sandia. Broyden's method replaces the Jacobian with an approximation, allowing codes that cannot evaluate a Jacobian or have an inaccurate Jacobian to converge to a solution. Limited-memory methods, which have been successful in optimization, allow us to extend this approach to large-scale problems. We compare the robustness and efficiency of Newton's method, modified Newton's method, Jacobian-free Newton-Krylov method, and our limited-memory Broyden method. Comparisons are carried out for large-scale applications of fluid flow simulations and electronic circuit simulations. Results show that, in cases where the Jacobian was inaccurate or could not be computed, Broyden's method converged in some cases where Newton's method failed to converge. We identify conditions where Broyden's method can be more efficient than Newton's method. We also present modifications to a large-scale tensor method, originally proposed by Bouaricha, for greater efficiency, better robustness, and wider applicability. Tensor methods are an alternative to Newton-based methods and are based on computing a step based on a local quadratic model rather than a linear model. The advantage of Bouaricha's method is that it can use any existing linear solver, which makes it simple to write and easily portable. However, the method usually takes twice as long to solve as Newton-GMRES on general problems because it solves two linear systems at each iteration. In this paper, we discuss modifications to Bouaricha's method for a practical implementation, including a special globalization technique and other modifications for greater efficiency. We present numerical results showing computational advantages over Newton-GMRES on some realistic problems. We further discuss a new approach for dealing with singular (or ill-conditioned) matrices. In particular, we modify an algorithm for identifying a turning point so that an increasingly ill-conditioned Jacobian does not prevent convergence.
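
    For readers unfamiliar with the method being scaled up, here is a toy sketch of Broyden's rank-one Jacobian update on a 2-D system; the report's limited-memory, large-scale implementation is of course far more involved, and the test function below is hypothetical:

        import numpy as np

        def f(x):
            # Toy nonlinear system: a circle intersected with a parabola.
            return np.array([x[0]**2 + x[1]**2 - 4.0, x[1] - x[0]**2])

        def broyden(f, x, J, tol=1e-10, max_iter=50):
            """Broyden's 'good' method: keep an approximate Jacobian and
            correct it with a rank-one update, avoiding Jacobian
            evaluations after the first."""
            fx = f(x)
            for _ in range(max_iter):
                dx = np.linalg.solve(J, -fx)          # quasi-Newton step
                x, fx_old = x + dx, fx
                fx = f(x)
                if np.linalg.norm(fx) < tol:
                    break
                df = fx - fx_old
                # J <- J + ((df - J dx) dx^T) / (dx^T dx)
                J = J + np.outer(df - J @ dx, dx) / (dx @ dx)
            return x

        x0 = np.array([1.0, 1.0])
        J0 = np.array([[2.0, 2.0], [-2.0, 1.0]])      # Jacobian at x0
        print(broyden(f, x0, J0))                     # ~ (1.2496, 1.5616)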

  17. Nuclear-pumped lasers for large-scale applications

    SciTech Connect (OSTI)

    Anderson, R.E.; Leonard, E.M.; Shea, R.F.; Berggren, R.R.

    1989-05-01

    Efficient initiation of large-volume chemical lasers may be achieved by neutron induced reactions which produce charged particles in the final state. When a burst mode nuclear reactor is used as the neutron source, both a sufficiently intense neutron flux and a sufficiently short initiation pulse may be possible. Proof-of-principle experiments are planned to demonstrate lasing in a direct nuclear-pumped large-volume system; to study the effects of various neutron absorbing materials on laser performance; to study the effects of long initiation pulse lengths; to demonstrate the performance of large-scale optics and the beam quality that may be obtained; and to assess the performance of alternative designs of burst systems that increase the neutron output and burst repetition rate. 21 refs., 8 figs., 5 tabs.

  18. Nuclear-pumped lasers for large-scale applications

    SciTech Connect (OSTI)

    Anderson, R.E.; Leonard, E.M.; Shea, R.E.; Berggren, R.R.

    1988-01-01

    Efficient initiation of large-volume chemical lasers may be achieved by neutron induced reactions which produce charged particles in the final state. When a burst mode nuclear reactor is used as the neutron source, both a sufficiently intense neutron flux and a sufficiently short initiation pulse may be possible. Proof-of-principle experiments are planned to demonstrate lasing in a direct nuclear-pumped large-volume system; to study the effects of various neutron absorbing materials on laser performance; to study the effects of long initiation pulse lengths; to determine the performance of large-scale optics and the beam quality that may be obtained; and to assess the performance of alternative designs of burst systems that increase the neutron output and burst repetition rate. 21 refs., 7 figs., 5 tabs.

  19. Just enough inflation: power spectrum modifications at large scales

    SciTech Connect (OSTI)

    Cicoli, Michele [Dipartimento di Fisica ed Astronomia, Università di Bologna, via Irnerio 46, 40126 Bologna (Italy); Downes, Sean [Leung Center for Cosmology and Particle Astrophysics, National Taiwan University, No. 1, Section 4, Roosevelt Road, Taipei 10617, Taiwan (China); Dutta, Bhaskar [Mitchell Institute for Fundamental Physics and Astronomy, Department of Physics and Astronomy, Texas A and M University, College Station, TX 77843-4242 (United States); Pedro, Francisco G.; Westphal, Alexander, E-mail: mcicoli@ictp.it, E-mail: ssdownes@phys.ntu.edu.tw, E-mail: dutta@physics.tamu.edu, E-mail: francisco.pedro@desy.de, E-mail: alexander.westphal@desy.de [Deutsches Elektronen-Synchrotron DESY, Theory Group, D-22603 Hamburg (Germany)

    2014-12-01

    We show that models of 'just enough' inflation, where the slow-roll evolution lasted only 50-60 e-foldings, feature modifications of the CMB power spectrum at large angular scales. We perform a systematic analytic analysis in the limit of a sudden transition between any possible non-slow-roll background evolution and the final stage of slow-roll inflation. We find a high degree of universality since most common backgrounds like fast-roll evolution, matter or radiation-dominance give rise to a power loss at large angular scales and a peak together with an oscillatory behaviour at scales around the value of the Hubble parameter at the beginning of slow-roll inflation. Depending on the value of the equation of state parameter, different pre-inflationary epochs lead instead to an enhancement of power at low ℓ, and so seem disfavoured by recent observational hints for a lack of CMB power at ℓ ≲ 40. We also comment on the importance of initial conditions and the possibility to have multiple pre-inflationary stages.

  20. High Fidelity Simulations of Large-Scale Wireless Networks

    SciTech Connect (OSTI)

    Onunkwo, Uzoma; Benz, Zachary

    2015-11-01

    The worldwide proliferation of wireless connected devices continues to accelerate. There are 10s of billions of wireless links across the planet with an additional explosion of new wireless usage anticipated as the Internet of Things develops. Wireless technologies do not only provide convenience for mobile applications, but are also extremely cost-effective to deploy. Thus, this trend towards wireless connectivity will only continue and Sandia must develop the necessary simulation technology to proactively analyze the associated emerging vulnerabilities. Wireless networks are marked by mobility and proximity-based connectivity. The de facto standard for exploratory studies of wireless networks is discrete event simulations (DES). However, the simulation of large-scale wireless networks is extremely difficult due to prohibitively large turnaround time. A path forward is to expedite simulations with parallel discrete event simulation (PDES) techniques. The mobility and distance-based connectivity associated with wireless simulations, however, typically doom PDES and fail to scale (e.g., OPNET and ns-3 simulators). We propose a PDES-based tool aimed at reducing the communication overhead between processors. The proposed solution will use light-weight processes to dynamically distribute computation workload while mitigating communication overhead associated with synchronizations. This work is vital to the analytics and validation capabilities of simulation and emulation at Sandia. We have years of experience in Sandia's simulation and emulation projects (e.g., MINIMEGA and FIREWHEEL). Sandia's current highly-regarded capabilities in large-scale emulations have focused on wired networks, where two assumptions prevent scalable wireless studies: (a) the connections between objects are mostly static and (b) the nodes have fixed locations.
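
    For context on the discrete event simulation kernel that PDES parallelizes, a minimal sequential event loop looks like the generic sketch below; this is unrelated to the OPNET/ns-3 internals, and the transmit handler is a made-up example:

        import heapq

        def run(initial_events, horizon):
            """Minimal sequential DES kernel: a time-ordered event queue
            whose handlers may schedule future events. PDES partitions
            this queue across processors and synchronizes their clocks."""
            queue = list(initial_events)      # (time, seq, handler) tuples
            heapq.heapify(queue)
            seq = len(queue)                  # tie-breaker for equal times
            while queue:
                now, _, handler = heapq.heappop(queue)
                if now > horizon:
                    break
                for delay, new_handler in handler(now):
                    heapq.heappush(queue, (now + delay, seq, new_handler))
                    seq += 1

        # Example: one node transmitting every 2 time units.
        def transmit(now):
            print(f"t={now:.1f}: packet sent")
            return [(2.0, transmit)]

        run([(0.0, 0, transmit)], horizon=6.0)   # prints t=0, 2, 4, 6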

  1. Large scale obscuration and related climate effects open literature bibliography

    SciTech Connect (OSTI)

    Russell, N.A.; Geitgey, J.; Behl, Y.K.; Zak, B.D.

    1994-05-01

    Large scale obscuration and related climate effects of nuclear detonations first became a matter of concern in connection with the so-called "Nuclear Winter Controversy" in the early 1980s. Since then, the world has changed. Nevertheless, concern remains about the atmospheric effects of nuclear detonations, but the source of concern has shifted. Now it focuses less on global, and more on regional effects and their resulting impacts on the performance of electro-optical and other defense-related systems. This bibliography reflects the modified interest.

  2. Large-Scale Spray Releases: Additional Aerosol Test Results

    SciTech Connect (OSTI)

    Daniel, Richard C.; Gauglitz, Phillip A.; Burns, Carolyn A.; Fountain, Matthew S.; Shimskey, Rick W.; Billing, Justin M.; Bontha, Jagannadha R.; Kurath, Dean E.; Jenks, Jeromy WJ; MacFarlan, Paul J.; Mahoney, Lenna A.

    2013-08-01

    One of the events postulated in the hazard analysis for the Waste Treatment and Immobilization Plant (WTP) and other U.S. Department of Energy (DOE) nuclear facilities is a breach in process piping that produces aerosols with droplet sizes in the respirable range. The current approach for predicting the size and concentration of aerosols produced in a spray leak event involves extrapolating from correlations reported in the literature. These correlations are based on results obtained from small engineered spray nozzles using pure liquids that behave as a Newtonian fluid. The narrow ranges of physical properties on which the correlations are based do not cover the wide range of slurries and viscous materials that will be processed in the WTP and in processing facilities across the DOE complex. To expand the data set upon which the WTP accident and safety analyses were based, an aerosol spray leak testing program was conducted by Pacific Northwest National Laboratory (PNNL). PNNL's test program addressed two key technical areas to improve the WTP methodology (Larson and Allen 2010). The first technical area was to quantify the role of slurry particles in small breaches where slurry particles may plug the hole and prevent high-pressure sprays. The results from an effort to address this first technical area can be found in Mahoney et al. (2012a). The second technical area was to determine aerosol droplet size distribution and total droplet volume from prototypic breaches and fluids, including sprays from larger breaches and sprays of slurries for which literature data are mostly absent. To address the second technical area, the testing program collected aerosol generation data at two scales, commonly referred to as small-scale and large-scale testing. The small-scale testing and resultant data are described in Mahoney et al. (2012b), and the large-scale testing and resultant data are presented in Schonewill et al. (2012). In tests at both scales, simulants were used to mimic the relevant physical properties projected for actual WTP process streams.

  3. Planning under uncertainty solving large-scale stochastic linear programs

    SciTech Connect (OSTI)

    Infanger, G. (Dept. of Operations Research; Technische Univ., Vienna, Inst. fuer Energiewirtschaft)

    1992-12-01

    For many practical problems, solutions obtained from deterministic models are unsatisfactory because they fail to hedge against certain contingencies that may occur in the future. Stochastic models address this shortcoming, but until recently seemed to be intractable due to their size. Recent advances both in solution algorithms and in computer technology now allow us to solve important and general classes of practical stochastic problems. We show how large-scale stochastic linear programs can be efficiently solved by combining classical decomposition and Monte Carlo (importance) sampling techniques. We discuss the methodology for solving two-stage stochastic linear programs with recourse, present numerical results of large problems with numerous stochastic parameters, show how to efficiently implement the methodology on a parallel multi-computer and derive the theory for solving a general class of multi-stage problems with dependency of the stochastic parameters within a stage and between different stages.
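
    A toy version of the two-stage recourse structure described above, solved by sampling scenarios and building the deterministic equivalent (scipy is used for brevity, and the newsvendor numbers are invented; the paper's decomposition and importance-sampling machinery is what makes the large-scale case tractable):

        import numpy as np
        from scipy.optimize import linprog

        # Newsvendor-style two-stage LP: order x now at unit cost 1.0;
        # in scenario s, sell y_s <= min(x, demand_s) at unit revenue 1.5.
        rng = np.random.default_rng(1)
        d = rng.uniform(50.0, 150.0, size=20)      # sampled demand scenarios
        p = np.full(d.size, 1.0 / d.size)          # scenario probabilities

        # Variables z = [x, y_1..y_S]; minimize cost minus expected revenue.
        c = np.concatenate(([1.0], -1.5 * p))

        # Recourse coupling: y_s - x <= 0 for every scenario.
        A_ub = np.hstack([-np.ones((d.size, 1)), np.eye(d.size)])
        b_ub = np.zeros(d.size)

        bounds = [(0, None)] + [(0, ds) for ds in d]  # 0 <= y_s <= demand_s
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
        print("optimal first-stage order:", round(res.x[0], 1))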

  4. Large scale, urban decontamination; developments, historical examples and lessons learned

    SciTech Connect (OSTI)

    Demmer, R.L.

    2007-07-01

    Recent terrorist threats and actions have led to a renewed interest in the technical field of large scale, urban environment decontamination. One of the driving forces for this interest is the prospect for the cleanup and removal of radioactive dispersal device (RDD or 'dirty bomb') residues. In response, the United States Government has spent many millions of dollars investigating RDD contamination and novel decontamination methodologies. The efficiency of RDD cleanup response will be improved with these new developments and a better understanding of the 'old reliable' methodologies. While an RDD is primarily an economic and psychological weapon, the need to clean up and return valuable or culturally significant resources to the public is nonetheless valid. Several private companies, universities and National Laboratories are currently developing novel RDD cleanup technologies. Because of its longstanding association with radioactive facilities, the U.S. Department of Energy National Laboratories are at the forefront in developing and testing new RDD decontamination methods. However, such cleanup technologies are likely to be fairly task specific, while many different contamination mechanisms, substrates and environmental conditions will make actual application more complicated. Some major efforts have also been made to model potential contamination, to evaluate both old and new decontamination techniques and to assess their readiness for use. There are a number of significant lessons that can be gained from a look at previous large scale cleanup projects. Too often we are quick to apply a costly 'package and dispose' method when sound technological cleaning approaches are available. Understanding historical perspectives, advanced planning and constant technology improvement are essential to successful decontamination. (authors)

  5. Large-Angular-Scale Anisotropy in the Cosmic Background Radiation

    DOE R&D Accomplishments [OSTI]

    Gorenstein, M. V.; Smoot, G. F.

    1980-05-01

    We report the results of an extended series of airborne measurements of large-angular-scale anisotropy in the 3 K cosmic background radiation. Observations were carried out with a dual-antenna microwave radiometer operating at 33 GHz (0.9 cm wavelength) flown on board a U-2 aircraft to 20 km altitude. In eleven flights, between December 1976 and May 1978, the radiometer measured differential intensity between pairs of directions distributed over most of the northern hemisphere with an rms sensitivity of 47 mK Hz{sup -1/2}. The measurements show clear evidence of anisotropy that is readily interpreted as due to the solar motion relative to the sources of the radiation. The anisotropy is well fit by a first order spherical harmonic of amplitude 360{+ or -}50 km sec{sup -1} toward the direction 11.2{+ or -}0.5 hours of right ascension and 19{+ or -}8 degrees declination. A simultaneous fit to a combined hypothesis of dipole and quadrupole angular distributions places a 1 mK limit on the amplitude of most components of quadrupole anisotropy with 90% confidence. Additional analysis places a 0.5 mK limit on uncorrelated fluctuations (sky-roughness) in the 3 K background on an angular scale of the antenna beam width, about 7 degrees.
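
    The quoted velocity translates directly into a dipole temperature amplitude; as a quick consistency check (taking T0 ≈ 3 K, the approximate background temperature assumed here):

        % Doppler dipole from motion at velocity v through the background:
        \frac{\Delta T}{T_{0}} \simeq \frac{v}{c}\cos\theta,
        \qquad
        \Delta T_{\max} \simeq 3\,\mathrm{K}\times
          \frac{360\ \mathrm{km\,s^{-1}}}{3\times10^{5}\ \mathrm{km\,s^{-1}}}
          \approx 3.6\ \mathrm{mK}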

  6. Cosmological implications of the CMB large-scale structure

    SciTech Connect (OSTI)

    Melia, Fulvio

    2015-01-01

    The Wilkinson Microwave Anisotropy Probe (WMAP) and Planck may have uncovered several anomalies in the full cosmic microwave background (CMB) sky that could indicate possible new physics driving the growth of density fluctuations in the early universe. These include an unusually low power at the largest scales and an apparent alignment of the quadrupole and octopole moments. In a ΛCDM model where the CMB is described by a Gaussian Random Field, the quadrupole and octopole moments should be statistically independent. The emergence of these low probability features may simply be due to posterior selections from many such possible effects, whose occurrence would therefore not be as unlikely as one might naively infer. If this is not the case, however, and if these features are not due to effects such as foreground contamination, their combined statistical significance would be equal to the product of their individual significances. In the absence of such extraneous factors, and ignoring the biasing due to posterior selection, the missing large-angle correlations would have a probability as low as {approx}0.1% and the low-l multipole alignment would be unlikely at the {approx}4.9% level; under the least favorable conditions, their simultaneous observation in the context of the standard model could then be likely at only the {approx}0.005% level. In this paper, we explore the possibility that these features are indeed anomalous, and show that the corresponding probability of CMB multipole alignment in the R{sub h}=ct universe would then be {approx}7-10%, depending on the number of large-scale Sachs-Wolfe induced fluctuations. Since the low power at the largest spatial scales is reproduced in this cosmology without the need to invoke cosmic variance, the overall likelihood of observing both of these features in the CMB is {approx}7%, much more likely than in ΛCDM, if the anomalies are real. The key physical ingredient responsible for this difference is the existence in the former of a maximum fluctuation size at the time of recombination, which is absent in the latter because of inflation.

  7. Ferroelectric opening switches for large-scale pulsed power drivers.

    SciTech Connect (OSTI)

    Brennecka, Geoffrey L.; Rudys, Joseph Matthew; Reed, Kim Warren; Pena, Gary Edward; Tuttle, Bruce Andrew; Glover, Steven Frank

    2009-11-01

    Fast electrical energy storage or Voltage-Driven Technology (VDT) has dominated fast, high-voltage pulsed power systems for the past six decades. Fast magnetic energy storage or Current-Driven Technology (CDT) is characterized by 10,000 X higher energy density than VDT and has a great number of other substantial advantages, but it has all but been neglected for all of these decades. The uniform explanation for neglect of CDT technology is invariably that the industry has never been able to make an effective opening switch, which is essential for the use of CDT. Most approaches to opening switches have involved plasma of one sort or another. On a large scale, gaseous plasmas have been used as a conductor to bridge the switch electrodes that provides an opening function when the current wave front propagates through to the output end of the plasma and fully magnetizes the plasma - this is called a Plasma Opening Switch (POS). Opening can be triggered in a POS using a magnetic field to push the plasma out of the A-K gap - this is called a Magnetically Controlled Plasma Opening Switch (MCPOS). On a small scale, depletion of electron plasmas in semiconductor devices is used to affect opening switch behavior, but these devices are relatively low voltage and low current compared to the hundreds of kilo-volts and tens of kilo-amperes of interest to pulsed power. This work is an investigation into an entirely new approach to opening switch technology that utilizes new materials in new ways. The new materials are Ferroelectrics and using them as an opening switch is a stark contrast to their traditional applications in optics and transducer applications. Emphasis is on use of high performance ferroelectrics with the objective of developing an opening switch that would be suitable for large scale pulsed power applications. Over the course of exploring this new ground, we have discovered new behaviors and properties of these materials that were heretofore unknown. Some of these unexpected discoveries have led to new research directions to address challenges.
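
    The quoted energy-density gap between magnetic (CDT) and electric (VDT) storage follows from the field energy densities; a rough worked comparison with illustrative field values (chosen here, not taken from the report):

        % Magnetic vs. electric field energy density:
        u_{B} = \frac{B^{2}}{2\mu_{0}}
          \approx \frac{(10\,\mathrm{T})^{2}}{2\mu_{0}}
          \approx 4\times10^{7}\ \mathrm{J\,m^{-3}},
        \qquad
        u_{E} = \frac{\varepsilon_{0}E^{2}}{2}
          \approx \frac{\varepsilon_{0}\,(3\times10^{7}\,\mathrm{V\,m^{-1}})^{2}}{2}
          \approx 4\times10^{3}\ \mathrm{J\,m^{-3}}
        % The ratio is about 10^4, consistent with the 10,000 X figure.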

  8. Large Scale Obscuration and Related Climate Effects Workshop: Proceedings

    SciTech Connect (OSTI)

    Zak, B.D.; Russell, N.A.; Church, H.W.; Einfeld, W.; Yoon, D.; Behl, Y.K.

    1994-05-01

    A Workshop on Large Scale Obscuration and Related Climate Effects was held 29-31 January, 1992, in Albuquerque, New Mexico. The objectives of the workshop were: to determine through the use of expert judgement the current state of understanding of regional and global obscuration and related climate effects associated with nuclear weapons detonations; to estimate how large the uncertainties are in the parameters associated with these phenomena (given specific scenarios); to evaluate the impact of these uncertainties on obscuration predictions; and to develop an approach for the prioritization of further work on newly-available data sets to reduce the uncertainties. The workshop consisted of formal presentations by the 35 participants, and subsequent topical working sessions on: the source term; aerosol optical properties; atmospheric processes; and electro-optical systems performance and climatic impacts. Summaries of the conclusions reached in the working sessions are presented in the body of the report. Copies of the transparencies shown as part of each formal presentation are contained in the appendices (microfiche).

  9. Design advanced for large-scale, economic, floating LNG plant

    SciTech Connect (OSTI)

    Naklie, M.M.

    1997-06-30

    A floating LNG plant design has been developed which is technically feasible, economical, safe, and reliable. This technology will allow monetization of small marginal fields and improve the economics of large fields. Mobil's world-scale plant design has a capacity of 6 million tons/year of LNG and up to 55,000 b/d condensate produced from 1 bcfd of feed gas. The plant would be located on a large, secure, concrete barge with a central moonpool. LNG storage is provided for 250,000 cu m and condensate storage for 650,000 bbl, and both products are off-loaded from the barge. Model tests have verified the stability of the barge structure: barge motions are low enough to permit the plant to continue operation in a 100-year storm in the Pacific Rim. Moreover, the barge is spread-moored, eliminating the need for a turret and swivel. Because the design is generic, the plant can process a wide variety of feed gases and operate in different environments, should the plant be relocated. This capability potentially gives the plant investment a much longer project life because its use is not limited to the life of only one producing area.

  10. Large scale electromechanical transistor with application in mass sensing

    SciTech Connect (OSTI)

    Jin, Leisheng; Li, Lijie

    2014-12-07

    Nanomechanical transistor (NMT) has evolved from the single electron transistor, a device that operates by shuttling electrons with a self-excited central conductor. The unfavoured aspects of the NMT are the complexity of the fabrication process and its signal processing unit, which could potentially be overcome by designing much larger devices. This paper reports a new design of large scale electromechanical transistor (LSEMT), still taking advantage of the principle of shuttling electrons. However, because of the large size, nonlinear electrostatic forces induced by the transistor itself are not sufficient to drive the mechanical member into vibration; an external force has to be used. In this paper, a LSEMT device is modelled, and its new application in mass sensing is postulated using two coupled mechanical cantilevers, with one of them being embedded in the transistor. The sensor is capable of detecting added mass using the eigenstate shifts method by reading the change of electrical current from the transistor, which has much higher sensitivity than the conventional eigenfrequency shift approach used in classical cantilever based mass sensors. Numerical simulations are conducted to investigate the performance of the mass sensor.
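
    A toy two-degree-of-freedom model of the eigenstate-shift readout: with weak coupling, a tiny added mass strongly localizes the mode shapes while barely moving the eigenfrequencies. All parameter values below are illustrative, not taken from the paper:

        import numpy as np

        def modes(m1, m2, k=1.0, kc=1e-4):
            """Eigenfrequencies and mode shapes of two cantilevers
            (masses m1, m2, stiffness k) joined by a weak spring kc."""
            K = np.array([[k + kc, -kc], [-kc, k + kc]])
            s = np.diag(1.0 / np.sqrt([m1, m2]))      # M^(-1/2)
            w2, V = np.linalg.eigh(s @ K @ s)         # K v = w^2 M v
            return np.sqrt(w2), s @ V

        w0, V0 = modes(1.0, 1.0)          # identical cantilevers: modes
        w1, V1 = modes(1.0, 1.0 + 1e-2)   # are (1, +-1)/sqrt(2) pairs
        print(V0)                         # after adding 1% mass the modes
        print(V1)                         # localize on single cantilevers,
        print((w1 - w0) / w0)             # while frequencies shift ~0.5%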

  11. Ground movements associated with large-scale underground coal gasification

    SciTech Connect (OSTI)

    Siriwardane, H.J.; Layne, A.W.

    1989-09-01

    The primary objective of this work was to predict the surface and underground movement associated with large-scale multiwell burn sites in the Illinois Basin and Appalachian Basin by using the subsidence/thermomechanical model UCG/HEAT. This code is based on the finite element method. In particular, it can be used to compute (1) the temperature field around an underground cavity when the temperature variation of the cavity boundary is known, and (2) displacements and stresses associated with body forces (gravitational forces) and a temperature field. It is hypothesized that large Underground Coal Gasification (UCG) cavities generated during the line-drive process will be similar to those generated by longwall mining. If that is the case, then as a UCG process continues, the roof of the cavity becomes unstable and collapses. In the UCG/HEAT computer code, roof collapse is modeled using a simplified failure criterion (Lee 1985). It is anticipated that roof collapse would occur behind the burn front; therefore, forward combustion can be continued. As the gasification front propagates, the length of the cavity would become much larger than its width. Because of this large length-to-width ratio in the cavity, ground response behavior could be analyzed by considering a plane-strain idealization. In a plane-strain idealization of the UCG cavity, a cross-section perpendicular to the axis of propagation could be considered, and a thermomechanical analysis performed using a modified version of the two-dimensional finite element code UCG/HEAT. 15 refs., 9 figs., 3 tabs.
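
    The first capability, computing the temperature field around a cavity whose boundary temperature is known, can be illustrated with a minimal explicit finite-difference sketch; this is a generic conduction toy with assumed property values, not the UCG/HEAT finite element code:

        import numpy as np

        nx = ny = 51
        dx = 1.0                        # grid spacing, m
        alpha = 1.0e-6                  # assumed rock thermal diffusivity, m2/s
        dt = 0.2 * dx**2 / alpha        # stable explicit step (< dx^2/(4 alpha))

        T = np.full((ny, nx), 285.0)    # ambient rock temperature, K
        cavity = np.zeros_like(T, dtype=bool)
        cavity[23:28, 20:31] = True     # idealized burn-cavity cross-section

        for _ in range(500):
            T[cavity] = 1200.0          # prescribed cavity-boundary temperature
            lap = (np.roll(T, 1, 0) + np.roll(T, -1, 0) +
                   np.roll(T, 1, 1) + np.roll(T, -1, 1) - 4.0 * T) / dx**2
            T += alpha * dt * lap
            T[0, :] = T[-1, :] = T[:, 0] = T[:, -1] = 285.0  # far field

        print("peak rock temperature outside the cavity:", T[~cavity].max())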

  12. Large-Scale Data Challenges in Future Power Grids

    SciTech Connect (OSTI)

    Yin, Jian; Sharma, Poorva; Gorton, Ian; Akyol, Bora A.

    2013-03-25

    This paper describes technical challenges in supporting large-scale real-time data analysis for future power grid systems and discusses various design options to address these challenges. Even though the existing U.S. power grid has served the nation remarkably well over the last 120 years, big changes are on the horizon. The widespread deployment of renewable generation, smart grid controls, energy storage, plug-in hybrids, and new conducting materials will require fundamental changes in the operational concepts and principal components. The whole system becomes highly dynamic and needs constant adjustments based on real time data. Even though millions of sensors such as phasor measurement units (PMUs) and smart meters are being widely deployed, a data layer that can support this amount of data in real time is needed. Unlike the data fabric in cloud services, the data layer for smart grids must address some unique challenges. This layer must be scalable to support millions of sensors and a large number of diverse applications and still provide real time guarantees. Moreover, the system needs to be highly reliable and highly secure because the power grid is a critical piece of infrastructure. No existing systems can satisfy all the requirements at the same time. We examine various design options. In particular, we explore the special characteristics of power grid data to meet both scalability and quality of service requirements. Our initial prototype can improve performance by orders of magnitude over existing general-purpose systems. The prototype was demonstrated with several use cases from PNNL's FPGI and was shown to be able to integrate a huge amount of data from a large number of sensors and a diverse set of applications.

  13. Intermediate Scale Laboratory Testing to Understand Mechanisms of Capillary and Dissolution Trapping during Injection and Post-Injection of CO2 in Heterogeneous Geological Formations

    SciTech Connect (OSTI)

    Illangasekare, Tissa; Trevisan, Luca; Agartan, Elif; Mori, Hiroko; Vargas-Johnson, Javier; González-Nicolás, Ana; Cihan, Abdullah; Birkholzer, Jens; Zhou, Quanlin

    2015-03-31

    Carbon Capture and Storage (CCS) represents a technology aimed to reduce atmospheric loading of CO2 from power plants and heavy industries by injecting it into deep geological formations, such as saline aquifers. A number of trapping mechanisms contribute to effective and secure storage of the injected CO2 in supercritical fluid phase (scCO2) in the formation over the long term. The primary trapping mechanisms are structural, residual, dissolution and mineralization. Knowledge gaps exist on how the heterogeneity of the formation, manifested at all scales from the pore to the site scale, affects trapping and the parameterization of contributing mechanisms in models. An experimental and modeling study was conducted to fill these knowledge gaps. Experimental investigation of fundamental processes and mechanisms in field settings is not possible, as it is not feasible to fully characterize the geologic heterogeneity at all relevant scales or to gather data on migration, trapping and dissolution of scCO2. Laboratory experiments using scCO2 under ambient conditions are also not feasible, as it is technically challenging and cost prohibitive to develop large, two- or three-dimensional test systems with controlled high pressures to keep the scCO2 as a liquid. Hence, an innovative approach was developed that used surrogate fluids in place of scCO2 and formation brine in multi-scale synthetic aquifer test systems ranging from centimeter to meter scale. New modeling algorithms were developed to capture the processes controlled by the formation heterogeneity, and they were tested using the data from the laboratory test systems. The results and findings are expected to contribute toward better conceptual models, future improvements to DOE numerical codes, more accurate assessment of storage capacities, and optimized placement strategies. This report presents the experimental and modeling methods and research results.

  14. PROPERTIES IMPORTANT TO MIXING FOR WTP LARGE SCALE INTEGRATED TESTING

    SciTech Connect (OSTI)

    Koopman, D.; Martino, C.; Poirier, M.

    2012-04-26

    Large Scale Integrated Testing (LSIT) is being planned by Bechtel National, Inc. to address uncertainties in the full scale mixing performance of the Hanford Waste Treatment and Immobilization Plant (WTP). Testing will use simulated waste rather than actual Hanford waste. Therefore, the use of suitable simulants is critical to achieving the goals of the test program. External review boards have raised questions regarding the overall representativeness of simulants used in previous mixing tests. Accordingly, WTP requested the Savannah River National Laboratory (SRNL) to assist with development of simulants for use in LSIT. Among the first tasks assigned to SRNL was to develop a list of waste properties that matter to pulse-jet mixer (PJM) mixing of WTP tanks. This report satisfies Commitment 5.2.3.1 of the Department of Energy Implementation Plan for Defense Nuclear Facilities Safety Board Recommendation 2010-2: physical properties important to mixing and scaling. In support of waste simulant development, the following two objectives are the focus of this report: (1) Assess physical and chemical properties important to the testing and development of mixing scaling relationships; (2) Identify the governing properties and associated ranges for LSIT to achieve the Newtonian and non-Newtonian test objectives. This includes the properties to support testing of sampling and heel management systems. The test objectives for LSIT relate to transfer and pump out of solid particles, prototypic integrated operations, sparger operation, PJM controllability, vessel level/density measurement accuracy, sampling, heel management, PJM restart, design and safety margin, Computational Fluid Dynamics (CFD) Verification and Validation (V and V) and comparison, performance testing and scaling, and high temperature operation. The slurry properties that are most important to Performance Testing and Scaling depend on the test objective and rheological classification of the slurry (i.e., Newtonian or non-Newtonian). The most important properties for testing with Newtonian slurries are the Archimedes number distribution and the particle concentration. For some test objectives, the shear strength is important. In the testing to collect data for CFD V and V and CFD comparison, the liquid density and liquid viscosity are important. In the high temperature testing, the liquid density and liquid viscosity are important. The Archimedes number distribution combines effects of particle size distribution, solid-liquid density difference, and kinematic viscosity. The most important properties for testing with non-Newtonian slurries are the slurry yield stress, the slurry consistency, and the shear strength. The solid-liquid density difference and the particle size are also important. It is also important to match multiple properties within the same simulant to achieve behavior representative of the waste. Other properties such as particle shape, concentration, surface charge, and size distribution breadth, as well as slurry cohesiveness and adhesiveness, liquid pH and ionic strength also influence the simulant properties either directly or through other physical properties such as yield stress.
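
    Since the Archimedes number distribution is named as the governing property for Newtonian slurries, here is a quick sketch of one common definition; the report may use a variant, and the example particle values are assumptions:

        def archimedes(d, rho_s, rho_l, mu, g=9.81):
            """Archimedes number: combines particle diameter d (m), solid
            and liquid densities (kg/m3), and liquid viscosity mu (Pa s),
            capturing the size / density-difference / viscosity interplay
            described above."""
            return g * d**3 * rho_l * (rho_s - rho_l) / mu**2

        # Example: a 100-micron gibbsite-like particle in water.
        print(archimedes(d=100e-6, rho_s=2420.0, rho_l=1000.0, mu=1.0e-3))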

  15. Large-Scale Spray Releases: Initial Aerosol Test Results

    SciTech Connect (OSTI)

    Schonewill, Philip P.; Gauglitz, Phillip A.; Bontha, Jagannadha R.; Daniel, Richard C.; Kurath, Dean E.; Adkins, Harold E.; Billing, Justin M.; Burns, Carolyn A.; Davis, James M.; Enderlin, Carl W.; Fischer, Christopher M.; Jenks, Jeromy WJ; Lukins, Craig D.; MacFarlan, Paul J.; Shutthanandan, Janani I.; Smith, Dennese M.

    2012-12-01

    One of the events postulated in the hazard analysis at the Waste Treatment and Immobilization Plant (WTP) and other U.S. Department of Energy (DOE) nuclear facilities is a breach in process piping that produces aerosols with droplet sizes in the respirable range. The current approach for predicting the size and concentration of aerosols produced in a spray leak involves extrapolating from correlations reported in the literature. These correlations are based on results obtained from small engineered spray nozzles using pure liquids with Newtonian fluid behavior. The narrow ranges of physical properties on which the correlations are based do not cover the wide range of slurries and viscous materials that will be processed in the WTP and across processing facilities in the DOE complex. Two key technical areas were identified where testing results were needed to improve the technical basis by reducing the uncertainty due to extrapolating existing literature results. The first technical need was to quantify the role of slurry particles in small breaches where the slurry particles may plug and result in substantially reduced, or even negligible, respirable fraction formed by high-pressure sprays. The second technical need was to determine the aerosol droplet size distribution and volume from prototypic breaches and fluids, specifically including sprays from larger breaches with slurries where data from the literature are scarce. To address these technical areas, small- and large-scale test stands were constructed and operated with simulants to determine aerosol release fractions and generation rates from a range of breach sizes and geometries. The properties of the simulants represented the range of properties expected in the WTP process streams and included water, sodium salt solutions, slurries containing boehmite or gibbsite, and a hazardous chemical simulant. The effect of anti-foam agents was assessed with most of the simulants. Orifices included round holes and rectangular slots. The round holes ranged in size from 0.2 to 4.46 mm. The slots ranged from 0.3 × 5 mm to 2.74 × 76.2 mm (width × length). Most slots were oriented longitudinally along the pipe, but some were oriented circumferentially. In addition, a limited number of multi-hole test pieces were tested in an attempt to assess the impact of a more complex breach. Much of the testing was conducted at pressures of 200 and 380 psi, but some tests were conducted at 100 psi. Testing the largest postulated breaches was deemed impractical because of the large size of some of the WTP equipment. The purpose of this report is to present the experimental results and analyses for the aerosol measurements obtained in the large-scale test stand. The report includes a description of the simulants used and their properties, equipment and operations, data analysis methodology, and test results. The results of tests investigating the role of slurry particles in plugging of small breaches are reported in Mahoney et al. (2012a). The results of the aerosol measurements in the small-scale test stand are reported in Mahoney et al. (2012b).

  16. ANALYSIS OF TURBULENT MIXING JETS IN LARGE SCALE TANK

    SciTech Connect (OSTI)

    Lee, S.; Dimenna, R.; Leishear, R.; Stefanko, D.

    2007-03-28

    Flow evolution models were developed to evaluate the performance of the new advanced design mixer pump for sludge mixing and removal operations with high-velocity liquid jets in one of the large-scale Savannah River Site waste tanks, Tank 18. This paper describes the computational model, the flow measurements used to provide validation data in the region far from the jet nozzle, the extension of the computational results to real tank conditions through the use of existing sludge suspension data, and finally, the sludge removal results from actual Tank 18 operations. A computational fluid dynamics approach was used to simulate the sludge removal operations. The models employed a three-dimensional representation of the tank with a two-equation turbulence model. Both the computational approach and the models were validated with onsite test data reported here and literature data. The model was then extended to actual conditions in Tank 18 through a velocity criterion to predict the ability of the new pump design to suspend settled sludge. A qualitative comparison with sludge removal operations in Tank 18 showed reasonably good agreement with the final results, subject to significant uncertainties in the actual sludge properties.

  17. Large-Scale Algal Cultivation, Harvesting and Downstream Processing...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    screening strains for desirable characteristics, identifying and mitigating contaminants, scaling up cultures for outdoor growth, harvesting and processing technologies,...

  18. Parallel I/O Software Infrastructure for Large-Scale Systems

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    An illustration of how MPI-IO file domain...

  19. Large-Scale Deep Learning on the YFCC100M Dataset (Conference...

    Office of Scientific and Technical Information (OSTI)

  20. The IR-resummed Effective Field Theory of Large Scale Structures (Journal Article)

    Office of Scientific and Technical Information (OSTI)

    We present a new method to resum the effect of large scale motions in the Effective Field Theory of Large Scale Structures. Because the linear power spectrum in ΛCDM is not scale free the effects of the large scale flows are enhanced. Although previous EFT calculations of the equal-time density

  1. FEMP Helps Federal Facilities Develop Large-Scale Renewable Energy Projects

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    August 21, 2013. EERE's Federal Energy Management Program issued a new resource that provides best practices and helpful guidance for federal agencies developing large-scale renewable energy projects. The resource, Large-Scale Renewable Energy Guide: Developing Renewable Energy Projects Larger than 10 MWs at

  2. EERE Success Story-FEMP Helps Federal Facilities Develop Large-Scale Renewable Energy Projects

    Office of Environmental Management (EM)

    August 21, 2013. EERE's Federal Energy Management Program issued a new resource that provides best practices and helpful guidance for federal agencies developing large-scale renewable energy projects. The resource, Large-Scale Renewable Energy Guide:

  3. Large-Scale Sequencing: The Future of Genomic Sciences Colloquium

    SciTech Connect (OSTI)

    Margaret Riley; Merry Buckley

    2009-01-01

    Genetic sequencing and the various molecular techniques it has enabled have revolutionized the field of microbiology. Examining and comparing the genetic sequences borne by microbes - including bacteria, archaea, viruses, and microbial eukaryotes - provides researchers with insights into the processes microbes carry out, their pathogenic traits, and new ways to use microorganisms in medicine and manufacturing. Until recently, sequencing entire microbial genomes has been laborious and expensive, and the decision to sequence the genome of an organism was made on a case-by-case basis by individual researchers and funding agencies. Now, thanks to new technologies, the cost and effort of sequencing is within reach for even the smallest facilities, and the ability to sequence the genomes of a significant fraction of microbial life may be possible. The availability of numerous microbial genomes will enable unprecedented insights into microbial evolution, function, and physiology. However, the current ad hoc approach to gathering sequence data has resulted in an unbalanced and highly biased sampling of microbial diversity. A well-coordinated, large-scale effort to target the breadth and depth of microbial diversity would result in the greatest impact. The American Academy of Microbiology convened a colloquium to discuss the scientific benefits of engaging in a large-scale, taxonomically-based sequencing project. A group of individuals with expertise in microbiology, genomics, informatics, ecology, and evolution deliberated on the issues inherent in such an effort and generated a set of specific recommendations for how best to proceed. The vast majority of microbes are presently uncultured and, thus, pose significant challenges to such a taxonomically-based approach to sampling genome diversity. However, we have yet to even scratch the surface of the genomic diversity among cultured microbes. A coordinated sequencing effort of cultured organisms is an appropriate place to begin, since not only are their genomes available, but they are also accompanied by data on environment and physiology that can be used to understand the resulting data. As single cell isolation methods improve, there should be a shift toward incorporating uncultured organisms and communities into this effort. Efforts to sequence cultivated isolates should target characterized isolates from culture collections for which biochemical data are available, as well as other cultures of lasting value from personal collections. The genomes of type strains should be among the first targets for sequencing, but creative culture methods, novel cell isolation, and sorting methods would all be helpful in obtaining organisms we have not yet been able to cultivate for sequencing. The data that should be provided for strains targeted for sequencing will depend on the phylogenetic context of the organism and the amount of information available about its nearest relatives. Annotation is an important part of transforming genome sequences into useful resources, but it represents the most significant bottleneck to the field of comparative genomics right now and must be addressed. Furthermore, there is a need for more consistency in both annotation and archiving annotation data. As new annotation tools become available over time, re-annotation of genomes should be implemented, taking advantage of advancements in annotation techniques in order to capitalize on the genome sequences and increase both the societal and scientific benefit of genomics work.
Given the proper resources, the knowledge and ability exist to be able to select model systems, some simple, some less so, and dissect them so that we may understand the processes and interactions at work in them. Colloquium participants suggest a five-pronged, coordinated initiative to exhaustively describe six different microbial ecosystems, designed to describe all the gene diversity, across genomes. In this effort, sequencing should be complemented by other experimental data, particularly transcriptomics and metabolomics data, all of which

  4. Large Scale Computing and Storage Requirements for Nuclear Physics Research

    SciTech Connect (OSTI)

    Gerber, Richard A.; Wasserman, Harvey J.

    2012-03-02

    The National Energy Research Scientific Computing Center (NERSC) is the primary computing center for the DOE Office of Science, serving approximately 4,000 users and hosting some 550 projects that involve nearly 700 codes for a wide variety of scientific disciplines. In addition to large-scale computing resources, NERSC provides critical staff support and expertise to help scientists make the most efficient use of these resources to advance the scientific mission of the Office of Science. In May 2011, NERSC, DOE’s Office of Advanced Scientific Computing Research (ASCR) and DOE’s Office of Nuclear Physics (NP) held a workshop to characterize HPC requirements for NP research over the next three to five years. The effort is part of NERSC’s continuing involvement in anticipating future user needs and deploying necessary resources to meet these demands. The workshop revealed several key requirements, in addition to achieving its goal of characterizing NP computing. The key requirements include: 1. Larger allocations of computational resources at NERSC; 2. Visualization and analytics support; and 3. Support at NERSC for the unique needs of experimental nuclear physicists. This report expands upon these key points and adds others. The results are based upon representative samples, called “case studies,” of the needs of science teams within NP. The case studies were prepared by NP workshop participants and contain a summary of science goals, methods of solution, current and future computing requirements, and special software and support needs. Participants were also asked to describe their strategy for computing in the highly parallel, “multi-core” environment that is expected to dominate HPC architectures over the next few years. The report also includes a section with NERSC responses to the workshop findings. NERSC has many initiatives already underway that address key workshop findings and all of the action items are aligned with NERSC strategic plans.

  5. Development of explosive event scale model testing capability at Sandia's large scale centrifuge facility

    SciTech Connect (OSTI)

    Blanchat, T.K.; Davie, N.T.; Calderone, J.J.

    1998-02-01

    Geotechnical structures such as underground bunkers, tunnels, and building foundations are subjected to stress fields produced by the gravity load on the structure and/or any overlying strata. These stress fields may be reproduced on a scaled model of the structure by proportionally increasing the gravity field through the use of a centrifuge. This technology can then be used to assess the vulnerability of various geotechnical structures to explosive loading. Applications of this technology include assessing the effectiveness of earth penetrating weapons, evaluating the vulnerability of various structures, counter-terrorism, and model validation. This document describes the development of expertise in scale model explosive testing on geotechnical structures using Sandia's large scale centrifuge facility. This study focused on buried structures such as hardened storage bunkers or tunnels. Data from this study were used to evaluate the predictive capabilities of existing hydrocodes and structural dynamics codes developed at Sandia National Laboratories (such as Pronto/SPH, Pronto/CTH, and ALEGRA). 7 refs., 50 figs., 8 tabs.
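
    The gravity-scaling relation underlying this approach can be stated compactly. In standard centrifuge modeling (a textbook relation, not taken from this report), a 1/N-scale model spun at N times Earth gravity reproduces the prototype's self-weight stress field:

```latex
\sigma_{\mathrm{model}} = \rho\,(N g)\,\frac{h}{N} = \rho\,g\,h = \sigma_{\mathrm{prototype}}
```

    where ρ is the material density and h the prototype depth, which is why explosive loading of a small buried model at elevated g can stand in for the full-scale structure.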

  6. Large-Scale Manufacturing of Nanoparticulate-Based Lubrication Additives

    SciTech Connect (OSTI)

    2009-06-01

    This factsheet describes a research project whose goal is to design, develop, manufacture, and scale up boron-based nanoparticulate lubrication additives.

  7. The Dark Energy of Turbulent Damping: Large Scale Dissipation...

    Office of Scientific and Technical Information (OSTI)

    Resource Relation: Conference: Plasma Energization: Exchanges between Fluid and Kinetic Scales; 2015-05-04 to 2015-05-06; Los Alamos, New Mexico, United States. Research Org: Los ...

  8. Large Scale Computing and Storage Requirements for High Energy Physics

    SciTech Connect (OSTI)

    Gerber, Richard A.; Wasserman, Harvey

    2010-11-24

    The National Energy Research Scientific Computing Center (NERSC) is the leading scientific computing facility for the Department of Energy's Office of Science, providing high-performance computing (HPC) resources to more than 3,000 researchers working on about 400 projects. NERSC provides large-scale computing resources and, crucially, the support and expertise needed for scientists to make effective use of them. In November 2009, NERSC, DOE's Office of Advanced Scientific Computing Research (ASCR), and DOE's Office of High Energy Physics (HEP) held a workshop to characterize the HPC resources needed at NERSC to support HEP research through the next three to five years. The effort is part of NERSC's legacy of anticipating users' needs and deploying resources to meet those demands. The workshop revealed several key points, in addition to achieving its goal of collecting and characterizing computing requirements. The chief findings: (1) Science teams need access to a significant increase in computational resources to meet their research goals; (2) Research teams need to be able to read, write, transfer, store online, archive, analyze, and share huge volumes of data; (3) Science teams need guidance and support to implement their codes on future architectures; and (4) Projects need predictable, rapid turnaround of their computational jobs to meet mission-critical time constraints. This report expands upon these key points and includes others. It also presents a number of case studies as representative of the research conducted within HEP. Workshop participants were asked to codify their requirements in this case study format, summarizing their science goals, methods of solution, current and three-to-five year computing requirements, and software and support needs. Participants were also asked to describe their strategy for computing in the highly parallel, multi-core environment that is expected to dominate HPC architectures over the next few years. The report includes a section that describes efforts already underway or planned at NERSC that address requirements collected at the workshop. NERSC has many initiatives in progress that address key workshop findings and are aligned with NERSC's strategic plans.

  9. The linearly scaling 3D fragment method for large scale electronic structure calculations

    SciTech Connect (OSTI)

    Zhao, Zhengji; Meza, Juan; Lee, Byounghak; Shan, Hongzhang; Strohmaier, Erich; Bailey, David; Wang, Lin-Wang

    2009-07-28

    The Linearly Scaling three-dimensional fragment (LS3DF) method is an O(N) ab initio electronic structure method for large-scale nano material simulations. It is a divide-and-conquer approach with a novel patching scheme that effectively cancels out the artificial boundary effects, which exist in all divide-and-conquer schemes. This method has made ab initio simulations of thousand-atom nanosystems feasible in a couple of hours, while retaining essentially the same accuracy as the direct calculation methods. The LS3DF method won the 2008 ACM Gordon Bell Prize for algorithm innovation. Our code has reached 442 Tflop/s running on 147,456 processors on the Cray XT5 (Jaguar) at OLCF, and has been run on 163,840 processors on the Blue Gene/P (Intrepid) at ALCF, and has been applied to a system containing 36,000 atoms. In this paper, we will present the recent parallel performance results of this code, and will apply the method to asymmetric CdSe/CdS core/shell nanorods, which have potential applications in electronic devices and solar cells.
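
    To make the patching idea concrete, here is a minimal one-dimensional toy (our illustration only; it is not the LS3DF algorithm or its actual fragment geometry): overlapping fragments are combined with alternating signs so that shared pieces, and with them the artificial fragment boundaries, cancel by inclusion-exclusion.

```python
import numpy as np

# Toy 1D analogue of divide-and-conquer patching: reassemble the energy of a
# chain (on-site terms e[i] plus nearest-neighbor bonds b[i]) from overlapping
# size-2 and size-1 fragments so every term is counted exactly once.
rng = np.random.default_rng(0)
N = 10
e = rng.random(N)        # on-site energies
b = rng.random(N - 1)    # bond energies between sites i and i+1

def frag_energy(start, size):
    """Energy of fragment [start, start+size): its sites plus internal bonds."""
    return e[start:start + size].sum() + b[start:start + size - 1].sum()

exact = e.sum() + b.sum()

# Size-2 fragments count interior sites twice; subtracting the size-1
# fragments on the overlaps cancels the double counting.
patched = sum(frag_energy(i, 2) for i in range(N - 1)) \
        - sum(frag_energy(i, 1) for i in range(1, N - 1))

assert np.isclose(exact, patched)
```

    LS3DF arranges the same kind of cancellation in three dimensions over several overlapping fragment shapes, which is what removes the artificial boundary effects mentioned in the abstract.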

  10. The Linearly Scaling 3D Fragment Method for Large Scale Electronic Structure Calculations

    SciTech Connect (OSTI)

    Zhao, Zhengji; Meza, Juan; Lee, Byounghak; Shan, Hongzhang; Strohmaier, Erich; Bailey, David; Wang, Lin-Wang

    2009-06-26

    The Linearly Scaling three-dimensional fragment (LS3DF) method is an O(N) ab initio electronic structure method for large-scale nano material simulations. It is a divide-and-conquer approach with a novel patching scheme that effectively cancels out the artificial boundary effects, which exist in all divide-and-conquer schemes. This method has made ab initio simulations of thousand-atom nanosystems feasible in a couple of hours, while retaining essentially the same accuracy as the direct calculation methods. The LS3DF method won the 2008 ACM Gordon Bell Prize for algorithm innovation. Our code has reached 442 Tflop/s running on 147,456 processors on the Cray XT5 (Jaguar) at OLCF, and has been run on 163,840 processors on the Blue Gene/P (Intrepid) at ALCF, and has been applied to a system containing 36,000 atoms. In this paper, we will present the recent parallel performance results of this code, and will apply the method to asymmetric CdSe/CdS core/shell nanorods, which have potential applications in electronic devices and solar cells.

  11. Joint EM-NE-International Study of Glass Behavior over Geologic Time Scales

    SciTech Connect (OSTI)

    Ryan, Joseph V.; Ebert, W. L.; Icenhower, Jonathan P.; Schreiber, Daniel K.; Strachan, Denis M.; Vienna, John D.

    2012-03-30

    Vitrification has been chosen as the best demonstrated available technology for waste immobilization worldwide. To date, the contributions of physical and chemical processes controlling the long-term glass dissolution rate in geologic disposal remain uncertain, leading to a lack of international consensus on a glass corrosion rate law. Existing rate laws have overcome the uncertainty through conservatism, but a thorough mechanistic understanding of waste form durability in geologic environments would improve public and regulator confidence, as well as lead to cost savings if it is possible to take credit for the true durability of the waste form itself in system evaluations. To this end, six nations have joined together to formulate a joint plan for collaborative research into the mechanisms controlling the long-term corrosion of glass. This report highlights the technical program plan behind the US portion of this effort, with an emphasis on the current understanding (and limitations) of several mechanistic theories for glass corrosion. Some recent results are presented to provide an example of the ongoing research.

  12. The Future of Large Scale Visual Data

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    - Vector, then single-core MPPs - "Large" SMP platforms - Relatively well balanced: memory, FLOPS, I/O. 16 June 2014. The World that Was: Software Architecture * Data Analysis and...

  13. Creating Large Scale Database Servers (Technical Report) | SciTech...

    Office of Scientific and Technical Information (OSTI)

    access to such a large quantity of data through a database server is a daunting task. ... This paper will describe the design of the database and the changes that we needed to make ...

  14. COLLOQUIUM: Large Scale Superconducting Magnets for Variety of...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    These developments have been made using the low temperature superconductors (LTS) NbTi and Nb3Sn. The now operating Large Hadron Collider at CERN has demonstrated the scientific ...

  15. Application of DYNA3D in large scale crashworthiness calculations

    SciTech Connect (OSTI)

    Benson, D.J.; Hallquist, J.O.; Igarashi, M.; Shimomaki, K.; Mizuno, M.

    1986-01-01

    This paper presents an example of an automobile crashworthiness calculation. Based on our experiences with the example calculation, we make recommendations to those interested in performing crashworthiness calculations. The example presented in this paper was supplied by Suzuki Motor Co., Ltd., and provided a significant shakedown for the new large deformation shell capability of the DYNA3D code. 15 refs., 3 figs.

  16. In-situ sampling of a large-scale particle simulation for interactive visualization and analysis (Journal Article)

    Office of Scientific and Technical Information (OSTI)

    We propose storing a random sampling of data from large scale particle simulations, such as the Roadrunner Universe MC{sup 3} cosmological simulation, to be used for interactive post-analysis and

  17. Locations of Smart Grid Demonstration and Large-Scale Energy Storage Projects

    Energy Savers [EERE]

    Map of the United States showing the location of all projects created with funding from the Smart Grid Demonstration and Energy Storage Project, funded through the American Recovery and Reinvestment Act.

  18. DOE's Office of Science Seeks Proposals for Expanded Large-Scale Scientific Computing

    Energy Savers [EERE]

    May 16, 2005. WASHINGTON, D.C. -- Secretary of Energy Samuel W. Bodman announced today that DOE's Office of Science is seeking proposals to support innovative, large-scale computational science projects to enable high-impact advances through the use of advanced computers not commonly available in

  19. Effects of Volcanism, Crustal Thickness, and Large Scale Faulting on the Development and Evolution of Geothermal Systems: Collaborative Project in Chile

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

  20. Asynchronous Two-Level Checkpointing Scheme for Large-Scale Adjoints...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Adjoints are an important computational tool for large-scale sensitivity evaluation, uncertainty quantification, and derivative-based...

  1. Large-scale Offshore Wind Power in the United States. Assessment of Opportunities and Barriers

    SciTech Connect (OSTI)

    Musial, Walter; Ram, Bonnie

    2010-09-01

    This report describes the benefits of and barriers to large-scale deployment of offshore wind energy systems in U.S. waters.

  2. U.S. Signs International Fusion Energy Agreement; Large-Scale, Clean Fusion Energy Project to Begin Construction

    Office of Science (SC) Website

  3. Large Scale Comparative Visualisation of Regulatory Networks with TRNDiff

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Chua, Xin-Yi; Buckingham, Lawrence; Hogan, James M.; Novichkov, Pavel

    2015-06-01

    The advent of Next Generation Sequencing (NGS) technologies has seen explosive growth in genomic datasets, and dense coverage of related organisms, supporting study of subtle, strain-specific variations as a determinant of function. Such data collections present fresh and complex challenges for bioinformatics, those of comparing models of complex relationships across hundreds and even thousands of sequences. Transcriptional Regulatory Network (TRN) structures document the influence of regulatory proteins called Transcription Factors (TFs) on associated Target Genes (TGs). TRNs are routinely inferred from model systems or iterative search, and analysis at these scales requires simultaneous displays of multiple networks well beyond those of existing network visualisation tools [1]. In this paper we describe TRNDiff, an open source system supporting the comparative analysis and visualization of TRNs (and similarly structured data) from many genomes, allowing rapid identification of functional variations within species. The approach is demonstrated through a small scale multiple TRN analysis of the Fur iron-uptake system of Yersinia, suggesting a number of candidate virulence factors; and through a larger study exploiting integration with the RegPrecise database (http://regprecise.lbl.gov; [2]) - a collection of hundreds of manually curated and predicted transcription factor regulons drawn from across the entire spectrum of prokaryotic organisms.
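
    As a toy illustration of the comparative idea (our sketch; TRNDiff's actual data model and interface are not shown here), a TRN can be held as a mapping from transcription factor to target-gene set, with strain-specific variation read off from set differences:

```python
# Hypothetical mini-regulons for two strains; gene names are illustrative only.
trn_a = {"Fur": {"fhuA", "fepA", "entC"}, "Crp": {"lacZ", "malE"}}
trn_b = {"Fur": {"fhuA", "fepA", "irp2"}, "Crp": {"lacZ", "malE"}}

for tf in sorted(set(trn_a) | set(trn_b)):
    only_a = trn_a.get(tf, set()) - trn_b.get(tf, set())
    only_b = trn_b.get(tf, set()) - trn_a.get(tf, set())
    if only_a or only_b:  # report regulons that differ between strains
        print(tf, "strain A only:", sorted(only_a), "| strain B only:", sorted(only_b))
```

    Scaled to hundreds of genomes and rendered graphically, this kind of regulon diff is what the paper's simultaneous multi-network displays support.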

  4. Joint EM-NE-International Study of Glass Behavior over Geologic Time Scales - 12303

    SciTech Connect (OSTI)

    Ryan, J.V.; Schreiber, D.K.; Strachan, D.M.; Vienna, J.D. [Pacific Northwest National Laboratory, P. O. Box 999, Richland, WA 99352 (United States); Ebert, W.L. [Argonne National Laboratory, Argonne, IL 60439 (United States); Icenhower, J.P. [Lawrence Berkeley National Laboratory, One Cyclotron Road, Berkeley, CA 94720 (United States)

    2012-07-01

    Vitrification has been chosen as the best demonstrated available technology for waste immobilization worldwide. To date, the contributions of physical and chemical processes controlling the long-term glass dissolution rate in geologic disposal remain uncertain, leading to a lack of international consensus on a rate law for glass corrosion. Existing rate laws have overcome uncertainty through conservatism, but a thorough mechanistic understanding of waste form durability in geologic environments would improve public and regulator confidence. If it is possible to take credit for the true durability of the waste form in repository system evaluations, then it is possible to design the repository with less conservatism with concomitant cost savings. To gain a fundamental understanding of the dissolution rate law, six nations have joined together to formulate a joint plan for collaborative research into the mechanisms controlling the long-term corrosion of glass. This report highlights the technical program plan behind the US portion of this effort, with an emphasis on the current understanding (and limitations) of several mechanistic theories for glass corrosion. Some recent results are presented to provide an example of the ongoing research. Atom probe tomography has been used to provide a high-resolution analysis of elemental concentration gradients present at the hydrated glass / pristine glass interface in SON68 after 25.75 years of corrosion in a simulated granitic groundwater at 90 deg. C. The most valuable result of these initial studies is the success of the technique. Characterization by APT had never been previously demonstrated for glass corrosion layers. The resolution of APT is a powerful addition to the tools with which we can investigate the mechanisms dominating glass corrosion. Some other key results of this study include the observation that the elemental interfacial width between the hydrated glass and pristine glass appears to be much sharper ({approx}2 nm for B, Na and Al) than had been previously measured using nanoSIMS ({approx}240 nm). It is not clear whether the APT analysis and nanoSIMS characterizations were possibly performed on topographically unique regions, or whether nanoSIMS overestimated the elemental width. However, the APT data seems very convincing that the elemental width can be much sharper than was previously thought. This result calls into question some of the assumptions made for the diffusion-control models of glass dissolution, since such a sharp profile would not match the diffusion coefficients used to date. Other results, such as the observation of apparently layered concentration profiles, show that gel evolution is not as simple as is currently assumed in nearly every model. This task is a good example of the collaborative nature of the I-TEAM effort. Based on experimental needs and differences in expertise, scientists from DOE and CEA worked together to change the level of understanding in the field. These types of interactions are nearly ubiquitous among the tasks in the technical program plan. With the excellence of the team in place and the willingness of the participants to work together for a common understanding, the stated goal of consensus on the mechanistic basis for radionuclide release from glass is well within reach. (authors)

  5. Deep geological isolation of nuclear waste: numerical modeling of repository scale hydrology

    SciTech Connect (OSTI)

    Dettinger, M.D.

    1980-04-01

    The Scope of Work undertaken covers three main tasks, described as follows: (Task 1) CDM provided consulting services to the University on modeling aspects of the study having to do with transport processes involving the local groundwater system near the repository and the flow of fluids and vapors through the various porous media making up the repository system. (Task 2) CDM reviewed literature related to repository design, concentrating on effects of the repository geometry, location and other design factors on the flow of fluids within the repository boundaries, drainage from the repository structure, and the eventual transport of radionuclides away from the repository site. (Task 3) CDM, in a joint effort with LLL personnel, identified generic boundary and initial conditions, identified processes to be modeled, and recommended a modeling approach with suggestions for appropriate simplifications and approximations to the problem and identifying important parameters necessary to model the processes. This report consists of two chapters and an appendix. The first chapter (Chapter III of the LLL report) presents a detailed description and discussion of the modeling approach developed in this project, its merits and weaknesses, and a brief review of the difficulties anticipated in implementing the approach. The second chapter (Chapter IV of the LLL report) presents a summary of a survey of researchers in the field of repository performance analysis and a discussion of that survey in light of the proposed modeling approach. The appendix is a review of the important physical processes involved in the potential hydrologic transport of radionuclides through, around and away from deep geologic nuclear waste repositories.

  6. Large-scale soil bioremediation using white-rot fungi

    SciTech Connect (OSTI)

    Holroyd, M.L.; Caunt, P.

    1995-12-31

    Some organic pollutant compounds are considered resistant to conventional bioremediation because of their structure or behavior in soil. This phenomenon, together with the increasing need to reach lower target levels in shorter time periods, has shown the need for improved or alternative biological processes. It has been known for some time that the white-rot fungi, particularly the species Phanerochaete chrysosporium, have potentially useful abilities to rapidly degrade pollutant molecules. The use of white-rot fungi at the field scale presents a number of challenges, and this paper outlines the use of a process incorporating Phanerochaete to successfully bioremediate over 6,000 m{sup 3} of chlorophenol-contaminated soil at a site in Finland. Moreover, the method developed is very cost-effective and proved capable of reaching the very low target levels within the contracted time span.

  7. Parallel Tensor Compression for Large-Scale Scientific Data.

    SciTech Connect (OSTI)

    Kolda, Tamara G.; Ballard, Grey; Austin, Woody Nathan

    2015-10-01

    As parallel computing trends towards the exascale, scientific data produced by high-fidelity simulations are growing increasingly massive. For instance, a simulation on a three-dimensional spatial grid with 512 points per dimension that tracks 64 variables per grid point for 128 time steps yields 8 TB of data. By viewing the data as a dense five-way tensor, we can compute a Tucker decomposition to find inherent low-dimensional multilinear structure, achieving compression ratios of up to 10000 on real-world data sets with negligible loss in accuracy. So that we can operate on such massive data, we present the first-ever distributed memory parallel implementation for the Tucker decomposition, whose key computations correspond to parallel linear algebra operations, albeit with nonstandard data layouts. Our approach specifies a data distribution for tensors that avoids any tensor data redistribution, either locally or in parallel. We provide accompanying analysis of the computation and communication costs of the algorithms. To demonstrate the compression and accuracy of the method, we apply our approach to real-world data sets from combustion science simulations. We also provide detailed performance results, including parallel performance in both weak and strong scaling experiments.
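
    The quoted data volume can be checked directly. Assuming 8-byte double-precision values (our assumption; the abstract does not state the value width), the example works out to exactly 2{sup 43} bytes:

```python
# Back-of-envelope check of the simulation size quoted above; the 8-byte
# value width (double precision) is an assumption, not stated in the abstract.
points = 512 ** 3       # spatial grid: 512 points per dimension
variables = 64          # variables tracked per grid point
timesteps = 128
total_bytes = points * variables * timesteps * 8
print(total_bytes)              # 8796093022208
print(total_bytes / 2 ** 40)    # 8.0 (TiB), consistent with the quoted 8 TB
```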

  8. Single-field consistency relations of large scale structure

    SciTech Connect (OSTI)

    Creminelli, Paolo; Noreña, Jorge; Simonović, Marko; Vernizzi, Filippo E-mail: jorge.norena@icc.ub.edu E-mail: filippo.vernizzi@cea.fr

    2013-12-01

    We derive consistency relations for the late universe (CDM and ΛCDM): relations between an n-point function of the density contrast δ and an (n+1)-point function in the limit in which one of the (n+1) momenta becomes much smaller than the others. These are based on the observation that a long mode, in single-field models of inflation, reduces to a diffeomorphism since its freezing during inflation all the way until the late universe, even when the long mode is inside the horizon (but out of the sound horizon). These results are derived in Newtonian gauge, at first and second order in the small momentum q of the long mode and they are valid non-perturbatively in the short-scale δ. In the non-relativistic limit our results match with [1]. These relations are a consequence of diffeomorphism invariance; they are not satisfied in the presence of extra degrees of freedom during inflation or violation of the Equivalence Principle (extra forces) in the late universe.
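
    Schematically, the relation in question takes the familiar squeezed-limit, dipole form (our paraphrase of the standard non-relativistic result, with D the linear growth factor, P_δ the long-mode power spectrum, and primes denoting correlators stripped of their momentum-conserving delta functions):

```latex
\lim_{\vec q \to 0}
\langle \delta_{\vec q}(\eta)\,\delta_{\vec k_1}(\eta_1)\cdots\delta_{\vec k_n}(\eta_n) \rangle'
\simeq
- P_\delta(q,\eta) \sum_{a=1}^{n} \frac{D(\eta_a)}{D(\eta)}\,
\frac{\vec k_a \cdot \vec q}{q^{2}}\,
\langle \delta_{\vec k_1}(\eta_1)\cdots\delta_{\vec k_n}(\eta_n) \rangle'
```

    At equal times the apparent 1/q enhancement cancels (momentum conservation gives the sum of the k_a equal to -q, and a long mode then acts as a uniform displacement), which is why the observable consequences appear only at higher order in q or in unequal-time correlators.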

  9. Testing coupled dark energy with large scale structure observation

    SciTech Connect (OSTI)

    Yang, Weiqiang; Xu, Lixin, E-mail: d11102004@mail.dlut.edu.cn, E-mail: lxxu@dlut.edu.cn [Institute of Theoretical Physics, School of Physics and Optoelectronic Technology, Dalian University of Technology, Dalian, 116024 (China)

    2014-08-01

    The coupling between the dark components provides a new approach to mitigate the coincidence problem of the cosmological standard model. In this paper, dark energy is treated as a fluid with a constant equation of state, whose coupling with dark matter is Q-bar = 3Hξ{sub x}ρ-bar{sub x}. In the frame of dark energy, we derive the evolution equations for the density and velocity perturbations. According to the Markov Chain Monte Carlo method, we constrain the model by currently available cosmic observations which include cosmic microwave background radiation, baryon acoustic oscillation, type Ia supernovae, and fσ{sub 8}(z) data points from redshift-space distortion. The results show the interaction rate in the 1σ, 2σ, 3σ regions: ξ{sub x} = 0.00328{sub -0.00328-0.00328-0.00328}{sup +0.000736+0.00549+0.00816}, which means that the recent cosmic observations favor a small interaction rate, up to the order of 10{sup -2}; meanwhile, the measurement of redshift-space distortion could rule out a large interaction rate in the 1σ region.
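
    For orientation, in one common convention (sign and scale-factor placement differ between papers, so treat this as illustrative rather than as this paper's exact equations) the coupling enters the background continuity equations, written in conformal time, as:

```latex
\bar\rho_c' + 3\mathcal{H}\,\bar\rho_c = a\bar Q,
\qquad
\bar\rho_x' + 3\mathcal{H}\,(1+w_x)\,\bar\rho_x = -a\bar Q,
\qquad
\bar Q = 3 H \xi_x \bar\rho_x
```

    so a nonzero ξ{sub x} transfers energy between the dark components; the sign convention determines which component gains.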

  10. Development of fine-resolution analyses and expanded large-scale forcing properties. Part II: Scale-awareness and application to single-column model experiments (Journal Article)

    Office of Scientific and Technical Information (OSTI)

    Fine-resolution three-dimensional fields have been produced using the Community Gridpoint

  11. Method for large-scale fabrication of atomic-scale structures on material surfaces using surface vacancies

    DOE Patents [OSTI]

    Lim, Chong Wee (Urbana, IL); Ohmori, Kenji (Urbana, IL); Petrov, Ivan Georgiev (Champaign, IL); Greene, Joseph E. (Champaign, IL)

    2004-07-13

    A method for forming atomic-scale structures on a surface of a substrate on a large-scale includes creating a predetermined amount of surface vacancies on the surface of the substrate by removing an amount of atoms on the surface of the material corresponding to the predetermined amount of the surface vacancies. Once the surface vacancies have been created, atoms of a desired structure material are deposited on the surface of the substrate to enable the surface vacancies and the atoms of the structure material to interact. The interaction causes the atoms of the structure material to form the atomic-scale structures.

  12. Large-Scale First-Principles Molecular Dynamics Simulations on the BlueGene/L Platform using the Qbox Code (Conference)

    Office of Scientific and Technical Information (OSTI)

    We demonstrate that the Qbox code supports unprecedented large-scale First-Principles Molecular Dynamics (FPMD) applications on the BlueGene/L

  13. Nuclear EMP simulation for large-scale urban environments. FDTD for electrically large problems.

    SciTech Connect (OSTI)

    Smith, William S.; Bull, Jeffrey S.; Wilcox, Trevor; Bos, Randall J.; Shao, Xuan-Min; Goorley, John T.; Costigan, Keeley R.

    2012-08-13

    In case of a terrorist nuclear attack in a metropolitan area, EMP measurement could provide: (1) a prompt confirmation of the nature of the explosion (chemical or nuclear) for emergency response; and (2) characterization parameters of the device (reaction history, yield) for technical forensics. However, the urban environment could affect the fidelity of the prompt EMP measurement (as well as all other types of prompt measurement): (1) the nuclear EMP wavefront would no longer be coherent, due to incoherent production, attenuation, and propagation of gammas and electrons; and (2) EMP propagation from the source region outward would undergo complicated transmission, reflection, and diffraction processes. EMP simulation for electrically-large urban environments: (1) a coupled MCNP/FDTD (finite-difference time domain Maxwell solver) approach; and (2) FDTD tends to be limited to problems that are not 'too' large compared to the wavelengths of interest because of numerical dispersion and anisotropy. We use a higher-order low-dispersion, isotropic FDTD algorithm for EMP propagation.
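
    The "electrically large" difficulty follows from standard FDTD constraints; for instance, the Courant stability limit for the 3D Yee scheme (a textbook result, quoted here for context rather than taken from this report) ties the time step to the grid spacing:

```latex
\Delta t \le \frac{1}{c\,\sqrt{\dfrac{1}{\Delta x^{2}} + \dfrac{1}{\Delta y^{2}} + \dfrac{1}{\Delta z^{2}}}}
```

    Resolving short wavelengths over a city-scale domain therefore forces both fine grids and small time steps, and accumulated numerical dispersion grows with propagation distance, which motivates the higher-order low-dispersion algorithm mentioned above.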

  14. Hanford Site Guidelines for Preparation and Presentation of Geologic Information

    SciTech Connect (OSTI)

    Lanigan, David C.; Last, George V.; Bjornstad, Bruce N.; Thorne, Paul D.; Webber, William D.

    2010-04-30

    A complex geology lies beneath the Hanford Site of southeastern Washington State. Within this geology is a challenging large-scale environmental cleanup project. Geologic and contaminant transport information generated by several U.S. Department of Energy contractors must be documented in geologic graphics clearly, consistently, and accurately. These graphics must then be disseminated in formats readily acceptable by general graphics and document producing software applications. The guidelines presented in this document are intended to facilitate consistent, defensible, geologic graphics and digital data/graphics sharing among the various Hanford Site agencies and contractors.

  15. ACCOLADES: A Scalable Workflow Framework for Large-Scale Simulation and Analyses of Automotive Engines

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Aithal, S.M.; Wild, S.M. In: High Performance Computing (conference proceedings), Vol. 9137, pp. 87-95. Springer, Frankfurt, Germany, 2015.

  16. Partition-of-unity finite-element method for large scale quantum molecular dynamics on massively parallel computational platforms (Technical Report)

    Office of Scientific and Technical Information (OSTI)

    Over the course of the past two decades, quantum mechanical calculations have

  17. HyLights -- Tools to Prepare the Large-Scale European Demonstration Projects on Hydrogen for Transport

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Presented at Refueling Infrastructure for Alternative Fuel Vehicles: Lessons Learned for Hydrogen Conference, April 2-3, 2008, Sacramento, California.

  18. Overcoming the Barrier to Achieving Large-Scale Production - A Case Study

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    From concept to large-scale production, one manufacturer tells the story and identifies the primary challenges and how a small amount of government support could be most helpful. Scott Burroughs, Semprius, Inc., August 31, 2011.

  19. Transport Induced by Large Scale Convective Structures in a Dipole-Confined Plasma

    SciTech Connect (OSTI)

    Grierson, B. A.; Mauel, M. E.; Worstell, M. W.; Klassen, M.

    2010-11-12

    Convective structures characterized by ExB motion are observed in a dipole-confined plasma. Particle transport rates are calculated from density dynamics obtained from multipoint measurements and the reconstructed electrostatic potential. The calculated transport rates determined from the large-scale dynamics and local probe measurements agree in magnitude, show intermittency, and indicate that the particle transport is dominated by large-scale convective structures.
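
    The convective motion referred to is the usual E×B drift; with the reconstructed electrostatic potential φ (standard definitions, not results of the paper):

```latex
\vec v_{E\times B} = \frac{\vec E \times \vec B}{B^{2}},
\qquad
\vec E = -\nabla\phi,
\qquad
\Gamma = \langle \tilde n\, \tilde v_r \rangle
```

    where the flux Γ correlates density fluctuations with the radial component of the drift, which is the quantity the multipoint measurements make accessible.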

  20. 'Sidecars' Pave the Way for Concurrent Analytics of Large-Scale Simulations

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Halo Finder Enhancement Puts Supercomputer Users in the Driver's Seat. November 2, 2015. Contact: Kathy Kincade, +1 510 495 2124, kkincade@lbl.gov. In this Reeber halo finder simulation, the blueish haze is a volume rendering of the density field that Nyx calculates every time step. The light blue and

  1. Effect of Subgrid Cloud Variability on Parameterization of Indirect Aerosol Effect in Large-Scale Models

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    M. Ovtchinnikov and S. J. Ghan (Pacific Northwest National Laboratory, Richland, Washington); X. Dong (University of Utah, Salt Lake City, Utah); M. H. Zhang (State University of New York, Stony Brook, New York). Introduction: An adequate parameterization of cloud microphysics is essential for estimating the indirect aerosol effect in large-scale models. Such a parameterization must rely on a physically

  2. Towards a Large-Scale Recording System: Demonstration of Polymer-Based Penetrating Array for Chronic Neural Recording (Conference)

    Office of Scientific and Technical Information (OSTI)

    Authors: Tooker, A.; Liu, D.; Anderson, E.B.; Felix, S.; Shah, K.G.; Lee, K.Y.; Chung, J.E.; Pannu, S.; Frank, L.; Tolosa, V.

  3. BLM and Forest Service Consider Large-Scale Geothermal Leasing

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    June 18, 2008. In an effort to encourage appropriate geothermal energy development on public lands, the Bureau of Land Management (BLM) and the U.S. Forest Service have prepared a Draft Programmatic Environmental Impact Statement (PEIS) for geothermal leasing in the West, including Alaska. The draft PEIS considers all public lands and national

  4. Overcoming the Barrier to Achieving Large-Scale Production - A Case Study

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    This presentation summarizes the information given by Semprius during the Photovoltaic Validation and Bankability Workshop in San Jose, California, on August 31, 2011.

  5. A First Step towards Large-Scale Plants to Plastics Engineering

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    November 9, 2010. Brookhaven National Laboratory researches making plastics from plants. By optimizing the accumulation of particular fatty acids, a Brookhaven team of scientists are developing a method suitable for

  6. CO{sub 2} Sequestration Capacity and Associated Aspects of the Most Promising Geologic Formations in the Rocky Mountain Region: Local-Scale Analyses

    SciTech Connect (OSTI)

    Laes, Denise; Eisinger, Chris; Morgan, Craig; Rauzi, Steve; Scholle, Dana; Scott, Phyllis; Lee, Si-Yong; Zaluski, Wade; Esser, Richard; Matthews, Vince; McPherson, Brian

    2013-07-30

    The purpose of this report is to provide a summary of individual local-scale CCS site characterization studies conducted in Colorado, New Mexico and Utah. These site-specific characterization analyses were performed as part of the Characterization of Most Promising Sequestration Formations in the Rocky Mountain Region (RMCCS) project. The primary objective of these local-scale analyses is to provide a basis for regional-scale characterization efforts within each state. Specifically, limits on time and funding will typically inhibit CCS projects from conducting high-resolution characterization of a state-sized region, but smaller (< 10,000 km{sup 2}) site analyses are usually possible, and such analyses can provide insight regarding limiting factors for the regional-scale geology. For the RMCCS project, the outcomes of these local-scale studies provide a starting point for future local-scale site characterization efforts in the Rocky Mountain region.

  7. Variability of Load and Net Load in Case of Large Scale Distributed Wind Power

    SciTech Connect (OSTI)

    Holttinen, H.; Kiviluoma, J.; Estanqueiro, A.; Gomez-Lazaro, E.; Rawn, B.; Dobschinski, J.; Meibom, P.; Lannoye, E.; Aigner, T.; Wan, Y. H.; Milligan, M.

    2011-01-01

    Large scale wind power production and its variability is one of the major inputs to wind integration studies. This paper analyses measured data from large scale wind power production. Comparisons of variability are made across several variables: time scale (10-60 minute ramp rates), number of wind farms, and simulated vs. modeled data. Ramp rates for Wind power production, Load (total system load) and Net load (load minus wind power production) demonstrate how wind power increases the net load variability. Wind power will also change the timing of daily ramps.
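
    As a sketch of how such a comparison can be computed (an assumed procedure with synthetic data; the function name and windowing choices are ours, not the paper's), ramp rates are differences of a time series across a sliding window, and the net-load series inherits variability from both load and wind:

```python
import numpy as np

def ramp_rates(series, steps):
    """Changes across a window of `steps` samples, e.g. 6 x 10-min = 60-min ramps."""
    return series[steps:] - series[:-steps]

rng = np.random.default_rng(1)
t = np.linspace(0, 8 * np.pi, 1000)                  # synthetic 10-minute data
load = 1000 + 100 * np.sin(t) + rng.normal(0, 5, 1000)
wind = 200 + np.cumsum(rng.normal(0, 2, 1000))       # persistent wind fluctuations
net_load = load - wind

for name, s in (("load", load), ("net load", net_load)):
    print(name, "60-min ramp std:", round(float(ramp_rates(s, 6).std()), 1))
```

    With independent load and wind fluctuations the ramp variances add, so the net-load ramps come out wider than the load ramps, which is the effect the paper quantifies with measured data.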

  8. Simultaneous effect of modified gravity and primordial non-Gaussianity in large scale structure observations

    SciTech Connect (OSTI)

    Mirzatuny, Nareg; Khosravi, Shahram; Baghram, Shant; Moshafi, Hossein E-mail: khosravi@mail.ipm.ir E-mail: hosseinmoshafi@iasbs.ac.ir

    2014-01-01

    In this work we study the simultaneous effect of primordial non-Gaussianity and the modification of gravity in the f(R) framework on large scale structure observations. We show that non-Gaussianity and modified gravity introduce scale dependent bias and growth rate functions. The deviation from ΛCDM in the case of primordial non-Gaussian models is in large scales, while the growth rate deviates from ΛCDM in small scales for modified gravity theories. We show that the redshift space distortion can be used to distinguish positive and negative f{sub NL} in a standard background, while in f(R) theories they are not easily distinguishable. The galaxy power spectrum is generally enhanced in the presence of non-Gaussianity and modified gravity. We also obtain the scale dependence of this enhancement. Finally we define galaxy growth rate and galaxy growth rate bias as new observational parameters to constrain cosmology.
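
    The large-scale signature from local-type non-Gaussianity is usually traced to a scale-dependent halo bias of the standard form (quoted for orientation; growth-factor normalization conventions vary, and this is not necessarily the exact expression used in the paper):

```latex
\Delta b(k, z) = 3 f_{NL}\,(b - 1)\,\delta_c\,
\frac{\Omega_m H_0^{2}}{c^{2}\,k^{2}\,T(k)\,D(z)}
```

    with b the Gaussian bias, δ{sub c} ≈ 1.686, T(k) the transfer function, and D(z) the growth factor; the 1/k{sup 2} factor is what confines the deviation to large scales, complementing the small-scale growth-rate deviations of f(R) gravity described above.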

  9. Measuring and tuning energy efficiency on large scale high performance computing platforms.

    SciTech Connect (OSTI)

    Laros, James H., III

    2011-08-01

    Recognition of the importance of power in the field of High Performance Computing, whether it be as an obstacle, expense or design consideration, has never been greater and more pervasive. While research has been conducted on many related aspects, there is a stark absence of work focused on large scale High Performance Computing. Part of the reason is the lack of measurement capability currently available on small or large platforms. Typically, research is conducted using coarse methods of measurement such as inserting a power meter between the power source and the platform, or fine grained measurements using custom instrumented boards (with obvious limitations in scale). To collect the measurements necessary to analyze real scientific computing applications at large scale, an in-situ measurement capability must exist on a large scale capability class platform. In response to this challenge, we exploit the unique power measurement capabilities of the Cray XT architecture to gain an understanding of power use and the effects of tuning. We apply these capabilities at the operating system level by deterministically halting cores when idle. At the application level, we gain an understanding of the power requirements of a range of important DOE/NNSA production scientific computing applications running at large scale (thousands of nodes), while simultaneously collecting current and voltage measurements on the hosting nodes. We examine the effects of both CPU and network bandwidth tuning and demonstrate energy savings opportunities of up to 39% with little or no impact on run-time performance. Capturing scale effects in our experimental results was key. Our results provide strong evidence that next generation large-scale platforms should not only approach CPU frequency scaling differently, but could also benefit from the capability to tune other platform components, such as the network, to achieve energy efficient performance.
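
    A schematic of the energy accounting such in-situ measurements enable (our sketch; the Cray XT's actual measurement interfaces and data formats are not shown here): energy is the time integral of instantaneous power over the sampled current and voltage.

```python
import numpy as np

dt = 1.0                                           # assumed sampling interval, seconds
volts = np.full(3600, 12.0)                        # per-node voltage samples, 1 hour
amps = 20 + 2 * np.sin(np.linspace(0, 40, 3600))   # per-node current samples

joules = float(np.sum(volts * amps * dt))          # E = sum of V * I * dt
print(joules / 3.6e6, "kWh per node-hour")         # compare tuned vs. untuned runs
```

    Summing this per node across thousands of nodes, and repeating it for tuned and untuned runs, is the kind of accounting that supports savings claims like the 39% figure above.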

  10. Basin-Scale Leakage Risks from Geologic Carbon Sequestration: Impact on Carbon Capture and Storage Energy Market Competitiveness

    SciTech Connect (OSTI)

    Peters, Catherine; Fitts, Jeffrey; Wilson, Elizabeth; Pollak, Melisa; Bielicki, Jeffrey; Bhatt, Vatsal

    2013-03-13

    This three-year project, performed by Princeton University in partnership with the University of Minnesota and Brookhaven National Laboratory, examined geologic carbon sequestration in regard to CO{sub 2} leakage and potential subsurface liabilities. The research resulted in basin-scale analyses of CO{sub 2} and brine leakage in light of uncertainties in the characteristics of leakage processes, and generated frameworks to monetize the risks of leakage interference with competing subsurface resources. The geographic focus was the Michigan sedimentary basin, for which a 3D topographical model was constructed to represent the hydrostratigraphy. Specifically for Ottawa County, a statistical analysis of the hydraulic properties of underlying sedimentary formations was conducted. For plausible scenarios of injection into the Mt. Simon sandstone, leakage rates were estimated and fluxes into shallow drinking-water aquifers were found to be less than natural analogs of CO{sub 2} fluxes. We developed the Leakage Impact Valuation (LIV) model in which we identified stakeholders and estimated costs associated with leakage events. It was found that costs could be incurred even in the absence of legal action or other subsurface interference because there are substantial costs of finding and fixing the leak and from injection interruption. We developed a model framework called RISCS, which can be used to predict monetized risk of interference with subsurface resources by combining basin-scale leakage predictions with the LIV method. The project also developed a cost calculator called the Economic and Policy Drivers Module (EPDM), which comprehensively calculates the costs of carbon sequestration and leakage, and can be used to examine major drivers for subsurface leakage liabilities in relation to specific injection scenarios and leakage events. Finally, we examined the competitiveness of CCS in the energy market. This analysis, though qualitative, shows that financial incentives, such as a carbon tax, are needed for coal combustion with CCS to gain market share. In another part of the project we studied the role of geochemical reactions in affecting the probability of CO{sub 2} leakage. A basin-scale simulation tool was modified to account for changes in leakage rates due to permeability alterations, based on simplified mathematical rules for the important geochemical reactions between acidified brines and caprock minerals. In studies of reactive flows in fractured caprocks, we examined the potential for permeability increases, and the extent to which existing reactive transport models would or would not be able to predict it. Using caprock specimens from the Eau Claire and Amherstburg formations, we found that substantial increases in permeability are possible for caprocks that have significant carbonate content, but minimal alteration is expected otherwise. We also found that while the permeability increase may be substantial, it is much less than what would be predicted from hydrodynamic models based on mechanical aperture alone, because the roughness that is generated tends to inhibit flow.

  11. What Will the Neighbors Think? Building Large-Scale Science Projects Around the World

    ScienceCinema (OSTI)

    Jones, Craig; Mrotzek, Christian; Toge, Nobu; Sarno, Doug

    2010-01-08

    Public participation is an essential ingredient for turning the International Linear Collider into a reality. Wherever the proposed particle accelerator is sited in the world, its neighbors -- in any country -- will have something to say about hosting a 35-kilometer-long collider in their backyards. When it comes to building large-scale physics projects, almost every laboratory has a story to tell. Three case studies from Japan, Germany and the US will be presented to examine how community relations are handled in different parts of the world. How do particle physics laboratories interact with their local communities? How do neighbors react to building large-scale projects in each region? How can the lessons learned from past experiences help in building the next big project? These and other questions will be discussed to engage the audience in an active dialogue about how a large-scale project like the ILC can be a good neighbor.

  12. Copy of Using Emulation and Simulation to Understand the Large-Scale Behavior of the Internet.

    SciTech Connect (OSTI)

    Adalsteinsson, Helgi; Armstrong, Robert C.; Chiang, Ken; Gentile, Ann C.; Lloyd, Levi; Minnich, Ronald G.; Vanderveen, Keith; Van Randwyk, Jamie A; Rudish, Don W.

    2008-10-01

    We report on the work done in the late-start LDRD 'Using Emulation and Simulation to Understand the Large-Scale Behavior of the Internet'. We describe the creation of a research platform that emulates many thousands of machines to be used for the study of large-scale Internet behavior. We describe a proof-of-concept simple attack we performed in this environment. We describe the successful capture of a Storm bot and, from the study of the bot and further literature search, establish large-scale aspects we seek to understand via emulation of Storm on our research platform in possible follow-on work. Finally, we discuss possible future work.

  13. Large-Scale Computational Screening of Zeolites for Ethane/Ethene Separation

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    J. Kim, L.-C. Lin, R. L. Martin, J. A. Swisher, M. Haranczyk, and B. Smit, Langmuir 28 (32), 11914 (2012). DOI: 10.1021/la302230z. Abstract: Large-scale computational screening of thirty thousand zeolite structures was conducted to find optimal structures for separation of ethane/ethene mixtures.

  14. Large scale magnetic fields and coherent structures in nonuniform unmagnetized plasma

    SciTech Connect (OSTI)

    Jucker, Martin; Andrushchenko, Zhanna N.; Pavlenko, Vladimir P.

    2006-07-15

    The properties of streamers and zonal magnetic structures in magnetic electron drift mode turbulence are investigated. The stability of such large scale structures is examined in both the kinetic and the hydrodynamic regime, for which an instability criterion similar to the Lighthill criterion for modulational instability is found. Furthermore, these large scale flows can undergo further nonlinear evolution after initial linear growth, which can lead to the formation of long-lived coherent structures consisting of self-bound wave packets between the surfaces of two different flow velocities, with an expected modification of the anomalous electron transport properties.

  15. Energy Department Loan Guarantee Would Support Large-Scale Rooftop Solar Power for U.S. Military Housing

    Energy Savers [EERE]

    September 7, 2011. Washington, D.C. - U.S. Energy Secretary Steven Chu today announced the offer of a conditional commitment for a partial guarantee of a $344 million loan that will support the SolarStrong Project, which is expected

  16. Comparison of the effects in the rock mass of large-scale chemical and nuclear explosions. Final technical report, June 9, 1994--October 9, 1994

    Office of Scientific and Technical Information (OSTI)


  17. ARM - PI Product - Large Scale Ice Water Path and 3-D Ice Water Content

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    PI Product: Large Scale Ice Water Path and 3-D Ice Water Content. Cloud ice water concentration is one of the most important, yet poorly observed, cloud properties. Developing physical parameterizations used in general circulation models through single-column modeling is one of the key foci of the ARM

  18. Large-scale Screening of Zeolite Structures for CO2 Membrane Separations

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    J. Kim, M. Abouelnasr, L.-C. Lin, and B. Smit, J Am Chem Soc, 135, 7545-7552 (2013). DOI: 10.1021/ja400267g. Abstract: We have conducted large-scale screening of zeolite materials for CO2/CH4 and CO2/N2 membrane separation applications using the free energy landscape of the guest molecules inside these porous materials. We

  19. Robust and scalable scheme to generate large-scale entanglement webs

    Office of Scientific and Technical Information (OSTI)

    We propose a robust and scalable scheme to generate an N-qubit W state among separated quantum nodes (cavity-QED systems) by using linear optics and postselections. The present scheme inherits the robustness of the Barrett-Kok scheme [S. D. Barrett and P. Kok, Phys. Rev. A 71,

  20. Stimulated forward Raman scattering in large scale-length laser-produced plasmas

    Office of Scientific and Technical Information (OSTI)

    Authors: Niemann, C; Berger, R L; Divol, L; Kirkwood, R K; Moody, J D; Sorce, C M; Glenzer, S H. Publication Date: 2011-08-22. OSTI Identifier: 1113524. Report Number(s): LLNL-JRNL-496073. DOE Contract Number: W-7405-ENG-48. Resource Type:

  1. Self-consistency tests of large-scale dynamics parameterizations for single-column modeling

    SciTech Connect (OSTI)

    Edman, Jacob P.; Romps, David M.

    2015-03-18

    Large-scale dynamics parameterizations are tested numerically in cloud-resolving simulations, including a new version of the weak-pressure-gradient approximation (WPG) introduced by Edman and Romps (2014), the weak-temperature-gradient approximation (WTG), and a prior implementation of WPG. We perform a series of self-consistency tests with each large-scale dynamics parameterization, in which we compare the result of a cloud-resolving simulation coupled to WTG or WPG with an otherwise identical simulation with prescribed large-scale convergence. In self-consistency tests based on radiative-convective equilibrium (RCE; i.e., no large-scale convergence), we find that simulations either weakly coupled or strongly coupled to either WPG or WTG are self-consistent, but WPG-coupled simulations exhibit a nonmonotonic behavior as the strength of the coupling to WPG is varied. We also perform self-consistency tests based on observed forcings from two observational campaigns: the Tropical Warm Pool International Cloud Experiment (TWP-ICE) and the ARM Southern Great Plains (SGP) Summer 1995 IOP. In these tests, we show that the new version of WPG improves upon prior versions of WPG by eliminating a potentially troublesome gravity-wave resonance.
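
    For context, the weak-temperature-gradient approximation is commonly written in the relaxation form below. This is a textbook statement of the idea, not necessarily the exact formulation tested in the paper:

        \[
        w_{\rm WTG}(z)\,\frac{\partial \bar{\theta}}{\partial z}
          \;=\; \frac{\theta(z) - \theta_{\rm ref}(z)}{\tau}
        \]

    Here the parameterized large-scale vertical velocity w_WTG removes horizontal anomalies of the potential temperature θ relative to a reference profile θ_ref over a time scale τ, and the implied large-scale convergence then forces the cloud-resolving model; WPG instead couples the column to its environment through a momentum (pressure-gradient) equation.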

  2. Self-consistency tests of large-scale dynamics parameterizations for single-column modeling

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Edman, Jacob P.; Romps, David M.

    2015-03-18

    Large-scale dynamics parameterizations are tested numerically in cloud-resolving simulations, including a new version of the weak-pressure-gradient approximation (WPG) introduced by Edman and Romps (2014), the weak-temperature-gradient approximation (WTG), and a prior implementation of WPG. We perform a series of self-consistency tests with each large-scale dynamics parameterization, in which we compare the result of a cloud-resolving simulation coupled to WTG or WPG with an otherwise identical simulation with prescribed large-scale convergence. In self-consistency tests based on radiative-convective equilibrium (RCE; i.e., no large-scale convergence), we find that simulations either weakly coupled or strongly coupled to either WPG or WTG are self-consistent, but WPG-coupled simulations exhibit a nonmonotonic behavior as the strength of the coupling to WPG is varied. We also perform self-consistency tests based on observed forcings from two observational campaigns: the Tropical Warm Pool International Cloud Experiment (TWP-ICE) and the ARM Southern Great Plains (SGP) Summer 1995 IOP. In these tests, we show that the new version of WPG improves upon prior versions of WPG by eliminating a potentially troublesome gravity-wave resonance.

  3. Optimization of large-scale heterogeneous system-of-systems models.

    SciTech Connect (OSTI)

    Parekh, Ojas; Watson, Jean-Paul; Phillips, Cynthia Ann; Siirola, John; Swiler, Laura Painton; Hough, Patricia Diane (Sandia National Laboratories, Livermore, CA); Lee, Herbert K. H. (University of California, Santa Cruz, Santa Cruz, CA); Hart, William Eugene; Gray, Genetha Anne (Sandia National Laboratories, Livermore, CA); Woodruff, David L. (University of California, Davis, Davis, CA)

    2012-01-01

    Decision makers increasingly rely on large-scale computational models to simulate and analyze complex man-made systems. For example, computational models of national infrastructures are being used to inform government policy, assess economic and national security risks, evaluate infrastructure interdependencies, and plan for the growth and evolution of infrastructure capabilities. A major challenge for decision makers is the analysis of national-scale models that are composed of interacting systems: effective integration of system models is difficult, there are many parameters to analyze in these systems, and fundamental modeling uncertainties complicate analysis. This project is developing optimization methods to effectively represent and analyze large-scale heterogeneous system of systems (HSoS) models, which have emerged as a promising approach for describing such complex man-made systems. These optimization methods enable decision makers to predict future system behavior, manage system risk, assess tradeoffs between system criteria, and identify critical modeling uncertainties.

  4. Development of large scale production of Nd-doped phosphate glasses for megajoule-scale laser systems

    SciTech Connect (OSTI)

    Ficini, G.; Campbell, J.H.

    1996-05-01

    Nd-doped phosphate glasses are the preferred gain medium for high-peak-power lasers used for Inertial Confinement Fusion research because they have excellent energy storage and extraction characteristics. In addition, these glasses can be manufactured defect-free in large sizes and at relatively low cost. To meet the requirements of the future megajoule-size lasers, advanced laser glass manufacturing methods are being developed that would enable laser glass to be continuously produced at the rate of several thousand large (790 x 440 x 44 mm{sup 3}) plates of glass per year. This represents a 10- to 100-fold improvement in the scale of the present manufacturing technology.

  5. Approaching the exa-scale: a real-world evaluation of rendering extremely large data sets

    SciTech Connect (OSTI)

    Patchett, John M; Ahrens, James P; Lo, Li-Ta; Brownlee, Carson S; Mitchell, Christopher J; Hansen, Chuck

    2010-10-15

    Extremely large scale analysis is becoming increasingly important as supercomputers and their simulations move from petascale to exascale. The lack of dedicated hardware acceleration for rendering on today's supercomputing platforms motivates our detailed evaluation of the possibility of interactive rendering on the supercomputer. In order to facilitate our understanding of rendering on the supercomputing platform, we focus on scalability of rendering algorithms and architecture envisioned for exascale datasets. To understand tradeoffs for dealing with extremely large datasets, we compare three different rendering algorithms for large polygonal data: software-based ray tracing, software-based rasterization and hardware-accelerated rasterization. We present a case study of strong and weak scaling of rendering extremely large data on both GPU and CPU based parallel supercomputers using ParaView, a parallel visualization tool. We use three different data sets: two synthetic and one from a scientific application. At an extreme scale, algorithmic rendering choices make a difference and should be considered while approaching exascale computing, visualization, and analysis. We find software-based ray-tracing offers a viable approach for scalable rendering of the projected future massive data sizes.
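
    Strong and weak scaling, as studied here, reduce to simple ratios; the sketch below computes both efficiencies from hypothetical render times (the numbers are invented, not from the paper).

        def strong_scaling_eff(t1, tn, n):
            # Fixed total problem size: ideal time on n processors is t1 / n.
            return t1 / (n * tn)

        def weak_scaling_eff(t1, tn):
            # Problem size grows with n: ideal time stays equal to t1.
            return t1 / tn

        # Hypothetical frame-render times (seconds) for a fixed data set.
        strong_runs = [(1, 120.0), (8, 16.4), (64, 2.4), (512, 0.41)]
        t1 = strong_runs[0][1]
        for n, tn in strong_runs:
            print(f"{n:4d} procs: strong-scaling efficiency {strong_scaling_eff(t1, tn, n):.2f}")

    Efficiencies near 1.0 indicate near-ideal scaling; the drop-off with processor count is where algorithmic choices such as ray tracing vs. rasterization start to matter.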

  6. Final Report: Large-Scale Optimization for Bayesian Inference in Complex Systems

    SciTech Connect (OSTI)

    Ghattas, Omar

    2013-10-15

    The SAGUARO (Scalable Algorithms for Groundwater Uncertainty Analysis and Robust Optimization) Project focuses on the development of scalable numerical algorithms for large-scale Bayesian inversion in complex systems that capitalize on advances in large-scale simulation-based optimization and inversion methods. Our research is directed in three complementary areas: efficient approximations of the Hessian operator, reductions in complexity of forward simulations via stochastic spectral approximations and model reduction, and employing large-scale optimization concepts to accelerate sampling. Our efforts are integrated in the context of a challenging testbed problem that considers subsurface reacting flow and transport. The MIT component of the SAGUARO Project addresses the intractability of conventional sampling methods for large-scale statistical inverse problems by devising reduced-order models that are faithful to the full-order model over a wide range of parameter values; sampling then employs the reduced model rather than the full model, resulting in very large computational savings. Results indicate little effect on the computed posterior distribution. On the other hand, in the Texas-Georgia Tech component of the project, we retain the full-order model, but exploit inverse problem structure (adjoint-based gradients and partial Hessian information of the parameter-to-observation map) to implicitly extract lower dimensional information on the posterior distribution; this greatly speeds up sampling methods, so that fewer sampling points are needed. We can think of these two approaches as "reduce then sample" and "sample then reduce." In fact, these two approaches are complementary, and can be used in conjunction with each other. Moreover, they both exploit deterministic inverse problem structure, in the form of adjoint-based gradient and Hessian information of the underlying parameter-to-observation map, to achieve their speedups.
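
    The setting described here is the standard large-scale Bayesian inverse problem; quoting the generic form helps fix notation (this is background, not the project's specific formulation):

        \[
        \pi_{\rm post}(m \mid d) \;\propto\;
          \pi_{\rm pr}(m)\,
          \exp\!\left( -\tfrac{1}{2} \left\| F(m) - d \right\|^2_{\Sigma^{-1}} \right)
        \]

    Here F is the parameter-to-observation map and Σ the noise covariance. "Reduce then sample" replaces F by a cheap surrogate before sampling; "sample then reduce" keeps F but uses adjoint-based gradients and partial Hessian information of the log-posterior to make each sample count, so fewer forward evaluations are needed.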

  7. Molecular Dynamics Simulations from SNL's Large-scale Atomic/Molecular Massively Parallel Simulator (LAMMPS)

    DOE Data Explorer [Office of Scientific and Technical Information (OSTI)]

    Plimpton, Steve; Thompson, Aidan; Crozier, Paul

    LAMMPS (http://lammps.sandia.gov/index.html) stands for Large-scale Atomic/Molecular Massively Parallel Simulator and is a code that can be used to model atoms or, as the LAMMPS website says, as a parallel particle simulator at the atomic, meso, or continuum scale. This Sandia-based website provides a long list of animations from large simulations. These were created using different visualization packages to read LAMMPS output, and each one provides the name of the PI and a brief description of the work done or visualization package used. See also the static images produced from simulations at http://lammps.sandia.gov/pictures.html The foundation paper for LAMMPS is: S. Plimpton, Fast Parallel Algorithms for Short-Range Molecular Dynamics, J Comp Phys, 117, 1-19 (1995), but the website also lists other papers describing contributions to LAMMPS over the years.
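
    As a minimal, hedged illustration of driving LAMMPS (rather than one of the large simulations shown on the site), the canonical Lennard-Jones melt example can be run through the LAMMPS Python module, assuming a LAMMPS build with the Python interface enabled:

        from lammps import lammps  # requires the LAMMPS Python package

        lmp = lammps()
        lmp.commands_string("""
        units lj
        atom_style atomic
        lattice fcc 0.8442
        region box block 0 10 0 10 0 10
        create_box 1 box
        create_atoms 1 box
        mass 1 1.0
        velocity all create 3.0 87287
        pair_style lj/cut 2.5
        pair_coeff 1 1 1.0 1.0 2.5
        fix 1 all nve
        thermo 50
        run 200
        """)
        print("atoms simulated:", lmp.get_natoms())

    Output from runs like this (dump files, images, movies) is what the visualization packages listed on the site post-process into the animations.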

  8. Bacteria Modified to Secrete Biologically Active Protein for Large-Scale Production

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Inventors: Sydnor Withers III, Miguel Dominguez, Matthew DeLisa, Charles Haitjema. Great Lakes Bioenergy Research Center. Technology Marketing Summary: E. coli is the most common prokaryote used to produce protein. The expressed protein generally accumulates in the cytoplasm. While this approach is useful for some proteins, not all

  9. PARTICLE ACCELERATION BY COLLISIONLESS SHOCKS CONTAINING LARGE-SCALE MAGNETIC-FIELD VARIATIONS

    SciTech Connect (OSTI)

    Guo, F.; Jokipii, J. R.; Kota, J. E-mail: jokipii@lpl.arizona.ed

    2010-12-10

    Diffusive shock acceleration at collisionless shocks is thought to be the source of many of the energetic particles observed in space. Large-scale spatial variations of the magnetic field have been shown to be important in understanding observations. The effects are complex, so here we consider a simple, illustrative model. Here we solve numerically the Parker transport equation for a shock in the presence of large-scale sinusoidal magnetic-field variations. We demonstrate that the familiar planar-shock results can be significantly altered as a consequence of large-scale, meandering magnetic lines of force. Because the perpendicular diffusion coefficient {kappa}{sub perpendicular} is generally much smaller than the parallel diffusion coefficient {kappa}{sub ||}, the energetic charged particles are trapped and preferentially accelerated along the shock front in the regions where the connection points of magnetic field lines intersecting the shock surface converge, and thus create the 'hot spots' of the accelerated particles. For the regions where the connection points separate from each other, the acceleration to high energies will be suppressed. Further, the particles diffuse away from the 'hot spot' regions and modify the spectra of downstream particle distribution. These features are qualitatively similar to the recent Voyager observations in the Heliosheath. These results are potentially important for particle acceleration at shocks propagating in turbulent magnetized plasmas as well as those which contain large-scale nonplanar structures. Examples include anomalous cosmic rays accelerated by the solar wind termination shock, energetic particles observed in propagating heliospheric shocks, galactic cosmic rays accelerated by supernova blast waves, etc.
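
    The Parker transport equation solved in this work has the standard form (conventions for drift and source terms vary between authors; this is the textbook statement):

        \[
        \frac{\partial f}{\partial t}
          \;=\; \nabla\cdot\left(\boldsymbol{\kappa}\cdot\nabla f\right)
          \;-\; \mathbf{V}\cdot\nabla f
          \;+\; \frac{1}{3}\left(\nabla\cdot\mathbf{V}\right)\frac{\partial f}{\partial \ln p}
        \]

    where f(x, p, t) is the omnidirectional distribution function, V the background plasma flow, and κ the diffusion tensor whose parallel and perpendicular parts are the κ{sub ||} and κ{sub perpendicular} of the abstract; the large-scale sinusoidal field enters through the spatial structure of κ.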

  10. Panel 1, Towards Sustainable Energy Systems: The Role of Large-Scale Hydrogen Storage in Germany

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Hanno Butsch, Head of International Cooperation, NOW GmbH (National Organization Hydrogen and Fuel Cell Technology). Towards sustainable energy systems - the role of large-scale hydrogen storage in Germany. May 14th, 2014, Sacramento. Political background for the transition to renewable energies: climate protection (global responsibility for the next generation); energy security (more independence from fossil fuels); securing the economy (creating new markets and jobs through innovations). Three

  11. Large-Scale Production of Marine Microalgae for Fuel and Feeds

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Bioenergy Technologies Office (BETO) 2015 Project Peer Review. Large-Scale Production of Marine Microalgae for Fuel and Feeds, March 24, 2015, Algae Platform Review. Mark Huntley, Cornell Marine Algal Biofuels Consortium. This presentation does not contain any proprietary, confidential, or otherwise restricted information. Goal Statement: BETO MYPP Goals (3) - Demonstrate (1) performance against clear cost goals and technical targets (Q4 2013); (2) productivity of 1,500 gal/acre/yr algal oil (Q4 2014)

  12. Development of fine-resolution analyses and expanded large-scale forcing properties. Part I: Methodology and evaluation

    Office of Scientific and Technical Information (OSTI)

    We produce fine-resolution, three-dimensional fields of meteorological and other variables for the U.S. Department of Energy's Atmospheric Radiation Measurement (ARM) Southern Great Plains site. The Community Gridpoint

  13. DOE/NNSA Participates in Large-Scale CTBT On-Site Inspection Exercise in Jordan

    National Nuclear Security Administration (NNSA)


  14. A method of orbital analysis for large-scale first-principles simulations

    SciTech Connect (OSTI)

    Ohwaki, Tsukuru; Otani, Minoru; Ozaki, Taisuke

    2014-06-28

    An efficient method of calculating the natural bond orbitals (NBOs) based on a truncation of the entire density matrix of a whole system is presented for large-scale density functional theory calculations. The method recovers an orbital picture for O(N) electronic structure methods which directly evaluate the density matrix without using Kohn-Sham orbitals, thus enabling quantitative analysis of chemical reactions in large-scale systems in the language of localized Lewis-type chemical bonds. With the density matrix calculated by either an exact diagonalization or O(N) method, the computational cost is O(1) for the calculation of NBOs associated with a local region where a chemical reaction takes place. As an illustration of the method, we demonstrate how an electronic structure in a local region of interest can be analyzed by NBOs in a large-scale first-principles molecular dynamics simulation for a liquid electrolyte bulk model (propylene carbonate + LiBF{sub 4}).

  15. High Fidelity Simulations of Large-Scale Wireless Networks (Plus-Up)

    SciTech Connect (OSTI)

    Onunkwo, Uzoma

    2015-11-01

    Sandia has built a strong reputation in scalable network simulation and emulation for cyber security studies to protect our nation's critical information infrastructures. Georgia Tech has a preeminent reputation in academia for excellence in scalable discrete event simulations, with strong emphasis on simulating cyber networks. Many of the experts in this field, such as Dr. Richard Fujimoto, Dr. George Riley, and Dr. Chris Carothers, have strong affiliations with Georgia Tech. The collaborative relationship that we intend to immediately pursue is in high fidelity simulations of practical large-scale wireless networks using the ns-3 simulator via Dr. George Riley. This project will have mutual benefits in bolstering both institutions' expertise and reputation in the field of scalable simulation for cyber-security studies. This project promises to address high fidelity simulations of large-scale wireless networks. This proposed collaboration is directly in line with Georgia Tech's goals for developing and expanding the Communications Systems Center, the Georgia Tech Broadband Institute, and the Georgia Tech Information Security Center along with its yearly Emerging Cyber Threats Report. At Sandia, this work benefits the defense systems and assessment area with promise for large-scale assessment of cyber security needs and vulnerabilities of our nation's critical cyber infrastructures exposed to wireless communications.

  16. Development of fine-resolution analyses and expanded large-scale forcing properties. Part II: Scale-awareness and application to single-column model experiments

    SciTech Connect (OSTI)

    Feng, Sha; Vogelmann, Andrew M.; Li, Zhijin; Liu, Yangang; Lin, Wuyin; Zhang, Minghua; Toto, Tami; Endo, Satoshi

    2015-01-20

    Fine-resolution three-dimensional fields have been produced using the Community Gridpoint Statistical Interpolation (GSI) data assimilation system for the U.S. Department of Energy’s Atmospheric Radiation Measurement Program (ARM) Southern Great Plains region. The GSI system is implemented in a multi-scale data assimilation framework using the Weather Research and Forecasting model at a cloud-resolving resolution of 2 km. From the fine-resolution three-dimensional fields, large-scale forcing is derived explicitly at grid-scale resolution; a subgrid-scale dynamic component is derived separately, representing subgrid-scale horizontal dynamic processes. Analyses show that the subgrid-scale dynamic component is often a major contribution relative to the large-scale forcing for grid scales larger than 200 km. The single-column model (SCM) of the Community Atmospheric Model version 5 (CAM5) is used to examine the impact of the grid-scale and subgrid-scale dynamic components on simulated precipitation and cloud fields associated with a mesoscale convective system. It is found that grid-scale size impacts simulated precipitation, resulting in an overestimation for grid scales of about 200 km but an underestimation for smaller grids. The subgrid-scale dynamic component has an appreciable impact on the simulations, suggesting that grid-scale and subgrid-scale dynamic components should be considered in the interpretation of SCM simulations.
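
    One way to read the grid-scale/subgrid-scale split described here, in our own notation (the paper's precise definitions may differ): for a transported variable ψ, the total advective forcing at a chosen grid scale decomposes as

        \[
        -\,\overline{\mathbf{u}\cdot\nabla\psi}
          \;=\; -\,\overline{\mathbf{u}}\cdot\nabla\overline{\psi}
          \;-\; \overline{\mathbf{u}'\cdot\nabla\psi'}
        \]

    with overbars denoting averages over the grid box and primes the deviations resolved by the 2 km analysis; the first term is the grid-scale forcing and the second the subgrid-scale dynamic component.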

  17. Fingerprints of anomalous primordial Universe on the abundance of large scale structures

    SciTech Connect (OSTI)

    Baghram, Shant; Abolhasani, Ali Akbar; Firouzjahi, Hassan; Namjoo, Mohammad Hossein E-mail: abolhasani@ipm.ir E-mail: MohammadHossein.Namjoo@utdallas.edu

    2014-12-01

    We study the predictions of anomalous inflationary models on the abundance of structures in large scale structure observations. The anomalous features encoded in the primordial curvature perturbation power spectrum are (a): a localized feature in momentum space, (b): hemispherical asymmetry and (c): statistical anisotropies. We present a model-independent expression relating the number density of structures to the changes in the matter density variance. Models with a localized feature can alleviate the tension between observations and numerical simulations of cold dark matter structures on galactic scales as a possible solution to the missing satellite problem. In models with hemispherical asymmetry we show that the abundance of structures becomes asymmetric depending on the direction of observation on the sky. In addition, we study the effects of scale-dependent dipole amplitude on the abundance of structures. Using the quasar data and adopting the power-law scaling k{sup n{sub A}-1} for the amplitude of the dipole we find the upper bound n{sub A}<0.6 for the spectral index of the dipole asymmetry. In all cases there is a critical mass scale M{sub c} such that for M<M{sub c} (M>M{sub c}) the enhancement in variance induced by the anomalous feature decreases (increases) the abundance of dark matter structures in the Universe.

  18. A Metascalable Computing Framework for Large Spatiotemporal-Scale Atomistic Simulations

    SciTech Connect (OSTI)

    Nomura, K; Seymour, R; Wang, W; Kalia, R; Nakano, A; Vashishta, P; Shimojo, F; Yang, L H

    2009-02-17

    A metascalable (or 'design once, scale on new architectures') parallel computing framework has been developed for large spatiotemporal-scale atomistic simulations of materials based on spatiotemporal data locality principles, which is expected to scale on emerging multipetaflops architectures. The framework consists of: (1) an embedded divide-and-conquer (EDC) algorithmic framework based on spatial locality to design linear-scaling algorithms for high complexity problems; (2) a space-time-ensemble parallel (STEP) approach based on temporal locality to predict long-time dynamics, while introducing multiple parallelization axes; and (3) a tunable hierarchical cellular decomposition (HCD) parallelization framework to map these O(N) algorithms onto a multicore cluster based on hybrid implementation combining message passing and critical section-free multithreading. The EDC-STEP-HCD framework exposes maximal concurrency and data locality, thereby achieving: (1) inter-node parallel efficiency well over 0.95 for 218 billion-atom molecular-dynamics and 1.68 trillion electronic-degrees-of-freedom quantum-mechanical simulations on 212,992 IBM BlueGene/L processors (superscalability); (2) high intra-node, multithreading parallel efficiency (nanoscalability); and (3) nearly perfect time/ensemble parallel efficiency (eon-scalability). The spatiotemporal scale covered by MD simulation on a sustained petaflops computer per day (i.e. petaflops {center_dot} day of computing) is estimated as NT = 2.14 (e.g. N = 2.14 million atoms for T = 1 microseconds).

  19. Large-scale structure evolution in axisymmetric, compressible free-shear layers

    SciTech Connect (OSTI)

    Aeschliman, D.P.; Baty, R.S.

    1997-05-01

    This paper is a description of work-in-progress. It describes Sandia's program to study the basic fluid mechanics of large-scale mixing in unbounded, compressible, turbulent flows, specifically, the turbulent mixing of an axisymmetric compressible helium jet in a parallel, coflowing compressible air freestream. Both jet and freestream velocities are variable over a broad range, providing a wide range of mixing-layer Reynolds numbers. Although the convective Mach number, M{sub c}, range is currently limited by the present nozzle design to values of 0.6 and below, straightforward nozzle design changes would permit a wide range of convective Mach number, to well in excess of 1.0. The use of helium allows simulation of a hot jet due to the large density difference, and also aids in obtaining optical flow visualization via schlieren due to the large density gradient in the mixing layer. The work comprises a blend of analysis, experiment, and direct numerical simulation (DNS). Here the authors discuss only the analytical and experimental efforts to observe and describe the evolution of the large-scale structures. The DNS work, used to compute local two-point velocity correlation data, will be discussed elsewhere.

  20. Technical and economical aspects of large-scale CO{sub 2} storage in deep oceans

    SciTech Connect (OSTI)

    Sarv, H.; John, J.

    2000-07-01

    The authors examined the technical and economic feasibility of two options for large-scale transportation and ocean sequestration of captured CO{sub 2} at depths of 3000 meters or greater. In one case, CO{sub 2} was pumped from a land-based collection center through six parallel-laid subsea pipelines. Another case considered oceanic tanker transport of liquid carbon dioxide to an offshore floating platform or a barge for vertical injection through a large-diameter pipe to the ocean floor. Based on the preliminary technical and economic analyses, tanker transportation and offshore injection through a large-diameter, 3,000-meter vertical pipeline from a floating structure appears to be the best method for delivering liquid CO{sub 2} to deep ocean floor depressions for distances greater than 400 km. Other benefits of offshore injection are high payload capability and ease of relocation. For shorter distances (less than 400 km), CO{sub 2} delivery by subsea pipelines is more cost-effective. Estimated costs for 500-km transport and storage at a depth of 3000 meters by subsea pipelines or tankers were under 2 dollars per ton of stored CO{sub 2}. Their analyses also indicate that large-scale sequestration of captured CO{sub 2} in oceans is technologically feasible and has many commonalities with other strategies for deep-sea natural gas and oil exploration installations.

  1. Development of fine-resolution analyses and expanded large-scale forcing properties. Part II: Scale-awareness and application to single-column model experiments

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Feng, Sha; Vogelmann, Andrew M.; Li, Zhijin; Liu, Yangang; Lin, Wuyin; Zhang, Minghua; Toto, Tami; Endo, Satoshi

    2015-01-20

    Fine-resolution three-dimensional fields have been produced using the Community Gridpoint Statistical Interpolation (GSI) data assimilation system for the U.S. Department of Energy’s Atmospheric Radiation Measurement Program (ARM) Southern Great Plains region. The GSI system is implemented in a multi-scale data assimilation framework using the Weather Research and Forecasting model at a cloud-resolving resolution of 2 km. From the fine-resolution three-dimensional fields, large-scale forcing is derived explicitly at grid-scale resolution; a subgrid-scale dynamic component is derived separately, representing subgrid-scale horizontal dynamic processes. Analyses show that the subgrid-scale dynamic component is often a major contribution relative to the large-scale forcing for grid scales larger than 200 km. The single-column model (SCM) of the Community Atmospheric Model version 5 (CAM5) is used to examine the impact of the grid-scale and subgrid-scale dynamic components on simulated precipitation and cloud fields associated with a mesoscale convective system. It is found that grid-scale size impacts simulated precipitation, resulting in an overestimation for grid scales of about 200 km but an underestimation for smaller grids. The subgrid-scale dynamic component has an appreciable impact on the simulations, suggesting that grid-scale and subgrid-scale dynamic components should be considered in the interpretation of SCM simulations.

  2. Primordial non-Gaussianity in the bispectra of large-scale structure

    SciTech Connect (OSTI)

    Tasinato, Gianmassimo; Tellarini, Matteo; Ross, Ashley J.; Wands, David E-mail: matteo.tellarini@port.ac.uk E-mail: david.wands@port.ac.uk

    2014-03-01

    The statistics of large-scale structure in the Universe can be used to probe non-Gaussianity of the primordial density field, complementary to existing constraints from the cosmic microwave background. In particular, the scale dependence of halo bias, which affects the halo distribution at large scales, represents a promising tool for analyzing primordial non-Gaussianity of local form. Future observations, for example, may be able to constrain the trispectrum parameter g{sub NL} that is difficult to study and constrain using the CMB alone. We investigate how galaxy and matter bispectra can distinguish between the two non-Gaussian parameters f{sub NL} and g{sub NL}, whose effects give nearly degenerate contributions to the power spectra. We use a generalization of the univariate bias approach, making the hypothesis that the number density of halos forming at a given position is a function of the local matter density contrast and of its local higher-order statistics. Using this approach, we calculate the halo-matter bispectra and analyze their properties. We determine a connection between the sign of the halo bispectrum on large scales and the parameter g{sub NL}. We also construct a combination of halo and matter bispectra that is sensitive to f{sub NL}, with little contamination from g{sub NL}. We study both the case of single and multiple sources to the primordial gravitational potential, discussing how to extend the concept of stochastic halo bias to the case of bispectra. We use a specific halo mass-function to calculate numerically the bispectra in appropriate squeezed limits, confirming our theoretical findings.
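
    The local-form non-Gaussianity studied here is conventionally parameterized by expanding the primordial potential around a Gaussian field φ:

        \[
        \Phi(\mathbf{x}) \;=\; \phi(\mathbf{x})
          \;+\; f_{\rm NL}\left(\phi^2(\mathbf{x}) - \langle\phi^2\rangle\right)
          \;+\; g_{\rm NL}\,\phi^3(\mathbf{x})
        \]

    The f{sub NL} term sources the leading-order bispectrum and g{sub NL} the trispectrum, which is why their contributions to the power spectra are nearly degenerate while bispectra, as argued above, can separate them.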

  3. LARGE-SCALE HYDROGEN PRODUCTION FROM NUCLEAR ENERGY USING HIGH TEMPERATURE ELECTROLYSIS

    SciTech Connect (OSTI)

    James E. O'Brien

    2010-08-01

    Hydrogen can be produced from water splitting with relatively high efficiency using high-temperature electrolysis. This technology makes use of solid-oxide cells, running in the electrolysis mode to produce hydrogen from steam, while consuming electricity and high-temperature process heat. When coupled to an advanced high temperature nuclear reactor, the overall thermal-to-hydrogen efficiency for high-temperature electrolysis can be as high as 50%, which is about double the overall efficiency of conventional low-temperature electrolysis. Current large-scale hydrogen production is based almost exclusively on steam reforming of methane, a method that consumes a precious fossil fuel while emitting carbon dioxide to the atmosphere. Demand for hydrogen is increasing rapidly for refining of increasingly low-grade petroleum resources, such as the Athabasca oil sands and for ammonia-based fertilizer production. Large quantities of hydrogen are also required for carbon-efficient conversion of biomass to liquid fuels. With supplemental nuclear hydrogen, almost all of the carbon in the biomass can be converted to liquid fuels in a nearly carbon-neutral fashion. Ultimately, hydrogen may be employed as a direct transportation fuel in a hydrogen economy. The large quantity of hydrogen that would be required for this concept should be produced without consuming fossil fuels or emitting greenhouse gases. An overview of the high-temperature electrolysis technology will be presented, including basic theory, modeling, and experimental activities. Modeling activities include both computational fluid dynamics and large-scale systems analysis. We have also demonstrated high-temperature electrolysis in our laboratory at the 15 kW scale, achieving a hydrogen production rate in excess of 5500 L/hr.
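
    As a rough plausibility check on the quoted laboratory figure (assumptions ours: ideal-gas molar volume at 0 degrees C and 1 atm, and standard heating values for hydrogen; the report's own energy accounting may differ):

        # Chemical power carried by 5500 L/hr of hydrogen.
        litres_per_hr = 5500.0
        molar_volume_l = 22.414   # L/mol at 0 degC, 1 atm (assumed reference state)
        lhv_j_mol = 241.8e3       # lower heating value of H2, J/mol
        hhv_j_mol = 285.8e3       # higher heating value of H2, J/mol

        mol_per_s = litres_per_hr / molar_volume_l / 3600.0
        print(f"H2 rate: {mol_per_s:.4f} mol/s")
        print(f"chemical power (LHV): {mol_per_s * lhv_j_mol / 1e3:.1f} kW")
        print(f"chemical power (HHV): {mol_per_s * hhv_j_mol / 1e3:.1f} kW")

    The result, roughly 16-20 kW of chemical energy, is consistent in magnitude with the 15 kW electrolysis scale quoted above.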

  4. On the possible origin of the large scale cosmic magnetic field

    SciTech Connect (OSTI)

    Coroniti, F. V.

    2014-01-10

    The possibility that the large scale cosmic magnetic field is directly generated at microgauss, equipartition levels during the reionization epoch by collisionless shocks that are forced to satisfy a downstream shear flow boundary condition is investigated through the development of two models: the accretion of an ionized plasma onto a weakly ionized cool galactic disk, and onto a cool filament of the cosmic web. The dynamical structure and the physical parameters of the models are synthesized from recent cosmological simulations of the early reionization era after the formation of the first stars. The collisionless shock stands upstream of the disk and filament, and its dissipation is determined by ion inertial length Weibel turbulence. The downstream shear boundary condition is determined by the rotational neutral gas flow in the disk and the inward accretion flow along the filament. The shocked plasma is accelerated to the downstream shear flow velocity by the Weibel turbulence, and the relative shearing motion between the electrons and ions produces a strong, ion inertial scale current sheet that generates an equipartition strength, large scale downstream magnetic field, {approx}10{sup -6} G for the disk and {approx}6 x 10{sup -8} G for the filament. By assumption, hydrodynamic turbulence transports the shear-shock generated magnetic flux throughout the disk and filament volume.

  5. Large scale synthesis of nanostructured zirconia-based compounds from freeze-dried precursors

    SciTech Connect (OSTI)

    Gomez, A.; Villanueva, R.; Vie, D.; Murcia-Mascaros, S.; Martinez, E.; Beltran, A.; Sapina, F.; Vicent, M.; Sanchez, E.

    2013-01-15

    Nanocrystalline zirconia powders have been obtained at the multigram scale by thermal decomposition of precursors resulting from the freeze-drying of aqueous acetic solutions. This technique has equally made it possible to synthesize a variety of nanostructured yttria- or scandia-doped zirconia compositions. SEM images, as well as the analysis of the XRD patterns, show the nanoparticulated character of those solids obtained at low temperature, with typical particle size in the 10-15 nm range when prepared at 673 K. The presence of the monoclinic, the tetragonal or both phases depends on the temperature of the thermal treatment, the doping concentration and the nature of the dopant. In addition, Rietveld refinement of the XRD profiles of selected samples allows detecting the coexistence of the tetragonal and the cubic phases for high doping concentration and high thermal treatment temperatures. Raman experiments suggest the presence of both phases also at relatively low treatment temperatures. Graphical abstract: Zr{sub 1-x}A{sub x}O{sub 2-x/2} (A=Y, Sc; 0{<=}x{<=}0.12) solid solutions have been prepared as nanostructured powders by thermal decomposition of precursors obtained by freeze-drying, and this synthetic procedure has been scaled up to the 100 g scale. Highlights: (1) Zr{sub 1-x}A{sub x}O{sub 2-x/2} (A=Y, Sc; 0{<=}x{<=}0.12) solid solutions have been prepared as nanostructured powders. (2) The synthetic method involves the thermal decomposition of precursors obtained by freeze-drying. (3) The temperature of the thermal treatment controls particle sizes. (4) The preparation procedure has been scaled up to the 100 g scale. (5) This method is appropriate for the large-scale industrial preparation of multimetallic systems.

  6. Nonlinear Seismic Correlation Analysis of the JNES/NUPEC Large-Scale Piping System Tests.

    SciTech Connect (OSTI)

    Nie, J.; DeGrassi, G.; Hofmayer, C.; Ali, S.

    2008-06-01

    The Japan Nuclear Energy Safety Organization/Nuclear Power Engineering Corporation (JNES/NUPEC) large-scale piping test program has provided valuable new test data on high level seismic elasto-plastic behavior and failure modes for typical nuclear power plant piping systems. The component and piping system tests demonstrated the strain ratcheting behavior that is expected to occur when a pressurized pipe is subjected to cyclic seismic loading. Under a collaboration agreement between the US and Japan on seismic issues, the US Nuclear Regulatory Commission (NRC)/Brookhaven National Laboratory (BNL) performed a correlation analysis of the large-scale piping system tests using detailed state-of-the-art nonlinear finite element models. Techniques are introduced to develop material models that can closely match the test data. The shaking table motions are examined. The analytical results are assessed in terms of the overall system responses and the strain ratcheting behavior at an elbow. The paper concludes with insights about the accuracy of the analytical methods for use in performance assessments of highly nonlinear piping systems under large seismic motions.

  7. Geomechanical effects on CO{sub 2} leakage through fault zones during large-scale underground injection

    SciTech Connect (OSTI)

    Rinaldi, A.P.; Rutqvist, J.; Cappa, F.

    2013-09-01

    The importance of geomechanics, including the potential for faults to reactivate during large-scale geologic carbon sequestration operations, has recently become more widely recognized. However, notwithstanding the potential for triggering notable (felt) seismic events, the potential for buoyancy-driven CO{sub 2} to reach potable groundwater and the ground surface is actually more important from public safety and storage-efficiency perspectives. In this context, this work extends previous studies on the geomechanical modeling of fault responses during underground carbon dioxide injection, focusing on the short-term integrity of the sealing caprock, and hence on the potential for leakage of either brine or CO{sub 2} to reach the shallow groundwater aquifers during active injection. We consider stress/strain-dependent permeability and study the leakage through the fault zone as its permeability changes during a reactivation, also causing seismicity. We analyze several scenarios related to the volume of CO{sub 2} injected (and hence a function of the overpressure), involving both minor and major faults, and analyze the risk profiles of leakage for different stress/strain-permeability coupling functions. We conclude that whereas it is very difficult to predict how much fault permeability could change upon reactivation, this process can have a significant impact on the leakage rate. Moreover, our analysis shows that induced seismicity associated with fault reactivation may not necessarily open up a new flow path for leakage. Results show a poor correlation between magnitude and amount of fluid leakage, meaning that a single event is generally not enough to substantially change the permeability along the entire fault length. Consequently, even if some changes in permeability occur, this does not mean that the CO{sub 2} will migrate up along the entire fault, breaking through the caprock to enter the overlying aquifer.
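
    The stress/strain-permeability coupling functions mentioned above are often built on the parallel-plate (cubic-law) idealization of fracture flow, quoted here as background rather than as the paper's exact coupling:

        \[
        k_f \;=\; \frac{b_h^2}{12}, \qquad Q \;\propto\; b_h^3
        \]

    where b_h is the hydraulic aperture of the fault or fracture; because transmissivity scales with the cube of aperture, even a modest aperture change upon reactivation can change leakage rates substantially, consistent with the sensitivity reported here.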

  8. Large-Scale Field Study of Landfill Covers at Sandia National Laboratories

    SciTech Connect (OSTI)

    Dwyer, S.F.

    1998-09-01

    A large-scale field demonstration comparing final landfill cover designs has been constructed and is currently being monitored at Sandia National Laboratories in Albuquerque, New Mexico. Two conventional designs (a RCRA Subtitle `D' Soil Cover and a RCRA Subtitle `C' Compacted Clay Cover) were constructed side-by-side with four alternative cover test plots designed for dry environments. The demonstration is intended to evaluate the various cover designs based on their respective water balance performance, ease and reliability of construction, and cost. This paper presents an overview of the ongoing demonstration.

  9. Networks of silicon nanowires: A large-scale atomistic electronic structure analysis

    SciTech Connect (OSTI)

    Keleş, Ümit; Bulutay, Ceyhun; Liedke, Bartosz; Heinig, Karl-Heinz

    2013-11-11

    Networks of silicon nanowires possess intriguing electronic properties surpassing the predictions based on quantum confinement of individual nanowires. Employing large-scale atomistic pseudopotential computations, as yet unexplored branched nanostructures are investigated in the subsystem level as well as in full assembly. The end product is a simple but versatile expression for the bandgap and band edge alignments of multiply-crossing Si nanowires for various diameters, number of crossings, and wire orientations. Further progress along this line can potentially topple the bottom-up approach for Si nanowire networks to a top-down design by starting with functionality and leading to an enabling structure.

  10. Harvey Wasserman: Large Scale Computing and Storage Requirements for High Energy Physics

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Large Scale Computing and Storage Requirements for High Energy Physics Research: Target 2017. Meeting Goals & Process, December 3, 2012. Logistics and schedule: agenda on the workshop web page (http://www.nersc.gov/science/requirements/HEP); mid-morning/afternoon breaks and lunch; self-organization for dinner. Multiple science areas, one workshop: science-focused but crosscutting discussion; explore areas of common need (within HEP).

  11. Testing the big bang: Light elements, neutrinos, dark matter and large-scale structure

    SciTech Connect (OSTI)

    Schramm, D.N. (Fermi National Accelerator Lab., Batavia, IL)

    1991-06-01

    In this series of lectures, several experimental and observational tests of the standard cosmological model are examined. In particular, detailed discussion is presented regarding nucleosynthesis, the light element abundances and neutrino counting; the dark matter problems; and the formation of galaxies and large-scale structure. Comments will also be made on the possible implications of the recent solar neutrino experimental results for cosmology. An appendix briefly discusses the '17 keV thing' and the cosmological and astrophysical constraints on it. 126 refs., 8 figs., 2 tabs.

  12. NREL Offers an Open-Source Solution for Large-Scale Energy Data Collection and Analysis

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    June 18, 2013. The Energy Department's National Renewable Energy Laboratory (NREL) is launching an open-source system for storing, integrating, and aligning energy-related time-series data. NREL's Energy DataBus is used for tracking and analyzing energy use on its own campus. The system is applicable to other facilities, including anything from a single building to a

  13. Materials Science and Materials Chemistry for Large Scale Electrochemical Energy Storage: From Transportation to Electrical Grid

    SciTech Connect (OSTI)

    Liu, Jun; Zhang, Jiguang; Yang, Zhenguo; Lemmon, John P.; Imhoff, Carl H.; Graff, Gordon L.; Li, Liyu; Hu, Jian Z.; Wang, Chong M.; Xiao, Jie; Xia, Guanguang; Viswanathan, Vilayanur V.; Baskaran, Suresh; Sprenkle, Vincent L.; Li, Xiaolin; Shao, Yuyan; Schwenzer, Birgit

    2013-02-15

    Large-scale electrical energy storage has become more important than ever for reducing fossil energy consumption in transportation and for the widespread deployment of intermittent renewable energy in the electric grid. However, significant challenges remain for its application. Here, the status and challenges are reviewed from the perspective of materials science and materials chemistry in electrochemical energy storage technologies, such as Li-ion batteries, sodium (sulfur and metal halide) batteries, Pb-acid batteries, redox flow batteries, and supercapacitors. Perspectives and approaches are introduced for emerging battery designs and new chemistry combinations to reduce the cost of energy storage devices.

  14. Detecting and mitigating abnormal events in large scale networks: budget constrained placement on smart grids

    SciTech Connect (OSTI)

    Santhi, Nandakishore; Pan, Feng

    2010-10-19

    Several scenarios exist in the modern interconnected world which call for an efficient network interdiction algorithm. Applications are varied, including various monitoring and load shedding applications on large smart energy grids, computer network security, preventing the spread of Internet worms and malware, policing international smuggling networks, and controlling the spread of diseases. In this paper we consider some natural network optimization questions related to the budget constrained interdiction problem over general graphs, specifically focusing on the sensor/switch placement problem for large-scale energy grids. Many of these questions turn out to be computationally hard to tackle. We present a particular form of the interdiction question which is practically relevant and which we show as computationally tractable. A polynomial-time algorithm will be presented for solving this problem.
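
    The abstract does not reproduce the paper's polynomial-time algorithm, but a generic illustration of the budget-constrained placement idea is a greedy cost-benefit heuristic over a graph: repeatedly place a sensor where the covered-edges-per-unit-cost gain is largest until the budget is exhausted. Everything below (networkx availability, the coverage objective, node costs) is an assumption for the example, not the paper's method.

        import networkx as nx  # assumed available

        def greedy_placement(g, cost, budget):
            # Greedy heuristic: a sensor at node v 'covers' the edges incident
            # to v; pick the node with the best uncovered-edges-per-cost ratio
            # until no affordable node improves coverage.
            placed, covered, spent = set(), set(), 0.0
            while True:
                best, best_gain = None, 0.0
                for v in g.nodes:
                    if v in placed or spent + cost[v] > budget:
                        continue
                    new_edges = {frozenset(e) for e in g.edges(v)} - covered
                    gain = len(new_edges) / cost[v]
                    if gain > best_gain:
                        best, best_gain = v, gain
                if best is None:
                    return placed
                placed.add(best)
                covered |= {frozenset(e) for e in g.edges(best)}
                spent += cost[best]

        g = nx.karate_club_graph()  # stand-in for a grid topology
        cost = {v: 1.0 + g.degree(v) / 10.0 for v in g.nodes}
        print(sorted(greedy_placement(g, cost, budget=6.0)))

    Exact formulations of such placement problems are usually integer programs; the paper's contribution is identifying a practically relevant variant that remains polynomial-time solvable.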

  15. Large Scale Ice Water Path and 3-D Ice Water Content

    DOE Data Explorer [Office of Scientific and Technical Information (OSTI)]

    Liu, Guosheng

    2008-01-15

    Cloud ice water concentration is one of the most important, yet poorly observed, cloud properties. Developing physical parameterizations used in general circulation models through single-column modeling is one of the key foci of the ARM program. In addition to the vertical profiles of temperature, water vapor and condensed water at the model grids, large-scale horizontal advective tendencies of these variables are also required as forcing terms in the single-column models. Observed horizontal advection of condensed water has not been available because the radar/lidar/radiometer observations at the ARM site are single-point measurements and therefore do not provide the horizontal distribution of condensed water. The intention of this product is to provide the large-scale distribution of cloud ice water by merging available surface and satellite measurements. The satellite cloud ice water algorithm uses ARM ground-based measurements as a baseline and produces datasets for 3-D cloud ice water distributions in a 10 deg x 10 deg area near the ARM site. The approach of the study is to expand a (surface) point measurement to a (satellite) areal measurement. That is, this study takes advantage of the high-quality cloud measurements at the point of the ARM site. We use the cloud characteristics derived from the point measurement to guide/constrain the satellite retrieval, then use the satellite algorithm to derive the cloud ice water distributions within an area, i.e., 10 deg x 10 deg centered at the ARM site.

  16. A High-Performance Rechargeable Iron Electrode for Large-Scale Battery-Based Energy Storage

    SciTech Connect (OSTI)

    Manohar, AK; Malkhandi, S; Yang, B; Yang, C; Prakash, GKS; Narayanan, SR

    2012-01-01

    Inexpensive, robust and efficient large-scale electrical energy storage systems are vital to the utilization of electricity generated from solar and wind resources. In this regard, the low cost, robustness, and eco-friendliness of aqueous iron-based rechargeable batteries are particularly attractive and compelling. However, wasteful evolution of hydrogen during charging and the inability to discharge at high rates have limited the deployment of iron-based aqueous batteries. We report here new chemical formulations of the rechargeable iron battery electrode to achieve a ten-fold reduction in the hydrogen evolution rate, an unprecedented charging efficiency of 96%, a high specific capacity of 0.3 Ah/g, and a twenty-fold increase in discharge rate capability. We show that modifying high-purity carbonyl iron by in situ electro-deposition of bismuth leads to substantial inhibition of the kinetics of the hydrogen evolution reaction. The in situ formation of conductive iron sulfides mitigates the passivation by iron hydroxide thereby allowing high discharge rates and high specific capacity to be simultaneously achieved. These major performance improvements are crucial to advancing the prospect of a sustainable large-scale energy storage solution based on aqueous iron-based rechargeable batteries. (C) 2012 The Electrochemical Society. [DOI: 10.1149/2.034208jes] All rights reserved.

  17. Large Scale Ice Water Path and 3-D Ice Water Content

    DOE Data Explorer [Office of Scientific and Technical Information (OSTI)]

    Liu, Guosheng

    Cloud ice water concentration is one of the most important, yet poorly observed, cloud properties. Developing physical parameterizations used in general circulation models through single-column modeling is one of the key foci of the ARM program. In addition to the vertical profiles of temperature, water vapor, and condensed water at the model grids, large-scale horizontal advective tendencies of these variables are also required as forcing terms in the single-column models. Observed horizontal advection of condensed water has not been available because the radar/lidar/radiometer observations at the ARM site are single-point measurements and therefore do not provide the horizontal distribution of condensed water. The intention of this product is to provide the large-scale distribution of cloud ice water by merging available surface and satellite measurements. The satellite cloud ice water algorithm uses ARM ground-based measurements as a baseline and produces datasets for 3-D cloud ice water distributions in a 10 deg x 10 deg area near the ARM site. The approach of the study is to expand a (surface) point measurement to a (satellite) areal measurement. That is, this study takes advantage of the high-quality cloud measurements at the ARM site. We use the cloud characteristics derived from the point measurement to guide/constrain the satellite retrieval, then use the satellite algorithm to derive the cloud ice water distributions within an area, i.e., 10 deg x 10 deg centered on the ARM site.

  18. Environmental performance evaluation of large-scale municipal solid waste incinerators using data envelopment analysis

    SciTech Connect (OSTI)

    Chen, H.-W.; Chang, N.-B.; Chen, J.-C.; Tsai, S.-J.

    2010-07-15

    Because land resources are insufficient, incinerators are considered in many countries, such as Japan and Germany, the major technology for a waste management scheme capable of dealing with the increasing demand for municipal and industrial solid waste treatment in urban regions. The evaluation of these municipal incinerators in terms of secondary pollution potential, cost-effectiveness, and operational efficiency has become a new focus in the highly interdisciplinary area of production economics, systems analysis, and waste management. This paper aims to demonstrate the application of data envelopment analysis (DEA), a production economics tool, to evaluate performance-based efficiencies of 19 large-scale municipal incinerators in Taiwan with different operational conditions. A 4-year operational data set from 2002 to 2005 was collected in support of DEA modeling, using Monte Carlo simulation to outline the probability distributions of operational efficiency of these incinerators. Uncertainty analysis using the Monte Carlo simulation provides a balance between simplification of the analysis and the soundness of capturing the essential random features that complicate solid waste management systems. To cope with future challenges, efforts in DEA modeling, systems analysis, and prediction of the performance of large-scale municipal solid waste incinerators under normal operation and special conditions were directed toward generating a compromise assessment procedure. Our research findings will eventually lead to the identification of optimal management strategies for promoting the quality of solid waste incineration, not only in Taiwan but also elsewhere in the world.
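
    For readers unfamiliar with DEA, a minimal input-oriented CCR model can be written as one linear program per decision-making unit (DMU). The sketch below uses made-up input/output data for three hypothetical incinerators; it is not the paper's model specification.

```python
# Schematic input-oriented CCR DEA model solved as one LP per decision-
# making unit (DMU); the three incinerators' inputs/outputs are made up
# for illustration and are not the paper's data.
import numpy as np
from scipy.optimize import linprog

def dea_ccr(X, Y):
    """X: (n, n_inputs), Y: (n, n_outputs); returns efficiency per DMU."""
    n = X.shape[0]
    scores = []
    for o in range(n):
        c = np.r_[1.0, np.zeros(n)]                     # minimize theta
        A_in = np.c_[-X[o][:, None], X.T]               # sum lam*x <= theta*x_o
        A_out = np.c_[np.zeros((Y.shape[1], 1)), -Y.T]  # sum lam*y >= y_o
        res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                      b_ub=np.r_[np.zeros(X.shape[1]), -Y[o]],
                      bounds=[(None, None)] + [(0, None)] * n)
        scores.append(res.x[0])
    return np.array(scores)

X = np.array([[100.0], [120.0], [90.0]])   # e.g., cost per tonne treated
Y = np.array([[800.0], [700.0], [850.0]])  # e.g., energy recovered
print(dea_ccr(X, Y).round(3))              # 1.0 marks the efficient unit
```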

  19. Lotung large-scale seismic test strong motion records. Volume 1, General description: Final report

    SciTech Connect (OSTI)

    Not Available

    1992-03-01

    The Electric Power Research Institute (EPRI), in cooperation with the Taiwan Power Company (TPC), constructed two models (1/4 scale and 1/12 scale) of a nuclear plant concrete containment structure at a seismically active site in Lotung, Taiwan. Extensive instrumentation was deployed to record both structural and ground responses during earthquakes. The experiment, generally referred to as the Lotung Large-Scale Seismic Test (LSST), was used to gather data for soil-structure interaction (SSI) analysis method evaluation and validation as well as for site ground response investigation. A number of earthquakes having local magnitudes ranging from 4.5 to 7.0 have been recorded at the LSST site since the completion of the test facility in September 1985. This report documents the earthquake data, both raw and processed, collected from the LSST experiment. Volume 1 of the report provides general information on site location, instrument types and layout, data acquisition and processing, and data file organization. The recorded data are described chronologically in subsequent volumes of the report.

  20. Primordial Magnetic Field Effects on the CMB and Large-Scale Structure

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Yamazaki, Dai G.; Ichiki, Kiyotomo; Kajino, Toshitaka; Mathews, Grant J.

    2010-01-01

    Magnetic fields are everywhere in nature, and they play an important role in every astronomical environment which involves the formation of plasma and currents. It is natural therefore to suppose that magnetic fields could be present in the turbulent high-temperature environment of the big bang. Such a primordial magnetic field (PMF) would be expected to manifest itself in the cosmic microwave background (CMB) temperature and polarization anisotropies, and also in the formation of large-scale structure. In this paper, we summarize the theoretical framework which we have developed to calculate the PMF power spectrum to high precision. Using this formulation, we summarize calculations of the effects of a PMF which take accurate quantitative account of the time evolution of the cutoff scale. We review the constructed numerical program, which is without approximation, and an improvement over the approach used in a number of previous works for studying the effect of the PMF on the cosmological perturbations. We demonstrate how the PMF is an important cosmological physical process on small scales. We also summarize the current constraints on the PMF amplitude B{sub λ} and the power spectral index n{sub B} which have been deduced from the available CMB observational data by using our computational framework.

  1. Combined Climate and Carbon-Cycle Effects of Large-Scale Deforestation

    SciTech Connect (OSTI)

    Bala, G; Caldeira, K; Wickett, M; Phillips, T J; Lobell, D B; Delire, C; Mirin, A

    2006-10-17

    The prevention of deforestation and promotion of afforestation have often been cited as strategies to slow global warming. Deforestation releases CO{sub 2} to the atmosphere, which exerts a warming influence on Earth's climate. However, biophysical effects of deforestation, which include changes in land surface albedo, evapotranspiration, and cloud cover, also affect climate. Here we present results from several large-scale deforestation experiments performed with a three-dimensional coupled global carbon-cycle and climate model. These are the first such simulations performed using a fully three-dimensional model representing physical and biogeochemical interactions among land, atmosphere, and ocean. We find that global-scale deforestation has a net cooling influence on Earth's climate, since the warming carbon-cycle effects of deforestation are overwhelmed by the net cooling associated with changes in albedo and evapotranspiration. Latitude-specific deforestation experiments indicate that afforestation projects in the tropics would be clearly beneficial in mitigating global-scale warming, but would be counterproductive if implemented at high latitudes and would offer only marginal benefits in temperate regions. While these results question the efficacy of mid- and high-latitude afforestation projects for climate mitigation, forests remain environmentally valuable resources for many reasons unrelated to climate.

  2. Calculation of large scale relative permeabilities from stochastic properties of the permeability field and fluid properties

    SciTech Connect (OSTI)

    Lenormand, R.; Thiele, M.R.

    1997-08-01

    The paper describes the method and presents preliminary results for the calculation of homogenized relative permeabilities using stochastic properties of the permeability field. In heterogeneous media, the spreading of an injected fluid is mainly due to permeability heterogeneity and viscous fingering. At large scale, when the heterogeneous medium is replaced by a homogeneous one, a homogenized (or pseudo) relative permeability must be introduced to obtain the same spreading. Generally, the pseudo relative permeability is derived by using fine-grid numerical simulations (Kyte and Berry). However, this operation is time consuming and cannot be performed for all the meshes of the reservoir. We propose an alternate method which uses the information given by the stochastic properties of the field without any numerical simulation. The method is based on recent developments on homogenized transport equations (the "MHD" equation, Lenormand SPE 30797). The MHD equation accounts for the three basic mechanisms of spreading of the injected fluid: (1) dispersive spreading due to small-scale randomness, characterized by a macrodispersion coefficient D; (2) convective spreading due to large-scale heterogeneities (layers), characterized by a heterogeneity factor H; and (3) viscous fingering, characterized by an apparent viscosity ratio M. In the paper, we first derive the parameters D and H as functions of the variance and correlation length of the permeability field. The results are shown to be in good agreement with fine-grid simulations. The pseudo relative permeabilities are then derived as a function of D, H and M. The main result is that this approach leads to time-dependent pseudo relative permeabilities. Finally, the calculated values are compared to those derived by history matching using fine-grid numerical simulations.
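
    The abstract does not give Lenormand's expressions for D and H. For orientation only, classical stochastic-hydrology results suggest a leading-order scaling in which macrodispersion grows with the log-permeability variance and correlation length; the form below is that standard heuristic, assumed here rather than taken from the paper.

```latex
% Leading-order scaling only (standard stochastic-hydrology heuristic,
% not the paper's derivation): \bar{u} is the mean velocity,
% \sigma^2_{\ln k} the log-permeability variance, \lambda_k its
% correlation length.
D \;\approx\; \alpha_L\, \bar{u}, \qquad
\alpha_L \;\sim\; \sigma_{\ln k}^{2}\, \lambda_k
```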

  3. LyMAS: Predicting large-scale Lyα forest statistics from the dark matter density field

    SciTech Connect (OSTI)

    Peirani, Sébastien; Colombi, Stéphane; Dubois, Yohan; Pichon, Christophe; Weinberg, David H.; Blaizot, Jérémy

    2014-03-20

    We describe the Lyα Mass Association Scheme (LyMAS), a method of predicting clustering statistics in the Lyα forest on large scales from moderate-resolution simulations of the dark matter (DM) distribution, with calibration from high-resolution hydrodynamic simulations of smaller volumes. We use the 'Horizon-MareNostrum' simulation, a 50 h{sup -1} Mpc comoving volume evolved with the adaptive mesh hydrodynamic code RAMSES, to compute the conditional probability distribution P(F{sub s}|δ{sub s}) of the transmitted flux F{sub s}, smoothed (one-dimensionally, 1D) over the spectral resolution scale, given the DM density contrast δ{sub s}, smoothed (three-dimensionally, 3D) over a similar scale. In this study we adopt the spectral resolution of the SDSS-III Baryon Oscillation Spectroscopic Survey (BOSS) at z = 2.5, and we find optimal results for a DM smoothing length of 0.3 h{sup -1} Mpc (comoving). In its simplest form, LyMAS draws randomly from the hydro-calibrated P(F{sub s}|δ{sub s}) to convert DM skewers into Lyα forest pseudo-spectra, which are then used to compute cross-sightline flux statistics. In extended form, LyMAS exactly reproduces both the 1D power spectrum and one-point flux distribution of the hydro simulation spectra. Applied to the MareNostrum DM field, LyMAS accurately predicts the two-point conditional flux distribution and flux correlation function of the full hydro simulation for transverse sightline separations as small as 1 h{sup -1} Mpc, including redshift-space distortion effects. It is substantially more accurate than a deterministic density-flux mapping (the "Fluctuating Gunn-Peterson Approximation"), often used for large-volume simulations of the forest. With the MareNostrum calibration, we apply LyMAS to 1024{sup 3} N-body simulations of a 300 h{sup -1} Mpc and a 1.0 h{sup -1} Gpc cube to produce large, publicly available catalogs of mock BOSS spectra that probe a large comoving volume. LyMAS will be a powerful tool for interpreting 3D Lyα forest data, thereby transforming measurements from BOSS and other massive quasar absorption surveys into constraints on dark energy, DM, space geometry, and intergalactic medium physics.
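
    The "simplest form" described above is easy to sketch: calibrate a conditional flux distribution, then draw from it per density bin. The toy below uses a synthetic stand-in for the hydro calibration; the binning and the mock flux-density relation are assumptions, not the paper's calibration.

```python
# Schematic of LyMAS's simplest mode: histogram P(F_s | delta_s) on a
# synthetic "hydro" sample, then draw fluxes for a DM-only skewer.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic calibration sample: flux anticorrelated with density + scatter.
delta = rng.normal(0.0, 1.0, 100_000)
flux = np.clip(np.exp(-np.exp(0.8 * delta))
               + rng.normal(0.0, 0.05, delta.size), 0.0, 1.0)

d_edges = np.linspace(-4.0, 4.0, 33)
f_edges = np.linspace(0.0, 1.0, 21)
f_cent = 0.5 * (f_edges[:-1] + f_edges[1:])

counts, _, _ = np.histogram2d(delta, flux, bins=[d_edges, f_edges])
row = counts.sum(axis=1, keepdims=True)
pdf = np.divide(counts, row,                       # P(F | delta) per bin
                out=np.full_like(counts, 1.0 / counts.shape[1]),
                where=row > 0)

def draw_spectrum(dm_skewer):
    """Turn a smoothed DM-density skewer into a pseudo-spectrum."""
    i = np.clip(np.digitize(dm_skewer, d_edges) - 1, 0, pdf.shape[0] - 1)
    return np.array([rng.choice(f_cent, p=pdf[k]) for k in i])

print(draw_spectrum(rng.normal(0.0, 1.0, 10)))
```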

  4. CO{sub 2} Geologic Storage: Coupled Hydro-Chemo-Thermo-Mechanical Phenomena - From Pore-scale Processes to Macroscale Implications -

    SciTech Connect (OSTI)

    Santamarina, J. Carlos

    2013-05-31

    Global energy consumption will increase in the next decades and is expected to largely rely on fossil fuels. The use of fossil fuels is intimately related to CO{sub 2} emissions and the potential for global warming. Geological CO{sub 2} storage aims to mitigate the global warming problem by sequestering CO{sub 2} underground. Coupled hydro-chemo-mechanical phenomena determine the successful operation and long-term stability of CO{sub 2} geological storage. This research explores coupled phenomena, identifies different zones in the storage reservoir, and investigates their implications for CO{sub 2} geological storage. In particular, the research: explores spatial patterns in mineral dissolution and precipitation (comprehensive mass balance formulation); experimentally determines the interfacial properties of water, mineral, and CO{sub 2} systems (including CO{sub 2}-water-surfactant mixtures to reduce the CO{sub 2}-water interfacial tension in view of enhanced sweep efficiency); analyzes the interaction between clay particles and CO{sub 2}, and the response of sediment layers to the presence of CO{sub 2}, using specially designed experimental setups and complementary analyses; couples advective and diffusive mass transport of species, together with mineral dissolution, to explore pore changes during advection of CO{sub 2}-dissolved water along a rock fracture; upscales results to a porous medium using pore network simulations; measures CO{sub 2} breakthrough in highly compacted fine-grained sediments, shale, and cement specimens; explores sealing strategies; and experimentally measures CO{sub 2}-CH{sub 4} replacement in hydrate-bearing sediments. Analytical, experimental and numerical results obtained in this study can be used to identify optimal CO{sub 2} injection and reservoir-healing strategies to maximize the efficiency of CO{sub 2} injection and to attain long-term storage.

  5. FY results for the Los Alamos large scale demonstration and deployment project

    SciTech Connect (OSTI)

    Stallings, E.; McFee, J.

    2000-11-01

    The Los Alamos Large Scale Demonstration and Deployment Project (LSDDP), in support of the US Department of Energy (DOE) Deactivation and Decommissioning Focus Area (DDFA), is identifying and demonstrating technologies to reduce the cost and risk of management of transuranic-element-contaminated large metal objects, i.e., gloveboxes. DOE must dispose of hundreds of gloveboxes from Rocky Flats, Los Alamos, and other DOE sites. Current practices for removal, decontamination, and size reduction of large metal objects translate to a DOE system-wide cost in excess of $800 million, without disposal costs. In FY99 and FY00 the Los Alamos LSDDP performed several demonstrations of cost/risk-saving technologies. Commercial air pallets were demonstrated for movement and positioning of the oversized crates in neutron counting equipment. The air pallets are able to cost-effectively address the complete waste management inventory, whereas the baseline wheeled carts could address only 25% of the inventory, with higher manpower costs. A gamma interrogation radiography technology was demonstrated to support characterization of the crates. The technology was developed for radiography of trucks for identification of contraband. The radiographs were extremely useful in guiding the selection and method for opening very large crated metal objects. The cost of the radiography was small and the operating benefit was high. Another demonstration compared a Blade Cutting Plunger and a reciprocating saw for removal of glovebox legs and appurtenances. The cost comparison showed that the Blade Cutting Plunger costs were comparable, and a significant safety advantage was reported. A second radiography demonstration was conducted to evaluate a technology based on WIPP-type x-ray characterization of large boxes. This technology provides considerable detail of the contents of the crates. The technology identified details as small as the fasteners in the crates, an unpunctured aerosol can, and a vessel containing liquids. The cost of this technology is higher than the gamma interrogation technique, but the detail provided is much greater.

  6. Optimizing Cluster Heads for Energy Efficiency in Large-Scale Heterogeneous Wireless Sensor Networks

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Gu, Yi; Wu, Qishi; Rao, Nageswara S. V.

    2010-01-01

    Many complex sensor network applications require deploying a large number of inexpensive and small sensors in a vast geographical region to achieve quality through quantity. Hierarchical clustering is generally considered as an efficient and scalable way to facilitate the management and operation of such large-scale networks and minimize the total energy consumption for prolonged lifetime. Judicious selection of cluster heads for data integration and communication is critical to the success of applications based on hierarchical sensor networks organized as layered clusters. We investigate the problem of selecting sensor nodes in a predeployed sensor network to be the cluster heads to minimize the total energy needed for data gathering. We rigorously derive an analytical formula to optimize the number of cluster heads in sensor networks under uniform node distribution, and propose a Distance-based Crowdedness Clustering algorithm to determine the cluster heads in sensor networks under general node distribution. The results from an extensive set of experiments on a large number of simulated sensor networks illustrate the performance superiority of the proposed solution over the clustering schemes based on the k-means algorithm.
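
    The Distance-based Crowdedness Clustering algorithm is only named in the abstract, not specified. A generic stand-in that captures the flavor, picking heads where nodes are most crowded while enforcing a minimum head spacing, might look like this (all parameters are illustrative):

```python
# Generic stand-in, not the paper's algorithm: greedily pick cluster
# heads at the most "crowded" nodes while keeping heads at least
# r_min apart.
import numpy as np

def pick_cluster_heads(xy, r_neigh, r_min, k):
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
    crowdedness = (d < r_neigh).sum(axis=1)       # neighbors within r_neigh
    heads = []
    for i in np.argsort(-crowdedness):            # most crowded first
        if all(d[i, h] >= r_min for h in heads):
            heads.append(int(i))
        if len(heads) == k:
            break
    return heads

rng = np.random.default_rng(1)
nodes = rng.uniform(0.0, 100.0, size=(200, 2))    # 200 sensors, 100x100 field
print(pick_cluster_heads(nodes, r_neigh=10.0, r_min=25.0, k=5))
```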

  7. Re-evaluation of the 1995 Hanford Large Scale Drum Fire Test Results

    SciTech Connect (OSTI)

    Yang, J M

    2007-05-02

    A large-scale drum performance test was conducted at the Hanford Site in June 1995, in which over one hundred (100) 55-gal drums in each of two storage configurations were subjected to severe fuel pool fires. The two storage configurations in the test were pallet storage and rack storage. The description and results of the large-scale drum test at the Hanford Site were reported in WHC-SD-WM-TRP-246, ''Solid Waste Drum Array Fire Performance,'' Rev. 0, 1995. This was one of the main references used to develop the analytical methodology to predict drum failures in WHC-SD-SQA-ANAL-501, ''Fire Protection Guide for Waste Drum Storage Array,'' September 1996. Three drum failure modes were observed from the test reported in WHC-SD-WM-TRP-246: seal failure, lid warping, and catastrophic lid ejection. There was no discernible failure criterion that distinguished one failure mode from another; hence, all three failure modes were treated equally for the purpose of determining the number of failed drums. General observations from the results of the test are as follows: (1) trash expulsion was negligible; (2) flame impingement was identified as the main cause of failure; (3) the range of drum temperatures at failure was 600 C to 800 C, above the yield-strength temperature for steel of approximately 540 C (1,000 F); (4) the critical heat flux required for failure is above 45 kW/m{sup 2}; and (5) fire propagation from one drum to the next was not observed. Statistical evaluation of the test results using, for example, the Student's t-distribution will demonstrate that the failure criteria for TRU waste drums currently employed at nuclear facilities are very conservative relative to the large-scale test results. Hence, a safety analysis utilizing the general criteria in the five observations above will lead to a technically robust and defensible product that bounds the potential consequences from postulated fires in TRU waste facilities, where the means of storage are Type A, 55-gal drums.
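
    As a sketch of the kind of statistical evaluation described, a one-sided lower confidence bound on the mean failure temperature can be computed with the Student's t-distribution; the five temperatures below are invented stand-ins, not data from WHC-SD-WM-TRP-246.

```python
# Illustration only: one-sided 95% lower confidence bound on the mean
# drum failure temperature from a small hypothetical sample.
import numpy as np
from scipy import stats

t_fail = np.array([640.0, 700.0, 755.0, 610.0, 690.0])  # hypothetical, deg C
n, mean, s = len(t_fail), t_fail.mean(), t_fail.std(ddof=1)
lower = mean - stats.t.ppf(0.95, df=n - 1) * s / np.sqrt(n)
print(f"95% lower confidence bound on mean failure temperature: {lower:.0f} C")
```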

  8. A review of large-scale LNG spills : experiment and modeling.

    SciTech Connect (OSTI)

    Luketa-Hanlin, Anay Josephine

    2005-04-01

    The prediction of the possible hazards associated with the storage and transportation of liquefied natural gas (LNG) by ship has motivated a substantial number of experimental and analytical studies. This paper reviews the experimental and analytical work performed to date on large-scale spills of LNG. Specifically, experiments on the dispersion of LNG, as well as experiments of LNG fires from spills on water and land are reviewed. Explosion, pool boiling, and rapid phase transition (RPT) explosion studies are described and discussed, as well as models used to predict dispersion and thermal hazard distances. Although there have been significant advances in understanding the behavior of LNG spills, technical knowledge gaps to improve hazard prediction are identified. Some of these gaps can be addressed with current modeling and testing capabilities. A discussion of the state of knowledge and recommendations to further improve the understanding of the behavior of LNG spills on water is provided.

  9. Aerodynamic force measurement on a large-scale model in a short duration test facility

    SciTech Connect (OSTI)

    Tanno, H.; Kodera, M.; Komuro, T.; Sato, K.; Takahasi, M.; Itoh, K.

    2005-03-01

    A force measurement technique has been developed for large-scale aerodynamic models with a short test time. The technique is based on direct acceleration measurements, with miniature accelerometers mounted on a test model suspended by wires. By measuring acceleration at two different locations, the technique can eliminate oscillations from the natural vibration of the model. The technique was used for drag force measurements on a 3 m long supersonic combustor model in the HIEST free-piston driven shock tunnel. A time resolution of 350 {mu}s is guaranteed during measurements, which is sufficient for the ms-order test times in HIEST. To evaluate measurement reliability and accuracy, measured values were compared with results from a three-dimensional Navier-Stokes numerical simulation. The difference between measured values and numerical simulation values was less than 5%. We conclude that this measurement technique is sufficiently reliable for measuring aerodynamic force within test durations of 1 ms.
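
    A schematic of the two-accelerometer idea, assuming the elastic mode appears with opposite phase at the two stations so that their average cancels it; the signals, model mass, and mode frequency below are synthetic assumptions, not values from the experiment.

```python
# Schematic with synthetic signals: the rigid-body acceleration is common
# to both stations, while the first elastic mode appears with opposite
# phase, so the average of the two signals cancels the vibration.
import numpy as np

t = np.linspace(0.0, 1e-3, 2000)                  # ~1 ms of test time
a_rigid = 50.0                                    # m/s^2 from the drag force
vibration = 30.0 * np.sin(2 * np.pi * 400.0 * t)  # natural-vibration mode
a_front = a_rigid + vibration
a_rear = a_rigid - vibration                      # opposite modal phase
a_body = 0.5 * (a_front + a_rear)                 # vibration cancels

mass = 300.0                                      # kg, assumed model mass
print(f"recovered drag: {mass * a_body.mean():.0f} N")   # F = m * a
```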

  10. Measurement of the large-scale anisotropy of the cosmic background radiation at 3mm

    SciTech Connect (OSTI)

    Epstein, G.L.

    1983-12-01

    A balloon-borne differential radiometer has measured the large-scale anisotropy of the cosmic background radiation (CBR) with high sensitivity. The antenna temperature dipole anisotropy at 90 GHz (3 mm wavelength) is 2.82 +- 0.19 mK, corresponding to a thermodynamic anisotropy of 3.48 mK for a 2.7 K blackbody CBR. The dipole direction, 11.3 +- 0.1 hours right ascension and -5.7 +- 1.8 degrees declination, agrees well with measurements at other frequencies. Calibration error dominates the magnitude uncertainty, with statistical errors on dipole terms being under 0.1 mK. No significant quadrupole power is found, placing a 90% confidence-level upper limit of 0.27 mK on the RMS thermodynamic quadrupolar anisotropy. 22 figures, 17 tables.
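
    The quoted antenna-to-thermodynamic conversion can be checked directly from the Planck-spectrum relation dT_thermo = dT_ant (e^x - 1)^2 / (x^2 e^x), with x = h nu / (k T); the short script below reproduces the ~3.5 mK value.

```python
# Check of the antenna-to-thermodynamic temperature conversion at 90 GHz.
import numpy as np

h, kB = 6.626e-34, 1.381e-23         # Planck and Boltzmann constants (SI)
nu, T = 90e9, 2.7                    # 90 GHz, 2.7 K blackbody
x = h * nu / (kB * T)
factor = (np.exp(x) - 1.0) ** 2 / (x ** 2 * np.exp(x))
print(f"{2.82 * factor:.2f} mK")     # ~3.5 mK, matching the quoted value
```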

  11. Aerosols released during large-scale integral MCCI tests in the ACE Program

    SciTech Connect (OSTI)

    Fink, J.K.; Thompson, D.H.; Spencer, B.W.; Sehgal, B.R.

    1992-04-01

    As part of the internationally sponsored Advanced Containment Experiments (ACE) program, seven large-scale experiments on molten core concrete interactions (MCCIs) have been performed at Argonne National Laboratory. One of the objectives of these experiments is to collect and characterize all the aerosols released from the MCCIs. Aerosols released from experiments using four types of concrete (siliceous, limestone/common sand, serpentine, and limestone/limestone) and a range of metal oxidation for both BWR and PWR reactor core material have been collected and characterized. Release fractions were determined for UO{sub 2}, Zr, the fission products BaO, SrO, La{sub 2}O{sub 3}, CeO{sub 2}, MoO{sub 2}, Te, and Ru, and the control materials Ag, In, and B{sub 4}C. Release fractions of UO{sub 2} and the fission products other than Te were small in all tests. However, release of control materials was significant.

  12. Aerosols released during large-scale integral MCCI tests in the ACE Program

    SciTech Connect (OSTI)

    Fink, J.K.; Thompson, D.H.; Spencer, B.W.; Sehgal, B.R.

    1992-01-01

    As part of the internationally sponsored Advanced Containment Experiments (ACE) program, seven large-scale experiments on molten core concrete interactions (MCCIs) have been performed at Argonne National Laboratory. One of the objectives of these experiments is to collect and characterize all the aerosols released from the MCCIs. Aerosols released from experiments using four types of concrete (siliceous, limestone/common sand, serpentine, and limestone/limestone) and a range of metal oxidation for both BWR and PWR reactor core material have been collected and characterized. Release fractions were determined for UO{sub 2}, Zr, the fission products BaO, SrO, La{sub 2}O{sub 3}, CeO{sub 2}, MoO{sub 2}, Te, and Ru, and the control materials Ag, In, and B{sub 4}C. Release fractions of UO{sub 2} and the fission products other than Te were small in all tests. However, release of control materials was significant.

  13. Large-Scale Computational Screening of Zeolites for Ethane/Ethene Separation

    SciTech Connect (OSTI)

    Kim, J; Lin, LC; Martin, RL; Swisher, JA; Haranczyk, M; Smit, B

    2012-08-14

    Large-scale computational screening of thirty thousand zeolite structures was conducted to find optimal structures for separation of ethane/ethene mixtures. Efficient grand canonical Monte Carlo (GCMC) simulations were performed with graphics processing units (GPUs) to obtain pure-component adsorption isotherms for both ethane and ethene. We have utilized the ideal adsorbed solution theory (IAST) to obtain the mixture isotherms, which were used to evaluate the performance of each zeolite structure based on its working capacity and selectivity. In our analysis, we have determined that specific arrangements of zeolite framework atoms create sites for the preferential adsorption of ethane over ethene. The majority of optimum separation materials can be identified by utilizing this knowledge, and screening structures for the presence of this feature will enable the efficient selection of promising candidate materials for ethane/ethene separation prior to performing molecular simulations.
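
    As a brief illustration of the IAST step, a binary mixture isotherm can be obtained from two pure-component Langmuir fits by equating reduced spreading pressures; the Langmuir parameters below are assumed values, not the paper's GCMC fits.

```python
# IAST sketch for a binary mixture with Langmuir pure-component
# isotherms; (qsat, K) pairs are illustrative assumptions.
import numpy as np
from scipy.optimize import brentq

def psi(p, qsat, K):
    """Reduced spreading pressure of a Langmuir isotherm at pressure p."""
    return qsat * np.log1p(K * p)

def langmuir(p, qsat, K):
    return qsat * K * p / (1.0 + K * p)

def iast_binary(P, y1, iso1, iso2):
    """Adsorbed mole fraction x1 and total loading at total pressure P
    and gas mole fraction y1: equate the two spreading pressures."""
    f = lambda x1: (psi(P * y1 / x1, *iso1)
                    - psi(P * (1 - y1) / (1 - x1), *iso2))
    x1 = brentq(f, 1e-9, 1 - 1e-9)
    q1 = langmuir(P * y1 / x1, *iso1)
    q2 = langmuir(P * (1 - y1) / (1 - x1), *iso2)
    return x1, 1.0 / (x1 / q1 + (1 - x1) / q2)

ethane, ethene = (3.0, 0.8), (3.2, 0.5)   # (mol/kg, 1/bar), assumed
print(iast_binary(P=1.0, y1=0.5, iso1=ethane, iso2=ethene))
```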

  14. Large-scale production of anhydrous nitric acid and nitric acid solutions of dinitrogen pentoxide

    DOE Patents [OSTI]

    Harrar, Jackson E. (Castro Valley, CA); Quong, Roland (Oakland, CA); Rigdon, Lester P. (Livermore, CA); McGuire, Raymond R. (Brentwood, CA)

    2001-01-01

    A method and apparatus are disclosed for a large scale, electrochemical production of anhydrous nitric acid and N.sub.2 O.sub.5. The method includes oxidizing a solution of N.sub.2 O.sub.4 /aqueous-HNO.sub.3 at the anode, while reducing aqueous HNO.sub.3 at the cathode, in a flow electrolyzer constructed of special materials. N.sub.2 O.sub.4 is produced at the cathode and may be separated and recycled as a feedstock for use in the anolyte. The process is controlled by regulating the electrolysis current until the desired products are obtained. The chemical compositions of the anolyte and catholyte are monitored by measurement of the solution density and the concentrations of N.sub.2 O.sub.4.

  15. Large-Scale First-Principles Molecular Dynamics Simulations with Electrostatic Embedding: Application to Acetylcholinesterase Catalysis

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Fattebert, Jean-Luc; Lau, Edmond Y.; Bennion, Brian J.; Huang, Patrick; Lightstone, Felice C.

    2015-10-22

    Enzymes are complicated solvated systems that typically require many atoms to simulate their function with any degree of accuracy. We have recently developed numerical techniques for large-scale First-Principles molecular dynamics simulations and applied them to study the enzymatic reaction catalyzed by acetylcholinesterase. We carried out density functional theory calculations for a quantum mechanical (QM) subsystem consisting of 612 atoms with an O(N) complexity finite-difference approach. The QM subsystem is embedded inside an external potential field representing the electrostatic effect of the environment. We obtained finite-temperature sampling by First-Principles molecular dynamics for the acylation reaction of acetylcholine catalyzed by acetylcholinesterase. Our calculations show two energy barriers along the reaction coordinate for the enzyme-catalyzed acylation of acetylcholine. In conclusion, the second barrier (8.5 kcal/mole) is rate-limiting for the acylation reaction and in good agreement with experiment.
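
    For scale, the quoted 8.5 kcal/mol barrier can be turned into an approximate unimolecular rate with the Eyring transition-state expression; this is a standard back-of-the-envelope conversion, not part of the paper's analysis.

```python
# Back-of-the-envelope only: rate implied by an 8.5 kcal/mol barrier via
# the Eyring expression k = (k_B*T/h) * exp(-dG/(R*T)) at T = 300 K.
import numpy as np

kB, h = 1.381e-23, 6.626e-34      # J/K, J*s
R = 1.987e-3                      # kcal/(mol K)
T, dG = 300.0, 8.5                # K, kcal/mol
rate = (kB * T / h) * np.exp(-dG / (R * T))
print(f"~{rate:.1e} 1/s")         # on the order of 1e6-1e7 per second
```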

  16. Drivers and barriers to e-invoicing adoption in Greek large scale manufacturing industries

    SciTech Connect (OSTI)

    Marinagi, Catherine; Trivellas, Panagiotis; Reklitis, Panagiotis; Skourlas, Christos

    2015-02-09

    This paper attempts to investigate the drivers and barriers that large-scale Greek manufacturing industries experience in adopting electronic invoices (e-invoices), based on three case studies of organizations with an international presence in many countries. The study focuses on the drivers that may affect the increase of the adoption and use of e-invoicing, including customers' demand for e-invoices and sufficient know-how and adoption of e-invoicing in organizations. In addition, the study reveals important barriers that prevent the expansion of e-invoicing, such as suppliers' reluctance to implement e-invoicing and IT infrastructure incompatibilities. Other issues examined by this study include the observed benefits from e-invoicing implementation and the financial priorities of the organizations assumed to be supported by e-invoicing.

  17. PATHWAYS OF LARGE-SCALE MAGNETIC COUPLINGS BETWEEN SOLAR CORONAL EVENTS

    SciTech Connect (OSTI)

    Schrijver, Carolus J.; Title, Alan M.; DeRosa, Marc L.; Yeates, Anthony R.

    2013-08-20

    The high-cadence, comprehensive view of the solar corona by SDO/AIA shows many events that are widely separated in space while occurring close together in time. In some cases, sets of coronal events are evidently causally related, while in many other instances indirect evidence can be found. We present case studies to highlight a variety of coupling processes involved in coronal events. We find that physical linkages between events do occur but, in agreement with earlier studies, that such couplings only infrequently appear to be crucial to the initiation of major eruptive or explosive phenomena. We note that the post-eruption reconfiguration timescale of the large-scale corona, estimated from the extreme-ultraviolet afterglow, is on average longer than the mean time between coronal mass ejections (CMEs), so that many CMEs originate from a corona that is still adjusting from a previous event. We argue that the coronal field is intrinsically global: current systems build up over days to months, the relaxation after eruptions continues over many hours, and evolving connections easily span much of a hemisphere. This needs to be reflected in our modeling of the connections from the solar surface into the heliosphere to properly model the solar wind, its perturbations, and the generation and propagation of solar energetic particles. However, the large-scale field cannot be constructed reliably by currently available observational resources. We assess the potential of high-quality observations from beyond Earth's perspective and advanced global modeling to understand the couplings between coronal events in the context of CMEs and solar energetic particle events.

  18. Survey and analysis of selected jointly owned large-scale electric utility storage projects

    SciTech Connect (OSTI)

    Not Available

    1982-05-01

    The objective of this study was to examine and document the issues surrounding the curtailment in commercialization of large-scale electric storage projects. It was sensed that if these issues could be uncovered, then efforts might be directed toward clearing away these barriers and allowing these technologies to penetrate the market to their maximum potential. Joint ownership of these projects was seen as a possible solution to overcoming the major barriers, particularly economic barriers, of commercialization. Therefore, discussions with partners involved in four pumped storage projects took place to identify the difficulties and advantages of joint-ownership agreements. The four plants surveyed included Yards Creek (Public Service Electric and Gas and Jersey Central Power and Light); Seneca (Pennsylvania Electric and Cleveland Electric Illuminating Company); Ludington (Consumers Power and Detroit Edison); and Bath County (Virginia Electric Power Company and Allegheny Power System, Inc.). Also investigated were several pumped storage projects which were never completed. These included Blue Ridge (American Electric Power); Cornwall (Consolidated Edison); Davis (Allegheny Power System, Inc.); and Kittatinny Mountain (General Public Utilities). Institutional, regulatory, technical, environmental, economic, and special issues at each project were investigated, and the conclusions relative to each issue are presented. The major barriers preventing the growth of energy storage are the high cost of these systems in times of extremely high cost of capital, diminishing load growth, and regulatory influences which will not allow the building of large-scale storage systems due to environmental objections or other reasons. However, the future for energy storage looks viable despite difficult economic times for the utility industry. Joint ownership can ease some of the economic hardships for utilities that demonstrate a need for energy storage.

  19. Impact of Distribution-Connected Large-Scale Wind Turbines on Transmission System Stability during Large Disturbances: Preprint

    SciTech Connect (OSTI)

    Zhang, Y.; Allen, A.; Hodge, B. M.

    2014-02-01

    This work examines the dynamic impacts of distributed utility-scale wind power during contingency events on both the distribution system and the transmission system. It is the first step toward investigating the impact of high penetrations of distribution-connected wind power on both distribution and transmission stability.

  20. Public attitudes regarding large-scale solar energy development in the U.S.

    SciTech Connect (OSTI)

    Carlisle, Juliet E.; Kane, Stephanie L.; Solan, David; Bowman, Madelaine; Joe, Jeffrey C.

    2015-08-01

    Using data collected from both a national sample as well as an oversample in the U.S. Southwest, we examine public attitudes toward the construction of utility-scale solar facilities in the U.S. as well as development in one's own county. Our multivariate analyses assess demographic and sociopsychological factors as well as context, in terms of proximity to the proposed project, by considering the effect of predictors for respondents living in the Southwest versus those from a national sample. We find that the predictors, and the impact of the predictors, related to support and opposition to solar development vary in terms of psychological and physical distance. Overall, for respondents living in the U.S. Southwest we find environmentalism, belief that developers receive too many incentives, and trust in project developers to be significantly related to support and opposition to solar development in general. When Southwest respondents consider large-scale solar development in their own county, the influence of these variables changes so that only property value, race, and age yield influence. Differential effects occur for respondents of our national sample. We believe our findings to be relevant for those outside the U.S. due to the considerable growth PV solar has experienced in the last decade, especially in China, Japan, Germany, and the U.S.

  1. Public attitudes regarding large-scale solar energy development in the U.S.

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Carlisle, Juliet E.; Kane, Stephanie L.; Solan, David; Bowman, Madelaine; Joe, Jeffrey C.

    2015-08-01

    Using data collected from both a national sample as well as an oversample in the U.S. Southwest, we examine public attitudes toward the construction of utility-scale solar facilities in the U.S. as well as development in one's own county. Our multivariate analyses assess demographic and sociopsychological factors as well as context, in terms of proximity to the proposed project, by considering the effect of predictors for respondents living in the Southwest versus those from a national sample. We find that the predictors, and the impact of the predictors, related to support and opposition to solar development vary in terms of psychological and physical distance. Overall, for respondents living in the U.S. Southwest we find environmentalism, belief that developers receive too many incentives, and trust in project developers to be significantly related to support and opposition to solar development in general. When Southwest respondents consider large-scale solar development in their own county, the influence of these variables changes so that only property value, race, and age yield influence. Differential effects occur for respondents of our national sample. We believe our findings to be relevant for those outside the U.S. due to the considerable growth PV solar has experienced in the last decade, especially in China, Japan, Germany, and the U.S.

  2. A Report on Simulation-Driven Reliability and Failure Analysis of Large-Scale Storage Systems

    SciTech Connect (OSTI)

    Wan, Lipeng; Wang, Feiyi; Oral, H. Sarp; Vazhkudai, Sudharshan S.; Cao, Qing

    2014-11-01

    High-performance computing (HPC) storage systems provide data availability and reliability using various hardware and software fault tolerance techniques. Usually, reliability and availability are calculated at the subsystem or component level using limited metrics such as mean time to failure (MTTF) or mean time to data loss (MTTDL). This often means settling on simple and disconnected failure models (such as an exponential failure rate) to achieve tractable and closed-form solutions. However, such models have been shown to be insufficient in assessing end-to-end storage system reliability and availability. We propose a generic simulation framework aimed at analyzing the reliability and availability of storage systems at scale and investigating what-if scenarios. The framework is designed for an end-to-end storage system, accommodating the various components and subsystems, their interconnections, failure patterns and propagation, and performs dependency analysis to capture a wide range of failure cases. We evaluate the framework against a large-scale storage system that is in production and analyze its failure projections toward and beyond the end of its lifecycle. We also examine the potential operational impact by studying how different types of components affect the overall system reliability and availability, and present the preliminary results.
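
    The report's framework is far richer, but the core idea, replacing a closed-form MTTDL with a failure-time simulation into which non-exponential lifetime models can be swapped, can be sketched for a single mirrored pair; all rates below are illustrative, and the repair-window logic is a simplification.

```python
# Toy Monte Carlo of time to data loss for one mirrored disk pair, so a
# non-exponential (wear-out-like) lifetime model can be compared against
# the memoryless baseline. Rates are deliberately small to run quickly.
import numpy as np

rng = np.random.default_rng(42)
MTTF, MTTR = 1000.0, 50.0                     # hours, illustrative only

def simulated_mttdl(draw, trials=2000):
    losses = []
    for _ in range(trials):
        t = 0.0
        while True:
            t += min(draw(), draw())          # first disk of the pair fails
            if draw() < MTTR:                 # mirror also dies during repair
                losses.append(t)
                break
    return np.mean(losses)

expo = lambda: rng.exponential(MTTF)          # memoryless baseline
weib = lambda: MTTF * rng.weibull(1.5)        # wear-out-like alternative
print(f"exponential MTTDL ~ {simulated_mttdl(expo):.0f} h")
print(f"weibull     MTTDL ~ {simulated_mttdl(weib):.0f} h")
```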

  3. The Lagrangian-space Effective Field Theory of large scale structures

    SciTech Connect (OSTI)

    Porto, Rafael A.; Zaldarriaga, Matias; Senatore, Leonardo

    2014-05-01

    We introduce a Lagrangian-space Effective Field Theory (LEFT) formalism for the study of cosmological large-scale structures. Unlike the previous Eulerian-space construction, it is naturally formulated as an effective field theory of extended objects in Lagrangian space. In LEFT the resulting finite-size effects are described using a multipole expansion parameterized by a set of time-dependent coefficients and organized in powers of the ratio of the wavenumber of interest k over the non-linear scale k{sub NL}. The multipoles encode the effects of the short-distance modes on the long-wavelength universe and absorb UV divergences when present. There are no IR divergences in LEFT. Some of the parameters that control the perturbative approach are not assumed to be small and can be automatically resummed. We present an illustrative one-loop calculation for a power-law universe. We describe the dynamics both at the level of the equations of motion and through an action formalism.
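
    Schematically (standard EFT-of-LSS form, not an expression taken from this paper), the leading finite-size correction enters an observable such as the matter power spectrum through a time-dependent coefficient multiplying (k/k{sub NL}){sup 2}:

```latex
% Schematic only: the leading-order counterterm of the effective theory,
% with a free time-dependent coefficient c_s^2(t) absorbing the effect
% of short-distance modes on long wavelengths.
P(k) \;\simeq\; P_{\text{lin}}(k) + P_{\text{1-loop}}(k)
      \;-\; 2\, c_s^{2}(t)\, \frac{k^{2}}{k_{\text{NL}}^{2}}\, P_{\text{lin}}(k)
```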

  4. Collaborative Visualization for Large-Scale Accelerator Electromagnetic Modeling (Final Report)

    SciTech Connect (OSTI)

    William J. Schroeder

    2011-11-13

    This report contains the comprehensive summary of the work performed on the SBIR Phase II, Collaborative Visualization for Large-Scale Accelerator Electromagnetic Modeling, at Kitware Inc. in collaboration with the Stanford Linear Accelerator Center (SLAC). The goal of the work was to develop collaborative visualization tools for large-scale data. The solutions we proposed address the typical problems faced by geographically- and organizationally-separated research and engineering teams, who produce large data (either through simulation or experimental measurement) and wish to work together to analyze and understand their data. Because the data is large, we expect that it cannot be easily transported to each team member's work site, and that the visualization server must reside near the data. Further, we also expect that each work site has heterogeneous resources: some with large computing clients, tiled (or large) displays, and high bandwidth; other sites as simple as a team member on a laptop computer. Our solution is based on the open-source, widely used ParaView large-data visualization application. We extended this tool to support multiple collaborative clients who may locally visualize data, and then periodically rejoin and synchronize with the group to discuss their findings. Options for managing session control, adding annotation, and defining the visualization pipeline, among others, were incorporated. We also developed and deployed a Web visualization framework based on ParaView that enables the Web browser to act as a participating client in a collaborative session. The ParaView Web visualization framework leverages various Web technologies, including WebGL, JavaScript, Java, and Flash, to enable interactive 3D visualization over the web using ParaView as the visualization server. We steered the development of this technology by teaming with the SLAC National Accelerator Laboratory. SLAC has a computationally intensive problem important to the nation's scientific progress, as described shortly. Further, SLAC researchers routinely generate massive amounts of data and frequently collaborate with other researchers located around the world. Thus SLAC is an ideal teammate through which to develop, test, and deploy this technology. The nature of the datasets generated by simulations performed at SLAC presented unique visualization challenges, especially when dealing with higher-order elements, that were addressed during this Phase II. During this Phase II, we have developed a strong platform for collaborative visualization based on ParaView. We have developed and deployed a ParaView Web visualization framework that can be used for effective collaboration over the Web. Collaborating and visualizing over the Web presents the community with unique opportunities for sharing and accessing visualization and HPC resources that hitherto were either inaccessible or difficult to use. The technology we developed here will alleviate both these issues as it becomes widely deployed and adopted.

  5. LARGE-SCALE MAGNETIC HELICITY FLUXES ESTIMATED FROM MDI MAGNETIC SYNOPTIC CHARTS OVER THE SOLAR CYCLE 23

    SciTech Connect (OSTI)

    Yang Shangbin; Zhang Hongqi

    2012-10-10

    To investigate the characteristics of large-scale and long-term evolution of magnetic helicity with solar cycles, we use the method of Local Correlation Tracking to estimate the magnetic helicity evolution over solar cycle 23 from 1996 to 2009 using 795 MDI magnetic synoptic charts. The main results are as follows: the hemispheric helicity rule still holds in general, i.e., the large-scale negative (positive) magnetic helicity dominates the northern (southern) hemisphere. However, the large-scale magnetic helicity fluxes show the same sign in both hemispheres around 2001 and 2005. The global, large-scale magnetic helicity flux over the solar disk changes from a negative value at the beginning of solar cycle 23 to a positive value at the end of the cycle, while the net accumulated magnetic helicity is negative in the period between 1996 and 2009.

  6. Comparison of prestellar core elongations and large-scale molecular cloud structures in the Lupus I region

    SciTech Connect (OSTI)

    Poidevin, Frédérick; Ade, Peter A. R.; Hargrave, Peter C.; Nutter, David; Angilè, Francesco E.; Devlin, Mark J.; Klein, Jeffrey; Benton, Steven J.; Netterfield, Calvin B.; Chapin, Edward L.; Fissel, Laura M.; Gandilo, Natalie N.; Fukui, Yasuo; Gundersen, Joshua O.; Korotkov, Andrei L.; Matthews, Tristan G.; Novak, Giles; Moncelsi, Lorenzo; Mroczkowski, Tony K.; Olmi, Luca; and others

    2014-08-10

    Turbulence and magnetic fields are expected to be important for regulating molecular cloud formation and evolution. However, their effects on sub-parsec to 100 parsec scales, leading to the formation of starless cores, are not well understood. We investigate the prestellar core structure morphologies obtained from analysis of the Herschel-SPIRE 350 μm maps of the Lupus I cloud. This distribution is first compared on a statistical basis to the large-scale shape of the main filament. We find the distribution of the elongation position angle of the cores to be consistent with a random distribution, which means no specific orientation of the morphology of the cores is observed with respect to the mean orientation of the large-scale filament in Lupus I, nor relative to a large-scale bent filament model. This distribution is also compared to the mean orientation of the large-scale magnetic fields probed at 350 μm with the Balloon-borne Large Aperture Telescope for Polarimetry during its 2010 campaign. Here again we do not find any correlation between the core morphology distribution and the average orientation of the magnetic fields on parsec scales. Our main conclusion is that the local filament dynamics, including secondary filaments that often run orthogonally to the primary filament, and possibly small-scale variations in the local magnetic field direction could be the dominant factors for explaining the final orientation of each core.
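
    The randomness comparison described above can be sketched with a Kolmogorov-Smirnov test of position angles against a uniform distribution; the sample below is synthetic, and the paper's actual statistical test may differ.

```python
# Sketch only: KS test of synthetic core position angles against a
# uniform distribution on [0, 180) degrees.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
position_angles = rng.uniform(0.0, 180.0, size=52)   # hypothetical sample
stat, p = stats.kstest(position_angles,
                       stats.uniform(loc=0, scale=180).cdf)
print(f"KS statistic = {stat:.3f}, p-value = {p:.2f}")  # large p: random
```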

  7. Large-Scale Urban Decontamination; Developments, Historical Examples and Lessons Learned

    SciTech Connect (OSTI)

    Rick Demmer

    2007-02-01

    Recent terrorist threats and actual events have led to a renewed interest in the technical field of large-scale, urban-environment decontamination. One of the driving forces for this interest is the real potential for the cleanup and removal of radioactive dispersal device (RDD, or "dirty bomb") residues. In response, the U.S. Government has spent many millions of dollars investigating RDD contamination and novel decontamination methodologies. Interest in chemical and biological (CB) cleanup has also peaked with the threat of terrorist action like the anthrax attack at the Hart Senate Office Building and with catastrophic natural events such as Hurricane Katrina. The efficiency of cleanup response will be improved with these new developments and a better understanding of the old, reliable methodologies. Perhaps the most interesting area of investigation for large-area decontamination is that of the RDD. While an RDD is primarily an economic and psychological weapon, the need to clean up and return valuable or culturally significant resources to the public is nonetheless valid. Several private companies, universities, and National Laboratories are currently developing novel RDD cleanup technologies. Because of their longstanding association with radioactive facilities, the U.S. Department of Energy National Laboratories are at the forefront in developing and testing new RDD decontamination methods. However, such cleanup technologies are likely to be fairly task specific, while many different contamination mechanisms, substrates, and environmental conditions will make actual application more complicated. Some major efforts have also been made to model potential contamination, to evaluate both old and new decontamination techniques, and to assess their readiness for use. Non-radioactive CB threats each have unique decontamination challenges, and recent events have provided some examples. The U.S. Environmental Protection Agency (EPA), as lead agency for these emergency cleanup responses, has a sound approach for decontamination decision-making that has been applied several times. The anthrax contamination at the U.S. Hart Senate Office Building and numerous U.S. Post Office facilities are examples of employing novel technical responses. Decontamination of the Hart Office Building required development of a new approach for high-level decontamination of biological contamination as well as techniques for evaluating the technology's effectiveness. The World Trade Center destruction also demonstrated the need for, and successful implementation of, appropriate cleanup methodologies. There are a number of significant lessons that can be gained from a look at previous large-scale cleanup projects. Too often we are quick to apply a costly package-and-dispose method when sound technological cleaning approaches are available. Understanding historical perspectives, advanced planning, and constant technology improvement are essential to successful decontamination.

  8. Large scale validation of the M5L lung CAD on heterogeneous CT datasets

    SciTech Connect (OSTI)

    Lopez Torres, E.; Fiorina, E.; Pennazio, F.; Peroni, C.; Saletta, M.; Cerello, P.; Camarlinghi, N.; Fantacci, M. E.

    2015-04-15

    Purpose: M5L, a fully automated computer-aided detection (CAD) system for the detection and segmentation of lung nodules in thoracic computed tomography (CT), is presented and validated on several image datasets. Methods: M5L is the combination of two independent subsystems, based on the Channeler Ant Model as a segmentation tool [lung channeler ant model (lungCAM)] and on the voxel-based neural approach. The lungCAM was upgraded with a scan equalization module and a new procedure to recover the nodules connected to other lung structures; its classification module, which makes use of a feed-forward neural network, is based on a small number of features (13), so as to minimize the risk of poor generalization given the large difference in size between the training and testing datasets, which contain 94 and 1019 CTs, respectively. The lungCAM (standalone) and M5L (combined) performance was extensively tested on 1043 CT scans from three independent datasets, including a detailed analysis of the full Lung Image Database Consortium/Image Database Resource Initiative database, which is not yet found in the literature. Results: The lungCAM and M5L performance is consistent across the databases, with a sensitivity of about 70% and 80%, respectively, at eight false-positive findings per scan, despite the variable annotation criteria and acquisition and reconstruction conditions. A reduced sensitivity is found for subtle nodules and ground glass opacity (GGO) structures. A comparison with other CAD systems is also presented. Conclusions: The M5L performance on a large and heterogeneous dataset is stable and satisfactory, although the development of a dedicated module for GGO detection could further improve it, as could an iterative optimization of the training procedure. The main aim of the present study was accomplished: M5L results do not deteriorate when increasing the dataset size, making it a candidate for supporting radiologists in large-scale screening and clinical programs.
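
    The combination of two independent subsystems is the part easiest to sketch: merge nearby candidates and fuse their scores. The matching radius and score-averaging rule below are assumptions, not the published M5L combination rule.

```python
# Schematic merger of candidate lists from two independent CAD
# subsystems: findings within a matching radius are fused (scores
# averaged); unmatched findings from either subsystem are kept.
import numpy as np

def combine_candidates(a, b, radius=5.0):
    """a, b: arrays of shape (n, 4) holding (x, y, z, score) in mm."""
    merged, used = [], set()
    for cand in a:
        d = np.linalg.norm(b[:, :3] - cand[:3], axis=1)
        j = int(np.argmin(d))
        if d[j] < radius and j not in used:
            used.add(j)
            merged.append([*cand[:3], 0.5 * (cand[3] + b[j, 3])])
        else:
            merged.append(list(cand))
    merged += [list(row) for j, row in enumerate(b) if j not in used]
    return np.array(merged)

a = np.array([[10.0, 10.0, 10.0, 0.9], [40.0, 40.0, 40.0, 0.4]])
b = np.array([[11.0, 10.0, 10.0, 0.7], [80.0, 80.0, 80.0, 0.6]])
print(combine_candidates(a, b))   # matched pair fused, others kept
```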

  9. Biomass Energy for Transport and Electricity: Large scale utilization under low CO2 concentration scenarios

    SciTech Connect (OSTI)

    Luckow, Patrick; Wise, Marshall A.; Dooley, James J.; Kim, Son H.

    2010-01-25

    This paper examines the potential role of large scale, dedicated commercial biomass energy systems under global climate policies designed to stabilize atmospheric concentrations of CO2 at 400ppm and 450ppm. We use an integrated assessment model of energy and agriculture systems to show that, given a climate policy in which terrestrial carbon is appropriately valued equally with carbon emitted from the energy system, biomass energy has the potential to be a major component of achieving these low concentration targets. The costs of processing and transporting biomass energy at much larger scales than current experience are also incorporated into the modeling. From the scenario results, 120-160 EJ/year of biomass energy is produced by midcentury and 200-250 EJ/year by the end of this century. In the first half of the century, much of this biomass is from agricultural and forest residues, but after 2050 dedicated cellulosic biomass crops become the dominant source. A key finding of this paper is the role that carbon dioxide capture and storage (CCS) technologies coupled with commercial biomass energy can play in meeting stringent emissions targets. Despite the higher technology costs of CCS, the resulting negative emissions used in combination with biomass are a very important tool in controlling the cost of meeting a target, offsetting the venting of CO2 from sectors of the energy system that may be more expensive to mitigate, such as oil use in transportation. The paper also discusses the role of cellulosic ethanol and Fischer-Tropsch biomass derived transportation fuels and shows that both technologies are important contributors to liquid fuels production, with unique costs and emissions characteristics. Through application of the GCAM integrated assessment model, it becomes clear that, given CCS availability, bioenergy will be used both in electricity and transportation.

  10. Impact of Large Scale Energy Efficiency Programs On Consumer Tariffs and Utility Finances in India

    SciTech Connect (OSTI)

    Abhyankar, Nikit; Phadke, Amol

    2011-01-20

    Large-scale EE programs would modestly increase tariffs but reduce consumers' electricity bills significantly. However, the primary benefit of EE programs is a significant reduction in power shortages, which might make these programs politically acceptable even if tariffs increase. To increase political support, utilities could pursue programs that would result in minimal tariff increases. This can be achieved in four ways: (a) focus only on low-cost programs (such as replacing electric water heaters with gas water heaters); (b) sell power conserved through the EE program to the market at a price higher than the cost of peak power purchase; (c) focus on programs where a partial utility subsidy of incremental capital cost might work; and (d) increase the number of participant consumers by offering a basket of EE programs to fit all consumer subcategories and tariff tiers. Large-scale EE programs can result in consistently negative cash flows and significantly erode the utility's overall profitability. If the utility is facing shortages, the cash flow is very sensitive to the marginal tariff of the unmet demand. This will have an important bearing on the choice of EE programs in Indian states where low-paying rural and agricultural consumers form the majority of the unmet demand. These findings clearly call for a flexible, sustainable solution to the cash-flow management issue. One option is to include a mechanism like FAC in the utility incentive mechanism. Another sustainable solution might be to have the net program cost and revenue loss built into the utility's revenue requirement, and thus into consumer tariffs, up front. However, the latter approach requires institutionalization of EE as a resource. The utility incentive mechanisms would be able to address the utility disincentive of forgone long-run return but would have a minor impact on consumer benefits. Fundamentally, providing incentives for EE programs to make them comparable to supply-side investments is a way of moving the electricity sector toward a model focused on providing energy services rather than providing electricity.

  11. The consequences of failure should be considered in siting geologic carbon sequestration projects

    SciTech Connect (OSTI)

    Price, P.N.; Oldenburg, C.M.

    2009-02-23

    Geologic carbon sequestration is the injection of anthropogenic CO{sub 2} into deep geologic formations where the CO{sub 2} is intended to remain indefinitely. If successfully implemented, geologic carbon sequestration will have little or no impact on terrestrial ecosystems aside from the mitigation of climate change. However, failure of a geologic carbon sequestration site, such as large-scale leakage of CO{sub 2} into a potable groundwater aquifer, could cause impacts that would require costly remediation measures. Governments are attempting to develop regulations for permitting geologic carbon sequestration sites to ensure their safety and effectiveness. At present, these regulations focus largely on decreasing the probability of failure. In this paper we propose that regulations for the siting of early geologic carbon sequestration projects should emphasize limiting the consequences of failure because consequences are easier to quantify than failure probability.

  12. Large Scale U.S. Unconventional Fuels Production and the Role of Carbon Dioxide Capture and Storage Technologies in Reducing Their Greenhouse Gas Emissions

    SciTech Connect (OSTI)

    Dooley, James J.; Dahowski, Robert T.

    2008-11-18

    This paper examines the role that carbon dioxide capture and storage technologies could play in reducing greenhouse gas emissions if a significant unconventional fuels industry were to develop within the United States. Specifically, the paper examines the potential emergence of a large scale domestic unconventional fuels industry based on oil shale and coal-to-liquids (CTL) technologies. For both of these domestic heavy hydrocarbon resources, this paper models the growth of domestic production to a capacity of 3 MMB/d by 2050. For the oil shale production case, we model large scale deployment of an in-situ retorting process applied to the Eocene Green River Formation of Colorado, Utah, and Wyoming, where approximately 75% of the high-grade oil shale resources within the United States lie. For the CTL case, we examine a more geographically dispersed coal-based unconventional fuel industry. This paper examines the performance of these industries under two hypothetical climate policies and concludes that even with the wide scale availability of cost effective carbon dioxide capture and storage technologies, these unconventional fuels production industries would be responsible for significant increases in CO2 emissions to the atmosphere. The oil shale production facilities required to produce 3 MMB/d would result in net emissions to the atmosphere of between 3000 and 7000 MtCO2, in addition to storing potentially 1000 to 5000 MtCO2 in regional deep geologic formations in the period up to 2050. A similarly sized domestic CTL industry could result in 4000 to 5000 MtCO2 emitted to the atmosphere in addition to potentially 21,000 to 22,000 MtCO2 stored in regional deep geologic formations over the same period up to 2050. Preliminary analysis of regional CO2 storage capacity in locations where such facilities might be sited indicates that there appears to be sufficient storage capacity, primarily in deep saline formations, to accommodate the CO2 from these industries. However, additional analyses plus detailed regional and site characterization are needed, along with a closer examination of competing storage demands.

  13. A large-scale structure at redshift 1.71 in the Lockman Hole

    SciTech Connect (OSTI)

    Henry, J. Patrick; Hasinger, Günther; Suh, Hyewon; Aoki, Kentaro; Finoguenov, Alexis; Fotopoulou, Sotiria; Salvato, Mara; Tanaka, Masayuki

    2014-01-01

    We previously identified LH146, a diffuse X-ray source in the Lockman Hole, as a galaxy cluster at redshift 1.753. The redshift was based on one spectroscopic value, buttressed by seven additional photometric redshifts. We confirm here the previous spectroscopic redshift and present concordant spectroscopic redshifts for an additional eight galaxies. The average of these nine redshifts is 1.714 ± 0.012 (error on the mean). Scrutiny of the galaxy distribution in redshift space and the plane of the sky shows that there are two concentrations of galaxies near the X-ray source. In addition, there are three diffuse X-ray sources spread along the axis connecting the galaxy concentrations. LH146 is one of these three and lies approximately at the center of the two galaxy concentrations and the outer two diffuse X-ray sources. We thus conclude that LH146 is at the redshift initially reported, but it is not a single virialized galaxy cluster, as previously assumed. Rather, it appears to mark the approximate center of a larger region containing more objects. For brevity, we refer to all these objects and their alignments as a large-scale structure. The exact nature of LH146 itself remains unclear.
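    The quoted "error on the mean" is the sample standard deviation of the individual redshifts divided by the square root of their number. A short sketch with hypothetical member redshifts (the paper's individual measurements are not reproduced here, so the values below only roughly mimic the published result):

```python
import numpy as np

# Hypothetical spectroscopic redshifts for the nine member galaxies;
# the published average is 1.714 +/- 0.012 (error on the mean).
z = np.array([1.66, 1.69, 1.70, 1.71, 1.71, 1.72, 1.73, 1.76, 1.75])

mean_z = z.mean()
err = z.std(ddof=1) / np.sqrt(z.size)  # sample std over sqrt(N)
print(f"z = {mean_z:.3f} +/- {err:.3f}")
```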

  14. Galaxy evolution and large-scale structure in the far-infrared. I. IRAS pointed observations

    SciTech Connect (OSTI)

    Lonsdale, C.J.; Hacking, P.B.

    1989-04-01

    Redshifts for 66 galaxies were obtained from a sample of 93 60-micron sources detected serendipitously in 22 IRAS deep pointed observations, covering a total area of 18.4 sq deg. The flux density limit of this survey is 150 mJy, 4 times fainter than the IRAS Point Source Catalog (PSC). The luminosity function is similar in shape to those previously published for samples selected from the PSC, with a median redshift of 0.048 for the fainter sample, but shifted to higher space densities. There is evidence that some of the excess number counts in the deeper sample can be explained in terms of a large-scale density enhancement beyond the Pavo-Indus supercluster. In addition, the faintest counts in the new sample confirm the result of Hacking et al. (1989) that faint IRAS 60-micron source counts lie significantly in excess of an extrapolation of the PSC counts assuming no luminosity or density evolution. 81 refs.

  15. In-Flight Measurement of the Absolute Energy Scale of the Fermi Large Area Telescope

    SciTech Connect (OSTI)

    Ackermann, M.; Ajello, M.; Allafort, A.; Atwood, W.B.; Axelsson, M.; Baldini, L.; Barbiellini, G.; Bastieri, D.; Bechtol, K.; Bellazzini, R.; Berenji, B.; Bloom, E.D.; Bonamente, E.; Borgland, A.W.; Bouvier, A.; Bregeon, J.; Brez, A.; Brigida, M.; Bruel, P.; Buehler, R.; Buson, S.; /more authors..

    2012-09-20

    The Large Area Telescope (LAT) on-board the Fermi Gamma-ray Space Telescope is a pair-conversion telescope designed to survey the gamma-ray sky from 20 MeV to several hundred GeV. In this energy band there are no astronomical sources with sufficiently well known and sharp spectral features to allow an absolute calibration of the LAT energy scale. However, the geomagnetic cutoff in the cosmic ray electron-plus-positron (CRE) spectrum in low Earth orbit does provide such a spectral feature. The energy and spectral shape of this cutoff can be calculated with the aid of a numerical code tracing charged particles in the Earth's magnetic field. By comparing the cutoff value with that measured by the LAT in different geomagnetic positions, we have obtained several calibration points between {approx}6 and {approx}13 GeV with an estimated uncertainty of {approx}2%. An energy calibration with such high accuracy reduces the systematic uncertainty in LAT measurements of, for example, the spectral cutoff in the emission from gamma-ray pulsars.

  16. Using calibrated engineering models to predict energy savings in large-scale geothermal heat pump projects

    SciTech Connect (OSTI)

    Shonder, J.A.; Hughes, P.J.; Thornton, J.W.

    1998-10-01

    Energy savings performance contracting (ESPC) is now receiving greater attention as a means of implementing large-scale energy conservation projects in housing. Opportunities for such projects exist for military housing, federally subsidized low-income housing, and planned communities (condominiums, townhomes, senior centers), to name a few. Accurate pre-construction estimates of the energy savings in these projects reduce risk, decrease financing costs, and help avoid post-construction disputes over performance contract baseline adjustments. This paper demonstrates an improved method of estimating energy savings before construction takes place. Using an engineering model calibrated to pre-construction energy-use data collected in the field, this method is able to predict actual energy savings to a high degree of accuracy. This is verified with post-construction energy-use data from a geothermal heat pump ESPC at Fort Polk, Louisiana. This method also allows determination of the relative impact of the various energy conservation measures installed in a comprehensive energy conservation project. As an example, the breakout of savings at Fort Polk for the geothermal heat pumps, desuperheaters, lighting retrofits, and low-flow hot water outlets is provided.

  17. Using Calibrated Engineering Models To Predict Energy Savings In Large-Scale Geothermal Heat Pump Projects

    SciTech Connect (OSTI)

    Shonder, John A; Hughes, Patrick; Thornton, Jeff W.

    1998-01-01

    Energy savings performance contracting (ESPC) is now receiving greater attention as a means of implementing large-scale energy conservation projects in housing. Opportunities for such projects exist for military housing, federally subsidized low-income housing, and planned communities (condominiums, townhomes, senior centers), to name a few. Accurate pre-construction estimates of the energy savings in these projects reduce risk, decrease financing costs, and help avoid post-construction disputes over performance contract baseline adjustments. This paper demonstrates an improved method of estimating energy savings before construction takes place. Using an engineering model calibrated to pre-construction energy-use data collected in the field, this method is able to predict actual energy savings to a high degree of accuracy. This is verified with post-construction energy-use data from a geothermal heat pump ESPC at Fort Polk, Louisiana. This method also allows determination of the relative impact of the various energy conservation measures installed in a comprehensive energy conservation project. As an example, the breakout of savings at Fort Polk for the geothermal heat pumps, desuperheaters, lighting retrofits, and low-flow hot water outlets is provided.

  18. Induced core formation time in subcritical magnetic clouds by large-scale trans-Alfvénic flows

    SciTech Connect (OSTI)

    Kudoh, Takahiro; Basu, Shantanu E-mail: basu@uwo.ca

    2014-10-20

    We clarify the mechanism of accelerated core formation by large-scale nonlinear flows in subcritical magnetic clouds by finding a semi-analytical formula for the core formation time and describing the physical processes that lead to them. Recent numerical simulations show that nonlinear flows induce rapid ambipolar diffusion that leads to localized supercritical regions that can collapse. Here, we employ non-ideal magnetohydrodynamic simulations including ambipolar diffusion for gravitationally stratified sheets threaded by vertical magnetic fields. One of the horizontal dimensions is eliminated, resulting in a simpler two-dimensional simulation that can clarify the basic process of accelerated core formation. A parameter study of simulations shows that the core formation time is inversely proportional to the square of the flow speed when the flow speed is greater than the Alfvén speed. We find a semi-analytical formula that explains this numerical result. The formula also predicts that the core formation time is about three times shorter than that with no turbulence, when the turbulent speed is comparable to the Alfvén speed.
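    The two quantitative statements here, a factor-of-three speedup at trans-Alfvénic flow speeds and an inverse-square dependence in the super-Alfvénic regime, can be illustrated with a simple closed form. The expression below is only a hypothetical stand-in that satisfies both properties, not the paper's semi-analytical formula:

```python
def core_formation_time(v_flow, v_alfven, t0):
    """Hypothetical closed form consistent with the two properties quoted in
    the abstract: t is about 3x shorter than the no-turbulence value t0 when
    v_flow ~ v_alfven, and t scales as v_flow**-2 for v_flow >> v_alfven.
    This is NOT the paper's semi-analytical formula, only an illustration."""
    return t0 / (1.0 + 2.0 * (v_flow / v_alfven) ** 2)

t0 = 10.0  # no-turbulence core formation time, arbitrary units
for ratio in (0.5, 1.0, 2.0, 4.0):  # flow speed in units of the Alfven speed
    print(ratio, core_formation_time(ratio, 1.0, t0))
```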

  19. Kinematic morphology of large-scale structure: evolution from potential to rotational flow

    SciTech Connect (OSTI)

    Wang, Xin; Szalay, Alex; Aragón-Calvo, Miguel A.; Neyrinck, Mark C.; Eyink, Gregory L.

    2014-09-20

    As an alternative way to describe the cosmological velocity field, we discuss the evolution of rotational invariants constructed from the velocity gradient tensor. Compared with the traditional divergence-vorticity decomposition, these invariants, defined as coefficients of the characteristic equation of the velocity gradient tensor, enable a complete classification of all possible flow patterns in the dark-matter comoving frame, including both potential and vortical flows. We show that this tool, first introduced in turbulence two decades ago, is very useful for understanding the evolution of the cosmic web structure, and in classifying its morphology. Before shell crossing, different categories of potential flow are highly associated with the cosmic web structure because of the coherent evolution of density and velocity. This correspondence is even preserved at some level when vorticity is generated after shell crossing. The evolution from the potential to vortical flow can be traced continuously by these invariants. With the help of this tool, we show that the vorticity is generated in a particular way that is highly correlated with the large-scale structure. This includes a distinct spatial distribution and different types of alignment between the cosmic web and vorticity direction for various vortical flows. Incorporating shell crossing into closed dynamical systems is highly non-trivial, but we propose a possible statistical explanation for some of the phenomena relating to the internal structure of the three-dimensional invariant space.
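    The invariants in question are the coefficients of the characteristic equation det(A - λI) = 0 of the velocity gradient tensor A{sub ij} = ∂v{sub i}/∂x{sub j}. A minimal numpy sketch of their computation (sign conventions vary across the literature; this follows a common turbulence convention):

```python
import numpy as np

def velocity_gradient_invariants(A):
    """Invariants P, Q, R of a 3x3 velocity gradient tensor A_ij = dv_i/dx_j,
    i.e., coefficients of its characteristic polynomial
    lambda**3 - P*lambda**2 + Q*lambda - R = 0."""
    P = np.trace(A)
    Q = 0.5 * (np.trace(A) ** 2 - np.trace(A @ A))
    R = np.linalg.det(A)
    return P, Q, R

# Example: a rotation-dominated (vortical) gradient tensor.
A = np.array([[0.0, -1.0, 0.0],
              [1.0,  0.0, 0.0],
              [0.0,  0.0, 0.1]])
print(velocity_gradient_invariants(A))
```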

  20. NV Energy Large-Scale Photovoltaic Integration Study: Intra-Hour Dispatch and AGC Simulation

    SciTech Connect (OSTI)

    Lu, Shuai; Etingov, Pavel V.; Meng, Da; Guo, Xinxin; Jin, Chunlian; Samaan, Nader A.

    2013-01-02

    The uncertainty and variability with photovoltaic (PV) generation make it very challenging to balance power system generation and load, especially under high penetration cases. Higher reserve requirements and more cycling of conventional generators are generally anticipated for large-scale PV integration. However, whether the existing generation fleet is flexible enough to handle the variations and how well the system can maintain its control performance are difficult to predict. The goal of this project is to develop a software program that can perform intra-hour dispatch and automatic generation control (AGC) simulation, by which the balancing operations of a system can be simulated to answer the questions posed above. The simulator, named Electric System Intra-Hour Operation Simulator (ESIOS), uses the NV Energy southern system as a study case, and models the system's generator configurations, AGC functions, and operator actions to balance system generation and load. Actual dispatch of AGC generators and control performance under various PV penetration levels can be predicted by running ESIOS. With data about the load, generation, and generator characteristics, ESIOS can perform similar simulations and assess variable generation integration impacts for other systems as well. This report describes the design of the simulator and presents the study results showing the PV impacts on NV Energy real-time operations.

  1. The power of event-driven analytics in Large Scale Data Processing

    ScienceCinema (OSTI)

    None

    2011-04-25

    FeedZai is a software company specialized in creating high-throughput, low-latency data processing solutions. FeedZai develops a product called "FeedZai Pulse" for continuous event-driven analytics that makes application development easier for end users. It automatically calculates key performance indicators and baselines, showing how current performance differs from previous history, creating timely business intelligence updated to the second. The tool does predictive analytics and trend analysis, displaying data on real-time web-based graphics. In 2010 FeedZai won the European EBN Smart Entrepreneurship Competition, in the Digital Models category, being considered one of the "top-20 smart companies in Europe". The main objective of this seminar/workshop is to explore the topic of large-scale data processing using Complex Event Processing and, in particular, the possible uses of Pulse in the scope of the data processing needs of CERN. Pulse is available as open-source and can be licensed both for non-commercial and commercial applications. FeedZai is interested in exploring possible synergies with CERN in high-volume, low-latency data processing applications. The seminar will be structured in two sessions, the first aimed at presenting the general scope of FeedZai's activities and the second focused on Pulse itself. 10:00-11:00 FeedZai and Large Scale Data Processing: Introduction to FeedZai; FeedZai Pulse and Complex Event Processing; Demonstration; Use-Cases and Applications; Conclusion and Q&A. 11:00-11:15 Coffee break. 11:15-12:30 FeedZai Pulse Under the Hood: A First FeedZai Pulse Application; PulseQL overview; Defining KPIs and Baselines; Conclusion and Q&A. About the speakers: Nuno Sebastião is the CEO of FeedZai. Having worked for many years for the European Space Agency (ESA), he was responsible for the overall design and development of the agency's Satellite Simulation Infrastructure. Having left ESA to found FeedZai, Nuno is currently responsible for the whole operations of the company. Nuno holds an M.Eng. in Informatics Engineering from the University of Coimbra and an MBA from the London Business School. Paulo Marques is the CTO of FeedZai, responsible for product development. Paulo is an Assistant Professor at the University of Coimbra, in the area of Distributed Data Processing, and an Adjunct Associate Professor at Carnegie Mellon, in the US. In the past Paulo led a large number of projects for institutions such as ESA, Microsoft Research, SciSys, and Siemens, and is now fully dedicated to FeedZai. Paulo holds a Ph.D. in Distributed Systems from the University of Coimbra.
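    The core idea described for Pulse, comparing a continuously updated KPI against its historical baseline, can be sketched in a few lines. This is a generic illustration of event-driven baselining, not FeedZai's implementation or API:

```python
from collections import deque

class BaselineKpi:
    """Track a KPI over a sliding window of events and report how the current
    value deviates from the historical mean (the 'baseline'). Generic sketch
    of event-driven baselining, not the Pulse API."""
    def __init__(self, window=1000):
        self.history = deque(maxlen=window)

    def update(self, value):
        self.history.append(value)
        n = len(self.history)
        mean = sum(self.history) / n
        var = sum((x - mean) ** 2 for x in self.history) / max(n - 1, 1)
        dev = (value - mean) / var ** 0.5 if var > 0 else 0.0
        return mean, dev  # baseline and deviation in standard deviations

kpi = BaselineKpi(window=100)
for v in (10, 11, 9, 10, 12, 10, 30):  # last event is anomalous
    baseline, deviation = kpi.update(v)
print(f"baseline={baseline:.1f}, deviation={deviation:.1f} sigma")
```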

  2. Application of micro-PIXE method to ore geology

    SciTech Connect (OSTI)

    Murao, S.; Hamasaki, S.; Sie, S. H.; Maglambayan, V. B.; Hu, X.

    1999-06-10

    Specific examples of ore mineral analysis by micro-PIXE are presented in this paper. For mineralogical usage it is essential to construct a specimen chamber designed exclusively for mineral analysis. In most analyses of natural minerals, selection of absorbers is essential in order to obtain optimum results. Trace element data reflect the crystallographic characteristics of each mineral as well as the geologic setting of the sampling locality, and can be exploited in research spanning mineral exploration to beneficiation. Micro-PIXE thus serves as a bridge between small-scale mineralogical experiments and the understanding of large-scale geological phenomena on the globe.

  3. Large Scale Computing and Storage Requirements for Basic Energy Sciences Research

    SciTech Connect (OSTI)

    Gerber, Richard; Wasserman, Harvey

    2011-03-31

    The National Energy Research Scientific Computing Center (NERSC) is the leading scientific computing facility supporting research within the Department of Energy's Office of Science. NERSC provides high-performance computing (HPC) resources to approximately 4,000 researchers working on about 400 projects. In addition to hosting large-scale computing facilities, NERSC provides the support and expertise scientists need to effectively and efficiently use HPC systems. In February 2010, NERSC, DOE's Office of Advanced Scientific Computing Research (ASCR) and DOE's Office of Basic Energy Sciences (BES) held a workshop to characterize HPC requirements for BES research through 2013. The workshop was part of NERSC's legacy of anticipating users' future needs and deploying the necessary resources to meet these demands. Workshop participants reached a consensus on several key findings, in addition to achieving the workshop's goal of collecting and characterizing computing requirements. The key requirements for scientists conducting research in BES are: (1) Larger allocations of computational resources; (2) Continued support for standard application software packages; (3) Adequate job turnaround time and throughput; and (4) Guidance and support for using future computer architectures. This report expands upon these key points and presents others. Several 'case studies' are included as significant representative samples of the needs of science teams within BES. Research teams' scientific goals, computational methods of solution, current and 2013 computing requirements, and special software and support needs are summarized in these case studies. Also included are researchers' strategies for computing in the highly parallel, 'multi-core' environment that is expected to dominate HPC architectures over the next few years. NERSC has strategic plans and initiatives already underway that address key workshop findings. This report includes a brief summary of those relevant to issues raised by researchers at the workshop.

  4. Autonomous UAV-Based Mapping of Large-Scale Urban Firefights

    SciTech Connect (OSTI)

    Snarski, S; Scheibner, K F; Shaw, S; Roberts, R S; LaRow, A; Oakley, D; Lupo, J; Neilsen, D; Judge, B; Forren, J

    2006-03-09

    This paper describes experimental results from a live-fire data collect designed to demonstrate the ability of IR and acoustic sensing systems to detect and map high-volume gunfire events from tactical UAVs. The data collect supports an exploratory study of the FightSight concept in which an autonomous UAV-based sensor exploitation and decision support capability is being proposed to provide dynamic situational awareness for large-scale battalion-level firefights in cluttered urban environments. FightSight integrates IR imagery, acoustic data, and 3D scene context data with prior time information in a multi-level, multi-step probabilistic-based fusion process to reliably locate and map the array of urban firing events and firepower movements and trends associated with the evolving urban battlefield situation. Described here are sensor results from live-fire experiments involving simultaneous firing of multiple sub/super-sonic weapons (2-AK47, 2-M16, 1 Beretta, 1 Mortar, 1 rocket) with high optical and acoustic clutter at ranges up to 400 m. Sensor-shooter-target configurations and clutter were designed to simulate UAV sensing conditions for a high-intensity firefight in an urban environment. Sensor systems evaluated were an IR bullet tracking system by Lawrence Livermore National Laboratory (LLNL) and an acoustic gunshot detection system by Planning Systems, Inc. (PSI). The results demonstrate convincingly the ability of the LLNL and PSI sensor systems to accurately detect, separate, and localize multiple shooters and the associated shot directions during a high-intensity firefight (77 rounds in 5 sec) in a high acoustic and optical clutter environment with no false alarms. Preliminary fusion processing was also examined and demonstrated an ability to distinguish co-located shooters (shooter density), determine range to <0.5 m accuracy at 400 m, and identify weapon type.

  5. Large

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Large area avalanche photodiode detector array upgrade for a ruby-laser Thomson scattering system T. M. Biewer, a) D. J. Den Hartog, and D. J. Holly Department of Physics, University of Wisconsin-Madison, Madison, Wisconsin 53706 M. R. Stoneking Physics Department, Lawrence University, Appleton, Wisconsin 54912 (Presented on 8 July 2002) A low-cost upgrade has been implemented on the Madison Symmetric Torus (MST) ruby-laser Thomson scattering (TS) system to increase spectral coverage and

  6. Microsoft PowerPoint - 2-A-3-OK-Real-Time Data Infrastructure for Large Scale Wind Fleets.pptx

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Real-Time Data Infrastructure for Large Scale Wind Fleets - Return on Investment vs. Fundamental Business Requirements. Value now. Value over time. Reliability - 4 Ws and an H: What is reliability? Uptime, OEE, profitable wind plants? (OEE = Availability % * Production % * Quality %). Why should money be spent to

  7. Large-Scale Transport Model Uncertainty and Sensitivity Analysis: Distributed Sources in Complex Hydrogeologic Systems

    SciTech Connect (OSTI)

    Sig Drellack, Lance Prothro

    2007-12-01

    The Underground Test Area (UGTA) Project of the U.S. Department of Energy, National Nuclear Security Administration Nevada Site Office is in the process of assessing and developing regulatory decision options based on modeling predictions of contaminant transport from underground testing of nuclear weapons at the Nevada Test Site (NTS). The UGTA Project is attempting to develop an effective modeling strategy that addresses and quantifies multiple components of uncertainty including natural variability, parameter uncertainty, conceptual/model uncertainty, and decision uncertainty in translating model results into regulatory requirements. The modeling task presents multiple unique challenges to the hydrological sciences as a result of the complex fractured and faulted hydrostratigraphy, the distributed locations of sources, the suite of reactive and non-reactive radionuclides, and uncertainty in conceptual models. Characterization of the hydrogeologic system is difficult and expensive because of deep groundwater in the arid desert setting and the large spatial setting of the NTS. Therefore, conceptual model uncertainty is partially addressed through the development of multiple alternative conceptual models of the hydrostratigraphic framework and multiple alternative models of recharge and discharge. Uncertainty in boundary conditions is assessed through development of alternative groundwater fluxes through multiple simulations using the regional groundwater flow model. Calibration of alternative models to heads and measured or inferred fluxes has not proven to provide clear measures of model quality. Therefore, model screening by comparison to independently-derived natural geochemical mixing targets through cluster analysis has also been invoked to evaluate differences between alternative conceptual models. With multiple alternative flow models advanced, the sensitivity of transport predictions to parameter uncertainty is assessed through Monte Carlo simulations. The simulations are challenged by the distributed sources in each of the Corrective Action Units, by complex mass transfer processes, and by the size and complexity of the field-scale flow models. An efficient methodology utilizing particle tracking results and convolution integrals provides in situ concentrations appropriate for Monte Carlo analysis. Uncertainty in source releases and transport parameters including effective porosity, fracture apertures and spacing, matrix diffusion coefficients, sorption coefficients, and colloid load and mobility are considered. With the distributions of input uncertainties and output plume volumes, global analysis methods including stepwise regression, contingency table analysis, and classification tree analysis are used to develop sensitivity rankings of parameter uncertainties for each model considered, thus assisting a variety of decisions.

  8. APPLICATIONS OF CFD METHOD TO GAS MIXING ANALYSIS IN A LARGE-SCALED TANK

    SciTech Connect (OSTI)

    Lee, S; Richard Dimenna, R

    2007-03-19

    The computational fluid dynamics (CFD) modeling technique was applied to the estimation of maximum benzene concentration for the vapor space inside a large-scale, high-level radioactive waste tank at the Savannah River Site (SRS). The objective of the work was to perform the calculations for the benzene mixing behavior in the vapor space of Tank 48 and its impact on the local concentration of benzene. The calculations were used to evaluate the degree to which purge air mixes with benzene evolving from the liquid surface and its ability to prevent an unacceptable concentration of benzene from forming. The analysis was focused on changing the tank operating conditions to establish internal recirculation and changing the benzene evolution rate from the liquid surface. The model used a three-dimensional momentum coupled with multi-species transport. The calculations included potential operating conditions for air inlet and exhaust flows, recirculation flow rate, and benzene evolution rate with prototypic tank geometry. The flow conditions are assumed to be fully turbulent since Reynolds numbers for typical operating conditions are in the range of 20,000 to 70,000 based on the inlet conditions of the air purge system. A standard two-equation turbulence model was used. The model was verified through comparisons with test results for typical gas-mixing problems available in the literature. The benchmarking results showed that the predictions are in good agreement with the analytical solutions and literature data. Additional sensitivity calculations included a reduced benzene evolution rate, reduced air inlet and exhaust flow, and forced internal recirculation. The modeling results showed that the vapor space was fairly well mixed and that benzene concentrations were relatively low when forced recirculation and 72 cfm ventilation air through the tank boundary were imposed. For the same 72 cfm air inlet flow but without forced recirculation, the heavier benzene gas was stratified. The results demonstrated that benzene concentrations were relatively low for typical operating configurations and conditions. Detailed results and the cases considered in the calculations are discussed in this paper.
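    The turbulence assumption can be checked with the standard pipe-flow Reynolds number Re = ρvD/μ evaluated at the purge-air inlet. The inlet diameter and velocities below are assumed placeholders chosen to reproduce the quoted 20,000-70,000 range; the report's actual duct geometry is not given here:

```python
# Reynolds number check for the purge-air inlet, Re = rho * v * D / mu.
# The inlet diameter and velocities are hypothetical placeholders; the
# abstract only states the resulting Re range of 20,000-70,000.
rho = 1.2    # air density, kg/m^3
mu = 1.8e-5  # air dynamic viscosity, Pa*s
D = 0.15     # inlet duct diameter, m (assumed)
for v in (2.0, 5.0, 7.0):  # inlet velocities, m/s (assumed)
    Re = rho * v * D / mu
    print(f"v = {v} m/s -> Re = {Re:,.0f}")
```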

  9. Reducing Plug and Process Loads for a Large Scale, Low Energy Office Building: NREL's Research Support Facility; Preprint

    SciTech Connect (OSTI)

    Lobato, C.; Pless, S.; Sheppy, M.; Torcellini, P.

    2011-02-01

    This paper documents the design and operational plug and process load energy efficiency measures needed to allow a large-scale office building to reach ultra-high-efficiency building goals. The appendices of this document contain a wealth of documentation pertaining to plug and process load design in the RSF, including a list of the equipment selected for use.

  10. THE DETECTION OF THE LARGE-SCALE ALIGNMENT OF MASSIVE GALAXIES AT z {approx} 0.6

    SciTech Connect (OSTI)

    Li Cheng [Partner Group of the Max Planck Institute for Astrophysics at the Shanghai Astronomical Observatory and Key Laboratory for Research in Galaxies and Cosmology of Chinese Academy of Sciences, Nandan Road 80, Shanghai 200030 (China); Jing, Y. P. [Center for Astronomy and Astrophysics, Department of Physics, Shanghai Jiao Tong University, Shanghai 200240 (China); Faltenbacher, A. [School of Physics, University of the Witwatersrand, P.O. Box Wits, Johannesburg 2050 (South Africa); Wang Jie, E-mail: leech@shao.ac.cn [National Astronomical Observatories, Chinese Academy of Sciences, Beijing 100012 (China)

    2013-06-10

    We report on the detection of the alignment between galaxies and large-scale structure at z {approx} 0.6 based on the CMASS galaxy sample from the Baryon Oscillation Spectroscopic Survey Data Release 9. We use two statistics to quantify the alignment signal: (1) the alignment two-point correlation function that probes the dependence of galaxy clustering at a given separation in redshift space on the projected angle ({theta}{sub p}) between the orientation of galaxies and the line connecting to other galaxies, and (2) the cos (2{theta})-statistic that estimates the average of cos (2{theta}{sub p}) for all correlated pairs at a given separation s. We find a significant alignment signal out to about 70 h {sup -1} Mpc in both statistics. Applications of the same statistics to dark matter halos of mass above 10{sup 12} h {sup -1} M{sub Sun} in a large cosmological simulation show scale-dependent alignment signals similar to the observation, but with higher amplitudes at all scales probed. We show that this discrepancy may be partially explained by a misalignment angle between central galaxies and their host halos, though detailed modeling is needed in order to better understand the link between the orientations of galaxies and host halos. In addition, we find systematic trends of the alignment statistics with the stellar mass of the CMASS galaxies, in the sense that more massive galaxies are more strongly aligned with the large-scale structure.
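    The cos (2{theta}) statistic is simply the average of cos (2{theta}{sub p}) over all correlated pairs in a separation bin: zero for random orientations, positive when galaxies point along the lines connecting them. A schematic numpy version, with the pair finding and redshift-space binning omitted:

```python
import numpy as np

def cos2theta_statistic(position_angles, pair_directions):
    """Mean of cos(2*theta_p) over galaxy pairs, where theta_p is the angle
    between each galaxy's orientation (position angle, radians) and the
    direction to its pair partner. Zero for random orientations; positive
    values indicate alignment along the connecting line. Schematic only:
    real measurements bin pairs by redshift-space separation s."""
    theta_p = position_angles - pair_directions
    return np.mean(np.cos(2.0 * theta_p))

rng = np.random.default_rng(1)
n = 10000
pa = rng.uniform(0, np.pi, n)        # random orientations
pd = rng.uniform(0, np.pi, n)        # random pair directions
print(cos2theta_statistic(pa, pd))   # ~0: no alignment
print(cos2theta_statistic(pd, pd))   # 1: perfect alignment
```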

  11. Efficient large-scale finite-element computations in a CRAY environment

    SciTech Connect (OSTI)

    Goudreau, G.L.; Bailey, R.A.; Hallquist, J.O.; Murray, R.C.; Sackett, S.J.

    1983-06-01

    The Lawrence Livermore National Laboratory engineering computational experience on the CRAY-1 is highlighted in the context of our large general purpose solid and structural mechanics codes. DYNA2D and DYNA3D are explicit large deformation inelastic Lagrangian codes with one point elements and hourglass control. NIKE2D and NIKE3D are implicit codes of comparable continuum formulation but use two point constant pressure elements and an optimized linear equation solver. NIKE3D has a finite rotation plastic resultant shell element. The new general purpose linear elastic structures code GEMINI is also illustrated for large static and eigenvalue analysis. 19 references.

  12. A membrane-free lithium/polysulfide semi-liquid battery for large-scale energy storage

    SciTech Connect (OSTI)

    Yang, Yuan; Zheng, Guangyuan; Cui, Yi

    2013-01-01

    Large-scale energy storage represents a key challenge for renewable energy and new systems with low cost, high energy density and long cycle life are desired. In this article, we develop a new lithium/polysulfide (Li/PS) semi-liquid battery for large-scale energy storage, with lithium polysulfide (Li{sub 2}S{sub 8}) in ether solvent as a catholyte and metallic lithium as an anode. Unlike previous work on Li/S batteries with discharge products such as solid state Li{sub 2}S{sub 2} and Li{sub 2}S, the catholyte is designed to cycle only in the range between sulfur and Li{sub 2}S{sub 4}. Consequently all detrimental effects due to the formation and volume expansion of solid Li{sub 2}S{sub 2}/Li{sub 2}S are avoided. This novel strategy results in excellent cycle life and compatibility with flow battery design. The proof-of-concept Li/PS battery could reach a high energy density of 170 W h kg{sup -1} and 190 W h L{sup -1} for large scale storage at the solubility limit, while keeping the advantages of hybrid flow batteries. We demonstrated that, with a 5 M Li{sub 2}S{sub 8} catholyte, energy densities of 97 W h kg{sup -1} and 108 W h L{sup -1} can be achieved. As the lithium surface is well passivated by LiNO{sub 3} additive in ether solvent, internal shuttle effect is largely eliminated and thus excellent performance over 2000 cycles is achieved with a constant capacity of 200 mA h g{sup -1}. This new system can operate without the expensive ion-selective membrane, and it is attractive for large-scale energy storage.

  13. ARRA-Multi-Level Energy Storage and Controls for Large-Scale Wind Energy Integration

    SciTech Connect (OSTI)

    David Wenzhong Gao

    2012-09-30

    The Project Objective is to design innovative energy storage architecture and associated controls for high wind penetration to increase reliability and market acceptance of wind power. The project goals are to facilitate wind energy integration at different levels by design and control of suitable energy storage systems. The three levels of the wind power system are: Balancing Control Center level, Wind Power Plant level, and Wind Power Generator level. Our scope is to smooth the wind power fluctuation and also ensure adequate battery life. In the new hybrid energy storage system (HESS) design for wind power generation application, the boundary levels of the state of charge of the battery and that of the supercapacitor are used in the control strategy. In the controller, some logic gates are also used to control the operating time durations of the battery. The sizing method is based on the average fluctuation of wind profiles of a specific wind station. The calculated battery size depends on the size of the supercapacitor, the state of charge of the supercapacitor, and battery wear. To accommodate the wind power fluctuation, a hybrid energy storage system (HESS) consisting of a battery energy storage system (BESS) and a supercapacitor is adopted in this project. A probability-based power capacity specification approach for the BESS and supercapacitors is proposed. Through this method the capacities of the BESS and supercapacitor are properly designed to combine the high energy density of the BESS with the high power density of the supercapacitor. The supercapacitor within the HESS deals with the high power fluctuations, which contributes to the extension of BESS lifetime, and the supercapacitor can handle the peaks in wind power fluctuations without the severe penalty of round-trip losses associated with a BESS. The proposed approach has been verified based on real wind data from an existing wind power plant in Iowa. An intelligent controller that increases battery life within hybrid energy storage systems for wind applications was developed. Comprehensive studies have been conducted and simulation results are analyzed. A permanent magnet synchronous generator, coupled with a variable speed wind turbine, is connected to a power grid (14-bus system). A rectifier, a DC-DC converter and an inverter are used to provide a complete model of the wind system. An Energy Storage System (ESS) is connected to the DC-link through a DC-DC converter. An intelligent controller is applied to the DC-DC converter to help the Voltage Source Inverter (VSI) regulate output power and also to control the operation of the battery and supercapacitor. This ensures a longer lifetime for the batteries. The detailed model is simulated in PSCAD/EMTP. Additionally, economic analysis has been done for different methods that can reduce the wind power output fluctuation. These methods are wind power curtailment, dumping loads, a battery energy storage system, and a hybrid energy storage system. From the results, application of a single advanced HESS can save more money for wind turbine owners. Generally the income would be the same for most methods because the wind does not change and maximum power point tracking can be applied to most systems. On the other hand, the cost is the key point.
For short-term applications and small wind turbines, the BESS is the cheapest applicable method, while for large-scale wind turbines and wind farms the application of an advanced HESS would be the best method to reduce the power fluctuation. The key outcomes of this project include a new intelligent controller that can reduce the energy exchanged between the battery and DC-link, reduce charging/discharging cycles, reduce depth of discharge, increase the time interval between charge/discharge, and lower battery temperature. This improves the overall lifetime of battery energy storage. Additionally, a new design method based on probability helps optimize the power capacity specification for the BESS and supercapacitors. Recommendations include experimental implementation of the controller and energy storage systems in a laboratory environment for further testing and verification, which will help commercialization of the proposed system design and controller.
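    A common way to realize the division of labor described above, with the battery absorbing slow fluctuations and the supercapacitor the fast peaks, is a low-pass filter split of the smoothing power. The sketch below is a generic illustration under that assumption, not the project's controller, which additionally enforces state-of-charge boundaries and battery duty-cycle logic:

```python
import numpy as np

def lowpass(x, dt, tau):
    """First-order low-pass filter, y' = (x - y)/tau, discretized."""
    a = dt / (tau + dt)
    y = np.empty_like(x)
    y[0] = x[0]
    for k in range(1, len(x)):
        y[k] = y[k - 1] + a * (x[k] - y[k - 1])
    return y

def split_hess(p_wind, dt=1.0, tau_grid=300.0, tau_batt=30.0):
    """Generic HESS power split: the grid sees a smoothed profile, the
    battery absorbs the slower part of the residual, and the supercapacitor
    takes the fast peaks. Time constants are illustrative assumptions."""
    p_grid = lowpass(p_wind, dt, tau_grid)      # smoothed output to grid
    p_storage = p_wind - p_grid                 # total absorbed by storage
    p_batt = lowpass(p_storage, dt, tau_batt)   # slow share -> battery
    p_sc = p_storage - p_batt                   # fast share -> supercapacitor
    return p_grid, p_batt, p_sc

rng = np.random.default_rng(2)
p = 50 + np.cumsum(rng.normal(0, 0.5, 3600))    # toy 1-s wind power trace, MW
grid, batt, sc = split_hess(p)
print(batt.std(), sc.std())  # rms duty carried by battery vs supercapacitor
```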

  14. Economic analysis of large-scale hydrogen storage for renewable utility applications.

    SciTech Connect (OSTI)

    Schoenung, Susan M.

    2011-08-01

    The work reported here supports the efforts of the Market Transformation element of the DOE Fuel Cell Technology Program. The portfolio includes hydrogen technologies as well as fuel cell technologies. The objective of this work is to model the use of bulk hydrogen storage, integrated with intermittent renewable energy production of hydrogen via electrolysis, to generate grid-quality electricity. In addition, the work determines cost-effective scale and design characteristics and explores potentially attractive business models.

  15. Techno-economic Modeling of the Integration of 20% Wind and Large-scale Energy Storage in ERCOT by 2030

    SciTech Connect (OSTI)

    Ross Baldick; Michael Webber; Carey King; Jared Garrison; Stuart Cohen; Duehee Lee

    2012-12-21

    This study’s objective is to examine interrelated technical and economic avenues for the Electric Reliability Council of Texas (ERCOT) grid to incorporate up to and over 20% wind generation by 2030. Our specific interest is in the factors that will affect the implementation of both a high level of wind power penetration (>20% of generation) and the installation of large-scale storage.

  16. Placement of the dam for the No. 2 Kambaratinskaya HPP by large-scale blasting: some observations

    SciTech Connect (OSTI)

    Shuifer, M. I.; Argal, E. S.

    2011-11-15

    Results of complex instrument observations of large-scale blasting during construction of the dam for the No. 2 Kambaratinskaya HPP on the Naryn River in the Republic of Kirgizia are analyzed. The purpose of these observations was to determine the actual parameters of the seismic process, evaluate the effect of air and acoustic shock waves, and investigate the kinematics of the surface formed by the blast in its core region within the mass of fractured rocks.

  17. QCD Thermodynamics at High Temperature Peter Petreczky Large Scale Computing and Storage Requirements for Nuclear Physics (NP),

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    QCD Thermodynamics at High Temperature, Peter Petreczky. Large Scale Computing and Storage Requirements for Nuclear Physics (NP), Bethesda MD, April 29-30, 2014. Defining questions of nuclear physics research in the US: Nuclear Science Advisory Committee (NSAC), "The Frontiers of Nuclear Science", 2007 Long Range Plan: "What are the phases of strongly interacting matter and what roles do they play in the cosmos?" "What does QCD predict for

  18. Microsoft Word - The_Advanced_Networks_and_Services_Underpinning_Modern,Large-Scale_Science.SciDAC.v5.doc

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    ESnet4: Advanced Networking and Services Supporting the Science Mission of DOE's Office of Science William E. Johnston ESnet Dept. Head and Senior Scientist Lawrence Berkeley National Laboratory May, 2007 1 Introduction In many ways, the dramatic achievements in scientific discovery through advanced computing and the discoveries of the increasingly large-scale instruments with their enormous data handling and remote collaboration requirements, have been made possible by accompanying

  19. Diurnal Cycle of Convection at the ARM SGP Site: Role of Large-Scale Forcing, Surface Fluxes, and Convective Inhibition

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Diurnal Cycle of Convection at the ARM SGP Site: Role of Large-Scale Forcing, Surface Fluxes, and Convective Inhibition G. J. Zhang Center for Atmospheric Sciences Scripps Institution of Oceanography La Jolla, California Introduction Atmospheric convection undergoes strong diurnal variation over both land and oceans (Gray and Jacobson 1977; Dai 2001; Nesbitt and Zipser 2003). Because of the nature of the diurnal variation of solar radiation, the phasing of convection with solar radiation has a

  20. Coordinated Multi-layer Multi-domain Optical Network (COMMON) for Large-Scale Science Applications (COMMON)

    SciTech Connect (OSTI)

    Vokkarane, Vinod

    2013-09-01

    We intend to implement a Coordinated Multi-layer Multi-domain Optical Network (COMMON) Framework for Large-scale Science Applications. In the COMMON project, specific problems to be addressed include 1) anycast/multicast/manycast request provisioning, 2) deployable OSCARS enhancements, 3) multi-layer, multi-domain quality of service (QoS), and 4) multi-layer, multi-domain path survivability. In what follows, we outline the progress in the above categories (Year 1, 2, and 3 deliverables).

  1. Microsoft Word - NRAP-TRS-III-002-2012_Modeling the Performance of Large Scale CO2 Storage_20121024.docx

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Modeling the Performance of Large-Scale CO2 Storage Systems: A Comparison of Different Sensitivity Analysis Methods. 24 October 2012. Office of Fossil Energy. NRAP-TRS-III-002-2012.

  2. Performance upgrades to the MCNP6 burnup capability for large scale depletion calculations

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Fensin, M. L.; Galloway, J. D.; James, M. R.

    2015-04-11

    The first MCNP based inline Monte Carlo depletion capability was officially released from the Radiation Safety Information and Computational Center as MCNPX 2.6.0. With the merger of MCNPX and MCNP5, MCNP6 combined the capability of both simulation tools, as well as providing new advanced technology, in a single radiation transport code. The new MCNP6 depletion capability was first showcased at the International Congress for Advancements in Nuclear Power Plants (ICAPP) meeting in 2012. At that conference the new capabilities addressed included the combined distributive and shared memory parallel architecture for the burnup capability, improved memory management, physics enhancements, and new predictability as compared to the H.B. Robinson benchmark. At Los Alamos National Laboratory, a special purpose cluster named “tebow” was constructed to maximize available RAM per CPU, as well as to leverage swap space with solid state hard drives, allowing larger scale depletion calculations (with significantly more burnable regions than previously examined). As the MCNP6 burnup capability was scaled to larger numbers of burnable regions, a noticeable slowdown was realized. This paper details two specific computational performance strategies for improving calculation speedup: (1) retrieving cross sections during transport; and (2) tallying mechanisms specific to burnup in MCNP. To combat this slowdown, new performance upgrades were developed and integrated into MCNP6 1.2.

  3. Modeling ramp compression experiments using large-scale molecular dynamics simulation.

    SciTech Connect (OSTI)

    Mattsson, Thomas Kjell Rene; Desjarlais, Michael Paul; Grest, Gary Stephen; Templeton, Jeremy Alan; Thompson, Aidan Patrick; Jones, Reese E.; Zimmerman, Jonathan A.; Baskes, Michael I.; Winey, J. Michael; Gupta, Yogendra Mohan; Lane, J. Matthew D.; Ditmire, Todd; Quevedo, Hernan J.

    2011-10-01

    Molecular dynamics simulation (MD) is an invaluable tool for studying problems sensitive to atomscale physics such as structural transitions, discontinuous interfaces, non-equilibrium dynamics, and elastic-plastic deformation. In order to apply this method to modeling of ramp-compression experiments, several challenges must be overcome: accuracy of interatomic potentials, length- and time-scales, and extraction of continuum quantities. We have completed a 3 year LDRD project with the goal of developing molecular dynamics simulation capabilities for modeling the response of materials to ramp compression. The techniques we have developed fall in to three categories (i) molecular dynamics methods (ii) interatomic potentials (iii) calculation of continuum variables. Highlights include the development of an accurate interatomic potential describing shock-melting of Beryllium, a scaling technique for modeling slow ramp compression experiments using fast ramp MD simulations, and a technique for extracting plastic strain from MD simulations. All of these methods have been implemented in Sandia's LAMMPS MD code, ensuring their widespread availability to dynamic materials research at Sandia and elsewhere.

  4. Performance upgrades to the MCNP6 burnup capability for large scale depletion calculations

    SciTech Connect (OSTI)

    Fensin, M. L.; Galloway, J. D.; James, M. R.

    2015-04-11

    The first MCNP based inline Monte Carlo depletion capability was officially released from the Radiation Safety Information and Computational Center as MCNPX 2.6.0. With the merger of MCNPX and MCNP5, MCNP6 combined the capability of both simulation tools, as well as providing new advanced technology, in a single radiation transport code. The new MCNP6 depletion capability was first showcased at the International Congress for Advancements in Nuclear Power Plants (ICAPP) meeting in 2012. At that conference the new capabilities addressed included the combined distributive and shared memory parallel architecture for the burnup capability, improved memory management, physics enhancements, and new predictability as compared to the H.B. Robinson benchmark. At Los Alamos National Laboratory, a special purpose cluster named “tebow” was constructed to maximize available RAM per CPU, as well as to leverage swap space with solid state hard drives, allowing larger scale depletion calculations (with significantly more burnable regions than previously examined). As the MCNP6 burnup capability was scaled to larger numbers of burnable regions, a noticeable slowdown was realized. This paper details two specific computational performance strategies for improving calculation speedup: (1) retrieving cross sections during transport; and (2) tallying mechanisms specific to burnup in MCNP. To combat this slowdown, new performance upgrades were developed and integrated into MCNP6 1.2.

  5. Large scale two-dimensional arrays of magnesium diboride superconducting quantum interference devices

    SciTech Connect (OSTI)

    Cybart, Shane A.; Dynes, R. C.; Wong, T. J.; Cho, E. Y.; Beeman, J. W.; Yung, C. S.; Moeckly, B. H.

    2014-05-05

    Magnetic field sensors based on two-dimensional arrays of superconducting quantum interference devices were constructed from magnesium diboride thin films. Each array contained over 30,000 Josephson junctions fabricated by ion damage of 30 nm weak links through an implant mask defined by nano-lithography. Current-biased devices exhibited very large voltage modulation as a function of magnetic field, with amplitudes as high as 8 mV.

  6. Integrating large-scale functional genomics data to dissect metabolic networks for hydrogen production

    SciTech Connect (OSTI)

    Harwood, Caroline S

    2012-12-17

    The goal of this project is to identify gene networks that are critical for efficient biohydrogen production by leveraging variation in gene content and gene expression in independently isolated Rhodopseudomonas palustris strains. Coexpression methods were applied to large data sets that we have collected to define probabilistic causal gene networks. To our knowledge this is the first systems-level approach that takes advantage of strain-to-strain variability to computationally define networks critical for a particular bacterial phenotypic trait.

  7. Tensor to scalar ratio and large scale power suppression from pre-slow roll initial conditions

    SciTech Connect (OSTI)

    Lello, Louis; Boyanovsky, Daniel, E-mail: lal81@pitt.edu, E-mail: boyan@pitt.edu [Department of Physics and Astronomy, University of Pittsburgh, 3941 O'Hara St, Pittsburgh, PA 15260 (United States)

    2014-05-01

    We study the corrections to the power spectra of curvature and tensor perturbations and the tensor-to-scalar ratio r in single field slow roll inflation with standard kinetic term due to initial conditions imprinted by a ''fast-roll'' stage prior to slow roll. For a wide range of initial inflaton kinetic energy, this stage lasts only a few e-folds and merges smoothly with slow-roll, thereby leading to non-Bunch-Davies initial conditions for modes that exit the Hubble radius during slow roll. We describe a program that yields the dynamics in the fast-roll stage while matching to the slow roll stage in a manner that is independent of the inflationary potentials. Corrections to the power spectra are encoded in a ''transfer function'' for initial conditions T{sub α}(k), P{sub α}(k) = P{sup BD}{sub α}(k)T{sub α}(k), implying a modification of the ''consistency condition'' for the tensor to scalar ratio at a pivot scale k{sub 0}: r(k{sub 0}) = -8n{sub T}(k{sub 0})[T{sub T}(k{sub 0})/T{sub R}(k{sub 0})]. We obtain T{sub α}(k) to leading order in a Born approximation valid for modes of observational relevance today. A fit yields T{sub α}(k) = 1 + A{sub α}k{sup -p}cos[2πωk/H{sub sr} + φ{sub α}], with 1.5 ≲ p ≲ 2, ω ≃ 1, and H{sub sr} the Hubble scale during slow roll inflation, where curvature and tensor perturbations feature the same p, ω for a wide range of initial conditions. These corrections lead to both a suppression of the quadrupole and oscillatory features in both P{sub R}(k) and r(k{sub 0}) with a period of the order of the Hubble scale during slow roll inflation. The results are quite general and independent of the specific inflationary potentials, depending solely on the ratio of kinetic to potential energy and the slow roll parameters ε{sub V}, η{sub V} to leading order in slow roll. For a wide range of this ratio and the values of ε{sub V}, η{sub V} corresponding to the upper bounds from Planck, we find that the low quadrupole is consistent with the results from Planck, and the oscillations in r(k{sub 0}) as a function of k{sub 0} could be observable if the modes corresponding to the quadrupole and the pivot scale crossed the Hubble radius very few (2-3) e-folds after the onset of slow roll. We comment on possible impact on the recent BICEP2 results.
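    For concreteness, the fitted transfer function can be evaluated directly. The parameter values below are placeholders within the quoted ranges (1.5 ≲ p ≲ 2, ω ≃ 1), not the paper's best-fit numbers:

```python
import numpy as np

def transfer_function(k, A, p, omega, phi, H_sr=1.0):
    """Fitted form of the initial-condition transfer function quoted in the
    abstract: T(k) = 1 + A * k**(-p) * cos(2*pi*omega*k/H_sr + phi).
    Parameter values used below are illustrative placeholders within the
    quoted ranges, not the paper's best-fit numbers."""
    return 1.0 + A * k ** (-p) * np.cos(2.0 * np.pi * omega * k / H_sr + phi)

k = np.linspace(0.5, 20.0, 5)  # modes in units of the slow-roll Hubble scale
print(transfer_function(k, A=0.2, p=1.75, omega=1.0, phi=0.0))
```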

  8. Large-scale fabrication of BN tunnel barriers for graphene spintronics

    SciTech Connect (OSTI)

    Fu, Wangyang; Makk, Péter; Maurand, Romain; Bräuninger, Matthias; Schönenberger, Christian

    2014-08-21

    We have fabricated graphene spin-valve devices utilizing scalable materials made by chemical vapor deposition (CVD). Both the spin-transporting graphene and the tunnel barrier material are CVD-grown. The tunnel barrier is realized by hexagonal boron nitride, used as either a monolayer or a bilayer placed over the graphene. Spin transport experiments were performed using ferromagnetic contacts deposited onto the barrier. We find that spin injection is still greatly suppressed in devices with a monolayer tunneling barrier due to resistance mismatch. This is, however, not the case for devices with bilayer barriers. For those devices, a spin relaxation time of ∼260 ps intrinsic to the CVD graphene material is deduced. This time scale is comparable to those reported for exfoliated graphene, suggesting that this CVD approach is promising for spintronic applications that require scalable materials.

  9. Measuring the effectiveness of infrastructure-level detection of large-scale botnets

    SciTech Connect (OSTI)

    Yan, Guanhua; Eidenbenz, Stephan; Zeng, Yuanyuan; Shin, Kang G

    2010-12-16

    Botnets are one of the most serious security threats to the Internet and its end users. In recent years, utilizing P2P as a Command and Control (C&C) protocol has gained popularity due to its decentralized nature, which can help hide the botmaster's identity. Most bot detection approaches targeting P2P botnets rely either on behavior monitoring or on traffic flow and packet analysis, requiring fine-grained information collected locally. This requirement limits the scale of detection. In this paper, we consider detection of P2P botnets at a high level - the infrastructure level - by exploiting their structural properties from a graph analysis perspective. Using three different P2P overlay structures, we measure the effectiveness of detecting each structure at various locations (the Autonomous System (AS), the Point of Presence (PoP), and the router rendezvous) in the Internet infrastructure.

  10. Active and passive acoustic imaging inside a large-scale polyaxial hydraulic fracture test

    SciTech Connect (OSTI)

    Glaser, S.D.; Dudley, J.W. II; Shlyapobersky, J.

    1999-07-01

    An automated laboratory hydraulic fracture experiment has been assembled to determine which rock and treatment parameters are crucial to improving the efficiency and effectiveness of field hydraulic fractures. To this end, a large (460-mm cubic sample) polyaxial cell, with servo-controlled X, Y, Z, pore pressure, crack-mouth-opening displacement, and bottom-hole pressure, was built. Active imaging with embedded seismic diffraction arrays maps the geometry of the fracture. Preliminary tests indicate that fracture extent can be imaged to within 5%. Unique embeddable high-fidelity particle-velocity AE sensors were designed and calibrated to allow determination of fracture source kinematics.

  11. Efficient Feature-Driven Visualization of Large-Scale Scientific Data

    SciTech Connect (OSTI)

    Lu, Aidong

    2012-12-12

    Very large, complex scientific data acquired in many research areas create critical challenges for scientists seeking to understand, analyze, and organize their data. The objective of this project is to expand feature extraction and analysis capabilities to develop powerful and accurate visualization tools that can assist domain scientists with their requirements in multiple phases of scientific discovery. We have recently developed several feature-driven visualization methods for extracting different data characteristics of volumetric datasets. Our results verify the hypothesis in the proposal and will be used to develop additional prototype systems.

  12. RACORO continental boundary layer cloud investigations. Part I: Case study development and ensemble large-scale forcings

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Vogelmann, Andrew M.; Fridlind, Ann M.; Toto, Tami; Endo, Satoshi; Lin, Wuyin; Wang, Jian; Feng, Sha; Zhang, Yunyan; Turner, David D.; Liu, Yangang; et al

    2015-06-19

    Observation-based modeling case studies of continental boundary layer clouds have been developed to study cloudy boundary layers, aerosol influences upon them, and their representation in cloud- and global-scale models. Three 60-hour case study periods span the temporal evolution of cumulus, stratiform, and drizzling boundary layer cloud systems, representing mixed and transitional states rather than idealized or canonical cases. Based on in-situ measurements from the RACORO field campaign and remote-sensing observations, the cases are designed with a modular configuration to simplify use in large-eddy simulations (LES) and single-column models. Aircraft measurements of aerosol number size distribution are fit to lognormal functions for concise representation in models. Values of the aerosol hygroscopicity parameter, κ, are derived from observations to be ~0.10, which are lower than the 0.3 typical over continents and suggestive of a large aerosol organic fraction. Ensemble large-scale forcing datasets are derived from the ARM variational analysis, ECMWF forecasts, and a multi-scale data assimilation system. The forcings are assessed through comparison of measured bulk atmospheric and cloud properties to those computed in 'trial' large-eddy simulations, where more efficient run times are enabled through modest reductions in grid resolution and domain size compared to the full-sized LES grid. Simulations capture many of the general features observed, but the state-of-the-art forcings were limited at representing details of cloud onset, and tight gradients and high-resolution transients of importance. Methods for improving the initial conditions and forcings are discussed. The cases developed are available to the general modeling community for studying continental boundary clouds.

  13. RACORO continental boundary layer cloud investigations. Part I: Case study development and ensemble large-scale forcings

    SciTech Connect (OSTI)

    Vogelmann, Andrew M.; Fridlind, Ann M.; Toto, Tami; Endo, Satoshi; Lin, Wuyin; Wang, Jian; Feng, Sha; Zhang, Yunyan; Turner, David D.; Liu, Yangang; Li, Zhijin; Xie, Shaocheng; Ackerman, Andrew S.; Zhang, Minghua; Khairoutdinov, Marat

    2015-06-19

    Observation-based modeling case studies of continental boundary layer clouds have been developed to study cloudy boundary layers, aerosol influences upon them, and their representation in cloud- and global-scale models. Three 60-hour case study periods span the temporal evolution of cumulus, stratiform, and drizzling boundary layer cloud systems, representing mixed and transitional states rather than idealized or canonical cases. Based on in-situ measurements from the RACORO field campaign and remote-sensing observations, the cases are designed with a modular configuration to simplify use in large-eddy simulations (LES) and single-column models. Aircraft measurements of aerosol number size distribution are fit to lognormal functions for concise representation in models. Values of the aerosol hygroscopicity parameter, κ, are derived from observations to be ~0.10, which are lower than the 0.3 typical over continents and suggestive of a large aerosol organic fraction. Ensemble large-scale forcing datasets are derived from the ARM variational analysis, ECMWF forecasts, and a multi-scale data assimilation system. The forcings are assessed through comparison of measured bulk atmospheric and cloud properties to those computed in 'trial' large-eddy simulations, where more efficient run times are enabled through modest reductions in grid resolution and domain size compared to the full-sized LES grid. Simulations capture many of the general features observed, but the state-of-the-art forcings were limited at representing details of cloud onset, and tight gradients and high-resolution transients of importance. Methods for improving the initial conditions and forcings are discussed. The cases developed are available to the general modeling community for studying continental boundary clouds.

  14. Carbon Molecular Sieve Membrane as a True One Box Unit for Large Scale Hydrogen Production

    SciTech Connect (OSTI)

    Paul Liu

    2012-05-01

    IGCC coal-fired power plants show promise for environmentally-benign power generation. In these plants coal is gasified to syngas, then processed in a water gas-shift (WGS) reactor to maximize the hydrogen/CO{sub 2} content. The gas stream can then be separated into a hydrogen-rich stream for power generation and/or further purified for sale as a chemical, and a CO{sub 2}-rich stream for the purpose of carbon capture and storage (CCS). Today, the separation is accomplished using conventional absorption/desorption processes with post CO{sub 2} compression. However, significant process complexity and energy penalties accrue with this approach, accounting for ~20% of the capital cost and ~27% parasitic energy consumption. Ideally, a "one-box" process is preferred, in which the syngas is fed directly to the WGS reactor without gas pre-treatment, converting the CO to hydrogen in the presence of H{sub 2}S and other impurities and delivering a clean hydrogen product for power generation or other uses. The development of such a process is the primary goal of this project. Our proposed "one-box" process includes a catalytic membrane reactor (MR) that makes use of a hydrogen-selective, carbon molecular sieve (CMS) membrane and a sulfur-tolerant Co/Mo/Al{sub 2}O{sub 3} catalyst. The membrane reactor's behavior has been investigated with a bench-top unit under different experimental conditions and compared with modeling results. The model is used to further investigate the design features of the proposed process. CO conversion >99% and hydrogen recovery >90% are feasible under the operating pressures available from IGCC. More importantly, the CMS membrane has demonstrated excellent selectivity for hydrogen over H{sub 2}S (>100), and shown no flux loss in the presence of a synthetic "tar"-like material, i.e., naphthalene. In summary, the proposed "one-box" process has been successfully demonstrated with the bench-top reactor. In parallel, we have successfully designed and fabricated a full-scale CMS membrane and module for the proposed application. This full-scale membrane element is 3" in diameter and 30" long, composed of ~85 single CMS membrane tubes. The membrane tubes and bundles have demonstrated satisfactory thermal, hydrothermal, thermal-cycling, and chemical stabilities under an environment simulating the temperature, pressure, and contaminant levels encountered in our proposed process. More importantly, the membrane module packed with the CMS bundle was tested for over 30 pressure cycles between ambient pressure and 300-600 psi at 200 to 300°C without mechanical degradation. Finally, internal baffles were designed and installed to improve flow distribution within the module, which delivered ~90% separation efficiency in comparison with the efficiency achieved with single membrane tubes. In summary, the full-scale CMS membrane element and module have been successfully developed and tested satisfactorily for our proposed one-box application; a test quantity of elements/modules has been fabricated for field testing. Multiple field tests have been performed under this project at the National Carbon Capture Center (NCCC). The separation efficiency and performance stability of our full-scale membrane elements have been verified in testing conducted for times ranging from 100 to >250 hours of continuous exposure to coal/biomass gasifier off-gas for hydrogen enrichment with no gas pre-treatment for contaminants removal. In particular, "tar-like" contaminants were effectively rejected by the membrane with no evidence of fouling. In addition, testing was conducted using a hybrid membrane system, i.e., the CMS membrane in conjunction with a palladium membrane, to demonstrate that 99+% H{sub 2} purity and a high degree of CO{sub 2} capture could be achieved. In summary, the stability and performance of the full-scale hydrogen-selective CMS membrane/module has been verified in multiple field tests in the presence of coal/biomass gasifier off-gas under this project.

  15. Evaluation of Simple Causal Message Logging for Large-Scale Fault Tolerant HPC Systems

    SciTech Connect (OSTI)

    Bronevetsky, G; Meneses, E; Kale, L V

    2011-02-25

    The era of petascale computing brought machines with hundreds of thousands of processors. The next generation of exascale supercomputers will make available clusters with millions of processors. In those machines, the mean time between failures will range from a few minutes to a few tens of minutes, making the crash of a processor the common case instead of a rarity. Parallel applications running on those large machines will need to simultaneously survive crashes and maintain high productivity. To achieve that, fault tolerance techniques will have to go beyond checkpoint/restart, which requires all processors to roll back in case of a failure. Incorporating some form of message logging will provide a framework where only a subset of processors is rolled back after a crash. In this paper, we discuss why a simple causal message logging protocol is a promising alternative for providing fault tolerance in large supercomputers. As opposed to pessimistic message logging, it has low latency overhead, especially in collective communication operations. In addition, it saves messages when more than one thread is running per processor. Finally, we demonstrate that a simple causal message logging protocol has a faster recovery and a low performance penalty when compared to checkpoint/restart. Running the NAS Parallel Benchmarks (CG, MG and BT) on 1024 processors, simple causal message logging has a latency overhead below 5%.
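
    To make the contrast with checkpoint/restart concrete, here is a minimal sketch of the causal message logging idea, not the authors' protocol: determinants of message deliveries are piggybacked on outgoing messages, so after a crash only the failed process replays. All names and structure are illustrative.

        # Minimal sketch of causal message logging (illustrative only, not the
        # paper's protocol). Each message piggybacks the determinants its sender
        # has collected; after a crash, only the failed process replays, using
        # the determinants surviving in its neighbors' logs.
        from dataclasses import dataclass, field
        from typing import List

        @dataclass
        class Determinant:
            sender: int      # rank that sent the message
            receiver: int    # rank that received it
            send_seq: int    # sender's sequence number
            recv_seq: int    # order in which the receiver consumed it

        @dataclass
        class Process:
            rank: int
            recv_count: int = 0
            dets: List[Determinant] = field(default_factory=list)  # causal log

            def send(self, dest: "Process", seq: int) -> None:
                # Piggyback every determinant we causally know about.
                dest.receive(self.rank, seq, list(self.dets))

            def receive(self, src: int, seq: int, piggybacked: List[Determinant]) -> None:
                self.recv_count += 1
                det = Determinant(src, self.rank, seq, self.recv_count)
                # Merge the piggybacked determinants plus the new one.
                self.dets.extend(piggybacked)
                self.dets.append(det)

        # If p1 crashes, the determinants describing its receives survive at p2,
        # so p1 can be replayed deterministically from its last checkpoint.
        p0, p1, p2 = Process(0), Process(1), Process(2)
        p0.send(p1, seq=1)       # p1 logs (0 -> 1)
        p1.send(p2, seq=1)       # p2 now holds p1's determinants causally
        replay_info = [d for d in p2.dets if d.receiver == 1]
        print(replay_info)       # enough to re-deliver p1's messages in order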

  16. Development of fine-resolution analyses and expanded large-scale forcing properties. Part I: Methodology and evaluation

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Li, Zhijin; Vogelmann, Andrew M.; Feng, Sha; Liu, Yangang; Lin, Wuyin; Zhang, Minghua; Toto, Tami; Endo, Satoshi

    2015-01-20

    We produce fine-resolution, three-dimensional fields of meteorological and other variables for the U.S. Department of Energy’s Atmospheric Radiation Measurement (ARM) Southern Great Plains site. The Community Gridpoint Statistical Interpolation system is implemented in a multiscale data assimilation (MS-DA) framework that is used within the Weather Research and Forecasting model at a cloud-resolving resolution of 2 km. The MS-DA algorithm uses existing reanalysis products and constrains fine-scale atmospheric properties by assimilating high-resolution observations. A set of experiments show that the data assimilation analysis realistically reproduces the intensity, structure, and time evolution of clouds and precipitation associated with a mesoscale convective system. Evaluations also show that the large-scale forcing derived from the fine-resolution analysis has an overall accuracy comparable to the existing ARM operational product. For enhanced applications, the fine-resolution fields are used to characterize the contribution of subgrid variability to the large-scale forcing and to derive hydrometeor forcing, which are presented in companion papers.

  17. Development of fine-resolution analyses and expanded large-scale forcing properties. Part I: Methodology and evaluation

    SciTech Connect (OSTI)

    Li, Zhijin; Vogelmann, Andrew M.; Feng, Sha; Liu, Yangang; Lin, Wuyin; Zhang, Minghua; Toto, Tami; Endo, Satoshi

    2015-01-20

    We produce fine-resolution, three-dimensional fields of meteorological and other variables for the U.S. Department of Energy’s Atmospheric Radiation Measurement (ARM) Southern Great Plains site. The Community Gridpoint Statistical Interpolation system is implemented in a multiscale data assimilation (MS-DA) framework that is used within the Weather Research and Forecasting model at a cloud-resolving resolution of 2 km. The MS-DA algorithm uses existing reanalysis products and constrains fine-scale atmospheric properties by assimilating high-resolution observations. A set of experiments show that the data assimilation analysis realistically reproduces the intensity, structure, and time evolution of clouds and precipitation associated with a mesoscale convective system. Evaluations also show that the large-scale forcing derived from the fine-resolution analysis has an overall accuracy comparable to the existing ARM operational product. For enhanced applications, the fine-resolution fields are used to characterize the contribution of subgrid variability to the large-scale forcing and to derive hydrometeor forcing, which are presented in companion papers.

  18. Probability density function characterization for aggregated large-scale wind power based on Weibull mixtures

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Gomez-Lazaro, Emilio; Bueso, Maria C.; Kessler, Mathieu; Martin-Martinez, Sergio; Zhang, Jie; Hodge, Bri -Mathias; Molina-Garcia, Angel

    2016-02-02

    Here, the Weibull probability distribution has been widely applied to characterize wind speeds for wind energy resources. Wind power generation modeling is different, however, due in particular to power curve limitations, wind turbine control methods, and transmission system operation requirements. These differences are even greater for aggregated wind power generation in power systems with high wind penetration. Consequently, models based on a single Weibull component can provide poor characterizations for aggregated wind power generation. With this aim, the present paper focuses on discussing Weibull mixtures to characterize the probability density function (PDF) for aggregated wind power generation. PDFs of wind power data are first classified according to hourly and seasonal patterns. The selection of the number of components in the mixture is analyzed through two well-known criteria: the Akaike information criterion (AIC) and the Bayesian information criterion (BIC). Finally, the optimal number of Weibull components for maximum likelihood is explored for the defined patterns, including the estimated weight, scale, and shape parameters. Results show that multi-Weibull models are more suitable for characterizing aggregated wind power data due to the impact of distributed generation, the variety of wind speed values, and wind power curtailment.
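
    As a concrete illustration of the model-selection step described above, the sketch below fits one- and two-component Weibull models to synthetic data by maximum likelihood and compares AIC/BIC. It is illustrative only: the data, starting values, and bounds are made up, not taken from the paper.

        # Sketch: choose the number of Weibull components for (synthetic)
        # aggregated wind power data via AIC/BIC. Illustrative only.
        import numpy as np
        from scipy.stats import weibull_min
        from scipy.optimize import minimize

        rng = np.random.default_rng(0)
        # Synthetic stand-in for normalized aggregated wind power samples.
        data = np.concatenate([
            weibull_min.rvs(1.8, scale=0.3, size=600, random_state=rng),
            weibull_min.rvs(4.0, scale=0.8, size=400, random_state=rng)])

        def nll_two_component(theta):
            w, c1, s1, c2, s2 = theta
            pdf = (w * weibull_min.pdf(data, c1, scale=s1)
                   + (1.0 - w) * weibull_min.pdf(data, c2, scale=s2))
            return -np.sum(np.log(pdf + 1e-300))

        # One component: scipy's built-in maximum-likelihood fit.
        c, loc, s = weibull_min.fit(data, floc=0.0)
        nll1 = -np.sum(weibull_min.logpdf(data, c, scale=s))

        # Two components: bounded maximum-likelihood fit.
        res = minimize(nll_two_component, x0=[0.5, 2.0, 0.3, 4.0, 0.8],
                       bounds=[(0.01, 0.99), (0.1, 20), (0.01, 5),
                               (0.1, 20), (0.01, 5)])
        nll2 = res.fun

        n = data.size
        for k, nll in [(2, nll1), (5, nll2)]:  # k = number of free parameters
            aic = 2 * k + 2 * nll
            bic = k * np.log(n) + 2 * nll
            print(f"k={k}: AIC={aic:.1f}  BIC={bic:.1f}")  # lower is better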

  19. Economic Impact of Large-Scale Deployment of Offshore Marine and Hydrokinetic Technology in Oregon Coastal Counties

    SciTech Connect (OSTI)

    Jimenez, T.; Tegen, S.; Beiter, P.

    2015-03-01

    To begin understanding the potential economic impacts of large-scale wave energy converter (WEC) technology, the Bureau of Ocean Energy Management (BOEM) commissioned the National Renewable Energy Laboratory (NREL) to conduct an economic impact analysis of large-scale WEC deployment for Oregon coastal counties. This report follows a previously published report by BOEM and NREL on the jobs and economic impacts of WEC technology for the entire state (Jimenez and Tegen 2015). As in Jimenez and Tegen (2015), this analysis examined two deployment scenarios in the 2026-2045 timeframe: the first scenario assumed 13,000 megawatts (MW) of WEC technology deployed during the analysis period, and the second assumed 18,000 MW deployed by 2045. Both scenarios require major technology and cost improvements in the WEC devices. The study considers very large-scale deployment so that readers can examine and discuss the potential of a successful and very large WEC industry. The 13,000-MW scenario is used as the basis for the county analysis, as it is the smaller of the two. Sensitivity studies examined the effects of a robust in-state WEC supply chain. The region of analysis comprises the seven coastal counties in Oregon—Clatsop, Coos, Curry, Douglas, Lane, Lincoln, and Tillamook—so estimates of jobs and other economic impacts are specific to this coastal county area.

  20. Simplified field-in-field technique for a large-scale implementation in breast radiation treatment

    SciTech Connect (OSTI)

    Fournier-Bidoz, Nathalie; Kirova, Youlia M.; Campana, Francois; Dendale, Remi; Fourquet, Alain

    2012-07-01

    We wanted to evaluate a simplified 'field-in-field' technique (SFF) that was implemented in our department of Radiation Oncology for breast treatment. This study evaluated 15 consecutive patients treated with the simplified field-in-field technique after breast-conserving surgery for early-stage breast cancer. Radiotherapy consisted of whole-breast irradiation to a total dose of 50 Gy in 25 fractions, and a boost of 16 Gy in 8 fractions to the tumor bed. We compared dosimetric outcomes of SFF to state-of-the-art electronic surface compensation (ESC) with dynamic leaves. An analysis of early skin toxicity in the population of 15 patients was performed. The median volume receiving at least 95% of the prescribed dose was 763 mL (range, 347-1472) for SFF vs. 779 mL (range, 349-1494) for ESC. The median residual 107% isodose was 0.1 mL (range, 0-63) for SFF and 1.9 mL (range, 0-57) for ESC. Monitor units were on average 25% higher in ESC plans compared with SFF. No patient treated with SFF had acute side effects exceeding grade 1 on the NCI scale. SFF created homogeneous 3D dose distributions equivalent to those of electronic surface compensation with dynamic leaves. It allowed the integration of a forward-planned concomitant tumor bed boost as an additional multileaf collimator subfield of the tangential fields. Compared with electronic surface compensation with dynamic leaves, shorter treatment times allowed better radiation protection of the patient. Low-grade acute toxicity evaluated weekly during treatment and 2 months after treatment completion justified the pursuit of this technique for all breast patients in our department.

  1. Direct wafer bonding technology for large-scale InGaAs-on-insulator transistors

    SciTech Connect (OSTI)

    Kim, SangHyeon E-mail: sh-kim@kist.re.kr; Ikku, Yuki; Takenaka, Mitsuru; Takagi, Shinichi; Yokoyama, Masafumi; Nakane, Ryosho; Li, Jian; Kao, Yung-Chung

    2014-07-28

    Heterogeneous integration of III-V devices on Si wafers has been explored for realizing high device performance as well as merging electrical and photonic applications on the Si platform. Existing methodologies have unavoidable drawbacks, such as inferior device quality or high cost in comparison with current Si-based technology. In this paper, we present InGaAs-on-insulator (-OI) fabrication from an InGaAs layer grown on a Si donor wafer with a III-V buffer layer, instead of growth on an InP donor wafer. This technology provides wafer-size scalability of III-V-OI layers up to the Si wafer size of 300 mm with high film quality and low cost. The high film quality has been confirmed by Raman and photoluminescence spectra. In addition, the fabricated InGaAs-OI transistors exhibit a high electron mobility of 1700 cm{sup 2}/V s and a uniform distribution of the leakage current, indicating high layer quality with low defect density.

  2. LYα FOREST TOMOGRAPHY FROM BACKGROUND GALAXIES: THE FIRST MEGAPARSEC-RESOLUTION LARGE-SCALE STRUCTURE MAP AT z > 2

    SciTech Connect (OSTI)

    Lee, Khee-Gan; Hennawi, Joseph F.; Eilers, Anna-Christina [Max Planck Institute for Astronomy, Königstuhl 17, D-69117 Heidelberg (Germany); Stark, Casey; White, Martin [Department of Astronomy, University of California at Berkeley, B-20 Hearst Field Annex 3411, Berkeley, CA 94720 (United States); Prochaska, J. Xavier [Department of Astronomy and Astrophysics, University of California, 1156 High Street, Santa Cruz, CA 95064 (United States); Schlegel, David J. [University of California Observatories, Lick Observatory, 1156 High Street, Santa Cruz, CA 95064 (United States); Arinyo-i-Prats, Andreu [Institut de Ciències del Cosmos, Universitat de Barcelona (IEEC-UB), Martí Franquès 1, E-08028 Barcelona (Spain); Suzuki, Nao [Kavli Institute for the Physics and Mathematics of the Universe (IPMU), The University of Tokyo, Kashiwanoha 5-1-5, Kashiwa-shi, Chiba (Japan); Croft, Rupert A. C. [Department of Physics, Carnegie Mellon University, 5000 Forbes Avenue, Pittsburgh, PA 15213 (United States); Caputi, Karina I. [Kapteyn Astronomical Institute, University of Groningen, P.O. Box 800, 9700-AV Groningen (Netherlands); Cassata, Paolo [Instituto de Fisica y Astronomia, Facultad de Ciencias, Universidad de Valparaiso, Av. Gran Bretana 1111, Casilla 5030, Valparaiso (Chile); Ilbert, Olivier; Le Brun, Vincent; Le Fèvre, Olivier [Aix Marseille Université, CNRS, LAM (Laboratoire d'Astrophysique de Marseille) UMR 7326, F-13388 Marseille (France); Garilli, Bianca [INAF-IASF, Via Bassini 15, I-20133, Milano (Italy); Koekemoer, Anton M. [Space Telescope Science Institute, 3700 San Martin Drive, Baltimore, MD 21218 (United States); Maccagni, Dario [INAF-Osservatorio Astronomico di Bologna, Via Ranzani 1, I-40127 Bologna (Italy); Nugent, Peter, E-mail: lee@mpia.de [Lawrence Berkeley National Laboratory, 1 Cyclotron Road, Berkeley, CA 94720 (United States); and others

    2014-11-01

    We present the first observations of foreground Lyα forest absorption from high-redshift galaxies, targeting 24 star-forming galaxies (SFGs) with z ≈ 2.3-2.8 within a 5' × 14' region of the COSMOS field. The transverse sightline separation is ~2 h{sup -1} Mpc comoving, allowing us to create a tomographic reconstruction of the three-dimensional (3D) Lyα forest absorption field over the redshift range 2.20 ≤ z ≤ 2.45. The resulting map covers 6 h{sup -1} Mpc × 14 h{sup -1} Mpc in the transverse plane and 230 h{sup -1} Mpc along the line of sight with a spatial resolution of ~3.5 h{sup -1} Mpc, and is the first high-fidelity map of large-scale structure on ~Mpc scales at z > 2. Our map reveals significant structures with ≳10 h{sup -1} Mpc extent, including several spanning the entire transverse breadth, providing qualitative evidence for the filamentary structures predicted to exist in the high-redshift cosmic web. Simulated reconstructions with the same sightline sampling, spectral resolution, and signal-to-noise ratio recover the salient structures present in the underlying 3D absorption fields. Using data from other surveys, we identified 18 galaxies with known redshifts coeval with our map volume, enabling a direct comparison with our tomographic map. This shows that galaxies preferentially occupy high-density regions, in qualitative agreement with the same comparison applied to simulations. Our results establish the feasibility of the CLAMATO survey, which aims to obtain Lyα forest spectra for ~1000 SFGs over ~1 deg{sup 2} of the COSMOS field, in order to map out the intergalactic medium large-scale structure at ⟨z⟩ ~ 2.3 over a large volume (100 h{sup -1} Mpc){sup 3}.

  3. High performance graphics processor based computed tomography reconstruction algorithms for nuclear and other large scale applications.

    SciTech Connect (OSTI)

    Jimenez, Edward Steven

    2013-09-01

    The goal of this work is to develop a fast computed tomography (CT) reconstruction algorithm based on graphics processing units (GPU) that achieves significant improvement over traditional central processing unit (CPU) based implementations. The main challenge in developing a CT algorithm that is capable of handling very large datasets is parallelizing the algorithm in such a way that data transfer does not hinder performance of the reconstruction algorithm. General Purpose Graphics Processing (GPGPU) is a new technology that the Science and Technology (S&T) community is starting to adopt in many fields where CPU-based computing is the norm. GPGPU programming requires a new approach to algorithm development that utilizes massively multi-threaded environments. Multi-threaded algorithms in general are difficult to optimize since performance bottlenecks occur that are non-existent in single-threaded algorithms, such as memory latencies. If an efficient GPU-based CT reconstruction algorithm can be developed, computational times could be improved by a factor of 20. Additionally, cost benefits will be realized as commodity graphics hardware could potentially replace expensive supercomputers and high-end workstations. This project will take advantage of the CUDA programming environment and attempt to parallelize the task in such a way that multiple slices of the reconstruction volume are computed simultaneously. This work will also take advantage of the GPU memory by utilizing asynchronous memory transfers, GPU texture memory, and (when possible) pinned host memory so that the memory transfer bottleneck inherent to GPGPU is amortized. Additionally, this work will take advantage of GPU-specific hardware (i.e., fast texture memory, pixel pipelines, hardware interpolators, and varying memory hierarchy) that will allow for additional performance improvements.
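
    The transfer/compute overlap the abstract describes can be pictured as a double-buffered pipeline; the sketch below is illustrative only, with Python threads and a bounded queue standing in for CUDA streams and pinned host buffers (all names hypothetical):

        # Sketch of overlapping data transfer with per-slice reconstruction.
        # Illustrative only; the project targets CUDA streams, not threads.
        import threading, queue
        import numpy as np

        slices_in = queue.Queue(maxsize=2)   # bounded queue = double buffering

        def transfer(n_slices):
            """Stand-in for host-to-device copies of projection data."""
            for i in range(n_slices):
                slices_in.put((i, np.random.rand(256, 256)))
            slices_in.put(None)              # sentinel: no more slices

        def reconstruct():
            """Stand-in for the per-slice GPU reconstruction kernel."""
            while (item := slices_in.get()) is not None:
                idx, data = item
                _ = np.fft.ifft2(np.fft.fft2(data)).real   # dummy compute
            print("done")

        t = threading.Thread(target=transfer, args=(16,))
        c = threading.Thread(target=reconstruct)
        t.start(); c.start(); t.join(); c.join()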

  4. Large-scale Nanostructure Simulations from X-ray Scattering Data On Graphics Processor Clusters

    SciTech Connect (OSTI)

    Sarje, Abhinav; Pien, Jack; Li, Xiaoye; Chan, Elaine; Chourou, Slim; Hexemer, Alexander; Scholz, Arthur; Kramer, Edward

    2012-01-15

    X-ray scattering is a valuable tool for measuring the structural properties of materials used in the design and fabrication of energy-relevant nanodevices (e.g., photovoltaic, energy storage, battery, fuel, and carbon capture and sequestration devices) that are key to the reduction of carbon emissions. Although today's ultra-fast X-ray scattering detectors can provide tremendous information on the structural properties of materials, a primary challenge remains in the analyses of the resulting data. We are developing novel high-performance computing algorithms, codes, and software tools for the analyses of X-ray scattering data. In this paper we describe two such HPC algorithm advances. Firstly, we have implemented a flexible and highly efficient Grazing Incidence Small Angle Scattering (GISAXS) simulation code based on the Distorted Wave Born Approximation (DWBA) theory with C++/CUDA/MPI on a cluster of GPUs. Our code can compute the scattered light intensity from any given sample in all directions of space, thus allowing full construction of the GISAXS pattern. Preliminary tests on a single GPU show speedups of over 125x compared to the sequential code, and almost linear speedup when executing across a GPU cluster with 42 nodes, resulting in an additional 40x speedup compared to using one GPU node. Secondly, for the structural fitting problems in inverse modeling, we have implemented a Reverse Monte Carlo simulation algorithm with C++/CUDA using one GPU. Since there are large numbers of parameters for fitting in the X-ray scattering simulation model, the earlier single-CPU code required weeks of runtime. Deploying the AccelerEyes Jacket/Matlab wrapper to use the GPU gave around 100x speedup over the pure CPU code. Our further C++/CUDA optimization delivered an additional 9x speedup.
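
    The Reverse Monte Carlo fitting mentioned above is, at its core, a Metropolis loop over model parameters; here is a minimal sketch with a toy forward model (the paper's actual scattering simulation and GPU acceleration are far more involved, and all names here are illustrative):

        # Minimal Reverse Monte Carlo loop for structural fitting (illustrative
        # of the general technique only).
        import numpy as np

        rng = np.random.default_rng(3)
        x = np.linspace(0, 3, 50)
        target = np.sin(x) ** 2              # stand-in "measured" pattern

        def simulate(params):
            """Toy forward model standing in for the scattering simulation."""
            a, b = params
            return a * np.sin(b * x) ** 2

        def chi2(params):
            return np.sum((simulate(params) - target) ** 2)

        params = np.array([0.5, 2.0])
        cost, temperature = chi2(params), 0.1
        for _ in range(20000):
            trial = params + rng.normal(0, 0.02, size=2)   # random perturbation
            trial_cost = chi2(trial)
            # Metropolis rule: accept improvements, sometimes accept worse fits.
            if (trial_cost < cost
                    or rng.random() < np.exp((cost - trial_cost) / temperature)):
                params, cost = trial, trial_cost
        print(params, cost)   # should approach a=1, b=1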

  5. Proteogenomic strategies for identification of aberrant cancer peptides using large-scale Next Generation Sequencing data

    SciTech Connect (OSTI)

    Woo, Sunghee; Cha, Seong Won; Na, Seungjin; Guest, Clark; Liu, Tao; Smith, Richard D.; Rodland, Karin D.; Payne, Samuel H.; Bafna, Vineet

    2014-11-17

    Cancer is driven by the acquisition of somatic DNA lesions. Distinguishing the early driver mutations from subsequent passenger mutations is key to the molecular sub-typing of cancers and the discovery of novel biomarkers. The availability of genomics technologies (mainly whole-genome and exome sequencing, and transcript sampling via RNA-seq, collectively referred to as NGS) has fueled recent studies on somatic mutation discovery. However, the vision is challenged by the complexity, redundancy, and errors in genomic data, and the difficulty of investigating the proteome using only genomic approaches. Recently, combinations of proteomic and genomic technologies have been increasingly employed. However, the complexity and redundancy of NGS data remain a challenge for proteogenomics, and various trade-offs must be made to allow the searches to take place. This paper provides a discussion of two such trade-offs, relating to large database search and FDR calculations, and their implications for cancer proteogenomics. Moreover, it extends and develops the idea of a unified genomic variant database that can be searched by any mass spectrometry sample. A total of 879 BAM files downloaded from the TCGA repository were used to create a 4.34 GB unified FASTA database which contained 2,787,062 novel splice junctions, 38,464 deletions, 1105 insertions, and 182,302 substitutions. Proteomic data from a single ovarian carcinoma sample (439,858 spectra) was searched against the database. By applying the most conservative FDR measure, we identified 524 novel peptides and 65,578 known peptides at a 1% FDR threshold. The novel peptides include interesting examples of doubly mutated peptides, frame-shifts, and non-sample-recruited mutations, which emphasize the strength of our approach.
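
    The "1% FDR threshold" above is conventionally obtained by a target-decoy style computation; a hedged sketch of one common variant follows (synthetic scores; the paper's "most conservative FDR measure" may differ in detail):

        # Sketch of target-decoy FDR thresholding as commonly used in
        # proteomics (illustrative; not necessarily the paper's estimator).
        import numpy as np

        def fdr_threshold(target_scores, decoy_scores, fdr=0.01):
            """Smallest cutoff s with (#decoys >= s) / (#targets >= s) <= fdr."""
            cutoffs = np.sort(target_scores)                    # ascending
            n_t = target_scores.size - np.arange(cutoffs.size)  # targets >= cutoff
            n_d = decoy_scores.size - np.searchsorted(np.sort(decoy_scores),
                                                      cutoffs)  # decoys >= cutoff
            ok = (n_d / n_t) <= fdr
            return cutoffs[np.argmax(ok)] if ok.any() else None

        rng = np.random.default_rng(1)
        targets = rng.normal(3.0, 1.0, 5000)  # synthetic PSM scores
        decoys = rng.normal(0.0, 1.0, 5000)   # scores vs. a decoy database
        cut = fdr_threshold(targets, decoys)
        print(f"cutoff={cut:.2f}, {np.sum(targets >= cut)} peptides at 1% FDR")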

  6. Selection of components for the IDEALHY preferred cycle for the large scale liquefaction of hydrogen

    SciTech Connect (OSTI)

    Quack, H.; Seemann, I.; Klaus, M.; Haberstroh, Ch.; Berstad, D.; Walnum, H. T.; Neksa, P.; Decker, L.

    2014-01-29

    In a future energy scenario in which storage and transport of liquid hydrogen in large quantities will be used, the efficiency of hydrogen liquefaction will be of utmost importance. The goal of the IDEALHY working party is to identify the most promising process for a 50 t/d plant and to select the components with which such a process can be realized. In the first stage, the team compared several processes which have been proposed or realized in the past. Based on this information, a process was selected which is thermodynamically most promising and for which it could be assumed that good components already exist or can be developed in the foreseeable future. Main features of the selected process are compression of the feed stream to a relatively high pressure level, ortho-para (o-p) conversion inside plate-fin heat exchangers, and expansion turbines operating in the supercritical region. Precooling to a temperature between 150 and 100 K will be obtained from a mixed-refrigerant cycle similar to the systems used successfully in natural gas liquefaction plants. The final cooling will be produced by two Brayton cycles, both having several expansion turbines in series. The selected overall process still has a number of parameters which can be varied; the final choice will depend mainly on the quality of the available components. Key components are the expansion turbines of the two Brayton cycles and the main recycle compressor, which may be common to both Brayton cycles. A six-stage turbo-compressor with intercooling between the stages is expected to be the optimum choice here. Each stage may consist of several wheels in series. To make such a highly efficient and cost-effective compressor feasible, one has to choose a refrigerant which has a higher molecular weight than helium. The present preferred choice is a mixture of helium and neon with a molecular weight of about 8 kg/kmol. Such an expensive refrigerant requires that the whole refrigeration loop be extremely tight.

  7. Environmental Responses to Carbon Mitigation through Geological Storage

    SciTech Connect (OSTI)

    Cunningham, Alfred; Bromenshenk, Jerry

    2013-08-30

    In summary, this DOE EPSCoR project is contributing to the study of carbon mitigation through geological storage. Both deep and shallow subsurface research needs are being addressed through research directed at improved understanding of environmental responses associated with large-scale injection of CO{sub 2} into geologic formations. The research plan has two interrelated research objectives. Objective 1: Determine the influence of CO{sub 2}-related injection of fluids on pore structure, material properties, and microbial activity in rock cores from potential geological carbon sequestration sites. Objective 2: Determine the effects of CO{sub 2} leakage on shallow subsurface ecosystems (microbial and plant) using field experiments at an outdoor field testing facility.

  8. Large-scale Environmental Variables and Transition to Deep Convection in Cloud Resolving Model Simulations: A Vector Representation

    SciTech Connect (OSTI)

    Hagos, Samson M.; Leung, Lai-Yung R.

    2012-11-01

    Cloud resolving model simulations and vector analysis are used to develop a quantitative method of assessing regional variations in the relationships between various large-scale environmental variables and the transition to deep convection. Results of the CRM simulations from three tropical regions are used to cluster environmental conditions under which transition to deep convection does and does not take place. Projections of the large-scale environmental variables on the difference between these two clusters are used to quantify the roles of these variables in the transition to deep convection. While the transition to deep convection is most sensitive to moisture and vertical velocity perturbations, the details of the profiles of the anomalies vary from region to region. In comparison, the transition to deep convection is found to be much less sensitive to temperature anomalies over all three regions. The vector formulation presented in this study represents a simple general framework for quantifying various aspects of how the transition to deep convection is sensitive to environmental conditions.
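
    The vector formulation described above amounts to projecting an environmental anomaly profile onto the normalized difference between the two cluster means; a minimal sketch with synthetic profiles follows (all names and values illustrative, not the paper's data):

        # Minimal sketch of the vector-projection idea: score how strongly a
        # large-scale environment profile points toward "transitioning"
        # conditions. Synthetic data; names are illustrative.
        import numpy as np

        rng = np.random.default_rng(2)
        n_levels = 20                                   # vertical levels
        transition = rng.normal(1.0, 0.5, (100, n_levels))     # deep convection
        no_transition = rng.normal(0.0, 0.5, (120, n_levels))  # no transition

        # The difference of cluster means defines the discriminating direction.
        d = transition.mean(axis=0) - no_transition.mean(axis=0)
        d /= np.linalg.norm(d)

        def projection_score(profile):
            """Projection of an anomaly profile on the transition direction."""
            return profile @ d

        new_profile = rng.normal(0.5, 0.5, n_levels)
        print(projection_score(new_profile))  # larger -> favors transition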

  9. Impacts of Array Configuration on Land-Use Requirements for Large-Scale Photovoltaic Deployment in the United States: Preprint

    SciTech Connect (OSTI)

    Denholm, P.; Margolis, R. M.

    2008-05-01

    Land use is often cited as an important issue for renewable energy technologies. In this paper we examine the relationship between land-use requirements for large-scale photovoltaic (PV) deployment in the U.S. and PV-array configuration. We estimate the per capita land requirements for solar PV and find that array configuration is a stronger driver of energy density than regional variations in solar insolation. When deployed horizontally, the PV land area needed to meet 100% of an average U.S. citizen's electricity demand is about 100 m2. This requirement roughly doubles to about 200 m2 when using 1-axis tracking arrays. By comparing these total land-use requirements with other current per capita land uses, we find that the land-use requirements of solar photovoltaics are modest, especially when considering the availability of zero-impact 'land' on rooftops. Additional work is needed to examine the tradeoffs between array spacing, self-shading losses, and land use, along with possible techniques to mitigate land-use impacts of large-scale PV deployment.
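
    As a back-of-envelope check of the ~100 m2 figure, the sketch below recomputes the per capita area from assumed round numbers (none taken from the paper):

        # Back-of-envelope check of the ~100 m2 per-capita figure. All inputs
        # are assumed round numbers, not from the paper: adjust to taste.
        per_capita_demand_kwh = 12_000  # assumed U.S. per-capita use, kWh/yr
        insolation_kwh_m2 = 1_700       # assumed horizontal insolation, kWh/m2/yr
        system_efficiency = 0.10        # assumed module efficiency x derate

        yield_kwh_m2 = insolation_kwh_m2 * system_efficiency
        area_m2 = per_capita_demand_kwh / yield_kwh_m2
        print(f"{area_m2:.0f} m2 per person (horizontal)")

    With these inputs the estimate lands near 70 m2, the same order of magnitude as the paper's ~100 m2; the exact value hinges on the assumed system efficiency and derating.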

  10. Global Warming in Geologic Time

    SciTech Connect (OSTI)

    Archer, David

    2008-02-27

    The notion is pervasive in the climate science community and in the public at large that the climate impacts of fossil fuel CO2 release will only persist for a few centuries. This conclusion has no basis in theory or models of the atmosphere/ocean carbon cycle, which we review here. The largest fraction of the CO2 recovery will take place on time scales of centuries, as CO2 invades the ocean, but a significant fraction of the fossil fuel CO2, ranging in published models in the literature from 20-60%, remains airborne for a thousand years or longer. Ultimate recovery takes place on time scales of hundreds of thousands of years, a geologic longevity typically associated in public perceptions with nuclear waste. The glacial/interglacial climate cycles demonstrate that ice sheets and sea level respond dramatically to millennial-timescale changes in climate forcing. There are also potential positive feedbacks in the carbon cycle, including methane hydrates in the ocean, and peat frozen in permafrost, that are most sensitive to the long tail of the fossil fuel CO2 in the atmosphere.

  11. Global Warming in Geologic Time

    SciTech Connect (OSTI)

    David Archer

    2008-02-27

    The notion is pervasive in the climate science community and in the public at large that the climate impacts of fossil fuel CO2 release will only persist for a few centuries. This conclusion has no basis in theory or models of the atmosphere / ocean carbon cycle, which we review here. The largest fraction of the CO2 recovery will take place on time scales of centuries, as CO2 invades the ocean, but a significant fraction of the fossil fuel CO2, ranging in published models in the literature from 20-60%, remains airborne for a thousand years or longer. Ultimate recovery takes place on time scales of hundreds of thousands of years, a geologic longevity typically associated in public perceptions with nuclear waste. The glacial / interglacial climate cycles demonstrate that ice sheets and sea level respond dramatically to millennial-timescale changes in climate forcing. There are also potential positive feedbacks in the carbon cycle, including methane hydrates in the ocean, and peat frozen in permafrost, that are most sensitive to the long tail of the fossil fuel CO2 in the atmosphere.

  12. Global Warming in Geologic Time

    ScienceCinema (OSTI)

    David Archer

    2010-01-08

    The notion is pervasive in the climate science community and in the public at large that the climate impacts of fossil fuel CO2 release will only persist for a few centuries. This conclusion has no basis in theory or models of the atmosphere / ocean carbon cycle, which we review here. The largest fraction of the CO2 recovery will take place on time scales of centuries, as CO2 invades the ocean, but a significant fraction of the fossil fuel CO2, ranging in published models in the literature from 20-60%, remains airborne for a thousand years or longer. Ultimate recovery takes place on time scales of hundreds of thousands of years, a geologic longevity typically associated in public perceptions with nuclear waste. The glacial / interglacial climate cycles demonstrate that ice sheets and sea level respond dramatically to millennial-timescale changes in climate forcing. There are also potential positive feedbacks in the carbon cycle, including methane hydrates in the ocean, and peat frozen in permafrost, that are most sensitive to the long tail of the fossil fuel CO2 in the atmosphere.

  13. Analysis of ground response data at Lotung large-scale soil- structure interaction experiment site. Final report

    SciTech Connect (OSTI)

    Chang, C.Y.; Mok, C.M.; Power, M.S.

    1991-12-01

    The Electric Power Research Institute (EPRI), in cooperation with the Taiwan Power Company (TPC), constructed two models (1/4-scale and 1/2-scale) of a nuclear plant containment structure at a site in Lotung (Tang, 1987), a seismically active region in northeast Taiwan. The models were constructed to gather data for the evaluation and validation of soil-structure interaction (SSI) analysis methodologies. Extensive instrumentation was deployed to record both structural and ground responses at the site during earthquakes. The experiment is generally referred to as the Lotung Large-Scale Seismic Test (LSST). As part of the LSST, two downhole arrays were installed at the site to record ground motions at depth as well as at the ground surface. Structural response and ground response have been recorded for a number of earthquakes (a total of 18 earthquakes in the period October 1985 through November 1986) at the LSST site since the completion of the installation of the downhole instruments in October 1985. These data include those from earthquakes having magnitudes ranging from M{sub L} 4.5 to M{sub L} 7.0 and epicentral distances ranging from 4.7 km to 77.7 km. Peak ground surface accelerations range from 0.03 g to 0.21 g for the horizontal component and from 0.01 g to 0.20 g for the vertical component. The objectives of the study were: (1) to obtain empirical data on variations of earthquake ground motion with depth; (2) to examine field evidence of nonlinear soil response due to earthquake shaking and to determine the degree of soil nonlinearity; (3) to assess the ability of ground response analysis techniques, including techniques to approximate nonlinear soil response, to estimate ground motions due to earthquake shaking; and (4) to analyze earth pressures recorded beneath the basemat and on the side wall of the 1/4-scale model structure during selected earthquakes.

  14. Results of Large-Scale Testing on Effects of Anti-Foam Agent on Gas Retention and Release

    SciTech Connect (OSTI)

    Stewart, Charles W.; Guzman-Leong, Consuelo E.; Arm, Stuart T.; Butcher, Mark G.; Golovich, Elizabeth C.; Jagoda, Lynette K.; Park, Walter R.; Slaugh, Ryan W.; Su, Yin-Fong; Wend, Christopher F.; Mahoney, Lenna A.; Alzheimer, James M.; Bailey, Jeffrey A.; Cooley, Scott K.; Hurley, David E.; Johnson, Christian D.; Reid, Larry D.; Smith, Harry D.; Wells, Beric E.; Yokuda, Satoru T.

    2008-01-03

    The U.S. Department of Energy (DOE) Office of River Protection's Waste Treatment Plant (WTP) will process and treat radioactive waste that is stored in tanks at the Hanford Site. The waste treatment process in the pretreatment facility will mix both Newtonian and non-Newtonian slurries in large process tanks. Process vessels mixing non-Newtonian slurries will use pulse jet mixers (PJMs), air sparging, and recirculation pumps. An anti-foam agent (AFA) will be added to the process streams to prevent surface foaming, but may also increase gas holdup and retention within the slurry. The work described in this report addresses gas retention and release in simulants with AFA through testing and analytical studies. Gas holdup and release tests were conducted in a 1/4-scale replica of the lag storage vessel operated in the Pacific Northwest National Laboratory (PNNL) Applied Process Engineering Laboratory using a kaolin/bentonite clay and AZ-101 HLW chemical simulant with non-Newtonian rheological properties representative of actual waste slurries. Additional tests were performed in a small-scale mixing vessel in the PNNL Physical Sciences Building using liquids and slurries representing major components of typical WTP waste streams. Analytical studies were directed at discovering how the effect of AFA might depend on gas composition and predicting the effect of AFA on gas retention and release in the full-scale plant, including the effects of mass transfer to the sparge air. The work at PNNL was part of a larger program that included tests conducted at Savannah River National Laboratory (SRNL), which are being reported separately. SRNL conducted gas holdup tests in a small-scale mixing vessel using the AZ-101 high-level waste (HLW) chemical simulant to investigate the effects of different AFAs, their components, and of adding noble metals. Full-scale, single-sparger mass transfer tests were also conducted at SRNL in water and AZ-101 HLW simulant to provide data for PNNL's WTP gas retention and release modeling.

  15. Large-Scale Utilization of Biomass Energy and Carbon Dioxide Capture and Storage in the Transport and Electricity Sectors under Stringent CO2 Concentration Limit Scenarios

    SciTech Connect (OSTI)

    Luckow, Patrick; Wise, Marshall A.; Dooley, James J.; Kim, Son H.

    2010-08-05

    This paper examines the potential role of large-scale, dedicated commercial biomass energy systems under global climate policies designed to meet atmospheric concentrations of CO2 at 400ppm and 450ppm by the end of the century. We use an integrated assessment model of energy and agriculture systems to show that, given a climate policy in which terrestrial carbon is appropriately valued equally with carbon emitted from the energy system, biomass energy has the potential to be a major component of achieving these low concentration targets. A key aspect of the research presented here is that the costs of processing and transporting biomass energy at much larger scales than current experience are explicitly incorporated into the modeling. From the scenario results, 120-160 EJ/year of biomass energy is produced globally by midcentury and 200-250 EJ/year by the end of this century. In the first half of the century, much of this biomass is from agricultural and forest residues, but after 2050 dedicated cellulosic biomass crops become the majority source, along with growing utilization of waste-to-energy. The ability to draw on a diverse set of biomass-based feedstocks helps to reduce the pressure for drastic large-scale changes in land use and the attendant environmental, ecological, and economic consequences those changes would unleash. In terms of the conversion of bioenergy feedstocks into value-added energy, this paper demonstrates that biomass is and will continue to be used to generate electricity as well as liquid transportation fuels. A particular focus of this paper is to show how climate policies and technology assumptions - especially the availability of carbon dioxide capture and storage (CCS) technologies - affect the decisions made about where the biomass is used in the energy system. The potential for net-negative electric sector emissions through the use of CCS with biomass feedstocks provides an attractive part of the solution for meeting stringent emissions constraints; we find that at carbon prices above $150/tCO2, over 90% of biomass in the energy system is used in combination with CCS. Despite the higher technology costs of CCS, it is a very important tool in controlling the cost of meeting a target, offsetting the venting of CO2 from sectors of the energy system that may be more expensive to mitigate, such as oil use in transportation. CCS is also used heavily with other fuels such as coal and natural gas, and by 2095 a total of 1530 GtCO2 has been stored in deep geologic reservoirs. The paper also discusses the role of cellulosic ethanol and Fischer-Tropsch biomass-derived transportation fuels as two representative conversion processes and shows that both technologies may be important contributors to liquid fuels production, with unique costs and emissions characteristics.

  16. Hydrogen atom temperature measured with wavelength-modulated laser absorption spectroscopy in large scale filament arc negative hydrogen ion source

    SciTech Connect (OSTI)

    Nakano, H.; Goto, M.; Tsumori, K.; Kisaki, M.; Ikeda, K.; Nagaoka, K.; Osakabe, M.; Takeiri, Y.; Kaneko, O.; Nishiyama, S.; Sasaki, K.

    2015-04-08

    The velocity distribution function of hydrogen atoms is one of the useful parameters for understanding particle dynamics from negative hydrogen production to extraction in a negative hydrogen ion source. The hydrogen atom temperature is one indicator of the velocity distribution function. To assess the feasibility of hydrogen atom temperature measurement in a large scale filament arc negative hydrogen ion source for fusion, a model calculation of wavelength-modulated laser absorption spectroscopy of the hydrogen Balmer alpha line was performed. By utilizing a wide-range tunable diode laser, we successfully obtained a hydrogen atom temperature of ~3000 K in the vicinity of the plasma grid electrode. The hydrogen atom temperature increases with the arc power, and decreases with increasing hydrogen gas filling pressure before becoming constant.
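
    As background on the measurement principle (textbook spectroscopy, not quoted from the paper), the atom temperature follows from the Doppler broadening of the absorption line; the Gaussian FWHM relation is

        \Delta\lambda_D = \lambda_0 \sqrt{\frac{8\, k_B T \ln 2}{m c^2}}

    which, at λ0 = 656.3 nm (Balmer alpha) and T ≈ 3000 K for atomic hydrogen, gives Δλ_D ≈ 0.026 nm.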

  17. Scattering of electromagnetic waves by vortex density structures associated with interchange instability: Analytical and large scale plasma simulation results

    SciTech Connect (OSTI)

    Sotnikov, V.; Kim, T.; Lundberg, J.; Paraschiv, I.; Mehlhorn, T. A.

    2014-05-15

    The presence of plasma turbulence can strongly influence propagation properties of electromagnetic signals used for surveillance and communication. In particular, we are interested in the generation of low-frequency plasma density irregularities in the form of coherent vortex structures. Interchange or flute-type density irregularities in magnetized plasma are associated with the Rayleigh-Taylor type instability. These types of density irregularities play an important role in refraction and scattering of high-frequency electromagnetic signals propagating in the Earth's ionosphere, in high energy density physics, and in many other applications. We discuss scattering of high-frequency electromagnetic waves on low-frequency density irregularities due to the presence of vortex density structures associated with the interchange instability. We also present particle-in-cell simulation results of electromagnetic scattering on vortex-type density structures using the large scale plasma code LSP and compare them with analytical results.

  18. The absorption chiller in large scale solar pond cooling design with condenser heat rejection in the upper convecting zone

    SciTech Connect (OSTI)

    Tsilingiris, P.T.

    1992-07-01

    The possibility of using solar ponds as low-cost solar collectors combined with commercial absorption chillers in large-scale solar cooling design is investigated. The analysis is based on the combination of a steady-state solar pond mathematical model with the operational characteristics of a commercial absorption chiller, assuming condenser heat rejection in the upper convecting zone (U.C.Z.). The numerical solution of the nonlinear equations involved leads to results which relate the chiller capacity to pond design and environmental parameters; these are also employed to investigate the optimum pond size for minimum capital cost. The derived cost per cooling kW for a 350 kW chiller ranges from about 300 to 500 $/kW cooling, almost an order of magnitude lower than using a solar collector field of the evacuated-tube type.

  19. Final Report on DOE Project entitled Dynamic Optimized Advanced Scheduling of Bandwidth Demands for Large-Scale Science Applications

    SciTech Connect (OSTI)

    Ramamurthy, Byravamurthy

    2014-05-05

    In this project, we developed scheduling frameworks and algorithms for dynamic bandwidth demands of large-scale science applications. Apart from theoretical approaches such as Integer Linear Programming, Tabu Search, and Genetic Algorithm heuristics, we utilized practical data from the ESnet OSCARS project (from our DOE lab partners) to conduct realistic simulations of our approaches. We have disseminated our work through conference paper presentations, journal papers, and a book chapter. In this project we addressed the problem of scheduling lightpaths over optical wavelength division multiplexed (WDM) networks, publishing several conference and journal papers on this topic. We also addressed the problem of joint allocation of computing, storage, and networking resources in Grid/Cloud networks and proposed energy-efficient mechanisms for operating optical WDM networks.

  20. DISCOVERY OF A LARGE NUMBER OF CANDIDATE PROTOCLUSTERS TRACED BY ~15 Mpc-SCALE GALAXY OVERDENSITIES IN COSMOS

    SciTech Connect (OSTI)

    Chiang, Yi-Kuan; Gebhardt, Karl; Overzier, Roderik

    2014-02-10

    To demonstrate the feasibility of studying the epoch of massive galaxy cluster formation in a more systematic manner using current and future galaxy surveys, we report the discovery of a large sample of protocluster candidates in the 1.62 deg{sup 2} COSMOS/UltraVISTA field traced by optical/infrared selected galaxies using photometric redshifts. By comparing properly smoothed three-dimensional galaxy density maps of the observations and a set of matched simulations incorporating the dominant observational effects (galaxy selection and photometric redshift uncertainties), we first confirm that the observed ~15 comoving Mpc-scale galaxy clustering is consistent with ΛCDM models. Using further the relation between high-z overdensity and present-day cluster mass calibrated in these matched simulations, we found 36 candidate structures at 1.6 < z < 3.1, showing overdensities consistent with the progenitors of M{sub z=0} ~ 10{sup 15} M{sub ⊙} clusters. Taking into account the significant upward scattering of lower mass structures, the probabilities for the candidates to have at least M{sub z=0} ~ 10{sup 14} M{sub ⊙} are ~70%. For each structure, about 15%-40% of photometric galaxy candidates are expected to be true protocluster members that will merge into a cluster-scale halo by z = 0. With solely photometric redshifts, we successfully rediscover two spectroscopically confirmed structures in this field, suggesting that our algorithm is robust. This work generates a large sample of uniformly selected protocluster candidates, providing rich targets for spectroscopic follow-up and subsequent studies of cluster formation. Meanwhile, it demonstrates the potential for probing early cluster formation with upcoming redshift surveys such as the Hobby-Eberly Telescope Dark Energy Experiment and the Subaru Prime Focus Spectrograph survey.

  1. Adsorption and diffusion of Ru adatoms on Ru(0001)-supported graphene: Large-scale first-principles calculations

    SciTech Connect (OSTI)

    Han, Yong; Evans, James W.

    2015-10-27

    Large-scale first-principles density functional theory calculations are performed to investigate the adsorption and diffusion of Ru adatoms on monolayer graphene (G) supported on Ru(0001). The G sheet exhibits a periodic moiré-cell superstructure due to lattice mismatch. Within a moiré cell, there are three distinct regions: fcc, hcp, and mound, in which the C6-ring center is above a fcc site, a hcp site, and a surface Ru atom of Ru(0001), respectively. The adsorption energy of a Ru adatom is evaluated at specific sites in these distinct regions. We find the strongest binding at an adsorption site above a C atom in the fcc region, next strongest in the hcp region, then the fcc-hcp boundary (ridge) between these regions, and the weakest binding in the mound region. Behavior is similar to that observed from small-unit-cell calculations of Habenicht et al. [Top. Catal. 57, 69 (2014)], which differ from previous large-scale calculations. We determine the minimum-energy path for local diffusion near the center of the fcc region and obtain a local diffusion barrier of ~0.48 eV. We also estimate a significantly lower local diffusion barrier in the ridge region. These barriers and information on the adsorption energy variation facilitate development of a realistic model for the global potential energy surface for Ru adatoms. Furthermore, this in turn enables simulation studies elucidating diffusion-mediated directed-assembly of Ru nanoclusters during deposition of Ru on G/Ru(0001).
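    For orientation, a barrier like the reported ~0.48 eV translates into hop rates via the usual Arrhenius form; the sketch below does this conversion, with the attempt frequency and hop length as standard-order assumptions rather than values from the paper.

    ```python
    # Arrhenius hop rate and a 2D tracer diffusion estimate from a barrier.
    # Attempt frequency and hop length are standard-order assumptions.
    import math

    K_B = 8.617333262e-5        # Boltzmann constant, eV/K

    def hop_rate(barrier_ev, temperature_k, attempt_hz=1.0e13):
        """Arrhenius rate: nu * exp(-Ea / kB T)."""
        return attempt_hz * math.exp(-barrier_ev / (K_B * temperature_k))

    def diffusion_coefficient(barrier_ev, temperature_k, hop_length_m=2.7e-10):
        """2D random-walk estimate: D = h * a^2 / 4."""
        return hop_rate(barrier_ev, temperature_k) * hop_length_m**2 / 4.0

    for T in (200, 300, 500):
        print(T, f"{hop_rate(0.48, T):.3e} hops/s",
              f"{diffusion_coefficient(0.48, T):.3e} m^2/s")
    ```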

  2. LARGE-SCALE PERIODIC VARIABILITY OF THE WIND OF THE WOLF-RAYET STAR WR 1 (HD 4004)

    SciTech Connect (OSTI)

    Chené, A.-N.

    2010-06-20

    We present the results of an intensive photometric and spectroscopic monitoring campaign of the WN4 Wolf-Rayet (WR) star WR 1 = HD 4004. Our broadband V photometry covering a timespan of 91 days shows variability with a period of P = 16.9{sup +0.6}{sub -0.3} days. The same period is also found in our spectral data. The light curve is non-sinusoidal with hints of a gradual change in its shape as a function of time. The photometric variations nevertheless remain coherent over several cycles and we estimate that the coherence timescale of the light curve is of the order of 60 days. The spectroscopy shows large-scale line-profile variability which can be interpreted as excess emission peaks moving from one side of the profile to the other on a timescale of several days. Although we cannot unequivocally exclude the unlikely possibility that WR 1 is a binary, we propose that the nature of the variability we have found strongly suggests that it is due to the presence in the wind of the WR star of large-scale structures, most likely corotating interaction regions (CIRs), which are predicted to arise in inherently unstable radiatively driven winds when they are perturbed at their base. We also suggest that variability observed in WR 6, WR 134, and WR 137 is of the same nature. Finally, assuming that the period of CIRs is related to the rotational period, we estimate the rotation rate of the four stars for which sufficient monitoring has been carried out, i.e., v{sub rot} = 6.5, 40, 70, and 275 km s{sup -1} for WR 1, WR 6, WR 134, and WR 137, respectively.
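    A period like the reported P = 16.9 d is typically recovered from unevenly sampled photometry with a Lomb-Scargle periodogram; the sketch below illustrates the procedure on synthetic data with the same campaign length (the light-curve amplitude and noise level are illustrative assumptions).

    ```python
    # Lomb-Scargle period search on synthetic, unevenly sampled photometry.
    import numpy as np
    from scipy.signal import lombscargle

    rng = np.random.default_rng(1)
    t = np.sort(rng.uniform(0.0, 91.0, 120))     # 91-day campaign, uneven
    true_period = 16.9
    y = 0.05 * np.sin(2 * np.pi * t / true_period) \
        + rng.normal(0.0, 0.01, t.size)
    y -= y.mean()                                # lombscargle wants zero mean

    periods = np.linspace(2.0, 40.0, 4000)       # trial periods in days
    ang_freqs = 2 * np.pi / periods
    power = lombscargle(t, y, ang_freqs)

    print(f"best period = {periods[np.argmax(power)]:.2f} d")
    ```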

  3. Adsorption and diffusion of Ru adatoms on Ru(0001)-supported graphene: Large-scale first-principles calculations

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Han, Yong; Evans, James W.

    2015-10-27

    Large-scale first-principles density functional theory calculations are performed to investigate the adsorption and diffusion of Ru adatoms on monolayer graphene (G) supported on Ru(0001). The G sheet exhibits a periodic moiré-cell superstructure due to lattice mismatch. Within a moiré cell, there are three distinct regions: fcc, hcp, and mound, in which the C6-ring center is above a fcc site, a hcp site, and a surface Ru atom of Ru(0001), respectively. The adsorption energy of a Ru adatom is evaluated at specific sites in these distinct regions. We find the strongest binding at an adsorption site above a C atom in the fcc region, next strongest in the hcp region, then the fcc-hcp boundary (ridge) between these regions, and the weakest binding in the mound region. Behavior is similar to that observed from small-unit-cell calculations of Habenicht et al. [Top. Catal. 57, 69 (2014)], which differ from previous large-scale calculations. We determine the minimum-energy path for local diffusion near the center of the fcc region and obtain a local diffusion barrier of ~0.48 eV. We also estimate a significantly lower local diffusion barrier in the ridge region. These barriers and information on the adsorption energy variation facilitate development of a realistic model for the global potential energy surface for Ru adatoms. Furthermore, this in turn enables simulation studies elucidating diffusion-mediated directed-assembly of Ru nanoclusters during deposition of Ru on G/Ru(0001).

  4. HIGH-TEMPERATURE ELECTROLYSIS FOR LARGE-SCALE HYDROGEN AND SYNGAS PRODUCTION FROM NUCLEAR ENERGY SYSTEM SIMULATION AND ECONOMICS

    SciTech Connect (OSTI)

    J. E. O'Brien; M. G. McKellar; E. A. Harvego; C. M. Stoots

    2009-05-01

    A research and development program is under way at the Idaho National Laboratory (INL) to assess the technological and scale-up issues associated with the implementation of solid-oxide electrolysis cell technology for efficient high-temperature hydrogen production from steam. This work is supported by the US Department of Energy, Office of Nuclear Energy, under the Nuclear Hydrogen Initiative. This paper will provide an overview of large-scale system modeling results and economic analyses that have been completed to date. System analysis results have been obtained using the commercial code UniSim, augmented with a custom high-temperature electrolyzer module. Economic analysis results were based on the DOE H2A analysis methodology. The process flow diagrams for the system simulations include an advanced nuclear reactor as a source of high-temperature process heat, a power cycle and a coupled steam electrolysis loop. Several reactor types and power cycles have been considered, over a range of reactor outlet temperatures. Pure steam electrolysis for hydrogen production as well as coelectrolysis for syngas production from steam/carbon dioxide mixtures have both been considered. In addition, the feasibility of coupling the high-temperature electrolysis process to biomass and coal-based synthetic fuels production has been considered. These simulations demonstrate that the addition of supplementary nuclear hydrogen to synthetic fuels production from any carbon source minimizes emissions of carbon dioxide during the production process.

  5. Using Solr/Lucene for Large-Scale Metagenomics Data Retrieval and Analysis (MICW - Metagenomics Informatics Challenges Workshop: 10K Genomes at a Time)

    ScienceCinema (OSTI)

    Goll, Johannes [JCVI]

    2013-01-22

    JCVI's Johannes Goll on "Using Solr/Lucene for Large-Scale Metagenomics Data Retrieval and Analysis" at the Metagenomics Informatics Challenges Workshop held at the DOE JGI on October 12-13, 2011.

  6. Scales

    ScienceCinema (OSTI)

    Murray Gibson

    2010-01-08

    Musical scales involve notes that, sounded simultaneously (chords), sound good together. The result is the left brain meeting the right brain: a Pythagorean interval of overlapping notes. This synergy would suggest less difference between the workings of the right brain and the left brain than common wisdom would dictate. The pleasing sound of harmony comes when two notes share a common harmonic, meaning that their frequencies are in simple integer ratios, such as 3/2 (G/C) or 5/4 (E/C).
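    The shared-harmonic statement can be made concrete in a few lines: for a frequency ratio p/q, harmonic q of the upper note coincides with harmonic p of the lower note. A tiny sketch:

    ```python
    # Which harmonics coincide for two notes in a simple integer ratio?
    from fractions import Fraction

    def first_shared_harmonic(ratio, max_harmonic=16):
        """For f2/f1 = p/q, harmonic q of note 2 equals harmonic p of note 1."""
        r = Fraction(ratio).limit_denominator(max_harmonic)
        return r.denominator, r.numerator   # (harmonic of f2, harmonic of f1)

    for name, ratio in [("G/C (fifth)", Fraction(3, 2)),
                        ("E/C (major third)", Fraction(5, 4))]:
        h2, h1 = first_shared_harmonic(ratio)
        print(f"{name}: harmonic {h2} of the upper note = harmonic {h1} of C")
    ```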

  7. Recent developments in large-scale finite-element Lagrangian hydrocode technology. [DYNA2D/DYNA3D computer codes]

    SciTech Connect (OSTI)

    Goudreau, G.L.; Hallquist, J.O.

    1981-10-01

    The state of Lagrangian hydrocodes for computing the large-deformation dynamic response of inelastic continua is reviewed in the context of engineering computation at the Lawrence Livermore National Laboratory, USA, and the DYNA2D/DYNA3D finite element codes. The emphasis is on efficiency and computational cost, favoring the simplest elements with explicit time integration: the two-dimensional four-node quadrilateral and the three-dimensional hexahedron with one-point quadrature are advocated as superior to other, more expensive choices. Important auxiliary capabilities are a cheap but effective hourglass control, slidelines/planes with void opening/closure, and rezoning. Strain measures and material formulation are treated as a homogeneous stress-point problem, and a flexible material subroutine interface admits both incremental and total strain formulations, dependent on internal energy or an arbitrary set of other internal variables. Vectorization on Class VI computers such as the CRAY-1 is a simple exercise for optimally organized primitive element formulations. Some examples of large-scale computation are illustrated, including continuous-tone graphic representation.

  8. Developing Renewable Energy Projects Larger Than 10 MWs at Federal Facilities (Book), Large-Scale Renewable Energy Guide, Federal Energy Management Program (FEMP)

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    LARGE-SCALE RENEWABLE ENERGY GUIDE: Developing Renewable Energy Projects Larger Than 10 MWs at Federal Facilities. A Practical Guide to Getting Large-Scale Renewable Energy Projects Financed with Private Capital. Cover photos, clockwise from the top: Installing mirrored parabolic trough collectors (January 19, 2012); crews work around the clock installing mirrored parabolic trough collectors, built on site, that will cover 3 square miles at Abengoa's Solana Plant. Solana, a 280 megawatt utility

  9. Large Scale DD Simulation Results for Crystal Plasticity Parameters in Fe-Cr And Fe-Ni Systems

    SciTech Connect (OSTI)

    Zbib, Hussein M.; Li, Dongsheng; Sun, Xin; Khaleel, Mohammad A.

    2012-04-30

    The development of a viable nuclear energy source depends on ensuring structural materials integrity. Structural materials in nuclear reactors will operate in harsh radiation conditions coupled with high levels of hydrogen and helium production, as well as the formation of a high density of point defects and defect clusters, and thus will experience severe degradation of mechanical properties. Therefore, the main objective of this work is to develop a capability that predicts the aging behavior and in-service lifetime of nuclear reactor components and thus provides an instrumental tool for tailoring materials design and development for application in future nuclear reactor technologies. Towards this end goal, the long-term effort is to develop a physically based multiscale modeling hierarchy, validated and verified, to address outstanding questions regarding the effects of irradiation on materials microstructure and mechanical properties during extended service in fission and fusion environments. The focus of the current investigation is on modern steels for use in nuclear reactors, including high-strength ferritic-martensitic steels (Fe-Cr-Ni alloys). The effort is to develop a predictive capability for the influence of irradiation on mechanical behavior. Irradiation hardening is related to structural information crossing different length scales, such as composition, dislocation, and crystal orientation distribution. To predict effective hardening, the influence factors along different length scales should be considered. Therefore, a hierarchical upscaling methodology is implemented in this work in which relevant information is passed between models at three scales, namely, from molecular dynamics to dislocation dynamics to dislocation-based crystal plasticity. Molecular dynamics (MD) was used to predict the dislocation mobility in body-centered cubic (bcc) Fe and its Ni and Cr alloys. The results are then passed on to dislocation dynamics to predict the critical resolved shear stress (CRSS) from the evolution of local dislocations and defects. In this report the focus is on the results obtained from large-scale dislocation dynamics simulations. The effects of defect density and material structure were investigated, and evolution laws were obtained. These results will form the basis for the development of evolution and hardening laws for a dislocation-based crystal plasticity framework. The hierarchical upscaling method being developed in this project can provide a guidance tool to evaluate the performance of structural materials for next-generation nuclear reactors. Combined with other tools developed in the Nuclear Energy Advanced Modeling and Simulation (NEAMS) program, the models developed will have more impact in improving the reliability of current reactors and the affordability of new reactors.

  10. Regional Geologic Map

    DOE Data Explorer [Office of Scientific and Technical Information (OSTI)]

    Lane, Michael

    2013-06-28

    Shaded relief base with Hot Pot project area, generalized geology, selected mines, and major topographic features

  11. Regional Geologic Map

    DOE Data Explorer [Office of Scientific and Technical Information (OSTI)]

    Lane, Michael

    Shaded relief base with Hot Pot project area, generalized geology, selected mines, and major topographic features

  12. A methodology for understanding the impacts of large-scale penetration of micro-combined heat and power

    SciTech Connect (OSTI)

    Tapia-Ahumada, K.; Pérez-Arriaga, I. J.; Moniz, Ernest J.

    2013-10-01

    Co-generation at small kW-e scale has been stimulated in recent years by governments and energy regulators as one way of increasing energy efficiency and reducing CO2 emissions. If widespread adoption is realized, understanding the effects from a system's point of view is crucial to assessing the contributions of this technology. Based on a methodology that uses long-term capacity expansion planning, this paper explores some of the implications for an electric power system of having a large number of micro-CHPs. Results show that fuel-cell-based micro-CHPs have the best and most consistent performance for different residential demands, from both the customer's and the system's perspectives. As penetration increases to significant levels, gas-based technologies - particularly combined-cycle units - are displaced in capacity and production, which impacts the operation of the electric system during summer peak hours. Other results suggest that the tariff design impacts the economic efficiency of the system and the operation of micro-CHPs under a price-based strategy. Finally, policies aimed at micro-CHPs should consider the suitability of the technology (in size and heat-to-power ratio) to meet individual demands, the operational complexities of a large penetration, and the adequacy of the economic signals to incentivize efficient and sustainable operation. Highlights: Capacity displacements and daily operation of an electric power system are explored; Benefits depend on energy mix, prices, and micro-CHP technology and control scheme; Benefits are observed mostly in winter when micro-CHP heat and power are fully used; Micro-CHPs mostly displace installed capacity from natural gas combined cycle units; and, Tariff design impacts economic efficiency of the system and operation of micro-CHPs.

  13. SDSS-III Baryon Oscillation Spectroscopic Survey data release 12: Galaxy target selection and large-scale structure catalogues

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Reid, Beth; Ho, Shirley; Padmanabhan, Nikhil; Percival, Will J.; Tinker, Jeremy; Tojeiro, Rita; White, Martin; Eisenstein, Daniel J.; Maraston, Claudia; Ross, Ashley J.; et al

    2015-11-17

    The Baryon Oscillation Spectroscopic Survey (BOSS), part of the Sloan Digital Sky Survey (SDSS) III project, has provided the largest survey of galaxy redshifts available to date, in terms of both the number of galaxy redshifts measured by a single survey, and the effective cosmological volume covered. Key to analysing the clustering of these data to provide cosmological measurements is understanding the detailed properties of this sample. Potential issues include variations in the target catalogue caused by changes either in the targeting algorithm or properties of the data used, the pattern of spectroscopic observations, the spatial distribution of targets for which redshifts were not obtained, and variations in the target sky density due to observational systematics. We document here the target selection algorithms used to create the galaxy samples that comprise BOSS. We also present the algorithms used to create large-scale structure catalogues for the final Data Release (DR12) samples and the associated random catalogues that quantify the survey mask. The algorithms are an evolution of those used by the BOSS team to construct catalogues from earlier data, and have been designed to accurately quantify the galaxy sample. Furthermore, the code used, designated mksample, is released with this paper.

  14. Data Analysis, Pre-Ignition Assessment, and Post-Ignition Modeling of the Large-Scale Annular Cookoff Tests

    SciTech Connect (OSTI)

    G. Terrones; F.J. Souto; R.F. Shea; M.W. Burkett; E.S. Idar

    2005-09-30

    In order to understand the implications that cookoff of the plastic-bonded explosive PBX 9501 could have on safety assessments, we analyzed the available data from the large-scale annular cookoff (LSAC) assembly series of experiments. In addition, we examined recent data regarding hypotheses about pre-ignition that may be relevant to post-ignition behavior. Based on the post-ignition data from Shot 6, which had the most complete set of data, we developed an approximate equation of state (EOS) for the gaseous products of deflagration. Implementation of this EOS into the multimaterial hydrodynamics computer program PAGOSA yielded good agreement with the inner-liner collapse sequence for Shot 6 and with other data, such as those from the velocity interferometer system for any reflector (VISAR) and resistance wires. A metric to establish the degree of symmetry based on the concept of time of arrival to pin locations was used to compare numerical simulations with experimental data. Several simulations were performed to elucidate the mode of ignition in the LSAC and to determine the possible compression levels that the metal assembly could have been subjected to during post-ignition.

  15. Large-scale purification and crystallization of the endoribonuclease XendoU: troubleshooting with His-tagged proteins

    SciTech Connect (OSTI)

    Renzi, Fabiana; Panetta, Gianna; Vallone, Beatrice; Brunori, Maurizio; Arceci, Massimo; Bozzoni, Irene; Laneve, Pietro; Caffarelli, Elisa

    2006-03-01

    Recombinant His-tagged XendoU, a eukaryotic endoribonuclease, appeared to aggregate in the presence of divalent cations. Monodisperse protein which yielded crystals diffracting to 2.2 Å was obtained by addition of EDTA. XendoU is the first endoribonuclease described in higher eukaryotes as being involved in the endonucleolytic processing of intron-encoded small nucleolar RNAs. It is conserved among eukaryotes and its viral homologue is essential in SARS replication and transcription. The large-scale purification and crystallization of recombinant XendoU are reported. The tendency of the recombinant enzyme to aggregate could be reversed upon the addition of chelating agents (EDTA, imidazole): aggregation is a potential drawback when purifying and crystallizing His-tagged proteins, which are widely used, especially in high-throughput structural studies. Purified monodisperse XendoU crystallized in two different space groups: trigonal P3{sub 1}21, diffracting to low resolution, and monoclinic C2, diffracting to higher resolution.

  16. Large Scale Duty Cycle (LSDC) Project: Tractive Energy Analysis Methodology and Results from Long-Haul Truck Drive Cycle Evaluations

    SciTech Connect (OSTI)

    LaClair, Tim J

    2011-05-01

    This report addresses the approach that will be used in the Large Scale Duty Cycle (LSDC) project to evaluate the fuel savings potential of various truck efficiency technologies. The methods and equations used for performing the tractive energy evaluations are presented and the calculation approach is described. Several representative results for individual duty cycle segments are presented to demonstrate the approach and the significance of this analysis for the project. The report is divided into four sections, including an initial brief overview of the LSDC project and its current status. In the second section of the report, the concepts that form the basis of the analysis are presented through a discussion of basic principles pertaining to tractive energy and the role of tractive energy in relation to other losses on the vehicle. In the third section, the approach used for the analysis is formalized and the equations used in the analysis are presented. In the fourth section, results from the analysis for a set of individual duty cycle measurements are presented and different types of drive cycles are discussed relative to the fuel savings potential that specific technologies could bring if these drive cycles were representative of the use of a given vehicle or trucking application. Additionally, the calculation of vehicle mass from measured torque and speed data is presented and the accuracy of the approach is demonstrated.
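    A simplified version of the tractive-energy calculation described above integrates the positive tractive power (inertia, rolling resistance, aerodynamic drag, grade) over a measured speed trace. The vehicle parameters below are illustrative assumptions, not the project's values.

    ```python
    # Sketch of a tractive-energy evaluation over a drive-cycle segment.
    import numpy as np

    G = 9.81           # gravitational acceleration, m/s^2
    RHO = 1.2          # air density, kg/m^3

    def tractive_energy_joules(t, v, mass=36000.0, crr=0.006, cd_a=6.0,
                               grade=0.0):
        """Positive tractive energy for speed trace v(t) in m/s."""
        a = np.gradient(v, t)
        force = (mass * a                       # inertia
                 + crr * mass * G               # rolling resistance
                 + 0.5 * RHO * cd_a * v**2      # aerodynamic drag
                 + mass * G * grade)            # road grade (small-angle)
        power = force * v
        positive = np.clip(power, 0.0, None)    # braking is not tractive work
        dt = np.diff(t)                         # trapezoidal integration
        return float(np.sum(0.5 * (positive[1:] + positive[:-1]) * dt))

    t = np.arange(0.0, 600.0, 1.0)
    v = np.clip(25.0 * np.sin(t / 120.0) + 20.0, 0.0, None)  # toy speed trace
    print(f"{tractive_energy_joules(t, v) / 3.6e6:.1f} kWh over the segment")
    ```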

  17. Using an Energy Performance Based Design-Build Process to Procure a Large Scale Low-Energy Building: Preprint

    SciTech Connect (OSTI)

    Pless, S.; Torcellini, P.; Shelton, D.

    2011-05-01

    This paper will review the procurement, acquisition, and contract process of a large-scale, replicable net zero energy building (ZEB) office project. The owners developed and implemented an energy-performance-based design-build process to procure a 220,000 ft{sup 2} office building with contractual requirements to meet demand-side energy and LEED goals. We will outline the key procurement steps needed to ensure achievement of our energy efficiency and ZEB goals. The development of a clear and comprehensive Request for Proposals (RFP) that includes specific and measurable energy use intensity goals is critical to ensure energy goals are met in a cost-effective manner. The RFP includes a contractual requirement to meet an absolute demand-side energy use requirement of 25 kBtu/ft{sup 2}, with specific calculation methods on what loads are included, how to normalize the energy goal based on increased space efficiency and data center allocation, specific plug loads and schedules, and calculation details on how to account for energy used from the campus hot and chilled water supply. Additional advantages of integrating energy requirements into this procurement process include leveraging the voluntary incentive program, which is a financial incentive based on how well the owner feels the design-build team is meeting the RFP goals.

  18. Intercomparison of methods of coupling between convection and large-scale circulation. 1. Comparison over uniform surface conditions

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Daleu, C. L.; Plant, R. S.; Woolnough, S. J.; Sessions, S.; Herman, M. J.; Sobel, A.; Wang, S.; Kim, D.; Cheng, A.; Bellon, G.; et al

    2015-10-24

    Here, as part of an international intercomparison project, a set of single-column models (SCMs) and cloud-resolving models (CRMs) are run under the weak-temperature gradient (WTG) method and the damped gravity wave (DGW) method. For each model, the implementation of the WTG or DGW method involves a simulated column which is coupled to a reference state defined with profiles obtained from the same model in radiative-convective equilibrium. The simulated column has the same surface conditions as the reference state and is initialized with profiles from the reference state. We performed systematic comparison of the behavior of different models under a consistent implementation of the WTG method and the DGW method, and systematic comparison of the WTG and DGW methods in models with different physics and numerics. CRMs and SCMs produce a variety of behaviors under both WTG and DGW methods. Some of the models reproduce the reference state while others sustain a large-scale circulation which results in either substantially lower or higher precipitation compared to the value of the reference state. CRMs show a fairly linear relationship between precipitation and circulation strength. SCMs display a wider range of behaviors than CRMs. Some SCMs under the WTG method produce zero precipitation. Within an individual SCM, a DGW simulation and a corresponding WTG simulation can produce circulations of different sign. When initialized with a dry troposphere, DGW simulations always result in a precipitating equilibrium state. The greatest sensitivities to the initial moisture conditions occur for multiple stable equilibria in some WTG simulations, corresponding to either a dry equilibrium state when initialized as dry or a precipitating equilibrium state when initialized as moist. Multiple equilibria are seen in more WTG simulations for higher SST. In some models, the existence of multiple equilibria is sensitive to some parameters in the WTG calculations.

  19. SIMULTANEOUS OBSERVATIONS OF A LARGE-SCALE WAVE EVENT IN THE SOLAR ATMOSPHERE: FROM PHOTOSPHERE TO CORONA

    SciTech Connect (OSTI)

    Shen, Yuandeng; Liu, Yu

    2012-06-20

    For the first time, we report a large-scale wave that was observed simultaneously in the photosphere, chromosphere, transition region, and low corona layers of the solar atmosphere. Using the high temporal and high spatial resolution observations taken by the Solar Magnetic Activity Research Telescope at Hida Observatory and the Atmospheric Imaging Assembly (AIA) on board the Solar Dynamics Observatory, we find that the wave evolved synchronously at different heights of the solar atmosphere, and it propagated at a speed of 605 km s{sup -1} and showed a significant deceleration (-424 m s{sup -2}) in the extreme-ultraviolet (EUV) observations. During the initial stage, the wave speed in the EUV observations was 1000 km s{sup -1}, similar to those measured from the AIA 1700 Å (967 km s{sup -1}) and 1600 Å (893 km s{sup -1}) observations. The wave was reflected by a remote region with open fields, and a slower wave-like feature at a speed of 220 km s{sup -1} was also identified following the primary fast wave. In addition, a type-II radio burst was observed to be associated with the wave. We conclude that this wave should be a fast magnetosonic shock wave, which was first driven by the associated coronal mass ejection and then propagated freely in the corona. As the shock wave propagated, its legs swept the solar surface and thereby resulted in the wave signatures observed in the lower layers of the solar atmosphere. The slower wave-like structure following the primary wave was probably caused by the reconfiguration of the low coronal magnetic fields, as predicted in the field-line stretching model.
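    As a quick consistency check on the quoted kinematics, the sketch below works out how long and how far the front travels while decelerating at a constant 424 m s{sup -2} from the initial ~1000 km s{sup -1} to the mean ~605 km s{sup -1} (a back-of-envelope use of the paper's numbers, not a calculation from the paper).

    ```python
    # Constant-deceleration kinematics for the EUV wave front.
    v0 = 1000e3        # m/s, initial wave speed
    v1 = 605e3         # m/s, mean wave speed
    a = -424.0         # m/s^2, reported deceleration

    t = (v1 - v0) / a                 # elapsed time, s
    d = (v1**2 - v0**2) / (2 * a)     # distance covered, m

    print(f"t = {t:.0f} s (~{t/60:.1f} min), d = {d/1e6:.0f} Mm")
    ```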

  20. Model based multivariable controller for large scale compression stations. Design and experimental validation on the LHC 18KW cryorefrigerator

    SciTech Connect (OSTI)

    Bonne, François; Bonnay, Patrick; Bradu, Benjamin

    2014-01-29

    In this paper, a multivariable model-based non-linear controller for Warm Compression Stations (WCS) is proposed. The strategy is to replace all the PID loops controlling the WCS with an optimally designed model-based multivariable loop. This new strategy leads to high stability and fast disturbance rejection, such as that induced by a turbine or compressor stop, a key aspect in the case of large-scale cryogenic refrigeration. The proposed control scheme can be used to maintain precise control of every pressure in normal operation or to stabilize and control the cryoplant under high variation of thermal loads, such as the pulsed heat loads expected in the cryogenic cooling systems of future fusion reactors like the International Thermonuclear Experimental Reactor (ITER) or the Japan Torus-60 Super Advanced fusion experiment (JT-60SA). The paper details how to set up the WCS model to synthesize the Linear Quadratic optimal feedback gain and how to use it. After preliminary tuning at CEA-Grenoble on the 400W@1.8K helium test facility, the controller was implemented on a Schneider PLC and fully tested, first on CERN's real-time simulator. It was then experimentally validated on a real CERN cryoplant. The efficiency of the solution is experimentally assessed using a reasonable operating scenario of starts and stops of compressors and cryogenic turbines. This work is partially supported through the European Fusion Development Agreement (EFDA) Goal Oriented Training Program, task agreement WP10-GOT-GIRO.
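    The Linear Quadratic synthesis step mentioned above can be sketched for a stand-in linear model: solve the continuous-time algebraic Riccati equation and form the state-feedback gain. The A, B, Q, R matrices below are illustrative assumptions, not the actual WCS model.

    ```python
    # LQ optimal feedback gain for a toy 2-state, 2-input linear system.
    import numpy as np
    from scipy.linalg import solve_continuous_are

    A = np.array([[-0.5, 0.2],
                  [0.1, -0.3]])      # stand-in linearized pressure dynamics
    B = np.array([[1.0, 0.0],
                  [0.0, 0.8]])       # actuator (valve/compressor) inputs
    Q = np.diag([10.0, 10.0])        # penalize pressure deviations
    R = np.diag([1.0, 1.0])          # penalize actuator effort

    P = solve_continuous_are(A, B, Q, R)
    K = np.linalg.solve(R, B.T @ P)  # optimal feedback law: u = -K x

    print("LQ gain K =\n", K)
    print("closed-loop eigenvalues:", np.linalg.eigvals(A - B @ K))
    ```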

  1. Large-scale spatial variability of riverbed temperature gradients in Snake River fall Chinook salmon spawning areas

    SciTech Connect (OSTI)

    Hanrahan, Timothy P.

    2007-02-01

    In the Snake River basin of the Pacific northwestern United States, hydroelectric dam operations are often based on the predicted emergence timing of salmon fry from the riverbed. The spatial variability and complexity of surface water and riverbed temperature gradients result in emergence timing predictions that are likely to have large errors. The objectives of this study were to quantify the thermal heterogeneity between the river and riverbed in fall Chinook salmon spawning areas and to determine the effects of thermal heterogeneity on fall Chinook salmon emergence timing. This study quantified river and riverbed temperatures at 15 fall Chinook salmon spawning sites distributed in two reaches throughout 160 km of the Snake River in Hells Canyon, Idaho, USA, during three different water years. Temperatures were measured during the fall Chinook salmon incubation period with self-contained data loggers placed in the river and at three different depths below the riverbed surface. Temperature increased with depth into the riverbed at all sites, with significant differences (p < 0.05) in mean water temperature of up to 3.8 °C between the river and the riverbed among all the sites. During each of the three water years studied, river and riverbed temperatures varied significantly among all the study sites, among the study sites within each reach, and between sites located in the two reaches. Considerable variability in riverbed temperatures among the sites resulted in fall Chinook salmon emergence timing estimates that varied by as much as 55 days, depending on the source of temperature data used for the estimate. Monitoring of riverbed temperature gradients at a range of spatial scales throughout the Snake River would provide better information for managing hydroelectric dam operations, and would aid in the design and interpretation of future empirical research into the ecological significance of physical riverine processes.
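    Emergence timing is commonly estimated by accumulating thermal units above a development threshold; the sketch below shows how a constant offset between river and riverbed temperature series shifts the predicted emergence day. The threshold, required thermal units, and temperature curves are illustrative assumptions, not the study's data.

    ```python
    # Accumulated-thermal-units (degree-day) emergence model on two series.
    import numpy as np

    def emergence_day(daily_temps_c, required_atus=950.0, base_c=0.0):
        """Day index when accumulated (T - base) first reaches required ATUs."""
        temps = np.asarray(daily_temps_c)
        atus = np.cumsum(np.clip(temps - base_c, 0.0, None))
        hits = np.nonzero(atus >= required_atus)[0]
        return int(hits[0]) if hits.size else None

    days = np.arange(240)
    river = 10.0 - 6.0 * np.sin(2 * np.pi * days / 365.0)  # toy seasonal curve
    riverbed = river + 2.0                                  # warmer at depth

    print("river-based estimate:    day", emergence_day(river))
    print("riverbed-based estimate: day", emergence_day(riverbed))
    ```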

  2. Radiative Heating of the ISCCP Upper Level Cloud Regimes and its Impact on the Large-scale Tropical Circulation

    SciTech Connect (OSTI)

    Li, Wei; Schumacher, Courtney; McFarlane, Sally A.

    2013-01-31

    Radiative heating profiles of the International Satellite Cloud Climatology Project (ISCCP) cloud regimes (or weather states) were estimated by matching ISCCP observations with radiative properties derived from cloud radar and lidar measurements from the Department of Energy (DOE) Atmospheric Radiation Measurement (ARM) sites at Manus, Papua New Guinea, and Darwin, Australia. Focus was placed on the ISCCP cloud regimes containing the majority of upper level clouds in the tropics, i.e., mesoscale convective systems (MCSs), deep cumulonimbus with cirrus, mixed shallow and deep convection, and thin cirrus. At upper levels, these regimes have average maximum cloud occurrences ranging from 30% to 55% near 12 km with variations depending on the location and cloud regime. The resulting radiative heating profiles have maxima of approximately 1 K/day near 12 km, with equal heating contributions from the longwave and shortwave components. Upper level minima occur near 15 km, with the MCS regime showing the strongest cooling of 0.2 K/day and the thin cirrus showing no cooling. The gradient of upper level heating ranges from 0.2 to 0.4 K/(day·km), with the most convectively active regimes (i.e., MCSs and deep cumulonimbus with cirrus) having the largest gradient. When the above heating profiles were applied to the 25-year ISCCP data set, the tropics-wide average profile has a radiative heating maximum of 0.45 K day{sup -1} near 250 hPa. Column-integrated radiative heating of upper level cloud accounts for about 20% of the latent heating estimated by the Tropical Rainfall Measuring Mission (TRMM) Precipitation Radar (PR). The ISCCP radiative heating of tropical upper level cloud only slightly modifies the response of an idealized primitive equation model forced with the tropics-wide TRMM PR latent heating, which suggests that the impact of upper level cloud is more important to large-scale tropical circulation variations because of convective feedbacks rather than direct forcing by the cloud radiative heating profiles. However, the height of the radiative heating maxima and gradient of the heating profiles are important to determine the sign and patterns of the horizontal circulation anomaly driven by radiative heating at upper levels.

  3. Co-gasification of municipal solid waste and material recovery in a large-scale gasification and melting system

    SciTech Connect (OSTI)

    Tanigaki, Nobuhiro; Manako, Kazutaka; Osada, Morihiro

    2012-04-15

    Highlights: This study evaluates the effects of co-gasification of MSW with MSW bottom ash; No significant difference between MSW treatment with and without MSW bottom ash; PCDD/DF yields are significantly low because of the high carbon conversion ratio; Slag quality is significantly stable and slag contains few hazardous heavy metals; The final landfill amount is reduced and materials are recovered by the DMS process. - Abstract: This study evaluates the effects of co-gasification of municipal solid waste with and without the municipal solid waste bottom ash using two large-scale commercial operation plants. From the viewpoint of operation data, there is no significant difference between municipal solid waste treatment with and without the bottom ash. The carbon conversion ratios are as high as 91.7% and 95.3%, respectively, and this leads to significantly low PCDD/DF yields via complete syngas combustion. The gross power generation efficiencies are 18.9% with the bottom ash and 23.0% without municipal solid waste bottom ash, respectively. The effects of the equivalence ratio are also evaluated. As the equivalence ratio increases, the carbon monoxide concentration decreases, and the carbon dioxide concentration and the syngas temperature (top gas temperature) increase. The carbon conversion ratio is also increased. These tendencies are seen in both modes. Co-gasification using the gasification and melting system (Direct Melting System) has the possibility to recover materials effectively. More than 90% of chlorine is distributed in fly ash. Low-boiling-point heavy metals, such as lead and zinc, are distributed in fly ash at rates of 95.2% and 92.0%, respectively. Most high-boiling-point heavy metals, such as iron and copper, are distributed in metal. It is also clarified that the slag is stable and contains few harmful heavy metals such as lead. Compared with the conventional waste management framework, an 85% reduction in the final landfill amount is achieved by co-gasification of municipal solid waste with bottom ash and incombustible residues. These results indicate that the combined production of slag with co-gasification of municipal solid waste with the bottom ash constitutes an ideal approach to environmental conservation and resource recycling.

  4. CyanoGEBA: A Better Understanding of Cyanobacterial Diversity through Large-scale Genomics (JGI Seventh Annual User Meeting 2012: Genomics of Energy and Environment)

    ScienceCinema (OSTI)

    Shih, Patrick [Kerfeld Lab, UC Berkeley and JGI]

    2013-01-22

    Patrick Shih, representing both the University of California, Berkeley and JGI, gives a talk titled "CyanoGEBA: A Better Understanding of Cyanobacterial Diversity through Large-scale Genomics" at the JGI 7th Annual Users Meeting: Genomics of Energy & Environment Meeting on March 22, 2012 in Walnut Creek, California.

  5. CyanoGEBA: A Better Understanding of Cyanobacterial Diversity through Large-scale Genomics (JGI Seventh Annual User Meeting 2012: Genomics of Energy and Environment)

    SciTech Connect (OSTI)

    Shih, Patrick [Kerfeld Lab, UC Berkeley and JGI]

    2012-03-22

    Patrick Shih, representing both the University of California, Berkeley and JGI, gives a talk titled "CyanoGEBA: A Better Understanding of Cyanobacterial Diversity through Large-scale Genomics" at the JGI 7th Annual Users Meeting: Genomics of Energy & Environment Meeting on March 22, 2012 in Walnut Creek, California.

  6. Large-scale real-space density-functional calculations: Moiré-induced electron localization in graphene

    SciTech Connect (OSTI)

    Oshiyama, Atsushi; Iwata, Jun-Ichi; Uchida, Kazuyuki; Matsushita, Yu-Ichiro

    2015-03-21

    We show that our real-space finite-difference scheme allows us to perform density-functional calculations for nanometer-scale targets containing more than 100,000 atoms. This real-space scheme is applied to twisted bilayer graphene, clarifying that the moiré pattern induced in the slightly twisted bilayer graphene drastically modifies the atomic and electronic structures.

  7. FERMI RULES OUT THE INVERSE COMPTON/CMB MODEL FOR THE LARGE-SCALE JET X-RAY EMISSION OF 3C 273

    SciTech Connect (OSTI)

    Meyer, Eileen T.; Georganopoulos, Markos

    2014-01-10

    The X-ray emission mechanism in large-scale jets of powerful radio quasars has been a source of debate in recent years, with two competing interpretations: either the X-rays are of synchrotron origin, arising from a different electron energy distribution than that producing the radio to optical synchrotron component, or they are due to inverse Compton scattering of cosmic microwave background photons (IC/CMB) by relativistic electrons in a powerful relativistic jet with bulk Lorentz factor Γ ∼ 10-20. These two models imply radically different conditions in the large-scale jet in terms of jet speed, kinetic power, and maximum energy of the particle acceleration mechanism, with important implications for the impact of the jet on the large-scale environment. A large part of the X-ray origin debate has centered on the well-studied source 3C 273. Here we present new observations from Fermi which put an upper limit on the gamma-ray flux from the large-scale jet of 3C 273 that violates, at a confidence greater than 99.9%, the flux expected from the IC/CMB X-ray model found by extrapolation of the UV to X-ray spectrum of knot A, thus ruling out the IC/CMB interpretation entirely for this source when combined with previous work. Further, this upper limit from Fermi constrains the Doppler beaming factor to δ < 9, assuming equipartition fields, and possibly as low as δ < 5, assuming no major deceleration of the jet from knots A through D1.

  8. CONSTRAINTS ON THE ORIGIN OF COSMIC RAYS ABOVE 10{sup 18} eV FROM LARGE-SCALE ANISOTROPY SEARCHES IN DATA OF THE PIERRE AUGER OBSERVATORY

    SciTech Connect (OSTI)

    Abreu, P.; Andringa, S.; Aglietta, M.; Ahlers, M.; Ahn, E. J.; Albuquerque, I. F. M.; Allard, D.; Allekotte, I.; Allen, J.; Allison, P.; Almela, A.; Castillo, J. Alvarez; Alvarez-Muñiz, J.; Alves Batista, R.; Ambrosio, M.; Aramo, C.; Aminaei, A.; Anchordoqui, L.; Antičić, T.; Arganda, E.; Collaboration: Pierre Auger Collaboration; and others

    2013-01-01

    A thorough search for large-scale anisotropies in the distribution of arrival directions of cosmic rays detected above 10{sup 18} eV at the Pierre Auger Observatory is reported. For the first time, these large-scale anisotropy searches are performed as a function of both the right ascension and the declination and expressed in terms of dipole and quadrupole moments. Within the systematic uncertainties, no significant deviation from isotropy is revealed. Upper limits on dipole and quadrupole amplitudes are derived under the hypothesis that any cosmic ray anisotropy is dominated by such moments in this energy range. These upper limits provide constraints on the production of cosmic rays above 10{sup 18} eV, since they allow us to challenge an origin from stationary galactic sources densely distributed in the galactic disk and emitting predominantly light particles in all directions.
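    The dipole moment of an arrival-direction distribution can be estimated, for uniform exposure, with the classic 3/N sum of unit vectors; the sketch below applies it to an isotropic toy sample. A real analysis, including the one above, must additionally correct for the observatory's non-uniform sky exposure, which is ignored here.

    ```python
    # First-harmonic (dipole) estimate from toy cosmic-ray arrival directions.
    import numpy as np

    rng = np.random.default_rng(2)

    # Isotropic sample: N random unit vectors on the sphere.
    n = 5000
    v = rng.normal(size=(n, 3))
    v /= np.linalg.norm(v, axis=1, keepdims=True)

    dipole = 3.0 * v.mean(axis=0)     # unbiased estimator of the dipole vector
    amplitude = np.linalg.norm(dipole)

    # For an isotropic sky each dipole component fluctuates with standard
    # deviation sqrt(3/N), so the recovered amplitude is of that order.
    print(f"recovered dipole amplitude = {amplitude:.4f}")
    print(f"isotropic fluctuation scale ~ {np.sqrt(3.0 / n):.4f}")
    ```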

  9. Proceedings of the Joint IAEA/CSNI Specialists' Meeting on Fracture Mechanics Verification by Large-Scale Testing held at Pollard Auditorium, Oak Ridge, Tennessee

    SciTech Connect (OSTI)

    Pugh, C.E.; Bass, B.R.; Keeney, J.A.

    1993-10-01

    This report contains 40 papers that were presented at the Joint IAEA/CSNI Specialists' Meeting on Fracture Mechanics Verification by Large-Scale Testing held at the Pollard Auditorium, Oak Ridge, Tennessee, during the week of October 26-29, 1992. The papers are printed in the order of their presentation in each session and describe recent large-scale fracture (brittle and/or ductile) experiments, analyses of these experiments, and comparisons between predictions and experimental results. The goal of the meeting was to allow international experts to examine the fracture behavior of various materials and structures under conditions relevant to nuclear reactor components and operating environments. The emphasis was on the ability of various fracture models and analysis methods to predict the wide range of experimental data now available. The individual papers have been cataloged separately.

  10. Creation of the dam for the No. 2 Kambaratinskaya HPP by large-scale blasting: analysis of planning experience and lessons learned

    SciTech Connect (OSTI)

    Shuifer, M. I.; Argal, E. S.

    2012-05-15

    Results of complex instrument observations and videotaping during large-scale blasts (LSBs) detonated for creation of the dam at the No. 2 Kambaratinskaya HPP on the Naryn River in the Kyrgyz Republic are analyzed. The energy effectiveness of the explosives is evaluated, characteristics of LSB manifestations in seismic and air waves are revealed, and the shaping and movement of the rock mass are examined. A methodological analysis of the planning and production of the LSB is given.

  11. Analysis and experimental study on formation conditions of large-scale barrier-free diffuse atmospheric pressure air plasmas in repetitive pulse mode

    SciTech Connect (OSTI)

    Li, Lee; Liu, Lun; Liu, Yun-Long; Bin, Yu; Ge, Ya-Feng; Lin, Fo-Chang

    2014-01-14

    Atmospheric air diffuse plasmas have enormous application potential in various fields of science and technology. Without a dielectric barrier, generating large-scale air diffuse plasmas has always been a challenging issue. This paper discusses and analyses the formation mechanism of cold homogeneous plasma. It is proposed that generating stable diffuse atmospheric plasmas in open air must meet three conditions: high transient power with low average power, excitation in a low average E-field with a locally high E-field region, and multiple overlapping electron avalanches. Accordingly, an experimental configuration for generating large-scale barrier-free diffuse air plasmas is designed. Based on runaway electron theory, a low duty-ratio, high-voltage repetitive nanosecond pulse generator is chosen as the discharge excitation source. Using wire electrodes with small curvature radius, gaps with a highly non-uniform E-field are structured. Experimental results show that volume-scalable, barrier-free, homogeneous air non-thermal plasmas have been obtained in the gaps between the copper-wire electrodes. The area of the air cold plasmas has been up to hundreds of square centimeters. The proposed formation conditions for large-scale barrier-free diffuse air plasmas are shown to be reasonable and feasible.

  12. Geology and slope stability in selected parts of The Geysers geothermal resources area: a guide to geologic features indicative of stable and unstable terrain in areas underlain by Franciscan and related rocks

    SciTech Connect (OSTI)

    Bedrossian, T.L.

    1980-01-01

    The results of a 4-month study of various geologic and topographic features related to the stability of Franciscan terrain in The Geysers GRA are presented. The study consisted of investigations of geologic and topographic features throughout The Geysers GRA and geologic mapping, at a scale of 1:12,000, of approximately 1500 acres (600 hectares) of landslide terrain within the canyon of Big Sulphur Creek in the vicinity of the Buckeye mine (see plate 1). The area mapped during this study was selected because: (1) it is an area of potential future geothermal development, and (2) it illustrates that large areas mapped as landslides on regional scales (McLaughlin, 1974, 1975b; McNitt, 1968a) may contain zones of varying slope stability and, therefore, should be mapped in more detail prior to development of the land.

  13. The Energy DataBus: NREL's Open-Source Application for Large-Scale Energy Data Collection and Analysis

    DOE Data Explorer [Office of Scientific and Technical Information (OSTI)]

    NREL’s Energy DataBus is used for tracking and analyzing energy use on its own campus. The system is applicable to other facilities—including anything from a single building to a large military base or college campus—or for other energy data management needs. Managing and minimizing energy consumption on a large campus is usually a difficult task for facility managers: There may be hundreds of energy meters spread across a campus, and the meter data are often recorded by hand. Even when data are captured electronically, there may be measurement issues or time periods that may not coincide. Making sense of this limited and often confusing data can be a challenge that makes the assessment of building performance a struggle for many facility managers. The Energy DataBus software was developed by NREL to address these issues on its own campus, but with an eye toward offering its software solutions to other facilities. Key features include the software's ability to store large amounts of data collected at high frequencies—NREL collects some of its energy data every second—and rich functionality to integrate this wide variety of data into a single database [copied from http://en.openei.org/wiki/NREL_Energy_DataBus].

  14. The Energy DataBus: NREL's Open-Source Application for Large-Scale Energy Data Collection and Analysis

    DOE Data Explorer [Office of Scientific and Technical Information (OSTI)]

    NREL’s Energy DataBus is used for tracking and analyzing energy use on its own campus. The system is applicable to other facilities—including anything from a single building to a large military base or college campus—or for other energy data management needs. Managing and minimizing energy consumption on a large campus is usually a difficult task for facility managers: There may be hundreds of energy meters spread across a campus, and the meter data are often recorded by hand. Even when data are captured electronically, there may be measurement issues or time periods that may not coincide. Making sense of this limited and often confusing data can be a challenge that makes the assessment of building performance a struggle for many facility managers. The Energy DataBus software was developed by NREL to address these issues on its own campus, but with an eye toward offering its software solutions to other facilities. Key features include the software's ability to store large amounts of data collected at high frequencies—NREL collects some of its energy data every second—and rich functionality to integrate this wide variety of data into a single database [copied from http://en.openei.org/wiki/NREL_Energy_DataBus].

  15. Large-Scale Testing of Effects of Anti-Foam Agent on Gas Holdup in Process Vessels in the Hanford Waste Treatment Plant - 8280

    SciTech Connect (OSTI)

    Mahoney, Lenna A.; Alzheimer, James M.; Arm, Stuart T.; Guzman-Leong, Consuelo E.; Jagoda, Lynette K.; Stewart, Charles W.; Wells, Beric E.; Yokuda, Satoru T.

    2008-06-03

    The Hanford Waste Treatment Plant (WTP) will vitrify the radioactive wastes stored in underground tanks. These wastes generate and retain hydrogen and other flammable gases that create safety concerns for the vitrification process tanks in the WTP. An anti-foam agent (AFA) will be added to the WTP process streams. Prior testing in a bubble column and a small-scale impeller-mixed vessel indicated that gas holdup in a high-level waste chemical simulant with AFA was up to 10 times that in clay simulant without AFA. This raised a concern that major modifications to the WTP design or qualification of an alternative AFA might be required to satisfy plant safety criteria. However, because the mixing and gas generation mechanisms in the small-scale tests differed from those expected in WTP process vessels, additional tests were performed in a large-scale prototypic mixing system with in situ gas generation. This paper presents the results of this test program. The tests were conducted at Pacific Northwest National Laboratory in a 1/4-scale model of the lag storage process vessel using pulse jet mixers and air spargers. Holdup and release of gas bubbles generated by hydrogen peroxide decomposition were evaluated in waste simulants containing an AFA over a range of Bingham yield stresses and gas generation rates. Results from the 1/4-scale test stand showed that, contrary to the small-scale impeller-mixed tests, gas holdup in clay without AFA is comparable to that in the chemical waste simulant with AFA. The test stand, simulants, scaling and data-analysis methods, and results are described in relation to previous tests and anticipated WTP operating conditions.

  16. Large-Scale Testing of Effects of Anti-Foam Agent on Gas Holdup in Process Vessels in the Hanford Waste Treatment Plant

    SciTech Connect (OSTI)

    Mahoney, L.A.; Alzheimer, J.M.; Arm, S.T.; Guzman-Leong, C.E.; Jagoda, L.K.; Stewart, C.W.; Wells, B.E.; Yokuda, S.T. [Pacific Northwest National Laboratory, Richland, WA (United States)]

    2008-07-01

    The Hanford Waste Treatment and Immobilization Plant (WTP) will vitrify the radioactive wastes stored in underground tanks. These wastes generate and retain hydrogen and other flammable gases that create safety concerns for the vitrification process tanks in the WTP. An anti-foam agent (AFA) will be added to the WTP process streams. Previous testing in a bubble column and a small-scale impeller-mixed vessel indicated that gas holdup in a high-level waste chemical simulant with AFA was as much as 10 times higher than in clay simulant without AFA. This raised a concern that major modifications to the WTP design or qualification of an alternative AFA might be required to satisfy plant safety criteria. However, because the mixing and gas generation mechanisms in the small-scale tests differed from those expected in WTP process vessels, additional tests were performed in a large-scale prototypic mixing system with in situ gas generation. This paper presents the results of this test program. The tests were conducted at Pacific Northwest National Laboratory in a 1/4-scale model of the lag storage process vessel using pulse jet mixers and air spargers. Holdup and release of gas bubbles generated by hydrogen peroxide decomposition were evaluated in waste simulants containing an AFA over a range of Bingham yield stresses and gas generation rates. Results from the 1/4-scale test stand showed that, contrary to the small-scale impeller-mixed tests, holdup in the chemical waste simulant with AFA was not so greatly increased compared to gas holdup in clay without AFA. The test stand, simulants, scaling and data-analysis methods, and results are described in relation to previous tests and anticipated WTP operating conditions. (authors)

  17. NONLINEAR FORCE-FREE FIELD EXTRAPOLATION OF A CORONAL MAGNETIC FLUX ROPE SUPPORTING A LARGE-SCALE SOLAR FILAMENT FROM A PHOTOSPHERIC VECTOR MAGNETOGRAM

    SciTech Connect (OSTI)

    Jiang, Chaowei; Wu, S. T.; Hu, Qiang; Feng, Xueshang

    2014-05-10

    Solar filaments are commonly thought to be supported in magnetic dips, in particular, in those of magnetic flux ropes (FRs). In this Letter, based on the observed photospheric vector magnetogram, we implement a nonlinear force-free field (NLFFF) extrapolation of a coronal magnetic FR that supports a large-scale intermediate filament between an active region and a weak polarity region. This result is a first, in the sense that current NLFFF extrapolations including the presence of FRs are limited to relatively small-scale filaments that are close to sunspots and along main polarity inversion lines (PILs) with strong transverse field and magnetic shear, and the existence of an FR is usually predictable. In contrast, the present filament lies along the weak-field region (photospheric field strength ∼100 G), where the PIL is very fragmented due to small parasitic polarities on both sides of the PIL and the transverse field has a low signal-to-noise ratio. Thus, extrapolating a large-scale FR in such a case represents a far more difficult challenge. We demonstrate that our CESE-MHD-NLFFF code is sufficient for the challenge. The numerically reproduced magnetic dips of the extrapolated FR match observations of the filament and its barbs very well, which strongly supports the FR-dip model for filaments. The filament is stably sustained because the FR is weakly twisted and strongly confined by the overlying closed arcades.

  18. A Scalable O(N) Algorithm for Large-Scale Parallel First-Principles Molecular Dynamics Simulations

    SciTech Connect (OSTI)

    Osei-Kuffuor, Daniel; Fattebert, Jean-Luc

    2014-01-01

    Traditional algorithms for first-principles molecular dynamics (FPMD) simulations only gain a modest capability increase from current petascale computers, due to their O(N{sup 3}) complexity and their heavy use of global communications. To address this issue, we are developing a truly scalable O(N) complexity FPMD algorithm, based on density functional theory (DFT), which avoids global communications. The computational model uses a general nonorthogonal orbital formulation for the DFT energy functional, which requires knowledge of selected elements of the inverse of the associated overlap matrix. We present a scalable algorithm for approximately computing selected entries of the inverse of the overlap matrix, based on an approximate inverse technique, by inverting local blocks corresponding to principal submatrices of the global overlap matrix. The new FPMD algorithm exploits sparsity and uses nearest neighbor communication to provide a computational scheme capable of extreme scalability. Accuracy is controlled by the mesh spacing of the finite difference discretization, the size of the localization regions in which the electronic orbitals are confined, and a cutoff beyond which the entries of the overlap matrix can be omitted when computing selected entries of its inverse. We demonstrate the algorithm's excellent parallel scaling for up to O(100K) atoms on O(100K) processors, with a wall-clock time of O(1) minute per molecular dynamics time step.
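    The selected-inverse idea can be illustrated on a small banded matrix: approximate each diagonal entry of S^-1 by inverting a principal submatrix centered on that index. This is only a toy serial version of the technique, with an artificial overlap matrix; the paper's algorithm operates on the distributed DFT overlap matrix with nearest-neighbor communication.

    ```python
    # Toy selected inverse: diagonal of S^-1 from local principal blocks.
    import numpy as np

    def selected_inverse_diag(S, half_width=8):
        """Approximate diag(S^-1) by inverting local blocks of S."""
        n = S.shape[0]
        approx = np.empty(n)
        for i in range(n):
            lo, hi = max(0, i - half_width), min(n, i + half_width + 1)
            block_inv = np.linalg.inv(S[lo:hi, lo:hi])   # local block inverse
            approx[i] = block_inv[i - lo, i - lo]
        return approx

    # Banded "overlap" matrix with exponentially decaying entries.
    n = 200
    idx = np.arange(n)
    S = np.exp(-0.8 * np.abs(idx[:, None] - idx[None, :]))
    S[np.abs(idx[:, None] - idx[None, :]) > 10] = 0.0

    exact = np.diag(np.linalg.inv(S))
    approx = selected_inverse_diag(S)
    print(f"max relative error = {np.max(np.abs(approx - exact) / exact):.2e}")
    ```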

  19. Large-scale ocean-atmosphere interactions in a simplified coupled model of the midlatitude wintertime circulation

    SciTech Connect (OSTI)

    Miller, A.J.

    1992-02-15

    Midlatitude ocean-atmosphere interactions are studied in simulations from a simplified coupled model that includes synoptic-scale atmospheric variability, ocean current advection of sea surface temperature (SST), and air-sea heat exchange. Although theoretical dynamical ("identical twin") predictions using this model have shown that the SST anomalies in this model indeed influence the atmosphere, it is found here that standard cross-correlation and empirical orthogonal function analyses of monthly mean model output yield the standard result, familiar from observational studies, that the atmosphere forces the ocean with little or no feedback. Therefore, these analyses are inconclusive and leave open the question of whether anomalous SST is influencing the atmosphere. In contrast, the authors find that compositing strong warm events of model SST is a useful indicator of ocean forcing the atmosphere. The authors present additional evidence for oceanic influence on the atmosphere, namely, that ocean current advection appears to enhance the persistence of model SST anomalies through a feedback effect that is absent when only heat flux is allowed to influence SST anomaly evolution. Models with more complete physics must ultimately be used to conclusively demonstrate these results. 26 refs., 27 figs., 5 tabs.

  20. Large-scale patterning of indium tin oxide electrodes for guided mode extraction from organic light-emitting diodes

    SciTech Connect (OSTI)

    Geyer, Ulf; Hauss, Julian; Riedel, Boris; Gleiss, Sebastian; Lemmer, Uli; Gerken, Martina

    2008-11-01

    We describe a cost-efficient and large area scalable production process of organic light-emitting diodes (OLEDs) with photonic crystals (PCs) as extraction elements for guided modes. Using laser interference lithography and physical plasma etching, we texture the indium tin oxide (ITO) electrode layer of an OLED with one- and two-dimensional PC gratings. By optical transmission measurements, the resonant mode of the grating is shown to have a drift of only 0.4% over the 5 mm length of the ITO grating. By changing the lattice constant between 300 and 600 nm, the OLED emission angle of enhanced light outcoupling is tailored from -24.25 deg. to 37 deg. At these angles, the TE emission is enhanced up to a factor of 2.14.
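
    The dependence of outcoupling angle on lattice constant follows first-order grating phase matching, sin θ = n{sub eff} - mλ/Λ; a sketch, with the guided mode's effective index n{sub eff} an assumed illustrative value (the paper does not quote one here):

        import numpy as np

        def outcoupling_angle_deg(wavelength_nm, period_nm, n_eff=1.8, order=1):
            """First-order phase matching for guided-mode extraction:
            sin(theta) = n_eff - m * lambda / period. n_eff = 1.8 is an
            assumed illustrative value for the ITO/organic waveguide."""
            s = n_eff - order * wavelength_nm / period_nm
            return None if abs(s) > 1 else float(np.degrees(np.arcsin(s)))

        # Sweeping the lattice constant shifts the extraction angle of a
        # 525 nm guided mode across the forward hemisphere:
        for period in (300, 400, 500, 600):
            print(period, outcoupling_angle_deg(525, period))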

  1. Large-Scale Synthesis of Transition-Metal-Doped TiO2 Nanowires with Controllable Overpotential

    SciTech Connect (OSTI)

    Liu, Bin; Chen, HaoMing; Liu, Chong; Andrews, Sean; Han, Chris; Yang, Peidong

    2013-03-13

    Practical implementation of one-dimensional semiconductors into devices capable of exploiting their novel properties is often hindered by low product yields, poor material quality, high production cost, or overall lack of synthetic control. Here, we show that a molten-salt flux scheme can be used to synthesize large quantities of high-quality, single-crystalline TiO2 nanowires with controllable dimensions. Furthermore, in situ dopant incorporation of various transition metals allows for the tuning of optical, electrical, and catalytic properties. With this combination of control, robustness, and scalability, the molten-salt flux scheme can provide high-quality TiO2 nanowires to satisfy a broad range of application needs from photovoltaics to photocatalysis.

  2. Generation of large-scale, barrier-free diffuse plasmas in air at atmospheric pressure using array wire electrodes and nanosecond high-voltage pulses

    SciTech Connect (OSTI)

    Teng, Yun; Li, Lee; Liu, Yun-Long; Liu, Lun; Liu, Minghai

    2014-10-15

    This paper introduces a method to generate large-scale diffuse plasmas by using a repetitive nanosecond pulse generator and a parallel array wire-electrode configuration. We investigated barrier-free diffuse plasmas produced in the open air in parallel and cross-parallel array line-line electrode configurations. We found that, when the distance between the wire-electrode pair is small, the discharges were almost extinguished. Also, glow-like diffuse plasmas with little discharge weakening were obtained in an appropriate range of line-line distances and with a cathode-grounding cross-electrode configuration. As an example, we produced a large-scale, stable diffuse plasma with a volume as large as 18 x 15 x 15 cm{sup 3}, and this discharge region can be further expanded. Additionally, using optical and electrical measurements, we showed that the electron temperature was higher than the gas temperature, which was almost the same as room temperature. Also, an array electrode configuration with more wire electrodes helped to prevent the transition from diffuse discharge to arc discharge. Comparing the current waveforms of configurations with 1 cell and 9 cells, we found that adding cells significantly increased the conduction current and the electrical energy delivered in the electrode gaps.

  3. Galaxy evolution and large-scale structure in the far-infrared. II. The IRAS faint source survey

    SciTech Connect (OSTI)

    Lonsdale, C.J.; Hacking, P.B.; Conrow, T.P.; Rowan-Robinson, M. (Queen Mary College, London)

    1990-07-01

    The new IRAS Faint Source Survey data base is used to confirm the conclusion of Hacking et al. (1987) that the 60 micron source counts fainter than about 0.5 Jy lie in excess of predictions based on nonevolving model populations. The existence of an anisotropy between the northern and southern Galactic caps discovered by Rowan-Robinson et al. (1986) and Needham and Rowan-Robinson (1988) is confirmed, and it is found to extend below their sensitivity limit to about 0.3 Jy in 60 micron flux density. The count anisotropy at f(60) greater than 0.3 Jy can be interpreted reasonably as due to the Local Supercluster; however, no one structure accounting for the fainter anisotropy can be easily identified in either optical or far-IR two-dimensional sky distributions. The far-IR galaxy sky distributions are considerably smoother than distributions from the published optical galaxy catalogs. It is likely that structures of the large size discussed here have been discriminated against in earlier studies due to insufficient volume sampling. 105 refs.

  4. Design of a Physical Point-Absorbing WEC Model on which Multiple Control Strategies will be Tested at Large Scale in the MASK Basin

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Design of a Physical Point-Absorbing WEC Model on which Multiple Control Strategies will be Tested at Large Scale in the MASK Basin Diana L. Bull 1 , Ryan G. Coe 1 , Mark Monda 3 , Kevin Dullea 2 , Giorgio Bacelli 1 , David Patterson 1 1 Water Power Technologies, 2 Intelligent Systems Control, 3 Robotic and Security Systems Sandia National Laboratories, Albuquerque, NM 87185-1124 ABSTRACT A new multi-year effort has been launched by the Department of Energy to validate the extent to which

  5. Large scale simulations of the mechanical properties of layered transition metal ternary compounds for fossil energy power system applications

    SciTech Connect (OSTI)

    Ching, Wai-Yim

    2014-12-31

    Advanced materials with applications in extreme conditions such as high temperature, high pressure, and corrosive environments play a critical role in the development of new technologies to significantly improve the performance of different types of power plants. Materials that are currently employed in fossil energy conversion systems are typically the Ni-based alloys and stainless steels that have already reached their ultimate performance limits. Incremental improvements are unlikely to meet the more stringent requirements aimed at increased efficiency and reduced risk while addressing environmental concerns and keeping costs low. Computational studies can lead the way in the search for novel materials or for significant improvements in existing materials that can meet such requirements. Detailed computational studies with sufficient predictive power can provide an atomistic-level understanding of the key characteristics that lead to desirable properties. This project focuses on the comprehensive study of a new class of materials called MAX phases, or M{sub n+1}AX{sub n} (M = a transition metal, A = Al or other group III, IV, and V elements, X = C or N). The MAX phases are layered transition metal carbides or nitrides with a rare combination of metallic and ceramic properties. Due to their unique structural arrangements and special types of bonding, these thermodynamically stable alloys possess some outstanding properties. We used a genomic approach in screening a large number of potential MAX phases and established a database of structural, mechanical, and electronic properties for 665 viable MAX compounds, investigating the correlations among them. This database is then used as a tool for materials informatics for further exploration of this class of intermetallic compounds.

  6. Large-Scale Uncertainty and Error Analysis for Time-dependent Fluid/Structure Interactions in Wind Turbine Applications

    SciTech Connect (OSTI)

    Alonso, Juan J.; Iaccarino, Gianluca

    2013-08-25

    The following is the final report covering the entire period of this aforementioned grant, June 1, 2011 - May 31, 2013 for the portion of the effort corresponding to Stanford University (SU). SU has partnered with Sandia National Laboratories (PI: Mike S. Eldred) and Purdue University (PI: Dongbin Xiu) to complete this research project and this final report includes those contributions made by the members of the team at Stanford. Dr. Eldred is continuing his contributions to this project under a no-cost extension and his contributions to the overall effort will be detailed at a later time (once his effort has concluded) on a separate project submitted by Sandia National Laboratories. At Stanford, the team is made up of Profs. Alonso, Iaccarino, and Duraisamy, post-doctoral researcher Vinod Lakshminarayan, and graduate student Santiago Padron. At Sandia National Laboratories, the team includes Michael Eldred, Matt Barone, John Jakeman, and Stefan Domino, and at Purdue University, we have Prof. Dongbin Xiu as our main collaborator. The overall objective of this project was to develop a novel, comprehensive methodology for uncertainty quantification by combining stochastic expansions (nonintrusive polynomial chaos and stochastic collocation), the adjoint approach, and fusion with experimental data to account for aleatory and epistemic uncertainties from random variable, random field, and model form sources. The expected outcomes of this activity were detailed in the proposal and are repeated here to set the stage for the results that we have generated during the time period of execution of this project: 1. The rigorous determination of an error budget comprising numerical errors in physical space and statistical errors in stochastic space and its use for optimal allocation of resources; 2. A considerable increase in efficiency when performing uncertainty quantification with a large number of uncertain variables in complex non-linear multi-physics problems; 3. A solution to the long-time integration problem of spectral chaos approaches; 4. A rigorous methodology to account for aleatory and epistemic uncertainties, to emphasize the most important variables via dimension reduction and dimension-adaptive refinement, and to support fusion with experimental data using Bayesian inference; 5. The application of novel methodologies to time-dependent reliability studies in wind turbine applications including a number of efforts relating to the uncertainty quantification in vertical-axis wind turbine applications. In this report, we summarize all accomplishments in the project (during the time period specified) focusing on advances in UQ algorithms and deployment efforts to the wind turbine application area. Detailed publications in each of these areas have also been completed and are available from the respective conference proceedings and journals as detailed in a later section.
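
    As a flavor of the nonintrusive polynomial chaos machinery named above, a toy one-dimensional sketch: project a model of a standard normal input onto probabilists' Hermite polynomials by quadrature (illustrative only, not the project's multi-dimensional production code):

        import numpy as np
        from math import factorial, sqrt, pi
        from numpy.polynomial.hermite_e import hermegauss, hermeval

        def pce_coefficients(f, degree=6, quad_order=12):
            """Nonintrusive projection onto probabilists' Hermite polynomials:
            c_k = E[f(X) He_k(X)] / k!, with expectations by Gauss-Hermite
            quadrature for X ~ N(0, 1)."""
            x, w = hermegauss(quad_order)
            w = w / sqrt(2 * pi)            # weights now integrate the Gaussian pdf
            fx = f(x)
            return [float(np.sum(w * fx * hermeval(x, [0] * k + [1])) / factorial(k))
                    for k in range(degree + 1)]

        c = pce_coefficients(lambda x: np.exp(0.3 * x))
        mean = c[0]                                          # ~ exp(0.045)
        var = sum(factorial(k) * c[k]**2 for k in range(1, len(c)))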

  7. Large-scale Manufacturing of Nanoparticulate-based Lubrication Additives for Improved Energy Efficiency and Reduced Emissions

    SciTech Connect (OSTI)

    Erdemir, Ali

    2013-09-26

    This project was funded under the Department of Energy (DOE) Lab Call on Nanomanufacturing for Energy Efficiency and was directed toward the development of novel boron-based nanocolloidal lubrication additives for improving the friction and wear performance of machine components in a wide range of industrial and transportation applications. Argonne's research team concentrated on the scientific and technical aspects of the project, using a range of state-of-the-art analytical and tribological test facilities. Argonne has extensive past experience and expertise in working with boron-based solid and liquid lubrication additives, and has intellectual property ownership of several. There were two industrial collaborators in this project: Ashland Oil (represented by its Valvoline subsidiary) and Primet Precision Materials, Inc. (a leading nanomaterials company). There was also a sub-contract with the University of Arkansas. The major objectives of the project were to develop novel boron-based nanocolloidal lubrication additives and to optimize and verify their performance under boundary-lubricated sliding conditions. The project also tackled problems related to colloidal dispersion, larger-scale manufacturing and blending of nano-additives with base carrier oils. Other important issues dealt with in the project were determination of the optimum size and concentration of the particles and compatibility with various base fluids and/or additives. Boron-based particulate additives considered in this project included boric acid (H{sub 3}BO{sub 3}), hexagonal boron nitride (h-BN), boron oxide, and borax. As part of this project, we also explored a hybrid MoS{sub 2} + boric acid formulation approach for more effective lubrication and reported the results. The major motivation behind this work was to reduce energy losses related to friction and wear in a wide spectrum of mechanical systems and thereby reduce our dependence on imported oil. Growing concern over greenhouse gas emissions was also a major reason. The transportation sector alone consumes about 13 million barrels of crude oil per day (nearly 60% of which is imported) and is responsible for about 30% of the CO{sub 2} emissions. When we consider manufacturing and other energy-intensive industrial processes, the amount of petroleum being consumed due to friction and wear reaches more than 20 million barrels per day (from official energy statistics, U.S. Energy Information Administration). Frequent remanufacturing and/or replacement of worn parts due to friction-, wear-, and scuffing-related degradations also consume significant amounts of energy and give rise to additional CO{sub 2} emissions. Overall, the total annual cost of friction- and wear-related energy and material losses is estimated to be rather significant (i.e., as much as 5% of the gross national products of highly industrialized nations). It is projected that more than half of the total friction- and wear-related energy losses can be recovered by developing and implementing advanced friction and wear control technologies. In transportation vehicles alone, 10% to 15% of the fuel energy is spent to overcome friction. If we can cut down the friction- and wear-related energy losses by half, then we can potentially save up to 1.5 million barrels of petroleum per day. Also, less friction and wear would mean less energy consumption as well as less carbon emissions and hazardous byproducts being generated and released to the environment.
New and more robust anti-friction and -wear control technologies may thus have a significant positive impact on improving the efficiency and environmental cleanliness of the current legacy fleet and future transportation systems. Effective control of friction in other industrial sectors such as manufacturing, power generation, mining and oil exploration, and agricultural and earthmoving machinery may bring more energy savings. Therefore, this project was timely and responsive to the energy and environmental objectives of DOE and our nation. In this project, most of the boron-based mater

  8. Large-scale production, harvest and logistics of switchgrass (Panicum virgatum L.) - current technology and envisioning a mature technology

    SciTech Connect (OSTI)

    Sokhansanj, Shahabaddine; Turhollow, Jr., Anthony; Mani, Sudhagar; Kumar, Amit; Bransby, David; Lynd, L.; Laser, Mark

    2009-03-01

    Switchgrass (Panicum virgatum L.) is a promising cellulosic biomass feedstock for biorefineries and biofuel production. This paper reviews current and future potential technologies for production, harvest, storage, and transportation of switchgrass. Our analysis indicates that for a yield of 10 Mg ha{sup -1}, the current cost of producing switchgrass (after establishment) is about $41.50 Mg{sup -1}. The costs may be reduced to about half this if the yield is increased to 30 Mg ha{sup -1} through genetic improvement, intensive crop management, and/or optimized inputs. At a yield of 10 Mg ha{sup -1}, we estimate that harvesting costs range from $23.72 Mg{sup -1} for current baling technology to less than $16 Mg{sup -1} when using a loafing collection system. At yields of 20 and 30 Mg ha{sup -1} with an improved loafing system, harvesting costs are even lower at $12.75 Mg{sup -1} and $9.59 Mg{sup -1}, respectively. Transport costs vary depending upon yield and fraction of land under switchgrass, bulk density of biomass, and total annual demand of a biorefinery. For a 2000 Mg d{sup -1} plant and an annual yield of 10 Mg ha{sup -1}, the transport cost is an estimated $15.42 Mg{sup -1}, assuming 25% of the land is under switchgrass production. Total delivered cost of switchgrass using current baling technology is $80.64 Mg{sup -1}, requiring an energy input of 8.5% of the feedstock higher heating value (HHV). With mature technology, for example, a large, loaf collection system, the total delivered cost is reduced to about $71.16 Mg{sup -1} with 7.8% of the feedstock HHV required as input. Further cost reduction can be achieved by combining mature technology with increased crop productivity. Delivered cost and energy input do not vary significantly as biorefinery capacity increases from 2000 Mg d{sup -1} to 5000 Mg d{sup -1} because the cost of increased distance to access a larger volume of feedstock offsets the gains in increased biorefinery capacity. This paper outlines possible scenarios for the expansion of switchgrass handling to 30 Tg (million Mg) in 2015 and 100 Tg in 2030 based on predicted growth of the biorefinery industry in the USA. The value of switchgrass collection operations is estimated at more than $0.6 billion in 2015 and more than $2.1 billion in 2030. The estimated value of post-harvest operations is $0.6-$2.0 billion in 2015, and $2.0-$6.5 billion in 2030, depending on the degree of preprocessing. The need for power equipment (tractors) will increase from 100 MW in 2015 to 666 MW in 2030, with corresponding annual values of $150 and $520 million, respectively. (c) 2009 Society of Chemical Industry and John Wiley & Sons, Ltd
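
    The delivered-cost figure is the sum of the three stage costs quoted above, as a quick check shows:

        # Reproducing the abstract's arithmetic for current baling technology
        # at a 10 Mg/ha yield (figures as quoted; Mg = megagram = tonne):
        production = 41.50   # $/Mg, after establishment
        harvest    = 23.72   # $/Mg, current baling
        transport  = 15.42   # $/Mg, 2000 Mg/d plant, 25% of land in switchgrass
        print(f"{production + harvest + transport:.2f} $/Mg")   # -> 80.64, as stated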

  9. Hierarchical chlorine-doped rutile TiO{sub 2} spherical clusters of nanorods: Large-scale synthesis and high photocatalytic activity

    SciTech Connect (OSTI)

    Xu, Hua; Zheng, Zhi; Zhang, Lizhi; Zhang, Hailu; Deng, Feng

    2008-09-15

    In this study, we report the synthesis of a hierarchical chlorine-doped rutile TiO{sub 2} spherical-clusters-of-nanorods photocatalyst on a large scale via a soft interface approach. This catalyst showed much higher photocatalytic activity than the well-known commercial titania (Degussa P25) under visible light ({lambda}>420 nm). The resulting sample was characterized by X-ray diffraction (XRD), scanning electron microscopy (SEM), transmission electron microscopy (TEM), high-resolution TEM (HRTEM), nitrogen adsorption, X-ray photoelectron spectroscopy (XPS), UV-vis diffuse reflectance spectroscopy, {sup 1}H solid magic-angle spinning nuclear magnetic resonance (MAS-NMR) and photoluminescence spectroscopy. On the basis of the characterization results, we found that the chlorine doping resulted in a red shift of absorption, higher surface acidity, and crystal defects in the photocatalyst, which account for its high photocatalytic activity under visible light ({lambda}>420 nm). These hierarchical chlorine-doped rutile TiO{sub 2} spherical clusters of nanorods are very attractive for environmental pollutant removal and solar cells because of their easy separation and high activity.

  10. The effect of the geomagnetic field on cosmic ray energy estimates and large scale anisotropy searches on data from the Pierre Auger Observatory

    SciTech Connect (OSTI)

    Abreu, P.; Aglietta, M.; Ahn, E.J.; Albuquerque, I.F.M.; Allard, D.; Allekotte, I.; Allen, J.; Allison, P.; Alvarez Castillo, J.; Alvarez-Muniz, J.; Ambrosio, M.; and others

    2011-11-01

    We present a comprehensive study of the influence of the geomagnetic field on the energy estimation of extensive air showers with a zenith angle smaller than 60°, detected at the Pierre Auger Observatory. The geomagnetic field induces an azimuthal modulation of the estimated energy of cosmic rays up to the {approx} 2% level at large zenith angles. We present a method to account for this modulation of the reconstructed energy. We analyse the effect of the modulation on large scale anisotropy searches in the arrival direction distributions of cosmic rays. At a given energy, the geomagnetic effect is shown to induce a pseudo-dipolar pattern at the percent level in the declination distribution that needs to be accounted for. In this work, we have identified and quantified a systematic uncertainty affecting the energy determination of cosmic rays detected by the surface detector array of the Pierre Auger Observatory. This systematic uncertainty, induced by the influence of the geomagnetic field on the shower development, has a strength that depends on both the zenith and the azimuthal angles. Consequently, we have shown that it induces distortions of the estimated cosmic ray event rate at a given energy at the percent level in both the azimuthal and the declination distributions, the latter of which mimics an almost dipolar pattern. We have also shown that the induced distortions are already at the level of the statistical uncertainties for a number of events N {approx_equal} 32 000 (we note that the full Auger surface detector array collects about 6500 events per year with energies above 3 EeV). Accounting for these effects is thus essential for the correct interpretation of large scale anisotropy measurements that explicitly exploit the declination distribution.

  11. Comparing large scale CCS deployment potential in the USA and China: a detailed analysis based on country-specific CO2 transport & storage cost curves

    SciTech Connect (OSTI)

    Dahowski, Robert T.; Davidson, Casie L.; Dooley, James J.

    2011-04-18

    The United States and China are the two largest emitters of greenhouse gases in the world, and their projected continued growth and reliance on fossil fuels, especially coal, make them strong candidates for CCS. Previous work has revealed that both nations have over 1,600 large electric utility and other industrial point CO2 sources as well as very large CO2 storage resources, on the order of 2,000 billion metric tons (Gt) of onshore storage capacity. In each case, the vast majority of this capacity is found in deep saline formations. In both the USA and China, candidate storage reservoirs are likely to be accessible by most sources, with over 80% of these large industrial CO2 sources having a CO2 storage option within just 80 km. This suggests a strong potential for CCS deployment as a meaningful option in efforts to reduce CO2 emissions from these large, vibrant economies. However, while the USA and China possess many similarities with regard to the potential value that CCS might provide, including the range of costs at which CCS may be available to most large CO2 sources in each nation, there are a number of more subtle differences that shape how CCS deployment may unfold in each country; understanding them will help the USA and China work together - and in step with the rest of the world - to reduce greenhouse gas emissions most efficiently. This paper details the first-ever analysis of CCS deployment costs in these two countries based on methodologically comparable CO2 source and sink inventories, economic analysis, geospatial source-sink matching and cost curve modeling. This type of analysis provides valuable insight into the degree to which early and sustained opportunities for climate change mitigation via commercial-scale CCS are available to the two countries, and could facilitate greater collaboration in areas where those opportunities overlap.

  12. Preliminary Geologic Characterization of West Coast States for Geologic Sequestration

    SciTech Connect (OSTI)

    Larry Myer

    2005-09-29

    Characterization of geological sinks for sequestration of CO{sub 2} in California, Nevada, Oregon, and Washington was carried out as part of Phase I of the West Coast Regional Carbon Sequestration Partnership (WESTCARB) project. Results show that there are geologic storage opportunities in the region within each of the following major technology areas: saline formations, oil and gas reservoirs, and coal beds. The work focused on sedimentary basins as the initial most-promising targets for geologic sequestration. Geographical Information System (GIS) layers showing sedimentary basins and oil, gas, and coal fields in those basins were developed. The GIS layers were attributed with information on the subsurface, including sediment thickness, presence and depth of porous and permeable sandstones, and, where available, reservoir properties. California offers outstanding sequestration opportunities because of its large capacity and the potential of value-added benefits from enhanced oil recovery (EOR) and enhanced gas recovery (EGR). The estimate for storage capacity of saline formations in the ten largest basins in California ranges from about 150 to about 500 Gt of CO{sub 2}, depending on assumptions about the fraction of the formations used and the fraction of the pore volume filled with separate-phase CO{sub 2}. Potential CO{sub 2}-EOR storage was estimated to be 3.4 Gt, based on a screening of reservoirs using depth, an API gravity cutoff, and cumulative oil produced. The cumulative production from gas reservoirs (screened by depth) suggests a CO{sub 2} storage capacity of 1.7 Gt. In Oregon and Washington, sedimentary basins along the coast also offer sequestration opportunities. Of particular interest is the Puget Trough Basin, which contains up to 1,130 m (3,700 ft) of unconsolidated sediments overlying up to 3,050 m (10,000 ft) of Tertiary sedimentary rocks. The Puget Trough Basin also contains deep coal formations, which are sequestration targets and may have potential for enhanced coal bed methane recovery (ECBM).
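
    The width of such capacity ranges follows directly from the multiplicative assumptions in a volumetric estimate; a sketch with illustrative numbers (not WESTCARB's inputs):

        # Generic volumetric storage estimate of the kind behind such ranges:
        # M = V_formation * porosity * E * rho_CO2, where the efficiency factor
        # E lumps the fraction of formations used and of pore space filled.
        v_formation = 1.0e13    # m^3 of candidate saline formation (illustrative)
        porosity    = 0.15
        efficiency  = 0.05      # the key assumption driving the quoted spread
        rho_co2     = 700.0     # kg/m^3, supercritical CO2 at depth
        mass_gt = v_formation * porosity * efficiency * rho_co2 / 1e12
        print(f"{mass_gt:.1f} Gt CO2")   # -> 52.5 with these assumptions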

  13. Large scale DNA microsequencing device

    DOE Patents [OSTI]

    Foote, Robert S. (Oak Ridge, TN)

    1997-01-01

    A microminiature sequencing apparatus and method provide means for simultaneously obtaining sequences of plural polynucleotide strands. The apparatus comprises a microchip into which plural channels have been etched using standard lithographic procedures and chemical wet etching. The channels include a reaction well and a separating section. Enclosing the channels is accomplished by bonding a transparent cover plate over the apparatus. A first oligonucleotide strand is chemically affixed to the apparatus through an alkyl chain. Subsequent nucleotides are selected by complementary base pair bonding. A target nucleotide strand is used to produce a family of labelled sequencing strands in each channel which are separated in the separating section. During or following separation the sequences are determined using appropriate detection means.

  14. Large scale DNA microsequencing device

    DOE Patents [OSTI]

    Foote, Robert S. (Oak Ridge, TN)

    1999-01-01

    A microminiature sequencing apparatus and method provide means for simultaneously obtaining sequences of plural polynucleotide strands. The apparatus comprises a microchip into which plural channels have been etched using standard lithographic procedures and chemical wet etching. The channels include a reaction well and a separating section. Enclosing the channels is accomplished by bonding a transparent cover plate over the apparatus. A first oligonucleotide strand is chemically affixed to the apparatus through an alkyl chain. Subsequent nucleotides are selected by complementary base pair bonding. A target nucleotide strand is used to produce a family of labelled sequencing strands in each channel which are separated in the separating section. During or following separation the sequences are determined using appropriate detection means.

  15. Large scale DNA microsequencing device

    DOE Patents [OSTI]

    Foote, R.S.

    1999-08-31

    A microminiature sequencing apparatus and method provide means for simultaneously obtaining sequences of plural polynucleotide strands. The apparatus comprises a microchip into which plural channels have been etched using standard lithographic procedures and chemical wet etching. The channels include a reaction well and a separating section. Enclosing the channels is accomplished by bonding a transparent cover plate over the apparatus. A first oligonucleotide strand is chemically affixed to the apparatus through an alkyl chain. Subsequent nucleotides are selected by complementary base pair bonding. A target nucleotide strand is used to produce a family of labelled sequencing strands in each channel which are separated in the separating section. During or following separation the sequences are determined using appropriate detection means. 11 figs.

  16. Large scale DNA microsequencing device

    DOE Patents [OSTI]

    Foote, R.S.

    1997-08-26

    A microminiature sequencing apparatus and method provide a means for simultaneously obtaining sequences of plural polynucleotide strands. The apparatus consists of a microchip into which plural channels have been etched using standard lithographic procedures and chemical wet etching. The channels include a reaction well and a separating section. Enclosing the channels is accomplished by bonding a transparent cover plate over the apparatus. A first oligonucleotide strand is chemically affixed to the apparatus through an alkyl chain. Subsequent nucleotides are selected by complementary base pair bonding. A target nucleotide strand is used to produce a family of labelled sequencing strands in each channel which are separated in the separating section. During or following separation the sequences are determined using appropriate detection means. 17 figs.

  17. Hydrological/Geological Studies

    Office of Legacy Management (LM)

    8.2 Hydrological/Geological Studies, Book 1: Radiochemical Analyses of Water Samples from Selected Streams, Wells, Springs and Precipitation Collected During Re-Entry Drilling, Project Rulison, 1971 (HGS 8)

  18. Stochastic Engine Final Report: Applying Markov Chain Monte Carlo Methods with Importance Sampling to Large-Scale Data-Driven Simulation

    SciTech Connect (OSTI)

    Glaser, R E; Johannesson, G; Sengupta, S; Kosovic, B; Carle, S; Franz, G A; Aines, R D; Nitao, J J; Hanley, W G; Ramirez, A L; Newmark, R L; Johnson, V M; Dyer, K M; Henderson, K A; Sugiyama, G A; Hickling, T L; Pasyanos, M E; Jones, D A; Grimm, R J; Levine, R A

    2004-03-11

    Accurate prediction of complex phenomena can be greatly enhanced through the use of data and observations to update simulations. The ability to create these data-driven simulations is limited by error and uncertainty in both the data and the simulation. The stochastic engine project addressed this problem through the development and application of a family of Markov Chain Monte Carlo methods utilizing importance sampling driven by forward simulators to minimize the time spent searching very large state spaces. The stochastic engine rapidly chooses among a very large number of hypothesized states and selects those that are consistent (within error) with all the information at hand. Predicted measurements from the simulator are used to estimate the likelihood of actual measurements, which in turn reduces the uncertainty in the original sample space via a conditional probability method called Bayesian inferencing. This highly efficient, staged Metropolis-type search algorithm allows us to address extremely complex problems and opens the door to solving many data-driven, nonlinear, multidimensional problems. A key challenge has been developing representation methods that integrate the local details of real data with the global physics of the simulations, enabling supercomputers to efficiently solve the problem. Development focused on large-scale problems, and on examining the mathematical robustness of the approach in diverse applications. Multiple data types were combined with large-scale simulations to evaluate systems with {approx}10{sup 20,000} possible states (detecting underground leaks at the Hanford waste tanks). The probable uses of chemical process facilities were assessed using an evidence-tree representation and in-process updating. Other applications included contaminant flow paths at the Savannah River Site, locating structural flaws in buildings, improving models for seismic travel-time systems used to monitor nuclear proliferation, characterizing the source of indistinct atmospheric plumes, and improving flash radiography. In the course of developing these applications, we also developed new methods to cluster and analyze the results of the state-space searches, as well as a number of algorithms to improve the search speed and efficiency. Our generalized solution contributes both a means to make more informed predictions of the behavior of very complex systems, and to improve those predictions as events unfold, using new data in real time.
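
    A minimal sketch of the Metropolis-type search at the heart of such an engine, with the forward simulator hidden inside a user-supplied log-likelihood (names and the proposal mechanism are placeholders, not the project's code):

        import numpy as np

        def metropolis_search(log_likelihood, propose, x0, n_steps=10_000, rng=None):
            """Metropolis-type search: a forward simulator inside log_likelihood
            scores each hypothesized state against the data; states consistent
            with the observations are preferentially retained."""
            rng = rng or np.random.default_rng()
            x, ll = x0, log_likelihood(x0)
            chain = [x0]
            for _ in range(n_steps):
                x_new = propose(x, rng)           # hypothesize a nearby state
                ll_new = log_likelihood(x_new)    # run the (expensive) forward simulator
                if np.log(rng.uniform()) < ll_new - ll:
                    x, ll = x_new, ll_new         # accept the move
                chain.append(x)
            return chain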

  19. LARGE-SCALE DISTRIBUTION OF ARRIVAL DIRECTIONS OF COSMIC RAYS DETECTED ABOVE 10{sup 18} eV AT THE PIERRE AUGER OBSERVATORY

    SciTech Connect (OSTI)

    Abreu, P.; Andringa, S.; Aglietta, M.; Ahlers, M.; Ahn, E. J.; Albuquerque, I. F. M.; Allard, D.; Allekotte, I.; Allen, J.; Allison, P.; Almela, A.; Alvarez Castillo, J.; Alvarez-Muniz, J.; Alves Batista, R.; Ambrosio, M.; Aramo, C.; Aminaei, A.; Anchordoqui, L.; Antičić, T.; Arganda, E.; Collaboration: Pierre Auger Collaboration; and others

    2012-12-15

    A thorough search for large-scale anisotropies in the distribution of arrival directions of cosmic rays detected above 10{sup 18} eV at the Pierre Auger Observatory is presented. This search is performed as a function of both declination and right ascension in several energy ranges above 10{sup 18} eV, and reported in terms of dipolar and quadrupolar coefficients. Within the systematic uncertainties, no significant deviation from isotropy is revealed. Assuming that any cosmic-ray anisotropy is dominated by dipole and quadrupole moments in this energy range, upper limits on their amplitudes are derived. These upper limits allow us to test the origin of cosmic rays above 10{sup 18} eV from stationary Galactic sources densely distributed in the Galactic disk and predominantly emitting light particles in all directions.

  20. Large scale distribution of ultra high energy cosmic rays detected at the Pierre Auger Observatory with zenith angles up to 80°

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Aab, Alexander

    2015-03-30

    In this study, we present the results of an analysis of the large angular scale distribution of the arrival directions of cosmic rays with energy above 4 EeV detected at the Pierre Auger Observatory, including for the first time events with zenith angle between 60° and 80°. We perform two Rayleigh analyses, one in the right ascension and one in the azimuth angle distributions, that are sensitive to modulations in right ascension and declination, respectively. The largest departure from isotropy appears in the E > 8 EeV energy bin, with an amplitude for the first harmonic in right ascension r{sub 1}{sup α} = (4.4 ± 1.0) x 10{sup -2}, which has a chance probability P(≥ r{sub 1}{sup α}) = 6.4 x 10{sup -5}, reinforcing the hint previously reported with vertical events alone.
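
    The Rayleigh first-harmonic analysis quoted here follows a standard formalism; a minimal sketch, assuming an array of event right ascensions in radians:

        import numpy as np

        def rayleigh_first_harmonic(alpha):
            """First-harmonic Rayleigh analysis in right ascension: amplitude r,
            phase, and the chance probability that an isotropic sky of N events
            yields an amplitude >= r, P = exp(-N r^2 / 4)."""
            n = alpha.size
            a = 2.0 / n * np.sum(np.cos(alpha))
            b = 2.0 / n * np.sum(np.sin(alpha))
            r = np.hypot(a, b)
            return r, np.arctan2(b, a), np.exp(-n * r**2 / 4.0)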

  1. Co-optimizing Generation and Transmission Expansion with Wind Power in Large-Scale Power Grids Implementation in the US Eastern Interconnection

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    You, Shutang; Hadley, Stanton W.; Shankar, Mallikarjun; Liu, Yilu

    2016-01-12

    This paper studies the generation and transmission expansion co-optimization problem with a high wind power penetration rate in the US Eastern Interconnection (EI) power grid. The generation and transmission expansion problem for the EI system is modeled as a mixed-integer programming (MIP) problem. The paper also analyzes a time series generation method to capture the variation and correlation of both load and wind power across regions. The obtained series can be easily introduced into the expansion planning problem and then solved through existing MIP solvers. Simulation results show that the proposed planning model and series generation method can improve the expansion result significantly by modeling more detailed information of wind and load variation among regions in the US EI system. Moreover, the improved expansion plan that combines generation and transmission will help system planners and policy makers maximize social welfare in large-scale power grids.
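
    The core decision structure of such an expansion model fits in a few lines with a generic MIP solver. A toy sketch using scipy.optimize.milp (SciPy >= 1.9), with two candidate units and illustrative costs that are not EI data or the authors' formulation:

        import numpy as np
        from scipy.optimize import milp, LinearConstraint, Bounds

        # Variables x = [u1, u2, g1, g2]: binary build decisions u, dispatch g.
        build_cost = np.array([100.0, 160.0])   # $ per unit built
        var_cost   = np.array([2.0, 1.0])       # $ per MW dispatched
        capacity   = np.array([50.0, 80.0])     # MW per unit
        demand     = 90.0                       # MW to serve

        c = np.concatenate([build_cost, var_cost])
        A = [[-capacity[0], 0, 1, 0],           # g1 <= cap1 * u1
             [0, -capacity[1], 0, 1],           # g2 <= cap2 * u2
             [0, 0, -1, -1]]                    # g1 + g2 >= demand
        res = milp(c,
                   constraints=LinearConstraint(A, -np.inf, [0, 0, -demand]),
                   integrality=[1, 1, 0, 0],    # u integer in [0, 1], g continuous
                   bounds=Bounds([0, 0, 0, 0], [1, 1, np.inf, np.inf]))
        print(res.x, res.fun)   # both units built; dispatch favors the cheaper unit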

  2. Organo-sulfur molecules enable iron-based battery electrodes to meet the challenges of large-scale electrical energy storage

    SciTech Connect (OSTI)

    Yang, B; Malkhandi, S; Manohar, AK; Prakash, GKS; Narayanan, SR

    2014-07-03

    Rechargeable iron-air and nickel-iron batteries are attractive as sustainable and inexpensive solutions for large-scale electrical energy storage because of the global abundance and eco-friendliness of iron, and the robustness of iron-based batteries to extended cycling. Despite these advantages, the commercial use of iron-based batteries has been limited by their low charging efficiency. This limitation arises from the iron electrodes evolving hydrogen extensively during charging. The total suppression of hydrogen evolution has been a significant challenge. We have found that organo-sulfur compounds with various structural motifs (linear and cyclic thiols, dithiols, thioethers and aromatic thiols) when added in milli-molar concentration to the aqueous alkaline electrolyte, reduce the hydrogen evolution rate by 90%. These organo-sulfur compounds form strongly adsorbed layers on the iron electrode and block the electrochemical process of hydrogen evolution. The charge-transfer resistance and double-layer capacitance of the iron/electrolyte interface confirm that the extent of suppression of hydrogen evolution depends on the degree of surface coverage and the molecular structure of the organo-sulfur compound. An unanticipated electrochemical effect of the adsorption of organo-sulfur molecules is "de-passivation" that allows the iron electrode to be discharged at high current values. The strongly adsorbed organo-sulfur compounds were also found to resist electro-oxidation even at the positive electrode potentials at which oxygen evolution can occur. Through testing on practical rechargeable battery electrodes we have verified the substantial improvements to the efficiency during charging and the increased capability to discharge at high rates. We expect these performance advances to enable the design of efficient, inexpensive and eco-friendly iron-based batteries for large-scale electrical energy storage.

  3. Determination of Large-Scale Cloud Ice Water Concentration by Combining Surface Radar and Satellite Data in Support of ARM SCM Activities

    SciTech Connect (OSTI)

    Liu, Guosheng

    2013-03-15

    Single-column modeling (SCM) is one of the key elements of Atmospheric Radiation Measurement (ARM) research initiatives for the development and testing of various physical parameterizations to be used in general circulation models (GCMs). The data required for use with an SCM include observed vertical profiles of temperature, water vapor, and condensed water, as well as the large-scale vertical motion and tendencies of temperature, water vapor, and condensed water due to horizontal advection. Surface-based measurements operated at ARM sites and upper-air sounding networks supply most of the required variables for model inputs, but do not provide the horizontal advection term of condensed water. Since surface cloud radar and microwave radiometer observations at ARM sites are single-point measurements, they can provide the amount of condensed water at the location of observation sites, but not a horizontal distribution of condensed water contents. Consequently, observational data for the large-scale advection tendencies of condensed water have not been available to the ARM cloud modeling community based on surface observations alone. This lack of advection data of water condensate could cause large uncertainties in SCM simulations. Additionally, to evaluate GCMs’ cloud physical parameterization, we need to compare GCM results with observed cloud water amounts over a scale that is large enough to be comparable to what a GCM grid represents. To this end, the point measurements at ARM surface sites are again not adequate. Therefore, cloud water observations over a large area are needed. The main goal of this project is to retrieve ice water contents over an area of 10 x 10 deg. surrounding the ARM sites by combining surface and satellite observations. Building on the progress made during previous ARM research, we have conducted retrievals of 3-dimensional ice water content by combining surface radar/radiometer and satellite measurements, and have produced 3-D cloud ice water contents in support of cloud modeling activities. The approach of the study is to expand a (surface) point measurement to a (satellite) area measurement. That is, the study takes advantage of the high-quality cloud measurements (particularly cloud radar and microwave radiometer measurements) at the ARM site locations. We use the cloud ice water characteristics derived from the point measurement to guide/constrain a satellite retrieval algorithm, then use the satellite algorithm to derive the 3-D cloud ice water distributions within a 10° (latitude) x 10° (longitude) area. During the research period, we developed, validated and improved our cloud ice water retrievals, and produced and archived on the ARM website, as a PI product, the 3-D cloud ice water contents from combined satellite high-frequency microwave and surface radar observations for the SGP March 2000 IOP and the TWP-ICE 2006 IOP, over 10 deg. x 10 deg. areas centered at the ARM SGP central facility and Darwin sites. We have also worked on validation of the 3-D ice water product with CloudSat data and on synergy with visible/infrared cloud ice water retrievals for better results at low ice water conditions, and created a long-term (several-year) ice water climatology over the 10 x 10 deg. areas of the ARM SGP and TWP sites, which we then compared with GCMs.

  4. Geologic and tectonic characteristics of rockbursts

    SciTech Connect (OSTI)

    Adushkin, V.V.; Charlamov, V.A.; Kondratyev, S.V.; Rybnov, Y.S.; Shemyakin, V.M.; Sisov, I.A.; Syrnikov, N.M.; Turuntaev, S.B.; Vasilyeva, T.V.

    1995-06-01

    Modern mining enterprises have attained such a scale of engineering activity that their direct influence on the rock massif, and in some cases on the regional seismic regime, is beyond doubt. Excavation and removal of large volumes of rock mass, industrial explosions, and other technological factors can, over long periods, accumulate man-made changes in rock massifs capable of causing catastrophic consequences. The changing stress state in sizable domains of the massif creates dangerous concentrations of stress at large geological heterogeneities - faults localized in the zone of mining works. External influences can then trigger phenomena such as tectonic rockbursts and man-made earthquakes. The rockburst problem has existed in world mining practice for more than two hundred years, and its relevance is not diminishing but steadily growing, owing to the increasing depth of mining works, the enlargement of excavation volumes, and the prospect of safely using the potential energy of the rock massif to make mastering the Earth's interior easier and cheaper. The purpose of the present work is to study the influence of engineering activity on processes occurring in the upper part of the Earth's crust, in particular in a rock massif. The rock massif is treated in these studies as a geophysical medium - an approach that accounts for the block structure of the medium and the continuous exchange of energy between parts of that structure. The concept of a "geophysical medium" is applied widely in geophysics and emphasizes the difference between the actual Earth's crust and rock massifs and the continuous-media models of mechanics.

  5. LARGE-SCALE CORONAL PROPAGATING FRONTS IN SOLAR ERUPTIONS AS OBSERVED BY THE ATMOSPHERIC IMAGING ASSEMBLY ON BOARD THE SOLAR DYNAMICS OBSERVATORY: AN ENSEMBLE STUDY

    SciTech Connect (OSTI)

    Nitta, Nariaki V.; Schrijver, Carolus J.; Title, Alan M.; Liu, Wei

    2013-10-10

    This paper presents a study of a large sample of global disturbances in the solar corona with characteristic propagating fronts seen as intensity enhancements, similar to the phenomena that have often been referred to as Extreme Ultraviolet Imaging Telescope (EIT) waves or extreme-ultraviolet (EUV) waves. EUV images obtained by the Atmospheric Imaging Assembly (AIA) on board the Solar Dynamics Observatory now provide a significantly improved view of these large-scale coronal propagating fronts (LCPFs). Between 2010 April and 2013 January, a total of 171 LCPFs were identified through visual inspection of AIA images in the 193 Å channel. Here we focus on the 138 LCPFs that are seen to propagate across the solar disk, first studying how they are associated with flares, coronal mass ejections (CMEs), and type II radio bursts. We measure the speed of each LCPF in various directions until it is clearly altered by active regions or coronal holes. The highest speed is extracted for each LCPF. It is often considerably higher than that of EIT waves. We do not find a pattern where faster LCPFs decelerate and slow LCPFs accelerate. Furthermore, the speeds are not strongly correlated with the flare intensity or CME magnitude, nor do they show an association with type II bursts. Nor do we find a good correlation between the speeds of LCPFs and CMEs in a subset of 86 LCPFs observed by one or both of the Solar and Terrestrial Relations Observatory spacecraft as limb events.

  6. Brine flow in heated geologic salt.

    SciTech Connect (OSTI)

    Kuhlman, Kristopher L.; Malama, Bwalya

    2013-03-01

    This report is a summary of the physical processes, primary governing equations, solution approaches, and historic testing related to brine migration in geologic salt. Although most information presented in this report is not new, we synthesize a large amount of material scattered across dozens of laboratory reports, journal papers, conference proceedings, and textbooks. We present a mathematical description of the governing brine flow mechanisms in geologic salt. We outline the general coupled thermal, multi-phase hydrologic, and mechanical processes. We derive these processes' governing equations, which can be used to predict brine flow. These equations are valid under a wide variety of conditions applicable to radioactive waste disposal in rooms and boreholes excavated into geologic salt.
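
    As a representative example of the kind of governing equation the report derives, a mass balance for fluid phase β (brine or gas) with a Darcy flux can be written as follows (generic notation, not necessarily the report's):

        \frac{\partial}{\partial t}\big(\phi\, S_\beta\, \rho_\beta\big)
          = \nabla \cdot \Big[\rho_\beta\, \frac{k\, k_{r\beta}}{\mu_\beta}
            \big(\nabla P_\beta - \rho_\beta\, g\, \nabla z\big)\Big] + q_\beta

    Here φ is porosity, S{sub β} saturation, ρ{sub β} density, k and k{sub rβ} the intrinsic and relative permeabilities, μ{sub β} viscosity, P{sub β} pressure, and q{sub β} a source term; the thermal and mechanical couplings enter through the temperature and stress dependence of these coefficients.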

  7. Performance of powder-filled evacuated panel insulation in a manufactured home roof cavity: Tests in the Large Scale Climate Simulator

    SciTech Connect (OSTI)

    Petrie, T.W.; Kosny, J.; Childs, P.W.

    1996-03-01

    A full-scale section of half the top of a single-wide manufactured home has been studied in the Large Scale Climate Simulator (LSCS) at the Oak Ridge National Laboratory. A small roof cavity with little room for insulation at the eaves is often the case with single-wide units and limits practical ways to improve thermal performance. The purpose of the current tests was to obtain steady-state performance data for the roof cavity of the manufactured home test section when the roof cavity was insulated with fiberglass batts, blown-in rock wool insulation or combinations of these insulations and powder-filled evacuated panel (PEP) insulation. Four insulation configurations were tested: (A) a configuration with two layers of nominal R{sub US}-7 h {center_dot} ft{sup 2} {center_dot} F/BTU (R{sub SI}-1.2 m{sup 2} {center_dot} K/W) fiberglass batts; (B) a layer of PEPs and one layer of the fiberglass batts; (C) four layers of the fiberglass batts; and (D) an average 4.1 in. (10.4 cm) thick layer of blown-in rock wool at an average density of 2.4 lb/ft{sup 3} (38 kg/m{sup 3}). Effects of additional sheathing were determined for Configurations B and C. With Configuration D over the ceiling, two layers of expanded polystyrene (EPS) boards, each about the same thickness as the PEPs, were installed over the trusses instead of the roof. Aluminum foils facing the attic and over the top layer of EPS were added. The top layer of EPS was then replaced by PEPs.

  8. Multipurpose bedrock surficial, and environmental geologic maps, New River valley, southwest Virginia

    SciTech Connect (OSTI)

    Schultz, A.; Collins, T.

    1994-03-01

    Multipurpose bedrock, surficial, and environmental geologic maps have recently been completed for portions of the Valley and Ridge province of southwest VA. The maps, at both 1:100,000 and 1:24,000 scales, show generalized and detailed bedrock geology grouped by lithology and environmental hazard associations. Also shown are a variety of alluvial, colluvial, debris flow, and landslide deposits, as well as karst features. Multidisciplinary research topics addressed during the mapping included slope evolution and geomorphology, drainage history and terrace distribution, ancient large-scale landsliding, and sinkhole development. The maps have been used by land-use planners and engineering firms in an evaluation of Appalachian paleoseismicity and to assess potential groundwater contamination and subsidence in karst areas. The maps are being used for environmental hazard assessment and site selection of a proposed large electric powerline that crosses the Jefferson National Forest. Also, the maps are proving useful in planning for a public-access interpretive geologic center focused on large-scale slope failures. Some of the largest known landslides in eastern North America took place within the map area. Field comparisons and detailed structure mapping of similar features along the Front Range of the Colorado Rockies indicate that the landslides were probably emplaced during a single catastrophic event of short duration. Although the Giles County seismic zone is nearby, stability analyses of slopes in the map area have shown that failure need not have been initiated by a seismic event. Several distinct colluvial units mapped within the area of landslides document a period of extensive weathering that postdates slide emplacement. Radiocarbon dates from landslide sag ponds indicate a minimum age of 9,860 B.P. for emplacement of some of the landslides. These results indicate that pre-slide colluvial and debris flow deposits are at least Pleistocene in age.

  9. Assembly of 500,000 inter-specific catfish expressed sequence tags and large scale gene-associated marker development for whole genome association studies

    SciTech Connect (OSTI)

    Catfish Genome Consortium; Wang, Shaolin; Peatman, Eric; Abernathy, Jason; Waldbieser, Geoff; Lindquist, Erika; Richardson, Paul; Lucas, Susan; Wang, Mei; Li, Ping; Thimmapuram, Jyothi; Liu, Lei; Vullaganti, Deepika; Kucuktas, Huseyin; Murdock, Christopher; Small, Brian C; Wilson, Melanie; Liu, Hong; Jiang, Yanliang; Lee, Yoona; Chen, Fei; Lu, Jianguo; Wang, Wenqi; Xu, Peng; Somridhivej, Benjaporn; Baoprasertkul, Puttharat; Quilang, Jonas; Sha, Zhenxia; Bao, Baolong; Wang, Yaping; Wang, Qun; Takano, Tomokazu; Nandi, Samiran; Liu, Shikai; Wong, Lilian; Kaltenboeck, Ludmilla; Quiniou, Sylvie; Bengten, Eva; Miller, Norman; Trant, John; Rokhsar, Daniel; Liu, Zhanjiang

    2010-03-23

    Background-Through the Community Sequencing Program, a catfish EST sequencing project was carried out through a collaboration between the catfish research community and the Department of Energy's Joint Genome Institute. Prior to this project, only a limited EST resource from catfish was available for the purpose of SNP identification. Results-A total of 438,321 quality ESTs were generated from 8 channel catfish (Ictalurus punctatus) and 4 blue catfish (Ictalurus furcatus) libraries, bringing the number of catfish ESTs to nearly 500,000. Assembly of all catfish ESTs resulted in 45,306 contigs and 66,272 singletons. Over 35 percent of the unique sequences had significant similarities to known genes, allowing the identification of 14,776 unique genes in catfish. Over 300,000 putative SNPs have been identified, of which approximately 48,000 are high-quality SNPs identified from contigs with at least four sequences and with the minor allele present in at least two sequences in the contig. The EST resource should be valuable for identification of microsatellites, genome annotation, large-scale expression analysis, and comparative genome analysis. Conclusions-This project generated a large EST resource for catfish that captured the majority of the catfish transcriptome. The parallel analysis of ESTs from two closely related Ictalurid catfishes should also provide powerful means for the evaluation of ancient and recent gene duplications, and for the development of high-density microarrays in catfish. The inter- and intra-specific SNPs identified from the all-catfish EST dataset assembly will greatly benefit the catfish introgression breeding program and whole genome association studies.
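
    The quoted high-quality SNP criterion is a simple filter; a sketch, with record field names invented for illustration:

        def high_quality_snps(snps):
            """Keep SNPs from contigs with at least four sequences where the
            minor allele appears in at least two of them (the abstract's rule);
            'depth' and 'minor_count' are hypothetical field names."""
            return [s for s in snps if s["depth"] >= 4 and s["minor_count"] >= 2]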

  10. Programmed Nanomaterial Assemblies in Large Scales: Applications of Synthetic and Genetically- Engineered Peptides to Bridge Nano-Assemblies and Macro-Assemblies

    SciTech Connect (OSTI)

    Matsui, Hiroshi

    2014-09-09

    Work is reported in these areas: Large-scale & reconfigurable 3D structures of precise nanoparticle assemblies in self-assembled collagen peptide grids; Binary QD-Au NP 3D superlattices assembled with collagen-like peptides and energy transfer between QD and Au NP in 3D peptide frameworks; Catalytic peptides discovered by new hydrogel-based combinatorial phage display approach and their enzyme-mimicking 2D assembly; New autonomous motors of metal-organic frameworks (MOFs) powered by reorganization of self-assembled peptides at interfaces; Biomimetic assembly of proteins into microcapsules on oil-in-water droplets with structural reinforcement via biomolecular recognition-based cross-linking of surface peptides; and Biomimetic fabrication of strong freestanding genetically-engineered collagen peptide films reinforced by quantum dot joints. We gained broad knowledge about biomimetic material assembly from nanoscale to microscale ranges by coassembling peptides and NPs via biomolecular recognition. We discovered: Genetically-engineered collagen-like peptides can be self-assembled with Au NPs to generate 3D superlattices in large volumes (> µm{sup 3}); The assembly of the 3D peptide-Au NP superstructures is dynamic and the interparticle distance changes with assembly time as the reconfiguration of structure is triggered by pH change; QDs/NPs can be assembled with the peptide frameworks to generate 3D superlattices and these QDs/NPs can be electronically coupled for efficient energy transfer; The controlled assembly of catalytic peptides mimicking the catalytic pocket of enzymes can catalyze chemical reactions with high selectivity; and, For the bacteria-mimicking swimmer fabrication, peptide-MOF superlattices can power translational and propellant motions by the reconfiguration of peptide assembly at the MOF-liquid interface.

  11. Measurement of Electron Density near Plasma Grid of Large-scaled Negative Ion Source by Means of Millimeter-Wave Interferometer

    SciTech Connect (OSTI)

    Nagaoka, K.; Tokuzawa, T.; Tsumori, K.; Nakano, H.; Ito, Y.; Osakabe, M.; Ikeda, K.; Kisaki, M.; Shibuya, M.; Sato, M.; Komada, S.; Kondo, T.; Hayashi, H.; Asano, E.; Takeiri, Y.; Kaneko, O.

    2011-09-26

    A millimeter-wave interferometer with a frequency of 39 GHz ({lambda} = 7.7 mm) was newly installed on a large-scaled negative ion source. The measurable line-integrated electron density (n{sub e}l) ranges from 2x10{sup 16} to 7x10{sup 18} m{sup -2}, where n{sub e} and l represent the electron density and the plasma length along the millimeter-wave path, respectively. Our interest in this study is the behavior of negative ions and the reduction of electron density in the beam extraction region near the plasma grid. The first results demonstrate the feasibility of electron density measurement with the millimeter-wave interferometer in this region. The line-averaged electron density increases in proportion to the arc power under conditions without cesium seeding. A significant decrease of the electron density and a significant increase of the negative ion density were observed just after cesium seeding. The electron density measured with the interferometer agrees well with that observed with a Langmuir probe. A very high negative ion ratio of n{sub H-}/(n{sub e}+n{sub H-}) = 0.85 was achieved within 400 min after cesium seeding.
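
    For orientation, the quoted density range maps onto interferometer phase shifts through the standard microwave-interferometry relation (this back-of-envelope calculation is ours, not the authors'):

        # Standard plasma interferometry estimate: delta_phi ~= r_e * lambda * (n_e * l),
        # with r_e the classical electron radius. Not taken from the record itself.
        r_e = 2.818e-15           # classical electron radius, m
        lam = 7.7e-3              # 39 GHz wavelength, m
        for nel in (2e16, 7e18):  # stated measurable range of n_e*l, m^-2
            dphi = r_e * lam * nel
            print(f"n_e*l = {nel:.0e} m^-2 -> phase shift ~ {dphi:.2f} rad")
        # lower bound ~0.43 rad; upper bound ~152 rad (many fringes)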

  12. Sensitivity analysis for joint inversion of ground-penetrating radar and thermal-hydrological data from a large-scale underground heater test

    SciTech Connect (OSTI)

    Kowalsky, M.B.; Birkholzer, J.; Peterson, J.; Finsterle, S.; Mukhopadhyay, S.; Tsang, Y.T.

    2007-06-25

    We describe a joint inversion approach that combines geophysical and thermal-hydrological data for the estimation of (1) thermal-hydrological parameters (such as permeability, porosity, thermal conductivity, and parameters of the capillary pressure and relative permeability functions) that are necessary for predicting the flow of fluids and heat in fractured porous media, and (2) parameters of the petrophysical function that relates water saturation, porosity and temperature to the dielectric constant. The approach incorporates the coupled simulation of nonisothermal multiphase fluid flow and ground-penetrating radar (GPR) travel times within an optimization framework. We discuss application of the approach to a large-scale in situ heater test which was conducted at Yucca Mountain, Nevada, to better understand the coupled thermal, hydrological, mechanical, and chemical processes that may occur in the fractured rock mass around a geologic repository for high-level radioactive waste. We provide a description of the time-lapse geophysical data (i.e., cross-borehole ground-penetrating radar) and thermal-hydrological data (i.e., temperature and water content data) collected before and during the four-year heating phase of the test, and analyze the sensitivity of the most relevant thermal-hydrological and petrophysical parameters to the available data. To demonstrate feasibility of the approach, and as a first step toward comprehensive inversion of the heater test data, we apply the approach to estimate one parameter, the permeability of the rock matrix.
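
    The inversion minimizes a combined data misfit; a hedged sketch of the kind of joint objective involved follows (the weighting scheme and function names are our illustrative assumptions, not the authors' exact formulation):

        import numpy as np

        # Illustrative joint objective: weighted least squares over
        # thermal-hydrological data (temperatures, water contents) and GPR
        # travel times. forward_th/forward_gpr stand in for the coupled
        # simulators; sigma_* are assumed measurement errors.
        def joint_misfit(params, forward_th, forward_gpr, obs_th, obs_gpr,
                         sigma_th=1.0, sigma_gpr=1.0):
            r_th = (forward_th(params) - obs_th) / sigma_th
            r_gpr = (forward_gpr(params) - obs_gpr) / sigma_gpr
            return np.sum(r_th**2) + np.sum(r_gpr**2)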

  13. Establishing MICHCARB, a geological carbon sequestration research...

    Office of Scientific and Technical Information (OSTI)

    Western Michigan University 58 GEOSCIENCES Geological carbon sequestration Enhanced oil recovery Characterization of oil, gas and saline reservoirs Geological carbon...

  14. X6.9-CLASS FLARE-INDUCED VERTICAL KINK OSCILLATIONS IN A LARGE-SCALE PLASMA CURTAIN AS OBSERVED BY THE SOLAR DYNAMICS OBSERVATORY/ATMOSPHERIC IMAGING ASSEMBLY

    SciTech Connect (OSTI)

    Srivastava, A. K. [Aryabhatta Research Institute of Observational Sciences (ARIES), Manora Peak, Nainital 263 002 (India); Goossens, M. [Centre for Mathematical Plasma Astrophysics, Department of Mathematics, KU Leuven, Celestijnenlaan 200B, B-3001 Leuven (Belgium)

    2013-11-01

    We present rare observational evidence of vertical kink oscillations in a laminar and diffused large-scale plasma curtain as observed by the Atmospheric Imaging Assembly on board the Solar Dynamics Observatory. The X6.9-class flare in active region 11263 on 2011 August 9 induces a global large-scale disturbance that propagates in a narrow lane above the plasma curtain and creates a low density region that appears as a dimming in the observational image data. This large-scale propagating disturbance acts as a non-periodic driver that interacts asymmetrically and obliquely with the top of the plasma curtain and triggers the observed oscillations. In the deeper layers of the curtain, we find evidence of vertical kink oscillations with two periods (795 s and 530 s). On the magnetic surface of the curtain where the density is inhomogeneous due to coronal dimming, non-decaying vertical oscillations are also observed (period {approx} 763-896 s). We infer that the global large-scale disturbance triggers vertical kink oscillations in the deeper layers as well as on the surface of the large-scale plasma curtain. The properties of the excited waves strongly depend on the local plasma and magnetic field conditions.

  15. Collaborating CPU and GPU for large-scale high-order CFD simulations with complex grids on the TianHe-1A supercomputer

    SciTech Connect (OSTI)

    Xu, Chuanfu, E-mail: xuchuanfu@nudt.edu.cn [College of Computer Science, National University of Defense Technology, Changsha 410073 (China); Deng, Xiaogang; Zhang, Lilun [College of Computer Science, National University of Defense Technology, Changsha 410073 (China); Fang, Jianbin [Parallel and Distributed Systems Group, Delft University of Technology, Delft 2628CD (Netherlands); Wang, Guangxue; Jiang, Yi [State Key Laboratory of Aerodynamics, P.O. Box 211, Mianyang 621000 (China); Cao, Wei; Che, Yonggang; Wang, Yongxian; Wang, Zhenghua; Liu, Wei; Cheng, Xinghua [College of Computer Science, National University of Defense Technology, Changsha 410073 (China)

    2014-12-01

    Programming and optimizing complex, real-world CFD codes on current many-core accelerated HPC systems is very challenging, especially when collaborating CPUs and accelerators to fully tap the potential of heterogeneous systems. In this paper, with a tri-level hybrid and heterogeneous programming model using MPI + OpenMP + CUDA, we port and optimize our high-order multi-block structured CFD software HOSTA on the GPU-accelerated TianHe-1A supercomputer. HOSTA adopts two self-developed high-order compact finite difference schemes, WCNS and HDCS, that can simulate flows with complex geometries. We present a dual-level parallelization scheme for efficient multi-block computation on GPUs and perform particular kernel optimizations for high-order CFD schemes. The GPU-only approach achieves a speedup of about 1.3 when comparing one Tesla M2050 GPU with two Xeon X5670 CPUs. To achieve a greater speedup, we collaborate CPU and GPU for HOSTA instead of using a naive GPU-only approach. We present a novel scheme to balance the loads between the memory-poor GPU and the memory-rich CPU. Taking CPU and GPU load balance into account, we improve the maximum simulation problem size per TianHe-1A node for HOSTA by a factor of 2.3; meanwhile, the collaborative approach improves performance by around 45% compared to the GPU-only approach. Further, to scale HOSTA on TianHe-1A, we propose a gather/scatter optimization to minimize PCI-e data transfer times for ghost and singularity data of 3D grid blocks, and we overlap the collaborative computation and communication as far as possible using advanced CUDA and MPI features. Scalability tests show that HOSTA can achieve a parallel efficiency of above 60% on 1024 TianHe-1A nodes. With our method, we have successfully simulated an EET high-lift airfoil configuration containing 800M cells and China's large civil airplane configuration containing 150M cells. To the best of our knowledge, these are the largest-scale CPU-GPU collaborative simulations that solve realistic CFD problems with both complex configurations and high-order schemes.
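
    As a rough illustration of the CPU/GPU load-balance idea (a sketch under assumed per-cell throughputs; the paper's actual scheme is not reproduced here):

        # Toy static split of grid cells between CPU and GPU so both finish
        # at about the same time, capped by the GPU's smaller memory.
        # Throughput numbers below are assumptions, not measurements.
        def split_cells(n_cells, t_cpu_per_cell, t_gpu_per_cell, gpu_mem_cells):
            # equal-time split: n_gpu * t_gpu == (n_cells - n_gpu) * t_cpu
            n_gpu = int(n_cells * t_cpu_per_cell / (t_cpu_per_cell + t_gpu_per_cell))
            n_gpu = min(n_gpu, gpu_mem_cells)   # memory cap wins if tighter
            return n_cells - n_gpu, n_gpu

        # GPU ~2.6x faster per cell, but memory-limited to 600k cells:
        print(split_cells(1_000_000, 1.0, 0.38, 600_000))   # -> (400000, 600000)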

  16. Low-risk and cost-effective prior savings estimates for large-scale energy conservation projects in housing: Learning from the Fort Polk GHP project

    SciTech Connect (OSTI)

    Shonder, J.A.; Hughes, P.J.; Thornton, J.W.

    1997-08-01

    Many opportunities exist for large-scale energy conservation projects in housing. Energy savings performance contracting (ESPC) is now receiving greater attention, as a means to implement such projects. This paper proposes an improved method for prior (to construction) savings estimates for these projects. The proposed approach to prior estimates is verified against data from Fort Polk, LA. In the course of evaluating the ESPC at Fort Polk, the authors have collected energy use data which allowed them to develop calibrated engineering models which accurately predict pre-retrofit energy consumption. They believe that such calibrated models could be used to provide much more accurate estimates of energy savings in retrofit projects. The improved savings estimating approach described here is based on an engineering model calibrated to field-collected data from the pre-retrofit period. A dynamic model of pre-retrofit energy use was developed for all housing and non-housing loads on a complete electrical feeder at Fort Polk. The model included the heat transfer characteristics of the buildings, the pre-retrofit air source heat pump, a hot water consumption model and a profile for electrical use by lights and other appliances. Energy consumption for all 200 apartments was totaled, and by adjusting thermostat setpoints and outdoor air infiltration parameters, the models were matched to field-collected energy consumption data for the entire feeder. The energy conservation measures were then implemented in the calibrated model: the air source heat pumps were replaced by geothermal heat pumps with desuperheaters; hot water loads were reduced to account for the low-flow shower heads; and lighting loads were reduced to account for fixture delamping and replacement with compact fluorescent lights. The analysis of pre- and post-retrofit data indicates that the retrofits have saved 30.3% of pre-retrofit electrical energy consumption on the feeder modeled in this paper.

  17. Accounting for Unresolved Spatial Variability in Large Scale Models: Development and Evaluation of a Statistical Cloud Parameterization with Prognostic Higher Order Moments

    SciTech Connect (OSTI)

    Robert Pincus

    2011-05-17

    This project focused on the variability of clouds present across a wide range of scales, from the synoptic to the millimeter. In particular, there is substantial variability in cloud properties at scales smaller than the grid spacing of models used to make climate projections (GCMs) and weather forecasts. These models represent clouds and other small-scale processes with parameterizations that describe how those processes respond to and feed back on the large-scale state of the atmosphere.

  18. Low-Risk and Cost-Effective Prior Savings Estimates for Large-Scale Energy Conservation Projects in Housing: Learning from the Fort Polk GHP Project

    SciTech Connect (OSTI)

    Shonder, John A; Hughes, Patrick; Thornton, Jeff W.

    1997-08-01

    Many opportunities exist for large-scale energy conservation projects in housing: military housing, federally-subsidized low-income housing, and planned communities (condominiums, townhomes, senior centers) to name a few. Energy savings performance contracting (ESPC) is now receiving greater attention as a means to implement such projects. This paper proposes an improved method for prior (to construction) savings estimates for these projects. More accurate prior estimates reduce project risk, decrease financing costs, and help avoid post-construction legal disputes over performance contract baseline adjustments. The proposed approach to prior estimates is verified against data from Fort Polk, LA. In the course of evaluating the ESPC at Fort Polk, Louisiana, we have collected energy use data - both at the electrical feeder level and at the level of individual residences - which allowed us to develop calibrated engineering models which accurately predict pre-retrofit energy consumption. We believe that such calibrated models could be used to provide much more accurate estimates of energy savings in retrofit projects, particularly in cases where the energy consumption of large populations of housing can be captured on one or a few meters. The improved savings estimating approach described here is based on an engineering model calibrated to field-collected data from the pre-retrofit period. A dynamic model of pre-retrofit energy use was developed for all housing and non-housing loads on a complete electrical feeder at Fort Polk. The feeder serves 46 buildings containing a total of 200 individual apartments. Of the 46 buildings, there are three unique types, and among these types the only difference is compass orientation. The model included the heat transfer characteristics of the buildings, the pre-retrofit air source heat pump, a hot water consumption model and a profile for electrical use by lights and other appliances. Energy consumption for all 200 apartments was totaled, and by adjusting thermostat setpoints and outdoor air infiltration parameters, the models were matched to field-collected energy consumption data for the entire feeder. The energy conservation measures were then implemented in the calibrated model: the air source heat pumps were replaced by geothermal heat pumps (GHPs) with desuperheaters; hot water loads were reduced to account for the low-flow shower heads; and lighting loads were reduced to account for fixture delamping and replacement with compact fluorescent lights (CFLs). Our analysis of pre- and post-retrofit data (Shonder and Hughes, 1997) indicates that the retrofits have saved 30.3% of pre-retrofit electrical energy consumption on the feeder modeled in this paper. Using the method outlined, we have been able to predict this savings within 0.1% of its measured value, using only pre-construction energy consumption data, and data from one pilot test site. It is well-known that predictions of savings from energy conservation programs are often optimistic, especially in the case of residential retrofits. Fels and Keating (1993) cite several examples of programs which achieved as little as 20% of the predicted energy savings. Factors which influence the sometimes large discrepancies between actual and predicted savings include changes in occupancy, take-back effects (in which more efficient system operation leads occupants to choose higher levels of comfort), and changes in base energy use (e.g. 
through purchase of additional appliances such as washing machines and clothes dryers). An even larger factor, perhaps, is the inaccuracy inherent in the engineering models (BLAST, DOE-2, etc.) commonly used to estimate building energy consumption, if these models are not first calibrated to site-monitored data. For example, prior estimates of base-wide savings from the Fort Polk ESPC were on the order of 40% of pre-retrofit electrical use; our analysis has shown the true savings for the entire project (which includes 16 separate electrical feeders) to be about 32%. It should be noted that the retrofits ca
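
    The calibration loop described above can be sketched as a small optimization (the model wrapper and starting values here are hypothetical, not the authors' tooling):

        from scipy.optimize import minimize

        # Tune thermostat setpoint and infiltration until the simulated
        # feeder energy matches metered data. simulate_feeder_kwh is a
        # hypothetical wrapper around the calibrated engineering model.
        def calibrate(simulate_feeder_kwh, metered_kwh, x0=(21.0, 0.5)):
            def misfit(x):   # x = (setpoint in deg C, air changes per hour)
                sim = simulate_feeder_kwh(setpoint=x[0], ach=x[1])
                return sum((s - m) ** 2 for s, m in zip(sim, metered_kwh))
            return minimize(misfit, x0, method="Nelder-Mead").x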

  19. PROCEEDINGS OF THE RIKEN BNL RESEARCH CENTER WORKSHOP ON LARGE SCALE COMPUTATIONS IN NUCLEAR PHYSICS USING THE QCDOC, SEPTEMBER 26 - 28, 2002.

    SciTech Connect (OSTI)

    AOKI,Y.; BALTZ,A.; CREUTZ,M.; GYULASSY,M.; OHTA,S.

    2002-09-26

    The massively parallel computer QCDOC (QCD On a Chip) of the RIKEN BNL Research Center (RBRC) will provide ten-teraflop peak performance for lattice gauge calculations. Lattice groups from both Columbia University and RBRC, along with assistance from IBM, jointly handled the design of the QCDOC. RIKEN has provided $5 million in funding to complete the machine in 2003. Some fraction of this computer (perhaps as much as 10%) might be made available for large-scale computations in areas of theoretical nuclear physics other than lattice gauge theory. The purpose of this workshop was to investigate the feasibility and possibility of using a supercomputer such as the QCDOC for lattice, general nuclear theory, and other calculations. The lattice applications to nuclear physics that can be investigated with the QCDOC are varied: for example, the light hadron spectrum, finite temperature QCD, and kaon ({Delta}I = 1/2 rule and CP violation) and nucleon (structure of the proton) matrix elements, to name a few. There are also other topics in theoretical nuclear physics that are currently limited by computer resources. Among these are ab initio calculations of nuclear structure for light nuclei (e.g., up to {approx}A = 8 nuclei), nuclear shell model calculations, nuclear hydrodynamics, heavy ion cascade and other transport calculations for RHIC, and nuclear astrophysics topics such as exploding supernovae. The physics topics were quite varied, ranging from simulations of stellar collapse by Douglas Swesty to detailed shell model calculations by David Dean, Takaharu Otsuka, and Noritaka Shimizu. Going outside traditional nuclear physics, James Davenport discussed molecular dynamics simulations and Shailesh Chandrasekharan presented a class of algorithms for simulating a wide variety of fermionic problems. Four speakers addressed various aspects of theory and computational modeling for relativistic heavy ion reactions at RHIC. Scott Pratt and Steffen Bass gave general overviews of how qualitatively different types of physical processes evolve temporally in heavy ion reactions. Denes Molnar concentrated on the application of hydrodynamics, and Alex Krasnitz on a classical Yang-Mills field theory for the initial phase. We were pleasantly surprised by the excellence of the talks and the substantial interest from all parties. The diversity of the audience forced the speakers to give their talks at an understandable level, which was highly appreciated. One particular bonus of the discussions could be the application of highly developed three-dimensional astrophysics hydrodynamics codes to heavy ion reactions.

  20. High resolution reservoir geological modelling using outcrop information

    SciTech Connect (OSTI)

    Zhang Changmin; Lin Kexiang; Liu Huaibo

    1997-08-01

    This is China's first case study of high-resolution reservoir geological modelling using outcrop information. The key to the modelling process is to build a prototype model and use it as a geological knowledge bank. Outcrop information used in geological modelling includes seven aspects: (1) Determining the reservoir framework pattern by sedimentary depositional system and facies analysis; (2) Horizontal correlation based on the lower and higher stand duration of the paleo-lake level; (3) Determining the model's direction based on the paleocurrent statistics; (4) Estimating the sandbody communication by photomosaic and profiles; (6) Estimating reservoir properties distribution within sandbody by lithofacies analysis; and (7) Building the reservoir model in sandbody scale by architectural element analysis and 3-D sampling. A high-resolution reservoir geological model of Youshashan oil field has been built by using this method.

  1. A life cycle cost analysis framework for geologic storage of hydrogen: a user's tool.

    SciTech Connect (OSTI)

    Kobos, Peter Holmes; Lord, Anna Snider; Borns, David James; Klise, Geoffrey T.

    2011-09-01

    The U.S. Department of Energy (DOE) has an interest in large-scale hydrogen geostorage, which could offer substantial buffer capacity to meet possible disruptions in supply or changing seasonal demands. The geostorage site options being considered are salt caverns, depleted oil/gas reservoirs, aquifers and hard rock caverns. The DOE has an interest in assessing the geological, geomechanical and economic viability for these types of geologic hydrogen storage options. This study has developed an economic analysis methodology and subsequent spreadsheet analysis to address costs entailed in developing and operating an underground geologic storage facility. This year the tool was updated specifically to (1) incorporate more site-specific model input assumptions for the wells and storage site modules, (2) develop a version that matches the general format of the HDSAM model developed and maintained by Argonne National Laboratory, and (3) incorporate specific demand scenarios illustrating the model's capability. Four general types of underground storage were analyzed: salt caverns, depleted oil/gas reservoirs, aquifers, and hard rock caverns/other custom sites. Because substantial lessons have already been learned from the geological storage of natural gas, these options present a potentially sizable storage opportunity. Understanding and including these various geologic storage types in the physical and economic analysis framework will help identify which geologic option would be best suited for the storage of hydrogen. It is important to note, however, that existing natural gas options may not translate to a hydrogen system, where substantial engineering obstacles may be encountered. There are only three locations worldwide that currently store hydrogen underground, and they are all in salt caverns. Two locations are in the U.S. (Texas), and are managed by ConocoPhillips and Praxair (Leighty, 2007). The third is in Teesside, U.K., managed by Sabic Petrochemicals (Crotogino et al., 2008; Panfilov et al., 2006). These existing H{sub 2} facilities are quite small by natural gas storage standards. The second stage of the analysis involved providing ANL with estimated geostorage costs of hydrogen within salt caverns for various market penetrations for four representative cities (Houston, Detroit, Pittsburgh and Los Angeles). Using these demand levels, the scale and cost of hydrogen storage necessary to meet 10%, 25% and 100% of vehicle summer demands was calculated.
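
    To make the demand-scenario step concrete, a toy sketch follows (the fleet size, per-vehicle hydrogen use, and 90-day summer buffer are made-up placeholders, not values from the study):

        # Toy arithmetic for sizing a summer storage buffer at a given
        # market penetration; all inputs below are illustrative assumptions.
        def storage_needed_kg(fleet_size, kg_h2_per_vehicle_day, market_share, days=90):
            return fleet_size * kg_h2_per_vehicle_day * market_share * days

        for share in (0.10, 0.25, 1.00):   # the three penetrations studied
            print(f"{share:>5.0%}: {storage_needed_kg(2_000_000, 0.6, share):,.0f} kg")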

  2. Cigeo, the French Geological Repository Project - 13022

    SciTech Connect (OSTI)

    Labalette, Thibaud; Harman, Alain; Dupuis, Marie-Claude; Ouzounian, Gerald

    2013-07-01

    The Cigeo industrial-scale geological disposal centre is designed for the disposal of the most highly radioactive French waste. It will be built in a Callovo-Oxfordian argillite formation dating back 160 million years. The Cigeo project is located near the village of Bure in the Paris Basin. The argillite formation has been studied since 1974, and from the Meuse/Haute-Marne underground research laboratory since the end of 1999. Most of the waste to be disposed of in the Cigeo repository comes from nuclear power plants and from the reprocessing of their spent fuel. (authors)

  3. Idaho Geological Survey | Open Energy Information

    Open Energy Info (EERE)

    The Idaho Geological Survey is located in Boise, Idaho. Information on past oil and gas exploration wells in Idaho was transferred to the Idaho Geological Survey in...

  4. Chinese Geological Survey | Open Energy Information

    Open Energy Info (EERE)

    Name: Chinese Geological Survey. Place: China. Sector: Geothermal energy. Product: Chinese body which is involved in surveys of...

  5. Geological aspects of the nuclear waste disposal problem

    SciTech Connect (OSTI)

    Laverov, N.P.; Omelianenko, B.L.; Velichkin, V.I.

    1994-06-01

    For the successful solution of the high-level waste (HLW) problem in Russia, one must take into account such factors as the existence of a great volume of accumulated HLW, the large size and varied geological conditions of the country, and the difficult economic conditions. The most efficient method of HLW disposal consists of making maximum use of the protective capacity of the geological environment and using inexpensive natural minerals for engineered barrier construction. In this paper, the principal trends of geological investigation directed toward the solution of HLW disposal are considered. One urgent practical aim is the selection of deep-well disposal sites in regions where HLW is now held in temporary storage. The aim of long-term investigations into HLW disposal is to evaluate geological prerequisites for regional HLW repositories.

  6. Geologic interpretation of gravity anomalies

    SciTech Connect (OSTI)

    Andreyev, B.A.; Klushin, I.G.

    1990-04-19

    This Russian textbook provides a reasonably complete and systematic treatment of the physico-geologic and mathematical aspects of the complex problem of interpreting gravity anomalies. Rational methods for localizing anomalies are examined in detail, and all methods of interpreting gravity anomalies that have found successful application in practice are described. Ideas for some new interpretation methods are also presented, along with prospects for their further development and industrial testing. Numerous practical interpretation examples are given. Partial Contents: Bases of gravitational field theory; Physico-geologic bases of gravity prospecting; Principles of geologic interpretation of gravity anomalies; Conversions and calculations of anomalies; Interpretation of gravity anomalies for bodies of regular geometric form and for bodies of arbitrary form; Geologic interpretation of the results of regional gravity surveys; Exploration and prospecting for oil- and gas-bearing structures and for deposits of ore and nonmetallic minerals.

  7. Arizona Geological Society Digest 22

    National Nuclear Security Administration (NNSA)

    Arizona Geological Society Digest 22 2008 437 Tectonic influences on the spatial and temporal evolution of the Walker Lane: An incipient transform fault along the evolving Pacific-North American plate boundary James E. Faulds and Christopher D. Henry Nevada Bureau of Mines and Geology, University of Nevada, Reno, Nevada, 89557, USA ABSTRACT Since ~30 Ma, western North America has been evolving from an Andean-type margin to a dextral transform boundary. Transform growth has been marked by

  8. The effect of large-scale model time step and multiscale coupling frequency on cloud climatology, vertical structure, and rainfall extremes in a superparameterized GCM

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Yu, Sungduk; Pritchard, Michael S.

    2015-12-17

    The effect of global climate model (GCM) time step, which also controls how frequently global and embedded cloud-resolving scales are coupled, is examined in the Superparameterized Community Atmosphere Model ver. 3.0. Systematic bias reductions of time-mean shortwave cloud forcing (~10 W/m{sup 2}) and longwave cloud forcing (~5 W/m{sup 2}) occur as scale coupling frequency increases, but with systematically increasing rainfall variance and extremes throughout the tropics. An overarching change in the vertical structure of deep tropical convection, favoring more bottom-heavy deep convection as the global model time step is reduced, may help orchestrate these responses. The weak temperature gradient approximation is more faithfully satisfied when a high scale coupling frequency (a short global model time step) is used. These findings are distinct from the global model time step sensitivities of conventionally parameterized GCMs and have implications for understanding emergent behaviors of multiscale deep convective organization in superparameterized GCMs. Lastly, the results may also be useful for helping to tune superparameterized GCMs.

  9. Optimization Method to Branch and Bound Large SBO State Spaces Under Dynamic Probabilistic Risk Assessment via use of LENDIT Scales and S2R2 Sets

    SciTech Connect (OSTI)

    Joseph W. Nielsen; Akira Tokuhiro; Robert Hiromoto; Jivan Khatry

    2014-06-01

    Traditional Probabilistic Risk Assessment (PRA) methods have been developed and are quite effective in evaluating risk associated with complex systems, but they lack the capability to evaluate complex dynamic systems. The time and energy scales associated with a transient may vary as a function of the transition time to a different physical state. Dynamic PRA (DPRA) methods provide a more rigorous analysis of complex dynamic systems but, while complete, suffer from combinatorial explosion. In order to address the combinatorial complexity arising from the number of possible state configurations and the discretization of transition times, a characteristic scaling metric (LENDIT: length, energy, number, distribution, information, and time) is proposed as a means to describe systems uniformly and thus to express the relational constraints expected in the dynamics of complex (coupled) systems. When LENDIT is used to characterize the four sets of state, system, resource, and response (S2R2) describing reactor operations (normal and off-normal), LENDIT and S2R2 in combination have the potential to branch and bound the state space investigated by DPRA. In this paper we introduce the concept of LENDIT scales and S2R2 sets applied to a branch-and-bound algorithm and apply the method to a station blackout (SBO) transient.
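
    A minimal sketch of the branch-and-bound pruning idea follows (how LENDIT-based bounds are actually computed is not specified in the abstract, so the bound function here is a stand-in assumption):

        # Branch-and-bound over a tree of transient states; bound() stands
        # in for a LENDIT-derived metric that rules out whole subtrees.
        def branch_and_bound(root, expand, bound, is_terminal, threshold):
            kept, stack = [], [root]
            while stack:
                state = stack.pop()
                if bound(state) < threshold:     # bounded: prune the subtree
                    continue
                kept.append(state)
                if not is_terminal(state):
                    stack.extend(expand(state))  # branch into successor states
            return kept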

  10. SIMULATION FRAMEWORK FOR REGIONAL GEOLOGIC CO{sub 2} STORAGE ALONG ARCHES PROVINCE OF MIDWESTERN UNITED STATES

    SciTech Connect (OSTI)

    Sminchak, Joel

    2012-09-30

    This report presents final technical results for the project Simulation Framework for Regional Geologic CO{sub 2} Storage Infrastructure along Arches Province of the Midwest United States. The Arches Simulation project was a three-year effort designed to develop a simulation framework for regional geologic carbon dioxide (CO{sub 2}) storage infrastructure along the Arches Province through development of a geologic model and advanced reservoir simulations of large-scale CO{sub 2} storage. The project included five major technical tasks: (1) compilation of geologic, hydraulic and injection data on Mount Simon, (2) development of model framework and parameters, (3) preliminary variable density flow simulations, (4) multi-phase model runs of regional storage scenarios, and (5) implications for regional storage feasibility. The Arches Province is an informal region in northeastern Indiana, northern Kentucky, western Ohio, and southern Michigan where sedimentary rock formations form broad arch and platform structures. In the province, the Mount Simon sandstone is an appealing deep saline formation for CO{sub 2} storage because of the intersection of reservoir thickness and permeability. Many CO{sub 2} sources are located in proximity to the Arches Province, and the area is adjacent to coal-fired power plants along the Ohio River Valley corridor. Geophysical well logs, rock samples, drilling logs, and geotechnical tests were evaluated for a 500,000 km{sup 2} study area centered on the Arches Province. Hydraulic parameters and historical operational information were also compiled from Mount Simon wastewater injection wells in the region. This information was integrated into a geocellular model that depicts the parameters and conditions in a numerical array. The geologic and hydraulic data were integrated into a three-dimensional grid of porosity and permeability, which are key parameters regarding fluid flow and pressure buildup due to CO{sub 2} injection. Permeability data were corrected in locations where reservoir tests have been performed in Mount Simon injection wells. The geocellular model was used to develop a series of numerical simulations designed to support CO{sub 2} storage applications in the Arches Province. Variable density fluid flow simulations were initially run to evaluate model sensitivity to input parameters. Two dimensional, multiple-phase simulations were completed to evaluate issues related to arranging injection fields in the study area. A basin-scale, multiple-phase model was developed to evaluate large-scale injection effects across the region. Finally, local-scale simulations were also completed with more detailed depiction of the Eau Claire formation to investigate the potential for upward migration of CO{sub 2}. Overall, the technical work on the project concluded that large-scale injection may be achieved with proper field design, operation, siting, and monitoring. Records from Mount Simon injection wells were compiled, documenting more than 20 billion gallons of injection into the Mount Simon formation in the Arches Province over the past 40 years, equivalent to approximately 60 million metric tons CO{sub 2}. The multi-state team effort was useful in delineating the geographic variability in the Mount Simon reservoir properties. Simulations better defined potential well fields, well field arrangement, CO{sub 2} pipeline distribution system, and operational parameters for large-scale injection in the Arches Province. 
Multiphase scoping level simulations suggest that injection fields with arrays of 9 to 50+ wells may be used to accommodate large injection volumes. Individual wells may need to be separated by 3 to 10 km. Injection fields may require spacing of 25 to 40 km to limit pressure and saturation front interference. Basin-scale multiple-phase simulations in STOMP reflect variability in the Mount Simon. While simulations suggest a total injection rate of 100 million metric tons per year (corresponding to approximately a 40% reduction of CO{sub 2} emissions from large point sources across the Arches Province) may be feasible,
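
    The stated equivalence between injected volume and CO{sub 2} mass can be checked with quick arithmetic (ours, not from the report, assuming a representative supercritical CO{sub 2} density of roughly 800 kg/m{sup 3} at reservoir conditions):

        # Check of "20 billion gallons ~ 60 million metric tons CO2":
        m3 = 20e9 * 3.785e-3               # US gallons -> cubic meters
        tonnes = m3 * 800 / 1000           # assumed 800 kg/m^3, kg -> metric tons
        print(f"{m3:.2e} m^3 ~ {tonnes/1e6:.0f} million t CO2")   # ~61 million t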

  11. SIMULATION FRAMEWORK FOR REGIONAL GEOLOGIC CO{sub 2} STORAGE ALONG ARCHES PROVINCE OF MIDWESTERN UNITED STATES

    SciTech Connect (OSTI)

    Sminchak, Joel

    2012-09-30

    This report presents final technical results for the project Simulation Framework for Regional Geologic CO{sub 2} Storage Infrastructure along Arches Province of the Midwest United States. The Arches Simulation project was a three-year effort designed to develop a simulation framework for regional geologic carbon dioxide (CO{sub 2}) storage infrastructure along the Arches Province through development of a geologic model and advanced reservoir simulations of large-scale CO{sub 2} storage. The project included five major technical tasks: (1) compilation of geologic, hydraulic and injection data on Mount Simon, (2) development of model framework and parameters, (3) preliminary variable density flow simulations, (4) multi-phase model runs of regional storage scenarios, and (5) implications for regional storage feasibility. The Arches Province is an informal region in northeastern Indiana, northern Kentucky, western Ohio, and southern Michigan where sedimentary rock formations form broad arch and platform structures. In the province, the Mount Simon sandstone is an appealing deep saline formation for CO{sub 2} storage because of the intersection of reservoir thickness and permeability. Many CO{sub 2} sources are located in proximity to the Arches Province, and the area is adjacent to coal-fired power plants along the Ohio River Valley corridor. Geophysical well logs, rock samples, drilling logs, and geotechnical tests were evaluated for a 500,000 km{sup 2} study area centered on the Arches Province. Hydraulic parameters and historical operational information were also compiled from Mount Simon wastewater injection wells in the region. This information was integrated into a geocellular model that depicts the parameters and conditions in a numerical array. The geologic and hydraulic data were integrated into a three-dimensional grid of porosity and permeability, which are key parameters regarding fluid flow and pressure buildup due to CO{sub 2} injection. Permeability data were corrected in locations where reservoir tests have been performed in Mount Simon injection wells. The geocellular model was used to develop a series of numerical simulations designed to support CO{sub 2} storage applications in the Arches Province. Variable density fluid flow simulations were initially run to evaluate model sensitivity to input parameters. Two dimensional, multiple-phase simulations were completed to evaluate issues related to arranging injection fields in the study area. A basin-scale, multiple-phase model was developed to evaluate large-scale injection effects across the region. Finally, local-scale simulations were also completed with more detailed depiction of the Eau Claire formation to investigate the potential for upward migration of CO{sub 2}. Overall, the technical work on the project concluded that large-scale injection may be achieved with proper field design, operation, siting, and monitoring. Records from Mount Simon injection wells were compiled, documenting more than 20 billion gallons of injection into the Mount Simon formation in the Arches Province over the past 40 years, equivalent to approximately 60 million metric tons CO{sub 2}. The multi-state team effort was useful in delineating the geographic variability in the Mount Simon reservoir properties. Simulations better defined potential well fields, well field arrangement, CO{sub 2} pipeline distribution system, and operational parameters for large-scale injection in the Arches Province. 
Multiphase scoping level simulations suggest that injection fields with arrays of 9 to 50+ wells may be used to accommodate large injection volumes. Individual wells may need to be separated by 3 to 10 km. Injection fields may require spacing of 25 to 40 km to limit pressure and saturation front interference. Basin-scale multiple-phase simulations in STOMP reflect variability in the Mount Simon. While simulations suggest a total injection rate of 100 million metric tons per year (corresponding to approximately a 40% reduction of CO{sub 2} emissions from large point sources across the Arches Province) may be feasible,

  12. On Leakage from Geologic Storage Reservoirs of CO2

    SciTech Connect (OSTI)

    Pruess, Karsten

    2006-02-14

    Large amounts of CO2 would need to be injected underground to achieve a significant reduction of atmospheric emissions. The large areal extent expected for CO2 plumes makes it likely that caprock imperfections will be encountered, such as fault zones or fractures, which may allow some CO2 to escape from the primary storage reservoir. Leakage of CO2 could also occur along wellbores. Concerns with escape of CO2 from a primary geologic storage reservoir include (1) acidification of groundwater resources, (2) asphyxiation hazard when leaking CO2 is discharged at the land surface, (3) increase in atmospheric concentrations of CO2, and (4) damage from a high-energy, eruptive discharge (if such discharge is physically possible). In order to gain public acceptance for geologic storage as a viable technology for reducing atmospheric emissions of CO2, it is necessary to address these issues and demonstrate that CO2 can be injected and stored safely in geologic formations.

  13. Indiana's Trenton limestone geology

    SciTech Connect (OSTI)

    Keith, B.D.

    1981-03-01

    The term Trenton limestone is the stratigraphic designation for a unit in northern Indiana composed of both limestone and dolomite. The Trenton is Middle Ordovician (Champlainian) in age and is clearly related to the position of the Cincinnati arch. The limestone is thickest in northern Indiana and thins toward the southeast. Isopach maps of the Trenton limestone and the Maquoketa group above it indicate that the Cincinnati arch did not exist as a positive structural influence on sedimentation until after Ordovician time. Preliminary results of an ongoing study of the Trenton reservoir suggest that secondary and tertiary recovery there will be limited. Because of the low density of drilling on the Trenton's north flank, however, large areas remain virtually untested; more structural or stratigraphic traps similar to those of the Urbana field could exist. A better definition of the distribution of the dolomite facies will lead to a more accurate assessment of the Trenton's potential.

  14. Geologic development and characteristics of continental margins, Gulf of Mexico

    SciTech Connect (OSTI)

    Coleman, J.M.; Prior, D.B.; Roberts, H.H.

    1986-09-01

    The continental slope of the Gulf basin covers more than 500,000 km/sup 2/ and consists of smooth and gently sloping surfaces, prominent escarpments, knolls, intraslope basins, and submarine canyons and channels. It is an area of extremely diverse topographic and sedimentologic conditions. The slope extends from the shelf break, roughly at the 200-m isobath, to the upper limit of the continental rise at a depth of 2800 m. The most complex province in the basin, and the one of most interest to the petroleum industry, is the Texas-Louisiana slope, occupying 120,000 km/sup 2/, in which bottom slopes range from less than 1/sup 0/ to greater than 20/sup 0/ around the knolls and basins. The near-surface geology and topography of the slope are a function of the interplay between episodes of rapid shelf-edge and slope progradation and contemporaneous modification of the depositional sequence by diapirism. Development of discrete depocenters throughout the Neogene resulted in rapid shelf-edge progradation, often exceeding 15-20 km/m.y. This rapid progradation of the shelf edge leads to development of thick wedges of sediment accumulation on the continental slope. Slope oversteepening, high pore pressures in rapidly deposited soft sediments, and changes in eustatic sea level cause subaqueous slope instabilities such as landslides and debris flows. Large-scale features such as shelf-edge separation scars and landslide-related canyons often result from such processes.

  15. Large-area, triple-junction a-Si alloy production scale-up. Semiannual subcontract report, 17 March 1994--18 September 1994

    SciTech Connect (OSTI)

    Oswald, R.; Morris, J. [Solarex Corp., Newtown, PA (United States). Thin Film Div.]

    1995-09-01

    This report describes work performed under a 3-year subcontract to advance Solarex's photovoltaic manufacturing technologies, reduce its a-Si:H module production costs, increase module performance, and expand the Solarex commercial production capacity. During this period, Solarex focused on improving deposition of the front contact, investigating alternate feedstocks for the front contact, maximizing throughput and area utilization for all laser scribes, optimizing a-Si:H deposition equipment to achieve uniform deposition over large areas, optimizing the triple-junction module fabrication process, evaluating the materials to deposit the rear contact, and optimizing the combination of isolation scribe and encapsulant to pass the wet high-potential test.

  16. A Large-Scale, High-Resolution Hydrological Model Parameter Data Set for Climate Change Impact Assessment for the Conterminous US

    SciTech Connect (OSTI)

    Oubeidillah, Abdoul A; Kao, Shih-Chieh; Ashfaq, Moetasim; Naz, Bibi S; Tootle, Glenn

    2014-01-01

    To extend geographical coverage, refine spatial resolution, and improve modeling efficiency, a computation- and data-intensive effort was conducted to organize a comprehensive hydrologic dataset with post-calibrated model parameters for hydro-climate impact assessment. Several key inputs for hydrologic simulation, including meteorological forcings, soil, land class, vegetation, and elevation, were collected from multiple best-available data sources and organized for 2107 hydrologic subbasins (8-digit hydrologic units, HUC8s) in the conterminous United States at a refined 1/24-degree (~4 km) spatial resolution. Using high-performance computing for intensive model calibration, a high-resolution parameter dataset was prepared for the macro-scale Variable Infiltration Capacity (VIC) hydrologic model. The VIC simulation was driven by DAYMET daily meteorological forcing and was calibrated against USGS WaterWatch monthly runoff observations for each HUC8. The results showed that this new parameter dataset can help reasonably simulate runoff at most US HUC8 subbasins. Based on this exhaustive calibration effort, it is now possible to accurately estimate the resources required for further model improvement across the entire conterminous United States. We anticipate that through this hydrologic parameter dataset, the repeated effort of fundamental data processing can be lessened, so that research efforts can emphasize the more challenging task of assessing climate change impacts. The pre-organized model parameter dataset will be provided to interested parties to support further hydro-climate impact assessment.
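
    A sketch of the per-basin calibration loop described above (the actual objective function and optimizer used for VIC are not specified here, so the Nash-Sutcliffe score and selection-from-candidates approach are illustrative choices):

        import numpy as np

        # Nash-Sutcliffe efficiency, a common monthly-runoff skill score.
        def nse(sim, obs):
            sim, obs = np.asarray(sim), np.asarray(obs)
            return 1.0 - np.sum((sim - obs)**2) / np.sum((obs - obs.mean())**2)

        # Pick the VIC parameter set that best reproduces USGS WaterWatch
        # monthly runoff for one HUC8 subbasin; run_vic is hypothetical.
        def calibrate_huc8(run_vic, candidate_params, observed_monthly_runoff):
            return max(candidate_params,
                       key=lambda p: nse(run_vic(p), observed_monthly_runoff))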

  17. Large-area, triple-junction a-Si alloy production scale-up. Semiannual subcontract report, 17 March 1994--18 September 1994

    SciTech Connect (OSTI)

    Oswald, R.; Morris, J. [Solarex Corp., Newtown, PA (United States). Thin Film Div.]

    1995-03-01

    This report describes work performed under a 3-year subcontract to advance Solarex's photovoltaic (PV) manufacturing technologies, reduce its hydrogenated amorphous silicon (a-Si:H) module production costs, increase module performance, and expand the Solarex commercial production capacity. During the period covered by this report, Solarex focused on (1) improving deposition of the front contact, (2) investigating alternate feedstocks for the front contact, (3) maximizing throughput and area utilization for all laser scribes, (4) optimizing a-Si:H deposition equipment to achieve uniform deposition over large areas, (5) optimizing the triple-junction module fabrication process, (6) evaluating the materials to deposit the rear contact, and (7) optimizing the combination of isolation scribe and encapsulant to pass the wet high-potential test.

  18. Case studies of the application of the Certification Framework to two geologic carbon sequestration sites

    SciTech Connect (OSTI)

    Oldenburg, Curtis M.; Nicot, J.-P.; Bryant, S.L.

    2008-11-01

    We have developed a certification framework (CF) for certifying that the risks of geologic carbon sequestration (GCS) sites are below agreed-upon thresholds. The CF is based on effective trapping of CO2, a proposed concept that takes into account both the probability and the impact of CO2 leakage. The CF uses probability estimates of the intersection of conductive faults and wells with the CO2 plume, along with modeled fluxes or concentrations of CO2 as proxies for impacts to compartments (such as potable groundwater), to calculate CO2 leakage risk. In order to test and refine the approach, we applied the CF to (1) a hypothetical large-scale GCS project in the Texas Gulf Coast, and (2) WESTCARB's Phase III GCS pilot in the southern San Joaquin Valley, California.
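
    A minimal sketch of the risk calculation as described (names and structure are our illustrative assumptions, not the CF's actual implementation):

        # Leakage risk as probability times impact, with a modeled flux or
        # concentration in a compartment serving as the impact proxy.
        def leakage_risk(p_plume_meets_conduit, impact_proxy):
            return p_plume_meets_conduit * impact_proxy

        # A site certifies when every compartment's risk is below its
        # agreed-upon threshold.
        def certifiable(risks_by_compartment, thresholds):
            return all(r <= t for r, t in zip(risks_by_compartment, thresholds))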

  19. Utah Geological Survey | Open Energy Information

    Open Energy Info (EERE)

    Name: Utah Geological Survey. Address: 1594 W. North Temple. Place: Salt Lake City, Utah. Zip: 84114-6100. Phone Number: 801.537.3300. Website:...

  20. Hawaii geologic map data | Open Energy Information

    Open Energy Info (EERE)

    Web Site: Hawaii geologic map data. Published: USGS, Date Not Provided. DOI: Not Provided.

  1. AASG State Geological Survey | Department of Energy

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    AASG State Geological Survey presentation at the April 2013 peer review meeting held in Denver, Colorado: Contributions to the NGDS (aasg__geo_survey_peer2013.pdf). More Documents & Publications: State Geological Survey Contributions to the National Geothermal Data System; National Geothermal Data System Architecture Design, Testing and Maintenance; National Geothermal Data Systems Data Acquisition and Access

  2. Hanford Borehole Geologic Information System (HBGIS)

    SciTech Connect (OSTI)

    Last, George V.; Mackley, Rob D.; Saripalli, Ratna R.

    2005-09-26

    This is a user's guide for viewing and downloading borehole geologic data through a web-based interface.

  3. Montana Bureau of Mines and Geology Website | Open Energy Information

    Open Energy Info (EERE)

    Web Site: Montana Bureau of Mines and Geology Website. Abstract: Provides access to digital information on Montana's geology. Author: Montana Bureau of Mines and Geology...

  4. Oregon Department of Geology and Mineral Industries | Open Energy...

    Open Energy Info (EERE)

    Name: Oregon Department of Geology and Mineral Industries...

  5. International Collaboration Activities in Different Geologic Disposal Environments

    Energy Savers [EERE]

    This report describes the current status of international collaboration regarding geologic disposal research in the Used Fuel Disposition (UFD) Campaign. To date, UFD's International Disposal R&D Program has established formal collaboration agreements with five international initiatives and several

  6. Biogeochemical Changes at Early Stage After the Closure of Radioactive Waste Geological Repository in South Korea

    SciTech Connect (OSTI)

    Choung, Sungwook; Um, Wooyong; Choi, Seho; Francis, Arokiasamy J.; Kim, Sungpyo; Park, Jin beak; Kim, Suk-Hoon

    2014-09-01

    Permanent disposal of low- and intermediate-level radioactive wastes in the subterranean environment has been the preferred method of many countries, including Korea. A safety issue after the closure of a geological repository is that biodegradation of organic materials due to microbial activities generates gases that can lead to overpressure of the waste containers in the repository and their disintegration, with release of radionuclides. As part of an ongoing large-scale in situ experiment using organic wastes and groundwater to simulate geological radioactive waste repository conditions, we investigated the geochemical alteration and microbial activities at an early stage (~63 days) intended to be representative of the initial period after repository closure. The increased numbers of both aerobes and facultative anaerobes in waste effluents indicate that oxygen content could be the most significant parameter controlling biogeochemical conditions in the very early period of reaction (<35 days). Accordingly, the values of dissolved oxygen and redox potential decreased. The activation of anaerobes after 35 days was supported by an increase in ethanol concentration to ~50 mg L{sup -1}. These results suggest that the biogeochemical conditions were rapidly altered to more reducing and anaerobic conditions within the initial 2 months after repository closure. Although no gases were detected during the study, activated anaerobic microbes will play a more important role in gas generation over the long term.

  7. The Effect of Scale on the Mechanical Properties of Jointed Rock Masses

    SciTech Connect (OSTI)

    Heuze, F E

    2004-05-24

    These notes were prepared for presentation at the Defense Threat Reduction Agency's (DTRA) Hard Target Research and Analysis Center (HTRAC), on the occasion of a short course held on June 14-15, 2004. The material is intended for analysts who must evaluate the geo-mechanical characteristics of sites of interest in order to provide appropriate input to calculations of ground-shock effects on underground facilities in rock masses. These analysts are associated with the Interagency Geotechnical Assessment Team (IGAT). Because geological discontinuities introduce scale effects on the mechanical properties of rock formations, these large-scale properties cannot be estimated on the basis of tests on small cores.

  8. Thermohaline pore water trends of southeastern Louisiana: Geologic applications and controls on fluid movement

    SciTech Connect (OSTI)

    Marlin, D.; Schramm, B.

    1995-10-01

    Previous research has suggested that dissolution of salt diapirs and the formation of dense, saline brines at shallow depths are concurrent with large-scale fluid migration. A critical foundation of these studies is the determination of salinity from the spontaneous potential (SP) log and the ability to drive fluid vertically through the sediment. Derivation of salinity using the perfect shale model and contouring iso-salinity values over intervals of Lower Miocene and Upper Oligocene sediments that contain thick, impermeable carbonate deposits cloud these findings. The calculation of salinity is based on water resistivity (Rw) variations and the geological constraints on derivation of this variable. Application of the imperfect shale membrane model to determine Rw from the SP log provided a closer approximation to Rw from produced water samples over St. Gabriel Field in Ascension and Iberville parishes, Louisiana, than past SP models. Further analyses of temperature, pressure, salinity, and freshwater hydraulic head trends of Lower Miocene and Upper Oligocene deposits over the field and surrounding area suggest that dissolution of salt occurred prior to hydrocarbon generation and that large-scale fluid migration is not active at present. An important control that should be used in future studies of thermohaline fluid movement is the identification of local structure, stratigraphic variation, shale membrane efficiency, and timing of salt diapirism.
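
    For reference, the perfect-shale SP relation the text refers to can be inverted for Rw as follows (a standard petrophysical formula with illustrative inputs; the paper's imperfect-membrane correction is not reproduced here):

        # Invert SSP = -K * log10(Rmf/Rw), using the common empirical
        # temperature dependence K ~ 61 + 0.133*T(degF). Inputs are examples.
        def rw_from_sp(ssp_mv, rmf_ohmm, temp_f):
            k = 61.0 + 0.133 * temp_f
            return rmf_ohmm / (10 ** (-ssp_mv / k))

        print(round(rw_from_sp(-80.0, 0.4, 150.0), 3))   # ~0.041 ohm-m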

  9. Large-scale anomalies from primordial dissipation

    SciTech Connect (OSTI)

    D'Amico, Guido; Gobbetti, Roberto; Kleban, Matthew; Schillo, Marjorie E-mail: rg1509@nyu.edu E-mail: mls604@nyu.edu

    2013-11-01

    We analyze an inflationary model in which part of the power in density perturbations arises due to particle production. The amount of particle production is modulated by an auxiliary field. Given an initial gradient for the auxiliary field, this model produces a hemispherical power asymmetry and a suppression of power at low multipoles similar to those observed by WMAP and Planck in the CMB temperature. It also predicts an additive contribution to {Delta}T with support only at very small l that is aligned with the direction of the power asymmetry and has a definite sign, as well as small oscillations in the power spectrum at all l.

  10. Large-scale lateral nanowire arrays nanogenerators

    DOE Patents [OSTI]

    Wang, Zhong L; Xu, Chen; Qin, Yong; Zhu, Guang; Yang, Rusen; Hu, Youfan; Zhang, Yan

    2014-01-07

    In a method of making a generating device, a plurality of spaced-apart elongated seed members are deposited onto a surface of a flexible non-conductive substrate. An elongated conductive layer is applied to a top surface and a first side of each seed member, thereby leaving an exposed second side opposite the first side. A plurality of elongated piezoelectric nanostructures is grown laterally from the second side of each seed member. A second conductive material is deposited onto the substrate adjacent each elongated first conductive layer so as to be coupled to the distal end of each of the plurality of elongated piezoelectric nanostructures. The second conductive material is selected so as to form a Schottky barrier between the second conductive material and the distal end of each of the plurality of elongated piezoelectric nanostructures and so as to form an electrical contact with the first conductive layer.

  11. Superconductivity for Large Scale Wind Turbines

    SciTech Connect (OSTI)

    R. Fair; W. Stautner; M. Douglass; R. Rajput-Ghoshal; M. Moscinski; P. Riley; D. Wagner; J. Kim; S. Hou; F. Lopez; K. Haran; J. Bray; T. Laskaris; J. Rochford; R. Duckworth

    2012-10-12

    A conceptual design has been completed for a 10MW superconducting direct drive wind turbine generator employing low temperature superconductors for the field winding. Key technology building blocks from the GE Wind and GE Healthcare businesses have been transferred across to the design of this concept machine. Wherever possible, conventional technology and production techniques have been used in order to support the case for commercialization of such a machine. Appendices A and B provide further details of the layout of the machine and the complete specification table for the concept design. Phase 1 of the program has allowed us to understand the trade-offs between the various sub-systems of such a generator and its integration with a wind turbine. A Failure Modes and Effects Analysis (FMEA) and a Technology Readiness Level (TRL) analysis have been completed resulting in the identification of high risk components within the design. The design has been analyzed from a commercial and economic point of view and Cost of Energy (COE) calculations have been carried out with the potential to reduce COE by up to 18% when compared with a permanent magnet direct drive 5MW baseline machine, resulting in a potential COE of 0.075 $/kWh. Finally, a top-level commercialization plan has been proposed to enable this technology to be transitioned to full volume production. The main body of this report will present the design processes employed and the main findings and conclusions.
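
    A quick back-calculation from the stated figures (our arithmetic, not from the report): an 18% reduction landing at 0.075 $/kWh implies a baseline COE near 0.091 $/kWh for the permanent magnet machine.

        # Implied baseline cost of energy from the stated 18% reduction:
        baseline = 0.075 / (1 - 0.18)
        print(f"implied 5MW permanent-magnet baseline COE ~ {baseline:.3f} $/kWh")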

  12. Large-Scale Liquid Hydrogen Handling Equipment

    Broader source: Energy.gov [DOE]

    Presentation by Jerry Gillette of Argonne National Laboratory at the Joint Meeting on Hydrogen Delivery Modeling and Analysis, May 8-9, 2007

  13. Large-Scale Computational Fluid Dynamics

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)


  14. Large Scale Evaluation of Nickel Aluminide Rolls

    SciTech Connect (OSTI)

    2005-09-01

    This completed project was a joint effort between Oak Ridge National Laboratory and Bethlehem Steel (now Mittal Steel) to demonstrate the effectiveness of using nickel aluminide intermetallic alloy rolls as part of an updated, energy-efficient, commercial annealing furnace system.

  15. Advanced Large-scale Integrated Computational Environment

    Energy Science and Technology Software Center (OSTI)

    1998-10-27

    The ALICE Memory Snooper is a software applications programming interface (API) and library for use in implementing computational steering systems. It allows distributed memory parallel programs to publish variables in the computation that may be accessed over the Internet. In this way, users can examine and even change the variables in their running application remotely. The API and library ensure the consistency of the variables across the distributed memory system.
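
    The publish/inspect pattern described can be sketched as follows; this is a hypothetical Python illustration of the computational-steering idea, not the actual AMS C API (the server, variable names, and wire format are all invented):

      # Hypothetical steering sketch: a solver publishes variables; a small
      # TCP server lets a remote client read or change them between steps.
      import json, threading, socketserver

      published = {"tolerance": 1e-6, "iteration": 0}   # variables exposed to clients
      lock = threading.Lock()                           # keeps reads/writes consistent

      class SteeringHandler(socketserver.StreamRequestHandler):
          def handle(self):
              request = json.loads(self.rfile.readline())
              with lock:
                  if "set" in request:                  # client steered a variable
                      published.update(request["set"])
                  self.wfile.write(json.dumps(published).encode() + b"\n")

      threading.Thread(
          target=lambda: socketserver.TCPServer(("", 9999), SteeringHandler).serve_forever(),
          daemon=True,
      ).start()

      for step in range(100):                           # the running computation
          with lock:
              published["iteration"] = step
              tol = published["tolerance"]              # may have been changed remotely
          # ... advance the solver using tol ...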

  16. Nevada Weatherizes Large-Scale Complex

    Broader source: Energy.gov [DOE]

    Increased energy efficiency is translating into increased productivity for one Nevada weatherization organization.

  17. Constructing Hydraulic Barriers in Deep Geologic Formations

    SciTech Connect (OSTI)

    Carter, E.E.; Carter, P.E. [Technologies Co, Texas (United States); Cooper, D.C. [Ph.D. Idaho National Laboratory, Idaho Falls, ID (United States)

    2008-07-01

    Many construction methods have been developed to create hydraulic barriers to depths of 30 to 50 meters, but few have been proposed for depths on the order of 500 meters. For these deep hydraulic barriers, most methods are potentially feasible for soil but not for hard rock. In the course of researching methods of isolating large subterranean blocks of oil shale, the authors have developed a wax thermal permeation method for constructing hydraulic barriers in rock to depths of over 500 meters in competent or even fractured rock as well as soil. The technology is similar to freeze wall methods, but produces a permanent barrier and is potentially applicable in both dry and water-saturated formations. Like freeze wall barriers, the wax thermal permeation method utilizes a large number of vertical or horizontal boreholes around the perimeter to be contained. However, instead of cooling the boreholes, they are heated. After heating, a specially formulated molten wax-based grout is pumped into the boreholes, where it seals fractures and also permeates radially outward to form a series of columns of wax-impregnated rock. Rows of overlapping columns can then form a durable hydraulic barrier. These barriers can also be angled above a geologic repository to help prevent influx of water due to atypical rainfall events. Applications of the technique to constructing containment structures around existing shallow waste burial sites and water shutoff for mining are also described. (authors)

  18. Geologic mapping for groundwater resource protection and assessment

    SciTech Connect (OSTI)

    Shafer, J.M. (Earth Sciences and Resources Inst.); Berg, R.C.

    1993-03-01

    Groundwater is a vital natural resource in the US and around the world. In order to manage and protect this often threatened resource, one must better understand its occurrence, extent, and susceptibility to contamination. Geologic mapping is a fundamental approach to developing more detailed and accurate assessments of groundwater resources. The stratigraphy and lithology of earth materials provide the framework for groundwater systems, whether they are deep confined aquifers or shallow, water-table environments. These same earth materials control, in large part, the rates of migration of water and contaminants into and through groundwater systems, thus establishing the potential yields of the systems and their vulnerability to contamination. Geologic mapping is used to delineate and display the vertical sequencing of earth materials either in cross-section or over lateral areas as in the stack-unit geologic map. These geologic maps, along with supportive hydrogeologic information, are used to identify the three-dimensional positioning and continuity of aquifer and non-aquifer earth materials. For example, detailed stack-unit mapping to a depth of 30 meters has been completed for a portion of a northern Illinois county. Groundwater contamination potentials were assigned to various vertical sequences of materials. Where aquifers are unconfined, groundwater contamination potentials are greatest. Conversely, other considerations being equal, the thicker the confining unit, the lower the contamination potential. This information is invaluable for land use decision-making; water supply assessment, development, and management; and environmental protection planning.
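
    A minimal sketch of the kind of rule-based ranking the abstract describes (the thresholds and labels below are hypothetical, not the Illinois study's actual classification):

      def contamination_potential(confined: bool, confining_thickness_m: float) -> str:
          """Rank contamination potential from a stack-unit description.
          Hypothetical rubric: unconfined aquifers rank highest; otherwise,
          the thicker the confining unit, the lower the potential."""
          if not confined:
              return "high"
          return "moderate" if confining_thickness_m < 5.0 else "low"

      print(contamination_potential(False, 0.0))    # high: unconfined aquifer
      print(contamination_potential(True, 12.0))    # low: thick confining unit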

  19. Gable named Geological Society of America Fellow

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Gable named Geological Society of America Fellow. GSA members are elected to fellowship in recognition of their distinguished contributions to the geosciences. July 10, 2013. The Geological Society of America (GSA) has selected Carl Gable of the Laboratory's Computational Earth Science group to be a Fellow. Gable received a doctorate in Geophysics from Harvard University and joined Los Alamos as a postdoc in 1989.

  20. The role of optimality in characterizing CO2 seepage from geological carbon sequestration sites

    SciTech Connect (OSTI)

    Cortis, Andrea; Oldenburg, Curtis M.; Benson, Sally M.

    2008-09-15

    Storage of large amounts of carbon dioxide (CO{sub 2}) in deep geological formations for greenhouse gas mitigation is gaining momentum and moving from its conceptual and testing stages towards widespread application. In this work we explore various optimization strategies for characterizing surface leakage (seepage) using near-surface measurement approaches such as accumulation chambers and eddy covariance towers. Seepage characterization objectives and limitations need to be defined carefully from the outset, especially in light of large natural background variations that can mask seepage. The cost and sensitivity of seepage detection are related to four critical length scales pertaining to the size of: (1) the region that needs to be monitored; (2) the footprint of the measurement approach; (3) the main seepage zone; and (4) the region in which concentrations or fluxes are influenced by seepage. Seepage characterization objectives may include one or all of the tasks of detecting, locating, and quantifying seepage. Each of these tasks has its own optimal strategy. Detecting and locating seepage in a region in which there is no expected or preferred location for seepage nor existing evidence for seepage requires monitoring on a fixed grid, e.g., using eddy covariance towers. The fixed-grid approaches needed to detect seepage are expected to require large numbers of eddy covariance towers for large-scale geologic CO{sub 2} storage. Once seepage has been detected and roughly located, seepage zones and features can be optimally pinpointed through a dynamic search strategy, e.g., employing accumulation chambers and/or soil-gas sampling. Quantification of seepage rates can be done through measurements on a localized fixed grid once the seepage is pinpointed. Background measurements are essential for seepage detection in natural ecosystems. Artificial neural networks are considered as regression models useful for distinguishing natural system behavior from anomalous behavior suggestive of CO{sub 2} seepage, without need for detailed understanding of natural system processes. Because of the local extrema in CO{sub 2} fluxes and concentrations in natural systems, simple steepest-descent algorithms are not effective, and evolutionary computation algorithms are proposed as a paradigm for dynamic monitoring networks to pinpoint CO{sub 2} seepage areas.
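
    As a toy illustration of the evolutionary-search paradigm proposed for pinpointing seepage (the flux field below is synthetic, and this simple elitist strategy is one possible instance, not the paper's algorithm):

      # Toy evolutionary search for the peak of a noisy CO2 flux field that
      # has one seepage anomaly plus an oscillating natural background.
      import math, random

      def flux(x, y):
          seep = 5.0 * math.exp(-((x - 3.2) ** 2 + (y - 1.1) ** 2))   # seepage zone
          background = 0.3 * math.sin(4 * x) * math.cos(3 * y)        # local extrema
          return seep + background + random.gauss(0.0, 0.05)          # sensor noise

      population = [(random.uniform(-5, 5), random.uniform(-5, 5)) for _ in range(20)]
      for generation in range(40):
          best5 = sorted(population, key=lambda p: flux(*p), reverse=True)[:5]
          population = best5 + [(x + random.gauss(0, 0.5), y + random.gauss(0, 0.5))
                                for x, y in best5 for _ in range(3)]

      x, y = max(population, key=lambda p: flux(*p))
      print(f"Estimated seepage location: ({x:.2f}, {y:.2f})")  # near (3.2, 1.1)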

  1. Regional geophysics, Cenozoic tectonics and geologic resources...

    Open Energy Info (EERE)

    and geologic resources of the Basin and Range Province and adjoining regions. Author: G.P. Eaton. Conference: Basin and Range Symposium and Great Basin Field Conference; Denver,...

  2. Wyoming State Geological Survey | Open Energy Information

    Open Energy Info (EERE)

    Name: Wyoming State Geological Survey Abbreviation: WSGS Address: P.O. Box 1347 Place: Laramie, Wyoming Zip: 82073 Year Founded: 1933 Phone Number:...

  3. Panel 2, Geologic Storage of Hydrogen

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Anna S. Lord, Geologist, Geotechnology & Engineering Department, and Peter H. Kobos, Principal Staff Economist, Ph.D., Earth Systems Department. Geologic Storage: Why underground storage? ...

  4. Study of the isolation system for geologic disposal of radioactive wastes

    SciTech Connect (OSTI)

    Not Available

    1983-01-01

    This study was conducted for the US Department of Energy by a Waste Isolation System Panel of the Board on Radioactive Waste Management under the National Research Council's Commission on Physical Sciences, Mathematics, and Resources. The panel was charged to review the alternative technologies available for the isolation of radioactive waste in mined geologic repositories, evaluate the need for and possible performance benefits from these technologies as potential elements of the isolation system, and identify appropriate technical criteria for choosing among them to achieve satisfactory overall performance of a geologic repository. Information has been acquired through examination of a large body of technical literature, briefings by representatives of government agencies and their industrial and university contractors, in-depth discussions with individual experts in the field, site visits, and calculations by panel members and staff, with deliberations extending over a period of approximately two years. The panel's principal findings are given. Chapters are devoted to: the geologic waste-disposal system; waste characteristics; waste package; conceptual design of repositories; geologic, hydrologic, and geochemical properties of geologic waste-disposal systems; overall performance criterion for geologic waste disposal; performance analysis of the geologic waste-disposal system; and natural analogs relevant to geologic disposal. 336 references.

  5. International Symposium on Site Characterization for CO2 Geological Storage

    SciTech Connect (OSTI)

    Tsang, Chin-Fu

    2006-02-23

    Several technological options have been proposed to stabilize atmospheric concentrations of CO{sub 2}. One proposed remedy is to separate and capture CO{sub 2} from fossil-fuel power plants and other stationary industrial sources and to inject the CO{sub 2} into deep subsurface formations for long-term storage and sequestration. Characterization of geologic formations for sequestration of large quantities of CO{sub 2} needs to be carefully considered to ensure that sites are suitable for long-term storage and that there will be no adverse impacts to human health or the environment. The Intergovernmental Panel on Climate Change (IPCC) Special Report on Carbon Dioxide Capture and Storage (Final Draft, October 2005) states that "Site characterization, selection and performance prediction are crucial for successful geological storage. Before selecting a site, the geological setting must be characterized to determine if the overlying cap rock will provide an effective seal, if there is a sufficiently voluminous and permeable storage formation, and whether any abandoned or active wells will compromise the integrity of the seal. Moreover, the availability of good site characterization data is critical for the reliability of models". This International Symposium on Site Characterization for CO{sub 2} Geological Storage (CO2SC) addresses the particular issue of site characterization and site selection related to the geologic storage of carbon dioxide. Presentations and discussions cover the various aspects associated with characterization and selection of potential CO{sub 2} storage sites, with emphasis on advances in process understanding, development of measurement methods, identification of key site features and parameters, site characterization strategies, and case studies.

  6. SRS Geology/Hydrogeology Environmental Information Document

    SciTech Connect (OSTI)

    Denham, M.E.

    1999-08-31

    The purpose of the Savannah River Site Geology and Hydrogeology Environmental Information Document (EID) is to provide geologic and hydrogeologic information to serve as a baseline to evaluate potential environmental impacts. This EID is based on a summary of knowledge accumulated from research conducted at the Savannah River Site (SRS) and surrounding areas.

  7. Early opportunities of CO₂ geological storage deployment in coal chemical industry in China

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Wei, Ning; Li, Xiaochun; Liu, Shengnan; Dahowski, R. T.; Davidson, C. L.

    2014-12-31

    Carbon dioxide capture and geological storage (CCS) is regarded as a promising option for climate change mitigation; however, the high capture cost is the major barrier to large-scale deployment of CCS technologies. High-purity CO₂ emission sources can reduce or even avoid the capture requirements and costs. Among these high-purity CO₂ sources, certain coal chemical industry processes are very important, especially in China. In this paper, the basic characteristics of coal chemical industries in China are investigated and analyzed. As of 2013 there were more than 100 coal chemical plants in operation. These emission sources together emit 430 million tons of CO₂ per year, of which about 30% is emitted as high-purity or pure CO₂ (CO₂ concentration >80% and >98.5%, respectively). Four typical source-sink pairs are chosen for techno-economic evaluation, including site screening and selection, source-sink matching, concept design, and economic evaluation. The evaluation shows that the levelized cost of a CO₂ capture and aquifer storage project in the coal chemical industry ranges from 14 USD/t to 17 USD/t CO₂. When a 15 USD/t CO₂ tax and 20 USD/t for CO₂ sold to EOR are considered, the levelized cost of a CCS project is negative, which suggests a net benefit from some of these CCS projects. This might provide China early opportunities to deploy and scale up CCS projects in the near future.
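
    A back-of-envelope version of the stated economics (a sketch assuming the midpoint of the reported cost range):

      # Net levelized cost per tonne once the tax and EOR revenue are counted.
      levelized_cost = 15.5    # USD/t CO2, midpoint of the reported 14-17 range
      co2_tax = 15.0           # USD/t CO2 tax considered in the paper
      eor_revenue = 20.0       # USD/t CO2 sold to EOR, as considered in the paper
      net = levelized_cost - co2_tax - eor_revenue
      print(f"Net levelized cost: {net:+.1f} USD/t CO2")  # -19.5: a net benefit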

  8. Early opportunities of CO₂ geological storage deployment in coal chemical industry in China

    SciTech Connect (OSTI)

    Wei, Ning; Li, Xiaochun; Liu, Shengnan; Dahowski, R. T.; Davidson, C. L.

    2014-12-31

    Carbon dioxide capture and geological storage (CCS) is regarded as a promising option for climate change mitigation; however, the high capture cost is the major barrier to large-scale deployment of CCS technologies. High-purity CO₂ emission sources can reduce or even avoid the capture requirements and costs. Among these high-purity CO₂ sources, certain coal chemical industry processes are very important, especially in China. In this paper, the basic characteristics of coal chemical industries in China are investigated and analyzed. As of 2013 there were more than 100 coal chemical plants in operation. These emission sources together emit 430 million tons of CO₂ per year, of which about 30% is emitted as high-purity or pure CO₂ (CO₂ concentration >80% and >98.5%, respectively). Four typical source-sink pairs are chosen for techno-economic evaluation, including site screening and selection, source-sink matching, concept design, and economic evaluation. The evaluation shows that the levelized cost of a CO₂ capture and aquifer storage project in the coal chemical industry ranges from 14 USD/t to 17 USD/t CO₂. When a 15 USD/t CO₂ tax and 20 USD/t for CO₂ sold to EOR are considered, the levelized cost of a CCS project is negative, which suggests a net benefit from some of these CCS projects. This might provide China early opportunities to deploy and scale up CCS projects in the near future.

  9. Regional Opportunities for Carbon Dioxide Capture and Storage in China: A Comprehensive CO2 Storage Cost Curve and Analysis of the Potential for Large Scale Carbon Dioxide Capture and Storage in the People's Republic of China

    SciTech Connect (OSTI)

    Dahowski, Robert T.; Li, Xiaochun; Davidson, Casie L.; Wei, Ning; Dooley, James J.

    2009-12-01

    This study presents data and analysis on the potential for carbon dioxide capture and storage (CCS) technologies to deploy within China, including a survey of the CO2 source fleet and potential geologic storage capacity. The results presented here indicate that there is significant potential for CCS technologies to deploy in China at a level sufficient to deliver deep, sustained and cost-effective emissions reductions for China over the course of this century.

  10. Federal Control of Geological Carbon Sequestration

    SciTech Connect (OSTI)

    Reitze, Arnold

    2011-04-11

    The United States has economically recoverable coal reserves of about 261 billion tons, which is in excess of a 250-year supply based on 2009 consumption rates. However, in the near future the use of coal may be legally restricted because of concerns over the effects of its combustion on atmospheric carbon dioxide concentrations. In response, the U.S. Department of Energy is making significant efforts to help develop and implement a commercial scale program of geologic carbon sequestration that involves capturing and storing carbon dioxide emitted from coal-burning electric power plants in deep underground formations. This article explores the technical and legal problems that must be resolved in order to have a viable carbon sequestration program. It covers the responsibilities of the United States Environmental Protection Agency and the Departments of Energy, Transportation and Interior. It discusses the use of the Safe Drinking Water Act, the Clean Air Act, the National Environmental Policy Act, the Endangered Species Act, and other applicable federal laws. Finally, it discusses the provisions related to carbon sequestration that have been included in the major bills dealing with climate change that Congress has been considering in 2009 and 2010. The article concludes that the many legal issues that exist can be resolved, but whether carbon sequestration becomes a commercial reality will depend on reducing its costs or by imposing legal requirements on fossil-fired power plants that result in the costs of carbon emissions increasing to the point that carbon sequestration becomes a feasible option.

  11. Influence of Shrinkage and Swelling Properties of Coal on Geologic Sequestration of Carbon Dioxide

    SciTech Connect (OSTI)

    Siriwardane, H.J.; Gondle, R.; Smith, D.H.

    2007-05-01

    The potential for enhanced methane production and geologic sequestration of carbon dioxide in coalbeds needs to be evaluated before large-scale sequestration projects are undertaken. Geologic sequestration of carbon dioxide in deep unmineable coal seams, with the potential for enhanced coalbed methane production, has become a viable option to reduce greenhouse gas emissions. The coal matrix is believed to shrink during methane production and swell during the injection of carbon dioxide, causing changes in the cleat porosity and permeability of the coal seam. However, the influence of swelling and shrinkage, and the geomechanical response during the process of carbon dioxide injection and methane recovery, are not well understood. A three-dimensional swelling and shrinkage model based on constitutive equations that account for the coupled fluid pressure-deformation behavior of a porous medium was developed and implemented in an existing reservoir model. Several reservoir simulations were performed at a field site located in the San Juan basin to investigate the influence of swelling and shrinkage, as well as other geomechanical parameters, using a modified compositional coalbed methane reservoir simulator (modified PSU-COALCOMP). The paper presents numerical results for interpretation of reservoir performance during injection of carbon dioxide at this site. Available measured data at the field site were compared with computed values. Results show that coal swelling and shrinkage during the process of enhanced coalbed methane recovery can have a significant influence on reservoir performance. Results also show an increase in the gas production rate with an increase in the elastic modulus of the reservoir material and an increase in cleat porosity. Further laboratory and field tests of the model are needed to furnish better estimates of petrophysical parameters, test the applicability of the model, and determine the need for further refinements to the mathematical model.
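
    The porosity-permeability coupling at issue is often approximated with the matchstick-geometry "cubic law" (a standard coalbed relation offered as a sketch, not necessarily the exact constitutive model used in this paper):

      def cleat_permeability(k0: float, phi0: float, phi: float) -> float:
          """Cubic-law estimate: k/k0 = (phi/phi0)**3, so CO2-induced swelling
          (lower cleat porosity) sharply reduces permeability, while methane
          shrinkage raises it."""
          return k0 * (phi / phi0) ** 3

      k0, phi0 = 10.0, 0.010    # assumed initial permeability (mD) and cleat porosity
      print(cleat_permeability(k0, phi0, 0.008))   # swelling:  ~5.1 mD
      print(cleat_permeability(k0, phi0, 0.012))   # shrinkage: ~17.3 mD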

  12. Comparison of methods for geologic storage of carbon dioxide...

    Office of Scientific and Technical Information (OSTI)

    Journal Article: Comparison of methods for geologic storage of carbon dioxide in saline formations. Citation Details. Title: Comparison of methods for geologic...

  13. Summary of geology of Colorado related to geothermal potential...

    Open Energy Info (EERE)

    Journal Article: Summary of geology of Colorado related to geothermal potential. Author: L.T. Grose. Published Journal: Colorado Geological Survey Bulletin, 1974. DOI: Not Provided...

  14. Regional Geology: GIS Database for Alternative Host Rocks and...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Regional Geology: GIS Database for Alternative Host Rocks and Potential Siting Guidelines ...

  15. Idaho Geological Survey and University of Idaho Explore for Geothermal...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Idaho Geological Survey and University of Idaho Explore for Geothermal Energy. January 11, 2013 -...

  16. Rock Physics of Geologic Carbon Sequestration/Storage (Technical...

    Office of Scientific and Technical Information (OSTI)

    Rock Physics of Geologic Carbon Sequestration/Storage. Citation Details. This report covers the ...

  17. Rock Physics of Geologic Carbon Sequestration/Storage (Technical...

    Office of Scientific and Technical Information (OSTI)

    Rock Physics of Geologic Carbon Sequestration/Storage. Citation Details. You are accessing a ...

  18. State Geological Survey Contributions to the National Geothermal...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    State Geological Survey Contributions to the National Geothermal Data System. Project objectives: Deploy...

  19. North Carolina Geological Survey | Open Energy Information

    Open Energy Info (EERE)

    Address: 1612 Mail Service Center Place: North Carolina Zip: 27699-1612 Website: www.geology.enr.state.nc.us Coordinates: 35.67, -78.66

  20. Panel 2, Geologic Storage of Hydrogen

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    SAND2014-3954P. Geologic Storage of Hydrogen. Anna S. Lord, Geologist, Geotechnology & Engineering Department, and Peter H. Kobos, Principal Staff Economist, Ph.D., Earth Systems Department. Geologic Storage: Why underground storage?

  1. Field-Scale Effective Matrix Diffusion Coefficient for Fractured Rock: Results From Literature Survey

    SciTech Connect (OSTI)

    Zhou, Quanlin; Liu, Hui Hai; Molz, Fred J.; Zhang, Yingqi; Bodvarsson, Gudmundur S.

    2005-03-28

    Matrix diffusion is an important mechanism for solute transport in fractured rock. We recently conducted a literature survey on the effective matrix diffusion coefficient, Dem, a key parameter for describing matrix diffusion processes at the field scale. Forty field tracer tests at 15 fractured geologic sites were surveyed and selected for study, based on data availability and quality. Field-scale Dem values were calculated, either directly using data reported in the literature or by reanalyzing the corresponding field tracer tests. Surveyed data indicate that the effective-matrix-diffusion-coefficient factor FD (defined as the ratio of Dem to the lab-scale matrix diffusion coefficient [Dem] of the same tracer) is generally larger than one, indicating that the effective matrix diffusion coefficient in the field is comparatively larger than the matrix diffusion coefficient at the rock-core scale. This larger value could be attributed to the many mass-transfer processes at different scales in naturally heterogeneous, fractured rock systems. Furthermore, we observed a moderate trend toward systematic increase in the FD value with observation scale, indicating that the effective matrix diffusion coefficient is likely to be statistically scale dependent. The FD value ranges from 1 to 10,000 for observation scales from 5 to 2,000 m. At a given scale, the FD value varies by two orders of magnitude, reflecting the influence of differing degrees of fractured rock heterogeneity at different sites. In addition, the surveyed data indicate that field-scale longitudinal dispersivity generally increases with observation scale, which is consistent with previous studies. The scale-dependent field-scale matrix diffusion coefficient (and dispersivity) may have significant implications for assessing long-term, large-scale radionuclide and contaminant transport events in fractured rock, both for nuclear waste disposal and contaminant remediation.
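
    In the notation of the abstract, the factor under study is simply the ratio of field- to lab-scale coefficients:

      F_D \;=\; \frac{D_{em}^{\mathrm{field}}}{D_{em}^{\mathrm{lab}}}, \qquad 1 \lesssim F_D \lesssim 10^{4} \ \text{over observation scales of 5--2,000 m},

    so F_D > 1 expresses field-scale enhancement of matrix diffusion relative to the rock-core scale.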

  2. Generic Deep Geologic Disposal Safety Case | Department of Energy

    Office of Environmental Management (EM)

    The Generic Deep Geologic Disposal Safety Case presents generic information that is of use in understanding potential deep geologic disposal options in the U.S. for used nuclear fuel (UNF) from reactors and high-level radioactive waste (HLW). Potential disposal options include mined disposal in a variety of geologic media (e.g., salt, shale, granite), and deep borehole disposal in basement rock.

  3. Geologic development and characteristics of the continental margins, Gulf of Mexico. Research report, 1983-1986

    SciTech Connect (OSTI)

    Coleman, J.M.; Prior, D.B.; Roberts, H.H.

    1986-01-01

    The continental slope of the Gulf Basin covers more than 500,000 sq km and consists of smooth and gently sloping surfaces, prominent escarpments, knolls, intraslope basins, and submarine canyons and channels. It is an area of extremely diverse topographic and sedimentologic conditions. The slope extends from the shelf break, roughly at the 200 m isobath, to the upper limit of the continental rise, at a depth of 2800 m. The most complex province in the basin, and the one of most interest to the petroleum industry, is the Texas-Louisiana slope, occupying 120,000 sq km, in which bottom slopes range from < 1 deg to > 20 deg around the knolls and basins. The near-surface geology and topography of the slope are functions of the interplay between episodes of rapid shelf-edge and slope progradation and contemporaneous modification of the depositional sequence by diapirism. Development of discrete depo-centers throughout the Neogene results in rapid shelf-edge progradation, often in excess of 15-20 km/my. This rapid progradation of the shelf edge leads to development of thick wedges of sediment accumulation on the continental slope. Oversteepening, high pore pressures in rapidly deposited soft sediments, and changes in eustatic sea level cause subaqueous slope instabilities such as landsliding and debris flows. Large-scale features such as shelf-edge separation scars and landslide-related canyons often result from such processes.

  4. Method of fracturing a geological formation

    DOE Patents [OSTI]

    Johnson, James O. (2679-B Walnut, Los Alamos, NM 87544)

    1990-01-01

    An improved method of fracturing a geological formation surrounding a well bore is disclosed. A relatively small explosive charge is emplaced in a well bore and the bore is subsequently hydraulically pressurized to a pressure less than the formation breakdown pressure and preferably greater than the fracture propagation pressure of the formation. The charge is detonated while the bore is so pressurized, resulting in the formation of multiple fractures in the surrounding formation with little or no accompanying formation damage. Subsequent hydraulic pressurization can be used to propagate and extend the fractures in a conventional manner. The method is useful for stimulating production of oil, gas, and possibly water from suitable geologic formations.

  5. Scaling Up

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Many scientists appreciate Python's power for prototyping and developing scientific computing and data-intensive applications. However, creating parallel Python applications that scale well in modern high-performance computing environments can be challenging for a variety of reasons. Here we outline various approaches to scaling parallel Python applications at NERSC so that users may select the
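
    A minimal sketch of one standard approach (mpi4py, a common choice for parallel Python on HPC systems; the snippet is a generic illustration, not taken from the page):

      # Each MPI rank sums a strided slice of the work; allreduce combines them.
      from mpi4py import MPI

      comm = MPI.COMM_WORLD
      rank, size = comm.Get_rank(), comm.Get_size()

      local = sum(range(rank, 1000, size))          # this rank's share of 0..999
      total = comm.allreduce(local, op=MPI.SUM)     # collective combination

      if rank == 0:
          print(f"{size} ranks computed total = {total}")  # 499500 for any size

    Run with, e.g., "srun -n 32 python sum.py" under SLURM, or "mpirun -n 32 python sum.py" elsewhere.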

  6. Geologic Analysis of Priority Basins for Exploration and Drilling

    SciTech Connect (OSTI)

    Carroll, H.B.; Reeves, T.K.

    1999-04-27

    There has been a substantial decline in both exploratory drilling and seismic field crew activity in the United States over the last 10 years, due primarily to the declining price of oil. To reverse this trend and to preserve the entrepreneurial independent operator, the U.S. DOE is attempting to encourage hydrocarbon exploration activities in some of the underexploited regions of the United States. This goal is being accomplished by conducting broad regional reviews of potentially prospective areas within the lower 48 states. Data are being collected on selected areas, and studies are being done on a regional scale generally unavailable to the smaller independent. The results of this work will be made available to the public to encourage the undertaking of operations in areas which have been overlooked until this project. Fifteen criteria have been developed for the selection of study areas. Eight regions have been identified where regional geologic analysis will be performed. This report discusses preliminary findings concerning the geology, early tectonic history, structure, and potential unconventional source rocks for the Black Mesa basin and South Central states region, the two highest-priority study areas.

  7. The Cielo Petascale Capability Supercomputer: Providing Large...

    Office of Scientific and Technical Information (OSTI)

    Title: The Cielo Petascale Capability Supercomputer: Providing Large-Scale Computing for Stockpile Stewardship. Authors: Vigil, Benny Manuel; Doerfler, Douglas W.; ...

  8. Early opportunities of CO2 geological storage deployment in coal chemical industry in China

    SciTech Connect (OSTI)

    Wei, Ning; Li, Xiaochun; Liu, Shengnan; Dahowski, Robert T.; Davidson, Casie L.

    2014-11-12

    Carbon dioxide capture and geological storage (CCS) is regarded as a promising option for climate change mitigation; however, the high capture cost is the major barrier to large-scale deployment of CCS technologies. High-purity CO2 emission sources can reduce or even avoid the capture requirements and costs. Among these high-purity CO2 sources, certain coal chemical industry processes are very important, especially in China. In this paper, the basic characteristics of coal chemical industries in China are investigated and analyzed. As of 2013 there were more than 100 coal chemical plants in operation or in late planning stages. These emission sources together emit 430 million tons of CO2 per year, of which about 30% is emitted as high-purity or pure CO2 (CO2 concentration >80% and >99%, respectively). Four typical source-sink pairs are studied by a techno-economic evaluation, including site screening and selection, source-sink matching, concept design, and economic evaluation. The evaluation shows that the levelized cost of a CO2 capture and aquifer storage project in the coal chemical industry ranges from 14 USD/t to 17 USD/t CO2. When a 15 USD/t CO2 tax and 15 USD/t for CO2 sold to EOR are considered, the levelized cost of a CCS project is negative, which suggests a net economic benefit from some of these CCS projects. This might provide China early opportunities to deploy and scale up CCS projects in the near future.

  9. Geological and petrophysical characterization of the ferron sandstone for 3-D simulation of a fluvial-deltaic reservoir. Annual report, October 1, 1994--September 30, 1995

    SciTech Connect (OSTI)

    Chidsey, T.C. Jr.; Allison, M.L.

    1996-05-01

    The objective of the Ferron Sandstone project is to develop a comprehensive, interdisciplinary, quantitative characterization of a fluvial-deltaic reservoir to allow realistic interwell and reservoir-scale models to be developed for improved oil-field development in similar reservoirs world-wide. Quantitative geological and petrophysical information on the Cretaceous Ferron Sandstone in east-central Utah was collected. Both new and existing data are being integrated into a three-dimensional model of spatial variations in porosity, storativity, and tensorial rock permeability at a scale appropriate for inter-well to regional-scale reservoir simulation. Simulation results could improve reservoir management through proper infill and extension drilling strategies, reduction of economic risks, increased recovery from existing oil fields, and more reliable reserve calculations. Transfer of the project results to the petroleum industry is an integral component of the project. This report covers research activities for fiscal year 1994-95, the second year of the project. Most work consisted of developing field methods and collecting large quantities of existing and new data. We also continued to develop preliminary regional and case-study area interpretations. The project is divided into four tasks: (1) regional stratigraphic analysis, (2) case studies, (3) reservoir models, and (4) field-scale evaluation of exploration strategies.

  10. An Assessment of Geological Carbon Storage Options in the Illinois Basin: Validation Phase

    SciTech Connect (OSTI)

    Robert Finley

    2012-12-01

    The Midwest Geological Sequestration Consortium (MGSC) assessed the options for geological carbon dioxide (CO{sub 2}) storage in the 155,400 km{sup 2} (60,000 mi{sup 2}) Illinois Basin, which underlies most of Illinois, western Indiana, and western Kentucky. The region has annual CO{sub 2} emissions of about 265 million metric tonnes (292 million tons), primarily from 122 coal-fired electric generation facilities, some of which burn almost 4.5 million tonnes (5 million tons) of coal per year (U.S. Department of Energy, 2010). Validation Phase (Phase II) field tests gathered pilot data to update the Characterization Phase (Phase I) assessment of options for capture, transportation, and storage of CO{sub 2} emissions in three geological sink types: coal seams, oil fields, and saline reservoirs. Four small-scale field tests were conducted to determine the properties of rock units that control injectivity of CO{sub 2}, assess the total storage resources, examine the security of the overlying rock units that act as seals for the reservoirs, and develop ways to control and measure the safety of injection and storage processes. The MGSC designed field test operational plans for pilot sites based on the site screening process, MVA program needs, the selection of equipment related to CO{sub 2} injection, and design of a data acquisition system. Reservoir modeling, computational simulations, and statistical methods assessed and interpreted data gathered from the field tests. Monitoring, Verification, and Accounting (MVA) programs were established to detect leakage of injected CO{sub 2} and ensure public safety. Public outreach and education remained an important part of the project; meetings and presentations informed public and private regional stakeholders of the results and findings. A miscible (liquid) CO{sub 2} flood pilot project was conducted in the Clore Formation sandstone (Mississippian System, Chesterian Series) at Mumford Hills Field in Posey County, southwestern Indiana, and an immiscible CO{sub 2} flood pilot was conducted in the Jackson sandstone (Mississippian System Big Clifty Sandstone Member) at the Sugar Creek Field in Hopkins County, western Kentucky. Up to 12% incremental oil recovery was estimated based on these pilots. A CO{sub 2} huff ‘n’ puff (HNP) pilot project was conducted in the Cypress Sandstone in the Loudon Field. This pilot was designed to measure and record data that could be used to calibrate a reservoir simulation model. A pilot project at the Tanquary Farms site in Wabash County, southeastern Illinois, tested the potential storage of CO{sub 2} in the Springfield Coal Member of the Carbondale Formation (Pennsylvanian System), in order to gauge the potential for large-scale CO{sub 2} storage and/or enhanced coal bed methane recovery from Illinois Basin coal beds. The pilot results from all four sites showed that CO{sub 2} could be injected into the subsurface without adversely affecting groundwater. Additionally, hydrocarbon production was enhanced, giving further evidence that CO{sub 2} storage in oil reservoirs and coal beds offers an economic advantage. Results from the MVA program at each site indicated that injected CO{sub 2} did not leave the injection zone. Topical reports were completed on the Middle and Late Devonian New Albany Shale and Basin CO{sub 2} emissions. The efficacy of the New Albany Shale as a storage sink could be substantial if low injectivity concerns can be alleviated. 
CO{sub 2} emissions in the Illinois Basin were projected to be dominated by coal-fired power plants.

  11. ORS 516 - Department of Geology and Mineral Industries | Open...

    Open Energy Info (EERE)

    ORS 516 - Department of Geology and Mineral Industries. Legal Document - Statute. Statute: ORS 516 - Department of Geology...

  12. FMI Borehole Geology, Geomechanics and 3D Reservoir Modeling...

    Open Energy Info (EERE)

    Report: FMI Borehole Geology, Geomechanics and 3D...

  13. Subsurface geology of the Raft River geothermal area, Idaho ...

    Open Energy Info (EERE)

    Conference Proceedings: Subsurface geology of the Raft River...

  14. Map of Geologic Sequestration Training and Research Projects

    Broader source: Energy.gov [DOE]

    A larger map of FE's Geologic Sequestration Training and Research Projects awarded as part of the Recovery Act.

  15. Geology and Groundwater Investigation Many Devils Wash, Shiprock Site, New

    Office of Environmental Management (EM)

    Mexico | Department of Energy. Geology and Groundwater Investigation, Many Devils Wash, Shiprock Site, New Mexico. More Documents & Publications: Natural Contamination from the Mancos Shale; Application of Environmental Isotopes to the

  16. License for the Konrad Deep Geological Repository

    SciTech Connect (OSTI)

    Biurrun, E.; Hartje, B.

    2003-02-24

    Deep geological disposal of long-lived radioactive waste is currently considered a major challenge. To date, only three deep geological disposal facilities have been operated worldwide: the Asse experimental repository (1967-1978) and the Morsleben repository (1971-1998) in Germany, as well as the Waste Isolation Pilot Plant (WIPP) in the USA (1999 to present). Recently, the licensing procedure for the fourth such facility, the German Konrad repository, ended with a positive "Planfeststellung" (plan approval). With its plan approval decision, the licensing authority, the Ministry of the Environment of the state of Lower Saxony, approved the single license needed pursuant to German law to construct, operate, and later close down this facility.

  17. RECOVERY ACT: Geologic Sequestration Training and Research

    Office of Scientific and Technical Information (OSTI)

    RECOVERY ACT: Geologic Sequestration Training and Research Final Scientific/Technical Report Reporting Period Start Date: December 1, 2009 Reporting Period End Date: June 30, 2013 Peter M. Walsh,* Richard A. Esposito,†* Konstantinos Theodorou,‡* Michael J. Hannon, Jr.,* Aaron D. Lamplugh,§* and Kirk M. Ellison†* *University of Alabama at Birmingham †Southern Company, Birmingham, AL ‡Jefferson State Community College, Birmingham, AL §John A. Volpe National Transportation Systems

  18. geologic-sequestration | netl.doe.gov

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Geological Sequestration Training and Research Program in Capture and Transport: Development of the Most Economical Separation Method for CO2 Capture Project No.: DE-FE0001953 NETL has partnered with Tuskegee University (TU) to provide fundamental research and hands-on training and networking opportunities to undergraduate students at TU in the area of CO2 capture and transport with a focus on the development of the most economical separation methods for pre-combustion CO2 capture. The bulk of

  19. NEVADA BUREAU OF MINES AND GEOLOGY

    National Nuclear Security Administration (NNSA)

    " ,,"'1' NEVADA BUREAU OF MINES AND GEOLOGY BULLETIN 104 OIL AND GAS DEVELOPMENTS IN NEVADA LARRY J. GARSIDE, RONALD H. HESS, KERYL L. FLEMING, AND BECKY S. WEIMER I 1988 .,", " "- "" ~-". - CONTENTS INTRODUCTION 3 LYON COUNTY 41 Sources of infonnation 3 Well data 42 Regulation 3 Organization of bulletin and NYE COUNTY 42 explanation of tenns 3 Railroad Valley field summaries 44 Acknowledgments 5 Well data 47 HISTORICAL SUMMARY 5 PERSHING COUNTY 79 Well

  20. Geological problems in radioactive waste isolation

    SciTech Connect (OSTI)

    Witherspoon, P.A.

    1991-01-01

    The problem of isolating radioactive wastes from the biosphere presents specialists in the fields of earth sciences with some of the most complicated problems they have ever encountered. This is especially true for high-level waste (HLW), which must be isolated underground and away from the biosphere for thousands of years. Essentially every country that is generating electricity in nuclear power plants is faced with the problem of isolating the radioactive wastes that are produced. The general consensus is that this can be accomplished by selecting an appropriate geologic setting and carefully designing the rock repository. Much new technology is being developed to solve the problems that have been raised, and there is a continuing need to publish the results of new developments for the benefit of all concerned. The 28th International Geologic Congress, held July 9--19, 1989 in Washington, DC, provided an opportunity for earth scientists to gather for detailed discussions on these problems. Workshop W3B on the subject, "Geological Problems in Radioactive Waste Isolation -- A World Wide Review," was organized by Paul A. Witherspoon and Ghislain de Marsily and convened July 15--16, 1989. Reports from 19 countries have been gathered for this publication. Individual papers have been cataloged separately.

  1. Development of an integrated in-situ remediation technology. Topical report for task No. 12 and 13 entitled: Large scale field test of the Lasagna{trademark} process, September 26, 1994--May 25, 1996

    SciTech Connect (OSTI)

    Athmer, C.J.; Ho, Sa V.; Hughes, B.M.

    1997-04-01

    Contamination in low-permeability soils poses a significant technical challenge to in-situ remediation efforts. Poor accessibility to the contaminants and difficulty in delivery of treatment reagents have rendered existing in-situ treatments such as bioremediation, vapor extraction, and pump-and-treat rather ineffective when applied to the low-permeability soils present at many contaminated sites. This technology is an integrated in-situ treatment in which established geotechnical methods are used to install degradation zones directly in the contaminated soil, and electroosmosis is utilized to move the contaminants back and forth through those zones until the treatment is completed. This topical report summarizes the results of the field experiment conducted at the Paducah Gaseous Diffusion Plant in Paducah, KY. The test site was 15 feet wide by 10 feet across and 15 feet deep, with steel panels as electrodes and wick drains containing granular activated carbon as treatment zones. The electrodes and treatment zones were installed utilizing innovative adaptations of existing emplacement technologies. The unit was operated for four months, flushing TCE by electroosmosis from the soil into the treatment zones, where it was trapped by the activated carbon. The scale-up from laboratory units to this field scale was very successful with respect to electrical parameters as well as electroosmotic flow. Soil samples taken throughout the site before and after the test showed over 98% TCE removal, with most samples showing greater than 99% removal.
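
    The electroosmotic transport driving the process is conventionally described by the Helmholtz-Smoluchowski relation (a textbook result quoted for context, not taken from the report):

      v_{eo} \;=\; -\,\frac{\varepsilon\,\zeta}{\mu}\,E,

    where \varepsilon is the pore-fluid permittivity, \zeta the zeta potential of the soil, \mu the fluid viscosity, and E the applied electric field; the flow speed scales linearly with the applied field, which is why a modest voltage gradient can sweep contaminants through low-permeability clay.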

  2. Applications of Micro-Fourier Transform Infrared Spectroscopy (FTIR) in the Geological Sciences—A Review

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Chen, Yanyan; Zou, Caineng; Mastalerz, Maria; Hu, Suyun; Gasaway, Carley; Tao, Xiaowan

    2015-12-18

    Fourier transform infrared spectroscopy (FTIR) can provide crucial information on the molecular structure of organic and inorganic components and has been used extensively for chemical characterization of geological samples in the past few decades. In this paper, recent applications of FTIR in the geological sciences are reviewed. Particularly, its use in the characterization of geochemistry and thermal maturation of organic matter in coal and shale is addressed. These investigations demonstrate that the employment of high-resolution micro-FTIR imaging enables visualization and mapping of the distributions of organic matter and minerals on a micrometer scale in geological samples, and promotes an advanced understanding of heterogeneity of organic-rich coal and shale. Additionally, micro-FTIR is particularly suitable for in situ, non-destructive characterization of minute microfossils, small fluid and melt inclusions within crystals, and volatiles in glasses and minerals. This technique can also assist in the chemotaxonomic classification of macrofossils such as plant fossils. These features, barely accessible with other analytical techniques, may provide fundamental information on paleoclimate, depositional environment, and the evolution of geological (e.g., volcanic and magmatic) systems.

  3. Applications of Micro-Fourier Transform Infrared Spectroscopy (FTIR) in the Geological Sciences—A Review

    SciTech Connect (OSTI)

    Chen, Yanyan; Zou, Caineng; Mastalerz, Maria; Hu, Suyun; Gasaway, Carley; Tao, Xiaowan

    2015-12-18

    Fourier transform infrared spectroscopy (FTIR) can provide crucial information on the molecular structure of organic and inorganic components and has been used extensively for chemical characterization of geological samples in the past few decades. In this paper, recent applications of FTIR in the geological sciences are reviewed. Particularly, its use in the characterization of geochemistry and thermal maturation of organic matter in coal and shale is addressed. These investigations demonstrate that the employment of high-resolution micro-FTIR imaging enables visualization and mapping of the distributions of organic matter and minerals on a micrometer scale in geological samples, and promotes an advanced understanding of heterogeneity of organic rich coal and shale. Additionally, micro-FTIR is particularly suitable for in situ, non-destructive characterization of minute microfossils, small fluid and melt inclusions within crystals, and volatiles in glasses and minerals. This technique can also assist in the chemotaxonomic classification of macrofossils such as plant fossils. These features, barely accessible with other analytical techniques, may provide fundamental information on paleoclimate, depositional environment, and the evolution of geological (e.g., volcanic and magmatic) systems.

  4. Angular Scaling In Jets

    SciTech Connect (OSTI)

    Jankowiak, Martin; Larkoski, Andrew J.; /SLAC

    2012-02-17

    We introduce a jet shape observable defined for an ensemble of jets in terms of two-particle angular correlations and a resolution parameter R. This quantity is infrared and collinear safe and can be interpreted as a scaling exponent for the angular distribution of mass inside the jet. For small R it is close to the value 2 as a consequence of the approximately scale invariant QCD dynamics. For large R it is sensitive to non-perturbative effects. We describe the use of this correlation function for tests of QCD, for studying underlying event and pile-up effects, and for tuning Monte Carlo event generators.
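
    Schematically, the observable is built from pairwise angular correlations; one form consistent with the abstract (a sketch, with the precise weighting as defined in the paper) is

      \mathcal{G}(R) \;\propto\; \sum_{i \neq j} p_{T,i}\, p_{T,j}\, \Delta R_{ij}^{2}\, \Theta\!\left(R - \Delta R_{ij}\right),
      \qquad
      \Delta\mathcal{G}(R) \;=\; \frac{d \log \mathcal{G}(R)}{d \log R},

    so the scaling exponent \Delta\mathcal{G}(R) is near 2 at small R for approximately scale-invariant dynamics, as the abstract states.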

  5. Searches for large-scale anisotropy in the arrival directions of cosmic rays detected above energy of 10{sup 19} eV at the Pierre Auger observatory and the telescope array

    SciTech Connect (OSTI)

    Aab, A.; Abreu, P.; Andringa, S.; Aglietta, M.; Ahn, E. J.; Al Samarai, I.; Albuquerque, I. F. M.; Allekotte, I.; Asorey, H.; Allen, J.; Allison, P.; Almela, A.; Castillo, J. Alvarez; Alvarez-Muñiz, J.; Batista, R. Alves; Ambrosio, M.; Aramo, C.; Aminaei, A.; Anchordoqui, L.; Arqueros, F.; Collaboration: Pierre Auger Collaboration; Telescope Array Collaboration; and others

    2014-10-20

    Spherical harmonic moments are well-suited for capturing anisotropy at any scale in the flux of cosmic rays. An unambiguous measurement of the full set of spherical harmonic coefficients requires full-sky coverage. This can be achieved by combining data from observatories located in both the northern and southern hemispheres. To this end, a joint analysis using data recorded at the Telescope Array and the Pierre Auger Observatory above 10{sup 19} eV is presented in this work. The resulting multipolar expansion of the flux of cosmic rays allows us to perform a series of anisotropy searches, and in particular to report on the angular power spectrum of cosmic rays above 10{sup 19} eV. No significant deviation from isotropic expectations is found throughout the analyses performed. Upper limits on the amplitudes of the dipole and quadrupole moments are derived as a function of the direction in the sky, varying between 7% and 13% for the dipole and between 7% and 10% for a symmetric quadrupole.
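
    The multipolar expansion referred to is the standard spherical-harmonic decomposition of the flux, with the angular power spectrum built from its coefficients:

      \Phi(\hat{n}) \;=\; \sum_{\ell \geq 0} \sum_{m=-\ell}^{\ell} a_{\ell m}\, Y_{\ell m}(\hat{n}),
      \qquad
      C_{\ell} \;=\; \frac{1}{2\ell + 1} \sum_{m=-\ell}^{\ell} \left| a_{\ell m} \right|^{2},

    and the dipole (\ell = 1) and quadrupole (\ell = 2) amplitudes bounded in the analysis are moments of this expansion.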

  6. Searches for Large-Scale Anisotropy in the Arrival Directions of Cosmic Rays Detected above Energy of $10^{19}$ eV at the Pierre Auger Observatory and the Telescope Array

    SciTech Connect (OSTI)

    Aab, Alexander; et al.

    2014-10-07

    Spherical harmonic moments are well-suited for capturing anisotropy at any scale in the flux of cosmic rays. An unambiguous measurement of the full set of spherical harmonic coefficients requires full-sky coverage. This can be achieved by combining data from observatories located in both the northern and southern hemispheres. To this end, a joint analysis using data recorded at the Telescope Array and the Pierre Auger Observatory above 10{sup 19} eV is presented in this work. The resulting multipolar expansion of the flux of cosmic rays allows us to perform a series of anisotropy searches, and in particular to report on the angular power spectrum of cosmic rays above 10{sup 19} eV. No significant deviation from isotropic expectations is found throughout the analyses performed. Upper limits on the amplitudes of the dipole and quadrupole moments are derived as a function of the direction in the sky, varying between 7% and 13% for the dipole and between 7% and 10% for a symmetric quadrupole.

  7. Investigating the Fundamental Scientific Issues Affecting the Long-term Geologic Storage of Carbon Dioxide

    SciTech Connect (OSTI)

    Spangler, Lee; Cunningham, Alfred; Barnhart, Elliot; Lageson, David; Nall, Anita; Dobeck, Laura; Repasky, Kevin; Shaw, Joseph; Nugent, Paul; Johnson, Jennifer; Hogan, Justin; Codd, Sarah; Bray, Joshua; Prather, Cody; McGrail, B.; Oldenburg, Curtis; Wagoner, Jeff; Pawar, Rajesh

    2014-09-30

    The Zero Emissions Research and Technology (ZERT) collaborative was formed to address basic science and engineering knowledge gaps relevant to geologic carbon sequestration. The original funding round of ZERT (ZERT I) identified and addressed many of these gaps. ZERT II has focused on specific science and technology areas identified in ZERT I that showed strong promise and needed greater effort to fully develop. Specific focal areas of ZERT II included: continued use of the unique ZERT field site to test and prove detection technologies and methods developed by Montana State University, Stanford, University of Texas, several private sector companies, and others, along with modeling of transport in the near surface; further development of near-surface detection technologies that cover moderate areas at relatively low cost (fiber sensors and compact infrared imagers); investigation of analogs for escape mechanisms, including characterization of the impact of CO2 and deeper brine on groundwater quality at a natural analog site in Chimayo, NM, and characterization of fracture systems exposed in outcrops in the northern Rockies; further investigation of biofilms and biomineralization for mitigation of small-aperture leaks, focusing on fundamental studies of rates that would allow engineered control of deposition in the subsurface; development of magnetic resonance techniques to perform multi-phase fluid measurements in rock cores; laboratory investigation of hysteretic relative permeability and its effect on residual gas trapping in large-scale reservoir simulations; and further development of computational tools, including a new version (V2) of the LBNL reactive geochemical transport simulator TOUGHREACT, extension of the coupled flow and stress simulation capabilities in LANL's FEHM simulator, and an online gas-mixture property estimation tool, WebGasEOS. Many of these efforts have resulted in technologies that are being utilized in other field tests or demonstration projects.

  8. Environmental resources of selected areas of Hawaii: Geological hazards

    SciTech Connect (OSTI)

    Staub, W.P.; Reed, R.M.

    1995-03-01

    This report has been prepared to make available and archive the background scientific data and related information collected on geologic hazards during the preparation of the environmental impact statement (EIS) for Phases 3 and 4 of the Hawaii Geothermal Project (HGP) as defined by the state of Hawaii in its April 1989 proposal to Congress. The US Department of Energy (DOE) published a notice withdrawing its Notice of Intent to prepare the HGP-EIS. Since the state of Hawaii is no longer pursuing or planning to pursue the HGP, DOE considers the project to be terminated. This report presents a review of current information on geologic hazards in the Hawaiian Islands. Interrelationships among these hazards are discussed. Probabilities of occurrence of given geologic hazards are provided in various regions where sufficient geologic or historical data are available. Most of the information contained herein is compiled from recent US Geological Survey (USGS) publications and USGS open-file reports related to this project. This report describes the natural geologic hazards present in the area and does not represent an assessment of environmental impacts. Geologic hazards originate both onshore and offshore. Onshore geologic hazards such as volcanic eruptions, earthquakes, surface rupture, landslides, uplift and subsidence occur mainly on the southern third of the island of Hawaii (hereinafter referred to as Hawaii). Offshore geologic hazards are more widely distributed throughout the Hawaiian Islands. Examples of offshore geologic hazards are submarine landslides, turbidity currents, and seismic sea waves (tsunamis).

  9. Data triage enables extreme-scale computing

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Data selection and triage are important techniques for large-scale data, which can drastically reduce the amount of data written to disk or transmitted over a network. August 1, 2014. [Figure: Spatial partitioning for the ocean simulation data set.] The main focus for ADR is to prioritize data primarily generated by large-scale scientific simulations run on
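
    The record is truncated in the source listing, but the idea it describes, selecting only scientifically interesting spatial blocks before writing them out, can be sketched in a few lines. The following is a minimal illustration using a hypothetical threshold criterion, not the ADR implementation:

      import numpy as np

      def triage_blocks(field, block=32, threshold=0.99):
          """Keep only spatial blocks whose maximum exceeds a threshold,
          reducing what must be written to disk or sent over a network."""
          kept = {}
          nx, ny = field.shape
          for i in range(0, nx, block):
              for j in range(0, ny, block):
                  tile = field[i:i + block, j:j + block]
                  if tile.max() >= threshold:
                      kept[(i, j)] = tile.copy()  # block offset -> data
          return kept

      field = np.random.rand(256, 256)  # stand-in for simulation output
      selected = triage_blocks(field)
      print(f"kept {len(selected)} of {(256 // 32) ** 2} blocks")

    Real systems rank blocks by a science-driven importance measure rather than a fixed threshold, but the data-reduction effect is the same.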

  10. Precise rare earth analysis of geological materials

    SciTech Connect (OSTI)

    Laul, J.C.; Wogman, N.A.

    1982-01-01

    Rare earth element (REE) concentrations are very informative in revealing chemical fractionation processes in geological systems. The behavior of the REEs (La-Lu) is characteristic of the various primary and secondary minerals which comprise a rock. The REE contents and their patterns provide a strong fingerprint for distinguishing among various rock types and for understanding the partial melting and/or fractional crystallization of the source region. The REE contents in geological materials are usually at trace levels. To measure all the REEs at such levels, radiochemical neutron activation analysis (RNAA) has been used with a REE group separation scheme. To maximize detection sensitivities for individual REEs, selective gamma-ray/x-ray measurements have been made using normal Ge(Li) and low-energy photon detectors (LEPD), and Ge(Li)-NaI(Tl) coincidence-noncoincidence spectrometer systems. Using these detection methods an individual REE can be measured at or below the ppb level; chemical yields of the REEs are determined by reactivation.
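
    As a concrete illustration of how such REE concentrations are typically interpreted once measured, values are normalized to chondritic abundances so that fractionation shows up as a smooth pattern or as anomalies. A minimal sketch follows; the chondrite values are rounded literature numbers that should be checked against a reference, and the sample is hypothetical:

      from math import sqrt

      # Approximate CI chondrite abundances in ppm (rounded literature values).
      CHONDRITE_PPM = {"La": 0.237, "Ce": 0.613, "Sm": 0.148,
                       "Eu": 0.056, "Gd": 0.199, "Yb": 0.161}

      def chondrite_normalize(sample_ppm):
          """Return chondrite-normalized ratios for the elements present."""
          return {el: sample_ppm[el] / CHONDRITE_PPM[el]
                  for el in CHONDRITE_PPM if el in sample_ppm}

      sample = {"La": 30.0, "Ce": 60.0, "Sm": 5.5,
                "Eu": 1.1, "Gd": 5.0, "Yb": 2.0}  # hypothetical rock, ppm
      norm = chondrite_normalize(sample)

      # Europium anomaly, a classic fingerprint of feldspar fractionation:
      eu_anomaly = norm["Eu"] / sqrt(norm["Sm"] * norm["Gd"])
      print(norm, eu_anomaly)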

  11. REVEALING THE PHYSICAL PROPERTIES OF MOLECULAR GAS IN ORION WITH A LARGE-SCALE SURVEY IN J = 2-1 LINES OF {sup 12}CO, {sup 13}CO, AND C{sup 18}O

    SciTech Connect (OSTI)

    Nishimura, Atsushi; Tokuda, Kazuki; Kimura, Kimihiro; Muraoka, Kazuyuki; Maezawa, Hiroyuki; Ogawa, Hideo; Onishi, Toshikazu [Department of Physical Science, Graduate School of Science, Osaka Prefecture University, 1-1 Gakuen-cho, Naka-ku, Sakai, Osaka 599-8531 (Japan); Dobashi, Kazuhito; Shimoikura, Tomomi [Department of Astronomy and Earth Sciences, Tokyo Gakugei University, 4-1-1 Nukuikita-machi, Koganei, Tokyo 184-8501 (Japan); Mizuno, Akira [Solar-terrestrial Environment Laboratory, Nagoya University, Furo-cho, Chikusa-ku, Nagoya, Aichi 464-8601 (Japan); Fukui, Yasuo, E-mail: atsushi.nishimura@nao.ac.jp [Department of Physics and Astrophysics, Nagoya University, Furo-cho, Chikusa-ku, Nagoya, Aichi 464-8602 (Japan)

    2015-01-01

    We present fully sampled ~3' resolution images of {sup 12}CO(J=2-1), {sup 13}CO(J=2-1), and C{sup 18}O(J=2-1) emission taken with the newly developed 1.85 m millimeter-submillimeter telescope over the entire area of the Orion A and B giant molecular clouds. The data were compared with the J=1-0 data of {sup 12}CO, {sup 13}CO, and C{sup 18}O taken with the Nagoya 4 m telescope and the NANTEN telescope at the same angular resolution to derive the spatial distributions of the physical properties of the molecular gas. We explore the large velocity gradient formalism to determine the gas density and temperature using line combinations of {sup 12}CO(J=2-1), {sup 13}CO(J=2-1), and {sup 13}CO(J=1-0), assuming a uniform velocity gradient and CO abundance ratio. The derived gas density is in the range of 500 to 5000 cm{sup -3}, and the derived gas temperature is mostly in the range of 20 to 50 K along the cloud ridge, with a temperature gradient depending on the distance from the star-forming region. We found that the high-temperature region at the cloud edge faces the H II region, indicating that the molecular gas is interacting with the stellar wind and radiation from the massive stars. In addition, we compared the derived gas properties with the young stellar object distribution obtained with the Spitzer telescope to investigate the relationship between the gas properties and the star formation activity therein. We found that the gas density and star formation efficiency are positively correlated, indicating that stars form effectively in the dense gas region.
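
    The large velocity gradient (LVG) step described above amounts to finding the density and temperature whose modeled line ratios best match the observed ones. A minimal sketch of that grid search follows; the model function here is an illustrative placeholder, whereas a real analysis would tabulate ratios from an LVG radiative-transfer code:

      import numpy as np

      n_grid = np.logspace(2, 4, 50)    # density, cm^-3
      T_grid = np.linspace(10, 80, 50)  # kinetic temperature, K

      def model_ratios(n, T):
          """Placeholder for line ratios tabulated from an LVG code."""
          r1 = 5.0 * (T / 30.0) ** 0.3 * (n / 1e3) ** -0.1  # 12CO(2-1)/13CO(2-1)
          r2 = 0.8 * (T / 30.0) ** 0.5 * (n / 1e3) ** 0.2   # 13CO(2-1)/13CO(1-0)
          return r1, r2

      def fit_lvg(obs_r1, obs_r2):
          """Least-squares grid search for the best-fitting (n, T)."""
          best, best_chi2 = None, np.inf
          for n in n_grid:
              for T in T_grid:
                  r1, r2 = model_ratios(n, T)
                  chi2 = (r1 - obs_r1) ** 2 + (r2 - obs_r2) ** 2
                  if chi2 < best_chi2:
                      best, best_chi2 = (n, T), chi2
          return best

      print(fit_lvg(5.5, 0.9))  # hypothetical observed ratios for one pixel

    Repeating this fit pixel by pixel yields the density and temperature maps the abstract describes.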

  12. Subsurface exploration using bucket auger borings and down-hole geologic inspection

    SciTech Connect (OSTI)

    Scullin, C.M.

    1994-03-01

    The down-hole geologic inspection of 24 in. bucket auger borings has been a hands-on technique for collecting valuable geologic structural and lithologic detail in southern California investigations for over 35 yr. Although it has been used for all types of investigations for hillside urban development, it is of particular benefit in landslide investigations and evaluations. The benefits of down-hole geologic inspection during detailed mapping of large landslide complexes with multiple slide planes are discussed in this paper. Many of the geotechnical investigations of these massive landslide complexes have been very limited in their determinations of accurate landslide parameters and very deficient in proper engineering analysis based upon this limited data. This has resulted in many cases in which the geotechnical consultant erroneously concludes that ancient landslides do not move and that it is all right to build upon them, even though neither the landslide parameters nor the slope stability and safety have been justified. Because this author and the many consultants contacted during the preparation of this paper were not aware of other publications regarding this method of collecting detailed geologic data, safety considerations, safety equipment, costs, and the Cal OSHA requirements for entering exploration shafts are also included.

  13. Extreme Scale Visual Analytics

    SciTech Connect (OSTI)

    Steed, Chad A; Potok, Thomas E; Pullum, Laura L; Ramanathan, Arvind; Shipman, Galen M; Thornton, Peter E

    2013-01-01

    Given the scale and complexity of today's data, visual analytics is rapidly becoming a necessity rather than an option for comprehensive exploratory analysis. In this paper, we provide an overview of three applications of visual analytics for addressing the challenges of analyzing climate, text streams, and biosurveillance data. These systems feature varying levels of interaction and high performance computing technology integration to permit exploratory analysis of large and complex data of global significance.

  14. Geologic reconnaissance of natural fore-reef slope and a large submarine rockfall exposure, Enewetak Atoll

    SciTech Connect (OSTI)

    Halley, R.B.; Slater, R.A.

    1987-05-01

    In 1958 a submarine rockfall exposed a cross section through the reef and fore-reef deposits along the northwestern margin of Enewetak Atoll, Marshall Islands. Removal of more than 10{sup 8} MT of rock left a cirque-shaped submarine scarp 220 m high, extending back 190 m into the modern reef, and 1000 m along the reef trend. The scarp exposed older, steeply dipping beds below 220 m along which the rockfall detached. The authors sampled this exposure and the natural fore-reef slope surrounding it in 1984 and 1985 using a manned submersible. The natural slope in this area is characterized by three zones: (1) the reef plate, crest, and near fore reef that extends from sea level to -16 m, with a slope of less than 10°; (2) the bypass slope that extends from -16 to -275 m, with slopes of 55° decreasing to 35° near the base; and (3) a debris slope of less than 35° below -275 m. Vertical walls, grooves, and chutes, common on other fore-reef slopes, are sparse on the northwestern slope of Enewetak. The scarp exposes three stratigraphic units that are differentiated by surficial appearance: (1) a near-vertical wall from the reef crest to -76 m that appears rubbly, has occasional debris-covered ledges, and is composed mainly of coral; (2) a vertical to overhanging wall from -76 m to -220 m that is massive and fractured, and has smooth, blocky surfaces; and (3) inclined bedding below -220 m along which the slump block has fractured, exposing a dip slope of hard, dense, white limestone and dolomite that extends below -400 m. Caves occur in all three units. Open cement-lined fractures and voids layered with cements are most common in the middle unit, which now lies within the thermocline. Along the sides of the scarp are exposed fore-reef boulder beds dipping at 30° toward the open sea; the steeper (55°) dipping natural surface truncates these beds, which gives evidence of the erosional nature of the bypass slope.

  15. Environmental Resources of Selected Areas of Hawaii: Geological Hazards (DRAFT)

    SciTech Connect (OSTI)

    Staub, W.P.

    1994-06-01

    This report has been prepared to make available and archive the background scientific data and related information collected on geologic hazards during the preparation of the environmental impact statement (EIS) for Phases 3 and 4 of the Hawaii Geothermal Project (HGP) as defined by the state of Hawaii in its April 1989 proposal to Congress. The U.S. Department of Energy (DOE) published a notice in the Federal Register on May 17, 1994 (59 Fed. Regist. 25638) withdrawing its Notice of Intent (57 Fed. Regist. 5433) of February 14, 1992, to prepare the HGP-EIS. Since the state of Hawaii is no longer pursuing or planning to pursue the HGP, DOE considers the project to be terminated. This report presents a review of current information on geologic hazards in the Hawaiian Islands. Interrelationships among these hazards are discussed. Probabilities of occurrence of given geologic hazards are provided in various regions where sufficient geologic or historical data are available. Most of the information contained herein is compiled from recent U.S. Geological Survey (USGS) publications and open-file reports. This report describes the natural geologic hazards present in the area and does not represent an assessment of environmental impacts. Geologic hazards originate both onshore and offshore. Onshore geologic hazards such as volcanic eruptions, earthquakes, surface rupture, landslides, uplift, and subsidence occur mainly on the southern third of the island of Hawaii (hereinafter referred to as Hawaii). Offshore geologic hazards are more widely distributed throughout the Hawaiian Islands. Examples of offshore geologic hazards are submarine landslides, turbidity currents, and seismic sea waves (tsunamis). First, overviews of volcanic and earthquake activity and details of offshore geologic hazards are provided for the Hawaiian Islands. Then, a more detailed discussion of onshore geologic hazards is presented, with special emphasis on the southern third of Hawaii and the east rift zone of Kilauea.

  16. Establishing MICHCARB, a geological carbon sequestration research and education center for Michigan, implemented through the Michigan Geological Repository for Research and Education, part of the Department of Geosciences at Western Michigan University (Technical Report)

    Office of Scientific and Technical Information (OSTI)

  17. CMI Education Course Inventory: Geology Engineering/Geochemistry

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Of the six CMI Team members that are educational institutions, five offer courses in Geology. These are Colorado School of Mines, Iowa State University, Purdue University, University of California, Davis, and Rutgers University. The following links go to the class list on the CMI page for that school.

  18. Geologic Map and GIS Data for the Tuscarora Geothermal Area

    SciTech Connect (OSTI)

    Faulds, James E.

    2013-12-31

    Tuscarora ESRI Geodatabase (ArcGeology v1.3):
    - Contains all the geologic map data, including faults, contacts, folds, unit polygons, and attitudes of strata and faults.
    - List of stratigraphic units and stratigraphic correlation diagram.
    - Detailed unit descriptions of stratigraphic units.
    - Five cross-sections.
    - Locations of production, injection, and monitor wells.
    - 3D model constructed with EarthVision using geologic map data, cross-sections, drill-hole data, and geophysics (model not in the ESRI geodatabase).
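
    As an illustration of how such a geodatabase might be consumed programmatically, the sketch below reads two layers with geopandas (which can read ESRI file geodatabases through its GDAL-based backends). The layer and attribute names here are hypothetical; the actual names would follow the ArcGeology v1.3 schema:

      import geopandas as gpd

      GDB = "Tuscarora.gdb"  # path to the geodatabase (hypothetical)

      # Hypothetical layer names for the line and polygon feature classes.
      lines = gpd.read_file(GDB, layer="GeologicLines")  # faults, contacts, folds
      units = gpd.read_file(GDB, layer="MapUnitPolys")   # geologic unit polygons

      # Select mapped faults and report their total length (assumes a
      # projected CRS so lengths are in meters).
      faults = lines[lines["Type"].str.contains("fault", case=False, na=False)]
      print(f"{len(faults)} faults, {faults.geometry.length.sum():.0f} m mapped")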

  19. Site Characterization of Promising Geologic Formations for CO2 Storage

    Energy Savers [EERE]

    In September 2009, the U.S. Department of Energy announced the award of 11 projects with a total project value of $75.5 million to conduct site characterization of promising geologic formations for CO2 storage. These Recovery Act projects will increase our understanding of the potential for these formations to safely and

  20. Regional Geology: GIS Database for Alternative Host Rocks and Potential Siting Guidelines

    Energy Savers [EERE]

    The objective of this work is to develop a spatial database that integrates both geologic data for alternative host-rock formations and information that has been historically used for siting guidelines, both in the US and other countries. The Used Fuel Disposition Campaign