National Library of Energy BETA

Sample records for require expensive large-scale

  1. Large Scale Computing and Storage Requirements for Advanced Scientific...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Large Scale Computing and Storage Requirements for Advanced Scientific Computing Research: Target 2014 ASCRFrontcover.png Large Scale Computing and Storage Requirements for ...

  2. Large Scale Production Computing and Storage Requirements for...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Large Scale Production Computing and Storage Requirements for Fusion Energy Sciences: Target 2017 The NERSC Program Requirements Review "Large Scale Production Computing and ...

  3. Large Scale Computing and Storage Requirements for High Energy Physics

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Large Scale Computing and Storage Requirements for High Energy Physics HEPFrontcover.png Large Scale Computing and Storage Requirements for High Energy Physics An HEP / ASCR / NERSC Workshop November 12-13, 2009 Report Large Scale Computing and Storage Requirements for High Energy Physics, Report of the Joint HEP / ASCR / NERSC Workshop conducted Nov. 12-13, 2009 https://www.nersc.gov/assets/HPC-Requirements-for-Science/HEPFrontcover.png Goals This workshop was organized by the Department of

  4. Large Scale Computing and Storage Requirements for Advanced Scientific

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Computing Research: Target 2014 Large Scale Computing and Storage Requirements for Advanced Scientific Computing Research: Target 2014 ASCRFrontcover.png Large Scale Computing and Storage Requirements for Advanced Scientific Computing Research An ASCR / NERSC Review January 5-6, 2011 Final Report Large Scale Computing and Storage Requirements for Advanced Scientific Computing Research, Report of the Joint ASCR / NERSC Workshop conducted January 5-6, 2011 Goals This workshop is being

  5. Large Scale Computing and Storage Requirements for Basic Energy Sciences:

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Target 2014 Large Scale Computing and Storage Requirements for Basic Energy Sciences: Target 2014 BESFrontcover.png Final Report Large Scale Computing and Storage Requirements for Basic Energy Sciences, Report of the Joint BES/ ASCR / NERSC Workshop conducted February 9-10, 2010 Workshop Agenda The agenda for this workshop is presented here: including presentation times and speaker information. Read More » Workshop Presentations Large Scale Computing and Storage Requirements for Basic

  6. Harvey Wasserman! Large Scale Computing and Storage Requirements...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Large Scale Computing and Storage Requirements for High Energy Physics Research: Target 2017 ... www.nersc.gov/science/requirements/HEP ...

  7. Large Scale Production Computing and Storage Requirements for...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Large Scale Production Computing and Storage Requirements for High Energy Physics: Target 2017 ... Energy's Office of High Energy Physics (HEP), Office of Advanced Scientific ...

  8. Large Scale Production Computing and Storage Requirements for Fusion Energy

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Sciences: Target 2017 Large Scale Production Computing and Storage Requirements for Fusion Energy Sciences: Target 2017 The NERSC Program Requirements Review "Large Scale Production Computing and Storage Requirements for Fusion Energy Sciences" is organized by the Department of Energy's Office of Fusion Energy Sciences (FES), Office of Advanced Scientific Computing Research (ASCR), and the National Energy Research Scientific Computing Center (NERSC). The review's goal is to

  9. Large Scale Production Computing and Storage Requirements for High Energy

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Physics: Target 2017 Large Scale Production Computing and Storage Requirements for High Energy Physics: Target 2017 HEPlogo.jpg The NERSC Program Requirements Review "Large Scale Computing and Storage Requirements for High Energy Physics" is organized by the Department of Energy's Office of High Energy Physics (HEP), Office of Advanced Scientific Computing Research (ASCR), and the National Energy Research Scientific Computing Center (NERSC). The review's goal is to characterize

  10. Large Scale Computing and Storage Requirements for Biological and

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Environmental Research: Target 2014 Large Scale Computing and Storage Requirements for Biological and Environmental Research: Target 2014 BERFrontcover.png A BER / ASCR / NERSC Workshop May 7-8, 2009 Final Report Large Scale Computing and Storage Requirements for Biological and Environmental Research, Report of the Joint BER / NERSC Workshop Conducted May 7-8, 2009 Rockville, MD Goals This workshop was jointly organized by the Department of Energy's Office of Biological & Environmental

  11. Large Scale Computing and Storage Requirements for Nuclear Physics: Target

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    2014 Large Scale Computing and Storage Requirements for Nuclear Physics: Target 2014 NPFrontcover.png May 26-27, 2011 Hyatt Regency Bethesda One Bethesda Metro Center (7400 Wisconsin Ave) Bethesda, Maryland, USA 20814 Final Report Large Scale Computing and Storage Requirements for Nuclear Physics Research, Report of the Joint NP / NERSC Workshop Conducted May 26-27, 2011 Bethesda, MD Sponsored by the U.S. Department of Energy Office of Science, Office of Advanced Scientific Computing

  12. Large Scale Computing and Storage Requirements for Fusion Energy Sciences:

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Target 2014 Large Scale Computing and Storage Requirements for Fusion Energy Sciences: Target 2014 FESFrontcover.png An FES / ASCR / NERSC Workshop August 3-4, 2010 Final Report Large

  13. Large Scale Production Computing and Storage Requirements for Advanced

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Scientific Computing Research: Target 2017 Large Scale Production Computing and Storage Requirements for Advanced Scientific Computing Research: Target 2017 ASCRLogo.png This is an invitation-only review organized by the Department of Energy's Office of Advanced Scientific Computing Research (ASCR) and NERSC. The general goal is to determine production high-performance computing, storage, and services that will be needed for ASCR to achieve its science goals through 2017. A specific focus

  14. Large Scale Production Computing and Storage Requirements for Basic Energy

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Sciences: Target 2017 Large Scale Production Computing and Storage Requirements for Basic Energy Sciences: Target 2017 BES-Montage.png This is an invitation-only review organized by the Department of Energy's Office of Basic Energy Sciences (BES), Office of Advanced Scientific Computing Research (ASCR), and the National Energy Research Scientific Computing Center (NERSC). The goal is to determine production high-performance computing, storage, and services that will be needed for BES to

  15. Large Scale Production Computing and Storage Requirements for Biological

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    and Environmental Research: Target 2017 Large Scale Production Computing and Storage Requirements for Biological and Environmental Research: Target 2017 BERmontage.gif September 11-12, 2012 Hilton Rockville Hotel and Executive Meeting Center 1750 Rockville Pike Rockville, MD, 20852-1699 TEL: 1-301-468-1100 Sponsored by: U.S. Department of Energy Office of Science Office of Advanced Scientific Computing Research (ASCR) Office of Biological and Environmental Research (BER) National Energy

  16. Large Scale Production Computing and Storage Requirements for Nuclear

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Physics: Target 2017 Large Scale Production Computing and Storage Requirements for Nuclear Physics: Target 2017 NPicon.png This invitation-only review is organized by the Department of Energy's Offices of Nuclear Physics (NP) and Advanced Scientific Computing Research (ASCR) and by NERSC. The goal is to determine production high-performance computing, storage, and services that will be needed for NP to achieve its science goals through 2017. The review brings together DOE Program Managers,

  17. Large Scale Computing and Storage Requirements for Nuclear Physics Research

    SciTech Connect (OSTI)

    Gerber, Richard A.; Wasserman, Harvey J.

    2012-03-02

    The National Energy Research Scientific Computing Center (NERSC) is the primary computing center for the DOE Office of Science, serving approximately 4,000 users and hosting some 550 projects that involve nearly 700 codes for a wide variety of scientific disciplines. In addition to large-scale computing resources NERSC provides critical staff support and expertise to help scientists make the most efficient use of these resources to advance the scientific mission of the Office of Science. In May 2011, NERSC, DOE’s Office of Advanced Scientific Computing Research (ASCR) and DOE’s Office of Nuclear Physics (NP) held a workshop to characterize HPC requirements for NP research over the next three to five years. The effort is part of NERSC’s continuing involvement in anticipating future user needs and deploying necessary resources to meet these demands. The workshop revealed several key requirements, in addition to achieving its goal of characterizing NP computing. The key requirements include: 1. Larger allocations of computational resources at NERSC; 2. Visualization and analytics support; and 3. Support at NERSC for the unique needs of experimental nuclear physicists. This report expands upon these key points and adds others. The results are based upon representative samples, called “case studies,” of the needs of science teams within NP. The case studies were prepared by NP workshop participants and contain a summary of science goals, methods of solution, current and future computing requirements, and special software and support needs. Participants were also asked to describe their strategy for computing in the highly parallel, “multi-core” environment that is expected to dominate HPC architectures over the next few years. The report also includes a section with NERSC responses to the workshop findings. NERSC has many initiatives already underway that address key workshop findings and all of the action items are aligned with NERSC strategic plans.

  18. Large Scale Computing and Storage Requirements for High Energy Physics

    SciTech Connect (OSTI)

    Gerber, Richard A.; Wasserman, Harvey

    2010-11-24

    The National Energy Research Scientific Computing Center (NERSC) is the leading scientific computing facility for the Department of Energy's Office of Science, providing high-performance computing (HPC) resources to more than 3,000 researchers working on about 400 projects. NERSC provides large-scale computing resources and, crucially, the support and expertise needed for scientists to make effective use of them. In November 2009, NERSC, DOE's Office of Advanced Scientific Computing Research (ASCR), and DOE's Office of High Energy Physics (HEP) held a workshop to characterize the HPC resources needed at NERSC to support HEP research through the next three to five years. The effort is part of NERSC's legacy of anticipating users' needs and deploying resources to meet those demands. The workshop revealed several key points, in addition to achieving its goal of collecting and characterizing computing requirements. The chief findings: (1) Science teams need access to a significant increase in computational resources to meet their research goals; (2) Research teams need to be able to read, write, transfer, store online, archive, analyze, and share huge volumes of data; (3) Science teams need guidance and support to implement their codes on future architectures; and (4) Projects need predictable, rapid turnaround of their computational jobs to meet mission-critical time constraints. This report expands upon these key points and includes others. It also presents a number of case studies as representative of the research conducted within HEP. Workshop participants were asked to codify their requirements in this case study format, summarizing their science goals, methods of solution, current and three-to-five year computing requirements, and software and support needs. Participants were also asked to describe their strategy for computing in the highly parallel, multi-core environment that is expected to dominate HPC architectures over the next few years.

  19. Large Scale Production Computing and Storage Requirements for...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Requirements for Advanced Scientific Computing Research: Target 2017 ASCRLogo.png This is an invitation-only review organized by the Department of Energy's Office of Advanced ...

  20. Large Scale Computing and Storage Requirements for Basic Energy Sciences Research

    SciTech Connect (OSTI)

    Gerber, Richard; Wasserman, Harvey

    2011-03-31

    The National Energy Research Scientific Computing Center (NERSC) is the leading scientific computing facility supporting research within the Department of Energy's Office of Science. NERSC provides high-performance computing (HPC) resources to approximately 4,000 researchers working on about 400 projects. In addition to hosting large-scale computing facilities, NERSC provides the support and expertise scientists need to effectively and efficiently use HPC systems. In February 2010, NERSC, DOE's Office of Advanced Scientific Computing Research (ASCR) and DOE's Office of Basic Energy Sciences (BES) held a workshop to characterize HPC requirements for BES research through 2013. The workshop was part of NERSC's legacy of anticipating users' future needs and deploying the necessary resources to meet these demands. Workshop participants reached a consensus on several key findings, in addition to achieving the workshop's goal of collecting and characterizing computing requirements. The key requirements for scientists conducting research in BES are: (1) Larger allocations of computational resources; (2) Continued support for standard application software packages; (3) Adequate job turnaround time and throughput; and (4) Guidance and support for using future computer architectures. This report expands upon these key points and presents others. Several 'case studies' are included as significant representative samples of the needs of science teams within BES. Research teams' scientific goals, computational methods of solution, current and 2013 computing requirements, and special software and support needs are summarized in these case studies. Also included are researchers' strategies for computing in the highly parallel, 'multi-core' environment that is expected to dominate HPC architectures over the next few years. NERSC has strategic plans and initiatives already underway that address key workshop findings. This report includes a brief summary of those relevant to issues

  1. QCD Thermodynamics at High Temperature Peter Petreczky Large Scale Computing and Storage Requirements for Nuclear Physics (NP),

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    QCD Thermodynamics at High Temperature Peter Petreczky Large Scale Computing and Storage Requirements for Nuclear Physics (NP), Bethesda MD, April 29-30, 2014 NY Center for Computational Science Defining questions of nuclear physics research in US: Nuclear Science Advisory Committee (NSAC) "The Frontiers of Nuclear Science", 2007 Long Range Plan "What are the phases of strongly interacting matter and what roles do they play in the cosmos?" "What does QCD predict for

  2. Impacts of Array Configuration on Land-Use Requirements for Large-Scale Photovoltaic Deployment in the United States: Preprint

    SciTech Connect (OSTI)

    Denholm, P.; Margolis, R. M.

    2008-05-01

    Land use is often cited as an important issue for renewable energy technologies. In this paper we examine the relationship between land-use requirements for large-scale photovoltaic (PV) deployment in the U.S. and PV-array configuration. We estimate the per capita land requirements for solar PV and find that array configuration is a stronger driver of energy density than regional variations in solar insolation. When deployed horizontally, the PV land area needed to meet 100% of an average U.S. citizen's electricity demand is about 100 m2. This requirement roughly doubles to about 200 m2 when using 1-axis tracking arrays. By comparing these total land-use requirements with other current per capita land uses, we find that land-use requirements of solar photovoltaics are modest, especially when considering the availability of zero impact 'land' on rooftops. Additional work is needed to examine the tradeoffs between array spacing, self-shading losses, and land use, along with possible techniques to mitigate land-use impacts of large-scale PV deployment.
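The per-capita figures in this abstract follow from a simple energy-balance calculation. The sketch below mirrors the shape of that arithmetic; every input (per-capita demand, insolation, module efficiency, performance ratio, ground cover ratio) is an illustrative assumption, not a value taken from the paper, so the result only lands in the same order of magnitude as the reported ~100 m2.

```python
# Rough per-capita PV land-use estimate. All numeric defaults below are
# illustrative assumptions for a back-of-envelope check, not the paper's inputs.
def pv_land_per_capita(demand_kwh_yr=13_600,        # assumed U.S. per-capita electricity use
                       insolation_kwh_m2_yr=1_640,  # assumed average horizontal insolation
                       module_efficiency=0.13,      # assumed module efficiency (c. 2008)
                       performance_ratio=0.77,      # assumed system losses
                       ground_cover_ratio=1.0):     # 1.0 = flat horizontal; ~0.5 for spaced 1-axis rows
    """Square meters of land per person to meet annual electricity demand."""
    yield_kwh_m2 = insolation_kwh_m2_yr * module_efficiency * performance_ratio
    module_area_m2 = demand_kwh_yr / yield_kwh_m2
    return module_area_m2 / ground_cover_ratio

horizontal_m2 = pv_land_per_capita()                      # on the order of 100 m2
tracking_m2 = pv_land_per_capita(ground_cover_ratio=0.5)  # row spacing roughly doubles it
```

The ground-cover-ratio term is what the abstract's horizontal-vs-tracking comparison turns on: tracking arrays need inter-row spacing to avoid self-shading, so module area stays similar while land area roughly doubles.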

  3. Large Scale Computing and Storage Requirements for Biological and Environmental Research

    SciTech Connect (OSTI)

    DOE Office of Science, Biological and Environmental Research Program Office

    2009-09-30

    In May 2009, NERSC, DOE's Office of Advanced Scientific Computing Research (ASCR), and DOE's Office of Biological and Environmental Research (BER) held a workshop to characterize HPC requirements for BER-funded research over the subsequent three to five years. The workshop revealed several key points, in addition to achieving its goal of collecting and characterizing computing requirements. Chief among them: scientific progress in BER-funded research is limited by current allocations of computational resources. Additionally, growth in mission-critical computing -- combined with new requirements for collaborative data manipulation and analysis -- will demand ever increasing computing, storage, network, visualization, reliability and service richness from NERSC. This report expands upon these key points and adds others. It also presents a number of "case studies" as significant representative samples of the needs of science teams within BER. Workshop participants were asked to codify their requirements in this "case study" format, summarizing their science goals, methods of solution, current and 3-5 year computing requirements, and special software and support needs. Participants were also asked to describe their strategy for computing in the highly parallel, "multi-core" environment that is expected to dominate HPC architectures over the next few years.

  4. Large Scale Computing and Storage Requirements for Fusion Energy Sciences: Target 2017

    SciTech Connect (OSTI)

    Gerber, Richard

    2014-05-02

    The National Energy Research Scientific Computing Center (NERSC) is the primary computing center for the DOE Office of Science, serving approximately 4,500 users working on some 650 projects that involve nearly 600 codes in a wide variety of scientific disciplines. In March 2013, NERSC, DOE's Office of Advanced Scientific Computing Research (ASCR) and DOE's Office of Fusion Energy Sciences (FES) held a review to characterize High Performance Computing (HPC) and storage requirements for FES research through 2017. This report is the result.

  5. Running Large Scale Jobs

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Running Large Scale Jobs Running Large Scale Jobs Users face various challenges with running and scaling large scale jobs on peta-scale production systems. For example, certain applications may not have enough memory per core, the default environment variables may need to be adjusted, or I/O may dominate run time. This page lists some available programming and run time tuning options and tips users can try on their large scale applications on Hopper for better performance. Try different compilers

  6. Running Large Scale Jobs

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    try on their large scale applications on Hopper for better performance. Try different compilers and compiler options The available compilers on Hopper are PGI, Cray, Intel, GNU,...

  7. Large scale tracking algorithms.

    SciTech Connect (OSTI)

    Hansen, Ross L.; Love, Joshua Alan; Melgaard, David Kennett; Karelitz, David B.; Pitts, Todd Alan; Zollweg, Joshua David; Anderson, Dylan Z.; Nandy, Prabal; Whitlow, Gary L.; Bender, Daniel A.; Byrne, Raymond Harry

    2015-01-01

    Low signal-to-noise data processing algorithms for improved detection, tracking, discrimination and situational threat assessment are a key research challenge. As sensor technologies progress, the number of pixels will increase significantly. This will result in increased resolution, which could improve object discrimination, but unfortunately, will also result in a significant increase in the number of potential targets to track. Many tracking techniques, like multi-hypothesis trackers, suffer from a combinatorial explosion as the number of potential targets increases. As the resolution increases, the phenomenology applied towards detection algorithms also changes. For low resolution sensors, "blob" tracking is the norm. For higher resolution data, additional information may be employed in the detection and classification steps. The most challenging scenarios are those where the targets cannot be fully resolved, yet must be tracked and distinguished from neighboring closely spaced objects. Tracking vehicles in an urban environment is an example of such a challenging scenario. This report evaluates several potential tracking algorithms for large-scale tracking in an urban environment.
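The combinatorial explosion the abstract attributes to multi-hypothesis trackers can be made concrete with a small counting sketch. This is a generic illustration, not code from the report: if each of n active tracks must be matched to a distinct one of m candidate detections in a frame, the number of association hypotheses is the partial permutation P(m, n) = m! / (m - n)!.

```python
from math import perm

# Hypotheses for hard one-to-one association of n tracks to m detections.
# P(m, n) grows factorially -- the "combinatorial explosion" in the abstract.
def association_hypotheses(n_tracks, m_detections):
    return perm(m_detections, n_tracks)

print(association_hypotheses(5, 5))    # 120
print(association_hypotheses(10, 10))  # 3628800 -- already impractical to enumerate per frame
```

Real multi-hypothesis trackers prune and gate this hypothesis tree, but the underlying growth rate is why dense urban scenes with many closely spaced objects are singled out as the hard case.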

  8. Large-Scale Federal Renewable Energy Projects | Department of Energy

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Large-Scale Federal Renewable Energy Projects Large-Scale Federal Renewable Energy Projects Renewable energy projects larger than 10 megawatts (MW), also known as utility-scale projects, are complex and typically require private-sector financing. The Federal Energy Management Program (FEMP) developed a guide to help federal agencies, and the developers and financiers that work with them, to successfully install these projects at federal facilities. FEMP's Large-Scale Renewable Energy Guide,

  9. Large-Scale Information Systems

    SciTech Connect (OSTI)

    D. M. Nicol; H. R. Ammerlahn; M. E. Goldsby; M. M. Johnson; D. E. Rhodes; A. S. Yoshimura

    2000-12-01

    Large enterprises are ever more dependent on their Large-Scale Information Systems (LSIS), computer systems that are distinguished architecturally by distributed components--data sources, networks, computing engines, simulations, human-in-the-loop control and remote access stations. These systems provide such capabilities as workflow, data fusion and distributed database access. The Nuclear Weapons Complex (NWC) contains many examples of LSIS components, a fact that motivates this research. However, most LSIS in use grew up from collections of separate subsystems that were not designed to be components of an integrated system. For this reason, they are often difficult to analyze and control. The problem is made more difficult by the size of a typical system, its diversity of information sources, and the institutional complexities associated with its geographic distribution across the enterprise. Moreover, there is no integrated approach for analyzing or managing such systems. Indeed, integrated development of LSIS is an active area of academic research. This work developed such an approach by simulating the various components of the LSIS and allowing the simulated components to interact with real LSIS subsystems. This research demonstrated two benefits. First, applying it to a particular LSIS provided a thorough understanding of the interfaces between the system's components. Second, it demonstrated how more rapid and detailed answers could be obtained to questions significant to the enterprise by interacting with the relevant LSIS subsystems through simulated components designed with those questions in mind. In a final, added phase of the project, investigations were made on extending this research to wireless communication networks in support of telemetry applications.

  10. Large-Scale Renewable Energy Guide Webinar

    Broader source: Energy.gov [DOE]

    Webinar introduces the "Large Scale Renewable Energy Guide." The webinar will provide an overview of this important FEMP guide, which describes FEMP's approach to large-scale renewable energy projects and provides guidance to Federal agencies and the private sector on how to develop a common process for large-scale renewable projects.

  11. Measuring and tuning energy efficiency on large scale high performance computing platforms.

    SciTech Connect (OSTI)

    Laros, James H., III

    2011-08-01

    Recognition of the importance of power in the field of High Performance Computing, whether it be as an obstacle, expense or design consideration, has never been greater and more pervasive. While research has been conducted on many related aspects, there is a stark absence of work focused on large scale High Performance Computing. Part of the reason is the lack of measurement capability currently available on small or large platforms. Typically, research is conducted using coarse methods of measurement such as inserting a power meter between the power source and the platform, or fine grained measurements using custom instrumented boards (with obvious limitations in scale). To collect the measurements necessary to analyze real scientific computing applications at large scale, an in-situ measurement capability must exist on a large scale capability class platform. In response to this challenge, we exploit the unique power measurement capabilities of the Cray XT architecture to gain an understanding of power use and the effects of tuning. We apply these capabilities at the operating system level by deterministically halting cores when idle. At the application level, we gain an understanding of the power requirements of a range of important DOE/NNSA production scientific computing applications running at large scale (thousands of nodes), while simultaneously collecting current and voltage measurements on the hosting nodes. We examine the effects of both CPU and network bandwidth tuning and demonstrate energy savings opportunities of up to 39% with little or no impact on run-time performance. Capturing scale effects in our experimental results was key. Our results provide strong evidence that next generation large-scale platforms should not only approach CPU frequency scaling differently, but could also benefit from the capability to tune other platform components, such as the network, to achieve energy efficient performance.
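The in-situ current and voltage sampling described in this abstract reduces, for energy accounting, to a discrete integration of power over the run. The sketch below is a hypothetical minimal version of that bookkeeping, not the Cray XT measurement interface or Sandia's tooling; the sample values are invented so that the arithmetic lands on the report's best-case ~39% saving.

```python
# Energy from node-level power samples via the rectangle rule: E = sum(V * I) * dt,
# assuming (voltage, current) pairs are sampled every dt_s seconds.
# Hypothetical sketch only -- not the Cray XT measurement API.
def energy_joules(samples, dt_s):
    return sum(v * i for v, i in samples) * dt_s

baseline = energy_joules([(12.0, 20.0)] * 600, 1.0)  # 600 s run at a steady 240 W
tuned    = energy_joules([(12.0, 12.2)] * 600, 1.0)  # same run length with tuning applied
savings  = 1.0 - tuned / baseline                    # fractional energy saved, ~0.39
```

In practice the per-sample products would come from thousands of nodes and the interesting comparisons are between tuned configurations (CPU frequency, network bandwidth) at equal run time, which is exactly the equal-duration comparison the division above assumes.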

  12. Batteries for Large Scale Energy Storage

    SciTech Connect (OSTI)

    Soloveichik, Grigorii L.

    2011-07-15

    In recent years, with the deployment of renewable energy sources, advances in electrified transportation, and development in smart grids, the markets for large-scale stationary energy storage have grown rapidly. Electrochemical energy storage methods are strong candidate solutions due to their high energy density, flexibility, and scalability. This review provides an overview of mature and emerging technologies for secondary and redox flow batteries. New developments in the chemistry of secondary and flow batteries as well as regenerative fuel cells are also considered. Advantages and disadvantages of current and prospective electrochemical energy storage options are discussed. The most promising technologies in the short term are high-temperature sodium batteries with β”-alumina electrolyte, lithium-ion batteries, and flow batteries. Regenerative fuel cells and lithium metal batteries with high energy density require further research to become practical.

  13. Large-Scale Liquid Hydrogen Handling Equipment

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    8, 2007 Jerry Gillette Large-Scale Liquid Hydrogen Handling Equipment Hydrogen Delivery Analysis Meeting Argonne National Laboratory Some Delivery Pathways Will Necessitate the Use of Large-Scale Liquid Hydrogen Handling Equipment. Potential Scenarios include: production plant shutdowns, summer-peak storage. Equipment Needs include: storage tanks, liquid pumps, vaporizers, ancillaries. Concern is that Scaling up from Small Units Could Significantly Underestimate Costs of Larger

  14. Large-Scale PCA for Climate

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Large-Scale PCA for Climate Large-Scale PCA for Climate The most widely used tool for extracting important patterns from the measurements of atmospheric and oceanic variables is the Empirical Orthogonal Function (EOF) technique. EOFs are popular because of their simplicity and their ability to reduce the dimensionality of large nonlinear, high-dimensional systems into fewer dimensions while preserving the most important patterns of variations in the measurements. Because EOFs are a particular
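Numerically, EOF analysis as described here is a PCA of the space-time data matrix, and the standard small-scale formulation is a singular value decomposition of the anomaly field. Below is a generic NumPy sketch of that formulation (not NERSC's large-scale implementation): rows are time samples, columns are spatial grid points.

```python
import numpy as np

# Standard EOF/PCA via SVD of the anomaly matrix. Generic textbook
# formulation, not the large-scale climate code this page describes.
def eofs(field, n_modes=3):
    """field: (n_times, n_gridpoints) array. Returns spatial patterns,
    principal-component time series, and fraction of variance per mode."""
    anomalies = field - field.mean(axis=0)            # remove the time mean at each point
    u, s, vt = np.linalg.svd(anomalies, full_matrices=False)
    patterns = vt[:n_modes]                           # EOF spatial patterns (rows)
    pcs = u[:, :n_modes] * s[:n_modes]                # PC time series
    variance_fraction = (s**2 / np.sum(s**2))[:n_modes]
    return patterns, pcs, variance_fraction
```

The dimensionality reduction the page mentions corresponds to keeping only the leading modes: for large nonlinear fields, a handful of EOFs often captures most of the variance. At climate-data scale the dense SVD above becomes the bottleneck, which is the scaling problem "Large-Scale PCA" methods target.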

  15. Large-Scale Computational Fluid Dynamics

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Large-Scale Computational Fluid Dynamics - Sandia Energy ...

  16. Large-Scale Renewable Energy Guide | Department of Energy

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Large-Scale Renewable Energy Guide Large-Scale Renewable Energy Guide Presentation covers the Large-scale RE Guide: Developing Renewable Energy Projects Larger than 10 MWs at...

  17. Relic vector field and CMB large scale anomalies

    SciTech Connect (OSTI)

    Chen, Xingang; Wang, Yi E-mail: yw366@cam.ac.uk

    2014-10-01

    We study the most general effects of relic vector fields on the inflationary background and density perturbations. Such effects are observable if the number of inflationary e-folds is close to the minimum requirement to solve the horizon problem. We show that this can potentially explain two CMB large scale anomalies: the quadrupole-octopole alignment and the quadrupole power suppression. We discuss its effect on the parity anomaly. We also provide an analytical template for more detailed data comparison.

  18. Determination of Large-Scale Cloud Ice Water Concentration by...

    Office of Scientific and Technical Information (OSTI)

    Technical Report: Determination of Large-Scale Cloud Ice Water Concentration by Combining ... Title: Determination of Large-Scale Cloud Ice Water Concentration by Combining Surface ...

  19. Large-Scale Renewable Energy Guide: Developing Renewable Energy...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Large-Scale Renewable Energy Guide: Developing Renewable Energy Projects Larger Than 10 MWs at Federal Facilities Large-Scale Renewable Energy Guide: Developing Renewable Energy ...

  20. Large-Scale Residential Energy Efficiency Programs Based on CFLs...

    Open Energy Info (EERE)

    Large-Scale Residential Energy Efficiency Programs Based on CFLs Jump to: navigation, search Tool Summary LAUNCH TOOL Name: Large-Scale Residential Energy Efficiency Programs Based...

  1. The Effective Field Theory of Cosmological Large Scale Structures...

    Office of Scientific and Technical Information (OSTI)

    The Effective Field Theory of Cosmological Large Scale Structures Citation Details In-Document Search Title: The Effective Field Theory of Cosmological Large Scale Structures...

  2. Large-Scale Manufacturing of Nanoparticle-Based Lubrication Additives...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Large-Scale Manufacturing of Nanoparticle-Based Lubrication Additives Large-Scale Manufacturing of Nanoparticle-Based Lubrication Additives PDF icon nanoparticulate-basedlubricati...

  3. Creating Large Scale Database Servers (Technical Report) | SciTech...

    Office of Scientific and Technical Information (OSTI)

    Creating Large Scale Database Servers Citation Details In-Document Search Title: Creating Large Scale Database Servers The BaBar experiment at the Stanford Linear Accelerator ...

  4. Rapid Software Prototyping Into Large Scale Control Systems ...

    Office of Scientific and Technical Information (OSTI)

    Rapid Software Prototyping Into Large Scale Control Systems Citation Details In-Document Search Title: Rapid Software Prototyping Into Large Scale Control Systems Authors: Fishler, ...

  5. DLFM library tools for large scale dynamic applications

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    applications DLFM library tools for large scale dynamic applications Large scale Python and other dynamic applications may spend a long time at startup. The DLFM library,...

  6. ACCOLADES: A Scalable Workflow Framework for Large-Scale Simulation...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    ACCOLADES: A Scalable Workflow Framework for Large-Scale Simulation and Analyses of Automotive Engines Title ACCOLADES: A Scalable Workflow Framework for Large-Scale Simulation and...

  7. Large-Scale PV Integration Study

    SciTech Connect (OSTI)

    Lu, Shuai; Etingov, Pavel V.; Diao, Ruisheng; Ma, Jian; Samaan, Nader A.; Makarov, Yuri V.; Guo, Xinxin; Hafen, Ryan P.; Jin, Chunlian; Kirkham, Harold; Shlatz, Eugene; Frantzis, Lisa; McClive, Timothy; Karlson, Gregory; Acharya, Dhruv; Ellis, Abraham; Stein, Joshua; Hansen, Clifford; Chadliev, Vladimir; Smart, Michael; Salgo, Richard; Sorensen, Rahn; Allen, Barbara; Idelchik, Boris

    2011-07-29

    This research effort evaluates the impact of large-scale photovoltaic (PV) and distributed generation (DG) output on NV Energy’s electric grid system in southern Nevada. It analyzes the ability of NV Energy’s generation to accommodate increasing amounts of utility-scale PV and DG, and the resulting cost of integrating variable renewable resources. The study was jointly funded by the United States Department of Energy and NV Energy, and conducted by a project team comprised of industry experts and research scientists from Navigant Consulting Inc., Sandia National Laboratories, Pacific Northwest National Laboratory and NV Energy.

  8. (Sparsity in large scale scientific computation)

    SciTech Connect (OSTI)

    Ng, E.G.

    1990-08-20

    The traveler attended a conference organized by the 1990 IBM Europe Institute at Oberlech, Austria. The theme of the conference was sparsity in large scale scientific computation. The conference featured many presentations and other activities of direct interest to ORNL research programs on sparse matrix computations and parallel computing, which are funded by the Applied Mathematical Sciences Subprogram of the DOE Office of Energy Research. The traveler presented a talk on his work at ORNL on the development of efficient algorithms for solving sparse nonsymmetric systems of linear equations. The traveler held numerous technical discussions on issues having direct relevance to the research programs on sparse matrix computations and parallel computing at ORNL.

  9. Supporting large-scale computational science

    SciTech Connect (OSTI)

    Musick, R., LLNL

    1998-02-19

    Business needs have driven the development of commercial database systems since their inception. As a result, there has been a strong focus on supporting many users, minimizing the potential corruption or loss of data, and maximizing performance metrics like transactions per second, or TPC-C and TPC-D results. It turns out that these optimizations have little to do with the needs of the scientific community, and in particular have little impact on improving the management and use of large-scale high-dimensional data. At the same time, there is an unanswered need in the scientific community for many of the benefits offered by a robust DBMS. For example, tying an ad-hoc query language such as SQL together with a visualization toolkit would be a powerful enhancement to current capabilities. Unfortunately, there has been little emphasis or discussion in the VLDB community on this mismatch over the last decade. The goal of the paper is to identify the specific issues that need to be resolved before large-scale scientific applications can make use of DBMS products. This topic is addressed in the context of an evaluation of commercial DBMS technology applied to the exploration of data generated by the Department of Energy's Accelerated Strategic Computing Initiative (ASCI). The paper describes the data being generated for ASCI as well as current capabilities for interacting with and exploring this data. The attraction of applying standard DBMS technology to this domain is discussed, as well as the technical and business issues that currently make this an infeasible solution.
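The kind of ad-hoc SQL exploration the abstract advocates can be sketched with Python's built-in sqlite3 module. The schema (step, zone, temperature) and the query are hypothetical stand-ins for scientific simulation output, not the ASCI data model.

```python
import sqlite3

# Toy stand-in for exploring simulation output with ad-hoc SQL.
# The schema (step, zone, temperature) is purely illustrative.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE cells (step INTEGER, zone TEXT, temperature REAL)")
rows = [(s, z, 300.0 + 10 * s + (5 if z == "core" else 0))
        for s in range(5) for z in ("core", "edge")]
conn.executemany("INSERT INTO cells VALUES (?, ?, ?)", rows)

# An ad-hoc question a scientist might pose mid-exploration:
# which zones ever exceed 330 K, and at which step do they first do so?
query = """
    SELECT zone, MIN(step) AS first_step
    FROM cells
    WHERE temperature > 330
    GROUP BY zone
    ORDER BY zone
"""
for zone, first_step in conn.execute(query):
    print(zone, first_step)
```

Running this prints `core 3` and `edge 4`; the point is that such questions can be posed and revised interactively, without writing a new analysis program each time.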

  10. Stimulated forward Raman scattering in large scale-length laser...

    Office of Scientific and Technical Information (OSTI)

    in large scale-length laser-produced plasmas Citation Details In-Document Search Title: Stimulated forward Raman scattering in large scale-length laser-produced plasmas You ...

  11. Locations of Smart Grid Demonstration and Large-Scale Energy...

    Office of Environmental Management (EM)

    Locations of Smart Grid Demonstration and Large-Scale Energy Storage Projects Locations of Smart Grid Demonstration and Large-Scale Energy Storage Projects Map of the United States ...

  12. SimFS: A Large Scale Parallel File System Simulator

    Energy Science and Technology Software Center (OSTI)

    2011-08-30

    The software provides both framework and tools to simulate a large-scale parallel file system such as Lustre.

  13. DLFM library tools for large scale dynamic applications

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    DLFM library tools for large scale dynamic applications DLFM library tools for large scale dynamic applications Large scale Python and other dynamic applications may spend a long time at startup. The DLFM library, developed by Mike Davis at Cray, Inc., is a set of functions that can be incorporated into a dynamically-linked application to provide improved performance during the loading of dynamic libraries when running the application at large scale on Edison. To access this library, do module

  14. Sensitivity technologies for large scale simulation.

    SciTech Connect (OSTI)

    Collis, Samuel Scott; Bartlett, Roscoe Ainsworth; Smith, Thomas Michael; Heinkenschloss, Matthias; Wilcox, Lucas C.; Hill, Judith C.; Ghattas, Omar; Berggren, Martin Olof; Akcelik, Volkan; Ober, Curtis Curry; van Bloemen Waanders, Bart Gustaaf; Keiter, Eric Richard

    2005-01-01

    Sensitivity analysis is critically important to numerous analysis algorithms, including large scale optimization, uncertainty quantification, reduced order modeling, and error estimation. Our research focused on developing tools, algorithms and standard interfaces to facilitate the implementation of sensitivity-type analysis into existing code and, equally important, on ways to increase the visibility of sensitivity analysis. We attempt to accomplish the first objective through the development of hybrid automatic differentiation tools, standard linear algebra interfaces for numerical algorithms, time domain decomposition algorithms and two-level Newton methods. We attempt to accomplish the second goal by presenting the results of several case studies in which direct sensitivities and adjoint methods have been effectively applied, in addition to an investigation of h-p adaptivity using adjoint-based a posteriori error estimation. A mathematical overview is provided of direct sensitivities and adjoint methods for both steady state and transient simulations. Two case studies are presented to demonstrate the utility of these methods. A direct sensitivity method is implemented to solve a source inversion problem for steady state internal flows subject to convection diffusion. Real-time performance is achieved using a novel decomposition into offline and online calculations. Adjoint methods are used to reconstruct initial conditions of a contamination event in an external flow. We demonstrate an adjoint-based transient solution. In addition, we investigated time domain decomposition algorithms in an attempt to improve the efficiency of transient simulations. Because derivative calculations are at the root of sensitivity calculations, we have developed hybrid automatic differentiation methods and implemented this approach for shape optimization for gas dynamics using the Euler equations. 
The hybrid automatic differentiation method was applied to a first
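The direct-versus-adjoint trade-off the abstract describes can be illustrated on a toy linear problem: for a scalar objective J(u) = cᵀu constrained by Au = b(p) with b(p) = Bp, direct sensitivities cost one linear solve per parameter, while the adjoint method needs a single transposed solve regardless of the parameter count. This is a generic sketch under those assumptions, not the authors' implementation.

```python
import numpy as np

# Sensitivity of J(u) = c^T u subject to A u = B p.
# Direct method: one linear solve per parameter (du/dp_i = A^{-1} B e_i).
# Adjoint method: a single solve A^T lam = c gives dJ/dp = B^T lam.

rng = np.random.default_rng(1)
n, m = 40, 10                      # state size, number of parameters
A = np.eye(n) + 0.05 * rng.standard_normal((n, n))   # well-conditioned toy operator
B = rng.standard_normal((n, m))
c = rng.standard_normal(n)

# Direct (forward) sensitivities: m linear solves.
dJdp_direct = np.array([c @ np.linalg.solve(A, B[:, i]) for i in range(m)])

# Adjoint sensitivities: one linear solve, independent of m.
lam = np.linalg.solve(A.T, c)
dJdp_adjoint = B.T @ lam

print(np.allclose(dJdp_direct, dJdp_adjoint))  # True
```

For PDE-constrained problems with thousands of parameters the adjoint route is what makes gradient-based optimization affordable, since the cost stays at one extra solve per objective.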

  15. Large Scale Computing and Storage Requirements for Basic Energy...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Sciences: A BES / ASCR / NERSC Workshop, February 9-10, 2010 ... Read More. Workshop Logistics: Workshop location, directions, and registration information are included here ...

  16. Large Scale Production Computing and Storage Requirements for...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    This is an invitation-only review organized by the Department of Energy's Office of Basic Energy Sciences (BES), Office of Advanced Scientific Computing Research (ASCR), and the ...

  17. Large Scale Computing Requirements for Basic Energy Sciences...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    ... Acoustic Waves: ∇²p(x,y,z,t) − (1/v²(x,y,z)) ∂²p/∂t² = s(t) ... Starting Models - Test Different Noise Assumptions * Scale Problem Up to Ever ...

  18. Large Scale Computing and Storage Requirements for High Energy...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    for High Energy Physics Accelerator Physics P. Spentzouris, Fermilab Motivation ... Project-X http://www.er.doe.gov/hep/HEPAP/reports/P5Report%2006022008.pdf ComPASS The SciDAC2 ...

  19. Large Scale Production Computing and Storage Requirements for...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    11-12, 2012 Hilton Rockville Hotel and Executive Meeting Center 1750 Rockville Pike Rockville, MD, 20852-1699 TEL: 1-301-468-1100 Sponsored by: U.S. Department of Energy...

  20. Energy Department Applauds Nation's First Large-Scale Industrial Carbon

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Capture and Storage Facility | Department of Energy Nation's First Large-Scale Industrial Carbon Capture and Storage Facility Energy Department Applauds Nation's First Large-Scale Industrial Carbon Capture and Storage Facility August 24, 2011 - 6:23pm Addthis Washington, D.C. - The U.S. Department of Energy issued the following statement in support of today's groundbreaking for construction of the nation's first large-scale industrial carbon capture and storage (ICCS) facility in Decatur,

  1. Large-Scale Industrial Carbon Capture, Storage Plant Begins Construction |

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Department of Energy Large-Scale Industrial Carbon Capture, Storage Plant Begins Construction Large-Scale Industrial Carbon Capture, Storage Plant Begins Construction August 24, 2011 - 1:00pm Addthis Washington, DC - Construction activities have begun at an Illinois ethanol plant that will demonstrate carbon capture and storage. The project, sponsored by the U.S. Department of Energy's Office of Fossil Energy, is the first large-scale integrated carbon capture and storage (CCS) demonstration

  2. Large-Scale All-Dielectric Metamaterial Perfect Reflectors

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Moitra, Parikshit; Slovick, Brian A.; Li, Wei; Kravchencko, Ivan I.; Briggs, Dayrl P.; Krishnamurthy, S.; Valentine, Jason

    2015-05-08

    All-dielectric metamaterials offer a potential low-loss alternative to plasmonic metamaterials at optical frequencies. In this paper, we take advantage of the low absorption loss as well as the simple unit cell geometry to demonstrate large-scale (centimeter-sized) all-dielectric metamaterial perfect reflectors made from silicon cylinder resonators. These perfect reflectors, operating in the telecommunications band, were fabricated using self-assembly based nanosphere lithography. In spite of the disorder originating from the self-assembly process, the average reflectance of the metamaterial perfect reflectors is 99.7% at 1530 nm, surpassing the reflectance of metallic mirrors. Moreover, the spectral separation of the electric and magnetic resonances can be chosen to achieve the required reflection bandwidth while maintaining a high tolerance to disorder. Finally, the scalability of this design could lead to new avenues of manipulating light for low-loss and large-area photonic applications.

  3. Overcoming the Barrier to Achieving Large-Scale Production -...

    Broader source: Energy.gov (indexed) [DOE]

    Semprius Confidential 1 Overcoming the Barriers to Achieving Large-Scale Production - A ... August 31, 2011 Semprius Confidential 2 Semprius Overview Background Company: * Leading ...

  4. Optimizing Cluster Heads for Energy Efficiency in Large-Scale...

    Office of Scientific and Technical Information (OSTI)

    clustering is generally considered as an efficient and scalable way to facilitate the management and operation of such large-scale networks and minimize the total energy...

  5. A Model for Turbulent Combustion Simulation of Large Scale Hydrogen...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    A Model for Turbulent Combustion Simulation of Large Scale Hydrogen Explosions Event Sponsor: Argonne Leadership Computing Facility Seminar Start Date: Oct 6 2015 - 10:00am...

  6. Stimulated forward Raman scattering in large scale-length laser...

    Office of Scientific and Technical Information (OSTI)

    Stimulated forward Raman scattering in large scale-length laser-produced plasmas Citation Details In-Document Search Title: Stimulated forward Raman scattering in large ...

  7. Strategies to Finance Large-Scale Deployment of Renewable Energy...

    Open Energy Info (EERE)

    to Finance Large-Scale Deployment of Renewable Energy Projects: An Economic Development and Infrastructure Approach Jump to: navigation, search Tool Summary LAUNCH TOOL Name:...

  8. Understanding large scale HPC systems through scalable monitoring...

    Office of Scientific and Technical Information (OSTI)

    HPC systems through scalable monitoring and analysis. Citation Details In-Document Search Title: Understanding large scale HPC systems through scalable monitoring and analysis. ...

  9. FEMP Helps Federal Facilities Develop Large-Scale Renewable Energy...

    Broader source: Energy.gov (indexed) [DOE]

    jobs, and advancing national goals for energy security. The guide describes the fundamentals of deploying financially attractive, large-scale renewable energy projects and...

  10. Optimizing Cluster Heads for Energy Efficiency in Large-Scale...

    Office of Scientific and Technical Information (OSTI)

    Optimizing Cluster Heads for Energy Efficiency in Large-Scale Heterogeneous Wireless Sensor Networks Gu, Yi; Wu, Qishi; Rao, Nageswara S. V. Hindawi Publishing Corporation None...

  11. Energy Department Applauds Nation's First Large-Scale Industrial...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    ... News Media Contact: 202-586-4940 Addthis Related Articles Large-Scale Industrial Carbon ... designed National Sequestration Education Center, located at Richland Community ...

  12. Large-Scale Hydropower Basics | Department of Energy

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Renewable Energy » Hydropower » Large-Scale Hydropower Basics Large-Scale Hydropower Basics August 14, 2013 - 3:11pm Addthis Large-scale hydropower plants are generally developed to produce electricity for government or electric utility projects. These plants are more than 30 megawatts (MW) in size, and there is more than 80,000 MW of installed generation capacity in the United States today. Most large-scale hydropower projects use a dam and a reservoir to retain water from a river. When the

  13. Large-Scale Sequencing: The Future of Genomic Sciences Colloquium

    SciTech Connect (OSTI)

    Margaret Riley; Merry Buckley

    2009-01-01

    Genetic sequencing and the various molecular techniques it has enabled have revolutionized the field of microbiology. Examining and comparing the genetic sequences borne by microbes - including bacteria, archaea, viruses, and microbial eukaryotes - provides researchers insights into the processes microbes carry out, their pathogenic traits, and new ways to use microorganisms in medicine and manufacturing. Until recently, sequencing entire microbial genomes has been laborious and expensive, and the decision to sequence the genome of an organism was made on a case-by-case basis by individual researchers and funding agencies. Now, thanks to new technologies, the cost and effort of sequencing is within reach for even the smallest facilities, and the ability to sequence the genomes of a significant fraction of microbial life may be possible. The availability of numerous microbial genomes will enable unprecedented insights into microbial evolution, function, and physiology. However, the current ad hoc approach to gathering sequence data has resulted in an unbalanced and highly biased sampling of microbial diversity. A well-coordinated, large-scale effort to target the breadth and depth of microbial diversity would result in the greatest impact. The American Academy of Microbiology convened a colloquium to discuss the scientific benefits of engaging in a large-scale, taxonomically-based sequencing project. A group of individuals with expertise in microbiology, genomics, informatics, ecology, and evolution deliberated on the issues inherent in such an effort and generated a set of specific recommendations for how best to proceed. The vast majority of microbes are presently uncultured and, thus, pose significant challenges to such a taxonomically-based approach to sampling genome diversity. However, we have yet to even scratch the surface of the genomic diversity among cultured microbes. 
A coordinated sequencing effort of cultured organisms is an appropriate place to begin

  14. Large-Scale First-Principles Molecular Dynamics Simulations on...

    Office of Scientific and Technical Information (OSTI)

    for large-scale parallel platforms such as BlueGeneL. Strong scaling tests for a Materials Science application show an 86% scaling efficiency between 1024 and 32,768 CPUs. ...

  15. Self-consistency tests of large-scale dynamics parameterizations...

    Office of Scientific and Technical Information (OSTI)

    In self-consistency tests based on radiative-convective equilibrium (RCE; i.e., no large-scale convergence), we find that simulations either weakly coupled or strongly coupled to ...

  16. ARM - Evaluation Product - Vertical Air Motion during Large-Scale...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    ProductsVertical Air Motion during Large-Scale Stratiform Rain ARM Data Discovery Browse ... Send us a note below or call us at 1-888-ARM-DATA. Send Evaluation Product : Vertical Air ...

  17. Towards a Large-Scale Recording System: Demonstration of Polymer...

    Office of Scientific and Technical Information (OSTI)

    of Polymer-Based Penetrating Array for Chronic Neural Recording Citation Details In-Document Search Title: Towards a Large-Scale Recording System: Demonstration of Polymer-Based ...

  18. How Three Retail Buyers Source Large-Scale Solar Electricity

    Broader source: Energy.gov [DOE]

    Large-scale, non-utility solar power purchase agreements (PPAs) are still a rarity despite the growing popularity of PPAs across the country. In this webinar, participants will learn more about how...

  19. Cosmological Simulations for Large-Scale Sky Surveys | Argonne...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    The focus of cosmology today is on its two mysterious pillars, dark matter and dark energy. Large-scale sky surveys are the current drivers of precision cosmology and have been ...

  20. Cosmological Simulations for Large-Scale Sky Surveys | Argonne...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    The focus of cosmology today revolves around two mysterious pillars, dark matter and dark energy. Large-scale sky surveys are the current drivers of precision cosmology and have ...

  1. COLLOQUIUM: Liquid Metal Batteries for Large-scale Energy Storage...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    June 22, 2016, 4:15pm to 5:30pm Colloquia MBG Auditorium, PPPL (284 cap.) COLLOQUIUM: Liquid Metal Batteries for Large-scale Energy Storage Dr. Hojong Kim Pennsylvania State ...

  2. The Cielo Petascale Capability Supercomputer: Providing Large-Scale

    Office of Scientific and Technical Information (OSTI)

    Computing for Stockpile Stewardship (Conference) | SciTech Connect Conference: The Cielo Petascale Capability Supercomputer: Providing Large-Scale Computing for Stockpile Stewardship Citation Details In-Document Search Title: The Cielo Petascale Capability Supercomputer: Providing Large-Scale Computing for Stockpile Stewardship Authors: Vigil, Benny Manuel [1] ; Doerfler, Douglas W. [1] + Show Author Affiliations Los Alamos National Laboratory Publication Date: 2013-03-11 OSTI Identifier:

  3. Revised Environmental Assessment Large-Scale, Open-Air Explosive

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Environmental Assessment Large-Scale, Open-Air Explosive Detonation, DIVINE STRAKE, at the Nevada Test Site May 2006 Prepared by Department of Energy National Nuclear Security Administration Nevada Site Office Environmental Assessment May 2006 Large-Scale, Open-Air Explosive Detonation, DIVINE STRAKE, at the Nevada Test Site TABLE OF CONTENTS 1.0 PURPOSE AND NEED FOR ACTION.....................................................1-1 1.1 Introduction and

  4. Breakthrough Large-Scale Industrial Project Begins Carbon Capture and

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Utilization | Department of Energy Breakthrough Large-Scale Industrial Project Begins Carbon Capture and Utilization Breakthrough Large-Scale Industrial Project Begins Carbon Capture and Utilization January 25, 2013 - 12:00pm Addthis Washington, DC - A breakthrough carbon capture, utilization, and storage (CCUS) project in Texas has begun capturing carbon dioxide (CO2) and piping it to an oilfield for use in enhanced oil recovery (EOR). Read the project factsheet The project at Air Products

  5. Cosmological Simulations for Large-Scale Sky Surveys | Argonne Leadership

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Computing Facility Cosmological Simulations for Large-Scale Sky Surveys PI Name: Salman Habib PI Email: habib@anl.gov Institution: Argonne National Laboratory Allocation Program: INCITE Allocation Hours at ALCF: 100 Million Year: 2014 Research Domain: Physics The next generation of large-scale sky surveys aims to establish a new regime of cosmic discovery through fundamental measurements of the universe's geometry and the growth of structure. The aim of this project is to accurately

  6. COLLOQUIUM: Large Scale Superconducting Magnets for Variety of Applications

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    | Princeton Plasma Physics Lab October 15, 2014, 4:00pm to 5:30pm Colloquia MBG Auditorium COLLOQUIUM: Large Scale Superconducting Magnets for Variety of Applications Professor Joseph Minervini Massachusetts Institute of Technology Presentation: PDF icon Superconducting_Magnet_Technology_for_Fusion_and_Large_Scale_Applications.pdf Over the past several decades the U. S. magnetic confinement fusion program, working in collaboration with international partners, has developed superconductor and

  7. Large-Scale Data Challenges in Future Power Grids

    SciTech Connect (OSTI)

    Yin, Jian; Sharma, Poorva; Gorton, Ian; Akyol, Bora A.

    2013-03-25

    This paper describes technical challenges in supporting large-scale real-time data analysis for future power grid systems and discusses various design options to address these challenges. Even though the existing U.S. power grid has served the nation remarkably well over the last 120 years, big changes are on the horizon. The widespread deployment of renewable generation, smart grid controls, energy storage, plug-in hybrids, and new conducting materials will require fundamental changes in the operational concepts and principal components. The whole system becomes highly dynamic and needs constant adjustments based on real-time data. Even though millions of sensors such as phasor measurement units (PMUs) and smart meters are being widely deployed, a data layer that can support this amount of data in real time is needed. Unlike the data fabric in cloud services, the data layer for smart grids must address some unique challenges. This layer must be scalable to support millions of sensors and a large number of diverse applications and still provide real-time guarantees. Moreover, the system needs to be highly reliable and highly secure because the power grid is a critical piece of infrastructure. No existing systems can satisfy all of these requirements at the same time. We examine various design options. In particular, we explore the special characteristics of power grid data to meet both scalability and quality-of-service requirements. Our initial prototype can improve performance by orders of magnitude over existing general-purpose systems. The prototype was demonstrated with several use cases from PNNL’s FPGI and was shown to be able to integrate a huge amount of data from a large number of sensors and a diverse set of applications.

  8. EINSTEIN'S SIGNATURE IN COSMOLOGICAL LARGE-SCALE STRUCTURE

    SciTech Connect (OSTI)

    Bruni, Marco; Hidalgo, Juan Carlos; Wands, David

    2014-10-10

    We show how the nonlinearity of general relativity generates a characteristic non-Gaussian signal in cosmological large-scale structure that we calculate at all perturbative orders in a large-scale limit. Newtonian gravity and general relativity provide complementary theoretical frameworks for modeling large-scale structure in ΛCDM cosmology; a relativistic approach is essential to determine initial conditions, which can then be used in Newtonian simulations studying the nonlinear evolution of the matter density. Most inflationary models in the very early universe predict an almost Gaussian distribution for the primordial metric perturbation, ζ. However, we argue that it is the Ricci curvature of comoving-orthogonal spatial hypersurfaces, R, that drives structure formation at large scales. We show how the nonlinear relation between the spatial curvature, R, and the metric perturbation, ζ, translates into a specific non-Gaussian contribution to the initial comoving matter density that we calculate for the simple case of an initially Gaussian ζ. Our analysis shows the nonlinear signature of Einstein's gravity in large-scale structure.
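The nonlinear ζ–R relation the abstract refers to can be sketched under the standard convention of a conformally flat spatial metric; the coefficients below follow from the usual conformal transformation of the 3-Ricci scalar and are stated here as an assumption, not quoted from the paper itself.

```latex
% Spatial metric on comoving-orthogonal hypersurfaces (standard convention):
%   g_{ij} = a^2(t)\, e^{2\zeta(\mathbf{x})}\, \delta_{ij}
% The 3-Ricci scalar then follows from the conformal transformation of a
% flat 3-metric:
\[
  R \;=\; -\,\frac{4}{a^2}\, e^{-2\zeta}
      \left[ \nabla^2 \zeta \;+\; \tfrac{1}{2}\,(\nabla\zeta)^2 \right]
\]
% Even for a Gaussian \zeta, the curvature R -- and the initial comoving
% matter density it sources -- acquires quadratic (non-Gaussian) terms.
```

This is the structural point of the abstract: the nonlinearity sits in the ζ-to-R map, so Gaussian primordial perturbations still yield non-Gaussian initial conditions for the matter density.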

  9. Large-Scale Industrial CCS Projects Selected for Continued Testing |

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Department of Energy CCS Projects Selected for Continued Testing Large-Scale Industrial CCS Projects Selected for Continued Testing June 10, 2010 - 1:00pm Addthis Washington, DC - Three Recovery Act funded projects have been selected by the U.S. Department of Energy (DOE) to continue testing large-scale carbon capture and storage (CCS) from industrial sources. The projects - located in Texas, Illinois, and Louisiana - were initially selected for funding in October 2009 as part of a $1.4

  10. DOE Completes Large-Scale Carbon Sequestration Project Awards | Department

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    of Energy Large-Scale Carbon Sequestration Project Awards DOE Completes Large-Scale Carbon Sequestration Project Awards November 17, 2008 - 4:58pm Addthis Regional Partner to Demonstrate Safe and Permanent Storage of 2 Million Tons of CO2 at Wyoming Site WASHINGTON, DC - Completing a series of awards through its Regional Carbon Sequestration Partnership Program, the U.S. Department of Energy (DOE) today awarded $66.9 million to the Big Sky Regional Carbon Sequestration Partnership for the

  11. Report of the Workshop on Petascale Systems Integration for LargeScale Facilities

    SciTech Connect (OSTI)

    Kramer, William T.C.; Walter, Howard; New, Gary; Engle, Tom; Pennington, Rob; Comes, Brad; Bland, Buddy; Tomlison, Bob; Kasdorf, Jim; Skinner, David; Regimbal, Kevin

    2007-10-01

    There are significant issues regarding large-scale system integration that are not being addressed in other forums such as current research portfolios or vendor user groups. Unfortunately, the issues in the area of large-scale system integration often fall into a netherworld: not research, not facilities, not procurement, not operations, not user services. Taken together, these issues, along with the impact of sub-optimal integration technology, mean the time required to deploy, integrate and stabilize large-scale systems may consume up to 20 percent of the useful life of such systems. Improving the state of the art for large-scale systems integration has the potential to increase the scientific productivity of these systems. Sites have significant expertise, but there are no easy ways to leverage this expertise among them. Many issues inhibit the sharing of information, including available time and effort, as well as issues with sharing proprietary information. Vendors also benefit in the long run from the solutions to issues detected during site testing and integration. There is a great deal of enthusiasm for making large-scale system integration a full-fledged partner along with the other major thrusts supported by funding agencies in the definition, design, and use of petascale systems. Integration technology and issues should have a full 'seat at the table' as petascale and exascale initiatives and programs are planned. The workshop attendees identified a wide range of issues and suggested paths forward. Pursuing these with funding opportunities and innovation offers the opportunity to dramatically improve the state of large-scale system integration.

  12. Lessons from Large-Scale Renewable Energy Integration Studies: Preprint

    SciTech Connect (OSTI)

    Bird, L.; Milligan, M.

    2012-06-01

    In general, large-scale integration studies in Europe and the United States find that high penetrations of renewable generation are technically feasible with operational changes and increased access to transmission. This paper describes other key findings such as the need for fast markets, large balancing areas, system flexibility, and the use of advanced forecasting.

  13. Large-scale seismic waveform quality metric calculation using Hadoop

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Magana-Zook, Steven; Gaylord, Jessie M.; Knapp, Douglas R.; Dodge, Douglas A.; Ruppert, Stanley D.

    2016-05-27

    In this work we investigated the suitability of Hadoop MapReduce and Apache Spark for large-scale computation of seismic waveform quality metrics by comparing their performance with that of a traditional distributed implementation. The Incorporated Research Institutions for Seismology (IRIS) Data Management Center (DMC) provided 43 terabytes of broadband waveform data, of which 5.1 TB were processed with the traditional architecture, and the full 43 TB were processed using MapReduce and Spark. Maximum performance of ~0.56 terabytes per hour was achieved using all 5 nodes of the traditional implementation. We noted that I/O dominated processing, and that I/O performance was deteriorating with the addition of the 5th node. Data collected from this experiment provided the baseline against which the Hadoop results were compared. Next, we processed the full 43 TB dataset using both MapReduce and Apache Spark on our 18-node Hadoop cluster. We conducted these experiments multiple times with various subsets of the data so that we could build models to predict performance as a function of dataset size. We found that both MapReduce and Spark significantly outperformed the traditional reference implementation. At a dataset size of 5.1 terabytes, both Spark and MapReduce were about 15 times faster than the reference implementation. Furthermore, our performance models predict that for a dataset of 350 terabytes, Spark running on a 100-node cluster would be about 265 times faster than the reference implementation. We do not expect that the reference implementation deployed on a 100-node cluster would perform significantly better than on the 5-node cluster, because the I/O performance cannot be made to scale. Finally, we note that although Big Data technologies clearly provide a way to process seismic waveform datasets in a high-performance and scalable manner, the technology is still rapidly changing, requires a high degree of investment in personnel, and will
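    The abstract above describes building models to predict runtime as a function of dataset size. A minimal sketch of that idea, assuming a simple linear runtime model; the sample timing points below are illustrative placeholders, not the paper's measured data (only the ~0.56 TB/hour baseline figure comes from the abstract):

    ```python
    # Fit a linear runtime-vs-dataset-size model from timing measurements,
    # then extrapolate, in the spirit of the abstract's performance models.
    def fit_linear(xs, ys):
        """Ordinary least squares for y = a*x + b."""
        n = len(xs)
        mx = sum(xs) / n
        my = sum(ys) / n
        a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
        b = my - a * mx
        return a, b

    # Hypothetical (terabytes, hours) timings for a scalable cluster run.
    sizes_tb = [1.0, 5.1, 10.0, 43.0]
    hours = [0.1, 0.45, 0.85, 3.6]

    a, b = fit_linear(sizes_tb, hours)
    predicted_hours_350tb = a * 350 + b  # extrapolate to a 350 TB dataset

    # Reference (non-scalable) implementation at its measured ~0.56 TB/hour:
    reference_hours_350tb = 350 / 0.56
    speedup = reference_hours_350tb / predicted_hours_350tb
    ```

    With roughly linear scaling in the cluster implementation, the predicted speedup grows with dataset size, which is the qualitative behavior the abstract reports.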

  14. GAIA: A WINDOW TO LARGE-SCALE MOTIONS

    SciTech Connect (OSTI)

    Nusser, Adi; Branchini, Enzo; Davis, Marc

    2012-08-10

    Using redshifts as a proxy for galaxy distances, estimates of the two-dimensional (2D) transverse peculiar velocities of distant galaxies could be obtained from future measurements of proper motions. We provide the mathematical framework for analyzing 2D transverse motions and show that they offer several advantages over traditional probes of large-scale motions. They are completely independent of any intrinsic relations between galaxy properties; hence, they are essentially free of selection biases. They are free from homogeneous and inhomogeneous Malmquist biases that typically plague distance indicator catalogs. They provide additional information to traditional probes that yield line-of-sight peculiar velocities only. Further, because of their 2D nature, fundamental questions regarding vorticity of large-scale flows can be addressed. Gaia, for example, is expected to provide proper motions of at least bright galaxies with high central surface brightness, making proper motions a likely contender for traditional probes based on current and future distance indicator measurements.

  15. Electron drift in a large scale solid xenon

    SciTech Connect (OSTI)

    Yoo, J.; Jaskierny, W. F.

    2015-08-21

    A study of charge drift in large-scale, optically transparent solid xenon is reported. A pulsed high-power xenon light source is used to liberate electrons from a photocathode. The drift speeds of the electrons are measured using an 8.7 cm long electrode in both the liquid and solid phases of xenon. In the liquid phase (163 K), the drift speed is 0.193 ± 0.003 cm/μs, while in the solid phase (157 K) it is 0.397 ± 0.006 cm/μs, at 900 V/cm over 8.0 cm of uniform electric field. Thus, the electron drift speed in large-scale solid xenon is demonstrated to be a factor of two faster than that in the liquid phase.
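    A quick worked check of the numbers quoted in the abstract (drift speeds and the "factor of two" claim), using only the stated 8.0 cm drift length:

    ```python
    # Worked arithmetic check of the drift speeds quoted in the abstract:
    # 0.193 cm/us (liquid, 163 K) vs 0.397 cm/us (solid, 157 K) at 900 V/cm.
    drift_length_cm = 8.0
    v_liquid = 0.193  # cm/us
    v_solid = 0.397   # cm/us

    t_liquid = drift_length_cm / v_liquid  # drift time in liquid, ~41.5 us
    t_solid = drift_length_cm / v_solid    # drift time in solid, ~20.2 us
    ratio = v_solid / v_liquid             # ~2.06: the "factor of two"
    ```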

  16. Electron drift in a large scale solid xenon

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Yoo, J.; Jaskierny, W. F.

    2015-08-21

    A study of charge drift in large-scale, optically transparent solid xenon is reported. A pulsed high-power xenon light source is used to liberate electrons from a photocathode. The drift speeds of the electrons are measured using an 8.7 cm long electrode in both the liquid and solid phases of xenon. In the liquid phase (163 K), the drift speed is 0.193 ± 0.003 cm/μs, while in the solid phase (157 K) it is 0.397 ± 0.006 cm/μs, at 900 V/cm over 8.0 cm of uniform electric field. Thus, the electron drift speed in large-scale solid xenon is demonstrated to be a factor of two faster than that in the liquid phase.

  17. LARGE-SCALE MOTIONS IN THE PERSEUS GALAXY CLUSTER

    SciTech Connect (OSTI)

    Simionescu, A.; Werner, N.; Urban, O.; Allen, S. W.; Fabian, A. C.; Sanders, J. S.; Mantz, A.; Nulsen, P. E. J.; Takei, Y.

    2012-10-01

    By combining large-scale mosaics of ROSAT PSPC, XMM-Newton, and Suzaku X-ray observations, we present evidence for large-scale motions in the intracluster medium of the nearby, X-ray bright Perseus Cluster. These motions are suggested by several alternating and interleaved X-ray bright, low-temperature, low-entropy arcs located along the east-west axis, at radii ranging from ≈10 kpc to over a Mpc. Thermodynamic features qualitatively similar to these have previously been observed in the centers of cool-core clusters, and were successfully modeled as a consequence of the gas sloshing/swirling motions induced by minor mergers. Our observations indicate that such sloshing/swirling can extend out to larger radii than previously thought, on scales approaching the virial radius.

  18. The workshop on iterative methods for large scale nonlinear problems

    SciTech Connect (OSTI)

    Walker, H.F.; Pernice, M.

    1995-12-01

    The aim of the workshop was to bring together researchers working on large scale applications with numerical specialists of various kinds. Applications that were addressed included reactive flows (combustion and other chemically reacting flows, tokamak modeling), porous media flows, cardiac modeling, chemical vapor deposition, image restoration, macromolecular modeling, and population dynamics. Numerical areas included Newton iterative (truncated Newton) methods, Krylov subspace methods, domain decomposition and other preconditioning methods, large scale optimization and optimal control, and parallel implementations and software. This report offers a brief summary of workshop activities and information about the participants. Interested readers are encouraged to look into an online proceedings available at http://www.usi.utah.edu/logan.proceedings. There, the material offered here is augmented with hypertext abstracts that include links to locations such as speakers' home pages, PostScript copies of talks and papers, cross-references to related talks, and other information about topics addressed at the workshop.

  19. Large-Scale Optimization for Bayesian Inference in Complex Systems

    SciTech Connect (OSTI)

    Willcox, Karen; Marzouk, Youssef

    2013-11-12

    The SAGUARO (Scalable Algorithms for Groundwater Uncertainty Analysis and Robust Optimization) Project focused on the development of scalable numerical algorithms for large-scale Bayesian inversion in complex systems that capitalize on advances in large-scale simulation-based optimization and inversion methods. The project was a collaborative effort among MIT, the University of Texas at Austin, Georgia Institute of Technology, and Sandia National Laboratories. The research was directed in three complementary areas: efficient approximations of the Hessian operator, reductions in complexity of forward simulations via stochastic spectral approximations and model reduction, and employing large-scale optimization concepts to accelerate sampling. The MIT-Sandia component of the SAGUARO Project addressed the intractability of conventional sampling methods for large-scale statistical inverse problems by devising reduced-order models that are faithful to the full-order model over a wide range of parameter values; sampling then employs the reduced model rather than the full model, resulting in very large computational savings. Results indicate little effect on the computed posterior distribution. On the other hand, in the Texas-Georgia Tech component of the project, we retain the full-order model, but exploit inverse problem structure (adjoint-based gradients and partial Hessian information of the parameter-to-observation map) to implicitly extract lower-dimensional information on the posterior distribution; this greatly speeds up sampling methods, so that fewer sampling points are needed. We can think of these two approaches as 'reduce then sample' and 'sample then reduce.' In fact, these two approaches are complementary, and can be used in conjunction with each other. Moreover, they both exploit deterministic inverse problem structure, in the form of adjoint-based gradient and Hessian information of the underlying parameter-to-observation map, to achieve their
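    The sampling step that both "reduce then sample" and "sample then reduce" accelerate can be illustrated with a minimal random-walk Metropolis sketch. This is a generic textbook sampler, not the SAGUARO project's algorithms; the toy Gaussian target stands in for a posterior whose evaluation would normally require an expensive forward simulation:

    ```python
    import math
    import random

    def metropolis(logpost, x0, step, n, seed=0):
        """Random-walk Metropolis: draws n correlated samples whose
        stationary distribution is proportional to exp(logpost)."""
        rng = random.Random(seed)
        x, lp = x0, logpost(x0)
        out = []
        for _ in range(n):
            xp = x + rng.gauss(0.0, step)   # propose a local move
            lpp = logpost(xp)
            if math.log(rng.random()) < lpp - lp:  # accept/reject
                x, lp = xp, lpp
            out.append(x)
        return out

    # Toy 1D Gaussian posterior N(2, 1) standing in for an expensive model.
    samples = metropolis(lambda x: -0.5 * (x - 2.0) ** 2,
                         x0=0.0, step=1.0, n=20000)
    post = samples[5000:]                    # discard burn-in
    mean = sum(post) / len(post)             # should approach 2.0
    ```

    Every iteration calls `logpost` once; replacing that call with a cheap reduced-order surrogate ("reduce then sample") or informing the proposal with gradient/Hessian structure ("sample then reduce") is where the computational savings described above come from.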

  20. UNIVERSITY OF CALIFORNIA The Future of Large Scale Visual Data

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    CALIFORNIA The Future of Large Scale Visual Data Analysis Joint Facilities User Forum on Data Intensive Computing Oakland, CA E. Wes Bethel Lawrence Berkeley National Laboratory 16 June 2014 16 June 2014 The World that Was: Computational Architectures * Machine architectures - Single CPU, single core - Vector, then single-core MPPs - "Large" SMP platforms - Relatively well balanced: memory, FLOPS,I/O 16 June 2014 The World that Was: Software Architecture * Data Analysis and

  1. Robust, Multifunctional Joint for Large Scale Power Production Stacks -

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Energy Innovation Portal Robust, Multifunctional Joint for Large Scale Power Production Stacks Lawrence Berkeley National Laboratory Contact LBL About This Technology DIAGRAM OF BERKELEY LAB'S MULTIFUNCTIONAL JOINT DIAGRAM OF BERKELEY LAB'S MULTIFUNCTIONAL JOINT Technology Marketing SummaryBerkeley Lab scientists have developed a multifunctional joint for metal supported, tubular SOFCs that divides various joint functions so that materials and methods optimizing each function can be chosen

  2. Large-scale ab initio configuration interaction calculations for light

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    nuclei | Argonne Leadership Computing Facility Large-scale ab initio configuration interaction calculations for light nuclei Authors: Pieter Maris, H Metin Aktulga, Mark A Caprio, Ümit V Çatalyürek, Esmond G Ng, Dossay Oryspayev, Hugh Potter, Erik Saule, Masha Sosonkina, James P Vary, Chao Yang Zheng Zhou In ab-initio Configuration Interaction calculations, the nuclear wavefunction is expanded in Slater determinants of single-nucleon wavefunctions and the many-body Schrodinger equation

  3. The Phoenix series large scale LNG pool fire experiments.

    SciTech Connect (OSTI)

    Simpson, Richard B.; Jensen, Richard Pearson; Demosthenous, Byron; Luketa, Anay Josephine; Ricks, Allen Joseph; Hightower, Marion Michael; Blanchat, Thomas K.; Helmick, Paul H.; Tieszen, Sheldon Robert; Deola, Regina Anne; Mercier, Jeffrey Alan; Suo-Anttila, Jill Marie; Miller, Timothy J.

    2010-12-01

    The increasing demand for natural gas could increase the number and frequency of Liquefied Natural Gas (LNG) tanker deliveries to ports across the United States. Because of the increasing number of shipments and the number of possible new facilities, concerns about the safety of the public and property from accidental, and even more importantly intentional, spills have increased. While improvements have been made over the past decade in assessing hazards from LNG spills, the existing experimental data is much smaller in size and scale than many postulated large accidental and intentional spills. Since the physics and hazards of a fire change with fire size, there are concerns about the adequacy of current hazard prediction techniques for large LNG spills and fires. To address these concerns, Congress funded the Department of Energy (DOE) in 2008 to conduct a series of laboratory and large-scale LNG pool fire experiments at Sandia National Laboratories (Sandia) in Albuquerque, New Mexico. This report presents the test data and results of both sets of fire experiments. A series of five reduced-scale (gas burner) tests (yielding 27 sets of data) were conducted in 2007 and 2008 at Sandia's Thermal Test Complex (TTC) to assess flame height to fire diameter ratios as a function of nondimensional heat release rates for extrapolation to large-scale LNG fires. The large-scale LNG pool fire experiments were conducted in a 120 m diameter pond specially designed and constructed in Sandia's Area III large-scale test complex. Two fire tests of LNG spills of 21 and 81 m in diameter were conducted in 2009 to improve the understanding of flame height, smoke production, and burn rate, and therefore the physics and hazards of large LNG spills and fires.

  4. Computational Fluid Dynamics & Large-Scale Uncertainty Quantification for

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Wind Energy Fluid Dynamics & Large-Scale Uncertainty Quantification for Wind Energy - Sandia Energy Energy Search Icon Sandia Home Locations Contact Us Employee Locator Energy & Climate Secure & Sustainable Energy Future Stationary Power Energy Conversion Efficiency Solar Energy Wind Energy Water Power Supercritical CO2 Geothermal Natural Gas Safety, Security & Resilience of the Energy Infrastructure Energy Storage Nuclear Power & Engineering Grid Modernization Battery

  5. Geospatial Optimization of Siting Large-Scale Solar Projects

    SciTech Connect (OSTI)

    Macknick, J.; Quinby, T.; Caulfield, E.; Gerritsen, M.; Diffendorfer, J.; Haines, S.

    2014-03-01

    Recent policy and economic conditions have encouraged a renewed interest in developing large-scale solar projects in the U.S. Southwest. However, siting large-scale solar projects is complex. In addition to the quality of the solar resource, solar developers must take into consideration many environmental, social, and economic factors when evaluating a potential site. This report describes a proof-of-concept, Web-based Geographical Information Systems (GIS) tool that evaluates multiple user-defined criteria in an optimization algorithm to inform discussions and decisions regarding the locations of utility-scale solar projects. Existing siting recommendations for large-scale solar projects from governmental and non-governmental organizations are not consistent with each other, are often not transparent in methods, and do not take into consideration the differing priorities of stakeholders. The siting assistance GIS tool we have developed improves upon the existing siting guidelines by being user-driven, transparent, interactive, capable of incorporating multiple criteria, and flexible. This work provides the foundation for a dynamic siting assistance tool that can greatly facilitate siting decisions among multiple stakeholders.
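    The multi-criteria evaluation described above can be sketched as a weighted scoring pass over candidate sites. The criteria, values, and weights below are hypothetical placeholders, and this is a generic weighted-sum approach rather than the tool's actual optimization algorithm:

    ```python
    # Minimal sketch of user-driven multi-criteria site scoring.
    def score_sites(sites, weights):
        """Normalize each criterion to [0, 1] across sites, then weighted-sum."""
        names = list(weights)
        lo = {c: min(s[c] for s in sites.values()) for c in names}
        hi = {c: max(s[c] for s in sites.values()) for c in names}

        def norm(c, v):
            return 0.0 if hi[c] == lo[c] else (v - lo[c]) / (hi[c] - lo[c])

        return {site: sum(weights[c] * norm(c, vals[c]) for c in names)
                for site, vals in sites.items()}

    # Hypothetical candidate sites; higher is better for every criterion here.
    sites = {
        "A": {"solar_resource": 7.2, "grid_proximity": 0.9, "land_suitability": 0.6},
        "B": {"solar_resource": 6.8, "grid_proximity": 0.4, "land_suitability": 0.9},
        "C": {"solar_resource": 7.5, "grid_proximity": 0.7, "land_suitability": 0.3},
    }
    # User-defined priorities: stakeholders with different weights get
    # different rankings, which is the point of a user-driven tool.
    weights = {"solar_resource": 0.5, "grid_proximity": 0.3, "land_suitability": 0.2}

    scores = score_sites(sites, weights)
    best = max(scores, key=scores.get)
    ```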

  6. Robust large-scale parallel nonlinear solvers for simulations.

    SciTech Connect (OSTI)

    Bader, Brett William; Pawlowski, Roger Patrick; Kolda, Tamara Gibson

    2005-11-01

    This report documents research to develop robust and efficient solution techniques for solving large-scale systems of nonlinear equations. The most widely used method for solving systems of nonlinear equations is Newton's method. While much research has been devoted to augmenting Newton-based solvers (usually with globalization techniques), little has been devoted to exploring the application of different models. Our research has been directed at evaluating techniques using different models than Newton's method: a lower order model, Broyden's method, and a higher order model, the tensor method. We have developed large-scale versions of each of these models and have demonstrated their use in important applications at Sandia. Broyden's method replaces the Jacobian with an approximation, allowing codes that cannot evaluate a Jacobian or have an inaccurate Jacobian to converge to a solution. Limited-memory methods, which have been successful in optimization, allow us to extend this approach to large-scale problems. We compare the robustness and efficiency of Newton's method, modified Newton's method, Jacobian-free Newton-Krylov method, and our limited-memory Broyden method. Comparisons are carried out for large-scale applications of fluid flow simulations and electronic circuit simulations. Results show that, in cases where the Jacobian was inaccurate or could not be computed, Broyden's method converged in some cases where Newton's method failed to converge. We identify conditions where Broyden's method can be more efficient than Newton's method. We also present modifications to a large-scale tensor method, originally proposed by Bouaricha, for greater efficiency, better robustness, and wider applicability. Tensor methods are an alternative to Newton-based methods and are based on computing a step based on a local quadratic model rather than a linear model. 
The advantage of Bouaricha's method is that it can use any existing linear solver, which makes it simple to write
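    Broyden's method, as described above, replaces the Jacobian with an approximation updated from successive function evaluations. A minimal (non-limited-memory) sketch on a small test system; the test problem and the identity initial Jacobian are illustrative choices, not the report's implementation:

    ```python
    import numpy as np

    def broyden(F, x0, tol=1e-10, max_iter=50):
        """'Good' Broyden method: Newton-like iteration with a rank-one
        update of the Jacobian approximation, so F's true Jacobian is
        never formed or evaluated."""
        x = np.asarray(x0, dtype=float)
        J = np.eye(len(x))            # crude initial Jacobian approximation
        Fx = F(x)
        for _ in range(max_iter):
            dx = np.linalg.solve(J, -Fx)          # quasi-Newton step
            x_new = x + dx
            F_new = F(x_new)
            dF = F_new - Fx
            J += np.outer(dF - J @ dx, dx) / (dx @ dx)  # rank-one update
            x, Fx = x_new, F_new
            if np.linalg.norm(Fx) < tol:
                break
        return x

    # Small nonlinear test system: x^2 + y^2 = 4, x*y = 1.
    def F(v):
        x, y = v
        return np.array([x * x + y * y - 4.0, x * y - 1.0])

    root = broyden(F, [2.0, 0.5])   # start near the root (~1.93, ~0.52)
    ```

    The limited-memory variant discussed in the report avoids storing the dense matrix J by keeping only the recent rank-one update vectors, which is what makes the approach viable at large scale.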

  7. Topology-Aware Mappings for Large-Scale Eigenvalue Problems | Argonne

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Leadership Computing Facility Topology-Aware Mappings for Large-Scale Eigenvalue Problems Authors: Aktulga, H.M., Yang, C., Ng, E.G., Maris, P., Vary, J.P. Obtaining highly accurate predictions for properties of light atomic nuclei using the Configuration Interaction (CI) approach requires computing the lowest eigenvalues and associated eigenvectors of a large many-body nuclear Hamiltonian matrix, Ĥ. Since Ĥ is a large sparse matrix, a parallel iterative eigensolver designed for

  8. Reducing Data Center Loads for a Large-scale, Low Energy Office...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Data Center Loads for a Large-scale, Low-energy Office Building: NREL's Research Support ... National Renewable Energy Laboratory Reducing Data Center Loads for a Large-Scale, ...

  9. HyLights -- Tools to Prepare the Large-Scale European Demonstration...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    HYLIGHTS - TOOLS TO PREPARE THE LARGE-SCALE EUROPEAN DEMONSTRATION PROJECTS ON HYDROGEN ... Assist the European Commission and European industry to plan the large-scale demonstration ...

  10. Large-Scale Algal Cultivation, Harvesting and Downstream Processing Workshop

    Office of Energy Efficiency and Renewable Energy (EERE)

    ATP3 (Algae Testbed Public-Private Partnership) is hosting the Large-Scale Algal Cultivation, Harvesting and Downstream Processing Workshop on November 2–6, 2015, at the Arizona Center for Algae Technology and Innovation in Mesa, Arizona. Topics will include practical applications of growing and managing microalgal cultures at production scale (such as methods for handling cultures, screening strains for desirable characteristics, identifying and mitigating contaminants, scaling up cultures for outdoor growth, harvesting and processing technologies, and the analysis of lipids, proteins, and carbohydrates). Related training will include hands-on laboratory and field opportunities.

  11. Large scale obscuration and related climate effects open literature bibliography

    SciTech Connect (OSTI)

    Russell, N.A.; Geitgey, J.; Behl, Y.K.; Zak, B.D.

    1994-05-01

    Large scale obscuration and related climate effects of nuclear detonations first became a matter of concern in connection with the so-called 'Nuclear Winter Controversy' in the early 1980s. Since then, the world has changed. Nevertheless, concern remains about the atmospheric effects of nuclear detonations, but the source of concern has shifted. Now it focuses less on global, and more on regional effects and their resulting impacts on the performance of electro-optical and other defense-related systems. This bibliography reflects the modified interest.

  12. Large-scale anisotropy in stably stratified rotating flows

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Marino, R.; Mininni, P. D.; Rosenberg, D. L.; Pouquet, A.

    2014-08-28

    We present results from direct numerical simulations of the Boussinesq equations in the presence of rotation and/or stratification, both in the vertical direction. The runs are forced isotropically and randomly at small scales and have spatial resolutions of up to 1024^3 grid points and Reynolds numbers of ≈1000. We first show that solutions with negative energy flux and inverse cascades develop in rotating turbulence, whether or not stratification is present. However, the purely stratified case is characterized instead by an early-time, highly anisotropic transfer to large scales with almost zero net isotropic energy flux. This is consistent with previous studies that observed the development of vertically sheared horizontal winds, although only at substantially later times. However, and unlike previous works, when sufficient scale separation is allowed between the forcing scale and the domain size, the total energy displays a perpendicular (horizontal) spectrum with power law behavior compatible with ~ k⊥^(-5/3), including in the absence of rotation. In this latter purely stratified case, such a spectrum is the result of a direct cascade of the energy contained in the large-scale horizontal wind, as is evidenced by a strong positive flux of energy in the parallel direction at all scales including the largest resolved scales.

  13. Detecting differential protein expression in large-scale population proteomics

    SciTech Connect (OSTI)

    Ryu, Soyoung; Qian, Weijun; Camp, David G.; Smith, Richard D.; Tompkins, Ronald G.; Davis, Ronald W.; Xiao, Wenzhong

    2014-06-17

    Mass spectrometry-based high-throughput quantitative proteomics shows great potential in clinical biomarker studies, identifying and quantifying thousands of proteins in biological samples. However, methods are needed to appropriately handle issues unique to mass spectrometry data in order to detect as many biomarker proteins as possible. One issue is that different mass spectrometry experiments generate quite different total numbers of quantified peptides, which can result in more missing peptide abundances in an experiment with a smaller total number of quantified peptides. Another issue is that the quantification of peptides is sometimes absent, especially for less abundant peptides, and such missing values carry information about the peptide abundance. Here, we propose a Significance Analysis for Large-scale Proteomics Studies (SALPS) that handles missing peptide intensity values caused by the two mechanisms mentioned above. Our model has a robust performance in both simulated data and proteomics data from a large clinical study. Because varying patients' sample qualities and deviating instrument performances are not avoidable for clinical studies performed over the course of several years, we believe that our approach will be useful for analyzing large-scale clinical proteomics data.
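    The claim that missing values carry information can be demonstrated with a small simulation. This sketch is only an illustration of intensity-dependent missingness and its bias, not the SALPS model; the detection limit and survival rate are invented parameters:

    ```python
    import random

    # Illustration: low-abundance peptides go missing more often, so a
    # naive complete-case mean overestimates the true mean abundance.
    rng = random.Random(42)
    true_intensities = [rng.gauss(20.0, 4.0) for _ in range(10000)]

    LOD = 18.0  # hypothetical detection limit
    # Values below the limit are observed only 20% of the time.
    observed = [x for x in true_intensities
                if x >= LOD or rng.random() < 0.2]

    true_mean = sum(true_intensities) / len(true_intensities)
    naive_mean = sum(observed) / len(observed)
    bias = naive_mean - true_mean   # positive: the naive estimate is inflated
    ```

    A method that models the missingness mechanism, as SALPS does, can correct for this inflation instead of silently inheriting it.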

  14. The effective field theory of cosmological large scale structures

    SciTech Connect (OSTI)

    Carrasco, John Joseph M.; Hertzberg, Mark P.; Senatore, Leonardo

    2012-09-20

    Large scale structure surveys will likely become the next leading cosmological probe. In our universe, matter perturbations are large on short distances and small at long scales, i.e. strongly coupled in the UV and weakly coupled in the IR. To make precise analytical predictions on large scales, we develop an effective field theory formulated in terms of an IR effective fluid characterized by several parameters, such as speed of sound and viscosity. These parameters, determined by the UV physics described by the Boltzmann equation, are measured from N-body simulations. We find that the speed of sound of the effective fluid is c_s^2 ≈ 10^-6 c^2 and that the viscosity contributions are of the same order. The fluid describes all the relevant physics at long scales and permits a manifestly convergent perturbative expansion in the size of the matter perturbations δ(k) for all the observables. As an example, we calculate the correction to the power spectrum at order δ(k)^4. As a result, the predictions of the effective field theory are found to be in much better agreement with observation than standard cosmological perturbation theory, already reaching percent precision at this order up to a relatively short scale k ≈ 0.24 h Mpc^-1.

  15. Presentation on the Large-Scale Renewable Energy Guide | Department of

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Presentation covers the Large-Scale RE Guide: Developing Renewable Energy Projects Larger than 10 MWs at Federal Facilities for the FUPWG Spring meeting, held on May 22, 2013, in San Francisco, California. Download FEMP's Large-Scale Renewable Energy Guide - Presented by Brad Gustafson (1.75 MB) More Documents & Publications: Large-Scale Federal Renewable Energy Projects

  16. Nuclear-pumped lasers for large-scale applications

    SciTech Connect (OSTI)

    Anderson, R.E.; Leonard, E.M.; Shea, R.F.; Berggren, R.R.

    1989-05-01

    Efficient initiation of large-volume chemical lasers may be achieved by neutron induced reactions which produce charged particles in the final state. When a burst mode nuclear reactor is used as the neutron source, both a sufficiently intense neutron flux and a sufficiently short initiation pulse may be possible. Proof-of-principle experiments are planned to demonstrate lasing in a direct nuclear-pumped large-volume system; to study the effects of various neutron absorbing materials on laser performance; to study the effects of long initiation pulse lengths; to demonstrate the performance of large-scale optics and the beam quality that may be obtained; and to assess the performance of alternative designs of burst systems that increase the neutron output and burst repetition rate. 21 refs., 8 figs., 5 tabs.

  17. Nuclear-pumped lasers for large-scale applications

    SciTech Connect (OSTI)

    Anderson, R.E.; Leonard, E.M.; Shea, R.E.; Berggren, R.R.

    1988-01-01

    Efficient initiation of large-volume chemical lasers may be achieved by neutron induced reactions which produce charged particles in the final state. When a burst mode nuclear reactor is used as the neutron source, both a sufficiently intense neutron flux and a sufficiently short initiation pulse may be possible. Proof-of-principle experiments are planned to demonstrate lasing in a direct nuclear-pumped large-volume system; to study the effects of various neutron absorbing materials on laser performance; to study the effects of long initiation pulse lengths; to determine the performance of large-scale optics and the beam quality that may be obtained; and to assess the performance of alternative designs of burst systems that increase the neutron output and burst repetition rate. 21 refs., 7 figs., 5 tabs.

  18. Planning under uncertainty solving large-scale stochastic linear programs

    SciTech Connect (OSTI)

    Infanger, G. (Dept. of Operations Research; Technische Univ., Vienna, Inst. fuer Energiewirtschaft)

    1992-12-01

    For many practical problems, solutions obtained from deterministic models are unsatisfactory because they fail to hedge against certain contingencies that may occur in the future. Stochastic models address this shortcoming, but until recently seemed to be intractable due to their size. Recent advances both in solution algorithms and in computer technology now allow us to solve important and general classes of practical stochastic problems. We show how large-scale stochastic linear programs can be efficiently solved by combining classical decomposition and Monte Carlo (importance) sampling techniques. We discuss the methodology for solving two-stage stochastic linear programs with recourse, present numerical results for large problems with numerous stochastic parameters, show how to efficiently implement the methodology on a parallel multi-computer, and derive the theory for solving a general class of multi-stage problems with dependency of the stochastic parameters within a stage and between different stages.
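    The Monte Carlo side of the approach above can be illustrated with sample average approximation on the smallest possible two-stage recourse problem (a newsvendor: commit to an order now, buy shortfall at a premium after demand is revealed). This is a toy stand-in for the decomposition and importance-sampling machinery the abstract describes; all costs and the demand distribution are invented:

    ```python
    import random

    # Sampled demand scenarios (first stage decides x before demand is known).
    rng = random.Random(1)
    demands = [rng.uniform(50.0, 150.0) for _ in range(2000)]

    c, q = 1.0, 3.0  # first-stage unit cost, recourse (shortfall) unit cost

    def saa_cost(x):
        """Sample-average estimate of total expected cost for order size x."""
        recourse = sum(max(d - x, 0.0) for d in demands) / len(demands)
        return c * x + q * recourse

    # The SAA objective is piecewise linear with kinks only at the sampled
    # demands, so scanning the samples finds the exact SAA optimum.
    x_opt = min(demands, key=saa_cost)

    # Analytic check: the optimal order is the (1 - c/q) quantile of demand.
    target = sorted(demands)[int(len(demands) * (1 - c / q))]
    ```

    Real instances replace this exhaustive scan with Benders-style decomposition, and replace plain Monte Carlo with importance sampling to cut the number of scenarios needed, which is precisely the combination the abstract advocates.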

  19. Performance Health Monitoring of Large-Scale Systems

    SciTech Connect (OSTI)

    Rajamony, Ram

    2014-11-20

    This report details the progress made on the ASCR funded project Performance Health Monitoring for Large Scale Systems. A large-scale application may not achieve its full performance potential due to degraded performance of even a single subsystem. Detecting performance faults, isolating them, and taking remedial action is critical for the scale of systems on the horizon. PHM aims to develop techniques and tools that can be used to identify and mitigate such performance problems. We accomplish this through two main aspects. The PHM framework encompasses diagnostics, system monitoring, fault isolation, and performance evaluation capabilities that indicate when a performance fault has been detected, either due to an anomaly present in the system itself or due to contention for shared resources between concurrently executing jobs. Software components called the PHM Control system then build upon the capabilities provided by the PHM framework to mitigate degradation caused by performance problems.
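    The detection side of such monitoring can be sketched with a simple baseline-deviation test. This is a generic z-score detector, not the PHM framework's actual diagnostics; the metric stream, window size, and threshold are hypothetical:

    ```python
    # Flag a metric sample as a performance fault when it deviates strongly
    # from its recent baseline (moving-window z-score test).
    def detect_anomalies(samples, window=10, threshold=3.0):
        flagged = []
        for i in range(window, len(samples)):
            base = samples[i - window:i]
            mean = sum(base) / window
            var = sum((s - mean) ** 2 for s in base) / window
            std = var ** 0.5 or 1e-9       # avoid division by zero
            if abs(samples[i] - mean) / std > threshold:
                flagged.append(i)
        return flagged

    # Steady synthetic throughput with one degraded sample injected at
    # index 15 (e.g., contention from a co-scheduled job).
    stream = [100.0 + 0.5 * (i % 3) for i in range(30)]
    stream[15] = 60.0
    faults = detect_anomalies(stream)
    ```

    Production systems layer fault isolation and mitigation on top of such detectors, as the abstract's PHM Control components do.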

  20. Large scale, urban decontamination; developments, historical examples and lessons learned

    SciTech Connect (OSTI)

    Demmer, R.L.

    2007-07-01

    Recent terrorist threats and actions have led to a renewed interest in the technical field of large scale, urban environment decontamination. One of the driving forces for this interest is the prospect of the cleanup and removal of radioactive dispersal device (RDD, or 'dirty bomb') residues. In response, the United States Government has spent many millions of dollars investigating RDD contamination and novel decontamination methodologies. The efficiency of RDD cleanup response will be improved with these new developments and a better understanding of the 'old reliable' methodologies. While an RDD is primarily an economic and psychological weapon, the need to clean up and return valuable or culturally significant resources to the public is nonetheless valid. Several private companies, universities, and National Laboratories are currently developing novel RDD cleanup technologies. Because of their longstanding association with radioactive facilities, the U.S. Department of Energy National Laboratories are at the forefront of developing and testing new RDD decontamination methods. However, such cleanup technologies are likely to be fairly task specific, and many different contamination mechanisms, substrates, and environmental conditions will make actual application more complicated. Some major efforts have also been made to model potential contamination, to evaluate both old and new decontamination techniques, and to assess their readiness for use. There are a number of significant lessons that can be gained from a look at previous large scale cleanup projects. Too often we are quick to apply a costly 'package and dispose' method when sound technological cleaning approaches are available. Understanding historical perspectives, advanced planning, and constant technology improvement are essential to successful decontamination. (authors)

  1. High Fidelity Simulations of Large-Scale Wireless Networks

    SciTech Connect (OSTI)

    Onunkwo, Uzoma; Benz, Zachary

    2015-11-01

    The worldwide proliferation of wireless connected devices continues to accelerate. There are tens of billions of wireless links across the planet, with an additional explosion of new wireless usage anticipated as the Internet of Things develops. Wireless technologies not only provide convenience for mobile applications but are also extremely cost-effective to deploy. Thus, this trend towards wireless connectivity will only continue, and Sandia must develop the necessary simulation technology to proactively analyze the associated emerging vulnerabilities. Wireless networks are marked by mobility and proximity-based connectivity. The de facto standard for exploratory studies of wireless networks is discrete event simulation (DES). However, the simulation of large-scale wireless networks is extremely difficult due to prohibitively large turnaround times. A path forward is to expedite simulations with parallel discrete event simulation (PDES) techniques. The mobility and distance-based connectivity associated with wireless simulations, however, typically doom PDES approaches, which fail to scale (e.g., the OPNET and ns-3 simulators). We propose a PDES-based tool aimed at reducing the communication overhead between processors. The proposed solution will use light-weight processes to dynamically distribute computation workload while mitigating the communication overhead associated with synchronization. This work is vital to the analytics and validation capabilities of simulation and emulation at Sandia. We have years of experience in Sandia's simulation and emulation projects (e.g., MINIMEGA and FIREWHEEL). Sandia's current highly-regarded capabilities in large-scale emulation have focused on wired networks, where two assumptions prevent scalable wireless studies: (a) the connections between objects are mostly static, and (b) the nodes have fixed locations.
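
    The discrete-event core that PDES parallelizes can be sketched in a few lines. This is a minimal sequential sketch only (the event-tuple format and the `periodic` handler are invented for illustration, not Sandia's simulator):

```python
import heapq

def simulate(events, horizon):
    """Minimal sequential discrete event simulation (DES) loop: pop the
    earliest event, run its handler, and push any events it schedules.
    Events are (time, seq, handler) tuples; seq breaks timestamp ties so
    handler objects are never compared by the heap."""
    queue = list(events)
    heapq.heapify(queue)
    processed = []
    while queue:
        time, seq, handler = heapq.heappop(queue)
        if time > horizon:
            break
        processed.append(time)
        for new_event in handler(time):
            heapq.heappush(queue, new_event)
    return processed

# Hypothetical example: a node that re-broadcasts every 2 time units.
def periodic(period, stop):
    def handler(t):
        return [(t + period, 0, handler)] if t + period <= stop else []
    return handler

print(simulate([(0, 0, periodic(2, 6))], horizon=10))  # [0, 2, 4, 6]
```

    A PDES engine partitions this single queue across processes; keeping the partitioned clocks causally consistent is exactly the synchronization overhead that mobile, distance-dependent wireless links aggravate.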

  2. Ferroelectric opening switches for large-scale pulsed power drivers.

    SciTech Connect (OSTI)

    Brennecka, Geoffrey L.; Rudys, Joseph Matthew; Reed, Kim Warren; Pena, Gary Edward; Tuttle, Bruce Andrew; Glover, Steven Frank

    2009-11-01

    Fast electrical energy storage, or Voltage-Driven Technology (VDT), has dominated fast, high-voltage pulsed power systems for the past six decades. Fast magnetic energy storage, or Current-Driven Technology (CDT), is characterized by 10,000X higher energy density than VDT and has a great number of other substantial advantages, but it has been all but neglected for all of these decades. The explanation invariably given for this neglect is that the industry has never been able to make an effective opening switch, which is essential for the use of CDT. Most approaches to opening switches have involved plasma of one sort or another. On a large scale, gaseous plasmas have been used as a conductor to bridge the switch electrodes; the switch opens when the current wave front propagates through to the output end of the plasma and fully magnetizes it - this is called a Plasma Opening Switch (POS). Opening can be triggered in a POS by using a magnetic field to push the plasma out of the A-K gap - this is called a Magnetically Controlled Plasma Opening Switch (MCPOS). On a small scale, depletion of electron plasmas in semiconductor devices is used to effect opening switch behavior, but these devices operate at relatively low voltage and current compared to the hundreds of kilovolts and tens of kiloamperes of interest for pulsed power. This work is an investigation into an entirely new approach to opening switch technology that utilizes new materials in new ways. The new materials are ferroelectrics, and their use as an opening switch stands in stark contrast to their traditional optical and transducer applications. Emphasis is on the use of high-performance ferroelectrics, with the objective of developing an opening switch suitable for large scale pulsed power applications. Over the course of exploring this new ground, we have discovered new behaviors and properties of these materials that were heretofore unknown. Some of

  3. Large-Scale Spray Releases: Additional Aerosol Test Results

    SciTech Connect (OSTI)

    Daniel, Richard C.; Gauglitz, Phillip A.; Burns, Carolyn A.; Fountain, Matthew S.; Shimskey, Rick W.; Billing, Justin M.; Bontha, Jagannadha R.; Kurath, Dean E.; Jenks, Jeromy WJ; MacFarlan, Paul J.; Mahoney, Lenna A.

    2013-08-01

    One of the events postulated in the hazard analysis for the Waste Treatment and Immobilization Plant (WTP) and other U.S. Department of Energy (DOE) nuclear facilities is a breach in process piping that produces aerosols with droplet sizes in the respirable range. The current approach for predicting the size and concentration of aerosols produced in a spray leak event involves extrapolating from correlations reported in the literature. These correlations are based on results obtained from small engineered spray nozzles using pure liquids that behave as a Newtonian fluid. The narrow ranges of physical properties on which the correlations are based do not cover the wide range of slurries and viscous materials that will be processed in the WTP and in processing facilities across the DOE complex. To expand the data set upon which the WTP accident and safety analyses were based, an aerosol spray leak testing program was conducted by Pacific Northwest National Laboratory (PNNL). PNNL’s test program addressed two key technical areas to improve the WTP methodology (Larson and Allen 2010). The first technical area was to quantify the role of slurry particles in small breaches where slurry particles may plug the hole and prevent high-pressure sprays. The results from an effort to address this first technical area can be found in Mahoney et al. (2012a). The second technical area was to determine aerosol droplet size distribution and total droplet volume from prototypic breaches and fluids, including sprays from larger breaches and sprays of slurries for which literature data are mostly absent. To address the second technical area, the testing program collected aerosol generation data at two scales, commonly referred to as small-scale and large-scale testing. The small-scale testing and resultant data are described in Mahoney et al. (2012b), and the large-scale testing and resultant data are presented in Schonewill et al. (2012). In tests at both scales, simulants were used

  4. Large Scale Obscuration and Related Climate Effects Workshop: Proceedings

    SciTech Connect (OSTI)

    Zak, B.D.; Russell, N.A.; Church, H.W.; Einfeld, W.; Yoon, D.; Behl, Y.K.

    1994-05-01

    A Workshop on Large Scale Obscuration and Related Climate Effects was held 29--31 January, 1992, in Albuquerque, New Mexico. The objectives of the workshop were: to determine through the use of expert judgement the current state of understanding of regional and global obscuration and related climate effects associated with nuclear weapons detonations; to estimate how large the uncertainties are in the parameters associated with these phenomena (given specific scenarios); to evaluate the impact of these uncertainties on obscuration predictions; and to develop an approach for the prioritization of further work on newly-available data sets to reduce the uncertainties. The workshop consisted of formal presentations by the 35 participants, and subsequent topical working sessions on: the source term; aerosol optical properties; atmospheric processes; and electro-optical systems performance and climatic impacts. Summaries of the conclusions reached in the working sessions are presented in the body of the report. Copies of the transparencies shown as part of each formal presentation are contained in the appendices (microfiche).

  5. Large-scale BAO signatures of the smallest galaxies

    SciTech Connect (OSTI)

    Dalal, Neal; Pen, Ue-Li; Seljak, Uros E-mail: pen@cita.utoronto.ca

    2010-11-01

    Recent work has shown that at high redshift, the relative velocity between dark matter and baryonic gas is typically supersonic. This relative velocity suppresses the formation of the earliest baryonic structures like minihalos, and the suppression is modulated on large scales. This effect imprints a characteristic shape in the clustering power spectrum of the earliest structures, with significant power on ∼ 100 Mpc scales featuring highly pronounced baryon acoustic oscillations. The amplitude of these oscillations is orders of magnitude larger at z ∼ 20 than previously expected. This characteristic signature can allow us to distinguish the effects of minihalos on intergalactic gas at times preceding and during reionization. We illustrate this effect with the example of 21 cm emission and absorption from redshifts during and before reionization. This effect can potentially allow us to probe physics on kpc scales using observations on 100 Mpc scales. We present sensitivity forecasts for FAST and Arecibo. Depending on parameters, this enhanced structure may be detectable by Arecibo at z ∼ 15−20, and with appropriate instrumentation FAST could measure the BAO power spectrum with high precision. In principle, this effect could also pose a serious challenge for efforts to constrain dark energy using observations of the BAO feature at low redshift.

  6. Large scale electromechanical transistor with application in mass sensing

    SciTech Connect (OSTI)

    Jin, Leisheng; Li, Lijie

    2014-12-07

    The nanomechanical transistor (NMT) has evolved from the single electron transistor, a device that operates by shuttling electrons with a self-excited central conductor. The unfavoured aspects of the NMT are the complexity of the fabrication process and of its signal processing unit, which could potentially be overcome by designing much larger devices. This paper reports a new design of large scale electromechanical transistor (LSEMT), still taking advantage of the principle of shuttling electrons. However, because of the large size, nonlinear electrostatic forces induced by the transistor itself are not sufficient to drive the mechanical member into vibration; an external force has to be used. In this paper, an LSEMT device is modelled, and its new application in mass sensing is postulated using two coupled mechanical cantilevers, with one of them embedded in the transistor. The sensor is capable of detecting added mass by the eigenstate-shift method, reading the change of electrical current from the transistor; this has much higher sensitivity than the conventional eigenfrequency-shift approach used in classical cantilever-based mass sensors. Numerical simulations are conducted to investigate the performance of the mass sensor.
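
    The eigenstate-shift idea can be illustrated with a lumped two-degree-of-freedom model of the coupled cantilevers. All parameter values below are hypothetical; the sketch only shows why mode shapes respond far more strongly to a small added mass than eigenfrequencies do:

```python
import numpy as np

def coupled_modes(k, kc, m, dm):
    """Two identical spring-mass cantilevers (stiffness k, mass m) coupled
    by a weak spring kc, with added mass dm on the second one. Returns
    eigenfrequencies (ascending) and the corresponding mode shapes."""
    K = np.array([[k + kc, -kc], [-kc, k + kc]], float)
    M = np.diag([m, m + dm])
    w2, vecs = np.linalg.eig(np.linalg.solve(M, K))
    w2, vecs = w2.real, vecs.real
    order = np.argsort(w2)
    return np.sqrt(w2[order]), vecs[:, order]

# A 1% added mass barely moves the frequencies...
w0, v0 = coupled_modes(1.0, 0.01, 1.0, 0.0)
w1, v1 = coupled_modes(1.0, 0.01, 1.0, 0.01)
# ...but strongly localizes the upper mode shape.
ratio0 = abs(v0[0, 1] / v0[1, 1])   # amplitude ratio, no added mass: 1.0
ratio1 = abs(v1[0, 1] / v1[1, 1])   # amplitude ratio, 1% added mass
```

    With these numbers the upper eigenfrequency shifts by under 0.2%, while the mode-shape amplitude ratio moves from 1.0 to roughly 1.6, which is the sensitivity advantage the abstract attributes to reading eigenstates rather than eigenfrequencies.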

  7. Scalable parallel distance field construction for large-scale applications

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Yu, Hongfeng; Xie, Jinrong; Ma, Kwan -Liu; Kolla, Hemanth; Chen, Jacqueline H.

    2015-10-01

    Computing distance fields is fundamental to many scientific and engineering applications. Distance fields can be used to direct analysis and reduce data. In this paper, we present a highly scalable method for computing 3D distance fields on massively parallel distributed-memory machines. A new distributed spatial data structure, named the parallel distance tree, is introduced to manage the level sets of data and facilitate surface tracking over time, resulting in significantly reduced computation and communication costs for calculating the distance to the surface of interest from any spatial location. Our method supports several data types and distance metrics from real-world applications. We demonstrate its efficiency and scalability on state-of-the-art supercomputers using both large-scale volume datasets and surface models. We also demonstrate in-situ distance field computation on dynamic turbulent flame surfaces for a petascale combustion simulation. In conclusion, our work greatly extends the usability of distance fields for demanding applications.
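
    As a point of reference for what such a system accelerates, here is the brute-force distance field computation on a small grid (an illustrative sketch only; the paper's parallel distance tree exists precisely to avoid this all-pairs cost at scale):

```python
import numpy as np

def distance_field(shape, surface_points):
    """Exact Euclidean distance from every grid cell to the nearest
    surface point, by brute force: O(cells x surface points). Scalable
    methods replace this all-pairs computation with spatial structures."""
    cells = np.indices(shape).reshape(len(shape), -1).T   # (ncells, ndim)
    pts = np.asarray(surface_points, float)               # (npts, ndim)
    d = np.linalg.norm(cells[:, None, :] - pts[None, :, :], axis=2)
    return d.min(axis=1).reshape(shape)

field = distance_field((3, 3), [(0, 0)])
# field[0, 0] == 0.0, field[0, 2] == 2.0, field[2, 2] == sqrt(8)
```

    The same function works unchanged for 3D grids by passing a three-element shape and 3D surface points.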

  8. Parallel Index and Query for Large Scale Data Analysis

    SciTech Connect (OSTI)

    Chou, Jerry; Wu, Kesheng; Ruebel, Oliver; Howison, Mark; Qiang, Ji; Prabhat,; Austin, Brian; Bethel, E. Wes; Ryne, Rob D.; Shoshani, Arie

    2011-07-18

    Modern scientific datasets present numerous data management and analysis challenges. State-of-the-art index and query technologies are critical for facilitating interactive exploration of large datasets, but numerous challenges remain in designing a system for processing general scientific datasets. The system needs to be able to run on distributed multi-core platforms, efficiently utilize the underlying I/O infrastructure, and scale to massive datasets. We present FastQuery, a novel software framework that addresses these challenges. FastQuery utilizes a state-of-the-art index and query technology (FastBit) and is designed to process massive datasets on modern supercomputing platforms. We apply FastQuery to the processing of a massive 50TB dataset generated by a large scale accelerator modeling code. We demonstrate the scalability of the tool to 11,520 cores. Motivated by the scientific need to search for interesting particles in this dataset, we use our framework to reduce search time from hours to tens of seconds.
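
    FastBit's core idea, bitmap indexing, can be sketched in a few lines (the two helpers below are invented for illustration and are not FastQuery's API):

```python
import numpy as np

def build_bitmap_index(values, bin_edges):
    """Equality-encoded bitmap index: one boolean bitmap per value bin.
    Range queries then reduce to cheap bitwise ORs over bin bitmaps
    instead of scanning the raw values."""
    bins = np.digitize(values, bin_edges)
    return {int(b): bins == b for b in np.unique(bins)}

def query_bins(index, wanted_bins):
    """Return row ids whose value falls in any of the wanted bins."""
    hits = np.zeros(len(next(iter(index.values()))), bool)
    for b in wanted_bins:
        if b in index:
            hits |= index[b]
    return np.flatnonzero(hits)

index = build_bitmap_index([0.1, 0.5, 0.9, 0.4], bin_edges=[0.25, 0.75])
print(query_bins(index, [1]))  # rows with 0.25 <= value < 0.75 -> [1 3]
```

    Real systems compress the bitmaps and refine candidate rows against the raw data, but the query path is this OR-and-scan pattern.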

  9. LARGE SCALE METHOD FOR THE PRODUCTION AND PURIFICATION OF CURIUM

    DOE Patents [OSTI]

    Higgins, G.H.; Crane, W.W.T.

    1959-05-19

    A large-scale process for production and purification of Cm/sup 242/ is described. Aluminum slugs containing Am are irradiated and declad in a NaOH--NaNO/sub 3/ solution at 85 to 100 deg C. The resulting slurry is filtered and washed with NaOH, NH/sub 4/OH, and H/sub 2/O. Recovery of Cm from the filtrate and washings is effected by an Fe(OH)/sub 3/ precipitation. The precipitates are then combined and dissolved in HCl, and refractory oxides are centrifuged out. These oxides are then fused with Na/sub 2/CO/sub 3/ and dissolved in HCl. The solution is evaporated and LiCl solution added. The Cm, rare earths, and anionic impurities are adsorbed on a strong-base anion exchange resin. Impurities are eluted with LiCl--HCl solution; the rare earths and Cm are eluted by HCl. Other ion exchange steps further purify the Cm. The Cm is then precipitated as the fluoride and used in this form or further purified and processed. (T.R.H.)

  10. ANALYSIS OF TURBULENT MIXING JETS IN LARGE SCALE TANK

    SciTech Connect (OSTI)

    Lee, S.; Dimenna, R.; Leishear, R.; Stefanko, D.

    2007-03-28

    Flow evolution models were developed to evaluate the performance of the new advanced design mixer pump for sludge mixing and removal operations with high-velocity liquid jets in one of the large-scale Savannah River Site waste tanks, Tank 18. This paper describes the computational model, the flow measurements used to provide validation data in the region far from the jet nozzle, the extension of the computational results to real tank conditions through the use of existing sludge suspension data, and finally, the sludge removal results from actual Tank 18 operations. A computational fluid dynamics approach was used to simulate the sludge removal operations. The models employed a three-dimensional representation of the tank with a two-equation turbulence model. Both the computational approach and the models were validated with onsite test data reported here and literature data. The model was then extended to actual conditions in Tank 18 through a velocity criterion to predict the ability of the new pump design to suspend settled sludge. A qualitative comparison with sludge removal operations in Tank 18 showed a reasonably good comparison with final results subject to significant uncertainties in actual sludge properties.

  11. SF4601-C;Nonemployee's Expense Voucher

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    is required to be submitted with the Non-Employee Expense Voucher W-9 Form A W-8BEN form ... Check Electronic (Please complete form SF 9424-EFT) Wire Former Employee Retirement...

  12. Parallel I/O Software Infrastructure for Large-Scale Systems

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Parallel I/O Software Infrastructure for Large-Scale Systems Choudhary.png An illustration of how MPI-IO file domain...

  13. The IR-resummed Effective Field Theory of Large Scale Structures...

    Office of Scientific and Technical Information (OSTI)

    IR-resummed Effective Field Theory of Large Scale Structures Citation Details In-Document Search Title: The IR-resummed Effective Field Theory of Large Scale Structures We present a ...

  14. I/O Performance of a Large-Scale, Interpreter-Driven Laser-Plasma...

    Office of Scientific and Technical Information (OSTI)

    Conference: IO Performance of a Large-Scale, Interpreter-Driven Laser-Plasma Interaction Code Citation Details In-Document Search Title: IO Performance of a Large-Scale, ...

  15. Comparison of the effects in the rock mass of large-scale chemical...

    Office of Scientific and Technical Information (OSTI)

    Comparison of the effects in the rock mass of large-scale chemical and nuclear explosions. ... Title: Comparison of the effects in the rock mass of large-scale chemical and nuclear ...

  16. Energy Department Awards $66.7 Million for Large-Scale Carbon...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Energy Department Awards $66.7 Million for Large-Scale Carbon Sequestration Project December 18, 2007 - 4:58pm ...

  17. Large-Scale Deep Learning on the YFCC100M Dataset (Conference...

    Office of Scientific and Technical Information (OSTI)

    Conference: Large-Scale Deep Learning on the YFCC100M Dataset Citation Details In-Document Search Title: Large-Scale Deep Learning on the YFCC100M Dataset Authors: Ni, K ; Boakye, ...

  18. EERE Success Story-FEMP Helps Federal Facilities Develop Large-Scale

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    FEMP Helps Federal Facilities Develop Large-Scale Renewable Energy Projects | Department of Energy. August 21, 2013 - 12:00am. EERE's Federal Energy Management Program issued a new resource that provides best practices and helpful guidance for federal agencies developing large-scale renewable energy projects. The resource, Large-Scale Renewable Energy Guide:

  19. Large-Scale Spray Releases: Initial Aerosol Test Results

    SciTech Connect (OSTI)

    Schonewill, Philip P.; Gauglitz, Phillip A.; Bontha, Jagannadha R.; Daniel, Richard C.; Kurath, Dean E.; Adkins, Harold E.; Billing, Justin M.; Burns, Carolyn A.; Davis, James M.; Enderlin, Carl W.; Fischer, Christopher M.; Jenks, Jeromy WJ; Lukins, Craig D.; MacFarlan, Paul J.; Shutthanandan, Janani I.; Smith, Dennese M.

    2012-12-01

    One of the events postulated in the hazard analysis at the Waste Treatment and Immobilization Plant (WTP) and other U.S. Department of Energy (DOE) nuclear facilities is a breach in process piping that produces aerosols with droplet sizes in the respirable range. The current approach for predicting the size and concentration of aerosols produced in a spray leak involves extrapolating from correlations reported in the literature. These correlations are based on results obtained from small engineered spray nozzles using pure liquids with Newtonian fluid behavior. The narrow ranges of physical properties on which the correlations are based do not cover the wide range of slurries and viscous materials that will be processed in the WTP and across processing facilities in the DOE complex. Two key technical areas were identified where testing results were needed to improve the technical basis by reducing the uncertainty due to extrapolating existing literature results. The first technical need was to quantify the role of slurry particles in small breaches where the slurry particles may plug and result in substantially reduced, or even negligible, respirable fraction formed by high-pressure sprays. The second technical need was to determine the aerosol droplet size distribution and volume from prototypic breaches and fluids, specifically including sprays from larger breaches with slurries where data from the literature are scarce. To address these technical areas, small- and large-scale test stands were constructed and operated with simulants to determine aerosol release fractions and generation rates from a range of breach sizes and geometries. The properties of the simulants represented the range of properties expected in the WTP process streams and included water, sodium salt solutions, slurries containing boehmite or gibbsite, and a hazardous chemical simulant. The effect of anti-foam agents was assessed with most of the simulants. Orifices included round holes and
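
    Droplet-size data of this kind are often summarized by lognormal distributions. As an illustration only (the geometric mean diameter and geometric standard deviation below are hypothetical, not WTP results), the count fraction below a respirable cutoff follows directly from the lognormal CDF:

```python
import math

def respirable_fraction(gmd_um, gsd, cutoff_um=10.0):
    """Fraction of droplets below cutoff_um for a lognormal size
    distribution with geometric mean diameter gmd_um (microns) and
    geometric standard deviation gsd (dimensionless). Uses the lognormal
    CDF: Phi((ln cutoff - ln gmd) / ln gsd)."""
    z = (math.log(cutoff_um) - math.log(gmd_um)) / math.log(gsd)
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# A distribution centered on the cutoff has half its droplets respirable.
print(respirable_fraction(gmd_um=10.0, gsd=2.0))  # 0.5
```

    Shifting the geometric mean above the cutoff drops the respirable fraction quickly, which is why the droplet size distribution, not just the total release volume, drives the safety analysis.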

  20. Ground movements associated with large-scale underground coal gasification

    SciTech Connect (OSTI)

    Siriwardane, H.J.; Layne, A.W.

    1989-09-01

    The primary objective of this work was to predict the surface and underground movement associated with large-scale multiwell burn sites in the Illinois Basin and Appalachian Basin by using the subsidence/thermomechanical model UCG/HEAT. This code is based on the finite element method. In particular, it can be used to compute (1) the temperature field around an underground cavity when the temperature variation of the cavity boundary is known, and (2) displacements and stresses associated with body forces (gravitational forces) and a temperature field. It is hypothesized that large Underground Coal Gasification (UCG) cavities generated during the line-drive process will be similar to those generated by longwall mining. If that is the case, then as a UCG process continues, the roof of the cavity becomes unstable and collapses. In the UCG/HEAT computer code, roof collapse is modeled using a simplified failure criterion (Lee 1985). It is anticipated that roof collapse would occur behind the burn front; therefore, forward combustion can be continued. As the gasification front propagates, the length of the cavity would become much larger than its width. Because of this large length-to-width ratio in the cavity, ground response behavior could be analyzed by considering a plane-strain idealization. In a plane-strain idealization of the UCG cavity, a cross-section perpendicular to the axis of propagation could be considered, and a thermomechanical analysis performed using a modified version of the two-dimensional finite element code UCG/HEAT. 15 refs., 9 figs., 3 tabs.
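
    The conduction part of such an analysis can be illustrated with a one-dimensional explicit finite-difference sketch (UCG/HEAT itself is a two-dimensional finite element code; the grid size, step count and temperatures below are arbitrary illustration values):

```python
import numpy as np

def heat_profile(n, steps, r, t_cavity, t_far):
    """Explicit finite-difference heat conduction on a 1D rod: fixed
    cavity-wall temperature at one end, ambient far field at the other.
    r = alpha*dt/dx**2 must be <= 0.5 for stability."""
    T = np.full(n, t_far, float)
    T[0] = t_cavity
    for _ in range(steps):
        T[1:-1] += r * (T[2:] - 2.0 * T[1:-1] + T[:-2])
        T[0], T[-1] = t_cavity, t_far   # Dirichlet boundary conditions
    return T

# Temperature decays monotonically from the cavity wall to the far field.
T = heat_profile(n=20, steps=500, r=0.25, t_cavity=1000.0, t_far=20.0)
```

    The thermomechanical step then feeds a temperature field like this, together with gravitational body forces, into the stress and displacement solve.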

  1. A membrane-free lithium/polysulfide semi-liquid battery for large-scale energy storage

    SciTech Connect (OSTI)

    Yang, Yuan; Zheng, Guangyuan; Cui, Yi

    2013-01-01

    Large-scale energy storage represents a key challenge for renewable energy and new systems with low cost, high energy density and long cycle life are desired. In this article, we develop a new lithium/polysulfide (Li/PS) semi-liquid battery for large-scale energy storage, with lithium polysulfide (Li{sub 2}S{sub 8}) in ether solvent as a catholyte and metallic lithium as an anode. Unlike previous work on Li/S batteries with discharge products such as solid state Li{sub 2}S{sub 2} and Li{sub 2}S, the catholyte is designed to cycle only in the range between sulfur and Li{sub 2}S{sub 4}. Consequently all detrimental effects due to the formation and volume expansion of solid Li{sub 2}S{sub 2}/Li{sub 2}S are avoided. This novel strategy results in excellent cycle life and compatibility with flow battery design. The proof-of-concept Li/PS battery could reach a high energy density of 170 W h kg{sup -1} and 190 W h L{sup -1} for large scale storage at the solubility limit, while keeping the advantages of hybrid flow batteries. We demonstrated that, with a 5 M Li{sub 2}S{sub 8} catholyte, energy densities of 97 W h kg{sup -1} and 108 W h L{sup -1} can be achieved. As the lithium surface is well passivated by LiNO{sub 3} additive in ether solvent, internal shuttle effect is largely eliminated and thus excellent performance over 2000 cycles is achieved with a constant capacity of 200 mA h g{sup -1}. This new system can operate without the expensive ion-selective membrane, and it is attractive for large-scale energy storage.
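
    The restricted cycling window can be checked with a back-of-the-envelope capacity calculation (a sketch using standard constants; the 0.5 e- per sulfur atom figure follows from cycling only between elemental S and Li{sub 2}S{sub 4}):

```python
F = 96485.0   # Faraday constant, C/mol
M_S = 32.06   # molar mass of sulfur, g/mol

def specific_capacity_mah_g(electrons_per_atom, molar_mass):
    """Theoretical specific capacity Q = n*F / (3.6 * M) in mAh/g."""
    return electrons_per_atom * F / (3.6 * molar_mass)

# Full reduction S -> Li2S transfers 2 e- per S atom;
# cycling only between S and Li2S4 transfers 0.5 e- per S atom.
q_full = specific_capacity_mah_g(2.0, M_S)      # ~1672 mAh/g
q_window = specific_capacity_mah_g(0.5, M_S)    # ~418 mAh/g
```

    The constant 200 mA h g{sup -1} cycling capacity reported above thus corresponds to using roughly half of the theoretical S/Li{sub 2}S{sub 4} window, the price paid for avoiding solid Li{sub 2}S{sub 2}/Li{sub 2}S formation.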

  2. Cosmological implications of the CMB large-scale structure

    SciTech Connect (OSTI)

    Melia, Fulvio

    2015-01-01

    The Wilkinson Microwave Anisotropy Probe (WMAP) and Planck may have uncovered several anomalies in the full cosmic microwave background (CMB) sky that could indicate possible new physics driving the growth of density fluctuations in the early universe. These include an unusually low power at the largest scales and an apparent alignment of the quadrupole and octopole moments. In a ΛCDM model where the CMB is described by a Gaussian Random Field, the quadrupole and octopole moments should be statistically independent. The emergence of these low probability features may simply be due to posterior selections from many such possible effects, whose occurrence would therefore not be as unlikely as one might naively infer. If this is not the case, however, and if these features are not due to effects such as foreground contamination, their combined statistical significance would be equal to the product of their individual significances. In the absence of such extraneous factors, and ignoring the biasing due to posterior selection, the missing large-angle correlations would have a probability as low as ∼0.1% and the low-l multipole alignment would be unlikely at the ∼4.9% level; under the least favorable conditions, their simultaneous observation in the context of the standard model could then be likely at only the ∼0.005% level. In this paper, we explore the possibility that these features are indeed anomalous, and show that the corresponding probability of CMB multipole alignment in the R{sub h}=ct universe would then be ∼7-10%, depending on the number of large-scale Sachs-Wolfe induced fluctuations. Since the low power at the largest spatial scales is reproduced in this cosmology without the need to invoke cosmic variance, the overall likelihood of observing both of these features in the CMB is ∼7%, much more likely than in ΛCDM, if the anomalies are real. The key physical ingredient responsible for this difference is the existence in the former of a maximum fluctuation

  3. Microsoft PowerPoint - 2-A-3-OK-Real-Time Data Infrastructure for Large Scale Wind Fleets.pptx

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Real-Time Data Infrastructure for Large Scale Wind Fleets - Return on Investment vs. Fundamental Business Requirements. Value now. Value over time. © Copyright 2011, OSIsoft, LLC. All Rights Reserved. Reliability - 4 Ws and an H: What is reliability? Uptime, OEE, profitable wind plants? (OEE = Availability % * Production % * Quality %) Why should money be spent to

  4. Large Scale Ice Water Path and 3-D Ice Water Content

    DOE Data Explorer [Office of Scientific and Technical Information (OSTI)]

    Liu, Guosheng

    2008-01-15

    Cloud ice water concentration is one of the most important, yet poorly observed, cloud properties. Developing physical parameterizations used in general circulation models through single-column modeling is one of the key foci of the ARM program. In addition to the vertical profiles of temperature, water vapor and condensed water at the model grids, large-scale horizontal advective tendencies of these variables are also required as forcing terms in the single-column models. Observed horizontal advection of condensed water has not been available because the radar/lidar/radiometer observations at the ARM site are single-point measurements and therefore do not provide the horizontal distribution of condensed water. The intention of this product is to provide the large-scale distribution of cloud ice water by merging available surface and satellite measurements. The satellite cloud ice water algorithm uses ARM ground-based measurements as a baseline and produces datasets for 3-D cloud ice water distributions in a 10 deg x 10 deg area near the ARM site. The approach of the study is to expand a (surface) point measurement to a (satellite) areal measurement. That is, this study takes advantage of the high-quality cloud measurements at the ARM site. We use the cloud characteristics derived from the point measurement to guide/constrain the satellite retrieval, then use the satellite algorithm to derive the cloud ice water distributions within an area, i.e., 10 deg x 10 deg centered at the ARM site.
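
    One naive way to express the point-to-area constraint is a single scale factor tying the satellite field to the ground value at the site. This is an invented illustration only; the actual algorithm constrains the retrieval itself, not a single multiplicative factor:

```python
def constrain_by_site(satellite_field, sat_at_site, ground_at_site):
    """Rescale a satellite-retrieved field so it reproduces the
    ground-based (ARM site) value at the site location. Hypothetical
    helper, sketching the point-constrains-area idea only."""
    scale = ground_at_site / sat_at_site
    return {loc: value * scale for loc, value in satellite_field.items()}

# Hypothetical ice water path retrievals (g/m^2) on a coarse grid:
field = constrain_by_site({"cell_a": 2.0, "cell_b": 4.0},
                          sat_at_site=2.0, ground_at_site=3.0)
# field == {"cell_a": 3.0, "cell_b": 6.0}
```

    The real product applies the ARM-derived cloud characteristics inside the satellite retrieval rather than after it, but the direction of information flow, point measurement to areal field, is the same.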

  6. PROPERTIES IMPORTANT TO MIXING FOR WTP LARGE SCALE INTEGRATED TESTING

    SciTech Connect (OSTI)

    Koopman, D.; Martino, C.; Poirier, M.

    2012-04-26

    Large Scale Integrated Testing (LSIT) is being planned by Bechtel National, Inc. to address uncertainties in the full scale mixing performance of the Hanford Waste Treatment and Immobilization Plant (WTP). Testing will use simulated waste rather than actual Hanford waste. Therefore, the use of suitable simulants is critical to achieving the goals of the test program. External review boards have raised questions regarding the overall representativeness of simulants used in previous mixing tests. Accordingly, WTP requested the Savannah River National Laboratory (SRNL) to assist with development of simulants for use in LSIT. Among the first tasks assigned to SRNL was to develop a list of waste properties that matter to pulse-jet mixer (PJM) mixing of WTP tanks. This report satisfies Commitment 5.2.3.1 of the Department of Energy Implementation Plan for Defense Nuclear Facilities Safety Board Recommendation 2010-2: physical properties important to mixing and scaling. In support of waste simulant development, the following two objectives are the focus of this report: (1) Assess physical and chemical properties important to the testing and development of mixing scaling relationships; (2) Identify the governing properties and associated ranges for LSIT to achieve the Newtonian and non-Newtonian test objectives. This includes the properties to support testing of sampling and heel management systems. The test objectives for LSIT relate to transfer and pump out of solid particles, prototypic integrated operations, sparger operation, PJM controllability, vessel level/density measurement accuracy, sampling, heel management, PJM restart, design and safety margin, Computational Fluid Dynamics (CFD) Verification and Validation (V and V) and comparison, performance testing and scaling, and high temperature operation. The slurry properties that are most important to Performance Testing and Scaling depend on the test objective and rheological classification of the slurry (i

  7. Locations of Smart Grid Demonstration and Large-Scale Energy Storage

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Projects | Department of Energy Locations of Smart Grid Demonstration and Large-Scale Energy Storage Projects Locations of Smart Grid Demonstration and Large-Scale Energy Storage Projects Map of the United States showing the location of all projects created with funding from the Smart Grid Demonstration and Energy Storage Project, funded through the American Recovery and Reinvestment Act. Locations of Smart Grid Demonstration and Large-Scale Energy Storage Projects (90.94 KB) More Documents

  8. Large-Scale First-Principles Molecular Dynamics Simulations with Electrostatic Embedding: Application to Acetylcholinesterase Catalysis

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Fattebert, Jean-Luc; Lau, Edmond Y.; Bennion, Brian J.; Huang, Patrick; Lightstone, Felice C.

    2015-10-22

    Enzymes are complicated solvated systems that typically require many atoms to simulate their function with any degree of accuracy. We have recently developed numerical techniques for large-scale First-Principles molecular dynamics simulations and applied them to study the enzymatic reaction catalyzed by acetylcholinesterase. We carried out density functional theory calculations for a quantum mechanical (QM) sub-system consisting of 612 atoms with an O(N) complexity finite-difference approach. The QM sub-system is embedded inside an external potential field representing the electrostatic effect due to the environment. We obtained finite-temperature sampling by First-Principles molecular dynamics for the acylation reaction of acetylcholine catalyzed by acetylcholinesterase. Our calculations show two energy barriers along the reaction coordinate for the enzyme-catalyzed acylation of acetylcholine. In conclusion, the second barrier (8.5 kcal/mole) is rate-limiting for the acylation reaction and is in good agreement with experiment.

  9. Overview of large scale experiments performed within the LBB project in the Czech Republic

    SciTech Connect (OSTI)

    Kadecka, P.; Lauerova, D.

    1997-04-01

    During several recent years NRI Rez has been performing the LBB analyses of safety significant primary circuit pipings of NPPs in Czech and Slovak Republics. The analyses covered the NPPs with reactors WWER 440 Type 230 and 213 and WWER 1000 Type 320. Within the relevant LBB projects undertaken with the aim to prove the fulfilling of the requirements of LBB, a series of large scale experiments were performed. The goal of these experiments was to verify the properties of the components selected, and to prove the quality and/or conservatism of assessments used in the LBB-analyses. In this poster, a brief overview of experiments performed in Czech Republic under guidance of NRI Rez is presented.

  10. Large-Scale First-Principles Molecular Dynamics Simulations with Electrostatic Embedding: Application to Acetylcholinesterase Catalysis

    SciTech Connect (OSTI)

    Fattebert, Jean-Luc; Lau, Edmond Y.; Bennion, Brian J.; Huang, Patrick; Lightstone, Felice C.

    2015-10-22

    Enzymes are complicated solvated systems that typically require many atoms to simulate their function with any degree of accuracy. We have recently developed numerical techniques for large-scale First-Principles molecular dynamics simulations and applied them to study the enzymatic reaction catalyzed by acetylcholinesterase. We carried out density functional theory calculations for a quantum mechanical (QM) sub-system consisting of 612 atoms with an O(N) complexity finite-difference approach. The QM sub-system is embedded inside an external potential field representing the electrostatic effect due to the environment. We obtained finite-temperature sampling by First-Principles molecular dynamics for the acylation reaction of acetylcholine catalyzed by acetylcholinesterase. Our calculations show two energy barriers along the reaction coordinate for the enzyme-catalyzed acylation of acetylcholine. In conclusion, the second barrier (8.5 kcal/mole) is rate-limiting for the acylation reaction and is in good agreement with experiment.

  11. GenASiS Basics: Object-oriented utilitarian functionality for large-scale physics simulations

    SciTech Connect (OSTI)

    Cardall, Christian Y.; Budiardja, Reuben D.

    2015-06-11

    Aside from numerical algorithms and problem setup, large-scale physics simulations on distributed-memory supercomputers require more basic utilitarian functionality, such as physical units and constants; display to the screen or standard output device; message passing; I/O to disk; and runtime parameter management and usage statistics. Here we describe and make available Fortran 2003 classes furnishing extensible object-oriented implementations of this sort of rudimentary functionality, along with individual `unit test' programs and larger example problems demonstrating their use. Lastly, these classes compose the Basics division of our developing astrophysics simulation code GenASiS (General Astrophysical Simulation System), but their fundamental nature makes them useful for physics simulations in many fields.

  12. GenASiS Basics: Object-oriented utilitarian functionality for large-scale physics simulations

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Cardall, Christian Y.; Budiardja, Reuben D.

    2015-06-11

    Aside from numerical algorithms and problem setup, large-scale physics simulations on distributed-memory supercomputers require more basic utilitarian functionality, such as physical units and constants; display to the screen or standard output device; message passing; I/O to disk; and runtime parameter management and usage statistics. Here we describe and make available Fortran 2003 classes furnishing extensible object-oriented implementations of this sort of rudimentary functionality, along with individual `unit test' programs and larger example problems demonstrating their use. Lastly, these classes compose the Basics division of our developing astrophysics simulation code GenASiS (General Astrophysical Simulation System), but their fundamental nature makes them useful for physics simulations in many fields.

  13. Large-Scale Test of Dynamic Correlation Processors: Implications for Correlation-Based Seismic Pipelines

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Dodge, D. A.; Harris, D. B.

    2016-03-15

    Correlation detectors are of considerable interest to the seismic monitoring communities because they offer reduced detection thresholds and combine detection, location and identification functions into a single operation. They appear to be ideal for applications requiring screening of frequent repeating events. However, questions remain about how broadly empirical correlation methods are applicable. We describe the effectiveness of banks of correlation detectors in a system that combines traditional power detectors with correlation detectors in terms of efficiency, which we define to be the fraction of events detected by the correlators. This paper elaborates and extends the concept of a dynamic correlation detection framework, a system which autonomously creates correlation detectors from event waveforms detected by power detectors, and reports observed performance on a network of arrays in terms of efficiency. We performed a large scale test of dynamic correlation processors on an 11 terabyte global dataset using 25 arrays in the single frequency band 1-3 Hz. The system found over 3.2 million unique signals and produced 459,747 screened detections. A very satisfying result is that, on average, efficiency grows with time and, after nearly 16 years of operation, exceeds 47% for events observed over all distance ranges and approaches 70% for near-regional and 90% for local events. This observation suggests that future pipeline architectures should make extensive use of correlation detectors, principally for decluttering observations of local and near-regional events. Our results also suggest that future operations based on correlation detection will require commodity large-scale computing infrastructure, since the number of correlators in an autonomous system can grow into the hundreds of thousands.
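
    The detector described above screens a continuous trace by correlating it against a template waveform from a previously detected event. The sketch below illustrates the core operation with a zero-lag normalized correlation in pure Python; the signals, offsets, and the 0.8 detection threshold are illustrative assumptions, not values from the study.

```python
import math

def normalized_correlation(template, segment):
    """Zero-lag normalized correlation between a template and an
    equal-length data segment; returns a value in [-1, 1]."""
    n = len(template)
    assert len(segment) == n
    mt = sum(template) / n
    ms = sum(segment) / n
    num = sum((t - mt) * (s - ms) for t, s in zip(template, segment))
    den = math.sqrt(sum((t - mt) ** 2 for t in template)
                    * sum((s - ms) ** 2 for s in segment))
    return num / den if den else 0.0

def scan(template, trace, threshold=0.8):
    """Slide the template along a longer trace and report sample
    offsets whose correlation exceeds the detection threshold."""
    n = len(template)
    return [i for i in range(len(trace) - n + 1)
            if normalized_correlation(template, trace[i:i + n]) >= threshold]

# A repeating event buried at two known offsets in an otherwise quiet trace.
event = [0.0, 1.0, -1.0, 0.5, -0.5, 0.2]
trace = [0.0] * 30
for start in (4, 18):
    for k, v in enumerate(event):
        trace[start + k] += v

print(scan(event, trace))  # detections at offsets 4 and 18
```

    An operational system would of course work on filtered, multichannel array data and manage banks of such templates; the sketch only shows why repeats of a known event are cheap to find once one copy has been detected by a power detector.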

  14. LARGE-SCALE HYDROGEN PRODUCTION FROM NUCLEAR ENERGY USING HIGH TEMPERATURE ELECTROLYSIS

    SciTech Connect (OSTI)

    James E. O'Brien

    2010-08-01

    Hydrogen can be produced from water splitting with relatively high efficiency using high-temperature electrolysis. This technology makes use of solid-oxide cells, running in the electrolysis mode to produce hydrogen from steam, while consuming electricity and high-temperature process heat. When coupled to an advanced high temperature nuclear reactor, the overall thermal-to-hydrogen efficiency for high-temperature electrolysis can be as high as 50%, which is about double the overall efficiency of conventional low-temperature electrolysis. Current large-scale hydrogen production is based almost exclusively on steam reforming of methane, a method that consumes a precious fossil fuel while emitting carbon dioxide to the atmosphere. Demand for hydrogen is increasing rapidly for refining of increasingly low-grade petroleum resources, such as the Athabasca oil sands and for ammonia-based fertilizer production. Large quantities of hydrogen are also required for carbon-efficient conversion of biomass to liquid fuels. With supplemental nuclear hydrogen, almost all of the carbon in the biomass can be converted to liquid fuels in a nearly carbon-neutral fashion. Ultimately, hydrogen may be employed as a direct transportation fuel in a hydrogen economy. The large quantity of hydrogen that would be required for this concept should be produced without consuming fossil fuels or emitting greenhouse gases. An overview of the high-temperature electrolysis technology will be presented, including basic theory, modeling, and experimental activities. Modeling activities include both computational fluid dynamics and large-scale systems analysis. We have also demonstrated high-temperature electrolysis in our laboratory at the 15 kW scale, achieving a hydrogen production rate in excess of 5500 L/hr.
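
    The reported laboratory figures can be sanity-checked with Faraday's law. The sketch below is a back-of-the-envelope calculation only; the ~1.3 V effective cell voltage (near the thermoneutral voltage for steam electrolysis) and STP reference conditions are assumptions, not values from the report, which explains why the result is of the same order as, but not equal to, the quoted 5500 L/hr.

```python
# Back-of-the-envelope check: hydrogen output of a 15 kW electrolysis
# stack via Faraday's law.
F = 96485.0          # Faraday constant, C/mol
P_watts = 15_000.0   # stack electrical power, from the abstract
V_cell = 1.3         # assumed effective voltage per cell, V

current = P_watts / V_cell                    # total current, A
h2_mol_per_s = current / (2 * F)              # 2 electrons per H2 molecule
litres_per_hr = h2_mol_per_s * 22.414 * 3600  # ideal-gas volume at STP

print(round(litres_per_hr))  # roughly 4.8e3 L/hr at STP
```

    Running the cells below the thermoneutral voltage, or quoting the volume at a warmer reference temperature, moves this estimate toward the reported 5500 L/hr.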

  15. Large-scale Offshore Wind Power in the United States. Assessment of Opportunities and Barriers

    SciTech Connect (OSTI)

    Musial, Walter; Ram, Bonnie

    2010-09-01

    This report describes the benefits of and barriers to large-scale deployment of offshore wind energy systems in U.S. waters.

  16. Asynchronous Two-Level Checkpointing Scheme for Large-Scale Adjoints...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Adjoints are an important computational tool for large-scale sensitivity evaluation, uncertainty quantification, and derivative-based...

  17. DOE's Office of Science Seeks Proposals for Expanded Large-Scale...

    Office of Environmental Management (EM)

    Seeks Proposals for Expanded Large-Scale Scientific Computing DOE's Office of Science ... Successful proposers will be given the use of substantial computer time and data storage ...

  18. Large-scale delamination of multi-layers transition metal carbides...

    Office of Scientific and Technical Information (OSTI)

    Citation Details In-Document Search Title: Large-scale ... Herein we report on a general approach to delaminate ... Type: Accepted Manuscript Journal Name: Dalton Transactions ...

  19. A Large-Scale, High-Resolution Hydrological Model Parameter Data...

    Office of Scientific and Technical Information (OSTI)

    Large-Scale, High-Resolution Hydrological Model Parameter Data Set for Climate Change Impact Assessment for the Conterminous US Citation Details In-Document Search Title: A ...

  20. HyLights -- Tools to Prepare the Large-Scale European Demonstration...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Projects on Hydrogen for Transport HyLights -- Tools to Prepare the Large-Scale European Demonstration Projects on Hydrogen for Transport Presented at Refueling ...

  1. Development of fine-resolution analyses and expanded large-scale...

    Office of Scientific and Technical Information (OSTI)

    II: Scale-awareness and application to single-column model experiments Title: Development of fine-resolution analyses and expanded large-scale forcing properties. Part II: ...

  2. Development of fine-resolution analyses and expanded large-scale...

    Office of Scientific and Technical Information (OSTI)

    I: Methodology and evaluation Citation Details In-Document Search Title: Development of fine-resolution analyses and expanded large-scale forcing properties. Part I: Methodology ...

  3. Biomass Energy for Transport and Electricity: Large scale utilization under low CO2 concentration scenarios

    SciTech Connect (OSTI)

    Luckow, Patrick; Wise, Marshall A.; Dooley, James J.; Kim, Son H.

    2010-01-25

    This paper examines the potential role of large scale, dedicated commercial biomass energy systems under global climate policies designed to stabilize atmospheric concentrations of CO2 at 400ppm and 450ppm. We use an integrated assessment model of energy and agriculture systems to show that, given a climate policy in which terrestrial carbon is appropriately valued equally with carbon emitted from the energy system, biomass energy has the potential to be a major component of achieving these low concentration targets. The costs of processing and transporting biomass energy at much larger scales than current experience are also incorporated into the modeling. From the scenario results, 120-160 EJ/year of biomass energy is produced by midcentury and 200-250 EJ/year by the end of this century. In the first half of the century, much of this biomass is from agricultural and forest residues, but after 2050 dedicated cellulosic biomass crops become the dominant source. A key finding of this paper is the role that carbon dioxide capture and storage (CCS) technologies coupled with commercial biomass energy can play in meeting stringent emissions targets. Despite the higher technology costs of CCS, the resulting negative emissions used in combination with biomass are a very important tool in controlling the cost of meeting a target, offsetting the venting of CO2 from sectors of the energy system that may be more expensive to mitigate, such as oil use in transportation. The paper also discusses the role of cellulosic ethanol and Fischer-Tropsch biomass derived transportation fuels and shows that both technologies are important contributors to liquid fuels production, with unique costs and emissions characteristics. Through application of the GCAM integrated assessment model, it becomes clear that, given CCS availability, bioenergy will be used both in electricity and transportation.

  4. Microsoft Word - The_Advanced_Networks_and_Services_Underpinning_Modern,Large-Scale_Science.SciDAC.v5.doc

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    ESnet4: Advanced Networking and Services Supporting the Science Mission of DOE's Office of Science William E. Johnston ESnet Dept. Head and Senior Scientist Lawrence Berkeley National Laboratory May, 2007 1 Introduction In many ways, the dramatic achievements in scientific discovery through advanced computing and the discoveries of the increasingly large-scale instruments with their enormous data handling and remote collaboration requirements, have been made possible by accompanying

  5. Re-evaluation of the 1995 Hanford Large Scale Drum Fire Test Results

    SciTech Connect (OSTI)

    Yang, J M

    2007-05-02

    A large-scale drum performance test was conducted at the Hanford Site in June 1995, in which over one hundred (100) 55-gal drums in each of two storage configurations were subjected to severe fuel pool fires. The two storage configurations in the test were pallet storage and rack storage. The description and results of the large-scale drum test at the Hanford Site were reported in WHC-SD-WM-TRP-246, ''Solid Waste Drum Array Fire Performance,'' Rev. 0, 1995. This was one of the main references used to develop the analytical methodology to predict drum failures in WHC-SD-SQA-ANAL-501, ''Fire Protection Guide for Waste Drum Storage Array,'' September 1996. Three drum failure modes were observed from the test reported in WHC-SD-WM-TRP-246. They consisted of seal failure, lid warping, and catastrophic lid ejection. There was no discernible failure criterion that distinguished one failure mode from another. Hence, all three failure modes were treated equally for the purpose of determining the number of failed drums. General observations from the results of the test are as follows: trash expulsion was negligible; flame impingement was identified as the main cause for failure; the range of drum temperatures at failure was 600 C to 800 C, which is above the yield strength temperature for steel, approximately 540 C (1,000 F); the critical heat flux required for failure is above 45 kW/m²; and fire propagation from one drum to the next was not observed. The statistical evaluation of the test results using, for example, the Student's t-distribution, will demonstrate that the failure criteria for TRU waste drums currently employed at nuclear facilities are very conservative relative to the large-scale test results. Hence, the safety analysis utilizing the general criteria in the five observations above will lead to a technically robust and defensible product that bounds the potential consequences from postulated

  6. Large Scale Comparative Visualisation of Regulatory Networks with TRNDiff

    SciTech Connect (OSTI)

    Chua, Xin-Yi; Buckingham, Lawrence; Hogan, James M.; Novichkov, Pavel

    2015-06-01

    The advent of Next Generation Sequencing (NGS) technologies has seen explosive growth in genomic datasets, and dense coverage of related organisms, supporting study of subtle, strain-specific variations as a determinant of function. Such data collections present fresh and complex challenges for bioinformatics, those of comparing models of complex relationships across hundreds and even thousands of sequences. Transcriptional Regulatory Network (TRN) structures document the influence of regulatory proteins called Transcription Factors (TFs) on associated Target Genes (TGs). TRNs are routinely inferred from model systems or iterative search, and analysis at these scales requires simultaneous displays of multiple networks well beyond those of existing network visualisation tools [1]. In this paper we describe TRNDiff, an open source system supporting the comparative analysis and visualization of TRNs (and similarly structured data) from many genomes, allowing rapid identification of functional variations within species. The approach is demonstrated through a small scale multiple TRN analysis of the Fur iron-uptake system of Yersinia, suggesting a number of candidate virulence factors; and through a larger study exploiting integration with the RegPrecise database (http://regprecise.lbl.gov; [2]) - a collection of hundreds of manually curated and predicted transcription factor regulons drawn from across the entire spectrum of prokaryotic organisms.

  7. Key management for large scale end-to-end encryption

    SciTech Connect (OSTI)

    Witzke, E.L.

    1994-07-01

    Symmetric end-to-end encryption requires separate keys for each pair of communicating confidants. This is a problem of order N². Other factors, such as multiple sessions per pair of confidants and multiple encryption points in the ISO Reference Model, complicate key management by linear factors. Public-key encryption can reduce the number of keys managed to a linear problem, which is good for scalability of key management, but comes with complicating issues and performance penalties. Authenticity is the primary ingredient of key management. If each potential pair of communicating confidants can authenticate data from each other, then any number of public encryption keys of any type can be communicated with requisite integrity. These public encryption keys can be used with the corresponding private keys to exchange symmetric cryptovariables for high-data-rate privacy protection. The Digital Signature Standard (DSS), which has been adopted by the United States Government, has both public and private components, similar to a public-key cryptosystem. The Digital Signature Algorithm of the DSS is intended for authenticity but not for secrecy. In this paper, the authors will show how the use of the Digital Signature Algorithm combined with both symmetric and asymmetric (public-key) encryption techniques can provide a practical solution to key management scalability problems, by reducing the key management complexity to a problem of order N, without sacrificing the encryption speed necessary to operate in high performance networks.
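
    The order-N² versus order-N counting argument in the abstract is simple to make concrete: pairwise symmetric keying needs one key per unordered pair of confidants, while a public-key scheme needs one key pair per confidant. A minimal sketch:

```python
def symmetric_key_count(n):
    """Pairwise symmetric keys: one per unordered pair of confidants."""
    return n * (n - 1) // 2

def public_key_count(n):
    """Public-key approach: one key pair per confidant."""
    return n

for n in (10, 100, 1000):
    print(n, symmetric_key_count(n), public_key_count(n))
# For 1000 confidants: 499500 symmetric keys vs 1000 key pairs.
```

    The gap is what makes per-pair symmetric key distribution unmanageable at network scale, even before session multiplicity and the multiple ISO-layer encryption points mentioned above multiply it further.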

  8. Large Scale Comparative Visualisation of Regulatory Networks with TRNDiff

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Chua, Xin-Yi; Buckingham, Lawrence; Hogan, James M.; Novichkov, Pavel

    2015-06-01

    The advent of Next Generation Sequencing (NGS) technologies has seen explosive growth in genomic datasets, and dense coverage of related organisms, supporting study of subtle, strain-specific variations as a determinant of function. Such data collections present fresh and complex challenges for bioinformatics, those of comparing models of complex relationships across hundreds and even thousands of sequences. Transcriptional Regulatory Network (TRN) structures document the influence of regulatory proteins called Transcription Factors (TFs) on associated Target Genes (TGs). TRNs are routinely inferred from model systems or iterative search, and analysis at these scales requires simultaneous displays of multiple networks well beyond those of existing network visualisation tools [1]. In this paper we describe TRNDiff, an open source system supporting the comparative analysis and visualization of TRNs (and similarly structured data) from many genomes, allowing rapid identification of functional variations within species. The approach is demonstrated through a small scale multiple TRN analysis of the Fur iron-uptake system of Yersinia, suggesting a number of candidate virulence factors; and through a larger study exploiting integration with the RegPrecise database (http://regprecise.lbl.gov; [2]) - a collection of hundreds of manually curated and predicted transcription factor regulons drawn from across the entire spectrum of prokaryotic organisms.

  9. On the rejection-based algorithm for simulation and analysis of large-scale reaction networks

    SciTech Connect (OSTI)

    Thanh, Vo Hong; Zunino, Roberto; Priami, Corrado

    2015-06-28

    Stochastic simulation for in silico studies of large biochemical networks requires a great amount of computational time. We recently proposed a new exact simulation algorithm, called the rejection-based stochastic simulation algorithm (RSSA) [Thanh et al., J. Chem. Phys. 141(13), 134116 (2014)], to improve simulation performance by postponing and collapsing propensity updates as much as possible. In this paper, we analyze the performance of this algorithm in detail, and improve it for simulating large-scale biochemical reaction networks. We also present a new algorithm, called simultaneous RSSA (SRSSA), which generates many independent trajectories simultaneously for the analysis of the biochemical behavior. SRSSA improves simulation performance by utilizing a single data structure across simulations to select reaction firings and form trajectories. The memory requirement for building and storing the data structure is thus independent of the number of trajectories. The updating of the data structure when needed is performed collectively in a single operation across the simulations. The trajectories generated by SRSSA are exact and independent of each other by exploiting the rejection-based mechanism. We test our new improvements on real biological systems with a wide range of reaction networks to demonstrate their applicability and efficiency.
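
    The rejection mechanism that lets RSSA postpone exact propensity updates can be illustrated in isolation: draw a candidate reaction from cheap propensity upper bounds, then accept it with probability (true propensity)/(upper bound), so the exact propensity is only evaluated lazily. This is a minimal sketch of that selection step under assumed bounds and propensities, not the authors' implementation.

```python
import random

def rejection_select(upper_bounds, exact_propensity, rng=random.random):
    """Select a reaction index by rejection sampling.

    upper_bounds: list of propensity upper bounds a_hat[j] >= a[j]
    exact_propensity: function j -> true propensity a[j], evaluated
    lazily, which is the point of the rejection-based scheme.
    """
    total = sum(upper_bounds)
    while True:
        # Draw a candidate j with probability a_hat[j] / total.
        r = rng() * total
        j, acc = 0, upper_bounds[0]
        while acc < r:
            j += 1
            acc += upper_bounds[j]
        # Accept j with probability a[j] / a_hat[j]; the net selection
        # probability is then proportional to the exact a[j].
        if rng() * upper_bounds[j] <= exact_propensity(j):
            return j

random.seed(1)
bounds = [2.0, 1.0, 4.0]            # assumed upper bounds a_hat
true_a = {0: 1.5, 1: 0.9, 2: 3.0}   # assumed exact propensities a
counts = [0, 0, 0]
for _ in range(20000):
    counts[rejection_select(bounds, true_a.get)] += 1
print(counts)  # frequencies roughly proportional to 1.5 : 0.9 : 3.0
```

    In the full algorithm the bounds come from bracketing species populations in an interval, so they stay valid across many firings and the expensive exact updates are deferred.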

  10. BREEDER: a microcomputer program for financial analysis of a large-scale prototype breeder reactor

    SciTech Connect (OSTI)

    Giese, R.F.

    1984-04-01

    This report describes a microcomputer-based, single-project financial analysis program: BREEDER. BREEDER is a user-friendly model designed to facilitate frequent and rapid analyses of the financial implications associated with alternative design and financing strategies for electric generating plants and large-scale prototype breeder (LSPB) reactors in particular. The model has proved to be a useful tool in establishing cost goals for LSPB reactors. The program is available on floppy disks for use on an IBM personal computer (or IBM look-a-like) running under PC-DOS or a Kaypro II transportable computer running under CP/M (and many other CP/M machines). The report documents version 1.5 of BREEDER and contains a user's guide. The report also includes a general overview of BREEDER, a summary of hardware requirements, a definition of all required program inputs, a description of all algorithms used in performing the construction-period and operation-period analyses, and a summary of all available reports. The appendixes contain a complete source-code listing, a cross-reference table, a sample interactive session, several sample runs, and additional documentation of the net-equity program option.
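
    The core of any such single-project financial analysis is discounting construction-period outlays against operation-period revenues. The sketch below shows the basic net-present-value arithmetic with hypothetical cash flows; the figures and discount rate are illustrative assumptions, not values from the BREEDER documentation.

```python
def npv(rate, cash_flows):
    """Net present value of yearly cash flows; year 0 is undiscounted."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

# Hypothetical plant: 3 construction years of outlays, then 10 years
# of net operating revenue (arbitrary units).
flows = [-400.0, -400.0, -400.0] + [200.0] * 10

print(round(npv(0.08, flows), 1))   # NPV at an assumed 8% discount rate
print(round(npv(0.0, flows), 1))    # undiscounted total: 800.0
```

    Because the outlays come first, discounting hits the revenues harder than the costs, which is why financing strategy and construction schedule matter so much in models of this kind.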

  11. Impact of Large Scale Energy Efficiency Programs On Consumer Tariffs and Utility Finances in India

    SciTech Connect (OSTI)

    Abhyankar, Nikit; Phadke, Amol

    2011-01-20

    Large-scale EE programs would modestly increase tariffs but reduce consumers' electricity bills significantly. However, the primary benefit of EE programs is a significant reduction in power shortages, which might make these programs politically acceptable even if tariffs increase. To increase political support, utilities could pursue programs that would result in minimal tariff increases. This can be achieved in four ways: (a) focus only on low-cost programs (such as replacing electric water heaters with gas water heaters); (b) sell power conserved through the EE program to the market at a price higher than the cost of peak power purchase; (c) focus on programs where a partial utility subsidy of incremental capital cost might work; and (d) increase the number of participant consumers by offering a basket of EE programs to fit all consumer subcategories and tariff tiers. Large-scale EE programs can result in consistently negative cash flows and significantly erode the utility's overall profitability. If the utility is facing shortages, the cash flow is very sensitive to the marginal tariff of the unmet demand. This will have an important bearing on the choice of EE programs in Indian states where low-paying rural and agricultural consumers form the majority of the unmet demand. These findings clearly call for a flexible, sustainable solution to the cash-flow management issue. One option is to include a mechanism like FAC in the utility incentive mechanism. Another sustainable solution might be to have the net program cost and revenue loss built into the utility's revenue requirement, and thus into consumer tariffs, up front. However, the latter approach requires institutionalization of EE as a resource. The utility incentive mechanisms would be able to address the utility disincentive of forgone long-run return but have a minor impact on consumer benefits.
Fundamentally, providing incentives for EE programs to make them comparable to supply-side investments is a way

  12. A Semi-Analytical Solution for Large-Scale Injection-Induced...

    Office of Scientific and Technical Information (OSTI)

    Journal Article: A Semi-Analytical Solution for Large-Scale Injection-Induced PressurePerturbation and Leakage in a Laterally Bounded Aquifer-AquitardSystem Citation Details ...

  13. DOE's Office of Science Seeks Proposals for Expanded Large-Scale Scientific Computing

    Broader source: Energy.gov [DOE]

    WASHINGTON, D.C. -- Secretary of Energy Samuel W. Bodman announced today that DOE’s Office of Science is seeking proposals to support innovative, large-scale computational science projects to...

  14. FEMP Helps Federal Facilities Develop Large-Scale Renewable Energy Projects

    Broader source: Energy.gov [DOE]

    FEMP developed a guide to help federal agencies, as well as the developers and financiers that work with them, to successfully install large-scale renewable energy projects at federal facilities.

  15. Development of fine-resolution analyses and expanded large-scale...

    Office of Scientific and Technical Information (OSTI)

    II: Scale-awareness and application to single-column model experiments Citation Details In-Document Search Title: Development of fine-resolution analyses and expanded large-scale ...

  16. HyLights -- Tools to Prepare the Large-Scale European Demonstration

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Projects on Hydrogen for Transport | Department of Energy HyLights -- Tools to Prepare the Large-Scale European Demonstration Projects on Hydrogen for Transport HyLights -- Tools to Prepare the Large-Scale European Demonstration Projects on Hydrogen for Transport Presented at Refueling Infrastructure for Alternative Fuel Vehicles: Lessons Learned for Hydrogen Conference, April 2-3, 2008, Sacramento, California buenger.pdf (1.96 MB) More Documents & Publications Santa Clara Valley

  17. Reducing Data Center Loads for a Large-Scale, Net Zero Office Building |

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Department of Energy Reducing Data Center Loads for a Large-Scale, Net Zero Office Building Reducing Data Center Loads for a Large-Scale, Net Zero Office Building Document describes the design, implementation strategies, and continuous performance monitoring of the National Renewable Energy Laboratory's Research Support Facility data center. Download the case study. (3.03 MB) More Documents & Publications Top ECMs for Labs and Data Centers Best Practices Guide for Energy-Efficient Data

  18. Transport Induced by Large Scale Convective Structures in a Dipole-Confined Plasma

    SciTech Connect (OSTI)

    Grierson, B. A.; Mauel, M. E.; Worstell, M. W.; Klassen, M.

    2010-11-12

    Convective structures characterized by E×B motion are observed in a dipole-confined plasma. Particle transport rates are calculated from density dynamics obtained from multipoint measurements and the reconstructed electrostatic potential. The calculated transport rates determined from the large-scale dynamics and local probe measurements agree in magnitude, show intermittency, and indicate that the particle transport is dominated by large-scale convective structures.

  19. A First Step towards Large-Scale Plants to Plastics Engineering |

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Department of Energy A First Step towards Large-Scale Plants to Plastics Engineering A First Step towards Large-Scale Plants to Plastics Engineering November 9, 2010 - 1:56pm Addthis Brookhaven National Laboratory researches making plastics from plants. Niketa Kumar Niketa Kumar Public Affairs Specialist, Office of Public Affairs What does this mean for me? By optimizing the accumulation of particular fatty acids, a Brookhaven team of scientists are developing a method suitable for

  20. Large Scale GSHP as Alternative Energy for American Farmers | Department of

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Large Scale GSHP as Alternative Energy for American Farmers. Project objectives: 100% replacement of on-site fossil fuel in the poultry farm; reduce heating cost by 70% through barn efficiency improvement, GSHP, and solar applications; reduce mortality by 4% through the cooling effect of GSHP in summer. gshp_xu_gshp_farmers.pdf (276.4 KB)

  1. 'Sidecars' Pave the Way for Concurrent Analytics of Large-Scale

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    'Sidecars' Pave the Way for Concurrent Analytics of Large-Scale Simulations. Halo Finder Enhancement Puts Supercomputer Users in the Driver's Seat. November 2, 2015. Contact: Kathy Kincade, +1 510 495 2124, kkincade@lbl.gov. In this Reeber halo finder simulation, the blueish haze is a volume rendering of the density field that Nyx calculates every time step. The light blue and

  2. Large-Scale Urban Decontamination; Developments, Historical Examples and Lessons Learned

    SciTech Connect (OSTI)

    Rick Demmer

    2007-02-01

    responses, has a sound approach for decontamination decision-making that has been applied several times. The anthrax contamination at the U. S. Hart Senate Office Building and numerous U. S. Post Office facilities are examples of employing novel technical responses. Decontamination of the Hart Office building required development of a new approach for high level decontamination of biological contamination as well as techniques for evaluating the technology effectiveness. The World Trade Center destruction also demonstrated the need for, and successful implementation of, appropriate cleanup methodologies. There are a number of significant lessons that can be gained from a look at previous large scale cleanup projects. Too often we are quick to apply a costly package and dispose method when sound technological cleaning approaches are available. Understanding historical perspectives, advanced planning and constant technology improvement are essential to successful decontamination.

  3. NV Energy Large-Scale Photovoltaic Integration Study: Intra-Hour Dispatch and AGC Simulation

    SciTech Connect (OSTI)

    Lu, Shuai; Etingov, Pavel V.; Meng, Da; Guo, Xinxin; Jin, Chunlian; Samaan, Nader A.

    2013-01-02

    The uncertainty and variability with photovoltaic (PV) generation make it very challenging to balance power system generation and load, especially under high penetration cases. Higher reserve requirements and more cycling of conventional generators are generally anticipated for large-scale PV integration. However, whether the existing generation fleet is flexible enough to handle the variations and how well the system can maintain its control performance are difficult to predict. The goal of this project is to develop a software program that can perform intra-hour dispatch and automatic generation control (AGC) simulation, by which the balancing operations of a system can be simulated to answer the questions posed above. The simulator, named Electric System Intra-Hour Operation Simulator (ESIOS), uses the NV Energy southern system as a study case, and models the system’s generator configurations, AGC functions, and operator actions to balance system generation and load. Actual dispatch of AGC generators and control performance under various PV penetration levels can be predicted by running ESIOS. With data about the load, generation, and generator characteristics, ESIOS can perform similar simulations and assess variable generation integration impacts for other systems as well. This report describes the design of the simulator and presents the study results showing the PV impacts on NV Energy real-time operations.
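To make the balancing problem concrete, here is a minimal sketch, not the ESIOS code, of the kind of intra-hour balancing loop the abstract describes: AGC generation chases the net load (load minus PV) under a ramp-rate limit, and whatever the fleet cannot follow shows up as a control error. All function names and the MW figures are hypothetical illustration values.

```python
# Toy intra-hour balancing loop (a sketch, NOT the ESIOS implementation).
def simulate_balancing(load, pv, agc_capacity=900.0, ramp_per_step=5.0):
    """load, pv: per-step MW series; returns the per-step control error (MW)."""
    agc_output = load[0] - pv[0]          # assume the system starts balanced
    errors = []
    for step in range(1, len(load)):
        net_load = load[step] - pv[step]  # generation the AGC fleet must cover
        desired_move = net_load - agc_output
        # ramp-rate and capacity limits keep the fleet from following exactly
        move = max(-ramp_per_step, min(ramp_per_step, desired_move))
        agc_output = max(0.0, min(agc_capacity, agc_output + move))
        errors.append(net_load - agc_output)  # unserved imbalance this step
    return errors

# A fast PV ramp (e.g. cloud passage) that outruns the 5 MW/step ramp limit:
load = [800.0] * 10
pv = [100.0] * 5 + [40.0] * 5             # 60 MW PV drop at step 5
ace = simulate_balancing(load, pv)        # error spikes at the PV drop, then decays
```

Even this toy version shows the qualitative effect the study quantifies: the control error is zero while PV is steady, spikes when the PV ramp exceeds the fleet's ramp capability, and decays as the generators catch up.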

  4. Optimizing Cluster Heads for Energy Efficiency in Large-Scale Heterogeneous Wireless Sensor Networks

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Gu, Yi; Wu, Qishi; Rao, Nageswara S. V.

    2010-01-01

    Many complex sensor network applications require deploying a large number of inexpensive and small sensors in a vast geographical region to achieve quality through quantity. Hierarchical clustering is generally considered an efficient and scalable way to facilitate the management and operation of such large-scale networks and minimize the total energy consumption for prolonged lifetime. Judicious selection of cluster heads for data integration and communication is critical to the success of applications based on hierarchical sensor networks organized as layered clusters. We investigate the problem of selecting sensor nodes in a predeployed sensor network to be the cluster heads to minimize the total energy needed for data gathering. We rigorously derive an analytical formula to optimize the number of cluster heads in sensor networks under uniform node distribution, and propose a Distance-based Crowdedness Clustering algorithm to determine the cluster heads in sensor networks under general node distribution. The results from an extensive set of experiments on a large number of simulated sensor networks illustrate the performance superiority of the proposed solution over clustering schemes based on the k-means algorithm.
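The abstract's k-means baseline can be sketched in a few lines: cluster sensor positions, then appoint the deployed sensor nearest each centroid as that cluster's head (heads must be real nodes). This is a sketch of the comparison baseline only, not the paper's Distance-based Crowdedness Clustering algorithm, and the positions are made up for illustration.

```python
# k-means cluster-head selection (the baseline scheme, a hedged sketch).
import math
import random

def kmeans_cluster_heads(sensors, k, iters=50, seed=0):
    rng = random.Random(seed)
    centroids = rng.sample(sensors, k)
    for _ in range(iters):
        # assign each sensor to its nearest centroid
        clusters = [[] for _ in range(k)]
        for s in sensors:
            i = min(range(k), key=lambda j: math.dist(s, centroids[j]))
            clusters[i].append(s)
        # recompute centroids (keep the old one if a cluster goes empty)
        centroids = [
            (sum(x for x, _ in c) / len(c), sum(y for _, y in c) / len(c))
            if c else centroids[i]
            for i, c in enumerate(clusters)
        ]
    # head = the deployed sensor closest to each centroid
    return [min(sensors, key=lambda s: math.dist(s, c)) for c in centroids]

sensors = [(0.0, 0.0), (0.0, 1.0), (1.0, 0.0), (10.0, 10.0), (10.0, 11.0)]
heads = kmeans_cluster_heads(sensors, k=2)   # one head per spatial cluster
```

Selecting heads this way ignores residual node energy and crowdedness, which is precisely the gap the paper's distribution-aware algorithm targets.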

  5. Building a Large Scale Climate Data System in Support of HPC Environment

    SciTech Connect (OSTI)

    Wang, Feiyi; Harney, John F; Shipman, Galen M

    2011-01-01

    The Earth System Grid Federation (ESG) is a large scale, multi-institutional, interdisciplinary project that aims to provide climate scientists and impact policy makers worldwide a web-based and client-based platform to publish, disseminate, compare and analyze ever increasing climate related data. This paper describes our practical experiences on the design, development and operation of such a system. In particular, we focus on the support of the data lifecycle from a high performance computing (HPC) perspective that is critical to the end-to-end scientific discovery process. We discuss three subjects that interconnect the consumer and producer of scientific datasets: (1) the motivations, complexities and solutions of deep storage access and sharing in a tightly controlled environment; (2) the importance of scalable and flexible data publication/population; and (3) high performance indexing and search of data with geospatial properties. These perceived corner issues collectively contributed to the overall user experience and proved to be as important as any other architectural design considerations. Although the requirements and challenges are rooted and discussed from a climate science domain context, we believe the architectural problems, ideas and solutions discussed in this paper are generally useful and applicable in a larger scope.

  6. Testing of Large-Scale ICV Glasses with Hanford LAW Simulant

    SciTech Connect (OSTI)

    Hrma, Pavel R.; Kim, Dong-Sang; Vienna, John D.; Matyas, Josef; Smith, Donald E.; Schweiger, Michael J.; Yeager, John D.

    2005-03-01

    Preliminary glass compositions for immobilizing Hanford low-activity waste (LAW) by the in-container vitrification (ICV) process were initially fabricated at crucible- and engineering-scale, including simulants and actual (radioactive) LAW. Glasses were characterized for vapor hydration test (VHT) and product consistency test (PCT) responses and crystallinity (both quenched and slow-cooled samples). Selected glasses were tested for toxicity characteristic leach procedure (TCLP) responses, viscosity, and electrical conductivity. This testing showed that glasses with LAW loading of 20 mass% can be made readily and meet all product constraints by a wide margin. Glasses with over 22 mass% Na2O can be made to meet all other product quality and process constraints. Large-scale testing was performed at the AMEC, Geomelt Division facility in Richland. Three tests were conducted using simulated LAW with increasing loadings of 12, 17, and 20 mass% Na2O. Glass samples were taken from the test products in a manner to represent the full expected range of product performance. These samples were characterized for composition, density, crystalline and non-crystalline phase assemblage, and durability using the VHT, PCT, and TCLP tests. The results, presented in this report, show that the AMEC ICV product meets all waste form requirements with a large margin. These results provide strong evidence that the Hanford LAW can be successfully vitrified by the ICV technology and can meet all the constraints related to product quality. The economic feasibility of the ICV technology can be further enhanced by subsequent optimization.

  7. Large-Scale Modeling of Epileptic Seizures: Scaling Properties of Two Parallel Neuronal Network Simulation Algorithms

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Pesce, Lorenzo L.; Lee, Hyong C.; Hereld, Mark; Visser, Sid; Stevens, Rick L.; Wildeman, Albert; van Drongelen, Wim

    2013-01-01

    Our limited understanding of the relationship between the behavior of individual neurons and large neuronal networks is an important limitation in current epilepsy research and may be one of the main causes of our inadequate ability to treat it. Addressing this problem directly via experiments is impossibly complex; thus, we have been developing and studying medium-large-scale simulations of detailed neuronal networks to guide us. Flexibility in the connection schemas and a complete description of the cortical tissue seem necessary for this purpose. In this paper we examine some of the basic issues encountered in these multiscale simulations. We have determined the detailed behavior of two such simulators on parallel computer systems. The observed memory and computation-time scaling behavior for a distributed memory implementation were very good over the range studied, both in terms of network sizes (2,000 to 400,000 neurons) and processor pool sizes (1 to 256 processors). Our simulations required between a few megabytes and about 150 gigabytes of RAM and lasted between a few minutes and about a week, well within the capability of most multinode clusters. Therefore, simulations of epileptic seizures on networks with millions of cells should be feasible on current supercomputers.

  8. Test of the CLAS12 RICH large-scale prototype in the direct proximity focusing configuration

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Anefalos Pereira, S.; Baltzell, N.; Barion, L.; Benmokhtar, F.; Brooks, W.; Cisbani, E.; Contalbrigo, M.; El Alaoui, A.; Hafidi, K.; Hoek, M.; et al

    2016-02-11

    A large area ring-imaging Cherenkov detector has been designed to provide clean hadron identification capability in the momentum range from 3 GeV/c up to 8 GeV/c for the CLAS12 experiments at the upgraded 12 GeV continuous electron beam accelerator facility of Jefferson Laboratory. The adopted solution foresees a novel hybrid optics design based on aerogel radiator, composite mirrors and high-packed and high-segmented photon detectors. Cherenkov light will either be imaged directly (forward tracks) or after two mirror reflections (large angle tracks). We report here the results of the tests of a large-scale prototype of the RICH detector performed with the hadron beam of the CERN T9 experimental hall for the direct detection configuration. As a result, the tests demonstrated that the proposed design provides the required pion-to-kaon rejection factor of 1:500 in the whole momentum range.

  9. Large-Scale Renewable Energy Guide: Developing Renewable Energy Projects Larger Than 10 MWs at Federal Facilities

    Broader source: Energy.gov [DOE]

    The Large-Scale Renewable Energy Guide: Developing Renewable Energy Projects Larger Than 10 MWs at Federal Facilities provides best practices and other helpful guidance for federal agencies developing large-scale renewable energy projects.

  10. What Will the Neighbors Think? Building Large-Scale Science Projects Around the World

    ScienceCinema (OSTI)

    Jones, Craig; Mrotzek, Christian; Toge, Nobu; Sarno, Doug

    2010-01-08

    Public participation is an essential ingredient for turning the International Linear Collider into a reality. Wherever the proposed particle accelerator is sited in the world, its neighbors -- in any country -- will have something to say about hosting a 35-kilometer-long collider in their backyards. When it comes to building large-scale physics projects, almost every laboratory has a story to tell. Three case studies from Japan, Germany and the US will be presented to examine how community relations are handled in different parts of the world. How do particle physics laboratories interact with their local communities? How do neighbors react to building large-scale projects in each region? How can the lessons learned from past experiences help in building the next big project? These and other questions will be discussed to engage the audience in an active dialogue about how a large-scale project like the ILC can be a good neighbor.

  11. Copy of Using Emulation and Simulation to Understand the Large-Scale Behavior of the Internet.

    SciTech Connect (OSTI)

    Adalsteinsson, Helgi; Armstrong, Robert C.; Chiang, Ken; Gentile, Ann C.; Lloyd, Levi; Minnich, Ronald G.; Vanderveen, Keith; Van Randwyk, Jamie A; Rudish, Don W.

    2008-10-01

    We report on the work done in the late-start LDRD "Using Emulation and Simulation to Understand the Large-Scale Behavior of the Internet." We describe the creation of a research platform that emulates many thousands of machines to be used for the study of large-scale internet behavior. We describe a proof-of-concept simple attack we performed in this environment. We describe the successful capture of a Storm bot and, from the study of the bot and further literature search, establish large-scale aspects we seek to understand via emulation of Storm on our research platform in possible follow-on work. Finally, we discuss possible future work.

  12. Energy Department Loan Guarantee Would Support Large-Scale Rooftop Solar

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Energy Department Loan Guarantee Would Support Large-Scale Rooftop Solar Power for U.S. Military Housing. September 7, 2011 - 2:10pm. Washington, D.C. - U.S. Energy Secretary Steven Chu today announced the offer of a conditional commitment for a partial guarantee of a $344 million loan that will support the SolarStrong Project, which is expected

  13. Variability of Load and Net Load in Case of Large Scale Distributed Wind Power

    SciTech Connect (OSTI)

    Holttinen, H.; Kiviluoma, J.; Estanqueiro, A.; Gomez-Lazaro, E.; Rawn, B.; Dobschinski, J.; Meibom, P.; Lannoye, E.; Aigner, T.; Wan, Y. H.; Milligan, M.

    2011-01-01

    Large scale wind power production and its variability is one of the major inputs to wind integration studies. This paper analyses measured data from large scale wind power production. Comparisons of variability are made across several variables: time scale (10-60 minute ramp rates), number of wind farms, and simulated vs. modeled data. Ramp rates for wind power production, load (total system load), and net load (load minus wind power production) demonstrate how wind power increases the net load variability. Wind power will also change the timing of daily ramps.
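The two quantities compared above reduce to simple arithmetic: net load = load minus wind, and a ramp rate is the change over a sliding window. A minimal sketch (the MW values are made up for illustration; a one-step window stands in for a 10-minute interval):

```python
# Net load and ramp rates -- a sketch of the paper's basic quantities.
def ramp_rates(series, window=1):
    """Change over each sliding window (one step ~ one 10-minute interval)."""
    return [series[i + window] - series[i] for i in range(len(series) - window)]

load = [1000.0, 1010.0, 1030.0, 1025.0, 1040.0]   # MW, 10-minute samples
wind = [200.0, 230.0, 210.0, 260.0, 240.0]        # MW wind production
net_load = [l - w for l, w in zip(load, wind)]    # what conventional units serve

load_ramps = ramp_rates(load)      # [10.0, 20.0, -5.0, 15.0]
net_ramps = ramp_rates(net_load)   # [-20.0, 40.0, -55.0, 35.0] -- wider swings
```

Even in this toy series, the largest net-load ramp (55 MW) far exceeds the largest load ramp (20 MW), which is the variability increase the paper quantifies with measured data.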

  14. Large scale magnetic fields and coherent structures in nonuniform unmagnetized plasma

    SciTech Connect (OSTI)

    Jucker, Martin; Andrushchenko, Zhanna N.; Pavlenko, Vladimir P.

    2006-07-15

    The properties of streamers and zonal magnetic structures in magnetic electron drift mode turbulence are investigated. The stability of such large scale structures is investigated in the kinetic and the hydrodynamic regime, for which an instability criterion similar to the Lighthill criterion for modulational instability is found. Furthermore, these large scale flows can undergo further nonlinear evolution after initial linear growth, which can lead to the formation of long-lived coherent structures consisting of self-bound wave packets between the surfaces of two different flow velocities with an expected modification of the anomalous electron transport properties.

  15. ARM - PI Product - Large Scale Ice Water Path and 3-D Ice Water Content

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    PI Product: Large Scale Ice Water Path and 3-D Ice Water Content. Cloud ice water concentration is one of the most important, yet poorly observed, cloud properties. Developing physical parameterizations used in general circulation models through single-column modeling is one of the key foci of the ARM

  16. DOE Awards $126.6 Million for Two More Large-Scale Carbon Sequestration

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    DOE Awards $126.6 Million for Two More Large-Scale Carbon Sequestration Projects. May 6, 2008 - 11:30am. Projects in California and Ohio Join Four Others in Effort to Drastically Reduce Greenhouse Gas Emissions. WASHINGTON, DC - The U.S. Department of Energy (DOE) today announced awards of more than $126.6 million to the West Coast Regional Carbon Sequestration Partnership (WESTCARB) and

  17. Recent developments in large-scale finite-element Lagrangian hydrocode technology. [Dyna 20/dyna 30 computer code

    SciTech Connect (OSTI)

    Goudreau, G.L.; Hallquist, J.O.

    1981-10-01

    The state of Lagrangian hydrocodes for computing the large deformation dynamic response of inelastic continua is reviewed in the context of engineering computation at the Lawrence Livermore National Laboratory, USA, and the DYNA2D/DYNA3D finite element codes. The emphasis is on efficiency and computational cost. The simplest elements with explicit time integration, the two-dimensional four-node quadrilateral and the three-dimensional hexahedron with one-point quadrature, are advocated as superior to other more expensive choices. Important auxiliary capabilities are a cheap but effective hourglass control, slidelines/planes with void opening/closure, and rezoning. Both strain measures and material formulation are seen as a homogeneous stress point problem, and a flexible material subroutine interface admits both incremental and total strain formulations, dependent on internal energy or an arbitrary set of other internal variables. Vectorization on Class VI computers such as the CRAY-1 is a simple exercise for optimally organized primitive element formulations. Some examples of large scale computation are illustrated, including continuous tone graphic representation.

  18. Large-scale delamination of multi-layers transition metal carbides and carbonitrides “MXenes”

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Naguib, Michael; Unocic, Raymond R.; Armstrong, Beth L.; Nanda, Jagjit

    2015-04-17

    Herein we report on a general approach to delaminate multi-layered MXenes using an organic base to induce swelling that in turn weakens the bonds between the MX layers. Simple agitation or mild sonication of the swollen MXene in water resulted in the large-scale delamination of the MXene layers. The delamination method is demonstrated for vanadium carbide, and titanium carbonitrides MXenes.

  19. Large-Scale Delamination of Multi-Layers Transition Metal Carbides and Carbonitrides MXenes

    SciTech Connect (OSTI)

    Abdelmalak, Michael Naguib; Unocic, Raymond R; Armstrong, Beth L; Nanda, Jagjit

    2015-01-01

    Herein we report on a general approach to delaminate multi-layered MXenes using an organic base to induce swelling that in turn weakens the bonds between the MX layers. Simple agitation or mild sonication of the swollen MXene in water resulted in the large-scale delamination of the MXene layers. The delamination method is demonstrated for vanadium carbide, and titanium carbonitrides MXenes.

  20. First U.S. Large-Scale CO2 Storage Project Advances

    Broader source: Energy.gov [DOE]

    Drilling nears completion for the first large-scale carbon dioxide injection well in the United States for CO2 sequestration. This project will be used to demonstrate that CO2 emitted from industrial sources - such as coal-fired power plants - can be stored in deep geologic formations to mitigate large quantities of greenhouse gas emissions.

  1. Self-consistency tests of large-scale dynamics parameterizations for single-column modeling

    SciTech Connect (OSTI)

    Edman, Jacob P.; Romps, David M.

    2015-03-18

    Large-scale dynamics parameterizations are tested numerically in cloud-resolving simulations, including a new version of the weak-pressure-gradient approximation (WPG) introduced by Edman and Romps (2014), the weak-temperature-gradient approximation (WTG), and a prior implementation of WPG. We perform a series of self-consistency tests with each large-scale dynamics parameterization, in which we compare the result of a cloud-resolving simulation coupled to WTG or WPG with an otherwise identical simulation with prescribed large-scale convergence. In self-consistency tests based on radiative-convective equilibrium (RCE; i.e., no large-scale convergence), we find that simulations either weakly coupled or strongly coupled to either WPG or WTG are self-consistent, but WPG-coupled simulations exhibit a nonmonotonic behavior as the strength of the coupling to WPG is varied. We also perform self-consistency tests based on observed forcings from two observational campaigns: the Tropical Warm Pool International Cloud Experiment (TWP-ICE) and the ARM Southern Great Plains (SGP) Summer 1995 IOP. In these tests, we show that the new version of WPG improves upon prior versions of WPG by eliminating a potentially troublesome gravity-wave resonance.

  2. Self-consistency tests of large-scale dynamics parameterizations for single-column modeling

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Edman, Jacob P.; Romps, David M.

    2015-03-18

    Large-scale dynamics parameterizations are tested numerically in cloud-resolving simulations, including a new version of the weak-pressure-gradient approximation (WPG) introduced by Edman and Romps (2014), the weak-temperature-gradient approximation (WTG), and a prior implementation of WPG. We perform a series of self-consistency tests with each large-scale dynamics parameterization, in which we compare the result of a cloud-resolving simulation coupled to WTG or WPG with an otherwise identical simulation with prescribed large-scale convergence. In self-consistency tests based on radiative-convective equilibrium (RCE; i.e., no large-scale convergence), we find that simulations either weakly coupled or strongly coupled to either WPG or WTG are self-consistent, but WPG-coupled simulations exhibit a nonmonotonic behavior as the strength of the coupling to WPG is varied. We also perform self-consistency tests based on observed forcings from two observational campaigns: the Tropical Warm Pool International Cloud Experiment (TWP-ICE) and the ARM Southern Great Plains (SGP) Summer 1995 IOP. In these tests, we show that the new version of WPG improves upon prior versions of WPG by eliminating a potentially troublesome gravity-wave resonance.

  3. Economic Impact of Large-Scale Deployment of Offshore Marine and Hydrokinetic Technology in Oregon Coastal Counties

    SciTech Connect (OSTI)

    Jimenez, T.; Tegen, S.; Beiter, P.

    2015-03-01

    To begin understanding the potential economic impacts of large-scale wave energy converter (WEC) technology, the Bureau of Ocean Energy Management (BOEM) commissioned the National Renewable Energy Laboratory (NREL) to conduct an economic impact analysis of large-scale WEC deployment for Oregon coastal counties. This report follows a previously published report by BOEM and NREL on the jobs and economic impacts of WEC technology for the entire state (Jimenez and Tegen 2015). As in Jimenez and Tegen (2015), this analysis examined two deployment scenarios in the 2026-2045 timeframe: the first scenario assumed 13,000 megawatts (MW) of WEC technology deployed during the analysis period, and the second assumed 18,000 MW of WEC technology deployed by 2045. Both scenarios require major technology and cost improvements in the WEC devices. The study focuses on very large-scale deployment so that readers can examine and discuss the potential of a successful and very large WEC industry. The 13,000-MW scenario is used as the basis for the county analysis, as it is the smaller of the two. Sensitivity studies examined the effects of a robust in-state WEC supply chain. The region of analysis comprises the seven coastal counties in Oregon (Clatsop, Coos, Curry, Douglas, Lane, Lincoln, and Tillamook), so estimates of jobs and other economic impacts are specific to this coastal county area.

  4. A PRACTICAL ONTOLOGY FOR THE LARGE-SCALE MODELING OF SCHOLARLY ARTIFACTS AND THEIR USAGE

    SciTech Connect (OSTI)

    RODRIGUEZ, MARKO A.; BOLLEN, JOHAN; VAN DE SOMPEL, HERBERT

    2007-01-30

    The large-scale analysis of scholarly artifact usage is constrained primarily by current practices in usage data archiving, privacy issues concerned with the dissemination of usage data, and the lack of a practical ontology for modeling the usage domain. As a remedy to the third constraint, this article presents a scholarly ontology that was engineered to represent those classes for which large-scale bibliographic and usage data exists, supports usage research, and whose instantiation is scalable to the order of 50 million articles along with their associated artifacts (e.g., authors and journals) and an accompanying 1 billion usage events. The real-world instantiation of the presented abstract ontology is a semantic network model of the scholarly community which lends the scholarly process to statistical analysis and computational support. The authors present the ontology, discuss its instantiation, and provide some example inference rules for calculating various scholarly artifact metrics.

  5. Simultaneous effect of modified gravity and primordial non-Gaussianity in large scale structure observations

    SciTech Connect (OSTI)

    Mirzatuny, Nareg; Khosravi, Shahram; Baghram, Shant; Moshafi, Hossein E-mail: khosravi@mail.ipm.ir E-mail: hosseinmoshafi@iasbs.ac.ir

    2014-01-01

    In this work we study the simultaneous effect of primordial non-Gaussianity and the modification of gravity in the f(R) framework on large-scale structure observations. We show that non-Gaussianity and modified gravity introduce scale-dependent bias and growth rate functions. The deviation from ΛCDM in the case of primordial non-Gaussian models is on large scales, while the growth rate deviates from ΛCDM on small scales for modified gravity theories. We show that redshift space distortion can be used to distinguish positive and negative f_NL in a standard background, while in f(R) theories they are not easily distinguishable. The galaxy power spectrum is generally enhanced in the presence of non-Gaussianity and modified gravity. We also obtain the scale dependence of this enhancement. Finally, we define galaxy growth rate and galaxy growth rate bias as new observational parameters to constrain cosmology.

  6. Optimization of large-scale heterogeneous system-of-systems models.

    SciTech Connect (OSTI)

    Parekh, Ojas; Watson, Jean-Paul; Phillips, Cynthia Ann; Siirola, John; Swiler, Laura Painton; Hough, Patricia Diane; Lee, Herbert K. H.; Hart, William Eugene; Gray, Genetha Anne; Woodruff, David L.

    2012-01-01

    Decision makers increasingly rely on large-scale computational models to simulate and analyze complex man-made systems. For example, computational models of national infrastructures are being used to inform government policy, assess economic and national security risks, evaluate infrastructure interdependencies, and plan for the growth and evolution of infrastructure capabilities. A major challenge for decision makers is the analysis of national-scale models that are composed of interacting systems: effective integration of system models is difficult, there are many parameters to analyze in these systems, and fundamental modeling uncertainties complicate analysis. This project is developing optimization methods to effectively represent and analyze large-scale heterogeneous system of systems (HSoS) models, which have emerged as a promising approach for describing such complex man-made systems. These optimization methods enable decision makers to predict future system behavior, manage system risk, assess tradeoffs between system criteria, and identify critical modeling uncertainties.

  7. Panel 1, Towards Sustainable Energy Systems: The Role of Large-Scale Hydrogen Storage in Germany

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Hanno Butsch, Head of International Cooperation, NOW GmbH (National Organization Hydrogen and Fuel Cell Technology). Towards Sustainable Energy Systems - The Role of Large-Scale Hydrogen Storage in Germany. May 14th, 2014, Sacramento. Political background for the transition to renewable energies: climate protection (global responsibility for the next generation); energy security (more independence from fossil fuels); securing the economy (creating new markets and jobs through innovations). Three

  8. Final Report: Large-Scale Optimization for Bayesian Inference in Complex Systems

    SciTech Connect (OSTI)

    Ghattas, Omar

    2013-10-15

    The SAGUARO (Scalable Algorithms for Groundwater Uncertainty Analysis and Robust Optimization) Project focuses on the development of scalable numerical algorithms for large-scale Bayesian inversion in complex systems that capitalize on advances in large-scale simulation-based optimization and inversion methods. Our research is directed in three complementary areas: efficient approximations of the Hessian operator, reductions in complexity of forward simulations via stochastic spectral approximations and model reduction, and employing large-scale optimization concepts to accelerate sampling. Our efforts are integrated in the context of a challenging testbed problem that considers subsurface reacting flow and transport. The MIT component of the SAGUARO Project addresses the intractability of conventional sampling methods for large-scale statistical inverse problems by devising reduced-order models that are faithful to the full-order model over a wide range of parameter values; sampling then employs the reduced model rather than the full model, resulting in very large computational savings. Results indicate little effect on the computed posterior distribution. On the other hand, in the Texas-Georgia Tech component of the project, we retain the full-order model, but exploit inverse problem structure (adjoint-based gradients and partial Hessian information of the parameter-to-observation map) to implicitly extract lower dimensional information on the posterior distribution; this greatly speeds up sampling methods, so that fewer sampling points are needed. We can think of these two approaches as "reduce then sample" and "sample then reduce." In fact, these two approaches are complementary, and can be used in conjunction with each other. Moreover, they both exploit deterministic inverse problem structure, in the form of adjoint-based gradient and Hessian information of the underlying parameter-to-observation map, to achieve their speedups.

  9. DOE/NNSA Participates in Large-Scale CTBT On-Site Inspection Exercise in

    National Nuclear Security Administration (NNSA)

Jordan | National Nuclear Security Administration (NNSA). Friday, November 28, 2014 - 9:05am. Experts from U.S. Department of Energy National Laboratories, including Sandia National Laboratories, Los Alamos National Laboratory, Lawrence Livermore National Laboratory, and Pacific Northwest National Laboratory, are participating in the Comprehensive Nuclear-Test-Ban Treaty (CTBT) Integrated Field Exercise 2014 (IFE14), a

  10. Metal Catalyzed sp2 Bonded Carbon - Large-scale Graphene Synthesis and

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

Beyond | MIT-Harvard Center for Excitonics. Metal Catalyzed sp2 Bonded Carbon - Large-scale Graphene Synthesis and Beyond. December 1, 2009 at 3pm/36-428. Peter Sutter, Center for Functional Nanomaterials. Abstract: Carbon honeycomb lattices have shown a number of remarkable properties. When wrapped up into fullerenes, for instance, superconductivity with high transition temperatures can be induced by alkali intercalation. Rolling carbon sheets up into 1-dimensional nanotubes generates the

  11. Large-Scale Production of Marine Microalgae for Fuel and Feeds

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

Bioenergy Technologies Office (BETO) 2015 Project Peer Review. Large-Scale Production of Marine Microalgae for Fuel and Feeds. March 24, 2015, Algae Platform Review. Mark Huntley, Cornell Marine Algal Biofuels Consortium. This presentation does not contain any proprietary, confidential, or otherwise restricted information. Goal Statement - BETO MYPP Goals (3): Demonstrate 1. Performance against clear cost goals and technical targets (Q4 2013); 2. Productivity of 1,500 gal/acre/yr algal oil (Q4 2014)

  12. PARTICLE ACCELERATION BY COLLISIONLESS SHOCKS CONTAINING LARGE-SCALE MAGNETIC-FIELD VARIATIONS

    SciTech Connect (OSTI)

Guo, F.; Jokipii, J. R.; Kota, J.

    2010-12-10

Diffusive shock acceleration at collisionless shocks is thought to be the source of many of the energetic particles observed in space. Large-scale spatial variations of the magnetic field have been shown to be important in understanding observations. The effects are complex, so here we consider a simple, illustrative model. Here we solve numerically the Parker transport equation for a shock in the presence of large-scale sinusoidal magnetic-field variations. We demonstrate that the familiar planar-shock results can be significantly altered as a consequence of large-scale, meandering magnetic lines of force. Because the perpendicular diffusion coefficient κ⊥ is generally much smaller than the parallel diffusion coefficient κ∥, the energetic charged particles are trapped and preferentially accelerated along the shock front in the regions where the connection points of magnetic field lines intersecting the shock surface converge, and thus create the 'hot spots' of the accelerated particles. For the regions where the connection points separate from each other, the acceleration to high energies will be suppressed. Further, the particles diffuse away from the 'hot spot' regions and modify the spectra of downstream particle distribution. These features are qualitatively similar to the recent Voyager observations in the Heliosheath. These results are potentially important for particle acceleration at shocks propagating in turbulent magnetized plasmas as well as those which contain large-scale nonplanar structures. Examples include anomalous cosmic rays accelerated by the solar wind termination shock, energetic particles observed in propagating heliospheric shocks, galactic cosmic rays accelerated by supernova blast waves, etc.

  13. Large-Scale Simulation of Brain Tissue: Blue Brain Project, EPFL | Argonne

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

Leadership Computing Facility. Digital reconstruction of pyramidal cells. Blue Brain Project, Ecole Polytechnique Federale de Lausanne. Large-Scale Simulation of Brain Tissue: Blue Brain Project, EPFL. PI Name: Fabien Delalondre. PI Email: fabien.delalondre@epfl.ch. Institution: Ecole Polytechnique Federale de Lausanne. Allocation Program: ESP. Year: 2015. Research Domain: Biological Sciences. Tier 1 Science Project. Science: This ESP project will be used to

  14. Energy Department Awards $66.7 Million for Large-Scale Carbon...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    to permanently store CO2. ADM's ethanol plant in Decatur, IL, will serve as the source of CO2 for the project. ADM will cost share the expense of the CO2, which will come...

  15. Towards physics responsible for large-scale Lyman-α forest bias parameters

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

Cieplak, Agnieszka M.; Slosar, Anze

    2016-03-08

Using a series of carefully constructed numerical experiments based on hydrodynamic cosmological SPH simulations, we attempt to build an intuition for the relevant physics behind the large scale density (bδ) and velocity gradient (bη) biases of the Lyman-α forest. Starting with the fluctuating Gunn-Peterson approximation applied to the smoothed total density field in real-space, and progressing through redshift-space with no thermal broadening, redshift-space with thermal broadening and hydrodynamically simulated baryon fields, we investigate how approximations found in the literature fare. We find that Seljak's 2012 analytical formulae for these bias parameters work surprisingly well in the limit of no thermal broadening and linear redshift-space distortions. We also show that his bη formula is exact in the limit of no thermal broadening. Since introduction of thermal broadening significantly affects its value, we speculate that a combination of large-scale measurements of bη and the small scale flux PDF might be a sensitive probe of the thermal state of the IGM. Lastly, we find that large-scale biases derived from the smoothed total matter field are within 10–20% of those based on hydrodynamical quantities, in line with other measurements in the literature.
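The fluctuating Gunn-Peterson approximation named in this abstract maps a smoothed overdensity δ to transmitted flux, F = exp[-A(1+δ)^β]. A minimal numerical sketch (hypothetical A and β, and a Gaussian random field standing in for an SPH simulation) shows why the large-scale flux bias bδ comes out negative: overdense regions absorb more, so flux fluctuations anti-correlate with density.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4096
delta = rng.normal(0.0, 0.3, n)          # toy 1D overdensity field, not a real simulation
kernel = np.ones(64) / 64                # moving average to mimic large-scale smoothing
delta_s = np.convolve(delta, kernel, mode="same")

A, beta = 0.5, 1.6                       # hypothetical FGPA parameters
tau = A * np.clip(1.0 + delta_s, 1e-3, None) ** beta   # optical depth
flux = np.exp(-tau)                      # transmitted flux
delta_F = flux / flux.mean() - 1.0       # flux fluctuation field

# large-scale flux bias: slope of delta_F against the smoothed density
b_delta = np.polyfit(delta_s, delta_F, 1)[0]
print(b_delta)  # negative: denser regions transmit less flux
```

The slope is only a crude stand-in for the Fourier-space bias the paper measures, but it captures the sign and the role of the smoothing scale.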

  16. A method of orbital analysis for large-scale first-principles simulations

    SciTech Connect (OSTI)

    Ohwaki, Tsukuru; Otani, Minoru; Ozaki, Taisuke

    2014-06-28

An efficient method of calculating the natural bond orbitals (NBOs) based on a truncation of the entire density matrix of a whole system is presented for large-scale density functional theory calculations. The method recovers an orbital picture for O(N) electronic structure methods which directly evaluate the density matrix without using Kohn-Sham orbitals, thus enabling quantitative analysis of chemical reactions in large-scale systems in the language of localized Lewis-type chemical bonds. With the density matrix calculated by either an exact diagonalization or O(N) method, the computational cost is O(1) for the calculation of NBOs associated with a local region where a chemical reaction takes place. As an illustration of the method, we demonstrate how an electronic structure in a local region of interest can be analyzed by NBOs in a large-scale first-principles molecular dynamics simulation for a liquid electrolyte bulk model (propylene carbonate + LiBF4).
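The truncation idea can be illustrated with a toy tight-binding chain (a hypothetical stand-in for a real DFT density matrix, not the authors' implementation): diagonalizing only a local block of the density matrix yields natural-orbital occupations for that region at a cost independent of the full system size.

```python
import numpy as np

# Toy 1D tight-binding chain with nearest-neighbor hopping
n, n_occ = 40, 20
H = np.zeros((n, n))
for i in range(n - 1):
    H[i, i + 1] = H[i + 1, i] = -1.0
eps, C = np.linalg.eigh(H)

# Spin-summed one-particle density matrix built from the occupied orbitals
P = 2.0 * C[:, :n_occ] @ C[:, :n_occ].T

# Truncate to a "local region" (sites 10..19) and diagonalize only that block;
# this O(1)-sized eigenproblem plays the role of the local NBO analysis
block = P[10:20, 10:20]
occ, local_orbitals = np.linalg.eigh(block)
print(occ)  # local occupations, bounded by [0, 2] (Cauchy interlacing)
```

The bound on the occupations follows because the full P has eigenvalues 0 or 2, and eigenvalues of a principal submatrix interlace them.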

  17. High Fidelity Simulations of Large-Scale Wireless Networks (Plus-Up)

    SciTech Connect (OSTI)

    Onunkwo, Uzoma

    2015-11-01

Sandia has built a strong reputation in scalable network simulation and emulation for cyber security studies to protect our nation’s critical information infrastructures. Georgia Tech has a preeminent reputation in academia for excellence in scalable discrete event simulations, with strong emphasis on simulating cyber networks. Many of the experts in this field, such as Dr. Richard Fujimoto, Dr. George Riley, and Dr. Chris Carothers, have strong affiliations with Georgia Tech. The collaborative relationship that we intend to immediately pursue is in high fidelity simulations of practical large-scale wireless networks using the ns-3 simulator via Dr. George Riley. This project will have mutual benefits in bolstering both institutions’ expertise and reputation in the field of scalable simulation for cyber-security studies. This project promises to address high fidelity simulations of large-scale wireless networks. This proposed collaboration is directly in line with Georgia Tech’s goals for developing and expanding the Communications Systems Center, the Georgia Tech Broadband Institute, and Georgia Tech Information Security Center along with its yearly Emerging Cyber Threats Report. At Sandia, this work benefits the defense systems and assessment area with promise for large-scale assessment of cyber security needs and vulnerabilities of our nation’s critical cyber infrastructures exposed to wireless communications.

  18. High performance graphics processor based computed tomography reconstruction algorithms for nuclear and other large scale applications.

    SciTech Connect (OSTI)

Jimenez, Edward Steven

    2013-09-01

The goal of this work is to develop a fast computed tomography (CT) reconstruction algorithm based on graphics processing units (GPU) that achieves significant improvement over traditional central processing unit (CPU) based implementations. The main challenge in developing a CT algorithm that is capable of handling very large datasets is parallelizing the algorithm in such a way that data transfer does not hinder performance of the reconstruction algorithm. General Purpose Graphics Processing Unit (GPGPU) computing is a new technology that the Science and Technology (S&T) community is starting to adopt in many fields where CPU-based computing is the norm. GPGPU programming requires a new approach to algorithm development that utilizes massively multi-threaded environments. Multi-threaded algorithms in general are difficult to optimize since performance bottlenecks occur that are non-existent in single-threaded algorithms, such as memory latencies. If an efficient GPU-based CT reconstruction algorithm can be developed, computational times could be improved by a factor of 20. Additionally, cost benefits will be realized as commodity graphics hardware could potentially replace expensive supercomputers and high-end workstations. This project will take advantage of the CUDA programming environment and attempt to parallelize the task in such a way that multiple slices of the reconstruction volume are computed simultaneously. This work will also take advantage of the GPU memory by utilizing asynchronous memory transfers, GPU texture memory, and (when possible) pinned host memory so that the memory transfer bottleneck inherent to GPGPU is amortized. Additionally, this work will take advantage of GPU-specific hardware (i.e. fast texture memory, pixel-pipelines, hardware interpolators, and varying memory hierarchy) that will allow for additional performance improvements.
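The transfer-amortization point in the abstract can be made with a simple timing model (illustrative numbers, not measurements from this project): with asynchronous copies, the host-to-device transfer of slice k+1 overlaps the computation of slice k, so only the first transfer is fully exposed.

```python
def reconstruction_time(n_slices, t_transfer, t_compute, overlap):
    """Toy model of total runtime for slice-by-slice GPU CT reconstruction."""
    if not overlap:
        # synchronous: copy a slice, then compute it, one after the other
        return n_slices * (t_transfer + t_compute)
    # asynchronous: first copy is exposed, later copies hide under compute
    return t_transfer + (n_slices - 1) * max(t_transfer, t_compute) + t_compute

serial = reconstruction_time(100, 2.0, 5.0, overlap=False)     # 100 * 7 = 700
pipelined = reconstruction_time(100, 2.0, 5.0, overlap=True)   # 2 + 99*5 + 5 = 502
print(serial, pipelined)
```

When compute time exceeds transfer time, the pipelined total approaches pure compute time, which is the sense in which pinned-memory asynchronous transfers "amortize" the bottleneck.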

  19. Using an Energy Performance Based Design-Build Process to Procure a Large Scale Low-Energy Building: Preprint

    SciTech Connect (OSTI)

    Pless, S.; Torcellini, P.; Shelton, D.

    2011-05-01

This paper will review a procurement, acquisition, and contract process of a large-scale, replicable net zero energy building (ZEB). The owners developed and implemented an energy performance based design-build process to procure a 220,000 ft2 office building with contractual requirements to meet demand side energy and LEED goals. We will outline the key procurement steps needed to ensure achievement of our energy efficiency and ZEB goals. The development of a clear and comprehensive Request for Proposals (RFP) that includes specific and measurable energy use intensity goals is critical to ensure energy goals are met in a cost effective manner. The RFP includes a contractual requirement to meet an absolute demand side energy use requirement of 25 kBtu/ft2, with specific calculation methods on what loads are included, how to normalize the energy goal based on increased space efficiency and data center allocation, specific plug loads and schedules, and calculation details on how to account for energy used from the campus hot and chilled water supply. Additional advantages of integrating energy requirements into this procurement process include leveraging the voluntary incentive program, which is a financial incentive based on how well the owner feels the design-build team is meeting the RFP goals.
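The 25 kBtu/ft2 requirement is an energy use intensity (EUI) target. A minimal sketch with hypothetical load numbers (not figures from the paper) shows the kind of accounting the RFP specifies: sum the included demand-side loads, track the data center against its own allocation, and divide by floor area.

```python
# Hypothetical annual demand-side loads, in MMBtu (1 MMBtu = 1000 kBtu)
loads_mmbtu = {
    "lighting": 1200.0,
    "plug_loads": 1800.0,
    "hvac_fans_pumps": 1500.0,
    "campus_hot_chilled_water": 900.0,   # campus supply counts per the RFP
}
data_center_mmbtu = 600.0    # handled under a separate allocation, not in the EUI
area_ft2 = 220_000           # building size from the abstract

eui_kbtu_per_ft2 = sum(loads_mmbtu.values()) * 1000 / area_ft2
print(round(eui_kbtu_per_ft2, 1), "kBtu/ft2 vs the 25 kBtu/ft2 target")
```

With these illustrative loads the building lands just under the target, which is exactly the kind of margin check a design-build team would run against the contractual goal.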

  20. An Inexpensive Aqueous Flow Battery for Large-Scale Electrical Energy Storage Based on Water-Soluble Organic Redox Couples

    SciTech Connect (OSTI)

    Yang, B; Hoober-Burkhardt, L; Wang, F; Prakash, GKS; Narayanan, SR

    2014-05-21

We introduce a novel Organic Redox Flow Battery (ORBAT) for meeting the demanding requirements of cost, eco-friendliness, and durability for large-scale energy storage. ORBAT employs two different water-soluble organic redox couples on the positive and negative side of a flow battery. Redox couples such as quinones are particularly attractive for this application. No precious metal catalyst is needed because of the fast proton-coupled electron transfer processes. Furthermore, in acid media, the quinones exhibit good chemical stability. These properties render quinone-based redox couples very attractive for high-efficiency metal-free rechargeable batteries. We demonstrate the rechargeability of ORBAT with anthraquinone-2-sulfonic acid or anthraquinone-2,6-disulfonic acid on the negative side, and 1,2-dihydrobenzoquinone-3,5-disulfonic acid on the positive side. The ORBAT cell uses a membrane-electrode assembly configuration similar to that used in polymer electrolyte fuel cells. Such a battery can be charged and discharged multiple times at high faradaic efficiency without any noticeable degradation of performance. We show that solubility and mass transport properties of the reactants and products are paramount to achieving high current densities and high efficiency. The ORBAT configuration presents a unique opportunity for developing an inexpensive and sustainable metal-free rechargeable battery for large-scale electrical energy storage. (C) The Author(s) 2014. Published by ECS. This is an open access article distributed under the terms of the Creative Commons Attribution 4.0 License (CC BY, http://creativecommons.org/licenses/by/4.0/), which permits unrestricted reuse of the work in any medium, provided the original work is properly cited. All rights reserved.

  1. Primordial non-Gaussianity in the bispectra of large-scale structure

    SciTech Connect (OSTI)

Tasinato, Gianmassimo; Tellarini, Matteo; Ross, Ashley J.; Wands, David

    2014-03-01

The statistics of large-scale structure in the Universe can be used to probe non-Gaussianity of the primordial density field, complementary to existing constraints from the cosmic microwave background. In particular, the scale dependence of halo bias, which affects the halo distribution at large scales, represents a promising tool for analyzing primordial non-Gaussianity of local form. Future observations, for example, may be able to constrain the trispectrum parameter g_NL that is difficult to study and constrain using the CMB alone. We investigate how galaxy and matter bispectra can distinguish between the two non-Gaussian parameters f_NL and g_NL, whose effects give nearly degenerate contributions to the power spectra. We use a generalization of the univariate bias approach, making the hypothesis that the number density of halos forming at a given position is a function of the local matter density contrast and of its local higher-order statistics. Using this approach, we calculate the halo-matter bispectra and analyze their properties. We determine a connection between the sign of the halo bispectrum on large scales and the parameter g_NL. We also construct a combination of halo and matter bispectra that is sensitive to f_NL, with little contamination from g_NL. We study both the case of single and multiple sources to the primordial gravitational potential, discussing how to extend the concept of stochastic halo bias to the case of bispectra. We use a specific halo mass function to calculate numerically the bispectra in appropriate squeezed limits, confirming our theoretical findings.

  2. Materials Science and Materials Chemistry for Large Scale Electrochemical Energy Storage: From Transportation to Electrical Grid

    SciTech Connect (OSTI)

    Liu, Jun; Zhang, Jiguang; Yang, Zhenguo; Lemmon, John P.; Imhoff, Carl H.; Graff, Gordon L.; Li, Liyu; Hu, Jian Z.; Wang, Chong M.; Xiao, Jie; Xia, Guanguang; Viswanathan, Vilayanur V.; Baskaran, Suresh; Sprenkle, Vincent L.; Li, Xiaolin; Shao, Yuyan; Schwenzer, Birgit

    2013-02-15

Large-scale electrical energy storage has become more important than ever for reducing fossil energy consumption in transportation and for the widespread deployment of intermittent renewable energy in the electric grid. However, significant challenges exist for its applications. Here, the status and challenges are reviewed from the perspective of materials science and materials chemistry in electrochemical energy storage technologies, such as Li-ion batteries, sodium (sulfur and metal halide) batteries, Pb-acid batteries, redox flow batteries, and supercapacitors. Perspectives and approaches are introduced for emerging battery designs and new chemistry combinations to reduce the cost of energy storage devices.

  3. Networks of silicon nanowires: A large-scale atomistic electronic structure analysis

    SciTech Connect (OSTI)

Keleş, Ümit; Bulutay, Ceyhun; Liedke, Bartosz; Heinig, Karl-Heinz

    2013-11-11

    Networks of silicon nanowires possess intriguing electronic properties surpassing the predictions based on quantum confinement of individual nanowires. Employing large-scale atomistic pseudopotential computations, as yet unexplored branched nanostructures are investigated in the subsystem level as well as in full assembly. The end product is a simple but versatile expression for the bandgap and band edge alignments of multiply-crossing Si nanowires for various diameters, number of crossings, and wire orientations. Further progress along this line can potentially topple the bottom-up approach for Si nanowire networks to a top-down design by starting with functionality and leading to an enabling structure.

  4. NREL Offers an Open-Source Solution for Large-Scale Energy Data Collection

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

and Analysis - News Releases | NREL. June 18, 2013. The Energy Department's National Renewable Energy Laboratory (NREL) is launching an open-source system for storing, integrating, and aligning energy-related time-series data. NREL's Energy DataBus is used for tracking and analyzing energy use on its own campus. The system is applicable to other facilities, including anything from a single building to a

  5. Large-Scale Field Study of Landfill Covers at Sandia National Laboratories

    SciTech Connect (OSTI)

    Dwyer, S.F.

    1998-09-01

A large-scale field demonstration comparing final landfill cover designs has been constructed and is currently being monitored at Sandia National Laboratories in Albuquerque, New Mexico. Two conventional designs (a RCRA Subtitle 'D' Soil Cover and a RCRA Subtitle 'C' Compacted Clay Cover) were constructed side-by-side with four alternative cover test plots designed for dry environments. The demonstration is intended to evaluate the various cover designs based on their respective water balance performance, ease and reliability of construction, and cost. This paper presents an overview of the ongoing demonstration.

  6. Testing the big bang: Light elements, neutrinos, dark matter and large-scale structure

    SciTech Connect (OSTI)

Schramm, D.N. (Fermi National Accelerator Lab., Batavia, IL)

    1991-06-01

In this series of lectures, several experimental and observational tests of the standard cosmological model are examined. In particular, detailed discussion is presented regarding nucleosynthesis, the light element abundances and neutrino counting; the dark matter problems; and the formation of galaxies and large-scale structure. Comments will also be made on the possible implications of the recent solar neutrino experimental results for cosmology. An appendix briefly discusses the "17 keV thing" and the cosmological and astrophysical constraints on it. 126 refs., 8 figs., 2 tabs.

  7. Technical and economical aspects of large-scale CO{sub 2} storage in deep oceans

    SciTech Connect (OSTI)

    Sarv, H.; John, J.

    2000-07-01

The authors examined the technical and economical feasibility of two options for large-scale transportation and ocean sequestration of captured CO{sub 2} at depths of 3000 meters or greater. In one case, CO{sub 2} was pumped from a land-based collection center through six parallel-laid subsea pipelines. Another case considered oceanic tanker transport of liquid carbon dioxide to an offshore floating platform or a barge for vertical injection through a large-diameter pipe to the ocean floor. Based on the preliminary technical and economic analyses, tanker transportation and offshore injection through a large-diameter, 3,000-meter vertical pipeline from a floating structure appears to be the best method for delivering liquid CO{sub 2} to deep ocean floor depressions for distances greater than 400 km. Other benefits of offshore injection are high payload capability and ease of relocation. For shorter distances (less than 400 km), CO{sub 2} delivery by subsea pipelines is more cost-effective. Estimated costs for 500-km transport and storage at a depth of 3000 meters by subsea pipelines or tankers were under 2 dollars per ton of stored CO{sub 2}. Their analyses also indicate that large-scale sequestration of captured CO{sub 2} in oceans is technologically feasible and has many commonalities with other strategies for deep-sea natural gas and oil exploration installations.
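The distance tradeoff reported here can be illustrated with a stylized cost model. The coefficients below are hypothetical, chosen only to reproduce the qualitative behavior the abstract describes: pipeline cost grows steadily with distance, while tanker transport carries a large fixed cost (ships, offshore platform, vertical pipe) but a much flatter distance term, so the two curves cross near 400 km.

```python
def pipeline_cost(km):
    # $/tCO2, hypothetical: capital cost scales with pipeline length
    return 0.004 * km

def tanker_cost(km):
    # $/tCO2, hypothetical: large fixed cost, cheap incremental distance
    return 1.2 + 0.001 * km

# first distance at which tanker transport undercuts subsea pipelines
crossover_km = next(d for d in range(0, 1001) if tanker_cost(d) < pipeline_cost(d))
print(crossover_km)
```

With these illustrative coefficients the crossover lands near 400 km and the 500-km tanker cost stays under 2 dollars per ton, consistent with the figures quoted in the abstract, though the real study's cost breakdown is far more detailed.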

  8. Large-scale structure in brane-induced gravity. I. Perturbation theory

    SciTech Connect (OSTI)

    Scoccimarro, Roman

    2009-11-15

We study the growth of subhorizon perturbations in brane-induced gravity using perturbation theory. We solve for the linear evolution of perturbations taking advantage of the symmetry under gauge transformations along the extra-dimension to decouple the bulk equations in the quasistatic approximation, which we argue may be a better approximation at large scales than thought before. We then study the nonlinearities in the bulk and brane equations, concentrating on the workings of the Vainshtein mechanism by which the theory becomes general relativity (GR) at small scales. We show that at the level of the power spectrum, to a good approximation, the effect of nonlinearities in the modified gravity sector may be absorbed into a renormalization of the gravitational constant. Since the relation between the lensing potential and density perturbations is entirely unaffected by the extra physics in these theories, the modified gravity can be described in this approximation by a single function, an effective gravitational constant for nonrelativistic motion that depends on space and time. We develop a resummation scheme to calculate it, and provide predictions for the nonlinear power spectrum. At the level of the large-scale bispectrum, the leading order corrections are obtained by standard perturbation theory techniques, and we show that the suppression of the brane-bending mode leads to characteristic signatures in the non-Gaussianity generated by gravity, generic to models that become GR at small scales through second-derivative interactions. We compare the predictions in this work to numerical simulations in a companion paper.

  9. Environmental performance evaluation of large-scale municipal solid waste incinerators using data envelopment analysis

    SciTech Connect (OSTI)

    Chen, H.-W.; Chang, N.-B.; Chen, J.-C.; Tsai, S.-J.

    2010-07-15

Given limited land resources, incinerators are considered in many countries, such as Japan and Germany, to be the major technology for a waste management scheme capable of dealing with the increasing demand for municipal and industrial solid waste treatment in urban regions. The evaluation of these municipal incinerators in terms of secondary pollution potential, cost-effectiveness, and operational efficiency has become a new focus in the highly interdisciplinary area of production economics, systems analysis, and waste management. This paper aims to demonstrate the application of data envelopment analysis (DEA) - a production economics tool - to evaluate performance-based efficiencies of 19 large-scale municipal incinerators in Taiwan with different operational conditions. A 4-year operational data set from 2002 to 2005 was collected in support of DEA modeling using Monte Carlo simulation to outline the possibility distributions of operational efficiency of these incinerators. Uncertainty analysis using the Monte Carlo simulation provides a balance between simplifications of our analysis and the soundness of capturing the essential random features that complicate solid waste management systems. To cope with future challenges, efforts in the DEA modeling, systems analysis, and prediction of the performance of large-scale municipal solid waste incinerators under normal operation and special conditions were directed toward generating a compromised assessment procedure. Our research findings will eventually lead to the identification of the optimal management strategies for promoting the quality of solid waste incineration, not only in Taiwan, but also elsewhere in the world.
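DEA scores each decision-making unit (here, an incinerator) by its most favorable ratio of weighted outputs to weighted inputs, subject to the constraint that no unit's ratio exceeds one. A toy sketch below (hypothetical inputs and outputs, not the paper's data) approximates the CCR linear program from below by random weight sampling, in the spirit of the paper's Monte Carlo treatment; a production DEA would solve one LP per unit instead.

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical data: 5 incinerators, 2 inputs (cost, staff), 1 output (waste treated)
X = np.array([[3.0, 5.0], [4.0, 4.0], [2.0, 6.0], [5.0, 3.0], [6.0, 6.0]])
Y = np.array([[10.0], [12.0], [8.0], [11.0], [9.0]])

def dea_sampled(X, Y, n_samples=5000):
    """Lower-bound CCR efficiencies by sampling nonnegative weight vectors."""
    eff = np.zeros(len(X))
    for _ in range(n_samples):
        v = rng.random(X.shape[1]) + 1e-6    # input weights
        u = rng.random(Y.shape[1]) + 1e-6    # output weights
        ratios = (Y @ u) / (X @ v)
        ratios /= ratios.max()               # enforce: no unit's ratio exceeds 1
        eff = np.maximum(eff, ratios)        # keep each unit's best score so far
    return eff

eff = dea_sampled(X, Y)
print(eff)  # scores in (0, 1]; units scoring 1 lie on the efficient frontier
```

Because every sample is rescaled so the best ratio equals one, all scores stay in (0, 1] and at least one unit attains exactly 1, mirroring the frontier property of true DEA.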

  10. On the possible origin of the large scale cosmic magnetic field

    SciTech Connect (OSTI)

    Coroniti, F. V.

    2014-01-10

    The possibility that the large scale cosmic magnetic field is directly generated at microgauss, equipartition levels during the reionization epoch by collisionless shocks that are forced to satisfy a downstream shear flow boundary condition is investigated through the development of two modelsthe accretion of an ionized plasma onto a weakly ionized cool galactic disk and onto a cool filament of the cosmic web. The dynamical structure and the physical parameters of the models are synthesized from recent cosmological simulations of the early reionization era after the formation of the first stars. The collisionless shock stands upstream of the disk and filament, and its dissipation is determined by ion inertial length Weibel turbulence. The downstream shear boundary condition is determined by the rotational neutral gas flow in the disk and the inward accretion flow along the filament. The shocked plasma is accelerated to the downstream shear flow velocity by the Weibel turbulence, and the relative shearing motion between the electrons and ions produces a strong, ion inertial scale current sheet that generates an equipartition strength, large scale downstream magnetic field, ?10{sup 6} G for the disk and ?6 10{sup 8} G for the filament. By assumption, hydrodynamic turbulence transports the shear-shock generated magnetic flux throughout the disk and filament volume.

  11. A High-Performance Rechargeable Iron Electrode for Large-Scale Battery-Based Energy Storage

    SciTech Connect (OSTI)

    Manohar, AK; Malkhandi, S; Yang, B; Yang, C; Prakash, GKS; Narayanan, SR

    2012-01-01

    Inexpensive, robust and efficient large-scale electrical energy storage systems are vital to the utilization of electricity generated from solar and wind resources. In this regard, the low cost, robustness, and eco-friendliness of aqueous iron-based rechargeable batteries are particularly attractive and compelling. However, wasteful evolution of hydrogen during charging and the inability to discharge at high rates have limited the deployment of iron-based aqueous batteries. We report here new chemical formulations of the rechargeable iron battery electrode to achieve a ten-fold reduction in the hydrogen evolution rate, an unprecedented charging efficiency of 96%, a high specific capacity of 0.3 Ah/g, and a twenty-fold increase in discharge rate capability. We show that modifying high-purity carbonyl iron by in situ electro-deposition of bismuth leads to substantial inhibition of the kinetics of the hydrogen evolution reaction. The in situ formation of conductive iron sulfides mitigates the passivation by iron hydroxide thereby allowing high discharge rates and high specific capacity to be simultaneously achieved. These major performance improvements are crucial to advancing the prospect of a sustainable large-scale energy storage solution based on aqueous iron-based rechargeable batteries. (C) 2012 The Electrochemical Society. [DOI: 10.1149/2.034208jes] All rights reserved.
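The 96% charging efficiency reported above is a coulombic statement: the fraction of charging current stored in the iron electrode rather than wasted on hydrogen evolution. A back-of-envelope sketch (illustrative charge numbers, not data from the paper) shows how a ten-fold cut in the parasitic hydrogen current moves the efficiency.

```python
def charging_efficiency(q_iron, q_hydrogen):
    """Fraction of total charging current stored as useful iron-electrode charge."""
    return q_iron / (q_iron + q_hydrogen)

q_iron = 1.0                         # normalized useful charge
q_h2_baseline = 0.40                 # hypothetical parasitic charge, unmodified electrode
q_h2_modified = q_h2_baseline / 10   # ten-fold reduction, per the bismuth modification

print(charging_efficiency(q_iron, q_h2_baseline))   # baseline efficiency
print(charging_efficiency(q_iron, q_h2_modified))   # improved efficiency
```

With these illustrative numbers the efficiency rises from roughly 71% to roughly 96%, showing why suppressing hydrogen evolution kinetics dominates the improvement.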

  12. Nonlinear Seismic Correlation Analysis of the JNES/NUPEC Large-Scale Piping System Tests.

    SciTech Connect (OSTI)

    Nie,J.; DeGrassi, G.; Hofmayer, C.; Ali, S.

    2008-06-01

The Japan Nuclear Energy Safety Organization/Nuclear Power Engineering Corporation (JNES/NUPEC) large-scale piping test program has provided valuable new test data on high level seismic elasto-plastic behavior and failure modes for typical nuclear power plant piping systems. The component and piping system tests demonstrated the strain ratcheting behavior that is expected to occur when a pressurized pipe is subjected to cyclic seismic loading. Under a collaboration agreement between the US and Japan on seismic issues, the US Nuclear Regulatory Commission (NRC)/Brookhaven National Laboratory (BNL) performed a correlation analysis of the large-scale piping system tests using detailed state-of-the-art nonlinear finite element models. Techniques are introduced to develop material models that can closely match the test data. The shaking table motions are examined. The analytical results are assessed in terms of the overall system responses and the strain ratcheting behavior at an elbow. The paper concludes with insights about the accuracy of the analytical methods for use in performance assessments of highly nonlinear piping systems under large seismic motions.

  13. Large-scale structure evolution in axisymmetric, compressible free-shear layers

    SciTech Connect (OSTI)

    Aeschliman, D.P.; Baty, R.S.

    1997-05-01

This paper is a description of work-in-progress. It describes Sandia's program to study the basic fluid mechanics of large-scale mixing in unbounded, compressible, turbulent flows, specifically, the turbulent mixing of an axisymmetric compressible helium jet in a parallel, coflowing compressible air freestream. Both jet and freestream velocities are variable over a broad range, providing a wide range of mixing layer Reynolds numbers. Although the convective Mach number, M_c, range is currently limited by the present nozzle design to values of 0.6 and below, straightforward nozzle design changes would permit a wide range of convective Mach number, to well in excess of 1.0. The use of helium allows simulation of a hot jet due to the large density difference, and also aids in obtaining optical flow visualization via schlieren due to the large density gradient in the mixing layer. The work comprises a blend of analysis, experiment, and direct numerical simulation (DNS). Here the authors discuss only the analytical and experimental efforts to observe and describe the evolution of the large-scale structures. The DNS work, used to compute local two-point velocity correlation data, will be discussed elsewhere.

  14. Selection of components for the IDEALHY preferred cycle for the large scale liquefaction of hydrogen

    SciTech Connect (OSTI)

    Quack, H.; Seemann, I.; Klaus, M.; Haberstroh, Ch.; Berstad, D.; Walnum, H. T.; Neksa, P.; Decker, L.

    2014-01-29

    In a future energy scenario in which storage and transport of liquid hydrogen in large quantities will be used, the efficiency of hydrogen liquefaction will be of utmost importance. The goal of the IDEALHY working party is to identify the most promising process for a 50 t/d plant and to select the components with which such a process can be realized. In the first stage the team compared several processes that have been proposed or realized in the past. Based on this information, a process was selected that is thermodynamically most promising and for which it could be assumed that good components already exist or can be developed in the foreseeable future. Main features of the selected process are compression of the feed stream to a relatively high pressure level, o-p conversion inside plate-fin heat exchangers, and expansion turbines in the supercritical region. Precooling to a temperature between 150 and 100 K will be obtained from a mixed-refrigerant cycle similar to the systems used successfully in natural gas liquefaction plants. The final cooling will be produced by two Brayton cycles, both having several expansion turbines in series. The selected overall process still has a number of parameters that can be varied; the optimum, i.e., the final choice, will depend mainly on the quality of the available components. Key components are the expansion turbines of the two Brayton cycles and the main recycle compressor, which may be common to both Brayton cycles. A six-stage turbo-compressor with intercooling between the stages is expected to be the optimum choice here. Each stage may consist of several wheels in series. To make such a highly efficient and cost-effective compressor feasible, one has to choose a refrigerant with a higher molecular weight than helium. The present preferred choice is a mixture of helium and neon with a molecular weight of about 8 kg/kmol. Such an expensive refrigerant requires that the whole refrigeration loop
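The He-Ne composition quoted above (mean molecular weight of about 8 kg/kmol) can be back-calculated from standard molar masses. The sketch below is an illustrative check only, not part of the IDEALHY study; the function name is invented.

```python
# Standard molar masses in kg/kmol (approximate IUPAC values)
M_HE = 4.0026
M_NE = 20.180

def helium_mole_fraction(target_mw):
    """Mole fraction x of helium in a He-Ne mixture whose mean molar mass
    equals target_mw:  x*M_HE + (1 - x)*M_NE = target_mw."""
    return (M_NE - target_mw) / (M_NE - M_HE)

x_he = helium_mole_fraction(8.0)  # about 0.75, i.e. roughly 3 parts He to 1 part Ne
```

A target of 8 kg/kmol thus corresponds to roughly 75 mol% helium.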

  15. Large-Scale Transport Model Uncertainty and Sensitivity Analysis: Distributed Sources in Complex Hydrogeologic Systems

    SciTech Connect (OSTI)

    Sig Drellack, Lance Prothro

    2007-12-01

    The Underground Test Area (UGTA) Project of the U.S. Department of Energy, National Nuclear Security Administration Nevada Site Office is in the process of assessing and developing regulatory decision options based on modeling predictions of contaminant transport from underground testing of nuclear weapons at the Nevada Test Site (NTS). The UGTA Project is attempting to develop an effective modeling strategy that addresses and quantifies multiple components of uncertainty including natural variability, parameter uncertainty, conceptual/model uncertainty, and decision uncertainty in translating model results into regulatory requirements. The modeling task presents multiple unique challenges to the hydrological sciences as a result of the complex fractured and faulted hydrostratigraphy, the distributed locations of sources, the suite of reactive and non-reactive radionuclides, and uncertainty in conceptual models. Characterization of the hydrogeologic system is difficult and expensive because of deep groundwater in the arid desert setting and the large spatial setting of the NTS. Therefore, conceptual model uncertainty is partially addressed through the development of multiple alternative conceptual models of the hydrostratigraphic framework and multiple alternative models of recharge and discharge. Uncertainty in boundary conditions is assessed through development of alternative groundwater fluxes through multiple simulations using the regional groundwater flow model. Calibration of alternative models to heads and measured or inferred fluxes has not proven to provide clear measures of model quality. Therefore, model screening by comparison to independently-derived natural geochemical mixing targets through cluster analysis has also been invoked to evaluate differences between alternative conceptual models. Advancing multiple alternative flow models, sensitivity of transport predictions to parameter uncertainty is assessed through Monte Carlo simulations. The

  16. Solving Large Scale Nonlinear Eigenvalue Problem in Next-Generation Accelerator Design

    SciTech Connect (OSTI)

    Liao, Ben-Shan; Bai, Zhaojun; Lee, Lie-Quan; Ko, Kwok; /SLAC

    2006-09-28

    A number of numerical methods, including inverse iteration, the method of successive linear problems, and the nonlinear Arnoldi algorithm, are studied in this paper to solve a large scale nonlinear eigenvalue problem arising from finite element analysis of resonant frequencies and external Q{sub e} values of a waveguide loaded cavity in the next-generation accelerator design. The authors present a nonlinear Rayleigh-Ritz iterative projection algorithm, NRRIT for short, and demonstrate that it is the most promising approach for a model-scale cavity design. The NRRIT algorithm is an extension of the nonlinear Arnoldi algorithm due to Voss. Computational challenges of solving such a nonlinear eigenvalue problem for a full scale cavity design are outlined.
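The method of successive linear problems mentioned above linearizes T(λ)x = 0 at each step: solve the linear generalized eigenproblem T(λ_k)x = μT'(λ_k)x and subtract the smallest-magnitude μ from λ_k. The minimal 2x2 sketch below (problem data and helper names are invented for illustration) is not the NRRIT algorithm or the paper's cavity model.

```python
import math

def solve_gen_eig_2x2(A, B):
    """Eigenvalues mu of the 2x2 generalized problem A x = mu B x,
    i.e. the roots of det(A - mu B) = 0, a quadratic in mu."""
    (a11, a12), (a21, a22) = A
    (b11, b12), (b21, b22) = B
    c2 = b11 * b22 - b12 * b21                       # mu^2 coefficient
    c1 = -(a11 * b22 + a22 * b11) + (a12 * b21 + a21 * b12)  # mu coefficient
    c0 = a11 * a22 - a12 * a21                       # constant term
    root = math.sqrt(c1 * c1 - 4.0 * c2 * c0)        # sketch assumes real eigenvalues
    return [(-c1 + root) / (2.0 * c2), (-c1 - root) / (2.0 * c2)]

def mslp(T, Tprime, lam0, tol=1e-12, maxit=50):
    """Method of successive linear problems for T(lam) x = 0:
    repeatedly solve T(lam_k) x = mu T'(lam_k) x and update
    lam_{k+1} = lam_k - mu using the smallest-magnitude mu."""
    lam = lam0
    for _ in range(maxit):
        mu = min(solve_gen_eig_2x2(T(lam), Tprime(lam)), key=abs)
        lam -= mu
        if abs(mu) < tol:
            break
    return lam

# Toy quadratic eigenproblem T(lam) = lam^2 I - diag(1, 4); eigenvalues +-1, +-2.
lam = mslp(lambda l: [[l * l - 1.0, 0.0], [0.0, l * l - 4.0]],
           lambda l: [[2.0 * l, 0.0], [0.0, 2.0 * l]],
           1.5)  # converges to the eigenvalue 1.0
```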

  17. Large-scale production of anhydrous nitric acid and nitric acid solutions of dinitrogen pentoxide

    DOE Patents [OSTI]

    Harrar, Jackson E.; Quong, Roland; Rigdon, Lester P.; McGuire, Raymond R.

    2001-01-01

    A method and apparatus are disclosed for a large scale, electrochemical production of anhydrous nitric acid and N.sub.2 O.sub.5. The method includes oxidizing a solution of N.sub.2 O.sub.4 /aqueous-HNO.sub.3 at the anode, while reducing aqueous HNO.sub.3 at the cathode, in a flow electrolyzer constructed of special materials. N.sub.2 O.sub.4 is produced at the cathode and may be separated and recycled as a feedstock for use in the anolyte. The process is controlled by regulating the electrolysis current until the desired products are obtained. The chemical compositions of the anolyte and catholyte are monitored by measurement of the solution density and the concentrations of N.sub.2 O.sub.4.

  18. Large-Scale Computational Screening of Zeolites for Ethane/Ethene Separation

    SciTech Connect (OSTI)

    Kim, J; Lin, LC; Martin, RL; Swisher, JA; Haranczyk, M; Smit, B

    2012-08-14

    Large-scale computational screening of thirty thousand zeolite structures was conducted to find optimal structures for separation of ethane/ethene mixtures. Efficient grand canonical Monte Carlo (GCMC) simulations were performed with graphics processing units (GPUs) to obtain pure component adsorption isotherms for both ethane and ethene. We have utilized the ideal adsorbed solution theory (IAST) to obtain the mixture isotherms, which were used to evaluate the performance of each zeolite structure based on its working capacity and selectivity. In our analysis, we have determined that specific arrangements of zeolite framework atoms create sites for the preferential adsorption of ethane over ethene. The majority of optimal separation materials can be identified by utilizing this knowledge, and screening structures for the presence of this feature will enable efficient selection of promising candidate materials for ethane/ethene separation prior to performing molecular simulations.
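The IAST step above, combining pure-component isotherms into a mixture prediction, can be sketched for a binary mixture of single-site Langmuir isotherms, for which the reduced grand potential has the closed form psi(p) = q_sat ln(1 + b p). The parameters and function names below are hypothetical, not the paper's GCMC-derived isotherms.

```python
import math

def langmuir_psi(q_sat, b, p):
    """Reduced grand potential of a single-site Langmuir isotherm,
    psi(p) = q_sat * ln(1 + b p)."""
    return q_sat * math.log(1.0 + b * p)

def iast_binary(iso1, iso2, y1, P, tol=1e-10):
    """IAST for a binary mixture: find the adsorbed-phase mole fraction x1
    such that psi_1(P y1 / x1) == psi_2(P y2 / (1 - x1)).
    Bisection works because the difference is monotone decreasing in x1."""
    q1, b1 = iso1
    q2, b2 = iso2
    y2 = 1.0 - y1
    f = lambda x1: (langmuir_psi(q1, b1, P * y1 / x1)
                    - langmuir_psi(q2, b2, P * y2 / (1.0 - x1)))
    lo, hi = 1e-12, 1.0 - 1e-12
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(mid) > 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Equal saturation loadings, component 1 adsorbs 5x more strongly:
x1 = iast_binary((1.0, 5.0), (1.0, 1.0), y1=0.5, P=1.0)  # 5/6, selectivity b1/b2 = 5
```

For equal saturation loadings the result reduces to the analytic ratio b1 y1 / x1 = b2 y2 / x2, so the bisection answer can be checked by hand.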

  19. Large-scale exploratory tests of sodium/limestone concrete interactions. [LMFBR

    SciTech Connect (OSTI)

    Randich, E.; Smaardyk, J.E.; Acton, R.U.

    1983-02-01

    Eleven large-scale tests examining the interaction of molten sodium and limestone (calcite) concrete were performed. The tests typically used between 100 and 200 kg of sodium at temperatures between 723 K and 973 K and a total sodium/concrete contact area of approx. 1.0 m/sup 2/. The results show that energetic reactions can occur between sodium and limestone concrete. Delay times of less than 30 minutes were observed before the onset of the energetic phase. Not all tests exhibited energetic reactions and the results indicate that there is a sodium temperature threshold of 723 K to 773 K which is necessary to initiate the energetic phase. Maximum heat fluxes during the energetic phase were measured at 3.6 x 10/sup 5/ J/m/sup 2/-s. Maximum penetration rates were 4 mm/min. Total concrete erosion varied from 1 to 15 cm.

  20. Measurement of the large-scale anisotropy of the cosmic background radiation at 3mm

    SciTech Connect (OSTI)

    Epstein, G.L.

    1983-12-01

    A balloon-borne differential radiometer has measured the large-scale anisotropy of the cosmic background radiation (CBR) with high sensitivity. The antenna temperature dipole anisotropy at 90 GHz (3 mm wavelength) is 2.82 +- 0.19 mK, corresponding to a thermodynamic anisotropy of 3.48 +- mK for a 2.7 K blackbody CBR. The dipole direction, 11.3 +- 0.1 hours right ascension and -5.7/sup 0/ +- 1.8/sup 0/ declination, agrees well with measurements at other frequencies. Calibration error dominates magnitude uncertainty, with statistical errors on dipole terms being under 0.1 mK. No significant quadrupole power is found, placing a 90% confidence-level upper limit of 0.27 mK on the RMS thermodynamic quadrupolar anisotropy. 22 figures, 17 tables.
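The antenna-to-thermodynamic conversion implied by the quoted numbers follows from the standard Planck-spectrum correction dT_thermo = dT_ant (e^x - 1)^2 / (x^2 e^x) with x = h nu / (k T). The quick check below (function name invented) reproduces the abstract's ~3.48 mK from the 2.82 mK antenna-temperature dipole; small differences come from the constants assumed.

```python
import math

def antenna_to_thermo(dT_antenna_mK, freq_hz, T_cmb=2.7):
    """Convert an antenna-temperature anisotropy to a thermodynamic
    anisotropy for a blackbody at T_cmb, using the Planck correction
    dT_thermo = dT_ant * (e^x - 1)^2 / (x^2 e^x), x = h*nu / (k*T)."""
    h = 6.62607015e-34  # Planck constant, J s
    k = 1.380649e-23    # Boltzmann constant, J / K
    x = h * freq_hz / (k * T_cmb)
    factor = math.expm1(x) ** 2 / (x * x * math.exp(x))
    return dT_antenna_mK * factor

dT = antenna_to_thermo(2.82, 90e9)  # about 3.47 mK, consistent with the quoted 3.48 mK
```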

  1. Detecting and mitigating abnormal events in large scale networks: budget constrained placement on smart grids

    SciTech Connect (OSTI)

    Santhi, Nandakishore; Pan, Feng

    2010-10-19

    Several scenarios exist in the modern interconnected world which call for an efficient network interdiction algorithm. Applications are varied, including various monitoring and load shedding applications on large smart energy grids, computer network security, preventing the spread of Internet worms and malware, policing international smuggling networks, and controlling the spread of diseases. In this paper we consider some natural network optimization questions related to the budget constrained interdiction problem over general graphs, specifically focusing on the sensor/switch placement problem for large-scale energy grids. Many of these questions turn out to be computationally hard to tackle. We present a particular form of the interdiction question which is practically relevant and which we show to be computationally tractable. A polynomial-time algorithm is presented for solving this problem.

  2. Infrared spectroscopy of large scale single layer graphene on self assembled organic monolayer

    SciTech Connect (OSTI)

    Woo Kim, Nak; Youn Kim, Joo; Lee, Chul; Choi, E. J.; Jin Kim, Sang; Hee Hong, Byung

    2014-01-27

    We study the effect of a self-assembled monolayer (SAM) organic substrate on large scale single layer graphene using infrared transmission measurements on Graphene/SAM/SiO{sub 2}/Si composite samples. From the Drude weight of the chemically inert CH{sub 3}-SAM, the electron-donating NH{sub 2}-SAM, and the SAM-less graphene, we determine the carrier density doped into graphene by the three sources—the SiO{sub 2} substrate, gas adsorption, and the functional groups of the SAMs—separately. The SAM treatment leads to a low carrier density N ∼ 4 × 10{sup 11} cm{sup −2} by blocking the dominant SiO{sub 2}-driven doping. Carrier scattering increases, rather than decreases, with the SAM treatment; nevertheless, the transport mobility is improved due to the reduced carrier doping.

  3. Efficient preconditioning of the electronic structure problem in large scale ab initio molecular dynamics simulations

    SciTech Connect (OSTI)

    Schiffmann, Florian; VandeVondele, Joost

    2015-06-28

    We present an improved preconditioning scheme for electronic structure calculations based on the orbital transformation method. First, a preconditioner is developed which includes information from the full Kohn-Sham matrix but avoids computationally demanding diagonalisation steps in its construction. This reduces the computational cost of its construction, eliminating a bottleneck in large scale simulations, while maintaining rapid convergence. In addition, a modified form of Hotelling’s iterative inversion is introduced to replace the exact inversion of the preconditioner matrix. This method is highly effective during molecular dynamics (MD), as the solution obtained in earlier MD steps is a suitable initial guess. Filtering small elements during sparse matrix multiplication leads to linear scaling inversion, while retaining robustness, already for relatively small systems. For system sizes ranging from a few hundred to a few thousand atoms, which are typical for many practical applications, the improvements to the algorithm lead to a 2-5 fold speedup per MD step.
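Hotelling's iterative inversion referred to above is the classical Newton-Schulz iteration X_{k+1} = X_k (2I - A X_k), which converges quadratically whenever the spectral radius of I - A X_0 is below 1. The sketch below shows the dense, unmodified textbook form with the standard starting guess X_0 = A^T / (||A||_1 ||A||_inf); it is not the paper's modified variant with small-element filtering for sparse matrices.

```python
def matmul(A, B):
    """Dense matrix product of nested-list matrices."""
    n, m, p = len(A), len(B), len(B[0])
    return [[sum(A[i][k] * B[k][j] for k in range(m)) for j in range(p)]
            for i in range(n)]

def hotelling_inverse(A, iters=12):
    """Hotelling (Newton-Schulz) iteration X <- X (2I - A X) for A^{-1}.
    X0 = A^T / (||A||_1 ||A||_inf) guarantees convergence; each step
    roughly doubles the number of correct digits."""
    n = len(A)
    norm1 = max(sum(abs(A[i][j]) for i in range(n)) for j in range(n))
    norminf = max(sum(abs(A[i][j]) for j in range(n)) for i in range(n))
    X = [[A[j][i] / (norm1 * norminf) for j in range(n)] for i in range(n)]
    twoI = [[2.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    for _ in range(iters):
        AX = matmul(A, X)
        R = [[twoI[i][j] - AX[i][j] for j in range(n)] for i in range(n)]
        X = matmul(X, R)
    return X

A_inv = hotelling_inverse([[4.0, 1.0], [1.0, 3.0]])  # close to [[3, -1], [-1, 4]] / 11
```

In the paper's MD setting the previous step's inverse serves as X0, which is why the iteration is so effective there; dropping small elements during the multiplications is what makes it linear scaling.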

  4. A review of large-scale LNG spills : experiment and modeling.

    SciTech Connect (OSTI)

    Luketa-Hanlin, Anay Josephine

    2005-04-01

    The prediction of the possible hazards associated with the storage and transportation of liquefied natural gas (LNG) by ship has motivated a substantial number of experimental and analytical studies. This paper reviews the experimental and analytical work performed to date on large-scale spills of LNG. Specifically, experiments on the dispersion of LNG, as well as experiments of LNG fires from spills on water and land are reviewed. Explosion, pool boiling, and rapid phase transition (RPT) explosion studies are described and discussed, as well as models used to predict dispersion and thermal hazard distances. Although there have been significant advances in understanding the behavior of LNG spills, technical knowledge gaps to improve hazard prediction are identified. Some of these gaps can be addressed with current modeling and testing capabilities. A discussion of the state of knowledge and recommendations to further improve the understanding of the behavior of LNG spills on water is provided.

  5. Drivers and barriers to e-invoicing adoption in Greek large scale manufacturing industries

    SciTech Connect (OSTI)

    Marinagi, Catherine; Trivellas, Panagiotis; Reklitis, Panagiotis; Skourlas, Christos

    2015-02-09

    This paper attempts to investigate the drivers and barriers that large-scale Greek manufacturing industries experience in adopting electronic invoices (e-invoices), based on three case studies with organizations having international presence in many countries. The study focuses on the drivers that may affect the increase of the adoption and use of e-invoicing, including customers' demand for e-invoices, and sufficient know-how and adoption of e-invoicing in organizations. In addition, the study reveals important barriers that prevent the expansion of e-invoicing, such as suppliers' reluctance to implement e-invoicing, and IT infrastructure incompatibilities. Other issues examined by this study include the observed benefits from e-invoicing implementation, and the financial priorities of the organizations assumed to be supported by e-invoicing.

  6. Aerosols released during large-scale integral MCCI tests in the ACE Program

    SciTech Connect (OSTI)

    Fink, J.K.; Thompson, D.H.; Spencer, B.W.; Sehgal, B.R.

    1992-04-01

    As part of the internationally sponsored Advanced Containment Experiments (ACE) program, seven large-scale experiments on molten core concrete interactions (MCCIs) have been performed at Argonne National Laboratory. One of the objectives of these experiments is to collect and characterize all the aerosols released from the MCCIs. Aerosols released from experiments using four types of concrete (siliceous, limestone/common sand, serpentine, and limestone/limestone) and a range of metal oxidation for both BWR and PWR reactor core material have been collected and characterized. Release fractions were determined for UO{sub 2}, Zr, the fission products BaO, SrO, La{sub 2}O{sub 3}, CeO{sub 2}, MoO{sub 2}, Te, and Ru, and the control materials Ag, In, and B{sub 4}C. Release fractions of UO{sub 2} and the fission products other than Te were small in all tests. However, release of control materials was significant.

  8. Molecular Dynamics Simulations from SNL's Large-scale Atomic/Molecular Massively Parallel Simulator (LAMMPS)

    DOE Data Explorer [Office of Scientific and Technical Information (OSTI)]

    Plimpton, Steve; Thompson, Aidan; Crozier, Paul

    LAMMPS (http://lammps.sandia.gov/index.html) stands for Large-scale Atomic/Molecular Massively Parallel Simulator and is a code that can be used to model atoms or, as the LAMMPS website says, as a parallel particle simulator at the atomic, meso, or continuum scale. This Sandia-based website provides a long list of animations from large simulations. These were created using different visualization packages to read LAMMPS output, and each one provides the name of the PI and a brief description of the work done or visualization package used. See also the static images produced from simulations at http://lammps.sandia.gov/pictures.html The foundation paper for LAMMPS is: S. Plimpton, Fast Parallel Algorithms for Short-Range Molecular Dynamics, J Comp Phys, 117, 1-19 (1995), but the website also lists other papers describing contributions to LAMMPS over the years.

  9. Aerodynamic force measurement on a large-scale model in a short duration test facility

    SciTech Connect (OSTI)

    Tanno, H.; Kodera, M.; Komuro, T.; Sato, K.; Takahasi, M.; Itoh, K.

    2005-03-01

    A force measurement technique has been developed for large-scale aerodynamic models with a short test time. The technique is based on direct acceleration measurements, with miniature accelerometers mounted on a test model suspended by wires. Measuring acceleration at two different locations, the technique can eliminate oscillations from natural vibration of the model. The technique was used for drag force measurements on a 3 m long supersonic combustor model in the HIEST free-piston driven shock tunnel. A time resolution of 350 {mu}s is guaranteed during measurements, which is sufficient for the millisecond-order test times in HIEST. To evaluate measurement reliability and accuracy, measured values were compared with results from a three-dimensional Navier-Stokes numerical simulation. The difference between measured values and numerical simulation values was less than 5%. We conclude that this measurement technique is sufficiently reliable for measuring aerodynamic force within test durations of 1 ms.
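The two-location idea can be illustrated with a synthetic signal: if the dominant natural vibration appears with opposite phase at the two accelerometers (a simplifying assumption; the paper's actual mode-shape weighting is not given here), averaging the two signals cancels the oscillation, and F = ma recovers the drag force. All numbers below are invented for illustration.

```python
import math

def rigid_body_accel(a_fore, a_aft):
    """Average the fore and aft accelerometer signals. Under the assumed
    opposite-phase bending mode, the vibration terms cancel and only the
    rigid-body acceleration remains."""
    return [0.5 * (f + b) for f, b in zip(a_fore, a_aft)]

mass = 150.0    # kg, hypothetical model mass
a_drag = -3.5   # m/s^2, rigid-body deceleration due to drag

# 1 ms record sampled every 10 us, with a 500 Hz bending vibration
ts = [i * 1e-5 for i in range(100)]
vib = [2.0 * math.sin(2.0 * math.pi * 500.0 * t) for t in ts]
a_fore = [a_drag + v for v in vib]   # vibration in phase at the fore sensor
a_aft = [a_drag - v for v in vib]    # and in anti-phase at the aft sensor

a_rigid = rigid_body_accel(a_fore, a_aft)      # vibration cancels exactly
drag_force = [mass * a for a in a_rigid]       # F = m a, about -525 N throughout
```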

  10. Observed large-scale structures and diabatic heating and drying profiles during TWP-ICE

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Xie, Shaocheng; Hume, Timothy; Jakob, Christian; Klein, Stephen A.; McCoy, Renata B.; Zhang, Minghua

    2010-01-01

    This study documents the characteristics of the large-scale structures and diabatic heating and drying profiles observed during the Tropical Warm Pool–International Cloud Experiment (TWP-ICE), which was conducted in January–February 2006 in Darwin during the northern Australian monsoon season. The examined profiles exhibit significant variations between four distinct synoptic regimes that were observed during the experiment. The active monsoon period is characterized by strong upward motion and large advective cooling and moistening throughout the entire troposphere, while the suppressed and clear periods are dominated by moderate midlevel subsidence and significant low- to midlevel drying through horizontal advection. The midlevel subsidence and horizontal dry advection are largely responsible for the dry midtroposphere observed during the suppressed period and limit the growth of clouds to low levels. During the break period, upward motion and advective cooling and moistening located primarily at midlevels dominate together with weak advective warming and drying (mainly from horizontal advection) at low levels. The variations of the diabatic heating and drying profiles with the different regimes are closely associated with differences in the large-scale structures, cloud types, and rainfall rates between the regimes. Strong diabatic heating and drying are seen throughout the troposphere during the active monsoon period while they are moderate and only occur above 700 hPa during the break period. The diabatic heating and drying tend to have their maxima at low levels during the suppressed periods. Furthermore, the diurnal variations of these structures between monsoon systems, continental/coastal, and tropical inland-initiated convective systems are also examined.