National Library of Energy BETA

Sample records for large-scale scientific computing

  1. Large Scale Computing and Storage Requirements for Advanced Scientific Computing Research: Target 2014

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)


  2. (Sparsity in large scale scientific computation)

    SciTech Connect (OSTI)

    Ng, E.G.

    1990-08-20

    The traveler attended a conference organized by the 1990 IBM Europe Institute at Oberlech, Austria. The theme of the conference was sparsity in large-scale scientific computation. The conference featured many presentations and other activities of direct interest to ORNL research programs on sparse matrix computations and parallel computing, which are funded by the Applied Mathematical Sciences Subprogram of the DOE Office of Energy Research. The traveler presented a talk on his work at ORNL on the development of efficient algorithms for solving sparse nonsymmetric systems of linear equations. The traveler held numerous technical discussions on issues of direct relevance to the research programs on sparse matrix computations and parallel computing at ORNL.

  3. Large Scale Computing and Storage Requirements for Advanced Scientific Computing Research: Target 2014

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    An ASCR / NERSC Review, January 5-6, 2011. Final Report: Large Scale Computing and Storage Requirements for Advanced Scientific Computing Research, Report of the Joint ASCR / NERSC Workshop conducted January 5-6, 2011.

  4. DOE's Office of Science Seeks Proposals for Expanded Large-Scale Scientific Computing

    Broader source: Energy.gov [DOE]

    WASHINGTON, D.C. -- Secretary of Energy Samuel W. Bodman announced today that DOE’s Office of Science is seeking proposals to support innovative, large-scale computational science projects to...

  5. Large Scale Production Computing and Storage Requirements for High Energy Physics: Target 2017

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    ... Energy's Office of High Energy Physics (HEP), Office of Advanced Scientific ...

  6. Large Scale Production Computing and Storage Requirements for Fusion Energy Sciences: Target 2017

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    The NERSC Program Requirements Review "Large Scale Production Computing and Storage Requirements for Fusion Energy Sciences" is organized by the Department of Energy's Office of Fusion Energy Sciences (FES), Office of Advanced Scientific Computing Research (ASCR), and the National Energy Research Scientific Computing Center (NERSC).

  7. Large Scale Production Computing and Storage Requirements for High Energy Physics: Target 2017

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    The NERSC Program Requirements Review "Large Scale Computing and Storage Requirements for High Energy Physics" is organized by the Department of Energy's Office of High Energy Physics (HEP), Office of Advanced Scientific Computing Research (ASCR), and the National Energy Research Scientific Computing Center (NERSC).

  8. Supporting large-scale computational science

    SciTech Connect (OSTI)

    Musick, R., LLNL

    1998-02-19

    Business needs have driven the development of commercial database systems since their inception. As a result, there has been a strong focus on supporting many users, minimizing the potential corruption or loss of data, and maximizing performance metrics like transactions per second, or TPC-C and TPC-D results. It turns out that these optimizations have little to do with the needs of the scientific community, and in particular have little impact on improving the management and use of large-scale high-dimensional data. At the same time, there is an unanswered need in the scientific community for many of the benefits offered by a robust DBMS. For example, tying an ad-hoc query language such as SQL together with a visualization toolkit would be a powerful enhancement to current capabilities. Unfortunately, there has been little emphasis or discussion in the VLDB community on this mismatch over the last decade. The goal of the paper is to identify the specific issues that need to be resolved before large-scale scientific applications can make use of DBMS products. This topic is addressed in the context of an evaluation of commercial DBMS technology applied to the exploration of data generated by the Department of Energy's Accelerated Strategic Computing Initiative (ASCI). The paper describes the data being generated for ASCI as well as current capabilities for interacting with and exploring this data. The attraction of applying standard DBMS technology to this domain is discussed, as well as the technical and business issues that currently make this an infeasible solution.

  9. Large Scale Production Computing and Storage Requirements for Advanced Scientific Computing Research: Target 2017

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    This is an invitation-only review organized by the Department of Energy's Office of Advanced Scientific Computing Research (ASCR) and NERSC. The general goal is to determine the production high-performance computing, storage, and services that will be needed for ASCR to achieve its science goals through 2017.

  10. Large Scale Computing and Storage Requirements for Nuclear Physics: Target 2014

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    May 26-27, 2011, Hyatt Regency Bethesda, One Bethesda Metro Center (7400 Wisconsin Ave), Bethesda, Maryland, USA 20814. Final Report: Large Scale Computing and Storage Requirements for Nuclear Physics Research, Report of the Joint NP / NERSC Workshop Conducted May 26-27, 2011, Bethesda, MD. Sponsored by the U.S. Department of Energy Office of Science, Office of Advanced Scientific Computing Research.

  11. DOE's Office of Science Awards 18 Million Hours of Supercomputing Time to 15 Teams for Large-Scale Scientific Computing

    Office of Energy Efficiency and Renewable Energy (EERE)

    WASHINGTON, D.C. - Secretary of Energy Samuel W. Bodman announced today that DOE's Office of Science has awarded a total of 18.2 million hours of computing time on some of the world's most powerful...

  12. Large Scale Production Computing and Storage Requirements for Basic Energy Sciences: Target 2017

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    This is an invitation-only review organized by the Department of Energy's Office of Basic Energy Sciences (BES), Office of Advanced Scientific Computing Research (ASCR), and the National Energy Research Scientific Computing Center (NERSC). The goal is to determine the production high-performance computing, storage, and services that will be needed for BES to achieve its science goals through 2017.

  13. Large Scale Production Computing and Storage Requirements for Fusion Energy Sciences: Target 2017

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    The NERSC Program Requirements Review "Large Scale Production Computing and ...

  14. Large-Scale Computational Fluid Dynamics

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)


  15. Large Scale Production Computing and Storage Requirements for Nuclear Physics: Target 2017

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    This invitation-only review is organized by the Department of Energy's Offices of Nuclear Physics (NP) and Advanced Scientific Computing Research (ASCR) and by NERSC. The goal is to determine the production high-performance computing, storage, and services that will be needed for NP to achieve its science goals through 2017.

  16. Large Scale Production Computing and Storage Requirements for Biological and Environmental Research: Target 2017

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    September 11-12, 2012, Hilton Rockville Hotel and Executive Meeting Center, 1750 Rockville Pike, Rockville, MD 20852-1699, TEL: 1-301-468-1100. Sponsored by the U.S. Department of Energy Office of Science, Office of Advanced Scientific Computing Research (ASCR), Office of Biological and Environmental Research (BER), and the National Energy Research Scientific Computing Center (NERSC).

  17. Large Scale Computing and Storage Requirements for Basic Energy Sciences: Target 2014

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Final Report: Large Scale Computing and Storage Requirements for Basic Energy Sciences, Report of the Joint BES / ASCR / NERSC Workshop conducted February 9-10, 2010. Workshop Agenda: The agenda for this workshop, including presentation times and speaker information, is presented here. Workshop Presentations are also available.

  18. Large Scale Computing and Storage Requirements for High Energy Physics

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    An HEP / ASCR / NERSC Workshop, November 12-13, 2009. Report: Large Scale Computing and Storage Requirements for High Energy Physics, Report of the Joint HEP / ASCR / NERSC Workshop conducted Nov. 12-13, 2009.

  19. Harvey Wasserman: Large Scale Computing and Storage Requirements for High Energy Physics Research: Target 2017

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Large Scale Computing and Storage Requirements for High Energy Physics Research: Target 2017. www.nersc.gov/science/requirements/HEP

  20. Large Scale Computing and Storage Requirements for Nuclear Physics Research

    SciTech Connect (OSTI)

    Gerber, Richard A.; Wasserman, Harvey J.

    2012-03-02

    The National Energy Research Scientific Computing Center (NERSC) is the primary computing center for the DOE Office of Science, serving approximately 4,000 users and hosting some 550 projects that involve nearly 700 codes for a wide variety of scientific disciplines. In addition to large-scale computing resources, NERSC provides critical staff support and expertise to help scientists make the most efficient use of these resources to advance the scientific mission of the Office of Science. In May 2011, NERSC, DOE's Office of Advanced Scientific Computing Research (ASCR) and DOE's Office of Nuclear Physics (NP) held a workshop to characterize HPC requirements for NP research over the next three to five years. The effort is part of NERSC's continuing involvement in anticipating future user needs and deploying necessary resources to meet these demands. The workshop revealed several key requirements, in addition to achieving its goal of characterizing NP computing. The key requirements include: 1. Larger allocations of computational resources at NERSC; 2. Visualization and analytics support; and 3. Support at NERSC for the unique needs of experimental nuclear physicists. This report expands upon these key points and adds others. The results are based upon representative samples, called "case studies," of the needs of science teams within NP. The case studies were prepared by NP workshop participants and contain a summary of science goals, methods of solution, current and future computing requirements, and special software and support needs. Participants were also asked to describe their strategy for computing in the highly parallel, "multi-core" environment that is expected to dominate HPC architectures over the next few years. The report also includes a section with NERSC responses to the workshop findings. NERSC has many initiatives already underway that address key workshop findings and all of the action items are aligned with NERSC strategic plans.

  1. Advanced Large-scale Integrated Computational Environment

    Energy Science and Technology Software Center (OSTI)

    1998-10-27

    The ALICE Memory Snooper is a software applications programming interface (API) and library for use in implementing computational steering systems. It allows distributed memory parallel programs to publish variables in the computation that may be accessed over the Internet. In this way, users can examine and even change the variables in their running application remotely. The API and library ensure the consistency of the variables across the distributed memory system.

  2. Large Scale Computing and Storage Requirements for Biological and Environmental Research: Target 2014

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    A BER / ASCR / NERSC Workshop, May 7-8, 2009. Final Report: Large Scale Computing and Storage Requirements for Biological and Environmental Research, Report of the Joint BER / NERSC Workshop Conducted May 7-8, 2009, Rockville, MD. Goals: This workshop was jointly organized by the Department of Energy's Office of Biological & Environmental Research (BER), Office of Advanced Scientific Computing Research (ASCR), and NERSC.

  3. Large Scale Computing and Storage Requirements for High Energy Physics

    SciTech Connect (OSTI)

    Gerber, Richard A.; Wasserman, Harvey

    2010-11-24

    The National Energy Research Scientific Computing Center (NERSC) is the leading scientific computing facility for the Department of Energy's Office of Science, providing high-performance computing (HPC) resources to more than 3,000 researchers working on about 400 projects. NERSC provides large-scale computing resources and, crucially, the support and expertise needed for scientists to make effective use of them. In November 2009, NERSC, DOE's Office of Advanced Scientific Computing Research (ASCR), and DOE's Office of High Energy Physics (HEP) held a workshop to characterize the HPC resources needed at NERSC to support HEP research through the next three to five years. The effort is part of NERSC's legacy of anticipating users' needs and deploying resources to meet those demands. The workshop revealed several key points, in addition to achieving its goal of collecting and characterizing computing requirements. The chief findings: (1) Science teams need access to a significant increase in computational resources to meet their research goals; (2) Research teams need to be able to read, write, transfer, store online, archive, analyze, and share huge volumes of data; (3) Science teams need guidance and support to implement their codes on future architectures; and (4) Projects need predictable, rapid turnaround of their computational jobs to meet mission-critical time constraints. This report expands upon these key points and includes others. It also presents a number of case studies as representative of the research conducted within HEP. Workshop participants were asked to codify their requirements in this case study format, summarizing their science goals, methods of solution, current and three-to-five year computing requirements, and software and support needs. Participants were also asked to describe their strategy for computing in the highly parallel, multi-core environment that is expected to dominate HPC architectures over the next few years. The report includes

  4. Parallel Tensor Compression for Large-Scale Scientific Data.

    SciTech Connect (OSTI)

    Kolda, Tamara G.; Ballard, Grey; Austin, Woody Nathan

    2015-10-01

    As parallel computing trends towards the exascale, scientific data produced by high-fidelity simulations are growing increasingly massive. For instance, a simulation on a three-dimensional spatial grid with 512 points per dimension that tracks 64 variables per grid point for 128 time steps yields 8 TB of data. By viewing the data as a dense five-way tensor, we can compute a Tucker decomposition to find inherent low-dimensional multilinear structure, achieving compression ratios of up to 10000 on real-world data sets with negligible loss in accuracy. So that we can operate on such massive data, we present the first-ever distributed-memory parallel implementation for the Tucker decomposition, whose key computations correspond to parallel linear algebra operations, albeit with nonstandard data layouts. Our approach specifies a data distribution for tensors that avoids any tensor data redistribution, either locally or in parallel. We provide accompanying analysis of the computation and communication costs of the algorithms. To demonstrate the compression and accuracy of the method, we apply our approach to real-world data sets from combustion science simulations. We also provide detailed performance results, including parallel performance in both weak and strong scaling experiments.
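
    The core operation the abstract describes, a truncated Tucker (HOSVD) decomposition, can be sketched compactly. A minimal single-node illustration, assuming numpy; the paper's distributed-memory implementation, data layouts, and communication analysis are not reproduced here:

    ```python
    # Hedged sketch: truncated HOSVD (Tucker) compression of a dense tensor.
    # Factor matrices come from the leading left singular vectors of each
    # mode unfolding; the core is the multilinear contraction of the tensor
    # with the factor transposes. Illustration only, not the paper's code.
    import numpy as np

    def unfold(T, mode):
        """Mode-n matricization of tensor T."""
        return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

    def mode_mult(T, M, mode):
        """Multiply tensor T by matrix M along `mode` (n-mode product)."""
        return np.moveaxis(np.tensordot(M, np.moveaxis(T, mode, 0), axes=1), 0, mode)

    def tucker_hosvd(T, ranks):
        factors = [np.linalg.svd(unfold(T, m), full_matrices=False)[0][:, :r]
                   for m, r in enumerate(ranks)]
        core = T
        for m, U in enumerate(factors):
            core = mode_mult(core, U.T, m)
        return core, factors

    # A smooth synthetic 60x60x60 tensor with low multilinear rank.
    x = np.linspace(0.0, 1.0, 60)
    T = np.exp(-np.add.outer(np.add.outer(x, x), x))
    core, factors = tucker_hosvd(T, ranks=(5, 5, 5))

    # Reconstruct and report compression ratio and relative error.
    That = core
    for m, U in enumerate(factors):
        That = mode_mult(That, U, m)
    ratio = T.size / (core.size + sum(U.size for U in factors))
    err = np.linalg.norm(That - T) / np.linalg.norm(T)
    print(f"compression {ratio:.0f}x, relative error {err:.1e}")
    ```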

  5. Large Scale Computing and Storage Requirements for Fusion Energy Sciences: Target 2014

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    An FES / ASCR / NERSC Workshop, August 3-4, 2010. Final Report.

  6. Large Scale Production Computing and Storage Requirements for Advanced Scientific Computing Research: Target 2017

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    This is an invitation-only review organized by the Department of Energy's Office of Advanced ...

  7. Large Scale Production Computing and Storage Requirements for Basic Energy Sciences: Target 2017

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    This is an invitation-only review organized by the Department of Energy's Office of Basic Energy Sciences (BES), Office of Advanced Scientific Computing Research (ASCR), and the ...

  8. Measuring and tuning energy efficiency on large scale high performance computing platforms.

    SciTech Connect (OSTI)

    Laros, James H., III

    2011-08-01

    Recognition of the importance of power in the field of High Performance Computing, whether it be as an obstacle, expense or design consideration, has never been greater and more pervasive. While research has been conducted on many related aspects, there is a stark absence of work focused on large scale High Performance Computing. Part of the reason is the lack of measurement capability currently available on small or large platforms. Typically, research is conducted using coarse methods of measurement such as inserting a power meter between the power source and the platform, or fine grained measurements using custom instrumented boards (with obvious limitations in scale). To collect the measurements necessary to analyze real scientific computing applications at large scale, an in-situ measurement capability must exist on a large scale capability class platform. In response to this challenge, we exploit the unique power measurement capabilities of the Cray XT architecture to gain an understanding of power use and the effects of tuning. We apply these capabilities at the operating system level by deterministically halting cores when idle. At the application level, we gain an understanding of the power requirements of a range of important DOE/NNSA production scientific computing applications running at large scale (thousands of nodes), while simultaneously collecting current and voltage measurements on the hosting nodes. We examine the effects of both CPU and network bandwidth tuning and demonstrate energy savings opportunities of up to 39% with little or no impact on run-time performance. Capturing scale effects in our experimental results was key. Our results provide strong evidence that next generation large-scale platforms should not only approach CPU frequency scaling differently, but could also benefit from the capability to tune other platform components, such as the network, to achieve energy efficient performance.
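
    The in-situ measurement idea generalizes beyond the Cray XT instrumentation the report uses. A minimal sketch, assuming a Linux host that exposes Intel RAPL energy counters through the powercap sysfs interface (a stand-in for the report's vendor-specific facilities; the path and the required permissions vary by system):

    ```python
    # Hedged sketch: estimate average package power over a code region by
    # sampling the RAPL energy counter (microjoules). Requires a Linux
    # kernel with powercap/RAPL support and usually elevated permissions.
    # This illustrates the measurement idea, not the Cray XT mechanism.
    import time

    RAPL = "/sys/class/powercap/intel-rapl:0/energy_uj"  # package-0 counter

    def read_energy_uj():
        with open(RAPL) as f:
            return int(f.read())

    def average_power(fn, *args):
        e0, t0 = read_energy_uj(), time.time()
        result = fn(*args)
        e1, t1 = read_energy_uj(), time.time()
        # Counter wraps at max_energy_range_uj; ignored here for short runs.
        return result, (e1 - e0) / 1e6 / (t1 - t0)  # watts

    if __name__ == "__main__":
        _, watts = average_power(sum, range(50_000_000))
        print(f"average package power: {watts:.1f} W")
    ```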

  9. Computational Fluid Dynamics & Large-Scale Uncertainty Quantification for Wind Energy

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)


  10. Advanced Scientific Computing Research

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Discovering, ... The DOE Office of Science's Advanced Scientific Computing Research (ASCR) program ...

  11. Large Scale Computing and Storage Requirements for Basic Energy Sciences Research

    SciTech Connect (OSTI)

    Gerber, Richard; Wasserman, Harvey

    2011-03-31

    The National Energy Research Scientific Computing Center (NERSC) is the leading scientific computing facility supporting research within the Department of Energy's Office of Science. NERSC provides high-performance computing (HPC) resources to approximately 4,000 researchers working on about 400 projects. In addition to hosting large-scale computing facilities, NERSC provides the support and expertise scientists need to effectively and efficiently use HPC systems. In February 2010, NERSC, DOE's Office of Advanced Scientific Computing Research (ASCR) and DOE's Office of Basic Energy Sciences (BES) held a workshop to characterize HPC requirements for BES research through 2013. The workshop was part of NERSC's legacy of anticipating users' future needs and deploying the necessary resources to meet these demands. Workshop participants reached a consensus on several key findings, in addition to achieving the workshop's goal of collecting and characterizing computing requirements. The key requirements for scientists conducting research in BES are: (1) Larger allocations of computational resources; (2) Continued support for standard application software packages; (3) Adequate job turnaround time and throughput; and (4) Guidance and support for using future computer architectures. This report expands upon these key points and presents others. Several "case studies" are included as significant representative samples of the needs of science teams within BES. Research teams' scientific goals, computational methods of solution, current and 2013 computing requirements, and special software and support needs are summarized in these case studies. Also included are researchers' strategies for computing in the highly parallel, "multi-core" environment that is expected to dominate HPC architectures over the next few years. NERSC has strategic plans and initiatives already underway that address key workshop findings. This report includes a brief summary of those relevant to issues

  12. Large-scale computations in analysis of structures

    SciTech Connect (OSTI)

    McCallen, D.B.; Goudreau, G.L.

    1993-09-01

    Computer hardware and numerical analysis algorithms have progressed to a point where many engineering organizations and universities can perform nonlinear analyses on a routine basis. Though much remains to be done in terms of advancement of nonlinear analysis techniques and characterization of nonlinear material constitutive behavior, the technology exists today to perform useful nonlinear analysis for many structural systems. In the current paper, a survey of nonlinear analysis technologies developed and employed for many years on programmatic defense work at the Lawrence Livermore National Laboratory is provided, and ongoing nonlinear numerical simulation projects relevant to the civil engineering field are described.

  13. Large-scale optimization-based non-negative computational framework for diffusion equations: Parallel implementation and performance studies

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Chang, Justin; Karra, Satish; Nakshatrala, Kalyana B.

    2016-07-26

    It is well-known that the standard Galerkin formulation, which is often the formulation of choice under the finite element method for solving self-adjoint diffusion equations, does not meet maximum principles and the non-negative constraint for anisotropic diffusion equations. Recently, optimization-based methodologies that satisfy maximum principles and the non-negative constraint for steady-state and transient diffusion-type equations have been proposed. To date, these methodologies have been tested only on small-scale academic problems. The purpose of this paper is to systematically study the performance of the non-negative methodology in the context of high performance computing (HPC). PETSc and TAO libraries are, respectively, used for the parallel environment and optimization solvers. For large-scale problems, it is important for computational scientists to understand the computational performance of current algorithms available in these scientific libraries. The numerical experiments are conducted on the state-of-the-art HPC systems, and a single-core performance model is used to better characterize the efficiency of the solvers. Furthermore, our studies indicate that the proposed non-negative computational framework for diffusion-type equations exhibits excellent strong scaling for real-world large-scale problems.
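
    The central move, replacing the unconstrained discrete solve with a bound-constrained optimization, can be shown on a toy problem. A minimal serial sketch assuming scipy; the paper's parallel PETSc/TAO machinery is not reproduced, and the spurious undershoots the method targets arise for anisotropic diffusion, so this 1D example only demonstrates the mechanics of the constrained solve:

    ```python
    # Hedged sketch: impose non-negativity on a discrete diffusion solution
    # as a bound constraint in an optimization problem, rather than relying
    # on the discretization to preserve it. scipy stands in for PETSc/TAO.
    import numpy as np
    from scipy.optimize import lsq_linear

    n = 50
    h = 1.0 / (n + 1)
    # P1 finite-element (tridiagonal) stiffness matrix and lumped load for
    # -u'' = sin(3*pi*x) on (0,1) with homogeneous Dirichlet conditions.
    K = (np.diag(np.full(n, 2.0))
         - np.diag(np.ones(n - 1), 1)
         - np.diag(np.ones(n - 1), -1)) / h
    f = h * np.sin(3 * np.pi * np.linspace(h, 1 - h, n))

    u_unconstrained = np.linalg.solve(K, f)              # dips below zero
    u_nonneg = lsq_linear(K, f, bounds=(0.0, np.inf)).x  # u >= 0 enforced

    print("min unconstrained value:", u_unconstrained.min())
    print("min constrained value:  ", u_nonneg.min())    # >= 0
    ```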

  14. Large Scale Computing and Storage Requirements for Fusion Energy Sciences: Target 2017

    SciTech Connect (OSTI)

    Gerber, Richard

    2014-05-02

    The National Energy Research Scientific Computing Center (NERSC) is the primary computing center for the DOE Office of Science, serving approximately 4,500 users working on some 650 projects that involve nearly 600 codes in a wide variety of scientific disciplines. In March 2013, NERSC, DOE's Office of Advanced Scientific Computing Research (ASCR) and DOE's Office of Fusion Energy Sciences (FES) held a review to characterize High Performance Computing (HPC) and storage requirements for FES research through 2017. This report is the result.

  15. Edison Electrifies Scientific Computing

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    NERSC Flips Switch on New Flagship Supercomputer. January 31, 2014. Contact: Margie Wylie, mwylie@lbl.gov, +1 510 486 7421. The National Energy Research Scientific Computing (NERSC) Center recently accepted "Edison," a new flagship supercomputer designed for scientific productivity. Named in honor of American inventor Thomas Alva Edison, the Cray XC30 will be dedicated in a ceremony held at the Department of

  16. Implementation of a multi-threaded framework for large-scale scientific applications

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Sexton-Kennedy, E.; Gartung, Patrick; Jones, C. D.; Lange, David

    2015-05-22

    The CMS experiment has recently completed the development of a multi-threaded capable application framework. In this paper, we will discuss the design, implementation and application of this framework to production applications in CMS. For the 2015 LHC run, this functionality is particularly critical for both our online and offline production applications, which depend on faster turn-around times and a reduced memory footprint relative to earlier releases. These applications are complex codes, each including a large number of physics-driven algorithms. While the framework is capable of running a mix of thread-safe and 'legacy' modules, algorithms running in our production applications need to be thread-safe for optimal use of this multi-threaded framework at a large scale. Towards this end, we discuss the types of changes that were necessary for our algorithms to achieve good performance of our multithreaded applications in a full-scale application. Lastly, performance numbers for what has been achieved for the 2015 run are presented.

  17. Efficient Feature-Driven Visualization of Large-Scale Scientific Data

    SciTech Connect (OSTI)

    Lu, Aidong

    2012-12-12

    Very large, complex scientific data acquired in many research areas creates critical challenges for scientists to understand, analyze, and organize their data. The objective of this project is to expand the feature extraction and analysis capabilities to develop powerful and accurate visualization tools that can assist domain scientists with their requirements in multiple phases of scientific discovery. We have recently developed several feature-driven visualization methods for extracting different data characteristics of volumetric datasets. Our results verify the hypothesis in the proposal and will be used to develop additional prototype systems.

  18. Optimization and large scale computation of an entropy-based moment closure

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Hauck, Cory D.; Hill, Judith C.; Garrett, C. Kristopher

    2015-09-10

    We present computational advances and results in the implementation of an entropy-based moment closure, MN, in the context of linear kinetic equations, with an emphasis on heterogeneous and large-scale computing platforms. Entropy-based closures are known in several cases to yield more accurate results than closures based on standard spectral approximations, such as PN, but the computational cost is generally much higher and often prohibitive. Several optimizations are introduced to improve the performance of entropy-based algorithms over previous implementations. These optimizations include the use of GPU acceleration and the exploitation of the mathematical properties of spherical harmonics, which are used as test functions in the moment formulation. To test the emerging high-performance computing paradigm of communication bound simulations, we present timing results at the largest computational scales currently available. Lastly, these results show, in particular, load balancing issues in scaling the MN algorithm that do not appear for the PN algorithm. We also observe that in weak scaling tests, the ratio in time to solution of MN to PN decreases.

  19. Optimization and large scale computation of an entropy-based moment closure

    SciTech Connect (OSTI)

    Hauck, Cory D.; Hill, Judith C.; Garrett, C. Kristopher

    2015-09-10

    We present computational advances and results in the implementation of an entropy-based moment closure, MN, in the context of linear kinetic equations, with an emphasis on heterogeneous and large-scale computing platforms. Entropy-based closures are known in several cases to yield more accurate results than closures based on standard spectral approximations, such as PN, but the computational cost is generally much higher and often prohibitive. Several optimizations are introduced to improve the performance of entropy-based algorithms over previous implementations. These optimizations include the use of GPU acceleration and the exploitation of the mathematical properties of spherical harmonics, which are used as test functions in the moment formulation. To test the emerging high-performance computing paradigm of communication bound simulations, we present timing results at the largest computational scales currently available. Lastly, these results show, in particular, load balancing issues in scaling the MN algorithm that do not appear for the PN algorithm. We also observe that in weak scaling tests, the ratio in time to solution of MN to PN decreases.
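
    For reference, the entropy-based closure both records describe has a standard generic form: the kinetic density is closed by the entropy minimizer consistent with the tracked moments, which is exponential in the moment basis (details of the authors' formulation may differ):

    ```latex
    % Generic M_N closure: u are moments of f against a polynomial basis m;
    % the closing ansatz minimizes the Maxwell-Boltzmann entropy subject to
    % matching u, giving an exponential of the basis.
    \begin{align*}
      u &= \langle \mathbf{m} f \rangle
         := \int_{\mathbb{S}^2} \mathbf{m}(\Omega)\, f(\Omega)\, d\Omega,\\
      f_u &= \operatorname*{arg\,min}_{g \ge 0}
         \bigl\{ \langle g \log g - g \rangle : \langle \mathbf{m} g \rangle = u \bigr\}
         = \exp\bigl(\boldsymbol{\alpha}(u) \cdot \mathbf{m}(\Omega)\bigr).
    \end{align*}
    % Solving the strictly convex dual problem for alpha(u) in every cell
    % and time step is the dominant cost that motivates the papers' GPU
    % acceleration and spherical-harmonics optimizations.
    ```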

  20. Large-Scale Computational Screening of Zeolites for Ethane/Ethene Separation

    SciTech Connect (OSTI)

    Kim, J; Lin, LC; Martin, RL; Swisher, JA; Haranczyk, M; Smit, B

    2012-08-14

    Large-scale computational screening of thirty thousand zeolite structures was conducted to find optimal structures for separation of ethane/ethene mixtures. Efficient grand canonical Monte Carlo (GCMC) simulations were performed with graphics processing units (GPUs) to obtain pure component adsorption isotherms for both ethane and ethene. We have utilized the ideal adsorbed solution theory (IAST) to obtain the mixture isotherms, which were used to evaluate the performance of each zeolite structure based on its working capacity and selectivity. In our analysis, we have determined that specific arrangements of zeolite framework atoms create sites for the preferential adsorption of ethane over ethene. The majority of optimum separation materials can be identified by utilizing this knowledge, and screening structures for the presence of this feature will enable the efficient selection of promising candidate materials for ethane/ethene separation prior to performing molecular simulations.
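
    The pipeline the abstract describes combines pure-component isotherms through IAST to predict mixture loadings. A minimal sketch for a binary mixture with single-site Langmuir isotherms, assuming scipy; all parameter values are invented for illustration, not the paper's GCMC-fitted zeolite data:

    ```python
    # Hedged sketch of binary IAST with Langmuir pure-component isotherms
    # q_i(P) = qsat_i * b_i * P / (1 + b_i * P). IAST equates the spreading
    # pressures pi_i(P0_i) = qsat_i * ln(1 + b_i * P0_i), with the
    # Raoult-like rule P * y_i = x_i * P0_i.
    import numpy as np
    from scipy.optimize import brentq

    def iast_binary(P, y1, qsat, b):
        """Component loadings at total pressure P, gas mole fraction y1."""
        y = np.array([y1, 1.0 - y1])

        def pi_gap(x1):
            x = np.array([x1, 1.0 - x1])
            P0 = P * y / x                        # fictitious pure pressures
            pi = qsat * np.log1p(b * P0)
            return pi[0] - pi[1]

        x1 = brentq(pi_gap, 1e-10, 1.0 - 1e-10)   # adsorbed mole fraction
        x = np.array([x1, 1.0 - x1])
        P0 = P * y / x
        q_pure = qsat * b * P0 / (1.0 + b * P0)   # pure loadings at P0_i
        q_total = 1.0 / np.sum(x / q_pure)        # IAST mixing rule
        return q_total * x

    q = iast_binary(P=1e5, y1=0.5,
                    qsat=np.array([3.0, 2.5]),    # mol/kg (illustrative)
                    b=np.array([2e-5, 1e-5]))     # 1/Pa   (illustrative)
    print("component loadings (mol/kg):", q)
    ```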

  1. Advanced Scientific Computing Research

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Discovering, developing, and deploying computational and networking capabilities to analyze, model, simulate, and predict complex phenomena important to the Department of Energy. Get Expertise: Pieter Swart, (505) 665-9437; Pat McCormick, (505) 665-0201; Dave Higdon, (505) 667-2091. Fulfilling the potential of emerging computing systems and architectures beyond today's tools and techniques to deliver

  2. Large Scale Computing and Storage Requirements for Biological and Environmental Research

    SciTech Connect (OSTI)

    DOE Office of Science, Biological and Environmental Research Program Office

    2009-09-30

    In May 2009, NERSC, DOE's Office of Advanced Scientific Computing Research (ASCR), and DOE's Office of Biological and Environmental Research (BER) held a workshop to characterize HPC requirements for BER-funded research over the subsequent three to five years. The workshop revealed several key points, in addition to achieving its goal of collecting and characterizing computing requirements. Chief among them: scientific progress in BER-funded research is limited by current allocations of computational resources. Additionally, growth in mission-critical computing -- combined with new requirements for collaborative data manipulation and analysis -- will demand ever increasing computing, storage, network, visualization, reliability and service richness from NERSC. This report expands upon these key points and adds others. It also presents a number of "case studies" as significant representative samples of the needs of science teams within BER. Workshop participants were asked to codify their requirements in this "case study" format, summarizing their science goals, methods of solution, current and 3-5 year computing requirements, and special software and support needs. Participants were also asked to describe their strategy for computing in the highly parallel, "multi-core" environment that is expected to dominate HPC architectures over the next few years.

  3. Scientific Cloud Computing Misconceptions

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    July 1, 2011. Part of the Magellan project was to understand both the possibilities and the limitations of cloud computing in the pursuit of science. At a recent conference, Magellan investigator Shane Canon outlined some persistent misconceptions about doing science in the cloud - and what Magellan has taught us about them.

  4. QCD Thermodynamics at High Temperature Peter Petreczky Large Scale Computing and Storage Requirements for Nuclear Physics (NP),

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Presented at the review "Large Scale Computing and Storage Requirements for Nuclear Physics (NP)," Bethesda, MD, April 29-30, 2014. Defining questions of nuclear physics research in the US, from the Nuclear Science Advisory Committee (NSAC) 2007 Long Range Plan "The Frontiers of Nuclear Science": "What are the phases of strongly interacting matter and what roles do they play in the cosmos?"

  5. DOE's Office of Science Seeks Proposals for Expanded Large-Scale Scientific Computing

    Office of Environmental Management (EM)

    DOE's Office of Science ... Successful proposers will be given the use of substantial computer time and data storage ...

  6. High performance graphics processor based computed tomography reconstruction algorithms for nuclear and other large scale applications.

    SciTech Connect (OSTI)

    Jimenez, Edward Steven

    2013-09-01

    The goal of this work is to develop a fast computed tomography (CT) reconstruction algorithm based on graphics processing units (GPU) that achieves significant improvement over traditional central processing unit (CPU) based implementations. The main challenge in developing a CT algorithm that is capable of handling very large datasets is parallelizing the algorithm in such a way that data transfer does not hinder performance of the reconstruction algorithm. General Purpose Graphics Processing (GPGPU) is a new technology that the Science and Technology (S&T) community is starting to adopt in many fields where CPU-based computing is the norm. GPGPU programming requires a new approach to algorithm development that utilizes massively multi-threaded environments. Multi-threaded algorithms in general are difficult to optimize since performance bottlenecks occur that are non-existent in single-threaded algorithms such as memory latencies. If an efficient GPU-based CT reconstruction algorithm can be developed, computational times could be improved by a factor of 20. Additionally, cost benefits will be realized as commodity graphics hardware could potentially replace expensive supercomputers and high-end workstations. This project will take advantage of the CUDA programming environment and attempt to parallelize the task in such a way that multiple slices of the reconstruction volume are computed simultaneously. This work will also take advantage of the GPU memory by utilizing asynchronous memory transfers, GPU texture memory, and (when possible) pinned host memory so that the memory transfer bottleneck inherent to GPGPU is amortized. Additionally, this work will take advantage of GPU-specific hardware (i.e. fast texture memory, pixel-pipelines, hardware interpolators, and varying memory hierarchy) that will allow for additional performance improvements.
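
    As a structural illustration of why CT reconstruction maps well onto GPUs, a serial numpy sketch of unfiltered parallel-beam backprojection is shown below; every (angle, pixel) contribution is independent, which is what lets slices and voxels be computed concurrently. This is a hedged reference sketch, not the report's CUDA implementation:

    ```python
    # Hedged sketch: unfiltered backprojection for parallel-beam CT.
    # The two nested "loops" (angles and pixels) carry no dependencies,
    # so a GPU can assign pixels to threads and slices to streams.
    import numpy as np

    def backproject(sinogram, angles_deg):
        """sinogram: (n_angles, n_det) array; returns (n_det, n_det) image."""
        n = sinogram.shape[1]
        c = (n - 1) / 2.0
        yy, xx = np.mgrid[0:n, 0:n] - c          # pixel grid, centered
        image = np.zeros((n, n))
        for proj, theta in zip(sinogram, np.deg2rad(angles_deg)):
            t = xx * np.cos(theta) + yy * np.sin(theta) + c  # detector coord
            idx = np.clip(np.round(t).astype(int), 0, n - 1)
            image += proj[idx]                   # independent per pixel
        return image / len(angles_deg)
    ```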

  7. Efficient large-scale finite-element computations in a CRAY environment

    SciTech Connect (OSTI)

    Goudreau, G.L.; Bailey, R.A.; Hallquist, J.O.; Murray, R.C.; Sackett, S.J.

    1983-06-01

    The Lawrence Livermore National Laboratory engineering computational experience on the CRAY-1 is highlighted in the context of our large general purpose solid and structural mechanics codes. DYNA2D and DYNA3D are explicit large deformation inelastic Lagrangian codes with one point elements and hourglass control. NIKE2D and NIKE3D are implicit codes of comparable continuum formulation but use two point constant pressure elements and an optimized linear equation solver. NIKE3D has a finite rotation plastic resultant shell element. The new general purpose linear elastic structures code GEMINI is also illustrated for large static and eigenvalue analysis.

  8. DFT modeling of adsorption onto uranium metal using large-scale parallel computing

    SciTech Connect (OSTI)

    Davis, N.; Rizwan, U.

    2013-07-01

    There is a dearth of atomistic simulations involving the surface chemistry of γ-uranium, which is of interest as the key fuel component of a breeder-burner stage in future fuel cycles. Recent availability of high-performance computing hardware and software has rendered extended quantum chemical surface simulations involving actinides feasible. With that motivation, data for bulk and surface γ-phase uranium metal are calculated in the plane-wave pseudopotential density functional theory method. Chemisorption of atomic hydrogen and oxygen on several un-relaxed low-index faces of γ-uranium is considered. The optimal adsorption sites (calculated cohesive energies) on the (100), (110), and (111) faces are found to be the one-coordinated top site (8.8 eV), four-coordinated center site (9.9 eV), and one-coordinated top1 site (7.9 eV) respectively, for oxygen; and the four-coordinated center site (2.7 eV), four-coordinated center site (3.1 eV), and three-coordinated top2 site (3.2 eV) for hydrogen. (authors)

  9. Recent developments in large-scale finite-element Lagrangian hydrocode technology. [DYNA2D/DYNA3D computer code]

    SciTech Connect (OSTI)

    Goudreau, G.L.; Hallquist, J.O.

    1981-10-01

    The state of Lagrangian hydrocodes for computing the large deformation dynamic response of inelastic continua is reviewed in the context of engineering computation at the Lawrence Livermore National Laboratory, USA, and the DYNA2D/DYNA3D finite element codes. The emphasis is on efficiency and computational cost, favoring the simplest elements with explicit time integration: the two-dimensional four-node quadrilateral and the three-dimensional hexahedron with one-point quadrature are advocated as superior to other, more expensive choices. Important auxiliary capabilities are a cheap but effective hourglass control, slidelines/planes with void opening/closure, and rezoning. Both strain measures and material formulation are treated as a homogeneous stress point problem, and a flexible material subroutine interface admits both incremental and total strain formulations, dependent on internal energy or an arbitrary set of other internal variables. Vectorization on Class VI computers such as the CRAY-1 is a simple exercise for optimally organized primitive element formulations. Some examples of large scale computation are illustrated, including continuous tone graphic representation.

  10. Large-Scale Compute-Intensive Analysis via a Combined In-situ and Co-scheduling Workflow Approach

    SciTech Connect (OSTI)

    Messer, Bronson; Sewell, Christopher; Heitmann, Katrin; Finkel, Dr. Hal J; Fasel, Patricia; Zagaris, George; Pope, Adrian; Habib, Salman; Parete-Koon, Suzanne T

    2015-01-01

    Large-scale simulations can produce tens of terabytes of data per analysis cycle, complicating and limiting the efficiency of workflows. Traditionally, outputs are stored on the file system and analyzed in post-processing. With the rapidly increasing size and complexity of simulations, this approach faces an uncertain future. Trending techniques consist of performing the analysis in situ, utilizing the same resources as the simulation, and/or off-loading subsets of the data to a compute-intensive analysis system. We introduce an analysis framework developed for HACC, a cosmological N-body code, that uses both in situ and co-scheduling approaches for handling Petabyte-size outputs. An initial in situ step is used to reduce the amount of data to be analyzed, and to separate out the data-intensive tasks handled off-line. The analysis routines are implemented using the PISTON/VTK-m framework, allowing a single implementation of an algorithm that simultaneously targets a variety of GPU, multi-core, and many-core architectures.

  11. Edison Electrifies Scientific Computing

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    ... Deployment of Edison was made possible in part by funding from DOE's Office of Science and the DARPA High Productivity Computing Systems program. DOE's Office of Science is the ...

  12. Running Large Scale Jobs

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Users face various challenges with running and scaling large scale jobs on peta-scale production systems. For example, certain applications may not have enough memory per core, the default environment variables may need to be adjusted, or I/O may dominate run time. This page lists some available programming and run-time tuning options and tips users can try on their large scale applications on Hopper for better performance. Try different compilers

  13. Running Large Scale Jobs

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    try on their large scale applications on Hopper for better performance. Try different compilers and compiler options: the available compilers on Hopper are PGI, Cray, Intel, GNU,...

  14. A Component Architecture for High-Performance Scientific Computing

    SciTech Connect (OSTI)

    Bernholdt, D E; Allan, B A; Armstrong, R; Bertrand, F; Chiu, K; Dahlgren, T L; Damevski, K; Elwasif, W R; Epperly, T W; Govindaraju, M; Katz, D S; Kohl, J A; Krishnan, M; Kumfert, G; Larson, J W; Lefantzi, S; Lewis, M J; Malony, A D; McInnes, L C; Nieplocha, J; Norris, B; Parker, S G; Ray, J; Shende, S; Windus, T L; Zhou, S

    2004-12-14

    The Common Component Architecture (CCA) provides a means for software developers to manage the complexity of large-scale scientific simulations and to move toward a plug-and-play environment for high-performance computing. In the scientific computing context, component models also promote collaboration using independently developed software, thereby allowing particular individuals or groups to focus on the aspects of greatest interest to them. The CCA supports parallel and distributed computing as well as local high-performance connections between components in a language-independent manner. The design places minimal requirements on components and thus facilitates the integration of existing code into the CCA environment. The CCA model imposes minimal overhead to minimize the impact on application performance. The focus on high performance distinguishes the CCA from most other component models. The CCA is being applied within an increasing range of disciplines, including combustion research, global climate simulation, and computational chemistry.

  15. NERSC National Energy Research Scientific Computing Center

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    National Energy Research Scientific Computing Center 2007 Annual Report. Ernest Orlando Lawrence Berkeley National Laboratory, 1 Cyclotron Road, Berkeley, CA 94720-8148. This work was supported by the Director, Office of Science, Office of Advanced Scientific Computing Research of the U.S. Department of Energy under Contract No. DE-AC02-05CH11231. LBNL-1143E, October 2008.

  16. National Energy Research Scientific Computing Center 2004 Annual Report

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Cover image: Visualization based on a simulation of the density of a fuel pellet after it is injected into a tokamak fusion reactor. See page 40 for more information. Ernest Orlando Lawrence Berkeley National Laboratory, University of California, Berkeley, California 94720. This work was supported by the Director, Office of Science, Office of Advanced Scientific Computing Research.

  17. Fermilab | Science | Particle Physics | Scientific Computing

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Scientific Computing Feynman Computing Center State-of-the-art computing facilities and expertise drive successful research in experimental and theoretical particle physics. Fermilab is a pioneer in managing "big data" and counts scientific computing as one of its core competencies. For scientists to understand the huge amounts of raw information coming from particle physics experiments, they must process, analyze and compare the information to simulations. To accomplish these feats,

  18. Advanced Scientific Computing Research (ASCR)

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    ... ASCR's programs have helped establish computation as a third pillar of science along with theory and physical experiments. Sandia has extensive ASCR programs in Computer Science ...

  19. Large scale tracking algorithms.

    SciTech Connect (OSTI)

    Hansen, Ross L.; Love, Joshua Alan; Melgaard, David Kennett; Karelitz, David B.; Pitts, Todd Alan; Zollweg, Joshua David; Anderson, Dylan Z.; Nandy, Prabal; Whitlow, Gary L.; Bender, Daniel A.; Byrne, Raymond Harry

    2015-01-01

    Low signal-to-noise data processing algorithms for improved detection, tracking, discrimination and situational threat assessment are a key research challenge. As sensor technologies progress, the number of pixels will increase significantly. This will result in increased resolution, which could improve object discrimination, but unfortunately, will also result in a significant increase in the number of potential targets to track. Many tracking techniques, like multi-hypothesis trackers, suffer from a combinatorial explosion as the number of potential targets increases. As the resolution increases, the phenomenology applied towards detection algorithms also changes. For low resolution sensors, "blob" tracking is the norm. For higher resolution data, additional information may be employed in the detection and classification steps. The most challenging scenarios are those where the targets cannot be fully resolved, yet must be tracked and distinguished from neighboring closely spaced objects. Tracking vehicles in an urban environment is an example of such a challenging scenario. This report evaluates several potential tracking algorithms for large-scale tracking in an urban environment.
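
    To make the association combinatorics concrete, the sketch below shows one hypothetical global-nearest-neighbor assignment step using the Hungarian algorithm; committing to a single hypothesis per frame avoids the exponential growth that multi-hypothesis trackers face, at the cost of unrecoverable mis-assignments. Illustrative only, not an algorithm from the report:

    ```python
    # Hedged sketch: gated global-nearest-neighbor data association.
    import numpy as np
    from scipy.optimize import linear_sum_assignment

    def associate(tracks, detections, gate=5.0):
        """tracks (N,2), detections (M,2): returns matched (track, det) pairs."""
        cost = np.linalg.norm(tracks[:, None, :] - detections[None, :, :], axis=2)
        cost[cost > gate] = 1e6                    # gating: forbid far pairs
        rows, cols = linear_sum_assignment(cost)   # optimal 1-to-1 assignment
        return [(r, c) for r, c in zip(rows, cols) if cost[r, c] < 1e6]

    tracks = np.array([[0.0, 0.0], [10.0, 10.0]])
    dets = np.array([[9.5, 10.2], [0.4, -0.3], [30.0, 30.0]])
    print(associate(tracks, dets))                 # [(0, 1), (1, 0)]
    ```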

  20. PROCEEDINGS OF THE RIKEN BNL RESEARCH CENTER WORKSHOP ON LARGE SCALE COMPUTATIONS IN NUCLEAR PHYSICS USING THE QCDOC, SEPTEMBER 26 - 28, 2002.

    SciTech Connect (OSTI)

    AOKI,Y.; BALTZ,A.; CREUTZ,M.; GYULASSY,M.; OHTA,S.

    2002-09-26

    The massively parallel computer QCDOC (QCD On a Chip) of the RIKEN BNL Research Center (RBRC) will provide ten-teraflop peak performance for lattice gauge calculations. Lattice groups from both Columbia University and RBRC, along with assistance from IBM, jointly handled the design of the QCDOC. RIKEN has provided $5 million in funding to complete the machine in 2003. Some fraction of this computer (perhaps as much as 10%) might be made available for large-scale computations in areas of theoretical nuclear physics other than lattice gauge theory. The purpose of this workshop was to investigate the feasibility and possibility of using a supercomputer such as the QCDOC for lattice, general nuclear theory, and other calculations. The lattice applications to nuclear physics that can be investigated with the QCDOC are varied: for example, the light hadron spectrum, finite temperature QCD, and kaon (ΔI = 1/2 and CP violation) and nucleon (the structure of the proton) matrix elements, to name a few. There are also other topics in theoretical nuclear physics that are currently limited by computer resources. Among these are ab initio calculations of nuclear structure for light nuclei (e.g., up to A ≈ 8 nuclei), nuclear shell model calculations, nuclear hydrodynamics, heavy ion cascade and other transport calculations for RHIC, and nuclear astrophysics topics such as exploding supernovae. The physics topics were quite varied, ranging from simulations of stellar collapse by Douglas Swesty to detailed shell model calculations by David Dean, Takaharu Otsuka, and Noritaka Shimizu. Going outside traditional nuclear physics, James Davenport discussed molecular dynamics simulations and Shailesh Chandrasekharan presented a class of algorithms for simulating a wide variety of fermionic problems. Four speakers addressed various aspects of theory and computational modeling for relativistic heavy ion reactions at RHIC. Scott Pratt and Steffen Bass gave general overviews of

  1. Scientific Computing at Los Alamos National Laboratory (Conference)

    Office of Scientific and Technical Information (OSTI)


  2. Can Cloud Computing Address the Scientific Computing Requirements...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    the ever-increasing computational needs of scientists, Department of Energy ... and as the largest funder of basic scientific research in the U.S., DOE was interested in ...

  3. Exploring HPCS Languages in Scientific Computing

    SciTech Connect (OSTI)

    Barrett, Richard F; Alam, Sadaf R; de Almeida, Valmor F; Bernholdt, David E; Elwasif, Wael R; Kuehn, Jeffery A; Poole, Stephen W; Shet, Aniruddha G

    2008-01-01

    As computers scale up dramatically to tens and hundreds of thousands of cores, develop deeper computational and memory hierarchies, and become increasingly heterogeneous, developers of scientific software are increasingly challenged to express complex parallel simulations effectively and efficiently. In this paper, we explore the three languages developed under the DARPA High-Productivity Computing Systems (HPCS) program to help address these concerns: Chapel, Fortress, and X10. These languages provide a variety of features not found in currently popular HPC programming environments and make it easier to express powerful computational constructs, leading to new ways of thinking about parallel programming. Though the languages and their implementations are not yet mature enough for a comprehensive evaluation, we discuss some of the important features, and provide examples of how they can be used in scientific computing. We believe that these characteristics will be important to the future of high-performance scientific computing, whether the ultimate language of choice is one of the HPCS languages or something else.

  4. Institute for Scientific Computing Research Fiscal Year 2002 Annual Report

    SciTech Connect (OSTI)

    Keyes, D E; McGraw, J R; Bodtker, L K

    2003-03-11

    The Institute for Scientific Computing Research (ISCR) at Lawrence Livermore National Laboratory is jointly administered by the Computing Applications and Research Department (CAR) and the University Relations Program (URP), and this joint relationship expresses its mission. An extensively externally networked ISCR cost-effectively expands the level and scope of national computational science expertise available to the Laboratory through CAR. The URP, with its infrastructure for managing six institutes and numerous educational programs at LLNL, assumes much of the logistical burden that is unavoidable in bridging the Laboratory's internal computational research environment with that of the academic community. As large-scale simulations on the parallel platforms of DOE's Advanced Simulation and Computing (ASCI) program become increasingly important to the overall mission of LLNL, the role of the ISCR expands in importance accordingly. Relying primarily on non-permanent staffing, the ISCR complements Laboratory research in areas of the computer and information sciences that are needed at the frontier of Laboratory missions. The ISCR strives to be the "eyes and ears" of the Laboratory in the computer and information sciences, keeping the Laboratory aware of and connected to important external advances. It also attempts to be the "feet and hands" that carry those advances into the Laboratory and incorporate them into practice. In addition to conducting research, the ISCR provides continuing education opportunities to Laboratory personnel, in the form of on-site workshops taught by experts on novel software or hardware technologies. The ISCR also seeks to influence the research community external to the Laboratory to pursue Laboratory-related interests and to train the workforce that will be required by the Laboratory. Part of the performance of this function is interpreting to the external community appropriate (unclassified) aspects of the Laboratory's own contributions.

  5. A Model for Turbulent Combustion Simulation of Large Scale Hydrogen...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    A Model for Turbulent Combustion Simulation of Large Scale Hydrogen Explosions Event Sponsor: Argonne Leadership Computing Facility Seminar Start Date: Oct 6 2015 - 10:00am...

  6. Large-Scale Information Systems

    SciTech Connect (OSTI)

    D. M. Nicol; H. R. Ammerlahn; M. E. Goldsby; M. M. Johnson; D. E. Rhodes; A. S. Yoshimura

    2000-12-01

    Large enterprises are ever more dependent on their Large-Scale Information Systems (LSIS), computer systems that are distinguished architecturally by distributed components--data sources, networks, computing engines, simulations, human-in-the-loop control and remote access stations. These systems provide such capabilities as workflow, data fusion and distributed database access. The Nuclear Weapons Complex (NWC) contains many examples of LSIS components, a fact that motivates this research. However, most LSIS in use grew up from collections of separate subsystems that were not designed to be components of an integrated system. For this reason, they are often difficult to analyze and control. The problem is made more difficult by the size of a typical system, its diversity of information sources, and the institutional complexities associated with its geographic distribution across the enterprise. Moreover, there is no integrated approach for analyzing or managing such systems. Indeed, integrated development of LSIS is an active area of academic research. This work developed such an approach by simulating the various components of the LSIS and allowing the simulated components to interact with real LSIS subsystems. This research demonstrated two benefits. First, applying it to a particular LSIS provided a thorough understanding of the interfaces between the system's components. Second, it demonstrated how more rapid and detailed answers could be obtained to questions significant to the enterprise by interacting with the relevant LSIS subsystems through simulated components designed with those questions in mind. In a final, added phase of the project, investigations were made on extending this research to wireless communication networks in support of telemetry applications.

  7. Magellan Explores Cloud Computing for DOE's Scientific Mission

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Explores Cloud Computing for DOE's Scientific Mission Magellan Explores Cloud Computing for DOE's Scientific Mission March 30, 2011 Cloud Control -This is a picture of the Magellan management and network control racks at NERSC. To test cloud computing for scientific capability, NERSC and the Argonne Leadership Computing Facility (ALCF) installed purpose-built testbeds for running scientific applications on the IBM iDataPlex cluster. (Photo Credit: Roy Kaltschmidt) Cloud computing is gaining

  8. Advanced Scientific Computing Research Network Requirements

    SciTech Connect (OSTI)

    Bacon, Charles; Bell, Greg; Canon, Shane; Dart, Eli; Dattoria, Vince; Goodwin, Dave; Lee, Jason; Hicks, Susan; Holohan, Ed; Klasky, Scott; Lauzon, Carolyn; Rogers, Jim; Shipman, Galen; Skinner, David; Tierney, Brian

    2013-03-08

    The Energy Sciences Network (ESnet) is the primary provider of network connectivity for the U.S. Department of Energy (DOE) Office of Science (SC), the single largest supporter of basic research in the physical sciences in the United States. In support of SC programs, ESnet regularly updates and refreshes its understanding of the networking requirements of the instruments, facilities, scientists, and science programs that it serves. This focus has helped ESnet to be a highly successful enabler of scientific discovery for over 25 years. In October 2012, ESnet and the Office of Advanced Scientific Computing Research (ASCR) of the DOE SC organized a review to characterize the networking requirements of the programs funded by the ASCR program office. The requirements identified at the review are summarized in the Findings section, and are described in more detail in the body of the report.

  9. Energy Department Requests Proposals for Advanced Scientific Computing

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Research | Department of Energy Advanced Scientific Computing Research Energy Department Requests Proposals for Advanced Scientific Computing Research December 27, 2005 - 4:55pm Addthis WASHINGTON, DC - The Department of Energy's Office of Science and the National Nuclear Security Administration (NNSA) have issued a joint Request for Proposals for advanced scientific computing research. DOE expects to fund $67 million annually for three to five years under its Scientific Discovery

  10. NERSC Role in Advanced Scientific Computing Research Katherine Yelick

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Advanced Scientific Computing Research Katherine Yelick NERSC Director Requirements Workshop NERSC Mission The mission of the National Energy Research Scientific Computing Center (NERSC) is to accelerate the pace of scientific discovery by providing high performance computing, information, data, and communications services for all DOE Office of Science (SC) research. Sample Scientific Accomplishments at NERSC 3 Award-winning software uses massively-parallel supercomputing to map hydrocarbon

  11. The Cielo Petascale Capability Supercomputer: Providing Large-Scale

    Office of Scientific and Technical Information (OSTI)

    Computing for Stockpile Stewardship (Conference) | SciTech Connect Conference: The Cielo Petascale Capability Supercomputer: Providing Large-Scale Computing for Stockpile Stewardship Citation Details In-Document Search Title: The Cielo Petascale Capability Supercomputer: Providing Large-Scale Computing for Stockpile Stewardship Authors: Vigil, Benny Manuel [1] ; Doerfler, Douglas W. [1] + Show Author Affiliations Los Alamos National Laboratory Publication Date: 2013-03-11 OSTI Identifier:

  12. National Energy Research Scientific Computing Center | U.S. DOE...

    Office of Science (SC) Website

    National Labs, Profiles, and Contacts National Energy Research Scientific Computing ... Technology Transfer U.S. Department of Energy SC-29Germantown Building 1000 ...

  13. National Energy Research Scientific Computing Center NERSC Exceeds...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Scientific Computing Center NERSC Exceeds Reliability Standards With Tape-Based Active ... on the archive, NERSC's storage capacity and reliability requirements are significant. ...

  14. The National Energy Research Scientific Computing Center: Forty...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    The National Energy Research Scientific Computing Center: Forty Years of Supercomputing ... discovery has been evident in both simulation and data analysis for many years. ...

  15. Advanced Scientific Computing Research (ASCR) Homepage | U.S...

    Office of Science (SC) Website

    Edison Dedication External link Users are invited to make heavy use of new computer as ... computing, including the need for a new scientific workflow.Read More .pdf file ...

  16. Large-Scale Computational Fluid Dynamics

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

  17. Energy Department Seeks Proposals to Use Scientific Computing Resources at

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Lawrence Berkeley, Oak Ridge National Laboratories | Department of Energy Proposals to Use Scientific Computing Resources at Lawrence Berkeley, Oak Ridge National Laboratories Energy Department Seeks Proposals to Use Scientific Computing Resources at Lawrence Berkeley, Oak Ridge National Laboratories June 29, 2005 - 1:50pm Addthis WASHINGTON, DC -- Secretary of Energy Samuel W. Bodman announced today that DOE's Office of Science is seeking proposals to support computational science projects

  18. Scalable parallel distance field construction for large-scale applications

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Yu, Hongfeng; Xie, Jinrong; Ma, Kwan -Liu; Kolla, Hemanth; Chen, Jacqueline H.

    2015-10-01

    Computing distance fields is fundamental to many scientific and engineering applications. Distance fields can be used to direct analysis and reduce data. In this paper, we present a highly scalable method for computing 3D distance fields on massively parallel distributed-memory machines. A new distributed spatial data structure, named parallel distance tree, is introduced to manage the level sets of data and facilitate surface tracking over time, resulting in significantly reduced computation and communication costs for calculating the distance to the surface of interest from any spatial locations. Our method supports several data types and distance metrics from real-world applications. We demonstrate its efficiency and scalability on state-of-the-art supercomputers using both large-scale volume datasets and surface models. We also demonstrate in-situ distance field computation on dynamic turbulent flame surfaces for a petascale combustion simulation. In conclusion, our work greatly extends the usability of distance fields for demanding applications.
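
    The parallel distance tree itself is not spelled out in the abstract; as a concept-level illustration only, a distance field on a small grid can be computed serially with SciPy's Euclidean distance transform (this is not the authors' scalable method):

    ```python
    # Serial illustration of a distance field: for every grid cell, the
    # Euclidean distance to the nearest cell of the surface of interest.
    import numpy as np
    from scipy.ndimage import distance_transform_edt

    volume = np.ones((64, 64, 64), dtype=bool)   # synthetic volume
    volume[30:34, 30:34, 30:34] = False          # cells on the feature surface

    # For each True voxel, distance to the nearest False voxel.
    dist = distance_transform_edt(volume)
    print(dist.shape, float(dist.max()))
    ```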

  19. Challenges and Opportunities in Using Automatic Differentiation with Object-Oriented Toolkits for Scientific Computing

    SciTech Connect (OSTI)

    Hovland, P; Lee, S; McInnes, L; Norris, B; Smith, B

    2001-04-17

    The increased use of object-oriented toolkits in large-scale scientific simulation presents new opportunities and challenges for the use of automatic (or algorithmic) differentiation (AD) techniques, especially in the context of optimization. Because object-oriented toolkits use well-defined interfaces and data structures, there is potential for simplifying the AD process. Furthermore, derivative computation can be improved by exploiting high-level information about numerical and computational abstractions. However, challenges to the successful use of AD with these toolkits also exist. Among the greatest challenges is balancing the desire to limit the scope of the AD process with the desire to minimize the work required of a user. The authors discuss their experiences integrating AD with the PETSc, PVODE, and TAO toolkits and their plans for future research and development in this area.
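
    The abstract contains no code, but the core mechanism of forward-mode algorithmic differentiation is compact enough to sketch. The toy dual-number class below is hypothetical and unrelated to the AD tools actually integrated with PETSc, PVODE, and TAO; it only shows how derivatives propagate through arithmetic by the chain rule:

    ```python
    # Toy forward-mode AD: each value carries its derivative, and
    # arithmetic propagates both. Illustrative only.
    class Dual:
        def __init__(self, val, der=0.0):
            self.val, self.der = val, der

        def __add__(self, other):
            other = other if isinstance(other, Dual) else Dual(other)
            return Dual(self.val + other.val, self.der + other.der)
        __radd__ = __add__

        def __mul__(self, other):
            other = other if isinstance(other, Dual) else Dual(other)
            # Product rule: (uv)' = u'v + uv'
            return Dual(self.val * other.val,
                        self.der * other.val + self.val * other.der)
        __rmul__ = __mul__

    def f(x):
        return 3 * x * x + 2 * x + 1    # f'(x) = 6x + 2

    y = f(Dual(2.0, 1.0))               # seed derivative dx/dx = 1
    print(y.val, y.der)                 # 17.0 14.0
    ```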

  20. Large-Scale Renewable Energy Guide Webinar

    Broader source: Energy.gov [DOE]

    Webinar introduces the “Large Scale Renewable Energy Guide." The webinar will provide an overview of this important FEMP guide, which describes FEMP's approach to large-scale renewable energy projects and provides guidance to Federal agencies and the private sector on how to develop a common process for large-scale renewable projects.

  1. National Energy Research Scientific Computing Center

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    ... use include on-demand computing functionality for ... mega-electron volts per meter before the metal breaks down. ... been collaborating with earth scientists at Berkeley Lab ...

  2. Cosmological Simulations for Large-Scale Sky Surveys | Argonne Leadership

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Computing Facility Cosmological Simulations for Large-Scale Sky Surveys PI Name: Salman Habib PI Email: habib@anl.gov Institution: Argonne National Laboratory Allocation Program: INCITE Allocation Hours at ALCF: 100 Million Year: 2014 Research Domain: Physics The next generation of large-scale sky surveys aims to establish a new regime of cosmic discovery through fundamental measurements of the universe's geometry and the growth of structure. The aim of this project is to accurately

  3. What Are the Computational Keys to Future Scientific Discoveries?

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    What are the Computational Keys to Future Scientific Discoveries? What Are the Computational Keys to Future Scientific Discoveries? NERSC Develops a Data Intensive Pilot Program to Help Scientists Find Out August 23, 2012 Linda Vu,lvu@lbl.gov, +1 510 495 2402 ALS.jpg Advanced Light Source at the Lawrence Berkeley National Laboratory. (Photo by: Roy Kaltschmidt, Berkeley Lab) A new camera at the hard x-ray tomography beamline of Lawrence Berkeley National Laboratory's (Berkeley Lab's) Advanced

  4. NERSC, Cray Move Forward With Next-Generation Scientific Computing

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    NERSC, Cray Move Forward With Next-Generation Scientific Computing NERSC, Cray Move Forward With Next-Generation Scientific Computing New Cray XC40 will be first supercomputer in Berkeley Lab's new Computational Research and Theory facility April 22, 2015 Contact: Jon Bashor, jbashor@lbl.gov, 510-486-5849 NewCRT.jpg The Cori Phase 1 system will be the first supercomputer installed in the new Computational Research and Theory Facility now in the final stages of construction at Lawrence Berkeley

  5. Scientific computations section monthly report, November 1993

    SciTech Connect (OSTI)

    Buckner, M.R.

    1993-12-30

    This progress report from the Savannah River Technology Center contains abstracts from papers from the computational modeling, applied statistics, applied physics, experimental thermal hydraulics, and packaging and transportation groups. Specific topics covered include: engineering modeling and process simulation, criticality methods and analysis, plutonium disposition.

  6. DOE High Performance Computing Operational Review (HPCOR): Enabling Data-Driven Scientific Discovery at HPC Facilities

    SciTech Connect (OSTI)

    Gerber, Richard; Allcock, William; Beggio, Chris; Campbell, Stuart; Cherry, Andrew; Cholia, Shreyas; Dart, Eli; England, Clay; Fahey, Tim; Foertter, Fernanda; Goldstone, Robin; Hick, Jason; Karelitz, David; Kelly, Kaki; Monroe, Laura; Prabhat,; Skinner, David; White, Julia

    2014-10-17

    U.S. Department of Energy (DOE) High Performance Computing (HPC) facilities are on the verge of a paradigm shift in the way they deliver systems and services to science and engineering teams. Research projects are producing a wide variety of data at unprecedented scale and level of complexity, with community-specific services that are part of the data collection and analysis workflow. On June 18-19, 2014, representatives from six DOE HPC centers met in Oakland, CA at the DOE High Performance Computing Operational Review (HPCOR) to discuss how they can best provide facilities and services to enable large-scale data-driven scientific discovery at the DOE national laboratories. The report contains findings from that review.

  7. ASCR Cybersecurity for Scientific Computing Integrity

    SciTech Connect (OSTI)

    Peisert, Sean

    2015-02-27

    The Department of Energy (DOE) has the responsibility to address the energy, environmental, and nuclear security challenges that face our nation. Much of DOE's enterprise involves distributed, collaborative teams; a significant fraction involves "open science," which depends on multi-institutional, often international collaborations that must access or share significant amounts of information between institutions and over networks around the world. The mission of the Office of Science is the delivery of scientific discoveries and major scientific tools to transform our understanding of nature and to advance the energy, economic, and national security of the United States. The ability of DOE to execute its responsibilities depends critically on its ability to assure the integrity and availability of scientific facilities and computer systems, and of the scientific, engineering, and operational software and data that support its mission.

  8. DOE Advanced Scientific Computing Advisory Committee (ASCAC) Subcommittee Report on Scientific and Technical Information

    SciTech Connect (OSTI)

    Hey, Tony; Agarwal, Deborah; Borgman, Christine; Cartaro, Concetta; Crivelli, Silvia; Kleese van Dam, Kerstin; Luce, Richard; Shankar, Arjun; Trefethen, Anne; Wade, Alex; Williams, Dean

    2015-09-04

    The Advanced Scientific Computing Advisory Committee (ASCAC) was charged to form a standing subcommittee to review the Department of Energy’s Office of Scientific and Technical Information (OSTI) and to begin by assessing the quality and effectiveness of OSTI’s recent and current products and services and to comment on its mission and future directions in the rapidly changing environment for scientific publication and data. The Committee met with OSTI staff and reviewed available products, services and other materials. This report summaries their initial findings and recommendations.

  9. Microsoft Word - The_Advanced_Networks_and_Services_Underpinning_Modern,Large-Scale_Science.SciDAC.v5.doc

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    ESnet4: Advanced Networking and Services Supporting the Science Mission of DOE's Office of Science William E. Johnston ESnet Dept. Head and Senior Scientist Lawrence Berkeley National Laboratory May, 2007 1 Introduction In many ways, the dramatic achievements in scientific discovery through advanced computing and the discoveries of the increasingly large-scale instruments with their enormous data handling and remote collaboration requirements, have been made possible by accompanying

  10. Asynchronous Two-Level Checkpointing Scheme for Large-Scale Adjoints...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Adjoints are an important computational tool for large-scale sensitivity evaluation, uncertainty quantification, and derivative-based...

  11. Advanced Scientific Computing Advisory Committee (ASCAC) Homepage | U.S.

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    DOE Office of Science (SC) ASCAC Home Advanced Scientific Computing Advisory Committee (ASCAC) ASCAC Home Meetings Members Charges/Reports ASCAC Charter 2015 - signed .pdf file (134KB) ASCR Committees of Visitors Federal Advisory Committees ASCR Home Exascale Advisory Committee Report .pdf file (2.1MB) The Opportunities and Challenges of Exascale Computing The Exascale initiative will be significant and transformative for Department of Energy missions. The ASCAC Subcommitte report is

  12. Large-Scale Liquid Hydrogen Handling Equipment

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    8, 2007 Jerry Gillette Large-Scale Liquid Hydrogen Handling Equipment Hydrogen Delivery Analysis Meeting Argonne National Laboratory Some Delivery Pathways Will Necessitate the Use of Large-Scale Liquid Hydrogen Handling Equipment. Potential scenarios include: production plant shutdowns and summer-peak storage. Equipment needs include: storage tanks, liquid pumps, vaporizers, and ancillaries. The concern is that scaling up from small units could significantly underestimate costs of larger

  13. Large-Scale PCA for Climate

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Large-Scale PCA for Climate The most widely used tool for extracting important patterns from the measurements of atmospheric and oceanic variables is the Empirical Orthogonal Function (EOF) technique. EOFs are popular because of their simplicity and their ability to reduce the dimensionality of large nonlinear, high-dimensional systems into fewer dimensions while preserving the most important patterns of variations in the measurements. Because EOFs are a particular
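
    As a small, concrete illustration of the EOF technique described above (synthetic data standing in for climate measurements), EOFs can be computed as the right singular vectors of the time-by-space anomaly matrix:

    ```python
    # Minimal EOF/PCA sketch: EOFs from the SVD of the anomaly matrix.
    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.standard_normal((120, 500))   # 120 time steps, 500 grid points

    X = X - X.mean(axis=0)                # remove time mean -> anomalies
    U, s, Vt = np.linalg.svd(X, full_matrices=False)

    eofs = Vt                             # spatial patterns (EOFs)
    pcs = U * s                           # principal-component time series
    var_frac = s**2 / np.sum(s**2)        # variance explained per mode
    print("leading mode explains", float(var_frac[0]))
    ```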

  14. UNIVERSITY OF CALIFORNIA The Future of Large Scale Visual Data

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    CALIFORNIA The Future of Large Scale Visual Data Analysis Joint Facilities User Forum on Data Intensive Computing Oakland, CA E. Wes Bethel Lawrence Berkeley National Laboratory 16 June 2014 The World that Was: Computational Architectures * Machine architectures - Single CPU, single core - Vector, then single-core MPPs - "Large" SMP platforms - Relatively well balanced: memory, FLOPS, I/O The World that Was: Software Architecture * Data Analysis and

  15. A Computing Environment to Support Repeatable Scientific Big Data Experimentation of World-Wide Scientific Literature

    SciTech Connect (OSTI)

    Schlicher, Bob G; Kulesz, James J; Abercrombie, Robert K; Kruse, Kara L

    2015-01-01

    A principal tenet of the scientific method is that experiments must be repeatable, relying on ceteris paribus (i.e., all other things being equal). As a scientific community involved in data sciences, we must investigate ways to establish an environment where experiments can be repeated. We can no longer allude to where the data comes from; we must add rigor to the data collection and management process from which our analysis is conducted. This paper describes a computing environment to support repeatable scientific big data experimentation of world-wide scientific literature, and recommends a system that is housed at the Oak Ridge National Laboratory in order to provide value to investigators from government agencies, academic institutions, and industry entities. The described computing environment also adheres to the recently instituted digital data management plan mandated by multiple US government agencies, which involves all stages of the digital data life cycle including capture, analysis, sharing, and preservation. It particularly focuses on the sharing and preservation of digital research data. The details of this computing environment are explained within the context of cloud services by the three-layer classification of Software as a Service, Platform as a Service, and Infrastructure as a Service.

  16. Computing Frontier: Distributed Computing

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Computing Frontier: Distributed Computing and Facility Infrastructures Conveners: Kenneth Bloom 1 , Richard Gerber 2 1 Department of Physics and Astronomy, University of Nebraska-Lincoln 2 National Energy Research Scientific Computing Center (NERSC), Lawrence Berkeley National Laboratory 1.1 Introduction The field of particle physics has become increasingly reliant on large-scale computing resources to address the challenges of analyzing large datasets, completing specialized computations and

  17. Sensitivity technologies for large scale simulation.

    SciTech Connect (OSTI)

    Collis, Samuel Scott; Bartlett, Roscoe Ainsworth; Smith, Thomas Michael; Heinkenschloss, Matthias; Wilcox, Lucas C.; Hill, Judith C.; Ghattas, Omar; Berggren, Martin Olof; Akcelik, Volkan; Ober, Curtis Curry; van Bloemen Waanders, Bart Gustaaf; Keiter, Eric Richard

    2005-01-01

    order approximation of the Euler equations and used as a preconditioner. In comparison to other methods, the AD preconditioner showed better convergence behavior. Our ultimate target is to perform shape optimization and hp adaptivity using adjoint formulations in the Premo compressible fluid flow simulator. A mathematical formulation for mixed-level simulation algorithms has been developed where different physics interact at potentially different spatial resolutions in a single domain. To minimize the implementation effort, explicit solution methods can be considered; however, implicit methods are preferred if computational efficiency is of high priority. We present the use of a partial elimination nonlinear solver technique to solve these mixed-level problems and show how these formulations are closely coupled to intrusive optimization approaches and sensitivity analyses. Production codes are typically not designed for sensitivity analysis or large-scale optimization. The implementation of our optimization libraries into multiple production simulation codes, in which each code has its own linear algebra interface, becomes an intractable problem. In an attempt to streamline this task, we have developed a standard interface between the numerical algorithm (such as optimization) and the underlying linear algebra. These interfaces (TSFCore and TSFCoreNonlin) have been adopted by the Trilinos framework and the goal is to promote the use of these interfaces especially with new developments. Finally, an adjoint-based a posteriori error estimator has been developed for discontinuous Galerkin discretization of Poisson's equation. The goal is to investigate other ways to leverage the adjoint calculations and we show how the convergence of the forward problem can be improved by adapting the grid using adjoint-based error estimates. Error estimation is usually conducted with continuous adjoints but if discrete adjoints are available it may be possible to reuse the discrete version
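
    The adjoint machinery that this abstract leans on can be summarized by the standard sensitivity relations (general textbook form, not the report's specific Premo/Trilinos notation): for a state equation R(u, p) = 0 and objective J(u, p), one adjoint solve yields the gradient of J with respect to all parameters p:

    ```latex
    % Standard adjoint sensitivity relations (general form):
    \left(\frac{\partial R}{\partial u}\right)^{\!T}\lambda
        = \left(\frac{\partial J}{\partial u}\right)^{\!T},
    \qquad
    \frac{\mathrm{d}J}{\mathrm{d}p}
        = \frac{\partial J}{\partial p} - \lambda^{T}\,\frac{\partial R}{\partial p}.
    ```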

  18. Large-scale ab initio configuration interaction calculations for light

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    nuclei | Argonne Leadership Computing Facility Large-scale ab initio configuration interaction calculations for light nuclei Authors: Pieter Maris, H Metin Aktulga, Mark A Caprio, Ümit V Çatalyürek, Esmond G Ng, Dossay Oryspayev, Hugh Potter, Erik Saule, Masha Sosonkina, James P Vary, Chao Yang Zheng Zhou In ab-initio Configuration Interaction calculations, the nuclear wavefunction is expanded in Slater determinants of single-nucleon wavefunctions and the many-body Schrodinger equation

  19. Large-Scale Renewable Energy Guide | Department of Energy

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Large-Scale Renewable Energy Guide Large-Scale Renewable Energy Guide Presentation covers the Large-scale RE Guide: Developing Renewable Energy Projects Larger than 10 MWs at...

  20. Determination of Large-Scale Cloud Ice Water Concentration by...

    Office of Scientific and Technical Information (OSTI)

    Technical Report: Determination of Large-Scale Cloud Ice Water Concentration by Combining ... Title: Determination of Large-Scale Cloud Ice Water Concentration by Combining Surface ...

  1. Large-Scale Renewable Energy Guide: Developing Renewable Energy...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Large-Scale Renewable Energy Guide: Developing Renewable Energy Projects Larger Than 10 MWs at Federal Facilities Large-Scale Renewable Energy Guide: Developing Renewable Energy ...

  2. Large-Scale Residential Energy Efficiency Programs Based on CFLs...

    Open Energy Info (EERE)

    Large-Scale Residential Energy Efficiency Programs Based on CFLs Jump to: navigation, search Tool Summary LAUNCH TOOL Name: Large-Scale Residential Energy Efficiency Programs Based...

  3. The Effective Field Theory of Cosmological Large Scale Structures...

    Office of Scientific and Technical Information (OSTI)

    The Effective Field Theory of Cosmological Large Scale Structures Citation Details In-Document Search Title: The Effective Field Theory of Cosmological Large Scale Structures...

  4. Large-Scale Manufacturing of Nanoparticle-Based Lubrication Additives...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Large-Scale Manufacturing of Nanoparticle-Based Lubrication Additives Large-Scale Manufacturing of Nanoparticle-Based Lubrication Additives PDF icon nanoparticulate-basedlubricati...

  5. Creating Large Scale Database Servers (Technical Report) | SciTech...

    Office of Scientific and Technical Information (OSTI)

    Creating Large Scale Database Servers Citation Details In-Document Search Title: Creating Large Scale Database Servers The BaBar experiment at the Stanford Linear Accelerator ...

  6. Rapid Software Prototyping Into Large Scale Control Systems ...

    Office of Scientific and Technical Information (OSTI)

    Rapid Software Prototyping Into Large Scale Control Systems Citation Details In-Document Search Title: Rapid Software Prototyping Into Large Scale Control Systems Authors: Fishler, ...

  7. DLFM library tools for large scale dynamic applications

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    DLFM library tools for large scale dynamic applications Large scale Python and other dynamic applications may spend a huge amount of time at startup. The DLFM library,...

  8. ACCOLADES: A Scalable Workflow Framework for Large-Scale Simulation...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    ACCOLADES: A Scalable Workflow Framework for Large-Scale Simulation and Analyses of Automotive Engines Title ACCOLADES: A Scalable Workflow Framework for Large-Scale Simulation and...

  9. Large-Scale Optimization for Bayesian Inference in Complex Systems

    SciTech Connect (OSTI)

    Willcox, Karen; Marzouk, Youssef

    2013-11-12

    The SAGUARO (Scalable Algorithms for Groundwater Uncertainty Analysis and Robust Optimization) Project focused on the development of scalable numerical algorithms for large-scale Bayesian inversion in complex systems that capitalize on advances in large-scale simulation-based optimization and inversion methods. The project was a collaborative effort among MIT, the University of Texas at Austin, Georgia Institute of Technology, and Sandia National Laboratories. The research was directed in three complementary areas: efficient approximations of the Hessian operator, reductions in complexity of forward simulations via stochastic spectral approximations and model reduction, and employing large-scale optimization concepts to accelerate sampling. The MIT--Sandia component of the SAGUARO Project addressed the intractability of conventional sampling methods for large-scale statistical inverse problems by devising reduced-order models that are faithful to the full-order model over a wide range of parameter values; sampling then employs the reduced model rather than the full model, resulting in very large computational savings. Results indicate little effect on the computed posterior distribution. On the other hand, in the Texas--Georgia Tech component of the project, we retain the full-order model, but exploit inverse problem structure (adjoint-based gradients and partial Hessian information of the parameter-to-observation map) to implicitly extract lower dimensional information on the posterior distribution; this greatly speeds up sampling methods, so that fewer sampling points are needed. We can think of these two approaches as "reduce then sample" and "sample then reduce." In fact, these two approaches are complementary, and can be used in conjunction with each other. Moreover, they both exploit deterministic inverse problem structure, in the form of adjoint-based gradient and Hessian information of the underlying parameter-to-observation map, to achieve their
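
    A toy sketch of the "reduce then sample" idea: run a random-walk Metropolis sampler against a cheap surrogate of the forward model instead of the full model. The one-parameter problem, the surrogate, and all coefficients below are hypothetical, not the SAGUARO project's solvers:

    ```python
    # "Reduce then sample" toy: Metropolis sampling with a cheap surrogate.
    import math, random

    def full_model(p):                 # stand-in for an expensive solve
        return math.exp(p)

    def reduced_model(p):              # cheap surrogate: cubic Taylor fit
        return 1 + p + p**2 / 2 + p**3 / 6

    def log_post(p, y, sigma=0.1):     # Gaussian likelihood + N(0,1) prior
        r = y - reduced_model(p)       # evaluate the surrogate, not the model
        return -0.5 * (r / sigma)**2 - 0.5 * p**2

    y_obs = full_model(0.4)            # synthetic observation
    random.seed(1)
    p, lp, samples = 0.0, log_post(0.0, y_obs), []
    for _ in range(5000):
        q = p + 0.2 * random.gauss(0, 1)         # random-walk proposal
        lq = log_post(q, y_obs)
        if math.log(random.random()) < lq - lp:  # Metropolis accept/reject
            p, lp = q, lq
        samples.append(p)
    print(sum(samples[1000:]) / len(samples[1000:]))  # posterior mean ~ 0.4
    ```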

  10. Scientific and Computational Challenges of the Fusion Simulation Program (FSP)

    SciTech Connect (OSTI)

    William M. Tang

    2011-02-09

    This paper highlights the scientific and computational challenges facing the Fusion Simulation Program (FSP), a major national initiative in the United States with the primary objective of enabling scientific discovery of important new plasma phenomena, with associated understanding that emerges only upon integration. This requires developing a predictive integrated simulation capability for magnetically-confined fusion plasmas that are properly validated against experiments in regimes relevant for producing practical fusion energy. It is expected to provide a suite of advanced modeling tools for reliably predicting fusion device behavior with comprehensive and targeted science-based simulations of nonlinearly-coupled phenomena in the core plasma, edge plasma, and wall region on time and space scales required for fusion energy production. As such, it will strive to embody the most current theoretical and experimental understanding of magnetic fusion plasmas and to provide a living framework for the simulation of such plasmas as the associated physics understanding continues to advance over the next several decades. Substantive progress on answering the outstanding scientific questions in the field will drive the FSP toward its ultimate goal of developing the ability to predict the behavior of plasma discharges in toroidal magnetic fusion devices with high physics fidelity on all relevant time and space scales. From a computational perspective, this will demand computing resources in the petascale range and beyond together with the associated multi-core algorithmic formulation needed to address burning plasma issues relevant to ITER - a multibillion dollar collaborative experiment involving seven international partners representing over half the world's population. Even more powerful exascale platforms will be needed to meet the future challenges of designing a demonstration fusion reactor (DEMO). Analogous to other major applied physics modeling projects (e

  11. Robust large-scale parallel nonlinear solvers for simulations.

    SciTech Connect (OSTI)

    Bader, Brett William; Pawlowski, Roger Patrick; Kolda, Tamara Gibson

    2005-11-01

    This report documents research to develop robust and efficient solution techniques for solving large-scale systems of nonlinear equations. The most widely used method for solving systems of nonlinear equations is Newton's method. While much research has been devoted to augmenting Newton-based solvers (usually with globalization techniques), little has been devoted to exploring the application of different models. Our research has been directed at evaluating techniques using different models than Newton's method: a lower order model, Broyden's method, and a higher order model, the tensor method. We have developed large-scale versions of each of these models and have demonstrated their use in important applications at Sandia. Broyden's method replaces the Jacobian with an approximation, allowing codes that cannot evaluate a Jacobian or have an inaccurate Jacobian to converge to a solution. Limited-memory methods, which have been successful in optimization, allow us to extend this approach to large-scale problems. We compare the robustness and efficiency of Newton's method, modified Newton's method, Jacobian-free Newton-Krylov method, and our limited-memory Broyden method. Comparisons are carried out for large-scale applications of fluid flow simulations and electronic circuit simulations. Results show that, in cases where the Jacobian was inaccurate or could not be computed, Broyden's method converged in some cases where Newton's method failed to converge. We identify conditions where Broyden's method can be more efficient than Newton's method. We also present modifications to a large-scale tensor method, originally proposed by Bouaricha, for greater efficiency, better robustness, and wider applicability. Tensor methods are an alternative to Newton-based methods and are based on computing a step from a local quadratic model rather than a linear model. The advantage of Bouaricha's method is that it can use any existing linear solver, which makes it simple to write
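
    For orientation, a dense "good Broyden" iteration is short enough to sketch; the report's limited-memory, large-scale variants differ substantially, so this is illustrative only:

    ```python
    # Dense "good Broyden" sketch: rank-one updates of an approximate
    # Jacobian avoid repeated derivative evaluations. Illustrative only.
    import numpy as np

    def broyden(F, x0, tol=1e-10, max_iter=50, eps=1e-6):
        x = np.asarray(x0, dtype=float)
        n, Fx = len(x), F(np.asarray(x0, dtype=float))
        B = np.empty((n, n))               # one-time finite-difference Jacobian
        for j in range(n):
            e = np.zeros(n); e[j] = eps
            B[:, j] = (F(x + e) - Fx) / eps
        for _ in range(max_iter):
            if np.linalg.norm(Fx) < tol:
                break
            s = np.linalg.solve(B, -Fx)    # quasi-Newton step: B s = -F(x)
            x_new = x + s
            F_new = F(x_new)
            y = F_new - Fx
            # Good Broyden update: B <- B + (y - B s) s^T / (s^T s)
            B += np.outer(y - B @ s, s) / (s @ s)
            x, Fx = x_new, F_new
        return x

    # Line/ellipse intersection: roots at (0, 1) and (2, 0).
    F = lambda v: np.array([v[0] + 2*v[1] - 2, v[0]**2 + 4*v[1]**2 - 4])
    print(broyden(F, [1.0, 2.0]))
    ```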

  12. Large-Scale PV Integration Study

    SciTech Connect (OSTI)

    Lu, Shuai; Etingov, Pavel V.; Diao, Ruisheng; Ma, Jian; Samaan, Nader A.; Makarov, Yuri V.; Guo, Xinxin; Hafen, Ryan P.; Jin, Chunlian; Kirkham, Harold; Shlatz, Eugene; Frantzis, Lisa; McClive, Timothy; Karlson, Gregory; Acharya, Dhruv; Ellis, Abraham; Stein, Joshua; Hansen, Clifford; Chadliev, Vladimir; Smart, Michael; Salgo, Richard; Sorensen, Rahn; Allen, Barbara; Idelchik, Boris

    2011-07-29

    This research effort evaluates the impact of large-scale photovoltaic (PV) and distributed generation (DG) output on NV Energy’s electric grid system in southern Nevada. It analyzes the ability of NV Energy’s generation to accommodate increasing amounts of utility-scale PV and DG, and the resulting cost of integrating variable renewable resources. The study was jointly funded by the United States Department of Energy and NV Energy, and conducted by a project team comprised of industry experts and research scientists from Navigant Consulting Inc., Sandia National Laboratories, Pacific Northwest National Laboratory and NV Energy.

  13. Batteries for Large Scale Energy Storage

    SciTech Connect (OSTI)

    Soloveichik, Grigorii L.

    2011-07-15

    In recent years, with the deployment of renewable energy sources, advances in electrified transportation, and development in smart grids, the markets for large-scale stationary energy storage have grown rapidly. Electrochemical energy storage methods are strong candidate solutions due to their high energy density, flexibility, and scalability. This review provides an overview of mature and emerging technologies for secondary and redox flow batteries. New developments in the chemistry of secondary and flow batteries as well as regenerative fuel cells are also considered. Advantages and disadvantages of current and prospective electrochemical energy storage options are discussed. The most promising technologies in the short term are high-temperature sodium batteries with β″-alumina electrolyte, lithium-ion batteries, and flow batteries. Regenerative fuel cells and lithium metal batteries with high energy density require further research to become practical.

  14. DOE Advanced Scientific Computing Advisory Committee (ASCAC) Report: Exascale Computing Initiative Review

    SciTech Connect (OSTI)

    Reed, Daniel; Berzins, Martin; Pennington, Robert; Sarkar, Vivek; Taylor, Valerie

    2015-08-01

    On November 19, 2014, the Advanced Scientific Computing Advisory Committee (ASCAC) was charged with reviewing the Department of Energy’s conceptual design for the Exascale Computing Initiative (ECI). In particular, this included assessing whether there are significant gaps in the ECI plan or areas that need to be given priority or extra management attention. Given the breadth and depth of previous reviews of the technical challenges inherent in exascale system design and deployment, the subcommittee focused its assessment on organizational and management issues, considering technical issues only as they informed organizational or management priorities and structures. This report presents the observations and recommendations of the subcommittee.

  15. Eighth SIAM conference on parallel processing for scientific computing: Final program and abstracts

    SciTech Connect (OSTI)

    1997-12-31

    This SIAM conference is the premier forum for developments in parallel numerical algorithms, a field that has seen very lively and fruitful developments over the past decade, and whose health is still robust. Themes for this conference were: combinatorial optimization; data-parallel languages; large-scale parallel applications; message-passing; molecular modeling; parallel I/O; parallel libraries; parallel software tools; parallel compilers; particle simulations; problem-solving environments; and sparse matrix computations.

  16. Parallel Index and Query for Large Scale Data Analysis

    SciTech Connect (OSTI)

    Chou, Jerry; Wu, Kesheng; Ruebel, Oliver; Howison, Mark; Qiang, Ji; Prabhat,; Austin, Brian; Bethel, E. Wes; Ryne, Rob D.; Shoshani, Arie

    2011-07-18

    Modern scientific datasets present numerous data management and analysis challenges. State-of-the-art index and query technologies are critical for facilitating interactive exploration of large datasets, but numerous challenges remain in terms of designing a system for processing general scientific datasets. The system needs to be able to run on distributed multi-core platforms, efficiently utilize underlying I/O infrastructure, and scale to massive datasets. We present FastQuery, a novel software framework that addresses these challenges. FastQuery utilizes a state-of-the-art index and query technology (FastBit) and is designed to process massive datasets on modern supercomputing platforms. We apply FastQuery to processing of a massive 50TB dataset generated by a large scale accelerator modeling code. We demonstrate the scalability of the tool to 11,520 cores. Motivated by the scientific need to search for interesting particles in this dataset, we use our framework to reduce search time from hours to tens of seconds.
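
    FastQuery's actual API is not reproduced in the abstract; at a toy scale, the query-driven selection it accelerates can be mimicked with NumPy boolean masks, which play the role of the compressed bitmaps that FastBit precomputes so that repeated range queries avoid full scans:

    ```python
    # Toy mimic of query-driven selection, with boolean masks standing in
    # for compressed bitmap indexes. Synthetic data, illustrative only.
    import numpy as np

    rng = np.random.default_rng(42)
    energy = rng.exponential(scale=1.0, size=1_000_000)  # particle energies
    px = rng.standard_normal(1_000_000)                  # momentum component

    hot = energy > 8.0            # "index": reusable bitmap of a rare condition
    selected = np.flatnonzero(hot & (px > 0))            # combine bitmaps
    print(len(selected), "of", energy.size, "records match")
    ```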

  17. Stimulated forward Raman scattering in large scale-length laser...

    Office of Scientific and Technical Information (OSTI)

    in large scale-length laser-produced plasmas Citation Details In-Document Search Title: Stimulated forward Raman scattering in large scale-length laser-produced plasmas You ...

  18. Locations of Smart Grid Demonstration and Large-Scale Energy...

    Office of Environmental Management (EM)

    Locations of Smart Grid Demonstration and Large-Scale Energy Storage Projects Locations of Smart Grid Demonstration and Large-Scale Energy Storage Projects Map of the United States ...

  19. Planning under uncertainty solving large-scale stochastic linear programs

    SciTech Connect (OSTI)

    Infanger, G. (Dept. of Operations Research; Technische Univ. Vienna, Inst. fuer Energiewirtschaft)

    1992-12-01

    For many practical problems, solutions obtained from deterministic models are unsatisfactory because they fail to hedge against certain contingencies that may occur in the future. Stochastic models address this shortcoming, but up to recently seemed to be intractable due to their size. Recent advances both in solution algorithms and in computer technology now allow us to solve important and general classes of practical stochastic problems. We show how large-scale stochastic linear programs can be efficiently solved by combining classical decomposition and Monte Carlo (importance) sampling techniques. We discuss the methodology for solving two-stage stochastic linear programs with recourse, present numerical results of large problems with numerous stochastic parameters, show how to efficiently implement the methodology on a parallel multi-computer and derive the theory for solving a general class of multi-stage problems with dependency of the stochastic parameters within a stage and between different stages.
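
    The decomposition and importance-sampling machinery is beyond a snippet, but the basic shape of planning under uncertainty is easy to show: choose a first-stage decision that hedges over Monte Carlo scenarios. The toy newsvendor below uses plain sampling, not the paper's decomposition-plus-importance-sampling method:

    ```python
    # Toy two-stage planning under uncertainty: pick a first-stage order
    # quantity maximizing expected second-stage profit over sampled demand.
    import numpy as np

    rng = np.random.default_rng(7)
    demand = rng.lognormal(mean=3.0, sigma=0.5, size=100_000)  # scenarios

    cost, price = 4.0, 10.0
    def expected_profit(order_qty):
        sales = np.minimum(order_qty, demand)   # recourse: sell what you can
        return float(np.mean(price * sales - cost * order_qty))

    candidates = np.linspace(0.0, 60.0, 241)
    best = max(candidates, key=expected_profit)
    print(f"hedged order quantity ~ {best:.2f}")
    ```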

  20. SimFS: A Large Scale Parallel File System Simulator

    Energy Science and Technology Software Center (OSTI)

    2011-08-30

    The software provides both framework and tools to simulate a large-scale parallel file system such as Lustre.

  1. DLFM library tools for large scale dynamic applications

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    DLFM library tools for large scale dynamic applications Large scale Python and other dynamic applications may spend a huge amount of time at startup. The DLFM library, developed by Mike Davis at Cray, Inc., is a set of functions that can be incorporated into a dynamically-linked application to provide improved performance during the loading of dynamic libraries when running the application at large scale on Edison. To access this library, do module

  2. Scientific Application Requirements for Leadership Computing at the Exascale

    SciTech Connect (OSTI)

    Ahern, Sean; Alam, Sadaf R; Fahey, Mark R; Hartman-Baker, Rebecca J; Barrett, Richard F; Kendall, Ricky A; Kothe, Douglas B; Mills, Richard T; Sankaran, Ramanan; Tharrington, Arnold N; White III, James B

    2007-12-01

    The Department of Energy's Leadership Computing Facility, located at Oak Ridge National Laboratory's National Center for Computational Sciences, recently polled scientific teams that had large allocations at the center in 2007, asking them to identify computational science requirements for future exascale systems (capable of an exaflop, or 10^18 floating point operations per second). These requirements are necessarily speculative, since an exascale system will not be realized until the 2015-2020 timeframe, and are expressed where possible relative to a recent petascale requirements analysis of similar science applications [1]. Our initial findings, which beg further data collection, validation, and analysis, did in fact align with many of our expectations and existing petascale requirements, yet they also contained some surprises, complete with new challenges and opportunities. First and foremost, the breadth and depth of science prospects and benefits on an exascale computing system are striking. Without a doubt, they justify a large investment, even with its inherent risks. The possibilities for return on investment (by any measure) are too large to let us ignore this opportunity. The software opportunities and challenges are enormous. In fact, as one notable computational scientist put it, the scale of questions being asked at the exascale is tremendous and the hardware has gotten way ahead of the software. We are in grave danger of failing because of a software crisis unless concerted investments and coordinating activities are undertaken to reduce and close this hardware/software gap over the next decade. Key to success will be a rigorous requirement for natural mapping of algorithms to hardware in a way that complements (rather than competes with) compilers and runtime systems. The level of abstraction must be raised, and more attention must be paid to functionalities and capabilities that incorporate intent into data structures, are aware of memory hierarchy

  3. Building a Large Scale Climate Data System in Support of HPC Environment

    SciTech Connect (OSTI)

    Wang, Feiyi; Harney, John F; Shipman, Galen M

    2011-01-01

    The Earth System Grid Federation (ESG) is a large scale, multi-institutional, interdisciplinary project that aims to provide climate scientists and impact policy makers worldwide a web-based and client-based platform to publish, disseminate, compare and analyze ever increasing climate related data. This paper describes our practical experiences on the design, development and operation of such a system. In particular, we focus on the support of the data lifecycle from a high performance computing (HPC) perspective that is critical to the end-to-end scientific discovery process. We discuss three subjects that interconnect the consumer and producer of scientific datasets: (1) the motivations, complexities and solutions of deep storage access and sharing in a tightly controlled environment; (2) the importance of scalable and flexible data publication/population; and (3) high performance indexing and search of data with geospatial properties. These perceived corner issues collectively contributed to the overall user experience and proved to be as important as any other architectural design considerations. Although the requirements and challenges are rooted and discussed from a climate science domain context, we believe the architectural problems, ideas and solutions discussed in this paper are generally useful and applicable in a larger scope.

  4. ANALYSIS OF TURBULENT MIXING JETS IN LARGE SCALE TANK

    SciTech Connect (OSTI)

    Lee, S.; Dimenna, R.; Leishear, R.; Stefanko, D.

    2007-03-28

    Flow evolution models were developed to evaluate the performance of the new advanced design mixer pump for sludge mixing and removal operations with high-velocity liquid jets in one of the large-scale Savannah River Site waste tanks, Tank 18. This paper describes the computational model, the flow measurements used to provide validation data in the region far from the jet nozzle, the extension of the computational results to real tank conditions through the use of existing sludge suspension data, and finally, the sludge removal results from actual Tank 18 operations. A computational fluid dynamics approach was used to simulate the sludge removal operations. The models employed a three-dimensional representation of the tank with a two-equation turbulence model. Both the computational approach and the models were validated with onsite test data reported here and literature data. The model was then extended to actual conditions in Tank 18 through a velocity criterion to predict the ability of the new pump design to suspend settled sludge. A qualitative comparison with sludge removal operations in Tank 18 showed a reasonably good comparison with final results subject to significant uncertainties in actual sludge properties.

  5. Energy Department Applauds Nation's First Large-Scale Industrial Carbon

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Capture and Storage Facility | Department of Energy Nation's First Large-Scale Industrial Carbon Capture and Storage Facility Energy Department Applauds Nation's First Large-Scale Industrial Carbon Capture and Storage Facility August 24, 2011 - 6:23pm Addthis Washington, D.C. - The U.S. Department of Energy issued the following statement in support of today's groundbreaking for construction of the nation's first large-scale industrial carbon capture and storage (ICCS) facility in Decatur,

  6. Large-Scale Federal Renewable Energy Projects | Department of Energy

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Large-Scale Federal Renewable Energy Projects Large-Scale Federal Renewable Energy Projects Renewable energy projects larger than 10 megawatts (MW), also known as utility-scale projects, are complex and typically require private-sector financing. The Federal Energy Management Program (FEMP) developed a guide to help federal agencies, and the developers and financiers that work with them, to successfully install these projects at federal facilities. FEMP's Large-Scale Renewable Energy Guide,

  7. Large-Scale Industrial Carbon Capture, Storage Plant Begins Construction |

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Department of Energy Large-Scale Industrial Carbon Capture, Storage Plant Begins Construction Large-Scale Industrial Carbon Capture, Storage Plant Begins Construction August 24, 2011 - 1:00pm Addthis Washington, DC - Construction activities have begun at an Illinois ethanol plant that will demonstrate carbon capture and storage. The project, sponsored by the U.S. Department of Energy's Office of Fossil Energy, is the first large-scale integrated carbon capture and storage (CCS) demonstration

  8. Large Scale Computing and Storage Requirements for Basic Energy...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Sciences: A BES / ASCR / NERSC Workshop, February 9-10, 2010... Read More. Workshop Logistics: workshop location, directions, and registration information are included here...

  9. Computational Fluid Dynamics & Large-Scale Uncertainty Quantification...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    ... (CFD) simulations and uncertainty analyses. The project developed new mathematical uncertainty quantification techniques and applied them, in combination with high-fidelity CFD ...

  10. Large Scale Computing Requirements for Basic Energy Sciences...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    ... Acoustic Waves: (1/v^2(x,y,z)) ∂^2 p(x,y,z,t)/∂t^2 = ∇^2 p(x,y,z,t) + s(t). Starting Models - Test Different Noise Assumptions * Scale Problem Up to Ever ...

  11. Large Scale Computing and Storage Requirements for High Energy...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    for High Energy Physics Accelerator Physics P. Spentzouris, Fermilab Motivation ... Project-X http://www.er.doe.gov/hep/HEPAP/reports/P5Report%2006022008.pdf ComPASS The SciDAC2 ...

  12. Large Scale Production Computing and Storage Requirements for...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    11-12, 2012 Hilton Rockville Hotel and Executive Meeting Center 1750 Rockville Pike Rockville, MD, 20852-1699 TEL: 1-301-468-1100 Sponsored by: U.S. Department of Energy...

  13. Initial explorations of ARM processors for scientific computing...

    Office of Scientific and Technical Information (OSTI)

    DOE Contract Number: AC02-07CH11359 Resource Type: Conference Resource Relation: Conference: 15th International Workshop on Advanced Computing and Analysis Techniques in Physics ...

  14. The implications of spatial locality on scientific computing...

    Office of Scientific and Technical Information (OSTI)

    Research Org: Sandia National Laboratories Sponsoring Org: USDOE Country of Publication: United States Language: English Subject: 97 MATHEMATICAL METHODS AND COMPUTING; BENCHMARKS; ...

  15. Scientific Computing at Los Alamos National Laboratory (Conference...

    Office of Scientific and Technical Information (OSTI)

    States Research Org: Los Alamos National Laboratory (LANL) Sponsoring Org: DOELANL Country of Publication: United States Language: English Subject: Mathematics & Computing(97

  16. Overcoming the Barrier to Achieving Large-Scale Production -...

    Broader source: Energy.gov (indexed) [DOE]

    Semprius Confidential 1 Overcoming the Barriers to Achieving Large-Scale Production - A ... August 31, 2011 Semprius Confidential 2 Semprius Overview Background Company: * Leading ...

  17. Optimizing Cluster Heads for Energy Efficiency in Large-Scale...

    Office of Scientific and Technical Information (OSTI)

    clustering is generally considered as an efficient and scalable way to facilitate the management and operation of such large-scale networks and minimize the total energy...

  18. Stimulated forward Raman scattering in large scale-length laser...

    Office of Scientific and Technical Information (OSTI)

    Stimulated forward Raman scattering in large scale-length laser-produced plasmas Citation Details In-Document Search Title: Stimulated forward Raman scattering in large ...

  19. Strategies to Finance Large-Scale Deployment of Renewable Energy...

    Open Energy Info (EERE)

    to Finance Large-Scale Deployment of Renewable Energy Projects: An Economic Development and Infrastructure Approach Jump to: navigation, search Tool Summary LAUNCH TOOL Name:...

  20. Understanding large scale HPC systems through scalable monitoring...

    Office of Scientific and Technical Information (OSTI)

    HPC systems through scalable monitoring and analysis. Citation Details In-Document Search Title: Understanding large scale HPC systems through scalable monitoring and analysis. ...

  1. FEMP Helps Federal Facilities Develop Large-Scale Renewable Energy...

    Broader source: Energy.gov (indexed) [DOE]

    jobs, and advancing national goals for energy security. The guide describes the fundamentals of deploying financially attractive, large-scale renewable energy projects and...

  2. Optimizing Cluster Heads for Energy Efficiency in Large-Scale...

    Office of Scientific and Technical Information (OSTI)

    Optimizing Cluster Heads for Energy Efficiency in Large-Scale Heterogeneous Wireless Sensor Networks Gu, Yi; Wu, Qishi; Rao, Nageswara S. V. Hindawi Publishing Corporation None...

  3. Energy Department Applauds Nation's First Large-Scale Industrial...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    ... News Media Contact: 202-586-4940 Related Articles Large-Scale Industrial Carbon ... designed National Sequestration Education Center, located at Richland Community ...

  4. Large-Scale Hydropower Basics | Department of Energy

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Large-Scale Hydropower Basics August 14, 2013 - 3:11pm Large-scale hydropower plants are generally developed to produce electricity for government or electric utility projects. These plants are more than 30 megawatts (MW) in size, and there is more than 80,000 MW of installed generation capacity in the United States today. Most large-scale hydropower projects use a dam and a reservoir to retain water from a river. When the

  5. High Fidelity Simulations of Large-Scale Wireless Networks

    SciTech Connect (OSTI)

    Onunkwo, Uzoma; Benz, Zachary

    2015-11-01

    The worldwide proliferation of wireless connected devices continues to accelerate. There are tens of billions of wireless links across the planet, with an additional explosion of new wireless usage anticipated as the Internet of Things develops. Wireless technologies not only provide convenience for mobile applications, but are also extremely cost-effective to deploy. Thus, this trend towards wireless connectivity will only continue, and Sandia must develop the necessary simulation technology to proactively analyze the associated emerging vulnerabilities. Wireless networks are marked by mobility and proximity-based connectivity. The de facto standard for exploratory studies of wireless networks is discrete event simulation (DES). However, the simulation of large-scale wireless networks is extremely difficult due to prohibitively large turnaround times. A path forward is to expedite simulations with parallel discrete event simulation (PDES) techniques. The mobility and distance-based connectivity associated with wireless simulations, however, typically doom PDES approaches, which fail to scale (e.g., the OPNET and ns-3 simulators). We propose a PDES-based tool aimed at reducing the communication overhead between processors. The proposed solution will use light-weight processes to dynamically distribute computation workload while mitigating communication overhead associated with synchronizations. This work is vital to the analytics and validation capabilities of simulation and emulation at Sandia. We have years of experience in Sandia's simulation and emulation projects (e.g., MINIMEGA and FIREWHEEL). Sandia's current highly-regarded capabilities in large-scale emulations have focused on wired networks, where two assumptions prevent scalable wireless studies: (a) the connections between objects are mostly static and (b) the nodes have fixed locations.
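    For readers unfamiliar with the DES machinery the abstract refers to: the core of any discrete event simulator is a time-ordered event queue, and PDES partitions this loop across processes while synchronizing their logical clocks. A minimal sequential sketch (illustrative only; the toy forwarding chain is invented):

    ```python
    import heapq

    # Minimal discrete event simulation (DES) core: a time-ordered event queue.
    # PDES approaches partition such a loop across processes and synchronize
    # logical clocks; this sequential sketch is illustrative only.
    events = []                      # priority queue of (time, seq, handler, args)
    seq = 0

    def schedule(t, handler, *args):
        global seq
        heapq.heappush(events, (t, seq, handler, args))
        seq += 1

    def transmit(t, src, dst):
        print(f"t={t:.3f}s: node {src} -> node {dst}")
        if dst < 3:                  # toy forwarding chain, three hops
            schedule(t + 0.010, transmit, dst, dst + 1)

    schedule(0.0, transmit, 0, 1)
    while events:
        t, _, handler, args = heapq.heappop(events)
        handler(t, *args)
    ```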

  6. Multicore Challenges and Benefits for High Performance Scientific Computing

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Nielsen, Ida M.B.; Janssen, Curtis L.

    2008-01-01

    Until recently, performance gains in processors were achieved largely by improvements in clock speeds and instruction level parallelism. Thus, applications could obtain performance increases with relatively minor changes by upgrading to the latest generation of computing hardware. Currently, however, processor performance improvements are realized by using multicore technology and hardware support for multiple threads within each core, and taking full advantage of this technology to improve the performance of applications requires exposure of extreme levels of software parallelism. We will here discuss the architecture of parallel computers constructed from many multicore chips as well as techniques for managing the complexity of programming such computers, including the hybrid message-passing/multi-threading programming model. We will illustrate these ideas with a hybrid distributed memory matrix multiply and a quantum chemistry algorithm for energy computation using Møller–Plesset perturbation theory.
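    As a rough illustration of the multi-threading half of the hybrid model described here (not the authors' code; the matrix size and thread count are arbitrary), a thread-parallel blocked matrix multiply in Python. In a full hybrid code, each MPI rank would own a block of A and run a loop like this over its local data:

    ```python
    import threading
    import numpy as np

    # Thread-parallel blocked matrix multiply, illustrating the multi-threading
    # half of the hybrid message-passing/multi-threading model described above.
    n, nthreads = 512, 4
    A, B = np.random.rand(n, n), np.random.rand(n, n)
    C = np.zeros((n, n))

    def worker(tid):
        rows = slice(tid * n // nthreads, (tid + 1) * n // nthreads)
        C[rows] = A[rows] @ B        # NumPy releases the GIL inside the product

    threads = [threading.Thread(target=worker, args=(t,)) for t in range(nthreads)]
    for t in threads: t.start()
    for t in threads: t.join()
    assert np.allclose(C, A @ B)
    ```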

  7. Energy Department Seeks Proposals to Use Scientific Computing...

    Office of Environmental Management (EM)

    ... machines, as well as five percent of the computer time at DOE's Argonne and Pacific ... DOE's Office of Science is the single largest supporter of basic research in the physical ...

  8. Data-aware distributed scientific computing for big-data problems...

    Office of Scientific and Technical Information (OSTI)

    big-data problems in bio-surveillance Citation Details In-Document Search Title: Data-aware distributed scientific computing for big-data problems in bio-surveillance You are ...

  9. Laboratory Directed Research & Development Page National Energy Research Scientific Computing Center

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Directed Research & Development Page National Energy Research Scientific Computing Center T3E Individual Node Optimization Michael Stewart, SGI/Cray, 4/9/98 * Introduction * T3E Processor * T3E Local Memory * Cache Structure * Optimizing Codes for Cache Usage * Loop Unrolling * Other Useful Optimization Options * References Introduction * Primary topic will be single processor

  10. PNNL pushing scientific discovery through data intensive computing breakthroughs

    ScienceCinema (OSTI)

    Deborah Gracio; David Koppenaal; Ruby Leung

    2012-12-31

    The Pacific Northwest National Laboratory's approach to data intensive computing (DIC) is focused on three key research areas: hybrid hardware architectures, software architectures, and analytic algorithms. Advancements in these areas will help to address, and solve, DIC issues associated with capturing, managing, analyzing, and understanding, in near real time, data at volumes and rates that push the frontiers of current technologies.

  11. Large-Scale Sequencing: The Future of Genomic Sciences Colloquium

    SciTech Connect (OSTI)

    Margaret Riley; Merry Buckley

    2009-01-01

    Genetic sequencing and the various molecular techniques it has enabled have revolutionized the field of microbiology. Examining and comparing the genetic sequences borne by microbes - including bacteria, archaea, viruses, and microbial eukaryotes - provides researchers insights into the processes microbes carry out, their pathogenic traits, and new ways to use microorganisms in medicine and manufacturing. Until recently, sequencing entire microbial genomes has been laborious and expensive, and the decision to sequence the genome of an organism was made on a case-by-case basis by individual researchers and funding agencies. Now, thanks to new technologies, the cost and effort of sequencing is within reach for even the smallest facilities, and the ability to sequence the genomes of a significant fraction of microbial life may be possible. The availability of numerous microbial genomes will enable unprecedented insights into microbial evolution, function, and physiology. However, the current ad hoc approach to gathering sequence data has resulted in an unbalanced and highly biased sampling of microbial diversity. A well-coordinated, large-scale effort to target the breadth and depth of microbial diversity would result in the greatest impact. The American Academy of Microbiology convened a colloquium to discuss the scientific benefits of engaging in a large-scale, taxonomically-based sequencing project. A group of individuals with expertise in microbiology, genomics, informatics, ecology, and evolution deliberated on the issues inherent in such an effort and generated a set of specific recommendations for how best to proceed. The vast majority of microbes are presently uncultured and, thus, pose significant challenges to such a taxonomically-based approach to sampling genome diversity. However, we have yet to even scratch the surface of the genomic diversity among cultured microbes. A coordinated sequencing effort of cultured organisms is an appropriate place to begin

  12. Large-Scale First-Principles Molecular Dynamics Simulations on...

    Office of Scientific and Technical Information (OSTI)

    for large-scale parallel platforms such as BlueGene/L. Strong scaling tests for a Materials Science application show an 86% scaling efficiency between 1,024 and 32,768 CPUs. ...

  13. Self-consistency tests of large-scale dynamics parameterizations...

    Office of Scientific and Technical Information (OSTI)

    In self-consistency tests based on radiative-convective equilibrium (RCE; i.e., no large-scale convergence), we find that simulations either weakly coupled or strongly coupled to ...

  14. ARM - Evaluation Product - Vertical Air Motion during Large-Scale...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Products: Vertical Air Motion during Large-Scale Stratiform Rain ARM Data Discovery Browse ... Send us a note below or call us at 1-888-ARM-DATA. Send Evaluation Product: Vertical Air ...

  15. Towards a Large-Scale Recording System: Demonstration of Polymer...

    Office of Scientific and Technical Information (OSTI)

    of Polymer-Based Penetrating Array for Chronic Neural Recording Citation Details In-Document Search Title: Towards a Large-Scale Recording System: Demonstration of Polymer-Based ...

  16. How Three Retail Buyers Source Large-Scale Solar Electricity

    Broader source: Energy.gov [DOE]

    Large-scale, non-utility solar power purchase agreements (PPAs) are still a rarity despite the growing popularity of PPAs across the country. In this webinar, participants will learn more about how...

  17. Cosmological Simulations for Large-Scale Sky Surveys | Argonne...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    The focus of cosmology today is on its two mysterious pillars, dark matter and dark energy. Large-scale sky surveys are the current drivers of precision cosmology and have been ...

  18. Cosmological Simulations for Large-Scale Sky Surveys | Argonne...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    The focus of cosmology today revolves around two mysterious pillars, dark matter and dark energy. Large-scale sky surveys are the current drivers of precision cosmology and have ...

  19. COLLOQUIUM: Liquid Metal Batteries for Large-scale Energy Storage...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    June 22, 2016, 4:15pm to 5:30pm Colloquia MBG Auditorium, PPPL (284 cap.) COLLOQUIUM: Liquid Metal Batteries for Large-scale Energy Storage Dr. Hojong Kim Pennsylvania State ...

  20. Report of the Workshop on Petascale Systems Integration for LargeScale Facilities

    SciTech Connect (OSTI)

    Kramer, William T.C.; Walter, Howard; New, Gary; Engle, Tom; Pennington, Rob; Comes, Brad; Bland, Buddy; Tomlison, Bob; Kasdorf, Jim; Skinner, David; Regimbal, Kevin

    2007-10-01

    There are significant issues regarding large-scale system integration that are not being addressed in other forums, such as current research portfolios or vendor user groups. Unfortunately, the issues in the area of large-scale system integration often fall into a netherworld: not research, not facilities, not procurement, not operations, not user services. Taken together, these issues, along with the impact of sub-optimal integration technology, mean that the time required to deploy, integrate, and stabilize a large-scale system may consume up to 20 percent of the useful life of such systems. Improving the state of the art for large-scale systems integration has the potential to increase the scientific productivity of these systems. Sites have significant expertise, but there are no easy ways to leverage this expertise among them. Many issues inhibit the sharing of information, including available time and effort, as well as issues with sharing proprietary information. Vendors also benefit in the long run from the solutions to issues detected during site testing and integration. There is a great deal of enthusiasm for making large-scale system integration a full-fledged partner, along with the other major thrusts supported by funding agencies, in the definition, design, and use of petascale systems. Integration technology and issues should have a full 'seat at the table' as petascale and exascale initiatives and programs are planned. The workshop attendees identified a wide range of issues and suggested paths forward. Pursuing these with funding opportunities and innovation offers the opportunity to dramatically improve the state of large-scale system integration.

  1. Revised Environmental Assessment Large-Scale, Open-Air Explosive

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Environmental Assessment Large-Scale, Open-Air Explosive Detonation, DIVINE STRAKE, at the Nevada Test Site May 2006 Prepared by Department of Energy National Nuclear Security Administration Nevada Site Office Environmental Assessment May 2006 Large-Scale, Open-Air Explosive Detonation, DIVINE STRAKE, at the Nevada Test Site TABLE OF CONTENTS 1.0 PURPOSE AND NEED FOR ACTION ... 1-1 1.1 Introduction and

  2. Breakthrough Large-Scale Industrial Project Begins Carbon Capture and

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Utilization | Department of Energy Breakthrough Large-Scale Industrial Project Begins Carbon Capture and Utilization January 25, 2013 - 12:00pm Washington, DC - A breakthrough carbon capture, utilization, and storage (CCUS) project in Texas has begun capturing carbon dioxide (CO2) and piping it to an oilfield for use in enhanced oil recovery (EOR). Read the project factsheet The project at Air Products

  3. COLLOQUIUM: Large Scale Superconducting Magnets for Variety of Applications

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    | Princeton Plasma Physics Lab October 15, 2014, 4:00pm to 5:30pm Colloquia MBG Auditorium COLLOQUIUM: Large Scale Superconducting Magnets for Variety of Applications Professor Joseph Minervini Massachusetts Institute of Technology Presentation: Superconducting_Magnet_Technology_for_Fusion_and_Large_Scale_Applications.pdf Over the past several decades the U. S. magnetic confinement fusion program, working in collaboration with international partners, has developed superconductor and

  4. Ground movements associated with large-scale underground coal gasification

    SciTech Connect (OSTI)

    Siriwardane, H.J.; Layne, A.W.

    1989-09-01

    The primary objective of this work was to predict the surface and underground movement associated with large-scale multiwell burn sites in the Illinois Basin and Appalachian Basin by using the subsidence/thermomechanical model UCG/HEAT. This code is based on the finite element method. In particular, it can be used to compute (1) the temperature field around an underground cavity when the temperature variation of the cavity boundary is known, and (2) displacements and stresses associated with body forces (gravitational forces) and a temperature field. It is hypothesized that large Underground Coal Gasification (UCG) cavities generated during the line-drive process will be similar to those generated by longwall mining. If that is the case, then as a UCG process continues, the roof of the cavity becomes unstable and collapses. In the UCG/HEAT computer code, roof collapse is modeled using a simplified failure criterion (Lee 1985). It is anticipated that roof collapse would occur behind the burn front; therefore, forward combustion can be continued. As the gasification front propagates, the length of the cavity would become much larger than its width. Because of this large length-to-width ratio in the cavity, ground response behavior could be analyzed by considering a plane-strain idealization. In a plane-strain idealization of the UCG cavity, a cross-section perpendicular to the axis of propagation could be considered, and a thermomechanical analysis performed using a modified version of the two-dimensional finite element code UCG/HEAT. 15 refs., 9 figs., 3 tabs.
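    To make the temperature-field computation concrete: a generic explicit finite-difference heat-conduction sketch (this is not the UCG/HEAT code; all material parameters and the cavity geometry are placeholders) that holds a hot cavity boundary fixed and diffuses heat into the surrounding rock:

    ```python
    import numpy as np

    # Explicit 2D heat conduction stepping, illustrating the kind of
    # temperature-field computation the abstract describes. Generic sketch,
    # not UCG/HEAT; periodic boundaries are used purely for brevity.
    nx = ny = 50
    alpha, dx, dt = 1e-6, 0.5, 1e4        # diffusivity (m^2/s), spacing (m), step (s)
    T = np.full((ny, nx), 300.0)          # initial rock temperature (K)
    T[20:30, 20:30] = 1200.0              # hot cavity region (illustrative)

    for _ in range(1000):
        lap = (np.roll(T, 1, 0) + np.roll(T, -1, 0) +
               np.roll(T, 1, 1) + np.roll(T, -1, 1) - 4 * T) / dx**2
        T += alpha * dt * lap
        T[20:30, 20:30] = 1200.0          # hold cavity boundary temperature fixed

    print("peak temperature along the outer row: %.1f K" % T[0].max())
    ```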

  5. Certainty in Stockpile Computing: Recommending a Verification and Validation Program for Scientific Software

    SciTech Connect (OSTI)

    Lee, J.R.

    1998-11-01

    As computing assumes a more central role in managing the nuclear stockpile, the consequences of an erroneous computer simulation could be severe. Computational failures are common in other endeavors and have caused project failures, significant economic loss, and loss of life. This report examines the causes of software failure and proposes steps to mitigate them. A formal verification and validation program for scientific software is recommended and described.
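    A verification program of the kind recommended here codifies checks of numerical routines against known analytic answers. A minimal, hypothetical example of such a test:

    ```python
    import math

    # A minimal verification-style test: check a numerical routine against a
    # known analytic answer, the kind of check a formal V&V program codifies.
    def trapezoid(f, a, b, n):
        h = (b - a) / n
        return h * (0.5 * f(a) + sum(f(a + i * h) for i in range(1, n)) + 0.5 * f(b))

    approx = trapezoid(math.sin, 0.0, math.pi, 1000)
    exact = 2.0                       # integral of sin(x) on [0, pi]
    assert abs(approx - exact) < 1e-5, "verification failure"
    print("verified: error = %.2e" % abs(approx - exact))
    ```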

  6. National facility for advanced computational science: A sustainable path to scientific discovery

    SciTech Connect (OSTI)

    Simon, Horst; Kramer, William; Saphir, William; Shalf, John; Bailey, David; Oliker, Leonid; Banda, Michael; McCurdy, C. William; Hules, John; Canning, Andrew; Day, Marc; Colella, Philip; Serafini, David; Wehner, Michael; Nugent, Peter

    2004-04-02

    Lawrence Berkeley National Laboratory (Berkeley Lab) proposes to create a National Facility for Advanced Computational Science (NFACS) and to establish a new partnership between the American computer industry and a national consortium of laboratories, universities, and computing facilities. NFACS will provide leadership-class scientific computing capability to scientists and engineers nationwide, independent of their institutional affiliation or source of funding. This partnership will bring into existence a new class of computational capability in the United States that is optimal for science and will create a sustainable path towards petaflops performance.

  7. Topology-Aware Mappings for Large-Scale Eigenvalue Problems | Argonne

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Leadership Computing Facility Topology-Aware Mappings for Large-Scale Eigenvalue Problems Authors: Aktulga, H.M., Yang, C., Ng, E.G., Maris, P., Vary, J.P. Obtaining highly accurate predictions for properties of light atomic nuclei using the Configuration Interaction (CI) approach requires computing the lowest eigenvalues and associated eigenvectors of a large many-body nuclear Hamiltonian matrix, Ĥ. Since Ĥ is a large sparse matrix, a parallel iterative eigensolver designed for
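    The snippet breaks off, but the computation it describes, the lowest eigenpairs of a large sparse symmetric matrix via an iterative solver, can be sketched with SciPy's Lanczos-based eigsh (the random sparse stand-in below is illustrative, not a CI Hamiltonian):

    ```python
    import numpy as np
    import scipy.sparse as sp
    from scipy.sparse.linalg import eigsh

    # Computing the lowest eigenpairs of a large sparse symmetric matrix with a
    # Lanczos-type iterative eigensolver, the task described above. The random
    # sparse stand-in below is illustrative, not a nuclear CI Hamiltonian.
    n = 2000
    A = sp.random(n, n, density=1e-3, random_state=0)
    H = (A + A.T) * 0.5 + sp.diags(np.arange(n, dtype=float))  # symmetric, sparse

    vals, vecs = eigsh(H.tocsr(), k=5, which='SA')   # 5 smallest eigenvalues
    print("lowest eigenvalues:", np.round(vals, 4))
    ```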

  8. EINSTEIN'S SIGNATURE IN COSMOLOGICAL LARGE-SCALE STRUCTURE

    SciTech Connect (OSTI)

    Bruni, Marco; Hidalgo, Juan Carlos; Wands, David

    2014-10-10

    We show how the nonlinearity of general relativity generates a characteristic non-Gaussian signal in cosmological large-scale structure that we calculate at all perturbative orders in a large-scale limit. Newtonian gravity and general relativity provide complementary theoretical frameworks for modeling large-scale structure in ΛCDM cosmology; a relativistic approach is essential to determine initial conditions, which can then be used in Newtonian simulations studying the nonlinear evolution of the matter density. Most inflationary models in the very early universe predict an almost Gaussian distribution for the primordial metric perturbation, ζ. However, we argue that it is the Ricci curvature of comoving-orthogonal spatial hypersurfaces, R, that drives structure formation at large scales. We show how the nonlinear relation between the spatial curvature, R, and the metric perturbation, ζ, translates into a specific non-Gaussian contribution to the initial comoving matter density that we calculate for the simple case of an initially Gaussian ζ. Our analysis shows the nonlinear signature of Einstein's gravity in large-scale structure.
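    For context, a nonlinear relation of the kind the abstract refers to follows from the standard curvature of a conformally flat spatial metric $g_{ij} = a^2 e^{2\zeta}\delta_{ij}$; the quadratic gradient term is what sources a non-Gaussian contribution even for Gaussian ζ. This is a textbook-level sketch, not the paper's full result:

    ```latex
    % Ricci scalar of comoving-orthogonal spatial hypersurfaces with
    % metric g_{ij} = a^2 e^{2\zeta} \delta_{ij} (standard conformally
    % flat result; quadratic term sources the non-Gaussianity):
    R = -\frac{2}{a^{2}}\, e^{-2\zeta}
        \left( 2\nabla^{2}\zeta + (\nabla\zeta)^{2} \right)
    ```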

  9. ASCR Cybersecurity for Scientific Computing Integrity - Research Pathways and Ideas Workshop

    SciTech Connect (OSTI)

    Peisert, Sean; Potok, Thomas E.; Jones, Todd

    2015-06-03

    At the request of the U.S. Department of Energy's (DOE) Office of Science (SC) Advanced Scientific Computing Research (ASCR) program office, a workshop was held June 2-3, 2015, in Gaithersburg, MD, to identify potential long-term (10 to 20+ year) cybersecurity fundamental basic research and development challenges, strategies, and a roadmap facing future high performance computing (HPC), networks, data centers, and extreme-scale scientific user facilities. This workshop was a follow-on to the workshop held January 7-9, 2015, in Rockville, MD, that examined higher level ideas about scientific computing integrity specific to the mission of the DOE Office of Science. Issues included research computation and simulation that takes place on ASCR computing facilities and networks, as well as network-connected scientific instruments, such as those run by various DOE Office of Science programs. Workshop participants included researchers and operational staff from DOE national laboratories, as well as academic researchers and industry experts. Participants were selected based on the submission of abstracts relating to the topics discussed in the previous workshop report [1] and also from other ASCR reports, including "Abstract Machine Models and Proxy Architectures for Exascale Computing" [27], the DOE "Preliminary Conceptual Design for an Exascale Computing Initiative" [28], and the January 2015 machine learning workshop [29]. The workshop was also attended by several observers from DOE and other government agencies. The workshop was divided into three topic areas: (1) Trustworthy Supercomputing, (2) Extreme-Scale Data, Knowledge, and Analytics for Understanding and Improving Cybersecurity, and (3) Trust within High-end Networking and Data Centers. Participants were divided into three corresponding teams based on the category of their abstracts. The workshop began with a series of talks from the program manager and workshop chair, followed by the leaders for each of the three

  10. Heterogeneous high throughput scientific computing with APM X-Gene and Intel Xeon Phi

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Abdurachmanov, David; Bockelman, Brian; Elmer, Peter; Eulisse, Giulio; Knight, Robert; Muzaffar, Shahzad

    2015-01-01

    Electrical power requirements will be a constraint on the future growth of Distributed High Throughput Computing (DHTC) as used by High Energy Physics. Performance-per-watt is a critical metric for the evaluation of computer architectures for cost-efficient computing. Additionally, future performance growth will come from heterogeneous, many-core, and high computing density platforms with specialized processors. In this paper, we examine the Intel Xeon Phi Many Integrated Cores (MIC) co-processor and Applied Micro X-Gene ARMv8 64-bit low-power server system-on-a-chip (SoC) solutions for scientific computing applications. As a result, we report our experience on software porting, performance and energy efficiency and evaluate the potential for use of such technologies in the context of distributed computing systems such as the Worldwide LHC Computing Grid (WLCG).
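    Performance-per-watt, the metric named here, is simply delivered throughput divided by power draw. A toy comparison (all figures below are placeholders, not measurements from the paper):

    ```python
    # Performance-per-watt, the evaluation metric named in the abstract.
    # The throughput and power figures are placeholders, not measurements
    # from the paper.
    platforms = {"x86 server": (1000.0, 400.0),   # (events/s, watts) - hypothetical
                 "ARMv8 SoC":  (300.0, 60.0),
                 "Xeon Phi":   (1800.0, 300.0)}
    for name, (rate, watts) in platforms.items():
        print(f"{name:10s}: {rate / watts:6.2f} events/s per watt")
    ```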

  11. Large-Scale Industrial CCS Projects Selected for Continued Testing |

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Department of Energy Large-Scale Industrial CCS Projects Selected for Continued Testing June 10, 2010 - 1:00pm Washington, DC - Three Recovery Act funded projects have been selected by the U.S. Department of Energy (DOE) to continue testing large-scale carbon capture and storage (CCS) from industrial sources. The projects - located in Texas, Illinois, and Louisiana - were initially selected for funding in October 2009 as part of a $1.4

  12. DOE Completes Large-Scale Carbon Sequestration Project Awards | Department

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    of Energy DOE Completes Large-Scale Carbon Sequestration Project Awards November 17, 2008 - 4:58pm Regional Partner to Demonstrate Safe and Permanent Storage of 2 Million Tons of CO2 at Wyoming Site WASHINGTON, DC - Completing a series of awards through its Regional Carbon Sequestration Partnership Program, the U.S. Department of Energy (DOE) today awarded $66.9 million to the Big Sky Regional Carbon Sequestration Partnership for the

  13. Lessons from Large-Scale Renewable Energy Integration Studies: Preprint

    SciTech Connect (OSTI)

    Bird, L.; Milligan, M.

    2012-06-01

    In general, large-scale integration studies in Europe and the United States find that high penetrations of renewable generation are technically feasible with operational changes and increased access to transmission. This paper describes other key findings such as the need for fast markets, large balancing areas, system flexibility, and the use of advanced forecasting.

  14. Parallel computing works

    SciTech Connect (OSTI)

    Not Available

    1991-10-23

    An account of the Caltech Concurrent Computation Program (C³P), a five-year project that focused on answering the question: "Can parallel computers be used to do large-scale scientific computations?" As the title indicates, the question is answered in the affirmative, by implementing numerous scientific applications on real parallel computers and doing computations that produced new scientific results. In the process of doing so, C³P helped design and build several new computers, designed and implemented basic system software, developed algorithms for frequently used mathematical computations on massively parallel machines, devised performance models and measured the performance of many computers, and created a high performance computing facility based exclusively on parallel computers. While the initial focus of C³P was the hypercube architecture developed by C. Seitz, many of the methods developed and lessons learned have been applied successfully on other massively parallel architectures.

  15. Optimization of large-scale heterogeneous system-of-systems models.

    SciTech Connect (OSTI)

    Parekh, Ojas; Watson, Jean-Paul; Phillips, Cynthia Ann; Siirola, John; Swiler, Laura Painton; Hough, Patricia Diane; Lee, Herbert K. H.; Hart, William Eugene; Gray, Genetha Anne; Woodruff, David L.

    2012-01-01

    Decision makers increasingly rely on large-scale computational models to simulate and analyze complex man-made systems. For example, computational models of national infrastructures are being used to inform government policy, assess economic and national security risks, evaluate infrastructure interdependencies, and plan for the growth and evolution of infrastructure capabilities. A major challenge for decision makers is the analysis of national-scale models that are composed of interacting systems: effective integration of system models is difficult, there are many parameters to analyze in these systems, and fundamental modeling uncertainties complicate analysis. This project is developing optimization methods to effectively represent and analyze large-scale heterogeneous system of systems (HSoS) models, which have emerged as a promising approach for describing such complex man-made systems. These optimization methods enable decision makers to predict future system behavior, manage system risk, assess tradeoffs between system criteria, and identify critical modeling uncertainties.

  16. Large-scale seismic waveform quality metric calculation using Hadoop

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Magana-Zook, Steven; Gaylord, Jessie M.; Knapp, Douglas R.; Dodge, Douglas A.; Ruppert, Stanley D.

    2016-05-27

    In this work we investigated the suitability of Hadoop MapReduce and Apache Spark for large-scale computation of seismic waveform quality metrics by comparing their performance with that of a traditional distributed implementation. The Incorporated Research Institutions for Seismology (IRIS) Data Management Center (DMC) provided 43 terabytes of broadband waveform data, of which 5.1 TB were processed with the traditional architecture, and the full 43 TB were processed using MapReduce and Spark. Maximum performance of ~0.56 terabytes per hour was achieved using all 5 nodes of the traditional implementation. We noted that I/O dominated processing, and that I/O performance was deteriorating with the addition of the 5th node. Data collected from this experiment provided the baseline against which the Hadoop results were compared. Next, we processed the full 43 TB dataset using both MapReduce and Apache Spark on our 18-node Hadoop cluster. We conducted these experiments multiple times with various subsets of the data so that we could build models to predict performance as a function of dataset size. We found that both MapReduce and Spark significantly outperformed the traditional reference implementation. At a dataset size of 5.1 terabytes, both Spark and MapReduce were about 15 times faster than the reference implementation. Furthermore, our performance models predict that for a dataset of 350 terabytes, Spark running on a 100-node cluster would be about 265 times faster than the reference implementation. We do not expect that the reference implementation deployed on a 100-node cluster would perform significantly better than on the 5-node cluster because the I/O performance cannot be made to scale. Finally, we note that although Big Data technologies clearly provide a way to process seismic waveform datasets in a high-performance and scalable manner, the technology is still rapidly changing, requires a high degree of investment in personnel, and will
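    A Spark version of the map step described above can be sketched in a few lines of PySpark; the file paths and the metric function are hypothetical placeholders, not the authors' pipeline:

    ```python
    from pyspark import SparkContext

    # MapReduce-style computation of per-waveform quality metrics with Spark,
    # in the spirit of the study above. The file list and compute_metrics()
    # are hypothetical placeholders, not the authors' code.
    def compute_metrics(path):
        # A real pipeline would read the waveform here and compute
        # amplitude/gap/spike metrics; we return a dummy value instead.
        return (path, {"num_gaps": 0})

    sc = SparkContext(appName="WaveformQualityMetrics")
    paths = sc.parallelize(["wf/day001.msd", "wf/day002.msd"])  # placeholder paths
    metrics = paths.map(compute_metrics).collect()
    print(metrics)
    sc.stop()
    ```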

  17. Development of high performance scientific components for interoperability of computing packages

    SciTech Connect (OSTI)

    Gulabani, Teena Pratap

    2008-12-01

    Three major high performance quantum chemistry computational packages, NWChem, GAMESS, and MPQC, have been developed by different research efforts following different design patterns. The goal is to achieve interoperability among these packages by overcoming the challenges caused by the different communication patterns and software design of each of these packages. A chemistry algorithm is hard and time-consuming to develop; integration of large quantum chemistry packages will allow resource sharing and thus avoid reinvention of the wheel. Creating connections between these incompatible packages is the major motivation of the proposed work. This interoperability is achieved by bringing the benefits of Component Based Software Engineering through a plug-and-play component framework called the Common Component Architecture (CCA). In this thesis, I present a strategy and process used for interfacing two widely used and important computational chemistry methodologies: Quantum Mechanics and Molecular Mechanics. To show the feasibility of the proposed approach, the Tuning and Analysis Utility (TAU) has been coupled with the NWChem code and its CCA components. Results show that the overhead is negligible when compared to the ease and potential of organizing and coping with large-scale software applications.
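    The plug-and-play "port" idea behind component frameworks like CCA can be illustrated generically: one component provides an interface, another declares that it uses it, and the framework wires them together. A hedged Python sketch (class and method names are invented for illustration, not the real CCA interfaces):

    ```python
    from abc import ABC, abstractmethod

    # Generic sketch of the provides/uses "port" idea behind component
    # frameworks such as CCA: a QM package exposes an energy port, and an
    # MM driver consumes it. Names are illustrative, not real CCA APIs.
    class EnergyPort(ABC):
        @abstractmethod
        def energy(self, coords): ...

    class FakeQMComponent(EnergyPort):
        def energy(self, coords):
            return sum(x * x for x in coords)   # stand-in for a QM energy call

    class MMDriver:
        def __init__(self):
            self.qm = None
        def connect(self, port: EnergyPort):    # framework wires ports together
            self.qm = port
        def run(self, coords):
            return self.qm.energy(coords)

    driver = MMDriver()
    driver.connect(FakeQMComponent())
    print("QM region energy:", driver.run([0.1, 0.2, 0.3]))
    ```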

  18. Large-Scale Simulation of Brain Tissue: Blue Brain Project, EPFL | Argonne

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Leadership Computing Facility Digital reconstruction of pyramidal cells Digital reconstruction of pyramidal cells. Blue Brain Project, Ecole Polytechnique Federale de Lausanne Large-Scale Simulation of Brain Tissue: Blue Brain Project, EPFL PI Name: Fabien Delalondre PI Email: fabien.delalondre@epfl.ch Institution: Ecole Federale Polytechnique de Lausanne Allocation Program: ESP Year: 2015 Research Domain: Biological Sciences Tier 1 Science Project Science This ESP project will be used to

  19. Scientific Grand Challenges: Forefront Questions in Nuclear Science and the Role of High Performance Computing

    SciTech Connect (OSTI)

    Khaleel, Mohammad A.

    2009-10-01

    This report is an account of the deliberations and conclusions of the workshop on "Forefront Questions in Nuclear Science and the Role of High Performance Computing" held January 26-28, 2009, co-sponsored by the U.S. Department of Energy (DOE) Office of Nuclear Physics (ONP) and the DOE Office of Advanced Scientific Computing (ASCR). Representatives from the national and international nuclear physics communities, as well as from the high performance computing community, participated. The purpose of this workshop was to 1) identify forefront scientific challenges in nuclear physics and then determine which, if any, of these could be aided by high performance computing at the extreme scale; 2) establish how and why new high performance computing capabilities could address issues at the frontiers of nuclear science; 3) provide nuclear physicists the opportunity to influence the development of high performance computing; and 4) provide the nuclear physics community with plans for development of future high performance computing capability by DOE ASCR.

  20. Cloud Bursting with GlideinWMS: Means to satisfy ever increasing computing needs for Scientific Workflows

    SciTech Connect (OSTI)

    Mhashilkar, Parag; Tiradani, Anthony; Holzman, Burt; Larson, Krista; Sfiligoi, Igor; Rynge, Mats

    2014-01-01

    Scientific communities have been at the forefront of adopting new technologies and methodologies in computing. Scientific computing has influenced how science is done today, achieving breakthroughs that would have been impossible several decades ago. For the past decade, several such communities in the Open Science Grid (OSG) and the European Grid Infrastructure (EGI) have been using GlideinWMS to run complex application workflows and effectively share computational resources over the grid. GlideinWMS is a pilot-based workload management system (WMS) that creates, on demand, a dynamically sized overlay HTCondor batch system on grid resources. At present, the computational resources shared over the grid are just adequate to sustain the computing needs. We envision that the complexity of the science driven by 'Big Data' will further push the need for computational resources. To fulfill their increasing demands and/or to run specialized workflows, some of the big communities like CMS are investigating the use of cloud computing as Infrastructure-as-a-Service (IaaS), with GlideinWMS as a potential alternative to fill the void. Similarly, communities with no previous access to computing resources can use GlideinWMS to set up a batch system on the cloud infrastructure. To enable this, the architecture of GlideinWMS has been extended to support interfacing GlideinWMS with different scientific and commercial cloud providers like HLT, FutureGrid, FermiCloud, and Amazon EC2. In this paper, we describe a solution for cloud bursting with GlideinWMS. The paper describes the approach, architectural changes, and lessons learned while enabling support for cloud infrastructures in GlideinWMS.

  1. Collaborative Visualization for Large-Scale Accelerator Electromagnetic Modeling (Final Report)

    SciTech Connect (OSTI)

    William J. Schroeder

    2011-11-13

    This report contains the comprehensive summary of the work performed on the SBIR Phase II, Collaborative Visualization for Large-Scale Accelerator Electromagnetic Modeling, at Kitware Inc. in collaboration with the Stanford Linear Accelerator Center (SLAC). The goal of the work was to develop collaborative visualization tools for large-scale data as illustrated in the figure below. The solutions we proposed address the typical problems faced by geographically- and organizationally-separated research and engineering teams, who produce large data (either through simulation or experimental measurement) and wish to work together to analyze and understand their data. Because the data is large, we expect that it cannot be easily transported to each team member's work site, and that the visualization server must reside near the data. Further, we also expect that each work site has heterogeneous resources: some with large computing clients, tiled (or large) displays, and high bandwidth; other sites as simple as a team member on a laptop computer. Our solution is based on the open-source, widely used ParaView large-data visualization application. We extended this tool to support multiple collaborative clients who may locally visualize data, and then periodically rejoin and synchronize with the group to discuss their findings. Options for managing session control, adding annotation, and defining the visualization pipeline, among others, were incorporated. We also developed and deployed a Web visualization framework based on ParaView that enables the Web browser to act as a participating client in a collaborative session. The ParaView Web Visualization framework leverages various Web technologies including WebGL, JavaScript, Java, and Flash to enable interactive 3D visualization over the web using ParaView as the visualization server. We steered the development of this technology by teaming with the SLAC National Accelerator Laboratory. SLAC has a computationally-intensive problem

  2. GAIA: A WINDOW TO LARGE-SCALE MOTIONS

    SciTech Connect (OSTI)

    Nusser, Adi; Branchini, Enzo; Davis, Marc E-mail: branchin@fis.uniroma3.it

    2012-08-10

    Using redshifts as a proxy for galaxy distances, estimates of the two-dimensional (2D) transverse peculiar velocities of distant galaxies could be obtained from future measurements of proper motions. We provide the mathematical framework for analyzing 2D transverse motions and show that they offer several advantages over traditional probes of large-scale motions. They are completely independent of any intrinsic relations between galaxy properties; hence, they are essentially free of selection biases. They are free from homogeneous and inhomogeneous Malmquist biases that typically plague distance indicator catalogs. They provide additional information to traditional probes that yield line-of-sight peculiar velocities only. Further, because of their 2D nature, fundamental questions regarding vorticity of large-scale flows can be addressed. Gaia, for example, is expected to provide proper motions of at least bright galaxies with high central surface brightness, making proper motions a likely contender for traditional probes based on current and future distance indicator measurements.
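    The conversion underlying this proposal is the standard proper-motion-to-transverse-velocity relation, v_t [km/s] = 4.74 μ [arcsec/yr] d [pc]. A quick check with illustrative numbers (not values from the paper):

    ```python
    # Standard conversion from proper motion to 2D transverse velocity:
    # v_t [km/s] = 4.74 * mu [arcsec/yr] * d [pc]. Example values are
    # illustrative, not from the paper.
    def transverse_velocity_km_s(mu_arcsec_per_yr, distance_pc):
        return 4.74 * mu_arcsec_per_yr * distance_pc

    # A galaxy at 10 Mpc with a (tiny) proper motion of 10 micro-arcsec/yr:
    print(transverse_velocity_km_s(10e-6, 10e6), "km/s")   # -> 474 km/s
    ```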

  3. Electron drift in a large scale solid xenon

    SciTech Connect (OSTI)

    Yoo, J.; Jaskierny, W. F.

    2015-08-21

    A study of charge drift in a large-scale, optically transparent solid xenon is reported. A pulsed high power xenon light source is used to liberate electrons from a photocathode. The drift speeds of the electrons are measured using an 8.7 cm long electrode in both the liquid and solid phases of xenon. In the liquid phase (163 K), the drift speed is 0.193 ± 0.003 cm/μs, while the drift speed in the solid phase (157 K) is 0.397 ± 0.006 cm/μs at 900 V/cm over 8.0 cm of uniform electric field. Furthermore, the electron drift speed in solid xenon is demonstrated to be a factor of two faster than that in the liquid for a large-scale solid xenon volume.
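    A quick arithmetic check of the reported numbers (the 8.0 cm drift length and both speeds are taken from the abstract):

    ```python
    # Quick arithmetic check of the reported drift speeds and transit time.
    v_liquid = 0.193   # cm/us at 163 K
    v_solid = 0.397    # cm/us at 157 K
    drift_len = 8.0    # cm of uniform field at 900 V/cm

    print("speed ratio solid/liquid: %.2f" % (v_solid / v_liquid))   # ~2.06
    print("solid-phase transit time: %.1f us" % (drift_len / v_solid))
    ```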

  4. Electron drift in a large scale solid xenon

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Yoo, J.; Jaskierny, W. F.

    2015-08-21

    A study of charge drift in a large-scale, optically transparent solid xenon is reported. A pulsed high power xenon light source is used to liberate electrons from a photocathode. The drift speeds of the electrons are measured using an 8.7 cm long electrode in both the liquid and solid phases of xenon. In the liquid phase (163 K), the drift speed is 0.193 ± 0.003 cm/μs, while the drift speed in the solid phase (157 K) is 0.397 ± 0.006 cm/μs at 900 V/cm over 8.0 cm of uniform electric field. Furthermore, the electron drift speed in solid xenon is demonstrated to be a factor of two faster than that in the liquid for a large-scale solid xenon volume.

  5. LARGE-SCALE MOTIONS IN THE PERSEUS GALAXY CLUSTER

    SciTech Connect (OSTI)

    Simionescu, A.; Werner, N.; Urban, O.; Allen, S. W.; Fabian, A. C.; Sanders, J. S.; Mantz, A.; Nulsen, P. E. J.; Takei, Y.

    2012-10-01

    By combining large-scale mosaics of ROSAT PSPC, XMM-Newton, and Suzaku X-ray observations, we present evidence for large-scale motions in the intracluster medium of the nearby, X-ray bright Perseus Cluster. These motions are suggested by several alternating and interleaved X-ray bright, low-temperature, low-entropy arcs located along the east-west axis, at radii ranging from ~10 kpc to over a Mpc. Thermodynamic features qualitatively similar to these have previously been observed in the centers of cool-core clusters, and were successfully modeled as a consequence of the gas sloshing/swirling motions induced by minor mergers. Our observations indicate that such sloshing/swirling can extend out to larger radii than previously thought, on scales approaching the virial radius.

  6. The workshop on iterative methods for large scale nonlinear problems

    SciTech Connect (OSTI)

    Walker, H.F.; Pernice, M.

    1995-12-01

    The aim of the workshop was to bring together researchers working on large scale applications with numerical specialists of various kinds. Applications that were addressed included reactive flows (combustion and other chemically reacting flows, tokamak modeling), porous media flows, cardiac modeling, chemical vapor deposition, image restoration, macromolecular modeling, and population dynamics. Numerical areas included Newton iterative (truncated Newton) methods, Krylov subspace methods, domain decomposition and other preconditioning methods, large scale optimization and optimal control, and parallel implementations and software. This report offers a brief summary of workshop activities and information about the participants. Interested readers are encouraged to look into an online proceedings available at http://www.usi.utah.edu/logan.proceedings. There, the material offered here is augmented with hypertext abstracts that include links to locations such as speakers' home pages, PostScript copies of talks and papers, cross-references to related talks, and other information about topics addressed at the workshop.
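    Newton iterative (truncated Newton) methods combined with Krylov linear solvers, one of the workshop's numerical areas, are available off the shelf today, for example as SciPy's newton_krylov. A small sketch on an invented nonlinear reaction-diffusion test system (not a problem from the workshop):

    ```python
    import numpy as np
    from scipy.optimize import newton_krylov

    # Newton-Krylov solve of a small nonlinear system: an outer (truncated)
    # Newton iteration with an inner Krylov linear solver. The 1D
    # reaction-diffusion test problem below is illustrative only.
    def residual(u):
        r = np.zeros_like(u)
        r[1:-1] = u[2:] - 2 * u[1:-1] + u[:-2] - 0.1 * np.exp(u[1:-1])
        r[0], r[-1] = u[0], u[-1]        # zero Dirichlet boundaries
        return r

    u0 = np.zeros(50)
    sol = newton_krylov(residual, u0, f_tol=1e-10)
    print("max residual:", np.abs(residual(sol)).max())
    ```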

  7. Robust, Multifunctional Joint for Large Scale Power Production Stacks -

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Energy Innovation Portal Robust, Multifunctional Joint for Large Scale Power Production Stacks Lawrence Berkeley National Laboratory Contact LBL About This Technology [Diagram of Berkeley Lab's multifunctional joint] Technology Marketing Summary: Berkeley Lab scientists have developed a multifunctional joint for metal supported, tubular SOFCs that divides various joint functions so that materials and methods optimizing each function can be chosen

  8. Relic vector field and CMB large scale anomalies

    SciTech Connect (OSTI)

    Chen, Xingang; Wang, Yi E-mail: yw366@cam.ac.uk

    2014-10-01

    We study the most general effects of relic vector fields on the inflationary background and density perturbations. Such effects are observable if the number of inflationary e-folds is close to the minimum requirement to solve the horizon problem. We show that this can potentially explain two CMB large-scale anomalies: the quadrupole-octopole alignment and the quadrupole power suppression. We discuss its effect on the parity anomaly. We also provide an analytical template for more detailed data comparison.

  9. The Phoenix series large scale LNG pool fire experiments.

    SciTech Connect (OSTI)

    Simpson, Richard B.; Jensen, Richard Pearson; Demosthenous, Byron; Luketa, Anay Josephine; Ricks, Allen Joseph; Hightower, Marion Michael; Blanchat, Thomas K.; Helmick, Paul H.; Tieszen, Sheldon Robert; Deola, Regina Anne; Mercier, Jeffrey Alan; Suo-Anttila, Jill Marie; Miller, Timothy J.

    2010-12-01

    The increasing demand for natural gas could increase the number and frequency of Liquefied Natural Gas (LNG) tanker deliveries to ports across the United States. Because of the increasing number of shipments and the number of possible new facilities, concerns about the safety of the public and property from accidental, and even more importantly intentional, spills have increased. While improvements have been made over the past decade in assessing hazards from LNG spills, the existing experimental data is much smaller in size and scale than many postulated large accidental and intentional spills. Since the physics and hazards of a fire change with fire size, there are concerns about the adequacy of current hazard prediction techniques for large LNG spills and fires. To address these concerns, Congress funded the Department of Energy (DOE) in 2008 to conduct a series of laboratory and large-scale LNG pool fire experiments at Sandia National Laboratories (Sandia) in Albuquerque, New Mexico. This report presents the test data and results of both sets of fire experiments. A series of five reduced-scale (gas burner) tests (yielding 27 sets of data) were conducted in 2007 and 2008 at Sandia's Thermal Test Complex (TTC) to assess flame height to fire diameter ratios as a function of nondimensional heat release rates for extrapolation to large-scale LNG fires. The large-scale LNG pool fire experiments were conducted in a 120 m diameter pond specially designed and constructed in Sandia's Area III large-scale test complex. Two fire tests of LNG spills of 21 and 81 m in diameter were conducted in 2009 to improve the understanding of flame height, smoke production, and burn rate, and therefore the physics and hazards of large LNG spills and fires.
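    For orientation on flame height to diameter ratios: a widely used textbook correlation (Heskestad's, shown here purely for illustration; it is not the fit derived from the Phoenix tests, and the heat release value below is a placeholder) is L = 0.235 Q^(2/5) - 1.02 D, with Q in kW and L, D in meters:

    ```python
    # Flame height from the widely used Heskestad correlation,
    # L = 0.235 * Q^(2/5) - 1.02 * D (Q in kW; L, D in m). A standard
    # textbook correlation for illustration, not the Phoenix-test fit.
    def heskestad_flame_height_m(q_kw, diameter_m):
        return 0.235 * q_kw ** 0.4 - 1.02 * diameter_m

    # Illustrative (placeholder) heat release for a very large pool fire:
    print("L = %.0f m" % heskestad_flame_height_m(q_kw=3.5e7, diameter_m=81.0))
    # -> roughly 160 m, i.e. a flame height about twice the pool diameter
    ```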

  10. Geospatial Optimization of Siting Large-Scale Solar Projects

    SciTech Connect (OSTI)

    Macknick, J.; Quinby, T.; Caulfield, E.; Gerritsen, M.; Diffendorfer, J.; Haines, S.

    2014-03-01

    Recent policy and economic conditions have encouraged a renewed interest in developing large-scale solar projects in the U.S. Southwest. However, siting large-scale solar projects is complex. In addition to the quality of the solar resource, solar developers must take into consideration many environmental, social, and economic factors when evaluating a potential site. This report describes a proof-of-concept, Web-based Geographical Information Systems (GIS) tool that evaluates multiple user-defined criteria in an optimization algorithm to inform discussions and decisions regarding the locations of utility-scale solar projects. Existing siting recommendations for large-scale solar projects from governmental and non-governmental organizations are not consistent with each other, are often not transparent in methods, and do not take into consideration the differing priorities of stakeholders. The siting assistance GIS tool we have developed improves upon the existing siting guidelines by being user-driven, transparent, interactive, capable of incorporating multiple criteria, and flexible. This work provides the foundation for a dynamic siting assistance tool that can greatly facilitate siting decisions among multiple stakeholders.
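    The optimization core of such a siting tool can be as simple as a weighted multi-criteria suitability score over co-registered raster layers. A hedged NumPy sketch (the criteria layers and weights are hypothetical, not the tool's actual inputs):

    ```python
    import numpy as np

    # Weighted multi-criteria suitability scoring, the core idea behind the
    # siting tool described above. Criteria rasters and weights are hypothetical.
    rng = np.random.default_rng(0)
    solar = rng.random((4, 4))          # normalized solar resource (0-1)
    slope_penalty = rng.random((4, 4))  # normalized slope (0 = flat, 1 = steep)
    grid_dist = rng.random((4, 4))      # normalized distance to transmission

    weights = {"solar": 0.5, "slope": 0.3, "grid": 0.2}   # user-defined weights
    score = (weights["solar"] * solar
             - weights["slope"] * slope_penalty
             - weights["grid"] * grid_dist)

    best = np.unravel_index(np.argmax(score), score.shape)
    print("best candidate cell:", best, "score %.3f" % score[best])
    ```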

  11. National Energy Research Scientific Computing Center | U.S. DOE Office of

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Science (SC) National Labs, Profiles, and Contacts » National Energy Research Scientific Computing Center (NERSC) Small Business Innovation Research (SBIR) and Small Business Technology Transfer (STTR) SBIR/STTR Home About Funding Opportunity Announcements (FOAs) Applicant and Awardee Resources Quick Links DOE SBIR Online Learning Center External link DOE Phase 0 Small Business Assistance External link Protecting your Trade Secrets, Commercial, and Financial Information Preparing and

  12. ADVANCED SCIENTIFIC COMPUTING ADVISORY COMMITTEE April 4, 2016 | U.S. DOE

    Office of Science (SC) Website

    Office of Science (SC) 16 Advanced Scientific Computing Advisory Committee (ASCAC) ASCAC Home Meetings September 2016 April 2016 December 2015 July 2015 March 2015 November 2014 March 2014 November 2013 March 2013 October 2012 August 2012 March 2012 November 2011 August 2011 March 2011 November 2010 August 2010 March 2010 November 2009 August 2009 March 2009 October 2008 August 2008 February 2008 November 2007 August 2007 February 2007 November 2006 August 2006 March 2006 April 2004 March

  13. DOE Advanced Scientific Computing Advisory Subcommittee (ASCAC) Report: Top Ten Exascale Research Challenges

    SciTech Connect (OSTI)

    Lucas, Robert; Ang, James; Bergman, Keren; Borkar, Shekhar; Carlson, William; Carrington, Laura; Chiu, George; Colwell, Robert; Dally, William; Dongarra, Jack; Geist, Al; Haring, Rud; Hittinger, Jeffrey; Hoisie, Adolfy; Klein, Dean Micron; Kogge, Peter; Lethin, Richard; Sarkar, Vivek; Schreiber, Robert; Shalf, John; Sterling, Thomas; Stevens, Rick; Bashor, Jon; Brightwell, Ron; Coteus, Paul; Debenedictus, Erik; Hiller, Jon; Kim, K. H.; Langston, Harper; Murphy, Richard Micron; Webster, Clayton; Wild, Stefan; Grider, Gary; Ross, Rob; Leyffer, Sven; Laros III, James

    2014-02-10

    Exascale computing systems are essential for the scientific fields that will transform the 21st century global economy, including energy, biotechnology, nanotechnology, and materials science. Progress in these fields is predicated on the ability to perform advanced scientific and engineering simulations, and analyze the deluge of data. On July 29, 2013, ASCAC was charged by Patricia Dehmer, the Acting Director of the Office of Science, to assemble a subcommittee to provide advice on exascale computing. This subcommittee was directed to return a list of no more than ten technical approaches (hardware and software) that will enable the development of a system that achieves the Department's goals for exascale computing. Numerous reports over the past few years have documented the technical challenges and the non-viability of simply scaling existing computer designs to reach exascale. The technical challenges revolve around energy consumption, memory performance, resilience, extreme concurrency, and big data. Drawing from these reports and more recent experience, this ASCAC subcommittee has identified the top ten computing technology advancements that are critical to making a capable, economically viable, exascale system.

  14. Scientific Grand Challenges: Crosscutting Technologies for Computing at the Exascale - February 2-4, 2010, Washington, D.C.

    SciTech Connect (OSTI)

    Khaleel, Mohammad A.

    2011-02-06

    The goal of the "Scientific Grand Challenges - Crosscutting Technologies for Computing at the Exascale" workshop in February 2010, jointly sponsored by the U.S. Department of Energy’s Office of Advanced Scientific Computing Research and the National Nuclear Security Administration, was to identify the elements of a research and development agenda that will address these challenges and create a comprehensive exascale computing environment. This exascale computing environment will enable the science applications identified in the eight previously held Scientific Grand Challenges Workshop Series.

  15. A PRACTICAL ONTOLOGY FOR THE LARGE-SCALE MODELING OF SCHOLARLY ARTIFACTS AND THEIR USAGE

    SciTech Connect (OSTI)

    RODRIGUEZ, MARKO A.; BOLLEN, JOHAN; VAN DE SOMPEL, HERBERT

    2007-01-30

    The large-scale analysis of scholarly artifact usage is constrained primarily by current practices in usage data archiving, privacy issues concerned with the dissemination of usage data, and the lack of a practical ontology for modeling the usage domain. As a remedy to the third constraint, this article presents a scholarly ontology that was engineered to represent those classes for which large-scale bibliographic and usage data exists, supports usage research, and whose instantiation is scalable to the order of 50 million articles along with their associated artifacts (e.g. authors and journals) and an accompanying 1 billion usage events. The real-world instantiation of the presented abstract ontology is a semantic network model of the scholarly community which lends the scholarly process to statistical analysis and computational support. The authors present the ontology, discuss its instantiation, and provide some example inference rules for calculating various scholarly artifact metrics.

  16. Reducing Data Center Loads for a Large-scale, Low Energy Office...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Data Center Loads for a Large-scale, Low-energy Office Building: NREL's Research Support ... National Renewable Energy Laboratory Reducing Data Center Loads for a Large-Scale, ...

  17. HyLights -- Tools to Prepare the Large-Scale European Demonstration...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    HYLIGHTS - TOOLS TO PREPARE THE LARGE-SCALE EUROPEAN DEMONSTRATION PROJECTS ON HYDROGEN ... Assist the European Commission and European industry to plan the large-scale demonstration ...

  18. Final Report: Large-Scale Optimization for Bayesian Inference in Complex Systems

    SciTech Connect (OSTI)

    Ghattas, Omar

    2013-10-15

    The SAGUARO (Scalable Algorithms for Groundwater Uncertainty Analysis and Robust Optimization) Project focuses on the development of scalable numerical algorithms for large-scale Bayesian inversion in complex systems that capitalize on advances in large-scale simulation-based optimization and inversion methods. Our research is directed in three complementary areas: efficient approximations of the Hessian operator, reductions in complexity of forward simulations via stochastic spectral approximations and model reduction, and employing large-scale optimization concepts to accelerate sampling. Our efforts are integrated in the context of a challenging testbed problem that considers subsurface reacting flow and transport. The MIT component of the SAGUARO Project addresses the intractability of conventional sampling methods for large-scale statistical inverse problems by devising reduced-order models that are faithful to the full-order model over a wide range of parameter values; sampling then employs the reduced model rather than the full model, resulting in very large computational savings. Results indicate little effect on the computed posterior distribution. On the other hand, in the Texas-Georgia Tech component of the project, we retain the full-order model, but exploit inverse problem structure (adjoint-based gradients and partial Hessian information of the parameter-to-observation map) to implicitly extract lower dimensional information on the posterior distribution; this greatly speeds up sampling methods, so that fewer sampling points are needed. We can think of these two approaches as "reduce then sample" and "sample then reduce." In fact, these two approaches are complementary, and can be used in conjunction with each other. Moreover, they both exploit deterministic inverse problem structure, in the form of adjoint-based gradient and Hessian information of the underlying parameter-to-observation map, to achieve their speedups.
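    A minimal illustration of the "reduce then sample" idea: run a random-walk Metropolis sampler against a cheap surrogate forward model in place of the expensive full simulation. Everything below (the surrogate, data value, and noise level) is an invented placeholder, not the SAGUARO setup:

    ```python
    import numpy as np

    # "Reduce then sample" sketch: random-walk Metropolis run against a cheap
    # surrogate forward model instead of the expensive full model. All values
    # are illustrative placeholders.
    rng = np.random.default_rng(1)
    data, sigma = 1.3, 0.1

    def surrogate_forward(m):           # stand-in for a reduced-order model
        return m + 0.1 * m ** 2

    def log_post(m):                    # Gaussian likelihood, flat prior
        return -0.5 * ((surrogate_forward(m) - data) / sigma) ** 2

    m, samples = 0.0, []
    for _ in range(5000):
        prop = m + 0.2 * rng.standard_normal()
        if np.log(rng.random()) < log_post(prop) - log_post(m):
            m = prop                    # accept the proposed move
        samples.append(m)

    print("posterior mean: %.3f" % np.mean(samples[1000:]))
    ```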

  19. PROPERTIES IMPORTANT TO MIXING FOR WTP LARGE SCALE INTEGRATED TESTING

    SciTech Connect (OSTI)

    Koopman, D.; Martino, C.; Poirier, M.

    2012-04-26

    Large Scale Integrated Testing (LSIT) is being planned by Bechtel National, Inc. to address uncertainties in the full scale mixing performance of the Hanford Waste Treatment and Immobilization Plant (WTP). Testing will use simulated waste rather than actual Hanford waste. Therefore, the use of suitable simulants is critical to achieving the goals of the test program. External review boards have raised questions regarding the overall representativeness of simulants used in previous mixing tests. Accordingly, WTP requested the Savannah River National Laboratory (SRNL) to assist with development of simulants for use in LSIT. Among the first tasks assigned to SRNL was to develop a list of waste properties that matter to pulse-jet mixer (PJM) mixing of WTP tanks. This report satisfies Commitment 5.2.3.1 of the Department of Energy Implementation Plan for Defense Nuclear Facilities Safety Board Recommendation 2010-2: physical properties important to mixing and scaling. In support of waste simulant development, the following two objectives are the focus of this report: (1) Assess physical and chemical properties important to the testing and development of mixing scaling relationships; (2) Identify the governing properties and associated ranges for LSIT to achieve the Newtonian and non-Newtonian test objectives. This includes the properties to support testing of sampling and heel management systems. The test objectives for LSIT relate to transfer and pump out of solid particles, prototypic integrated operations, sparger operation, PJM controllability, vessel level/density measurement accuracy, sampling, heel management, PJM restart, design and safety margin, Computational Fluid Dynamics (CFD) Verification and Validation (V and V) and comparison, performance testing and scaling, and high temperature operation. The slurry properties that are most important to Performance Testing and Scaling depend on the test objective and rheological classification of the slurry (i.e., Newtonian or non-Newtonian).

  20. Large-Scale Algal Cultivation, Harvesting and Downstream Processing Workshop

    Office of Energy Efficiency and Renewable Energy (EERE)

    ATP3 (Algae Testbed Public-Private Partnership) is hosting the Large-Scale Algal Cultivation, Harvesting and Downstream Processing Workshop on November 2–6, 2015, at the Arizona Center for Algae Technology and Innovation in Mesa, Arizona. Topics will include practical applications of growing and managing microalgal cultures at production scale (such as methods for handling cultures, screening strains for desirable characteristics, identifying and mitigating contaminants, scaling up cultures for outdoor growth, harvesting and processing technologies, and the analysis of lipids, proteins, and carbohydrates). Related training will include hands-on laboratory and field opportunities.

  1. Large scale obscuration and related climate effects open literature bibliography

    SciTech Connect (OSTI)

    Russell, N.A.; Geitgey, J.; Behl, Y.K.; Zak, B.D.

    1994-05-01

    Large scale obscuration and related climate effects of nuclear detonations first became a matter of concern in connection with the so-called "Nuclear Winter Controversy" in the early 1980s. Since then, the world has changed. Nevertheless, concern remains about the atmospheric effects of nuclear detonations, but the source of concern has shifted. Now it focuses less on global, and more on regional effects and their resulting impacts on the performance of electro-optical and other defense-related systems. This bibliography reflects the modified interest.

  2. XVis: Visualization for the Extreme-Scale Scientific-Computation Ecosystem: Year-end report FY15 Q4.

    SciTech Connect (OSTI)

    Moreland, Kenneth D.; Sewell, Christopher; Childs, Hank; Ma, Kwan-Liu; Geveci, Berk; Meredith, Jeremy

    2015-12-01

    The XVis project brings together the key elements of research to enable scientific discovery at extreme scale. Scientific computing will no longer be purely about how fast computations can be performed. Energy constraints, processor changes, and I/O limitations necessitate significant changes in both the software applications used in scientific computation and the ways in which scientists use them. Components for modeling, simulation, analysis, and visualization must work together in a computational ecosystem, rather than working independently as they have in the past. This project provides the necessary research and infrastructure for scientific discovery in this new computational ecosystem by addressing four interlocking challenges: emerging processor technology, in situ integration, usability, and proxy analysis.

  3. A method of orbital analysis for large-scale first-principles simulations

    SciTech Connect (OSTI)

    Ohwaki, Tsukuru; Otani, Minoru; Ozaki, Taisuke

    2014-06-28

    An efficient method of calculating the natural bond orbitals (NBOs) based on a truncation of the entire density matrix of a whole system is presented for large-scale density functional theory calculations. The method recovers an orbital picture for O(N) electronic structure methods which directly evaluate the density matrix without using Kohn-Sham orbitals, thus enabling quantitative analysis of chemical reactions in large-scale systems in the language of localized Lewis-type chemical bonds. With the density matrix calculated by either an exact diagonalization or O(N) method, the computational cost is O(1) for the calculation of NBOs associated with a local region where a chemical reaction takes place. As an illustration of the method, we demonstrate how an electronic structure in a local region of interest can be analyzed by NBOs in a large-scale first-principles molecular dynamics simulation for a liquid electrolyte bulk model (propylene carbonate + LiBF4).

  4. Large-scale anisotropy in stably stratified rotating flows

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Marino, R.; Mininni, P. D.; Rosenberg, D. L.; Pouquet, A.

    2014-08-28

    We present results from direct numerical simulations of the Boussinesq equations in the presence of rotation and/or stratification, both in the vertical direction. The runs are forced isotropically and randomly at small scales and have spatial resolutions of up to $1024^3$ grid points and Reynolds numbers of $\approx 1000$. We first show that solutions with negative energy flux and inverse cascades develop in rotating turbulence, whether or not stratification is present. However, the purely stratified case is characterized instead by an early-time, highly anisotropic transfer to large scales with almost zero net isotropic energy flux. This is consistent with previous studies that observed the development of vertically sheared horizontal winds, although only at substantially later times. However, and unlike previous works, when sufficient scale separation is allowed between the forcing scale and the domain size, the total energy displays a perpendicular (horizontal) spectrum with power law behavior compatible with $\sim k_\perp^{-5/3}$, including in the absence of rotation. In this latter purely stratified case, such a spectrum is the result of a direct cascade of the energy contained in the large-scale horizontal wind, as is evidenced by a strong positive flux of energy in the parallel direction at all scales including the largest resolved scales.

  5. Detecting differential protein expression in large-scale population proteomics

    SciTech Connect (OSTI)

    Ryu, Soyoung; Qian, Weijun; Camp, David G.; Smith, Richard D.; Tompkins, Ronald G.; Davis, Ronald W.; Xiao, Wenzhong

    2014-06-17

    Mass spectrometry-based high-throughput quantitative proteomics shows great potential in clinical biomarker studies, identifying and quantifying thousands of proteins in biological samples. However, methods are needed to appropriately handle challenges unique to mass spectrometry data in order to detect as many biomarker proteins as possible. One issue is that different mass spectrometry experiments generate quite different total numbers of quantified peptides, which can result in more missing peptide abundances in an experiment with a smaller total number of quantified peptides. Another issue is that the quantification of peptides is sometimes absent, especially for less abundant peptides, and such missing values carry information about the peptide abundance. Here, we propose a Significance Analysis for Large-scale Proteomics Studies (SALPS) that handles missing peptide intensity values caused by the two mechanisms mentioned above. Our model has a robust performance in both simulated data and proteomics data from a large clinical study. Because varying patients' sample qualities and deviating instrument performances are not avoidable for clinical studies performed over the course of several years, we believe that our approach will be useful to analyze large-scale clinical proteomics data.
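
    As background for why an absent quantification is itself informative, the sketch below evaluates a textbook left-censored Gaussian log-likelihood, in which each missing peptide contributes the probability of its intensity falling below a detection limit. This is a generic illustration under assumed values, not the SALPS model.

    ```python
    import numpy as np
    from scipy.stats import norm

    # Left-censored Gaussian log-likelihood: observed log-intensities enter
    # through the density, while each missing peptide contributes the
    # probability mass below an assumed detection limit.  All numbers here
    # are invented for illustration.
    def censored_loglik(mu, sigma, observed, n_missing, detection_limit):
        ll = norm.logpdf(observed, loc=mu, scale=sigma).sum()
        ll += n_missing * norm.logcdf(detection_limit, loc=mu, scale=sigma)
        return ll

    obs = np.array([5.2, 5.8, 6.1, 5.5])   # peptides actually quantified
    print(censored_loglik(mu=5.5, sigma=0.5, observed=obs,
                          n_missing=3, detection_limit=4.0))
    ```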

  6. The effective field theory of cosmological large scale structures

    SciTech Connect (OSTI)

    Carrasco, John Joseph M.; Hertzberg, Mark P.; Senatore, Leonardo

    2012-09-20

    Large scale structure surveys will likely become the next leading cosmological probe. In our universe, matter perturbations are large on short distances and small at long scales, i.e. strongly coupled in the UV and weakly coupled in the IR. To make precise analytical predictions on large scales, we develop an effective field theory formulated in terms of an IR effective fluid characterized by several parameters, such as speed of sound and viscosity. These parameters, determined by the UV physics described by the Boltzmann equation, are measured from N-body simulations. We find that the speed of sound of the effective fluid is $c_s^2 \approx 10^{-6} c^2$ and that the viscosity contributions are of the same order. The fluid describes all the relevant physics at long scales $k$ and permits a manifestly convergent perturbative expansion in the size of the matter perturbations $\delta(k)$ for all the observables. As an example, we calculate the correction to the power spectrum at order $\delta(k)^4$. As a result, the predictions of the effective field theory are found to be in much better agreement with observation than standard cosmological perturbation theory, already reaching percent precision at this order up to a relatively short scale $k \simeq 0.24\,h\,\mathrm{Mpc}^{-1}$.
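
    For orientation, the effective-fluid description referred to above takes schematically the following form; conventions differ between papers, and $c_s$ and $c_{\mathrm{vis}}$ stand for the sound-speed and viscosity parameters measured from N-body simulations:

    ```latex
    % Schematic continuity and Euler equations with an effective stress
    % tensor \tau^{ij} closing the long-wavelength dynamics:
    \begin{align}
      \dot{\delta} + \frac{1}{a}\,\partial_i\big[(1+\delta)\,v^i\big] &= 0, \\
      \dot{v}^i + H v^i + \frac{1}{a}\,v^j \partial_j v^i
        + \frac{1}{a}\,\partial^i \phi &= -\frac{1}{a\rho}\,\partial_j \tau^{ij}, \\
      \tau^{ij} &\simeq \rho \Big[ c_s^2\, \delta\, \delta^{ij}
        - \frac{c_{\mathrm{vis}}^2}{H}\, \delta^{ij}\, \partial_k v^k \Big] + \dots
    \end{align}
    ```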

  7. Fortran Transformational Tools in Support of Scientific Application Development for Petascale Computer Architectures

    SciTech Connect (OSTI)

    Sottille, Matthew

    2013-09-12

    This document is the final report for a multi-year effort building infrastructure to support tool development for Fortran programs. We also investigated static analysis and code transformation methods relevant to scientific programmers who are writing Fortran programs for petascale-class high performance computing systems. This report details our accomplishments, technical approaches, and provides information on where the research results and code may be obtained from an open source software repository. The report for the first year of the project that was performed at the University of Oregon prior to the PI moving to Galois, Inc. is included as an appendix.

  8. Presentation on the Large-Scale Renewable Energy Guide | Department of Energy

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Presentation covering the Large-Scale RE Guide: Developing Renewable Energy Projects Larger than 10 MWs at Federal Facilities, presented by Brad Gustafson at the FUPWG Spring meeting held on May 22, 2013, in San Francisco, California.

  9. Networks of silicon nanowires: A large-scale atomistic electronic structure analysis

    SciTech Connect (OSTI)

    Keleş, Ümit; Bulutay, Ceyhun; Liedke, Bartosz; Heinig, Karl-Heinz

    2013-11-11

    Networks of silicon nanowires possess intriguing electronic properties surpassing the predictions based on quantum confinement of individual nanowires. Employing large-scale atomistic pseudopotential computations, as yet unexplored branched nanostructures are investigated in the subsystem level as well as in full assembly. The end product is a simple but versatile expression for the bandgap and band edge alignments of multiply-crossing Si nanowires for various diameters, number of crossings, and wire orientations. Further progress along this line can potentially topple the bottom-up approach for Si nanowire networks to a top-down design by starting with functionality and leading to an enabling structure.

  10. Nuclear-pumped lasers for large-scale applications

    SciTech Connect (OSTI)

    Anderson, R.E.; Leonard, E.M.; Shea, R.F.; Berggren, R.R.

    1989-05-01

    Efficient initiation of large-volume chemical lasers may be achieved by neutron induced reactions which produce charged particles in the final state. When a burst mode nuclear reactor is used as the neutron source, both a sufficiently intense neutron flux and a sufficiently short initiation pulse may be possible. Proof-of-principle experiments are planned to demonstrate lasing in a direct nuclear-pumped large-volume system; to study the effects of various neutron absorbing materials on laser performance; to study the effects of long initiation pulse lengths; to demonstrate the performance of large-scale optics and the beam quality that may be obtained; and to assess the performance of alternative designs of burst systems that increase the neutron output and burst repetition rate. 21 refs., 8 figs., 5 tabs.

  11. Nuclear-pumped lasers for large-scale applications

    SciTech Connect (OSTI)

    Anderson, R.E.; Leonard, E.M.; Shea, R.E.; Berggren, R.R.

    1988-01-01

    Efficient initiation of large-volume chemical lasers may be achieved by neutron induced reactions which produce charged particles in the final state. When a burst mode nuclear reactor is used as the neutron source, both a sufficiently intense neutron flux and a sufficiently short initiation pulse may be possible. Proof-of-principle experiments are planned to demonstrate lasing in a direct nuclear-pumped large-volume system; to study the effects of various neutron absorbing materials on laser performance; to study the effects of long initiation pulse lengths; to determine the performance of large-scale optics and the beam quality that may be obtained; and to assess the performance of alternative designs of burst systems that increase the neutron output and burst repetition rate. 21 refs., 7 figs., 5 tabs.

  12. Large-Scale All-Dielectric Metamaterial Perfect Reflectors

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Moitra, Parikshit; Slovick, Brian A.; Li, Wei; Kravchenko, Ivan I.; Briggs, Dayrl P.; Krishnamurthy, S.; Valentine, Jason

    2015-05-08

    All-dielectric metamaterials offer a potential low-loss alternative to plasmonic metamaterials at optical frequencies. In this paper, we take advantage of the low absorption loss as well as the simple unit cell geometry to demonstrate large-scale (centimeter-sized) all-dielectric metamaterial perfect reflectors made from silicon cylinder resonators. These perfect reflectors, operating in the telecommunications band, were fabricated using self-assembly based nanosphere lithography. In spite of the disorder originating from the self-assembly process, the average reflectance of the metamaterial perfect reflectors is 99.7% at 1530 nm, surpassing the reflectance of metallic mirrors. Moreover, the spectral separation of the electric and magnetic resonances can be chosen to achieve the required reflection bandwidth while maintaining a high tolerance to disorder. Finally, the scalability of this design could lead to new avenues of manipulating light for low-loss and large-area photonic applications.

  13. Performance Health Monitoring of Large-Scale Systems

    SciTech Connect (OSTI)

    Rajamony, Ram

    2014-11-20

    This report details the progress made on the ASCR-funded project Performance Health Monitoring for Large Scale Systems. A large-scale application may not achieve its full performance potential due to degraded performance of even a single subsystem. Detecting performance faults, isolating them, and taking remedial action is critical for the scale of systems on the horizon. PHM aims to develop techniques and tools that can be used to identify and mitigate such performance problems. We accomplish this through two main aspects. The PHM framework encompasses diagnostics, system monitoring, fault isolation, and performance evaluation capabilities that indicate when a performance fault has been detected, either due to an anomaly present in the system itself or due to contention for shared resources between concurrently executing jobs. Software components called the PHM Control system then build upon the capabilities provided by the PHM framework to mitigate degradation caused by performance problems.
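
    As a toy illustration of the detection capability described above (not code from the PHM project), the sketch below flags a performance fault when a rank's per-iteration time drifts far outside its recent rolling statistics; the window size and threshold are arbitrary assumed choices.

    ```python
    import numpy as np

    def detect_perf_fault(samples, window=50, n_sigma=4.0):
        """Indices where a timing sample deviates more than n_sigma from
        the rolling mean of the preceding window of samples."""
        samples = np.asarray(samples, dtype=float)
        faults = []
        for i in range(window, len(samples)):
            hist = samples[i - window:i]
            mu, sd = hist.mean(), hist.std() + 1e-12
            if abs(samples[i] - mu) > n_sigma * sd:
                faults.append(i)
        return faults

    # Steady ~1.0 s iterations with a contention spike injected at step 200.
    rng = np.random.default_rng(1)
    times = 1.0 + 0.01 * rng.standard_normal(300)
    times[200] = 1.8
    print(detect_perf_fault(times))   # -> [200]
    ```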

  14. Large scale, urban decontamination; developments, historical examples and lessons learned

    SciTech Connect (OSTI)

    Demmer, R.L.

    2007-07-01

    Recent terrorist threats and actions have led to a renewed interest in the technical field of large scale, urban environment decontamination. One of the driving forces for this interest is the prospect of the cleanup and removal of radioactive dispersal device (RDD or 'dirty bomb') residues. In response, the United States Government has spent many millions of dollars investigating RDD contamination and novel decontamination methodologies. The efficiency of RDD cleanup response will be improved with these new developments and a better understanding of the 'old reliable' methodologies. While an RDD is primarily an economic and psychological weapon, the need to clean up and return valuable or culturally significant resources to the public is nonetheless valid. Several private companies, universities, and National Laboratories are currently developing novel RDD cleanup technologies. Because of their longstanding association with radioactive facilities, the U.S. Department of Energy National Laboratories are at the forefront in developing and testing new RDD decontamination methods. However, such cleanup technologies are likely to be fairly task-specific, since many different contamination mechanisms, substrates, and environmental conditions will make actual application more complicated. Some major efforts have also been made to model potential contamination, to evaluate both old and new decontamination techniques, and to assess their readiness for use. There are a number of significant lessons that can be gained from a look at previous large scale cleanup projects. Too often we are quick to apply a costly 'package and dispose' method when sound technological cleaning approaches are available. Understanding historical perspectives, advanced planning, and constant technology improvement are essential to successful decontamination. (authors)

  15. Analysis of long-term flows resulting from large-scale sodium-water reactions in an LMFBR secondary system

    SciTech Connect (OSTI)

    Shin, Y.W.; Chung, H.; Choi, U.S.; Wiedermann, A.H.; Ockert, C.E.

    1984-07-01

    Leaks in LMFBR steam generators cannot entirely be prevented; thus the steam generators and the intermediate heat transport system (IHTS) of an LMFBR must be designed to withstand the effects of the leaks. A large-scale leak which might result from a sudden break of a steam generator tube, and the resulting sodium-water reaction (SWR) can generate large pressure pulses that propagate through the IHTS and exert large forces on the piping supports. This paper discusses computer programs for analyzing long-term flow and thermal effects in an LMFBR secondary system resulting from large-scale steam generator leaks, and the status of the development of the codes.

  16. Detecting and mitigating abnormal events in large scale networks: budget constrained placement on smart grids

    SciTech Connect (OSTI)

    Santhi, Nandakishore; Pan, Feng

    2010-10-19

    Several scenarios exist in the modern interconnected world which call for an efficient network interdiction algorithm. Applications are varied, including various monitoring and load shedding applications on large smart energy grids, computer network security, preventing the spread of Internet worms and malware, policing international smuggling networks, and controlling the spread of diseases. In this paper we consider some natural network optimization questions related to the budget constrained interdiction problem over general graphs, specifically focusing on the sensor/switch placement problem for large-scale energy grids. Many of these questions turn out to be computationally hard to tackle. We present a particular form of the interdiction question which is practically relevant and which we show to be computationally tractable. A polynomial-time algorithm is presented for solving this problem.
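
    The abstract does not reproduce the paper's polynomial-time algorithm, so as a flavor of budget-constrained placement the sketch below greedily buys the node that covers the most still-unmonitored edges until the budget is spent. The graph, costs, and coverage objective are illustrative assumptions, not the paper's formulation.

    ```python
    def greedy_placement(nodes, edges, cost, budget):
        """Greedy budget-constrained monitor placement by edge coverage."""
        incident = {n: set() for n in nodes}
        for e in edges:
            u, v = e
            incident[u].add(e)
            incident[v].add(e)
        covered, chosen, spent = set(), [], 0.0
        while True:
            best, gain = None, 0
            for n in nodes:
                if n in chosen or spent + cost[n] > budget:
                    continue
                g = len(incident[n] - covered)
                if g > gain:
                    best, gain = n, g
            if best is None:           # budget exhausted or no further gain
                return chosen, covered
            chosen.append(best)
            spent += cost[best]
            covered |= incident[best]

    nodes = ["a", "b", "c", "d"]
    edges = [("a", "b"), ("b", "c"), ("c", "d"), ("a", "c")]
    print(greedy_placement(nodes, edges, {n: 1.0 for n in nodes}, budget=2.0))
    ```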

  17. Load Balancing Scientific Applications

    SciTech Connect (OSTI)

    Pearce, Olga Tkachyshyn

    2014-12-01

    The largest supercomputers have millions of independent processors, and concurrency levels are rapidly increasing. For ideal efficiency, developers of the simulations that run on these machines must ensure that computational work is evenly balanced among processors. Assigning work evenly is challenging because many large modern parallel codes simulate behavior of physical systems that evolve over time, and their workloads change over time. Furthermore, the cost of imbalanced load increases with scale because most large-scale scientific simulations today use a Single Program Multiple Data (SPMD) parallel programming model, and an increasing number of processors will wait for the slowest one at the synchronization points. To address load imbalance, many large-scale parallel applications use dynamic load balance algorithms to redistribute work evenly. The research objective of this dissertation is to develop methods to decide when and how to load balance the application, and to balance it effectively and affordably. We measure and evaluate the computational load of the application, and develop strategies to decide when and how to correct the imbalance. Depending on the simulation, a fast, local load balance algorithm may be suitable, or a more sophisticated and expensive algorithm may be required. We developed a model for comparison of load balance algorithms for a specific state of the simulation that enables the selection of a balancing algorithm that will minimize overall runtime.
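
    A minimal sketch of the kind of cost model the dissertation describes, under the stated SPMD assumption that every rank waits for the slowest at each synchronization point: rebalance only when the projected waiting time exceeds the balancer's own cost. The numbers below are invented.

    ```python
    def imbalance_penalty(loads):
        """Per-step time the fast ranks spend waiting for the slowest one."""
        return max(loads) - sum(loads) / len(loads)

    def should_rebalance(loads, balancer_cost, steps_until_next_check):
        """Rebalance when projected waiting time exceeds the balancer cost."""
        return imbalance_penalty(loads) * steps_until_next_check > balancer_cost

    loads = [1.0, 1.1, 1.0, 1.6]        # per-rank seconds per timestep
    print(imbalance_penalty(loads))     # 0.425 s wasted every step
    print(should_rebalance(loads, balancer_cost=2.0, steps_until_next_check=10))
    ```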

  18. Ferroelectric opening switches for large-scale pulsed power drivers.

    SciTech Connect (OSTI)

    Brennecka, Geoffrey L.; Rudys, Joseph Matthew; Reed, Kim Warren; Pena, Gary Edward; Tuttle, Bruce Andrew; Glover, Steven Frank

    2009-11-01

    Fast electrical energy storage or Voltage-Driven Technology (VDT) has dominated fast, high-voltage pulsed power systems for the past six decades. Fast magnetic energy storage or Current-Driven Technology (CDT) is characterized by 10,000× higher energy density than VDT and has a great number of other substantial advantages, yet it has been all but neglected for all of these decades. The uniform explanation for the neglect of CDT technology is invariably that the industry has never been able to make an effective opening switch, which is essential for the use of CDT. Most approaches to opening switches have involved plasma of one sort or another. On a large scale, gaseous plasmas have been used as a conductor to bridge the switch electrodes that provides an opening function when the current wave front propagates through to the output end of the plasma and fully magnetizes the plasma; this is called a Plasma Opening Switch (POS). Opening can be triggered in a POS using a magnetic field to push the plasma out of the A-K gap; this is called a Magnetically Controlled Plasma Opening Switch (MCPOS). On a small scale, depletion of electron plasmas in semiconductor devices is used to effect opening switch behavior, but these devices are relatively low voltage and low current compared to the hundreds of kilovolts and tens of kiloamperes of interest to pulsed power. This work is an investigation into an entirely new approach to opening switch technology that utilizes new materials in new ways. The new materials are ferroelectrics, and using them as an opening switch is a stark contrast to their traditional applications in optics and transducers. Emphasis is on the use of high performance ferroelectrics with the objective of developing an opening switch that would be suitable for large scale pulsed power applications. Over the course of exploring this new ground, we have discovered new behaviors and properties of these materials that were heretofore unknown.

  19. Large-Scale Spray Releases: Additional Aerosol Test Results

    SciTech Connect (OSTI)

    Daniel, Richard C.; Gauglitz, Phillip A.; Burns, Carolyn A.; Fountain, Matthew S.; Shimskey, Rick W.; Billing, Justin M.; Bontha, Jagannadha R.; Kurath, Dean E.; Jenks, Jeromy WJ; MacFarlan, Paul J.; Mahoney, Lenna A.

    2013-08-01

    One of the events postulated in the hazard analysis for the Waste Treatment and Immobilization Plant (WTP) and other U.S. Department of Energy (DOE) nuclear facilities is a breach in process piping that produces aerosols with droplet sizes in the respirable range. The current approach for predicting the size and concentration of aerosols produced in a spray leak event involves extrapolating from correlations reported in the literature. These correlations are based on results obtained from small engineered spray nozzles using pure liquids that behave as a Newtonian fluid. The narrow ranges of physical properties on which the correlations are based do not cover the wide range of slurries and viscous materials that will be processed in the WTP and in processing facilities across the DOE complex. To expand the data set upon which the WTP accident and safety analyses were based, an aerosol spray leak testing program was conducted by Pacific Northwest National Laboratory (PNNL). PNNL’s test program addressed two key technical areas to improve the WTP methodology (Larson and Allen 2010). The first technical area was to quantify the role of slurry particles in small breaches where slurry particles may plug the hole and prevent high-pressure sprays. The results from an effort to address this first technical area can be found in Mahoney et al. (2012a). The second technical area was to determine aerosol droplet size distribution and total droplet volume from prototypic breaches and fluids, including sprays from larger breaches and sprays of slurries for which literature data are mostly absent. To address the second technical area, the testing program collected aerosol generation data at two scales, commonly referred to as small-scale and large-scale testing. The small-scale testing and resultant data are described in Mahoney et al. (2012b), and the large-scale testing and resultant data are presented in Schonewill et al. (2012). In tests at both scales, simulants were used

  20. Large Scale Obscuration and Related Climate Effects Workshop: Proceedings

    SciTech Connect (OSTI)

    Zak, B.D.; Russell, N.A.; Church, H.W.; Einfeld, W.; Yoon, D.; Behl, Y.K.

    1994-05-01

    A Workshop on Large Scale Obscuration and Related Climate Effects was held 29-31 January 1992, in Albuquerque, New Mexico. The objectives of the workshop were: to determine through the use of expert judgement the current state of understanding of regional and global obscuration and related climate effects associated with nuclear weapons detonations; to estimate how large the uncertainties are in the parameters associated with these phenomena (given specific scenarios); to evaluate the impact of these uncertainties on obscuration predictions; and to develop an approach for the prioritization of further work on newly-available data sets to reduce the uncertainties. The workshop consisted of formal presentations by the 35 participants, and subsequent topical working sessions on: the source term; aerosol optical properties; atmospheric processes; and electro-optical systems performance and climatic impacts. Summaries of the conclusions reached in the working sessions are presented in the body of the report. Copies of the transparencies shown as part of each formal presentation are contained in the appendices (microfiche).

  1. Large-scale BAO signatures of the smallest galaxies

    SciTech Connect (OSTI)

    Dalal, Neal; Pen, Ue-Li; Seljak, Uros

    2010-11-01

    Recent work has shown that at high redshift, the relative velocity between dark matter and baryonic gas is typically supersonic. This relative velocity suppresses the formation of the earliest baryonic structures like minihalos, and the suppression is modulated on large scales. This effect imprints a characteristic shape in the clustering power spectrum of the earliest structures, with significant power on ∼ 100 Mpc scales featuring highly pronounced baryon acoustic oscillations. The amplitude of these oscillations is orders of magnitude larger at z ∼ 20 than previously expected. This characteristic signature can allow us to distinguish the effects of minihalos on intergalactic gas at times preceding and during reionization. We illustrate this effect with the example of 21 cm emission and absorption from redshifts during and before reionization. This effect can potentially allow us to probe physics on kpc scales using observations on 100 Mpc scales. We present sensitivity forecasts for FAST and Arecibo. Depending on parameters, this enhanced structure may be detectable by Arecibo at z ∼ 15−20, and with appropriate instrumentation FAST could measure the BAO power spectrum with high precision. In principle, this effect could also pose a serious challenge for efforts to constrain dark energy using observations of the BAO feature at low redshift.

  2. Large scale electromechanical transistor with application in mass sensing

    SciTech Connect (OSTI)

    Jin, Leisheng; Li, Lijie

    2014-12-07

    Nanomechanical transistor (NMT) has evolved from the single electron transistor, a device that operates by shuttling electrons with a self-excited central conductor. The unfavoured aspects of the NMT are the complexity of the fabrication process and its signal processing unit, which could potentially be overcome by designing much larger devices. This paper reports a new design of large scale electromechanical transistor (LSEMT), still taking advantage of the principle of shuttling electrons. However, because of the large size, nonlinear electrostatic forces induced by the transistor itself are not sufficient to drive the mechanical member into vibration; an external force has to be used. In this paper, a LSEMT device is modelled, and its new application in mass sensing is postulated using two coupled mechanical cantilevers, with one of them embedded in the transistor. The sensor is capable of detecting added mass using the eigenstate-shift method by reading the change of electrical current from the transistor, which has much higher sensitivity than the conventional eigenfrequency-shift approach used in classical cantilever based mass sensors. Numerical simulations are conducted to investigate the performance of the mass sensor.
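
    To make the eigenstate-shift idea concrete, the toy model below (with invented parameters, not the paper's device) treats the two coupled cantilevers as a 2x2 generalized eigenproblem: with weak coupling, a tiny added mass strongly re-localizes the mode shapes while the eigenfrequencies barely move.

    ```python
    import numpy as np
    from scipy.linalg import eigh

    k, kc, m = 1.0, 1e-4, 1.0            # stiffness, weak coupling, base mass
    K = np.array([[k + kc, -kc], [-kc, k + kc]])

    def modes(dm):
        M = np.diag([m + dm, m])         # added mass dm lands on cantilever 1
        w2, vecs = eigh(K, M)            # generalized problem K v = w^2 M v
        return np.sqrt(w2), vecs

    w0, v0 = modes(0.0)                  # symmetric/antisymmetric mode shapes
    w1, v1 = modes(1e-3)                 # modes localize onto one cantilever
    print("relative frequency shifts:", np.abs(w1 - w0) / w0)  # tiny, ~1e-4
    print("mode shapes before:\n", v0)   # components ~(0.71, +/-0.71)
    print("mode shapes after:\n", v1)    # components ~(1.0, 0.1): large shift
    ```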

  3. LARGE SCALE METHOD FOR THE PRODUCTION AND PURIFICATION OF CURIUM

    DOE Patents [OSTI]

    Higgins, G.H.; Crane, W.W.T.

    1959-05-19

    A large-scale process for the production and purification of Cm-242 is described. Aluminum slugs containing Am are irradiated and declad in a NaOH-NaNO3 solution at 85 to 100 deg C. The resulting slurry is filtered and washed with NaOH, NH4OH, and H2O. Recovery of Cm from the filtrate and washings is effected by an Fe(OH)3 precipitation. The precipitates are then combined and dissolved in HCl and refractory oxides centrifuged out. These oxides are then fused with Na2CO3 and dissolved in HCl. The solution is evaporated and LiCl solution added. The Cm, rare earths, and anionic impurities are adsorbed on a strong-base anion exchange resin. Impurities are eluted with LiCl-HCl solution; rare earths and Cm are eluted by HCl. Other ion exchange steps further purify the Cm. The Cm is then precipitated as fluoride and used in this form or further purified and processed. (T.R.H.)

  4. Parallel I/O Software Infrastructure for Large-Scale Systems

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    An illustration of how MPI-IO file domain...

  5. The IR-resummed Effective Field Theory of Large Scale Structures...

    Office of Scientific and Technical Information (OSTI)

    IR-resummed Effective Field Theory of Large Scale Structures Citation Details In-Document Search Title: The IR-resummed Effective Field Theory of Large Scale Structures We present a ...

  6. I/O Performance of a Large-Scale, Interpreter-Driven Laser-Plasma...

    Office of Scientific and Technical Information (OSTI)

    Conference: IO Performance of a Large-Scale, Interpreter-Driven Laser-Plasma Interaction Code Citation Details In-Document Search Title: IO Performance of a Large-Scale, ...

  7. Comparison of the effects in the rock mass of large-scale chemical...

    Office of Scientific and Technical Information (OSTI)

    Comparison of the effects in the rock mass of large-scale chemical and nuclear explosions. ... Title: Comparison of the effects in the rock mass of large-scale chemical and nuclear ...

  8. Energy Department Awards $66.7 Million for Large-Scale Carbon...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    66.7 Million for Large-Scale Carbon Sequestration Project Energy Department Awards 66.7 Million for Large-Scale Carbon Sequestration Project December 18, 2007 - 4:58pm Addthis ...

  9. Large-Scale Deep Learning on the YFCC100M Dataset (Conference...

    Office of Scientific and Technical Information (OSTI)

    Conference: Large-Scale Deep Learning on the YFCC100M Dataset Citation Details In-Document Search Title: Large-Scale Deep Learning on the YFCC100M Dataset Authors: Ni, K ; Boakye, ...

  10. Efficient preconditioning of the electronic structure problem in large scale ab initio molecular dynamics simulations

    SciTech Connect (OSTI)

    Schiffmann, Florian; VandeVondele, Joost

    2015-06-28

    We present an improved preconditioning scheme for electronic structure calculations based on the orbital transformation method. First, a preconditioner is developed which includes information from the full Kohn-Sham matrix but avoids computationally demanding diagonalisation steps in its construction. This reduces the computational cost of its construction, eliminating a bottleneck in large scale simulations, while maintaining rapid convergence. In addition, a modified form of Hotelling’s iterative inversion is introduced to replace the exact inversion of the preconditioner matrix. This method is highly effective during molecular dynamics (MD), as the solution obtained in earlier MD steps is a suitable initial guess. Filtering small elements during sparse matrix multiplication leads to linear scaling inversion, while retaining robustness, already for relatively small systems. For system sizes ranging from a few hundred to a few thousand atoms, which are typical for many practical applications, the improvements to the algorithm lead to a 2-5 fold speedup per MD step.
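
    Hotelling's iterative inversion itself is classical, so a minimal dense NumPy sketch is shown below; in the setting described above the products would be sparse with small elements filtered, which this illustration does not attempt.

    ```python
    import numpy as np

    def hotelling_inverse(A, n_iter=20):
        """Approximate A^-1 via X_{k+1} = X_k (2I - A X_k), which converges
        quadratically whenever ||I - A X_0|| < 1."""
        n = A.shape[0]
        # Classical safe start: X0 = A^T / (||A||_1 * ||A||_inf).
        X = A.T / (np.linalg.norm(A, 1) * np.linalg.norm(A, np.inf))
        I = np.eye(n)
        for _ in range(n_iter):
            X = X @ (2 * I - A @ X)
        return X

    rng = np.random.default_rng(0)
    A = np.eye(50) + 0.01 * rng.standard_normal((50, 50))  # well-conditioned
    X = hotelling_inverse(A)
    print(np.linalg.norm(A @ X - np.eye(50)))              # ~ machine precision
    ```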

  11. EERE Success Story-FEMP Helps Federal Facilities Develop Large-Scale Renewable Energy Projects

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    August 21, 2013. EERE's Federal Energy Management Program issued a new resource that provides best practices and helpful guidance for federal agencies developing large-scale renewable energy projects. The resource, Large-Scale Renewable Energy Guide:

  12. Large-Scale Spray Releases: Initial Aerosol Test Results

    SciTech Connect (OSTI)

    Schonewill, Philip P.; Gauglitz, Phillip A.; Bontha, Jagannadha R.; Daniel, Richard C.; Kurath, Dean E.; Adkins, Harold E.; Billing, Justin M.; Burns, Carolyn A.; Davis, James M.; Enderlin, Carl W.; Fischer, Christopher M.; Jenks, Jeromy WJ; Lukins, Craig D.; MacFarlan, Paul J.; Shutthanandan, Janani I.; Smith, Dennese M.

    2012-12-01

    One of the events postulated in the hazard analysis at the Waste Treatment and Immobilization Plant (WTP) and other U.S. Department of Energy (DOE) nuclear facilities is a breach in process piping that produces aerosols with droplet sizes in the respirable range. The current approach for predicting the size and concentration of aerosols produced in a spray leak involves extrapolating from correlations reported in the literature. These correlations are based on results obtained from small engineered spray nozzles using pure liquids with Newtonian fluid behavior. The narrow ranges of physical properties on which the correlations are based do not cover the wide range of slurries and viscous materials that will be processed in the WTP and across processing facilities in the DOE complex. Two key technical areas were identified where testing results were needed to improve the technical basis by reducing the uncertainty due to extrapolating existing literature results. The first technical need was to quantify the role of slurry particles in small breaches where the slurry particles may plug and result in substantially reduced, or even negligible, respirable fraction formed by high-pressure sprays. The second technical need was to determine the aerosol droplet size distribution and volume from prototypic breaches and fluids, specifically including sprays from larger breaches with slurries where data from the literature are scarce. To address these technical areas, small- and large-scale test stands were constructed and operated with simulants to determine aerosol release fractions and generation rates from a range of breach sizes and geometries. The properties of the simulants represented the range of properties expected in the WTP process streams and included water, sodium salt solutions, slurries containing boehmite or gibbsite, and a hazardous chemical simulant. The effect of anti-foam agents was assessed with most of the simulants. Orifices included round holes and

  13. Large-Scale Data Challenges in Future Power Grids

    SciTech Connect (OSTI)

    Yin, Jian; Sharma, Poorva; Gorton, Ian; Akyol, Bora A.

    2013-03-25

    This paper describes technical challenges in supporting large-scale real-time data analysis for future power grid systems and discusses various design options to address these challenges. Even though the existing U.S. power grid has served the nation remarkably well over the last 120 years, big changes are on the horizon. The widespread deployment of renewable generation, smart grid controls, energy storage, plug-in hybrids, and new conducting materials will require fundamental changes in the operational concepts and principal components. The whole system becomes highly dynamic and needs constant adjustments based on real-time data. Even though millions of sensors such as phase measurement units (PMUs) and smart meters are being widely deployed, a data layer that can support this amount of data in real time is needed. Unlike the data fabric in cloud services, the data layer for smart grids must address some unique challenges. This layer must be scalable to support millions of sensors and a large number of diverse applications and still provide real-time guarantees. Moreover, the system needs to be highly reliable and highly secure because the power grid is a critical piece of infrastructure. No existing systems can satisfy all the requirements at the same time. We examine various design options. In particular, we explore the special characteristics of power grid data to meet both scalability and quality of service requirements. Our initial prototype can improve performance by orders of magnitude over existing general-purpose systems. The prototype was demonstrated with several use cases from PNNL's FPGI and was shown to be able to integrate huge amounts of data from a large number of sensors and a diverse set of applications.

  15. Large-scale structure evolution in axisymmetric, compressible free-shear layers

    SciTech Connect (OSTI)

    Aeschliman, D.P.; Baty, R.S.

    1997-05-01

    This paper is a description of work-in-progress. It describes Sandia's program to study the basic fluid mechanics of large-scale mixing in unbounded, compressible, turbulent flows, specifically, the turbulent mixing of an axisymmetric compressible helium jet in a parallel, coflowing compressible air freestream. Both jet and freestream velocities are variable over a broad range, providing a wide range of mixing-layer Reynolds numbers. Although the convective Mach number, M_c, range is currently limited by the present nozzle design to values of 0.6 and below, straightforward nozzle design changes would permit a wide range of convective Mach number, to well in excess of 1.0. The use of helium allows simulation of a hot jet due to the large density difference, and also aids in obtaining optical flow visualization via schlieren due to the large density gradient in the mixing layer. The work comprises a blend of analysis, experiment, and direct numerical simulation (DNS). Here the authors discuss only the analytical and experimental efforts to observe and describe the evolution of the large-scale structures. The DNS work, used to compute local two-point velocity correlation data, will be discussed elsewhere.

  16. Feeding a large-scale physics application to Python

    SciTech Connect (OSTI)

    Beazley, D.M.; Lomdahl, P.S.

    1997-10-01

    The authors describe their experiences using Python with the SPaSM molecular dynamics code at Los Alamos National Laboratory. Originally developed as a large monolithic application for massive parallel processing systems, they have used Python to transform their application into a flexible, highly modular, and extremely powerful system for performing simulation, data analysis, and visualization. In addition, they describe how Python has solved a number of important problems related to the development, debugging, deployment, and maintenance of scientific software.

  17. Cosmological implications of the CMB large-scale structure

    SciTech Connect (OSTI)

    Melia, Fulvio

    2015-01-01

    The Wilkinson Microwave Anisotropy Probe (WMAP) and Planck may have uncovered several anomalies in the full cosmic microwave background (CMB) sky that could indicate possible new physics driving the growth of density fluctuations in the early universe. These include an unusually low power at the largest scales and an apparent alignment of the quadrupole and octopole moments. In a ΛCDM model where the CMB is described by a Gaussian Random Field, the quadrupole and octopole moments should be statistically independent. The emergence of these low probability features may simply be due to posterior selections from many such possible effects, whose occurrence would therefore not be as unlikely as one might naively infer. If this is not the case, however, and if these features are not due to effects such as foreground contamination, their combined statistical significance would be equal to the product of their individual significances. In the absence of such extraneous factors, and ignoring the biasing due to posterior selection, the missing large-angle correlations would have a probability as low as ∼0.1% and the low-l multipole alignment would be unlikely at the ∼4.9% level; under the least favorable conditions, their simultaneous observation in the context of the standard model could then be likely at only the ∼0.005% level. In this paper, we explore the possibility that these features are indeed anomalous, and show that the corresponding probability of CMB multipole alignment in the R_h = ct universe would then be ∼7-10%, depending on the number of large-scale Sachs-Wolfe induced fluctuations. Since the low power at the largest spatial scales is reproduced in this cosmology without the need to invoke cosmic variance, the overall likelihood of observing both of these features in the CMB is ∼7%, much more likely than in ΛCDM, if the anomalies are real. The key physical ingredient responsible for this difference is the existence in the former of a maximum fluctuation

  18. Solving Large Scale Nonlinear Eigenvalue Problem in Next-Generation Accelerator Design

    SciTech Connect (OSTI)

    Liao, Ben-Shan; Bai, Zhaojun; Lee, Lie-Quan; Ko, Kwok (SLAC)

    2006-09-28

    A number of numerical methods, including inverse iteration, the method of successive linear problems, and the nonlinear Arnoldi algorithm, are studied in this paper to solve a large scale nonlinear eigenvalue problem arising from the finite element analysis of resonant frequencies and external Q_e values of a waveguide-loaded cavity in the next-generation accelerator design. The authors present a nonlinear Rayleigh-Ritz iterative projection algorithm, NRRIT for short, and demonstrate that it is the most promising approach for a model scale cavity design. The NRRIT algorithm is an extension of the nonlinear Arnoldi algorithm due to Voss. Computational challenges of solving such a nonlinear eigenvalue problem for a full scale cavity design are outlined.
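
    Of the classical methods named above, the method of successive linear problems is the easiest to sketch: at each iterate a linear generalized eigenproblem in T and its derivative is solved, and the eigenvalue of smallest magnitude supplies a Newton-like update. The small quadratic test problem below is invented, not the waveguide-loaded cavity model.

    ```python
    import numpy as np
    from scipy.linalg import eig

    # Quadratic nonlinear eigenproblem T(l) x = 0 with T(l) = l^2 M + l C + K.
    M = np.eye(3)
    C = np.diag([0.1, 0.2, 0.3])
    K = np.diag([2.0, 5.0, 10.0])
    T  = lambda l: l * l * M + l * C + K
    Tp = lambda l: 2 * l * M + C             # dT/dl

    lam = 1.0j                               # initial guess
    for _ in range(30):
        # Solve T(lam) x = nu T'(lam) x and take the Newton-like step
        # lam <- lam - nu using the nu of smallest magnitude.
        nus = eig(T(lam), Tp(lam), right=False)
        nu = min(nus[np.isfinite(nus)], key=abs)
        lam -= nu
        if abs(nu) < 1e-12:
            break
    print("eigenvalue:", lam)
    print("sigma_min of T(lam):", np.linalg.svd(T(lam))[1].min())  # ~ 0
    ```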

  19. Final Report: Migration Mechanisms for Large-scale Parallel Applications

    SciTech Connect (OSTI)

    Jason Nieh

    2009-10-30

    Process migration is the ability to transfer a process from one machine to another. It is a useful facility in distributed computing environments, especially as computing devices become more pervasive and Internet access becomes more ubiquitous. The potential benefits of process migration, among others, are fault resilience by migrating processes off of faulty hosts, data access locality by migrating processes closer to the data, better system response time by migrating processes closer to users, dynamic load balancing by migrating processes to less loaded hosts, and improved service availability and administration by migrating processes before host maintenance so that applications can continue to run with minimal downtime. Although process migration provides substantial potential benefits and many approaches have been considered, achieving transparent process migration functionality has been difficult in practice. To address this problem, our work has designed, implemented, and evaluated new and powerful transparent process checkpoint-restart and migration mechanisms for desktop, server, and parallel applications that operate across heterogeneous cluster and mobile computing environments. A key aspect of this work has been to introduce lightweight operating system virtualization to provide processes with private, virtual namespaces that decouple and isolate processes from dependencies on the host operating system instance. This decoupling enables processes to be transparently checkpointed and migrated without modifying, recompiling, or relinking applications or the operating system. Building on this lightweight operating system virtualization approach, we have developed novel technologies that enable (1) coordinated, consistent checkpoint-restart and migration of multiple processes, (2) fast checkpointing of process and file system state to enable restart of multiple parallel execution environments and time travel, (3) process migration across heterogeneous

  20. Feasibility of Large-Scale Ocean CO2 Sequestration

    SciTech Connect (OSTI)

    Peter Brewer

    2008-08-31

    Scientific knowledge of natural clathrate hydrates has grown enormously over the past decade, with spectacular new findings of large exposures of complex hydrates on the sea floor, the development of new tools for examining the solid phase in situ, significant progress in modeling natural hydrate systems, and the discovery of exotic hydrates associated with sea floor venting of liquid CO{sub 2}. Major unresolved questions remain about the role of hydrates in response to climate change today, and correlations between the hydrate reservoir of Earth and the stable isotopic evidence of massive hydrate dissociation in the geologic past. The examination of hydrates as a possible energy resource is proceeding apace for the subpermafrost accumulations in the Arctic, but serious questions remain about the viability of marine hydrates as an economic resource. New and energetic explorations by nations such as India and China are quickly uncovering large hydrate findings on their continental shelves. In this report we detail research carried out in the period October 1, 2007 through September 30, 2008. The primary body of work is contained in a formal publication attached as Appendix 1 to this report. In brief we have surveyed the recent literature with respect to the natural occurrence of clathrate hydrates (with a special emphasis on methane hydrates), the tools used to investigate them and their potential as a new source of natural gas for energy production.

  1. Large scale condensed matter and fluid dynamics simulations | Argonne Leadership Computing Facility

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    [Figure captions: (a) Snapshots of the vorticity field of a UPO located in weakly turbulent flow with Re = 371 and period equal to 26,864 LB time steps; the quantity shown is the magnitude of vorticity above a given cut-off level, with red corresponding to large negative vorticity (clockwise rotation) and blue to large positive vorticity (counter-clockwise rotation). (b) Initial structure of the large LDH-nucleic acid model at the start of the simulation.]

  2. Locations of Smart Grid Demonstration and Large-Scale Energy Storage Projects

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Map of the United States showing the location of all projects created with funding from the Smart Grid Demonstration and Energy Storage Project, funded through the American Recovery and Reinvestment Act.

  3. DGDFT: A massively parallel method for large scale density functional theory calculations

    SciTech Connect (OSTI)

    Hu, Wei; Yang, Chao; Lin, Lin

    2015-09-28

    We describe a massively parallel implementation of the recently developed discontinuous Galerkin density functional theory (DGDFT) method, for efficient large-scale Kohn-Sham DFT based electronic structure calculations. The DGDFT method uses adaptive local basis (ALB) functions generated on-the-fly during the self-consistent field iteration to represent the solution to the Kohn-Sham equations. The use of the ALB set provides a systematic way to improve the accuracy of the approximation. By using the pole expansion and selected inversion technique to compute electron density, energy, and atomic forces, we can make the computational complexity of DGDFT scale at most quadratically with respect to the number of electrons for both insulating and metallic systems. We show that for the two-dimensional (2D) phosphorene systems studied here, using 37 basis functions per atom allows us to reach an accuracy level of 1.3 × 10^-4 Hartree/atom in terms of the error of energy and 6.2 × 10^-4 Hartree/bohr in terms of the error of atomic force, respectively. DGDFT can achieve 80% parallel efficiency on 128,000 high performance computing cores when it is used to study the electronic structure of 2D phosphorene systems with 3500-14,000 atoms. This high parallel efficiency results from a two-level parallelization scheme that we will describe in detail.

  4. Large-Scale Continuous Subgraph Queries on Streams

    SciTech Connect (OSTI)

    Choudhury, Sutanay; Holder, Larry; Chin, George; Feo, John T.

    2011-11-30

    Graph pattern matching involves finding exact or approximate matches for a query subgraph in a larger graph. It has been studied extensively and has strong applications in domains such as computer vision, computational biology, social networks, security and finance. The problem of exact graph pattern matching is often described in terms of subgraph isomorphism which is NP-complete. The exponential growth in streaming data from online social networks, news and video streams and the continual need for situational awareness motivates a solution for finding patterns in streaming updates. This is also the prime driver for the real-time analytics market. Development of incremental algorithms for graph pattern matching on streaming inputs to a continually evolving graph is a nascent area of research. Some of the challenges associated with this problem are the same as found in continuous query (CQ) evaluation on streaming databases. This paper reviews some of the representative work from the exhaustively researched field of CQ systems and identifies important semantics, constraints and architectural features that are also appropriate for HPC systems performing real-time graph analytics. For each of these features we present a brief discussion of the challenge encountered in the database realm, the approach to the solution and state their relevance in a high-performance, streaming graph processing framework.
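
    As a toy instance of such incremental evaluation (not the paper's framework), the sketch below registers a standing triangle query and, on each streaming edge insertion, emits only the new matches that the inserted edge completes.

    ```python
    from collections import defaultdict

    adj = defaultdict(set)   # evolving undirected graph

    def insert_edge(u, v):
        """Apply one streaming update; return triangles it newly completes."""
        new_matches = [(u, v, w) for w in adj[u] & adj[v]]
        adj[u].add(v)
        adj[v].add(u)
        return new_matches

    stream = [("a", "b"), ("b", "c"), ("c", "a"), ("c", "d"), ("d", "a")]
    for edge in stream:
        for match in insert_edge(*edge):
            print("new triangle:", match)
    # -> ('c', 'a', 'b') when ("c", "a") arrives; ('d', 'a', 'c') for ("d", "a")
    ```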

  5. Large-scale Offshore Wind Power in the United States. Assessment of Opportunities and Barriers

    SciTech Connect (OSTI)

    Musial, Walter; Ram, Bonnie

    2010-09-01

    This report describes the benefits of and barriers to large-scale deployment of offshore wind energy systems in U.S. waters.

  6. Large-scale delamination of multi-layers transition metal carbides...

    Office of Scientific and Technical Information (OSTI)

    Citation Details In-Document Search Title: Large-scale ... Herein we report on a general approach to delaminate ... Type: Accepted Manuscript Journal Name: Dalton Transactions ...

  7. A Large-Scale, High-Resolution Hydrological Model Parameter Data...

    Office of Scientific and Technical Information (OSTI)

    Large-Scale, High-Resolution Hydrological Model Parameter Data Set for Climate Change Impact Assessment for the Conterminous US Citation Details In-Document Search Title: A ...

  8. HyLights -- Tools to Prepare the Large-Scale European Demonstration...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Projects on Hydrogen for Transport HyLights -- Tools to Prepare the Large-Scale European Demonstration Projects on Hydrogen for Transport Presented at Refueling ...

  9. Development of fine-resolution analyses and expanded large-scale...

    Office of Scientific and Technical Information (OSTI)

    II: Scale-awareness and application to single-column model experiments Title: Development of fine-resolution analyses and expanded large-scale forcing properties. Part II: ...

  10. Development of fine-resolution analyses and expanded large-scale...

    Office of Scientific and Technical Information (OSTI)

    I: Methodology and evaluation Citation Details In-Document Search Title: Development of fine-resolution analyses and expanded large-scale forcing properties. Part I: Methodology ...