National Library of Energy BETA

Sample records for high performance computing

  1. High Performance Computing

    U.S. Department of Energy (DOE) - all webpages (Extended Search)

    INL's high-performance computing center provides general use scientific computing capabilities to support the lab's efforts in advanced...

  2. Sandia Energy - High Performance Computing

    U.S. Department of Energy (DOE) - all webpages (Extended Search)

    Home > Energy Research > Advanced Scientific Computing Research (ASCR) > High Performance Computing ...

  3. Introduction to High Performance Computing

    U.S. Department of Energy (DOE) - all webpages (Extended Search)

    Introduction to High Performance Computing, June 10, 2013. Download: Gerber-HPC-2.pdf...

  4. Software and High Performance Computing

    U.S. Department of Energy (DOE) - all webpages (Extended Search)

    Providing world-class high performance computing capability that enables unsurpassed solutions to complex problems of strategic national interest. Contact: Kathleen McDonald, Head of Intellectual Property, Business Development Executive, Richard P. Feynman Center for Innovation, (505) 667-5844. Software: Computational physics, computer science, applied mathematics, statistics and the

  5. Presentation: High Performance Computing Applications

    Energy.gov [DOE]

    A briefing to the Secretary's Energy Advisory Board on High Performance Computing Applications delivered by Frederick H. Streitz, Lawrence Livermore National Laboratory.

  6. Thrusts in High Performance Computing

    U.S. Department of Energy (DOE) - all webpages (Extended Search)

    Thrusts in High Performance Computing: Science at Scale (petaflops to exaflops); Science through Volume (thousands to millions of simulations); Science in Data (petabytes to ...

  7. Software and High Performance Computing

    U.S. Department of Energy (DOE) - all webpages (Extended Search)

    Computational physics, computer science, applied mathematics, statistics and the ... a fully operational supercomputing environment Providing Current Capability Scientific ...

  8. iSSH v. Auditd: Intrusion Detection in High Performance Computing Computer System, Cluster, and Networking...

    U.S. Department of Energy (DOE) - all webpages (Extended Search)

    iSSH v. Auditd: Intrusion Detection in High Performance Computing Computer System, Cluster, and Networking Summer Institute David Karns, New Mexico State University Katy Protin,...

  9. High Performance Computing Student Career Resources

    U.S. Department of Energy (DOE) - all webpages (Extended Search)

    Explore the multiple dimensions of a career at Los Alamos Lab: work with the best minds on the planet in an inclusive environment that is rich in intellectual vitality and opportunities for growth. Contact: Student Liaison Mychael Parish. High Performance Computing Capabilities: The High Performance Computing (HPC) Division supports the Laboratory mission by managing world-class supercomputing centers. Our capabilities

  10. SciTech Connect: "high performance computing"

    Office of Scientific and Technical Information (OSTI)

    Advanced search results for All Fields: "high performance computing" ...

  11. high performance computing | National Nuclear Security Administration

    National Nuclear Security Administration (NNSA)

    A story of tech transfer success: prize-winning innovation for HPC. Last month, NNSA's Technology Transfer Program Manager for the Office of Strategic ...

  12. Introduction to High Performance Computing Using GPUs

    U.S. Department of Energy (DOE) - all webpages (Extended Search)

    NERSC, NVIDIA, and The Portland Group presented a one-day workshop, "Introduction to High Performance Computing Using GPUs," on July 11, 2013, in Room 250 of Sutardja Dai Hall on the University of California, Berkeley, campus. Registration was free and open to all NERSC users; Berkeley Lab researchers; UC students, faculty, and staff; and users of the Oak Ridge Leadership Computing Facility. This workshop

  13. High Performance Computing Data Center (Fact Sheet)

    SciTech Connect

    Not Available

    2014-08-01

    This two-page fact sheet describes the new High Performance Computing Data Center in the ESIF and talks about some of the capabilities and unique features of the center.

  14. High Performance Computing Data Center (Fact Sheet)

    SciTech Connect

    Not Available

    2012-08-01

    This two-page fact sheet describes the new High Performance Computing Data Center being built in the ESIF and talks about some of the capabilities and unique features of the center.

  15. Collaboration to advance high-performance computing

    U.S. Department of Energy (DOE) - all webpages (Extended Search)

    LANL and EMC will enhance, design, build, test, and deploy new cutting-edge technologies to meet some of the most difficult information technology challenges. December 21, 2011. Los Alamos National Laboratory sits on top of a once-remote mesa in northern New Mexico with the Jemez mountains as a backdrop to research and innovation covering multiple disciplines, from bioscience to sustainable energy

  16. Debugging a high performance computing program

    DOEpatents

    Gooding, Thomas M.

    2014-08-19

    Methods, apparatus, and computer program products are disclosed for debugging a high performance computing program by gathering lists of addresses of calling instructions for a plurality of threads of execution of the program, assigning the threads to groups in dependence upon the addresses, and displaying the groups to identify defective threads.
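
    As a rough illustration of the grouping idea in this abstract, the sketch below records one calling-instruction address per thread and then groups threads that share an address; it assumes GCC or Clang (for __builtin_return_address) plus POSIX threads, and every name in it is invented for illustration rather than taken from the patent.

        #include <pthread.h>
        #include <stdint.h>
        #include <stdio.h>

        #define NTHREADS 8
        static void *call_site[NTHREADS];      /* calling address per thread */

        __attribute__((noinline))
        static void record(int tid) {
            /* address of the instruction that called record() */
            call_site[tid] = __builtin_return_address(0);
        }

        static void *worker_a(void *arg) { record((int)(intptr_t)arg); return NULL; }
        static void *worker_b(void *arg) { record((int)(intptr_t)arg); return NULL; }

        int main(void) {
            pthread_t t[NTHREADS];
            for (int i = 0; i < NTHREADS; i++)  /* two call sites on purpose */
                pthread_create(&t[i], NULL, (i % 2) ? worker_b : worker_a,
                               (void *)(intptr_t)i);
            for (int i = 0; i < NTHREADS; i++)
                pthread_join(t[i], NULL);

            /* group threads by shared calling address; a small group stuck
               at an unexpected call site would flag the defective threads */
            for (int i = 0; i < NTHREADS; i++) {
                int seen = 0;
                for (int j = 0; j < i; j++)
                    if (call_site[j] == call_site[i]) { seen = 1; break; }
                if (seen) continue;
                printf("call site %p:", call_site[i]);
                for (int j = 0; j < NTHREADS; j++)
                    if (call_site[j] == call_site[i]) printf(" thread %d", j);
                printf("\n");
            }
            return 0;
        }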

  17. Debugging a high performance computing program

    DOEpatents

    Gooding, Thomas M.

    2013-08-20

    Methods, apparatus, and computer program products are disclosed for debugging a high performance computing program by gathering lists of addresses of calling instructions for a plurality of threads of execution of the program, assigning the threads to groups in dependence upon the addresses, and displaying the groups to identify defective threads.

  18. Climate Modeling using High-Performance Computing

    SciTech Connect

    Mirin, A A

    2007-02-05

    The Center for Applied Scientific Computing (CASC) and the LLNL Climate and Carbon Science Group of Energy and Environment (E and E) are working together to improve predictions of future climate by applying the best available computational methods and computer resources to this problem. Over the last decade, researchers at the Lawrence Livermore National Laboratory (LLNL) have developed a number of climate models that provide state-of-the-art simulations on a wide variety of massively parallel computers. We are now developing and applying a second generation of high-performance climate models. Through the addition of relevant physical processes, we are developing an earth systems modeling capability as well.

  19. High Performance Computing at the Oak Ridge Leadership Computing Facility

    U.S. Department of Energy (DOE) - all webpages (Extended Search)

    High Performance Computing at the Oak Ridge Leadership Computing Facility. Outline: Our Mission; Computer Systems: Present, Past, Future; Challenges Along the Way; Resources for Users. Our Mission: world's most powerful computing facility; nation's largest concentration of open source materials research; $1.3B budget; 4,250 employees; 3,900 research guests annually; $350 million invested in modernization; nation's most diverse energy

  20. High Performance Computing Data Center Metering Protocol

    Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)

    High Performance Computing Data Center Metering Protocol. Prepared for: U.S. Department of Energy, Office of Energy Efficiency and Renewable Energy, Federal Energy Management Program. Prepared by: Thomas Wenning and Michael MacDonald, Oak Ridge National Laboratory, September 2010. Introduction: Data centers in general are continually using more compact and energy-intensive central processing units, but the total number and size of data centers continues to increase to meet progressive computing

  21. High Performance Computing CFRD -- Final Technical Report

    SciTech Connect

    Hope Forsmann; Kurt Hamman

    2003-01-01

    The Bechtel Waste Treatment Project (WTP), located in Richland, WA, comprises many processes containing complex physics. Accurate analyses of the underlying physics of these processes are needed to reduce the amount of added costs during and after construction that are due to unknown process behavior. The WTP will have tight operating margins in order to complete the treatment of the waste on schedule. The combination of tight operating constraints coupled with complex physical processes requires analysis methods that are more accurate than traditional approaches. This study is focused specifically on multidimensional computer-aided solutions. There are many skills and tools required to solve engineering problems. Many physical processes are governed by nonlinear partial differential equations. These governing equations have few, if any, closed-form solutions. Past and present solution methods require assumptions to reduce these equations to solvable forms. Computational methods take the governing equations and solve them directly on a computational grid. This ability to approach the equations in their exact form reduces the number of assumptions that must be made. This approach increases the accuracy of the solution and its applicability to the problem at hand. Recent advances in computer technology have allowed computer simulations to become an essential tool for problem solving. In order to perform computer simulations as quickly and accurately as possible, both hardware and software must be evaluated. With regard to hardware, average consumer personal computers (PCs) are not configured for optimal scientific use. Only a few vendors create high performance computers to satisfy engineering needs. Software must be optimized for quick and accurate execution. Operating systems must utilize the hardware efficiently while supplying the software with seamless access to the computer's resources. From the perspective of Bechtel Corporation and the Idaho
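
    Since the abstract's key claim is that computational methods solve the governing equations directly on a grid, here is a minimal, hedged illustration: an explicit finite-difference update for the 1-D heat equation u_t = alpha * u_xx. The grid size, coefficient, and initial condition are arbitrary and have nothing to do with the WTP analyses.

        /* Sketch: explicit finite differences for u_t = alpha * u_xx on a
           uniform grid, illustrating a PDE solved directly on a grid. */
        #include <stdio.h>

        #define N 101                              /* grid points */

        int main(void) {
            double u[N], unew[N];
            const double alpha = 1.0, dx = 1.0 / (N - 1);
            const double dt = 0.4 * dx * dx / alpha;  /* stable: dt <= dx^2/(2*alpha) */

            for (int i = 0; i < N; i++) u[i] = 0.0;
            u[N / 2] = 1.0;                        /* initial heat pulse */

            for (int step = 0; step < 1000; step++) {
                for (int i = 1; i < N - 1; i++)    /* interior updates */
                    unew[i] = u[i] + alpha * dt / (dx * dx)
                              * (u[i - 1] - 2.0 * u[i] + u[i + 1]);
                unew[0] = unew[N - 1] = 0.0;       /* fixed boundaries */
                for (int i = 0; i < N; i++) u[i] = unew[i];
            }
            printf("u at center after 1000 steps: %g\n", u[N / 2]);
            return 0;
        }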

  22. High-performance computing for airborne applications

    SciTech Connect

    Quinn, Heather M; Manuzzato, Andrea; Fairbanks, Tom; Dallmann, Nicholas; Desgeorges, Rose

    2010-06-28

    Recently, there have been attempts to move common satellite tasks to unmanned aerial vehicles (UAVs). UAVs are significantly cheaper to buy than satellites and easier to deploy on an as-needed basis. The more benign radiation environment also allows for an aggressive adoption of state-of-the-art commercial computational devices, which increases the amount of data that can be collected. There are a number of commercial computing devices currently available that are well-suited to high-performance computing. These devices range from specialized computational devices, such as field-programmable gate arrays (FPGAs) and digital signal processors (DSPs), to traditional computing platforms, such as microprocessors. Even though the radiation environment is relatively benign, these devices could be susceptible to single-event effects. In this paper, we will present radiation data for high-performance computing devices in an accelerated neutron environment. These devices include a multi-core digital signal processor, two field-programmable gate arrays, and a microprocessor. From these results, we found that all of these devices are suitable for many airplane environments without reliability problems.

  23. High Performance Computing | Argonne National Laboratory

    U.S. Department of Energy (DOE) - all webpages (Extended Search)

    A visualization of a simulated collision event in the ATLAS detector. This simulation, containing a Z boson and five hadronic jets, is an example of an event that is too complex to be simulated in bulk using ordinary PC-based computing grids.

  24. OSTIblog Articles in the High-performance computing Topic | OSTI...

    Office of Scientific and Technical Information (OSTI)

    Research, ASCR, climate change, earth systems modeling, High-performance computing, ... ORNL's National Center for Computational Sciences... Related Topics: High-performance ...

  25. Energy Efficiency Opportunities in Federal High Performance Computing...

    Office of Environmental Management (EM)

    Energy Efficiency Opportunities in Federal High Performance Computing Data Centers. Case study describes ...

  26. Nuclear Forces and High-Performance Computing: The Perfect Match...

    Office of Scientific and Technical Information (OSTI)

    Conference: Nuclear Forces and High-Performance Computing: The Perfect Match ...

  27. Computational Performance of Ultra-High-Resolution Capability...

    Office of Scientific and Technical Information (OSTI)

    Computational Performance of Ultra-High-Resolution Capability in the Community Earth System Model ...

  28. High Performance Computing Facility Operational Assessment, CY...

    Office of Scientific and Technical Information (OSTI)

    At 2.33 petaflops peak performance, the Cray XT Jaguar delivered more than 1.4 billion core hours in calendar year (CY) 2011 to researchers around the world for computational ...

  29. OCIO Technology Summit: High Performance Computing | Department of Energy

    Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)

    OCIO Technology Summit: High Performance Computing, January 16, 2015. Peter J. Tseronis, Former Chief Technology Officer. Last week, the Office of the Chief Information Officer sponsored a Technology Summit on High Performance Computing (HPC), hosted by the Chief Technology Officer. This was the eleventh in a series showcasing federal innovation and transformation. The summit explored

  30. Department of Defense High Performance Computing Modernization...

    U.S. Department of Energy (DOE) - all webpages (Extended Search)

    Computational Chemistry, Biology & Materials Science - 387 users; Computational Electromagnetics & Acoustics - 310 users; Computational Fluid Dynamics - 1,664 users; Environmental ...

  31. High-performance computer system installed at Los Alamos National Laboratory

    U.S. Department of Energy (DOE) - all webpages (Extended Search)

    A new high-performance computer system, called Wolf, will be used for unclassified research. June 17, 2014. The Wolf computer system modernizes mid-tier resources for Los Alamos scientists. Contact: Nancy Ambrosiano, Communications Office, (505) 667-0471. "This machine

  32. Toward a new metric for ranking high performance computing systems.

    Office of Scientific and Technical Information (OSTI)

    Toward a new metric for ranking high performance computing systems (Technical Report). The High Performance Linpack (HPL), or Top 500, benchmark [1] is the most widely recognized and discussed metric for ranking high performance computing systems. However, HPL is increasingly unreliable as a true measure of system performance for a growing collection of important

  33. Benchmarking: More Aspects of High Performance Computing

    SciTech Connect

    Rahul Ravindrudu

    2004-12-19

    pattern for the left-looking factorization. The right-looking algorithm performs better for in-core data, but the left-looking will perform better for out-of-core data due to the reduced I/O operations. Hence the conclusion that out-of-core algorithms will perform better when designed for out-of-core execution from the start. The out-of-core and thread-based computation do not interact in this case, since I/O is not done by the threads. The performance of the thread-based computation does not depend on I/O, as the algorithms are BLAS algorithms, which assume all the data to be in memory. This is the reason the out-of-core results and OpenMP thread results were presented separately and no attempt to combine them was made. In general, the modified HPL performs better with larger block sizes, due to less I/O involved for the out-of-core part and better cache utilization for the thread-based computation.
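
    To make the left-looking/right-looking distinction concrete, the sketch below shows both orderings of an unblocked LU factorization (no pivoting): the right-looking form updates the trailing submatrix eagerly, while the left-looking form defers updates until each column is visited, which is what reduces I/O when columns live out of core. This is an illustrative toy, not the modified HPL code from the report.

        #include <stdio.h>
        #define N 4

        void lu_right_looking(double a[N][N]) {
            for (int k = 0; k < N; k++) {
                for (int i = k + 1; i < N; i++) a[i][k] /= a[k][k];
                for (int i = k + 1; i < N; i++)     /* eager trailing update */
                    for (int j = k + 1; j < N; j++)
                        a[i][j] -= a[i][k] * a[k][j];
            }
        }

        void lu_left_looking(double a[N][N]) {
            for (int j = 0; j < N; j++) {
                for (int k = 0; k < j; k++)         /* apply deferred updates */
                    for (int i = k + 1; i < N; i++)
                        a[i][j] -= a[i][k] * a[k][j];
                for (int i = j + 1; i < N; i++)     /* then factor column j */
                    a[i][j] /= a[j][j];
            }
        }

        int main(void) {
            double a[N][N] = {{4,3,2,1},{3,4,3,2},{2,3,4,3},{1,2,3,4}};
            double b[N][N] = {{4,3,2,1},{3,4,3,2},{2,3,4,3},{1,2,3,4}};
            lu_right_looking(a);
            lu_left_looking(b);
            printf("same factors? L[3][0]: %g vs %g\n", a[3][0], b[3][0]);
            return 0;
        }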

  34. High-Performance Computing Data Center Metering Protocol | Department of Energy

    Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)

    Guide details the methods for measurement in High-Performance Computing (HPC) data center facilities and documents system strategies that have been used in Department of Energy data centers to increase data center energy efficiency. Download the guide. (1.34 MB) More Documents & Publications: Liquid Cooling v. Air Cooling Evaluation in the Maui High-Performance

  35. High-performance computer system installed at Los Alamos National Laboratory

    U.S. Department of Energy (DOE) - all webpages (Extended Search)

    A new high-performance computer system, called Wolf, will be used for unclassified research. September 2, 2014. The Wolf computer system modernizes

  36. NREL: Energy Systems Integration Facility - High-Performance Computing and Analytics

    U.S. Department of Energy (DOE) - all webpages (Extended Search)

    High-performance computing and analytic capabilities at the Energy Systems Integration Facility enable study and simulation of material properties, processes, and fully integrated systems that would otherwise be too expensive, too dangerous, or even impossible to study by direct experimentation. With state-of-the-art computational modeling and predictive simulation capabilities, the Energy Systems Integration Facility's high-performance

  37. Energy Efficiency Opportunities in Federal High Performance Computing Data Centers | Department of Energy

    Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)

    Case study describes an outline of energy efficiency opportunities in federal high-performance computing data centers. Download the case study. (1.05 MB) More Documents & Publications: Case Study: Opportunities to Improve Energy Efficiency in Three Federal Data Centers; Case Study: Innovative Energy

  38. High-Performance Computing for Advanced Smart Grid Applications...

    Office of Scientific and Technical Information (OSTI)

    The power grid is becoming far more complex as a result of the grid evolution meeting an information ...

  39. NNSA Awards Contract for High-Performance Computers | National...

    National Nuclear Security Administration (NNSA)

    NNSA Awards Contract for High-Performance Computers, October 02, 2007. Contract Highlights Efforts to Integrate Nuclear Weapons Complex. WASHINGTON, D.C. -- The Department of Energy's ...

  40. High Performance Computing, Richard F. Barrett

    U.S. Department of Energy (DOE) - all webpages (Extended Search)

    ... particle push vs. particle Just the communication (halo exchange) per-core compute load ... (Green Flash) 8, Seismic Imaging (Green Wave) 13, and an automated co-tuning process ...

  41. Report from the Next Generation High Performance Computing Task Force

    Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)

    Secretary of Energy Advisory Board, Report of the Task Force on Next Generation High Performance Computing, August 18, 2014, U.S. Department of Energy. Final Version for Approval. Contents: Charge to the SEAB High Performance Computing Task Force (p. 4); Executive Summary (p. 4); Key Findings

  42. DOE ASSESSMENT: SEAB Recommendations Related to High Performance Computing

    Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)

    DOE ASSESSMENT: SEAB Recommendations Related to High Performance Computing. 1. Introduction. The Department of Energy (DOE) is planning to develop and deliver capable exascale computing systems by 2023-24. These systems are expected to have a one-hundred- to one-thousand-fold increase in sustained performance over today's computing capabilities, capabilities critical to enabling the next-generation computing for national security, science, engineering, and large-scale data analytics needed to

  43. High Performance Computational Biology: A Distributed Computing Perspective (2010 JGI/ANL HPC Workshop)

    ScienceCinema

    Konerding, David [Google, Inc.]

    2016-07-12

    David Konerding from Google, Inc. gives a presentation on "High Performance Computational Biology: A Distributed Computing Perspective" at the JGI/Argonne HPC Workshop on January 26, 2010.

  44. Simulation and High-Performance Computing | Department of Energy

    Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)

    Simulation and High-Performance Computing, October 29, 2010. Former Under Secretary Koonin, Director of NYU's Center for Urban Science & Progress and Former Under Secretary for Science. What are the key facts? China's Tianhe-1A machine is now the world's most powerful computer, 40% faster than the fastest American machine located at Oak Ridge National Laboratory. Of the top 500 supercomputers in the

  45. High-Performance Computing at Los

    U.S. Department of Energy (DOE) - all webpages (Extended Search)

  46. High-performance computing of electron microstructures

    SciTech Connect

    Bishop, A. [Los Alamos National Lab., NM (United States)]; Birnir, B.; Galdrikian, B.; Wang, L. [Univ. of California, Santa Barbara, CA (United States)]

    1998-12-01

    This is the final report of a three-year, Laboratory Directed Research and Development (LDRD) project at the Los Alamos National Laboratory (LANL). The project was a collaboration between the Quantum Institute at the University of California-Santa Barbara (UCSB) and the Condensed Matter and Statistical Physics Group at LANL. The project objective, which was successfully accomplished, was to model quantum properties of semiconductor nanostructures that were fabricated and measured at UCSB using dedicated molecular-beam epitaxy and free-electron laser facilities. A nonperturbative dynamic quantum theory was developed for systems driven by time-periodic external fields. For such systems, dynamic energy spectra of electrons and photons and their corresponding wave functions were obtained. The results are in good agreement with experimental investigations. The algorithms developed are ideally suited for massively parallel computing facilities and provide a fundamental advance in the ability to predict quantum-well properties and guide their engineering. This is a definite step forward in the development of nonlinear optical devices.
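
    For context, a standard textbook fact (not the specific UCSB/LANL formulation): for a Hamiltonian with time-periodic driving, H(t + T) = H(t), a nonperturbative treatment is conventionally organized around Floquet states, whose quasienergies are the kind of "dynamic energy spectra" the abstract refers to:

        \psi_\alpha(t) = e^{-i\varepsilon_\alpha t/\hbar}\,\phi_\alpha(t),
        \qquad \phi_\alpha(t + T) = \phi_\alpha(t),
        \qquad \bigl[ H(t) - i\hbar\,\partial_t \bigr]\,\phi_\alpha(t) = \varepsilon_\alpha\,\phi_\alpha(t).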

  47. Intro - High Performance Computing for 2015 HPC Annual Report

    SciTech Connect

    Klitsner, Tom

    2015-10-01

    The recent Executive Order creating the National Strategic Computing Initiative (NSCI) recognizes the value of high performance computing for economic competitiveness and scientific discovery and commits to accelerate delivery of exascale computing. The HPC programs at Sandia, the NNSA ASC program and Sandia's Institutional HPC Program, are focused on ensuring that Sandia has the resources necessary to deliver computation in the national interest.

  48. High-Performance Computing and Visualization | Energy Systems Integration | NREL

    U.S. Department of Energy (DOE) - all webpages (Extended Search)

    High-performance computing (HPC) and visualization at NREL propel technology innovation as a research tool by which scientists and engineers find new ways to tackle our nation's energy challenges, challenges that cannot be addressed through traditional experimentation alone. These research efforts will save time and money and significantly improve the likelihood of breakthroughs

  49. Continuous Monitoring and Cyber Security for High Performance Computing

    Office of Scientific and Technical Information (OSTI)

    Conference: Continuous Monitoring and Cyber Security for High Performance Computing. Authors: Malin, Alex B.; Van Heule, Graham K. (Los Alamos National Laboratory). Publication Date: 2013-08-02. OSTI Identifier: 1089452. Report Number(s): LA-UR-13-21921. DOE Contract Number: AC52-06NA25396. Resource Type: Conference

  50. A Component Architecture for High-Performance Scientific Computing

    SciTech Connect

    Bernholdt, D E; Allan, B A; Armstrong, R; Bertrand, F; Chiu, K; Dahlgren, T L; Damevski, K; Elwasif, W R; Epperly, T W; Govindaraju, M; Katz, D S; Kohl, J A; Krishnan, M; Kumfert, G; Larson, J W; Lefantzi, S; Lewis, M J; Malony, A D; McInnes, L C; Nieplocha, J; Norris, B; Parker, S G; Ray, J; Shende, S; Windus, T L; Zhou, S

    2004-12-14

    The Common Component Architecture (CCA) provides a means for software developers to manage the complexity of large-scale scientific simulations and to move toward a plug-and-play environment for high-performance computing. In the scientific computing context, component models also promote collaboration using independently developed software, thereby allowing particular individuals or groups to focus on the aspects of greatest interest to them. The CCA supports parallel and distributed computing as well as local high-performance connections between components in a language-independent manner. The design places minimal requirements on components and thus facilitates the integration of existing code into the CCA environment. The CCA model imposes minimal overhead to minimize the impact on application performance. The focus on high performance distinguishes the CCA from most other component models. The CCA is being applied within an increasing range of disciplines, including combustion research, global climate simulation, and computational chemistry.
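
    A minimal sketch of the plug-and-play idea described here, using plain C function-pointer structs as stand-ins for "ports" so a framework can wire together independently developed components; the names and wiring below are invented for illustration and are not the actual CCA interfaces.

        #include <stdio.h>

        typedef struct {                 /* a "provides" port: an integrator */
            double (*integrate)(double (*f)(double), double a, double b);
        } IntegratorPort;

        static double midpoint_rule(double (*f)(double), double a, double b) {
            double sum = 0.0; int n = 1000;
            for (int i = 0; i < n; i++)
                sum += f(a + (i + 0.5) * (b - a) / n);
            return sum * (b - a) / n;
        }

        /* component A provides the port */
        static IntegratorPort midpoint_component = { midpoint_rule };

        static double square(double x) { return x * x; }

        /* component B sees only the port type, never the implementation */
        static void driver(const IntegratorPort *port) {
            printf("integral of x^2 on [0,1] = %f\n",
                   port->integrate(square, 0.0, 1.0));
        }

        int main(void) {                 /* the "framework" wires them up */
            driver(&midpoint_component);
            return 0;
        }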

  51. A Component Architecture for High-Performance Scientific Computing

    SciTech Connect

    Bernholdt, David E; Allan, Benjamin A; Armstrong, Robert C; Bertrand, Felipe; Chiu, Kenneth; Dahlgren, Tamara L; Damevski, Kostadin; Elwasif, Wael R; Epperly, Thomas G; Govindaraju, Madhusudhan; Katz, Daniel S; Kohl, James A; Krishnan, Manoj Kumar; Kumfert, Gary K; Larson, J Walter; Lefantzi, Sophia; Lewis, Michael J; Malony, Allen D; McInnes, Lois C; Nieplocha, Jarek; Norris, Boyana; Parker, Steven G; Ray, Jaideep; Shende, Sameer; Windus, Theresa L; Zhou, Shujia

    2006-07-03

    The Common Component Architecture (CCA) provides a means for software developers to manage the complexity of large-scale scientific simulations and to move toward a plug-and-play environment for high-performance computing. In the scientific computing context, component models also promote collaboration using independently developed software, thereby allowing particular individuals or groups to focus on the aspects of greatest interest to them. The CCA supports parallel and distributed computing as well as local high-performance connections between components in a language-independent manner. The design places minimal requirements on components and thus facilitates the integration of existing code into the CCA environment. The CCA model imposes minimal overhead to minimize the impact on application performance. The focus on high performance distinguishes the CCA from most other component models. The CCA is being applied within an increasing range of disciplines, including combustion research, global climate simulation, and computational chemistry.

  52. Software Systems for High-performance Quantum Computing

    SciTech Connect

    Humble, Travis S; Britt, Keith A

    2016-01-01

    Quantum computing promises new opportunities for solving hard computational problems, but harnessing this novelty requires breakthrough concepts in the design, operation, and application of computing systems. We define some of the challenges facing the development of quantum computing systems as well as software-based approaches that can be used to overcome these challenges. Following a brief overview of the state of the art, we present models for quantum programming and execution, the development of architectures for hybrid high-performance computing systems, and the realization of software stacks for quantum networking. This leads to a discussion of the role that conventional computing plays in the quantum paradigm and how some of the current challenges for exascale computing overlap with those facing quantum computing.

  53. LANL installs high-performance computer system | National Nuclear Security Administration (NNSA)

    National Nuclear Security Administration (NNSA)

    LANL installs high-performance computer system, Friday, June 20, 2014. Los Alamos National Laboratory recently installed a new high-performance computer system, called Wolf, which will be used for unclassified research. Wolf will help modernize mid-tier resources available to the lab and can be used to advance many fields of science. Wolf, manufactured by Cray Inc., has 616 compute nodes, each with two 8-core 2.6 GHz Intel "Sandybridge" processors,

  54. High-Performance Computing for Advanced Smart Grid Applications

    SciTech Connect

    Huang, Zhenyu; Chen, Yousu

    2012-07-06

    The power grid is becoming far more complex as a result of the grid evolution meeting an information revolution. Due to the penetration of smart grid technologies, the grid is evolving at an unprecedented speed and the information infrastructure is fundamentally improved with a large number of smart meters and sensors that produce several orders of magnitude larger amounts of data. How to pull data in, perform analysis, and put information out in a real-time manner is a fundamental challenge in smart grid operation and planning. The future power grid requires high performance computing to be one of the foundational technologies in developing the algorithms and tools for the significantly increased complexity. New techniques and computational capabilities are required to meet the demands for higher reliability and better asset utilization, including advanced algorithms and computing hardware for large-scale modeling, simulation, and analysis. This chapter summarizes the computational challenges in smart grid and the need for high performance computing, and presents examples of how high performance computing might be used for future smart grid operation and planning.

  55. High performance computing and communications: FY 1997 implementation plan

    SciTech Connect

    1996-12-01

    The High Performance Computing and Communications (HPCC) Program was formally authorized by passage, with bipartisan support, of the High-Performance Computing Act of 1991, signed on December 9, 1991. The original Program, in which eight Federal agencies participated, has now grown to twelve agencies. This Plan provides a detailed description of the agencies' FY 1996 HPCC accomplishments and FY 1997 HPCC plans. Section 3 of this Plan provides an overview of the HPCC Program. Section 4 contains more detailed definitions of the Program Component Areas, with an emphasis on the overall directions and milestones planned for each PCA. Appendix A provides a detailed look at HPCC Program activities within each agency.

  56. The role of interpreters in high performance computing

    SciTech Connect

    Naumann, Axel; Canal, Philippe (Fermilab)

    2008-01-01

    Compiled code is fast, interpreted code is slow. There is not much we can do about it, and it's the reason why the use of interpreters in high performance computing is usually restricted to job submission. We show where interpreters make sense even in the context of analysis code, and what aspects have to be taken into account to make this combination a success.

  57. High performance computing and communications: FY 1996 implementation plan

    SciTech Connect

    1995-05-16

    The High Performance Computing and Communications (HPCC) Program was formally authorized by passage of the High Performance Computing Act of 1991, signed on December 9, 1991. Twelve federal agencies, in collaboration with scientists and managers from US industry, universities, and research laboratories, have developed the Program to meet the challenges of advancing computing and associated communications technologies and practices. This plan provides a detailed description of the agencies' HPCC implementation plans for FY 1995 and FY 1996. This Implementation Plan contains three additional sections. Section 3 provides an overview of the HPCC Program definition and organization. Section 4 contains a breakdown of the five major components of the HPCC Program, with an emphasis on the overall directions and milestones planned for each one. Section 5 provides a detailed look at HPCC Program activities within each agency.

  58. A Component Architecture for High-Performance Computing

    SciTech Connect

    Bernholdt, D E; Elwasif, W R; Kohl, J A; Epperly, T G W

    2003-01-21

    The Common Component Architecture (CCA) provides a means for developers to manage the complexity of large-scale scientific software systems and to move toward a "plug and play" environment for high-performance computing. The CCA model allows for a direct connection between components within the same process to maintain performance on inter-component calls. It is neutral with respect to parallelism, allowing components to use whatever means they desire to communicate within their parallel "cohort." We will discuss in detail the importance of performance in the design of the CCA and will analyze the performance costs associated with features of the CCA.

  59. A directory service for configuring high-performance distributed computations

    SciTech Connect

    Fitzgerald, S.; Kesselman, C.; Foster, I.

    1997-08-01

    High-performance execution in distributed computing environments often requires careful selection and configuration not only of computers, networks, and other resources but also of the protocols and algorithms used by applications. Selection and configuration in turn require access to accurate, up-to-date information on the structure and state of available resources. Unfortunately, no standard mechanism exists for organizing or accessing such information. Consequently, different tools and applications adopt ad hoc mechanisms, or they compromise their portability and performance by using default configurations. We propose a Metacomputing Directory Service that provides efficient and scalable access to diverse, dynamic, and distributed information about resource structure and state. We define an extensible data model to represent required information and present a scalable, high-performance, distributed implementation. The data representation and application programming interface are adopted from the Lightweight Directory Access Protocol; the data model and implementation are new. We use the Globus distributed computing toolkit to illustrate how this directory service enables the development of more flexible and efficient distributed computing services and applications.
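
    A toy sketch of the LDAP-flavored data model the abstract describes: entries are just attribute/value lists, and resource selection is a filter match over them. Entry and attribute names below are invented for illustration; this is not the Metacomputing Directory Service API.

        #include <stdio.h>
        #include <string.h>

        typedef struct { const char *attr, *value; } AttrVal;
        typedef struct { const char *dn; AttrVal av[4]; int n; } Entry;

        static const char *lookup(const Entry *e, const char *attr) {
            for (int i = 0; i < e->n; i++)
                if (strcmp(e->av[i].attr, attr) == 0) return e->av[i].value;
            return NULL;
        }

        int main(void) {
            Entry hosts[] = {   /* directory entries: structure and state */
                { "cn=nodeA", { {"type","compute"}, {"cpus","64"},  {"state","up"} },   3 },
                { "cn=nodeB", { {"type","storage"}, {"cpus","8"},   {"state","up"} },   3 },
                { "cn=nodeC", { {"type","compute"}, {"cpus","128"}, {"state","down"} }, 3 },
            };
            /* LDAP-style filter: (&(type=compute)(state=up)) */
            for (int i = 0; i < 3; i++) {
                const char *t = lookup(&hosts[i], "type");
                const char *s = lookup(&hosts[i], "state");
                if (t && s && !strcmp(t, "compute") && !strcmp(s, "up"))
                    printf("match: %s (%s cpus)\n", hosts[i].dn,
                           lookup(&hosts[i], "cpus"));
            }
            return 0;
        }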

  60. SC15 High Performance Computing (HPC) Transforms Batteries - Joint Center for Energy Storage Research

    U.S. Department of Energy (DOE) - all webpages (Extended Search)

    September 21, 2015, Videos. SC15 High Performance Computing (HPC) Transforms Batteries. A new breakthrough battery, one that has significantly higher energy, lasts longer, and is cheaper and safer, will likely be impossible without a new material discovery. Kristin Persson and other JCESR scientists at Lawrence Berkeley National Laboratory are taking some of the guesswork out of the discovery process with the Electrolyte Genome Project. Electrolyte Genome

  61. 100 supercomputers later, Los Alamos high-performance computing still supports national security mission

    U.S. Department of Energy (DOE) - all webpages (Extended Search)

    Los Alamos National Laboratory has deployed 100 supercomputers in the last 60 years. January 1, 2015. 1952 MANIAC-I supercomputer. Contact: Linda Anderman. From the 1952

  62. Transforming Power Grid Operations via High Performance Computing

    SciTech Connect

    Huang, Zhenyu; Nieplocha, Jarek

    2008-07-31

    Past power grid blackout events revealed the inadequacy of grid operations in responding to adverse situations, partially due to low computational efficiency in grid operation functions. High performance computing (HPC) provides a promising solution to this problem. HPC applications in power grid computation also become necessary to take advantage of parallel computing platforms as the computer industry undergoes a significant change from the traditional single-processor environment to an era of multi-processor computing platforms. HPC applications to power grid operations are multi-fold. HPC can improve today's grid operation functions like state estimation and contingency analysis and reduce the solution time from minutes to seconds, comparable to SCADA measurement cycles. HPC also enables the integration of dynamic analysis into real-time grid operations. Dynamic state estimation, look-ahead dynamic simulation, and real-time dynamic contingency analysis can be implemented and would be three key dynamic functions in future control centers. HPC applications call for better decision support tools, which also need HPC support to handle large volumes of data and large numbers of cases. Given the complexity of the grid and the sheer number of possible configurations, HPC is considered to be an indispensable element in next-generation control centers.
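
    Contingency analysis is close to embarrassingly parallel, which is one reason HPC can cut its solution time from minutes to seconds; below is a minimal, hedged MPI sketch of the task distribution only, where evaluate_case() is a placeholder rather than a real power-flow solver.

        /* Sketch: cyclic distribution of contingency cases over MPI ranks.
           Build with: mpicc -o ca ca.c */
        #include <mpi.h>
        #include <stdio.h>

        #define NCASES 1000

        static double evaluate_case(int c) {   /* stand-in for a power-flow run */
            return (double)c * 0.001;          /* pretend severity metric */
        }

        int main(int argc, char **argv) {
            int rank, size;
            MPI_Init(&argc, &argv);
            MPI_Comm_rank(MPI_COMM_WORLD, &rank);
            MPI_Comm_size(MPI_COMM_WORLD, &size);

            double local_worst = 0.0, worst;
            for (int c = rank; c < NCASES; c += size) {  /* my share of cases */
                double sev = evaluate_case(c);
                if (sev > local_worst) local_worst = sev;
            }
            MPI_Reduce(&local_worst, &worst, 1, MPI_DOUBLE, MPI_MAX,
                       0, MPI_COMM_WORLD);
            if (rank == 0) printf("worst severity: %g\n", worst);
            MPI_Finalize();
            return 0;
        }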

  63. High Performance Computing with Harness over InfiniBand

    SciTech Connect

    Valentini, Alessandro; Di Biagio, Christian; Batino, Fabrizio; Pennella, Guido; Palma, Fabrizio; Engelmann, Christian

    2009-01-01

    Harness is an adaptable and plug-in-based middleware framework able to support distributed parallel computing. Currently, it is based on the Ethernet protocol, which cannot guarantee high-performance throughput or real-time (deterministic) performance. In recent years, both research and industry have developed new network architectures (InfiniBand, Myrinet, iWARP, etc.) to overcome those limits. This paper concerns the integration between Harness and InfiniBand, focusing on two solutions: IP over InfiniBand (IPoIB) and Socket Direct Protocol (SDP) technology. They allow the Harness middleware to take advantage of the enhanced features provided by the InfiniBand Architecture.

  64. High performance computing and communications: FY 1995 implementation plan

    SciTech Connect

    1994-04-01

    The High Performance Computing and Communications (HPCC) Program was formally established following passage of the High Performance Computing Act of 1991 signed on December 9, 1991. Ten federal agencies in collaboration with scientists and managers from US industry, universities, and laboratories have developed the HPCC Program to meet the challenges of advancing computing and associated communications technologies and practices. This plan provides a detailed description of the agencies' HPCC implementation plans for FY 1994 and FY 1995. This Implementation Plan contains three additional sections. Section 3 provides an overview of the HPCC Program definition and organization. Section 4 contains a breakdown of the five major components of the HPCC Program, with an emphasis on the overall directions and milestones planned for each one. Section 5 provides a detailed look at HPCC Program activities within each agency. Although the Department of Education is an official HPCC agency, its current funding and reporting of crosscut activities goes through the Committee on Education and Health Resources, not the HPCC Program. For this reason the Implementation Plan covers nine HPCC agencies.

  65. High Performance Computing - Power Application Programming Interface Specification.

    SciTech Connect

    Laros, James H.,; Kelly, Suzanne M.; Pedretti, Kevin; Grant, Ryan; Olivier, Stephen Lecler; Levenhagen, Michael J.; DeBonis, David

    2014-08-01

    Measuring and controlling the power and energy consumption of high performance computing systems by various components in the software stack is an active research area [13, 3, 5, 10, 4, 21, 19, 16, 7, 17, 20, 18, 11, 1, 6, 14, 12]. Implementations in lower level software layers are beginning to emerge in some production systems, which is very welcome. To be most effective, a portable interface to measurement and control features would significantly facilitate participation by all levels of the software stack. We present a proposal for a standard power Application Programming Interface (API) that endeavors to cover the entire software space, from generic hardware interfaces to the input from the computer facility manager.
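
    As a hedged sketch of what a portable measurement-and-control interface might look like, the snippet below invents a tiny API and one use case; these names are hypothetical stand-ins and are not the interfaces defined by the actual Power API specification.

        #include <stdio.h>

        typedef int pwr_obj_t;                    /* a node, socket, core, ... */

        /* hypothetical calls; stubbed so the sketch runs standalone */
        static double pwr_read_power_watts(pwr_obj_t obj) {
            (void)obj; return 312.0;              /* pretend measurement */
        }
        static int pwr_set_power_cap_watts(pwr_obj_t obj, double cap) {
            (void)obj; (void)cap; return 0;       /* pretend control action */
        }

        /* a runtime or facility layer might steer power like this */
        static void throttle_if_over_budget(pwr_obj_t node, double budget_w) {
            double p = pwr_read_power_watts(node);
            if (p > budget_w) {
                pwr_set_power_cap_watts(node, budget_w);
                printf("node %d capped at %.0f W (was %.0f W)\n",
                       node, budget_w, p);
            }
        }

        int main(void) {
            throttle_if_over_budget(7, 250.0);    /* hypothetical node/budget */
            return 0;
        }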

  66. Configurable Virtualized System Environments for High Performance Computing

    SciTech Connect

    Engelmann, Christian; Scott, Stephen L; Ong, Hong Hoe; Vallee, Geoffroy R; Naughton, III, Thomas J

    2007-01-01

    Existing challenges for current terascale high performance computing (HPC) systems are increasingly hampering the development and deployment efforts of system software and scientific applications for next-generation petascale systems. The expected rapid system upgrade interval toward petascale scientific computing demands an incremental strategy for the development and deployment of legacy and new large-scale scientific applications that avoids excessive porting. Furthermore, system software developers as well as scientific application developers require access to large-scale testbed environments in order to test individual solutions at scale. This paper proposes to address these issues at the system software level through the development of a virtualized system environment (VSE) for scientific computing. The proposed VSE approach enables "plug-and-play" supercomputing through desktop-to-cluster-to-petaflop computer system-level virtualization based on recent advances in hypervisor virtualization technologies. This paper describes the VSE system architecture in detail, discusses needed tools for VSE system management and configuration, and presents respective VSE use case scenarios.

  67. In the OSTI Collections: High-Performance Computing | OSTI, US...

    Office of Scientific and Technical Information (OSTI)

    ... approach to exascale-computing resilience, but choosing one approach now would ... opportunities for low-power, high-resilience technology, aiming for an early ...

  68. High-performance Computing Applied to Semantic Databases

    SciTech Connect

    Goodman, Eric L.; Jimenez, Edward; Mizell, David W.; al-Saffar, Sinan; Adolf, Robert D.; Haglin, David J.

    2011-06-02

    To-date, the application of high-performance computing resources to Semantic Web data has largely focused on commodity hardware and distributed memory platforms. In this paper we make the case that more specialized hardware can offer superior scaling and close to an order of magnitude improvement in performance. In particular we examine the Cray XMT. Its key characteristics, a large, global shared-memory, and processors with a memory-latency tolerant design, offer an environment conducive to programming for the Semantic Web and have engendered results that far surpass current state of the art. We examine three fundamental pieces requisite for a fully functioning semantic database: dictionary encoding, RDFS inference, and query processing. We show scaling up to 512 processors (the largest configuration we had available), and the ability to process 20 billion triples completely in-memory.
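
    Of the three pieces the paper examines, dictionary encoding is the simplest to show: each distinct RDF term gets an integer ID, so string triples become integer triples. The toy linear-scan dictionary below is only to fix ideas and bears no resemblance to the Cray XMT implementation.

        #include <stdio.h>
        #include <string.h>

        #define MAXTERMS 100
        static const char *dict[MAXTERMS];   /* ID -> term (toy, unchecked size) */
        static int ndict = 0;

        static int encode(const char *term) {
            for (int i = 0; i < ndict; i++)
                if (strcmp(dict[i], term) == 0) return i;  /* already known */
            dict[ndict] = term;                            /* assign new ID */
            return ndict++;
        }

        int main(void) {
            const char *triples[][3] = {
                { "ex:alice", "foaf:knows", "ex:bob" },
                { "ex:bob",   "foaf:knows", "ex:carol" },
            };
            for (int t = 0; t < 2; t++)
                printf("(%d %d %d)\n", encode(triples[t][0]),
                       encode(triples[t][1]), encode(triples[t][2]));
            /* prints (0 1 2) and (2 1 3): "ex:bob" reuses ID 2 */
            return 0;
        }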

  69. High-performance computing applied to semantic databases.

    SciTech Connect

    al-Saffar, Sinan; Jimenez, Edward Steven, Jr.; Adolf, Robert; Haglin, David; Goodman, Eric L.; Mizell, David

    2010-12-01

    To-date, the application of high-performance computing resources to Semantic Web data has largely focused on commodity hardware and distributed memory platforms. In this paper we make the case that more specialized hardware can offer superior scaling and close to an order of magnitude improvement in performance. In particular we examine the Cray XMT. Its key characteristics, a large, global shared-memory, and processors with a memory-latency tolerant design, offer an environment conducive to programming for the Semantic Web and have engendered results that far surpass current state of the art. We examine three fundamental pieces requisite for a fully functioning semantic database: dictionary encoding, RDFS inference, and query processing. We show scaling up to 512 processors (the largest configuration we had available), and the ability to process 20 billion triples completely in-memory.

  70. Multicore Challenges and Benefits for High Performance Scientific Computing

    DOE PAGES [OSTI]

    Nielsen, Ida M. B.; Janssen, Curtis L.

    2008-01-01

    Until recently, performance gains in processors were achieved largely by improvements in clock speeds and instruction level parallelism. Thus, applications could obtain performance increases with relatively minor changes by upgrading to the latest generation of computing hardware. Currently, however, processor performance improvements are realized by using multicore technology and hardware support for multiple threads within each core, and taking full advantage of this technology to improve the performance of applications requires exposure of extreme levels of software parallelism. We will here discuss the architecture of parallel computers constructed from many multicore chips as well as techniques for managing the complexity of programming such computers, including the hybrid message-passing/multi-threading programming model. We will illustrate these ideas with a hybrid distributed memory matrix multiply and a quantum chemistry algorithm for energy computation using Møller–Plesset perturbation theory.
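
    The hybrid message-passing/multi-threading model mentioned here typically means MPI between nodes and threads within each multicore node. Below is a compact, hedged sketch of a hybrid distributed-memory matrix multiply (row-block distribution of A and C, B replicated by broadcast, OpenMP inside each rank); it assumes N is divisible by the number of ranks and is not the paper's implementation.

        /* Build with: mpicc -fopenmp hybrid_mm.c */
        #include <mpi.h>
        #include <stdio.h>
        #include <stdlib.h>

        #define N 512

        int main(int argc, char **argv) {
            int rank, size;
            MPI_Init(&argc, &argv);
            MPI_Comm_rank(MPI_COMM_WORLD, &rank);
            MPI_Comm_size(MPI_COMM_WORLD, &size);
            int rows = N / size;                       /* my block of rows */

            double *a = malloc(rows * N * sizeof *a);  /* my rows of A */
            double *b = malloc(N * N * sizeof *b);     /* B replicated   */
            double *c = calloc(rows * N, sizeof *c);   /* my rows of C   */
            for (int i = 0; i < rows * N; i++) a[i] = 1.0;
            if (rank == 0)
                for (int i = 0; i < N * N; i++) b[i] = 1.0;
            MPI_Bcast(b, N * N, MPI_DOUBLE, 0, MPI_COMM_WORLD);

            #pragma omp parallel for                   /* threads split my rows */
            for (int i = 0; i < rows; i++)
                for (int k = 0; k < N; k++)
                    for (int j = 0; j < N; j++)
                        c[i * N + j] += a[i * N + k] * b[k * N + j];

            if (rank == 0) printf("c[0][0] = %g (expect %d)\n", c[0], N);
            free(a); free(b); free(c);
            MPI_Finalize();
            return 0;
        }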

  71. Toward a new metric for ranking high performance computing systems...

    Office of Scientific and Technical Information (OSTI)

    as a true measure of system performance for a growing collection of important science and engineering applications. In this paper we describe a new high performance conjugate...

  72. Power/energy use cases for high performance computing.

    SciTech Connect

    Laros, James H.,; Kelly, Suzanne M; Hammond, Steven; Elmore, Ryan; Munch, Kristin

    2013-12-01

    Power and energy have been identified as a first-order challenge for future extreme-scale high performance computing (HPC) systems. In practice the breakthroughs will need to be provided by the hardware vendors. But to make the best use of the solutions in an HPC environment, it will likely require periodic tuning by facility operators and software components. This document describes the actions and interactions needed to maximize power resources. It strives to cover the entire operational space that an HPC system occupies. The descriptions are presented as formal use cases, as documented in the Unified Modeling Language Specification [1]. The document is intended to provide a common understanding to the HPC community of the necessary management and control capabilities. Assuming a common understanding can be achieved, the next step will be to develop a set of Application Programming Interfaces (APIs) that hardware vendors and software developers could use to steer power consumption.

  73. High performance parallel computers for science: New developments at the Fermilab advanced computer program

    SciTech Connect

    Nash, T.; Areti, H.; Atac, R.; Biel, J.; Cook, A.; Deppe, J.; Edel, M.; Fischler, M.; Gaines, I.; Hance, R.

    1988-08-01

    Fermilab's Advanced Computer Program (ACP) has been developing highly cost-effective, yet practical, parallel computers for high energy physics since 1984. The ACP's latest developments are proceeding in two directions. A Second Generation ACP Multiprocessor System for experiments will include $3500 RISC processors, each with performance over 15 VAX MIPS. To support such high performance, the new system allows parallel I/O, parallel interprocess communication, and parallel host processes. The ACP Multi-Array Processor has been developed for theoretical physics. Each $4000 node is a FORTRAN- or C-programmable pipelined 20 MFlops (peak), 10 MByte single-board computer. These are plugged into a 16-port crossbar switch crate which handles both inter- and intra-crate communication. The crates are connected in a hypercube. Site-oriented applications like lattice gauge theory are supported by system software called CANOPY, which makes the hardware virtually transparent to users. A 256-node, 5 GFlop system is under construction. 10 refs., 7 figs.

  74. Report of the Task Force on Next Generation High Performance Computing | Department of Energy

    Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)

    The SEAB Task Force on Next Generation High Performance Computing (TFHPC) was established by the Secretary of Energy on December 20, 2014 to review the mission and national capabilities related to next generation high performance computing. The Task Force's findings and recommendations are framed by three broad considerations including a "new"

  75. DOE Announces $3.8 Million for High Performance Computing Program | Department of Energy

    Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)

    DOE Announces $3.8 Million for High Performance Computing Program, September 1, 2016. AMO Partners Select Thirteen Projects for the High Performance Computing for Manufacturing. The Energy Department this week, in partnership with Lawrence Livermore National Laboratory (LLNL), announced $3.8 million to be allocated across 13 projects to use high-performance computing resources at the Department's national laboratories to

  76. Arthur B. (Barney) Maccabe, Computer Science Department, Center for High Performance Computing

    U.S. Department of Energy (DOE) - all webpages (Extended Search)

    "Linux never has been and never will be 'Extreme'." Arthur B. (Barney) Maccabe, Computer Science Department, Center for High Performance Computing, The University of New Mexico. Salishan, April 23, 2003. This talk was prepared on a Debian Linux box (http://www.debian.org) using OpenOffice (http://www.openoffice.org). Outline: my background: lightweight operating systems; Linux and world domination; adapting to innovative technologies; ...

  77. John Shalf Gives Talk at San Francisco High Performance Computing Meetup

    U.S. Department of Energy (DOE) - all webpages (Extended Search)

    September 17, 2014. In his role as NERSC's chief technology officer, John Shalf gave a talk on "Converging Interconnect Requirements for HPC and Warehouse Scale Computing" at the San Francisco High Performance Computing Meetup. The Sept. 17 meeting was held at GeekdomSF in downtown San Francisco. The group, which describes

  78. Fermilab | Science at Fermilab | Computing | High-performance...

    U.S. Department of Energy (DOE) - all webpages (Extended Search)

    In parallel computing, computations are divided up so that many computers can work on the same problem at once. Lattice Quantum Chromodynamics, or Lattice QCD, is one area of ...

  79. ALCF summer students gain experience with high-performance computing...

    U.S. Department of Energy (DOE) - all webpages (Extended Search)

    of computing that my textbooks couldn't keep up with," said Brown, who is majoring in computer science and computer game design. "Getting exposed to many-core machines and...

  80. Department of Energy Mathematical, Information, and Computational Sciences Division: High Performance Computing and Communications Program

    SciTech Connect

    1996-11-01

    This document is intended to serve two purposes. Its first purpose is that of a program status report of the considerable progress that the Department of Energy (DOE) has made since 1993, the time of the last such report (DOE/ER-0536, The DOE Program in HPCC), toward achieving the goals of the High Performance Computing and Communications (HPCC) Program. The second purpose is that of a summary report of the many research programs administered by the Mathematical, Information, and Computational Sciences (MICS) Division of the Office of Energy Research under the auspices of the HPCC Program and to provide, wherever relevant, easy access to pertinent information about MICS-Division activities via universal resource locators (URLs) on the World Wide Web (WWW).

  81. Department of Energy: MICS (Mathematical, Information, and Computational Sciences Division). High performance computing and communications program

    SciTech Connect

    1996-06-01

    This document is intended to serve two purposes. Its first purpose is that of a program status report of the considerable progress that the Department of Energy (DOE) has made since 1993, the time of the last such report (DOE/ER-0536, "The DOE Program in HPCC"), toward achieving the goals of the High Performance Computing and Communications (HPCC) Program. The second purpose is that of a summary report of the many research programs administered by the Mathematical, Information, and Computational Sciences (MICS) Division of the Office of Energy Research under the auspices of the HPCC Program and to provide, wherever relevant, easy access to pertinent information about MICS-Division activities via universal resource locators (URLs) on the World Wide Web (WWW). The information pointed to by the URL is updated frequently, and the interested reader is urged to access the WWW for the latest information.

  2. A secure communications infrastructure for high-performance distributed computing

    SciTech Connect

    Foster, I.; Koenig, G.; Tuecke, S.

    1997-08-01

    Applications that use high-speed networks to connect geographically distributed supercomputers, databases, and scientific instruments may operate over open networks and access valuable resources. Hence, they can require mechanisms for ensuring integrity and confidentiality of communications and for authenticating both users and resources. Security solutions developed for traditional client-server applications do not provide direct support for the program structures, programming tools, and performance requirements encountered in these applications. The authors address these requirements via a security-enhanced version of the Nexus communication library, which they use to provide secure versions of parallel libraries and languages, including the Message Passing Interface. These tools permit a fine degree of control over what, where, and when security mechanisms are applied. In particular, a single application can mix secure and nonsecure communication, allowing the programmer to make fine-grained security/performance tradeoffs. The authors present performance results that quantify the performance of their infrastructure.
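
    The fine-grained mixing of secure and nonsecure communication described here can be sketched with a toy channel abstraction. The Python below is a hypothetical stand-in (Channel, KEY), not the Nexus API: each channel independently decides whether to pay for integrity protection, here an HMAC tag, so control traffic can be protected while bulk transfers stay on the fast path.

        import hmac
        import hashlib

        KEY = b"shared-secret"  # hypothetical pre-exchanged key

        class Channel:
            """Toy message channel applying integrity protection only when asked."""
            def __init__(self, secure):
                self.secure = secure

            def send(self, payload):
                if self.secure:
                    tag = hmac.new(KEY, payload, hashlib.sha256).digest()
                    return tag + payload   # authenticated message (32-byte tag)
                return payload             # nonsecure fast path

            def recv(self, wire):
                if self.secure:
                    tag, payload = wire[:32], wire[32:]
                    expect = hmac.new(KEY, payload, hashlib.sha256).digest()
                    if not hmac.compare_digest(tag, expect):
                        raise ValueError("integrity check failed")
                    return payload
                return wire

        control = Channel(secure=True)   # pay the crypto cost where it matters
        bulk = Channel(secure=False)     # skip it on the performance-critical path
        assert control.recv(control.send(b"task=42")) == b"task=42"
        assert bulk.recv(bulk.send(b"large data block")) == b"large data block"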

  3. High Performance Computing Facility Operational Assessment, FY 2010 Oak Ridge Leadership Computing Facility

    SciTech Connect

    Bland, Arthur S Buddy; Hack, James J; Baker, Ann E; Barker, Ashley D; Boudwin, Kathlyn J.; Kendall, Ricky A; Messer, Bronson; Rogers, James H; Shipman, Galen M; White, Julia C

    2010-08-01

    Oak Ridge National Laboratory's (ORNL's) Cray XT5 supercomputer, Jaguar, kicked off the era of petascale scientific computing in 2008 with applications that sustained more than a thousand trillion floating point calculations per second - or 1 petaflop. Jaguar continues to grow even more powerful as it helps researchers broaden the boundaries of knowledge in virtually every domain of computational science, including weather and climate, nuclear energy, geosciences, combustion, bioenergy, fusion, and materials science. Their insights promise to broaden our knowledge in areas that are vitally important to the Department of Energy (DOE) and the nation as a whole, particularly energy assurance and climate change. The science of the 21st century, however, will demand further revolutions in computing, supercomputers capable of a million trillion calculations a second - 1 exaflop - and beyond. These systems will allow investigators to continue attacking global challenges through modeling and simulation and to unravel longstanding scientific questions. Creating such systems will also require new approaches to daunting challenges. High-performance systems of the future will need to be codesigned for scientific and engineering applications with best-in-class communications networks and data-management infrastructures and teams of skilled researchers able to take full advantage of these new resources. The Oak Ridge Leadership Computing Facility (OLCF) provides the nation's most powerful open resource for capability computing, with a sustainable path that will maintain and extend national leadership for DOE's Office of Science (SC). The OLCF has engaged a world-class team to support petascale science and to take a dramatic step forward, fielding new capabilities for high-end science. This report highlights the successful delivery and operation of a petascale system and shows how the OLCF fosters application development teams, developing cutting-edge tools and resources for next

  4. Towards an Abstraction-Friendly Programming Model for High Productivity and High Performance Computing

    SciTech Connect

    Liao, C; Quinlan, D; Panas, T

    2009-10-06

    General-purpose languages, such as C++, permit the construction of various high-level abstractions to hide redundant, low-level details and accelerate programming productivity. Example abstractions include functions, data structures, classes, templates and so on. However, the use of abstractions significantly impedes static code analyses and optimizations, including parallelization, applied to the abstractions' complex implementations. As a result, there is a common perception that performance is inversely proportional to the level of abstraction. On the other hand, programming large-scale, possibly heterogeneous high-performance computing systems is notoriously difficult, and programmers are less likely to abandon the help of high-level abstractions when solving real-world, complex problems. Therefore, the need for programming models balancing both programming productivity and execution performance has reached a new level of criticality. We are exploring a novel abstraction-friendly programming model in order to support high-productivity and high-performance computing. We believe that standard or domain-specific semantics associated with high-level abstractions can be exploited to aid compiler analyses and optimizations, thus helping to achieve high performance without losing high productivity. We encode representative abstractions and their useful semantics into an abstraction specification file. In the meantime, an accessible, source-to-source compiler infrastructure (the ROSE compiler) is used to facilitate recognizing high-level abstractions and utilizing their semantics for more optimization opportunities. Our initial work has shown that recognizing abstractions and knowing their semantics within a compiler can dramatically extend the applicability of existing optimizations, including automatic parallelization. Moreover, a new set of optimizations has become possible within an abstraction-friendly and semantics-aware programming model. In the future, we will

  5. DOE Science Showcase - High-Performance Computing | OSTI, US...

    Office of Scientific and Technical Information (OSTI)


  6. Webinar: High Performance Computing For Manufacturing Spring Solicitation, March 24, 2016

    Energy.gov [DOE]

    The Energy Department's Lawrence Livermore National Laboratory will be hosting an informational webinar on the High Performance Computing for Manufacturing (HPC4Mfg) spring solicitation on March...

  7. Webinar: High Performance Computing For Manufacturing Spring Solicitation, April 5, 2016

    Energy.gov [DOE]

    The Energy Department's Lawrence Livermore National Laboratory will be hosting an informational webinar on the High Performance Computing for Manufacturing (HPC4Mfg) spring solicitation on April...

  8. DOE Science Showcase - High-Performance Computing | OSTI, US...

    Office of Scientific and Technical Information (OSTI)

    Lab Breakthrough: Supercomputing Power to Accelerate Fossil Energy Research (video), Ben Dotson, DOE Office of Science, Advanced Scientific ...

  9. In the OSTI Collections: High-Performance Computing | OSTI, US...

    Office of Scientific and Technical Information (OSTI)

    "Global Simulation of Plasma Microturbulence at the Petascale & Beyond" "Petascale, Adaptive CFD" (i.e., Computational Fluid Dynamics) "Multiscale Molecular Simulations at the ...

  10. Introduction to High Performance Computers Richard Gerber NERSC User Services

    U.S. Department of Energy (DOE) - all webpages (Extended Search)

    What are the main parts of a computer? Merit Badge Requirements ... 4. Explain the following to your counselor: a. The five major parts of a computer. ... Boy Scouts of America Offer a Computers Merit Badge. What are the "5 major parts"? Answers differ by source -- eHow.com: CPU, RAM, Hard Drive, Video Card; Answers.com: CPU, Monitor, Printer, Mouse; Fluther.com: CPU, RAM, Storage, Keyboard/Mouse; Yahoo!: CPU, RAM, Power Supply, Video Card; Wikipedia: Motherboard, Power Supply, Removable Media, Secondary Storage

  11. High-performance, distributed computing software libraries and services

    Energy Science and Technology Software Center

    2002-01-24

    The Globus toolkit provides basic Grid software infrastructure (i.e. middleware), to facilitate the development of applications which securely integrate geographically separated resources, including computers, storage systems, instruments, immersive environments, etc.

  12. High Performance Computing Modeling Advances Accelerator Science for High-Energy Physics

    SciTech Connect

    Amundson, James; Macridin, Alexandru; Spentzouris, Panagiotis

    2014-07-28

    The development and optimization of particle accelerators are essential for advancing our understanding of the properties of matter, energy, space, and time. Particle accelerators are complex devices whose behavior involves many physical effects on multiple scales. Therefore, advanced computational tools utilizing high-performance computing are essential for accurately modeling them. In the past decade, the US Department of Energy's SciDAC program has produced accelerator-modeling tools that have been employed to tackle some of the most difficult accelerator science problems. The authors discuss the Synergia framework and its applications to high-intensity particle accelerator physics. Synergia is an accelerator simulation package capable of handling the entire spectrum of beam dynamics simulations. The authors present Synergia's design principles and its performance on HPC platforms.

  13. High-Performance Computing Modeling Advances Accelerator Science for High-Energy Physics

    DOE PAGES [OSTI]

    Amundson, James; Macridin, Alexandru; Spentzouris, Panagiotis

    2014-11-01

    The development and optimization of particle accelerators are essential for advancing our understanding of the properties of matter, energy, space and time. Particle accelerators are complex devices whose behavior involves many physical effects on multiple scales. Therefore, advanced computational tools utilizing high-performance computing (HPC) are essential for accurately modeling them. In the past decade, the DOE SciDAC program has produced such accelerator-modeling tools, which have been employed to tackle some of the most difficult accelerator science problems. In this article we discuss the Synergia beam-dynamics framework and its applications to high-intensity particle accelerator physics. Synergia is an accelerator simulation package capable of handling the entire spectrum of beam dynamics simulations. We present the design principles, key physical and numerical models in Synergia and its performance on HPC platforms. Finally, we present the results of Synergia applications for the Fermilab proton source upgrade, known as the Proton Improvement Plan (PIP).

  14. High Performance Computing Modeling Advances Accelerator Science for High-Energy Physics

    DOE PAGES [OSTI]

    Amundson, James; Macridin, Alexandru; Spentzouris, Panagiotis

    2014-07-28

    The development and optimization of particle accelerators are essential for advancing our understanding of the properties of matter, energy, space, and time. Particle accelerators are complex devices whose behavior involves many physical effects on multiple scales. Therefore, advanced computational tools utilizing high-performance computing are essential for accurately modeling them. In the past decade, the US Department of Energy's SciDAC program has produced accelerator-modeling tools that have been employed to tackle some of the most difficult accelerator science problems. The authors discuss the Synergia framework and its applications to high-intensity particle accelerator physics. Synergia is an accelerator simulation package capable of handling the entire spectrum of beam dynamics simulations. The authors present Synergia's design principles and its performance on HPC platforms.

  15. High-Performance Computing Modeling Advances Accelerator Science for High-Energy Physics

    SciTech Connect

    Amundson, James; Macridin, Alexandru; Spentzouris, Panagiotis

    2014-11-01

    The development and optimization of particle accelerators are essential for advancing our understanding of the properties of matter, energy, space and time. Particle accelerators are complex devices whose behavior involves many physical effects on multiple scales. Therefore, advanced computational tools utilizing high-performance computing (HPC) are essential for accurately modeling them. In the past decade, the DOE SciDAC program has produced such accelerator-modeling tools, which have been employed to tackle some of the most difficult accelerator science problems. In this article we discuss the Synergia beam-dynamics framework and its applications to high-intensity particle accelerator physics. Synergia is an accelerator simulation package capable of handling the entire spectrum of beam dynamics simulations. We present the design principles, key physical and numerical models in Synergia and its performance on HPC platforms. Finally, we present the results of Synergia applications for the Fermilab proton source upgrade, known as the Proton Improvement Plan (PIP).

  16. A comprehensive approach to decipher biological computation to achieve next generation high-performance exascale computing.

    SciTech Connect

    James, Conrad D.; Schiess, Adrian B.; Howell, Jamie; Baca, Michael J.; Partridge, L. Donald; Finnegan, Patrick Sean; Wolfley, Steven L.; Dagel, Daryl James; Spahn, Olga Blum; Harper, Jason C.; Pohl, Kenneth Roy; Mickel, Patrick R.; Lohn, Andrew; Marinella, Matthew

    2013-10-01

    The human brain (volume = 1,200 cm^3) consumes 20 W and is capable of performing > 10^16 operations/s. Current supercomputer technology has reached 10^15 operations/s, yet it requires 1,500 m^3 and 3 MW, giving the brain a 10^12 advantage in operations/s/W/cm^3. Thus, to reach exascale computation, two achievements are required: 1) improved understanding of computation in biological tissue, and 2) a paradigm shift towards neuromorphic computing where hardware circuits mimic properties of neural tissue. To address 1), we will interrogate corticostriatal networks in mouse brain tissue slices, specifically with regard to their frequency filtering capabilities as a function of input stimulus. To address 2), we will instantiate biological computing characteristics such as multi-bit storage into hardware devices with future computational and memory applications. Resistive memory devices will be modeled, designed, and fabricated in the MESA facility in consultation with our internal and external collaborators.

  17. Scalable File Systems for High Performance Computing Final Report

    SciTech Connect

    Brandt, S A

    2007-10-03

    Simulations of mode I interlaminar fracture toughness tests of a carbon-reinforced composite material (BMS 8-212) were conducted with LS-DYNA. The fracture toughness tests were performed by U.C. Berkeley. The simulations were performed to investigate the validity and practicality of employing decohesive elements to represent interlaminar bond failures that are prevalent in carbon-fiber composite structure penetration events. The simulations employed a decohesive element formulation that was verified on a simple two-element model before being employed to perform the full model simulations. Care was required during the simulations to ensure that the explicit time integration of LS-DYNA duplicated the near steady-state testing conditions. In general, this study validated the use of decohesive elements to represent the interlaminar bond failures seen in carbon-fiber composite structures, but the practicality of employing the elements to represent the bond failures seen in carbon-fiber composite structures during penetration events was not established.

  18. High performance computing and communications grand challenges program

    SciTech Connect

    Solomon, J.E.; Barr, A.; Chandy, K.M.; Goddard, W.A., III; Kesselman, C.

    1994-10-01

    The so-called protein folding problem has numerous aspects; however, it is principally concerned with the de novo prediction of three-dimensional (3D) structure from the protein primary amino acid sequence, and with the kinetics of the protein folding process. Our current project focuses on the 3D structure prediction problem, which has proved to be an elusive goal of molecular biology and biochemistry. The number of local energy minima is exponential in the number of amino acids in the protein. All current methods of 3D structure prediction attempt to alleviate this problem by imposing various constraints that effectively limit the volume of conformational space which must be searched. Our Grand Challenge project consists of two elements: (1) a hierarchical methodology for 3D protein structure prediction; and (2) development of a parallel computing environment, the Protein Folding Workbench, for carrying out a variety of protein structure prediction/modeling computations. During the first three years of this project, we are focusing on the use of two proteins selected from the Brookhaven Protein Data Bank (PDB) of known structure to provide validation of our prediction algorithms and their software implementation, both serial and parallel. Both proteins, protein L from Peptostreptococcus magnus and streptococcal protein G, are known to bind to IgG, and both have an α + β sandwich conformation. Although both proteins bind to IgG, they do so at different sites on the immunoglobulin, and it is of considerable biological interest to understand structurally why this is so. 12 refs., 1 fig.
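
    To see why the exponential number of minima forces such constraints, a Levinthal-style back-of-the-envelope estimate (a standard argument, not taken from this report) is enough:

        If each of the $N$ residues can adopt roughly $k$ local conformations,
        the conformational space contains about
        \[
          k^{N} \text{ states}, \qquad \text{e.g. } 3^{100} \approx 5 \times 10^{47}
        \]
        for a modest 100-residue protein -- far beyond exhaustive enumeration,
        which is why every practical method must restrict the search volume.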

  19. High Performance Computing Facility Operational Assessment, FY 2011 Oak Ridge Leadership Computing Facility

    SciTech Connect

    Baker, Ann E; Bland, Arthur S Buddy; Hack, James J; Barker, Ashley D; Boudwin, Kathlyn J.; Kendall, Ricky A; Messer, Bronson; Rogers, James H; Shipman, Galen M; Wells, Jack C; White, Julia C

    2011-08-01

    Oak Ridge National Laboratory's Leadership Computing Facility (OLCF) continues to deliver the most powerful resources in the U.S. for open science. At 2.33 petaflops peak performance, the Cray XT Jaguar delivered more than 1.5 billion core hours in calendar year (CY) 2010 to researchers around the world for computational simulations relevant to national and energy security; advancing the frontiers of knowledge in physical sciences and areas of biological, medical, environmental, and computer sciences; and providing world-class research facilities for the nation's science enterprise. Scientific achievements by OLCF users range from collaboration with university experimentalists to produce a working supercapacitor that uses atom-thick sheets of carbon materials to finely determining the resolution requirements for simulations of coal gasifiers and their components, thus laying the foundation for development of commercial-scale gasifiers. OLCF users are pushing the boundaries with software applications sustaining more than one petaflop of performance in the quest to illuminate the fundamental nature of electronic devices. Other teams of researchers are working to resolve predictive capabilities of climate models, to refine and validate genome sequencing, and to explore the most fundamental materials in nature - quarks and gluons - and their unique properties. Details of these scientific endeavors - not possible without access to leadership-class computing resources - are detailed in Section 4 of this report and in the INCITE in Review. Effective operations of the OLCF play a key role in the scientific missions and accomplishments of its users. This Operational Assessment Report (OAR) will delineate the policies, procedures, and innovations implemented by the OLCF to continue delivering a petaflop-scale resource for cutting-edge research. The 2010 operational assessment of the OLCF yielded recommendations that have been addressed (Reference Section 1) and where

  20. Reliable High Performance Peta- and Exa-Scale Computing

    SciTech Connect

    Bronevetsky, G

    2012-04-02

    As supercomputers become larger and more powerful, they are growing increasingly complex. This is reflected both in the exponentially increasing numbers of components in HPC systems (LLNL is currently installing the 1.6-million-core Sequoia system) as well as in the wide variety of software and hardware components that a typical system includes. At this scale it becomes infeasible to make each component sufficiently reliable to prevent regular faults somewhere in the system or to account for all possible cross-component interactions. The resulting faults and instability cause HPC applications to crash, perform sub-optimally or even produce erroneous results. As supercomputers continue to approach Exascale performance and full system reliability becomes prohibitively expensive, we will require novel techniques to bridge the gap between the lower reliability provided by hardware systems and users' unchanging need for consistent performance and reliable results. Previous research on HPC system reliability has developed various techniques for tolerating and detecting various types of faults. However, these techniques have seen very limited real applicability because of our poor understanding of how real systems are affected by complex faults such as soft fault-induced bit flips or performance degradations. Prior work on such techniques has had very limited practical utility because it has generally focused on analyzing the behavior of entire software/hardware systems both during normal operation and in the face of faults. Because such behaviors are extremely complex, such studies have only produced coarse behavioral models of limited sets of software/hardware system stacks. Since this provides little insight into the many different system stacks and applications used in practice, this work has had little real-world impact. My project addresses this problem by developing a modular methodology to analyze the behavior of applications and systems during both normal and faulty

  1. Scientific Grand Challenges: Forefront Questions in Nuclear Science and the Role of High Performance Computing

    SciTech Connect

    Khaleel, Mohammad A.

    2009-10-01

    This report is an account of the deliberations and conclusions of the workshop on "Forefront Questions in Nuclear Science and the Role of High Performance Computing" held January 26-28, 2009, co-sponsored by the U.S. Department of Energy (DOE) Office of Nuclear Physics (ONP) and the DOE Office of Advanced Scientific Computing Research (ASCR). Representatives from the national and international nuclear physics communities, as well as from the high performance computing community, participated. The purpose of this workshop was to 1) identify forefront scientific challenges in nuclear physics and then determine which, if any, of these could be aided by high performance computing at the extreme scale; 2) establish how and why new high performance computing capabilities could address issues at the frontiers of nuclear science; 3) provide nuclear physicists the opportunity to influence the development of high performance computing; and 4) provide the nuclear physics community with plans for development of future high performance computing capability by DOE ASCR.

  2. A High Performance Computing Platform for Performing High-Volume Studies With Windows-based Power Grid Tools

    SciTech Connect

    Chen, Yousu; Huang, Zhenyu

    2014-08-31

    Serial Windows-based programs are widely used in power utilities. For applications that require high-volume simulations, the single-CPU runtime can be on the order of days or weeks. The lengthy runtime, along with the availability of low-cost hardware, is leading utilities to seriously consider High Performance Computing (HPC) techniques. However, the vast majority of HPC computers are still Linux-based, and many HPC applications have been custom developed external to the core simulation engine without consideration for ease of use. This has created a technical gap for applying HPC-based tools to today's power grid studies. To fill this gap and accelerate the acceptance and adoption of HPC for power grid applications, this paper presents a prototype of a generic HPC platform for running Windows-based power grid programs in a Linux-based HPC environment. The preliminary results show that the runtime can be reduced from weeks to hours, improving work efficiency.
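
    The runtime reduction in such high-volume studies comes from farming many independent serial runs across cores or nodes. A minimal sketch of that pattern in Python, where simulate_case is a hypothetical stand-in for one serial power-grid run:

        import multiprocessing as mp

        def simulate_case(case_id):
            """Hypothetical stand-in for one serial simulation run."""
            result = sum(i * i for i in range(100_000))  # dummy workload
            return case_id, result

        if __name__ == "__main__":
            cases = range(1000)          # a high-volume study: many independent runs
            results = {}
            with mp.Pool() as pool:      # one worker per available core by default
                for case_id, result in pool.imap_unordered(simulate_case, cases):
                    results[case_id] = result
            print(len(results), "cases completed")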

  3. DOE High Performance Computing for Manufacturing Program Seeks to Fund New Proposals to Advance Energy Technologies

    Energy.gov [DOE]

    The Energy Department’s Advanced Manufacturing Office today announced up to $3 million in available funding for manufacturers to use high-performance computing resources at the Department's national laboratories to tackle major manufacturing challenges.

  4. High Performance Computing for Sequence Analysis (2010 JGI/ANL HPC Workshop)

    ScienceCinema

    Oehmen, Chris [PNNL]

    2016-07-12

    Chris Oehmen of the Pacific Northwest National Laboratory gives a presentation on "High Performance Computing for Sequence Analysis" at the JGI/Argonne HPC Workshop on January 25, 2010.

  5. Vehicle Technologies Office Merit Review 2013: Accelerating Predictive Simulation of IC Engines with High Performance Computing

    Energy.gov [DOE]

    Presentation given by Oak Ridge National Laboratory at the 2013 DOE Hydrogen and Fuel Cells Program and Vehicle Technologies Program Annual Merit Review and Peer Evaluation Meeting about simulating internal combustion engines using high performance computing.

  6. Workshop on programming languages for high performance computing (HPCWPL): final report.

    SciTech Connect

    Murphy, Richard C.

    2007-05-01

    This report summarizes the deliberations and conclusions of the Workshop on Programming Languages for High Performance Computing (HPCWPL) held at the Sandia CSRI facility in Albuquerque, NM on December 12-13, 2006.

  7. High Performance Computing for Sequence Analysis (2010 JGI/ANL HPC Workshop)

    SciTech Connect

    Oehmen, Chris [PNNL]

    2010-01-25

    Chris Oehmen of the Pacific Northwest National Laboratory gives a presentation on "High Performance Computing for Sequence Analysis" at the JGI/Argonne HPC Workshop on January 25, 2010.

  8. An evaluation of Java's I/O capabilities for high-performance computing.

    SciTech Connect

    Dickens, P. M.; Thakur, R.

    2000-11-10

    Java is quickly becoming the preferred language for writing distributed applications because of its inherent support for programming on distributed platforms. In particular, Java provides compile-time and run-time security, automatic garbage collection, inherent support for multithreading, support for persistent objects and object migration, and portability. Given these significant advantages of Java, there is a growing interest in using Java for high-performance computing applications. To be successful in the high-performance computing domain, however, Java must have the capability to efficiently handle the significant I/O requirements commonly found in high-performance computing applications. While there has been significant research in high-performance I/O using languages such as C, C++, and Fortran, there has been relatively little research into the I/O capabilities of Java. In this paper, we evaluate the I/O capabilities of Java for high-performance computing. We examine several approaches that attempt to provide high-performance I/O--many of which are not obvious at first glance--and investigate their performance in both parallel and multithreaded environments. We also provide suggestions for expanding the I/O capabilities of Java to better support the needs of high-performance computing applications.

  9. NREL Selects Partners for New High Performance Computer Data Center - News

    U.S. Department of Energy (DOE) - all webpages (Extended Search)

    NREL Selects Partners for New High Performance Computer Data Center. NREL to work with HP and Intel to create one of the world's most energy-efficient data centers. September 5, 2012. The U.S. Department of Energy's National Renewable Energy Laboratory (NREL) has selected HP and Intel to provide a new energy-efficient high performance computer (HPC) system dedicated to energy systems integration, renewable energy research, and energy efficiency technologies. The new center will

  10. High-Performance Computing at Los Alamos announces milestone for key/value

    U.S. Department of Energy (DOE) - all webpages (Extended Search)

    High-Performance Computing at Los Alamos announces milestone for key/value middleware. Billion inserts-per-second data milestone reached for supercomputing tool. May 26, 2014. Contact: Nancy Ambrosiano, Communications Office, (505) 667-0471, Email. "This milestone was achieved by a

  11. Webinar "Applying High Performance Computing to Engine Design Using

    U.S. Department of Energy (DOE) - all webpages (Extended Search)

    Supercomputers" | Argonne National Laboratory Webinar "Applying High Performance Computing to Engine Design Using Supercomputers" Share Description Video from the February 25, 2016 Convergent Science/Argonne National Laboratory webinar "Applying High Performance Computing to Engine Design using Supercomputers," featuring Janardhan Kodavasal of Argonne National Laboratory Speakers Janardhan Kodavasal, Argonne National Laboratory Duration 52:26 Topic Energy Energy

  12. Process for selecting NEAMS applications for access to Idaho National Laboratory high performance computing resources

    SciTech Connect

    Michael Pernice

    2010-09-01

    INL has agreed to provide participants in the Nuclear Energy Advanced Modeling and Simulation (NEAMS) program with access to its high performance computing (HPC) resources under sponsorship of the Enabling Computational Technologies (ECT) program element. This report documents the process used to select applications and the software stack in place at INL.

  13. An Approach to Integrate a Space-Time GIS Data Model with High Performance Computers

    SciTech Connect

    Wang, Dali; Zhao, Ziliang; Shaw, Shih-Lung

    2011-01-01

    In this paper, we describe an approach to integrating a Space-Time GIS data model on a high performance computing platform. The Space-Time GIS data model was developed in a desktop computing environment. We use the Space-Time GIS data model to generate a GIS module, which organizes a series of remote sensing data. We are in the process of porting the GIS module into an HPC environment, in which the GIS modules handle large datasets directly via a parallel file system. Although this is an ongoing project, the authors hope this effort can inspire further discussion of the integration of GIS on high performance computing platforms.

  14. High Performance Computing Facility Operational Assessment, CY 2011 Oak Ridge Leadership Computing Facility

    SciTech Connect

    Baker, Ann E; Barker, Ashley D; Bland, Arthur S Buddy; Boudwin, Kathlyn J.; Hack, James J; Kendall, Ricky A; Messer, Bronson; Rogers, James H; Shipman, Galen M; Wells, Jack C; White, Julia C; Hudson, Douglas L

    2012-02-01

    Oak Ridge National Laboratory's Leadership Computing Facility (OLCF) continues to deliver the most powerful resources in the U.S. for open science. At 2.33 petaflops peak performance, the Cray XT Jaguar delivered more than 1.4 billion core hours in calendar year (CY) 2011 to researchers around the world for computational simulations relevant to national and energy security; advancing the frontiers of knowledge in physical sciences and areas of biological, medical, environmental, and computer sciences; and providing world-class research facilities for the nation's science enterprise. Users reported more than 670 publications this year arising from their use of OLCF resources. Of these, we report the 300 in this review that are consistent with the guidance provided. Scientific achievements by OLCF users cut across all range scales from atomic to molecular to large-scale structures. At the atomic scale, researchers discovered that the anomalously long half-life of Carbon-14 can be explained by calculating, for the first time, the very complex three-body interactions between all the neutrons and protons in the nucleus. At the molecular scale, researchers combined experimental results from LBL's light source and simulations on Jaguar to discover how DNA replication continues past a damaged site so a mutation can be repaired later. Other researchers combined experimental results from ORNL's Spallation Neutron Source and simulations on Jaguar to reveal the molecular structure of ligno-cellulosic material used in bioethanol production. This year, Jaguar has been used to do billion-cell CFD calculations to develop shock wave compression turbo machinery as a means to meet DOE goals for reducing carbon sequestration costs. General Electric used Jaguar to calculate the unsteady flow through turbo machinery to learn what efficiencies the traditional steady flow assumption is hiding from designers. Even a 1% improvement in turbine design can save the nation billions of gallons of

  15. Chapter 9: Enabling Capabilities for Science and Energy | High-Performance Computing Capabilities and Allocations Supplemental Information

    Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)

    Quadrennial Technology Review 2015, Chapter 9: Enabling Capabilities for Science and Energy -- High Performance Computing Capabilities and Resource Allocations (supplemental information: user facility statistics, examples, and case studies). The Department of Energy (DOE) laboratories integrate high performance computing (HPC) capabilities into their energy, science, and national security missions.

  16. DOE Greenbook - Needs and Directions in High-Performance Computing for the Office of Science

    SciTech Connect

    Rotman, D; Harding, P

    2002-04-01

    researchers. (1) High-Performance Computing Technology; (2) Advanced Software Technology and Algorithms; (3) Energy Sciences Network; and (4) Basic Research and Human Resources. In addition to the availability from the vendor community, these components determine the implementation and direction of the development of the supercomputing resources for the OS community. In this document we will identify scientific and computational needs from across the five Office of Science organizations: High Energy and Nuclear Physics, Basic Energy Sciences, Fusion Energy Science, Biological and Environmental Research, and Advanced Scientific Computing Research. We will also delineate the current suite of NERSC computational and human resources. Finally, we will provide a set of recommendations that will guide the utilization of current and future computational resources at the DOE NERSC.

  17. High performance computing in chemistry and massively parallel computers: A simple transition?

    SciTech Connect

    Kendall, R.A.

    1993-03-01

    A review of the various problems facing any software developer targeting massively parallel processing (MPP) systems is presented. Issues specific to computational chemistry application software will also be outlined. Computational chemistry software ported to and designed for the Intel Touchstone Delta Supercomputer will be discussed. Recommendations for future directions will also be made.

  18. Acts -- A collection of high performing software tools for scientific computing

    SciTech Connect

    Drummond, L.A.; Marques, O.A.

    2002-11-01

    During the past decades there has been a continuous growth in the number of physical and societal problems that have been successfully studied and solved by means of computational modeling and simulation. Further, many new discoveries depend on high performance computer simulations to satisfy their demands for large computational resources and short response time. The Advanced CompuTational Software (ACTS) Collection brings together a number of general-purpose computational tool development projects funded and supported by the U.S. Department of Energy (DOE). These tools make it easier for scientific code developers to write high performance applications for parallel computers. They tackle a number of computational issues that are common to a large number of scientific applications, mainly implementation of numerical algorithms, and support for code development, execution and optimization. The ACTS collection promotes code portability, reusability, reduction of duplicate efforts, and tool maturity. This paper presents a brief introduction to the functionality available in ACTS. It also highlights the tools that are in demand by climate and weather modelers.

  19. Failure detection in high-performance clusters and computers using chaotic map computations

    SciTech Connect

    Rao, Nageswara S.

    2015-09-01

    A programmable media includes a processing unit capable of independent operation in a machine that is capable of executing 10^18 floating point operations per second. The processing unit is in communication with a memory element and an interconnect that couples computing nodes. The programmable media includes a logical unit configured to execute arithmetic functions, comparative functions, and/or logical functions. The processing unit is configured to detect computing component failures, memory element failures and/or interconnect failures by executing programming threads that generate one or more chaotic map trajectories. The central processing unit or graphical processing unit is configured to detect a computing component failure, memory element failure and/or an interconnect failure through an automated comparison of signal trajectories generated by the chaotic maps.
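
    The principle behind the patent -- identical chaotic iterations on healthy components stay in lockstep, while any fault-induced perturbation is rapidly amplified -- can be illustrated with a toy logistic map. This Python sketch shows the idea only, not the patented implementation:

        def logistic_trajectory(x0, steps, fault_at=None):
            """Iterate x -> r*x*(1-x); optionally inject a tiny perturbation."""
            r, x, traj = 3.9, x0, []
            for n in range(steps):
                if n == fault_at:
                    x += 1e-12  # stand-in for a soft error flipping a low-order bit
                x = r * x * (1.0 - x)
                traj.append(x)
            return traj

        healthy = logistic_trajectory(0.123456789, 100)
        faulty = logistic_trajectory(0.123456789, 100, fault_at=50)
        # Chaos amplifies the perturbation: the trajectories agree before the
        # fault, then diverge rapidly, so a simple comparison flags the failure.
        diverged = next(n for n, (a, b) in enumerate(zip(healthy, faulty))
                        if abs(a - b) > 1e-6)
        print("divergence detected at step", diverged)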

  20. Investigating methods of supporting dynamically linked executables on high performance computing platforms.

    SciTech Connect

    Kelly, Suzanne Marie; Laros, James H., III; Pedretti, Kevin Thomas Tauke; Levenhagen, Michael J.

    2009-09-01

    Shared libraries have become ubiquitous and are used to achieve great resource efficiencies on many platforms. The same properties that enable efficiencies on time-shared computers and convenience on small clusters prove to be great obstacles to scalability on large clusters and High Performance Computing platforms. In addition, lightweight operating systems such as Catamount have historically not supported the use of shared libraries specifically because they hinder scalability. In this report we will outline the methods of supporting shared libraries on High Performance Computing platforms using lightweight kernels that we investigated. The considerations necessary to evaluate utility in this area are many and sometimes conflicting. While our initial path forward has been determined based on this evaluation, we consider this effort ongoing and remain prepared to re-evaluate any technology that might provide a scalable solution. This report is an evaluation of a range of possible methods of supporting dynamically linked executables on capability-class High Performance Computing platforms. Efforts are ongoing and extensive testing at scale is necessary to evaluate performance. While performance is a critical driving factor, supporting whatever method is used in a production environment is an equally important and challenging task.

  1. Energy Department's High Performance Computing for Manufacturing Program Seeks to Fund New Industry Proposals

    Energy.gov [DOE]

    The U.S. Department of Energy (DOE) is seeking concept proposals from qualified U.S. manufacturers to participate in short-term, collaborative projects. Selectees will be given access to High Performance Computing facilities and will work with experienced DOE National Laboratories staff in addressing challenges in U.S. manufacturing.

  2. High performance computing and communications: Advancing the frontiers of information technology

    SciTech Connect

    1997-12-31

    This report, which supplements the President's Fiscal Year 1997 Budget, describes the interagency High Performance Computing and Communications (HPCC) Program. The HPCC Program will celebrate its fifth anniversary in October 1996 with an impressive array of accomplishments to its credit. Over its five-year history, the HPCC Program has focused on developing high performance computing and communications technologies that can be applied to computation-intensive applications. Major highlights for FY 1996: (1) High performance computing systems enable practical solutions to complex problems with accuracies not possible five years ago; (2) HPCC-funded research in very large scale networking techniques has been instrumental in the evolution of the Internet, which continues exponential growth in size, speed, and availability of information; (3) The combination of hardware capability measured in gigaflop/s, networking technology measured in gigabit/s, and new computational science techniques for modeling phenomena has demonstrated that very large scale accurate scientific calculations can be executed across heterogeneous parallel processing systems located thousands of miles apart; (4) Federal investments in HPCC software R and D support researchers who pioneered the development of parallel languages and compilers, high performance mathematical, engineering, and scientific libraries, and software tools--technologies that allow scientists to use powerful parallel systems to focus on Federal agency mission applications; and (5) HPCC support for virtual environments has enabled the development of immersive technologies, where researchers can explore and manipulate multi-dimensional scientific and engineering problems. Educational programs fostered by the HPCC Program have brought into classrooms new science and engineering curricula designed to teach computational science. This document contains a small sample of the significant HPCC Program accomplishments in FY 1996.

  3. An Overview of High Performance Computing and Challenges for the Future

    ScienceCinema

    Google Tech Talks

    2016-07-12

    In this talk we examine how high performance computing has changed over the last 10 years and look toward the future in terms of trends. These changes have had and will continue to have a major impact on our software. A new generation of software libraries and algorithms are needed for the effective and reliable use of (wide area) dynamic, distributed and parallel environments. Some of the software and algorithm challenges have already been encountered, such as management of communication and memory hierarchies through a combination of compile-time and run-time techniques, but the increased scale of computation, depth of memory hierarchies, range of latencies, and increased run-time environment variability will make these problems much harder. We will focus on the redesign of software to fit multicore architectures. Speaker: Jack Dongarra, University of Tennessee, Oak Ridge National Laboratory, University of Manchester. Jack Dongarra received a Bachelor of Science in Mathematics from Chicago State University in 1972 and a Master of Science in Computer Science from the Illinois Institute of Technology in 1973. He received his Ph.D. in Applied Mathematics from the University of New Mexico in 1980. He worked at the Argonne National Laboratory until 1989, becoming a senior scientist. He now holds an appointment as University Distinguished Professor of Computer Science in the Electrical Engineering and Computer Science Department at the University of Tennessee, has the position of a Distinguished Research Staff member in the Computer Science and Mathematics Division at Oak Ridge National Laboratory (ORNL), Turing Fellow in the Computer Science and Mathematics Schools at the University of Manchester, and an Adjunct Professor in the Computer Science Department at Rice University. He specializes in numerical algorithms in linear algebra, parallel computing, the use of advanced-computer architectures, programming methodology, and tools for parallel computers. His research

  4. Cloud object store for archive storage of high performance computing data using decoupling middleware

    DOEpatents

    Bent, John M.; Faibish, Sorin; Grider, Gary

    2015-06-30

    Cloud object storage is enabled for archived data, such as checkpoints and results, of high performance computing applications using a middleware process. A plurality of archived files, such as checkpoint files and results, generated by a plurality of processes in a parallel computing system are stored by obtaining the plurality of archived files from the parallel computing system; converting the plurality of archived files to objects using a log structured file system middleware process; and providing the objects for storage in a cloud object storage system. The plurality of processes may run, for example, on a plurality of compute nodes. The log structured file system middleware process may be embodied, for example, as a Parallel Log-Structured File System (PLFS). The log structured file system middleware process optionally executes on a burst buffer node.
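
    The claimed pipeline -- gather per-process checkpoint files, convert them to keyed objects, hand the objects to a cloud store -- might be schematized as below. The file layout, key scheme, and store interface are hypothetical stand-ins; PLFS itself is a C middleware layer, not this Python sketch.

        import hashlib
        import pathlib

        def files_to_objects(checkpoint_dir):
            """Convert per-process checkpoint files into (key, bytes) objects."""
            objects = []
            for path in sorted(pathlib.Path(checkpoint_dir).glob("rank*.ckpt")):
                data = path.read_bytes()
                key = "ckpt/%s/%s" % (path.stem, hashlib.sha1(data).hexdigest())
                objects.append((key, data))  # one log-structured record per file
            return objects

        def put_to_cloud(objects, store):
            """Stand-in for the upload step (an S3-style PUT in a real system)."""
            for key, data in objects:
                store[key] = data

        store = {}  # in-memory stand-in for the cloud object store
        put_to_cloud(files_to_objects("/tmp/ckpts"), store)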

  5. Cloud object store for checkpoints of high performance computing applications using decoupling middleware

    DOEpatents

    Bent, John M.; Faibish, Sorin; Grider, Gary

    2016-04-19

    Cloud object storage is enabled for checkpoints of high performance computing applications using a middleware process. A plurality of files, such as checkpoint files, generated by a plurality of processes in a parallel computing system are stored by obtaining said plurality of files from said parallel computing system; converting said plurality of files to objects using a log structured file system middleware process; and providing said objects for storage in a cloud object storage system. The plurality of processes may run, for example, on a plurality of compute nodes. The log structured file system middleware process may be embodied, for example, as a Parallel Log-Structured File System (PLFS). The log structured file system middleware process optionally executes on a burst buffer node.

  6. Integrated State Estimation and Contingency Analysis Software Implementation using High Performance Computing Techniques

    SciTech Connect

    Chen, Yousu; Glaesemann, Kurt R.; Rice, Mark J.; Huang, Zhenyu

    2015-12-31

    Power system simulation tools are traditionally developed in sequential mode, with codes optimized for single-core computing only. However, the increasing complexity of power grid models requires more intensive computation, and traditional simulation tools will soon be unable to meet grid operation requirements. Power system simulation tools therefore need to evolve to provide faster and better results for grid operations. This paper presents an integrated state estimation and contingency analysis software implementation using high performance computing techniques. The software is able to solve large state estimation problems within one second and achieves a near-linear speedup of 9,800 with 10,000 cores for the contingency analysis application. A performance evaluation is presented to show its effectiveness.
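
    For reference, "near-linear" can be read through the textbook definitions of speedup and parallel efficiency (the formulas below are standard definitions, not taken from the paper):

        \[
          S(p) = \frac{T_1}{T_p}, \qquad
          E(p) = \frac{S(p)}{p}, \qquad
          E(10\,000) = \frac{9\,800}{10\,000} = 98\%.
        \]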

  7. Opening Remarks from the Joint Genome Institute and Argonne Lab High Performance Computing Workshop (2010 JGI/ANL HPC Workshop)

    ScienceCinema

    Rubin, Eddy

    2016-07-12

    DOE JGI Director Eddy Rubin gives opening remarks at the JGI/Argonne High Performance Computing (HPC) Workshop on January 25, 2010.

  8. Opening Remarks from the Joint Genome Institute and Argonne Lab High Performance Computing Workshop (2010 JGI/ANL HPC Workshop)

    SciTech Connect

    Rubin, Eddy

    2010-01-25

    DOE JGI Director Eddy Rubin gives opening remarks at the JGI/Argonne High Performance Computing (HPC) Workshop on January 25, 2010.

  9. DOE High Performance Computing for Manufacturing Program Seeks to Fund New

    Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)

    Proposals to Advance Energy Technologies | Department of Energy. September 12, 2016 - 4:46pm. News release from DOE's Advanced Manufacturing Office, September 12, 2016. The Energy Department's Advanced Manufacturing Office today announced up to $3 million in available funding for manufacturers to use

  10. An ecosystem to support US manufacturing adoption of High Performance Computing

    Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)

    An ecosystem to support US manufacturing adoption of High Performance Computing. Peg Folta, HPC4Mfg Director; Energy Deputy Program Manager, LLNL. CPS# 29332. This presentation does not contain any proprietary, confidential, or otherwise restricted information. US manufacturing is undergoing a technological revolution, transitioning from Dirty, Dark, Dangerous, Declining ...

  11. OSTIblog Articles in the High-performance computing Topic | OSTI, US Dept

    Office of Scientific and Technical Information (OSTI)

    High-performance computing Topic. ACME - Perfecting Earth System Models, by Kathy Chambers, 29 Oct 2014. Earth system modeling as we know it and how it benefits climate change research is about to transform with the newly launched Accelerated Climate Modeling for Energy (ACME) project, sponsored by the Earth System Modeling program within the Department of Energy's (DOE) Office of Biological and Environmental Research. ACME is an

  12. High-Performance Computing for Alloy Development | netl.doe.gov

    U.S. Department of Energy (DOE) - all webpages (Extended Search)

    High-Performance Computing for Alloy Development. Tomorrow's fossil-fuel based power plants will achieve higher efficiencies by operating at higher pressures and temperatures and under harsher and more corrosive conditions. Unfortunately, conventional metals simply cannot withstand these extreme environments, so advanced alloys must be designed and fabricated to meet the needs of these advanced systems. The properties of metal alloys, which are mixtures of metallic elements,

  13. Towards Real-Time High Performance Computing For Power Grid Analysis

    SciTech Connect

    Hui, Peter SY; Lee, Barry; Chikkagoudar, Satish

    2012-11-16

    Real-time computing has traditionally been considered largely in the context of single-processor and embedded systems, and indeed, the terms real-time computing, embedded systems, and control systems are often mentioned in closely related contexts. However, real-time computing in the context of multinode systems, specifically high-performance, cluster-computing systems, remains relatively unexplored. Imposing real-time constraints on a parallel (cluster) computing environment introduces a variety of challenges with respect to the formal verification of the system's timing properties. In this paper, we give a motivating example to demonstrate the need for such a system -- an application to estimate the electromechanical states of the power grid -- and we introduce a formal method for performing verification of certain temporal properties within a system of parallel processes. We describe our work towards a full real-time implementation of the target application -- namely, our progress towards extracting a key mathematical kernel from the application, the formal process by which we analyze the intricate timing behavior of the processes on the cluster, as well as timing measurements taken on our test cluster to demonstrate use of these concepts.

  14. Matrix multiplication operations with data pre-conditioning in a high performance computing architecture

    DOEpatents

    Eichenberger, Alexandre E; Gschwind, Michael K; Gunnels, John A

    2013-11-05

    Mechanisms for performing matrix multiplication operations with data pre-conditioning in a high performance computing architecture are provided. A vector load operation is performed to load a first vector operand of the matrix multiplication operation to a first target vector register. A load and splat operation is performed to load an element of a second vector operand and replicating the element to each of a plurality of elements of a second target vector register. A multiply add operation is performed on elements of the first target vector register and elements of the second target vector register to generate a partial product of the matrix multiplication operation. The partial product of the matrix multiplication operation is accumulated with other partial products of the matrix multiplication operation.
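
    The load/splat/multiply-add dataflow in the claim can be mimicked at array level. The NumPy sketch below is only an illustration of the access pattern; the patent targets hardware vector registers, not Python:

        import numpy as np

        def matmul_load_and_splat(A, B):
            """C = A @ B built from vector-load, splat, and multiply-add steps."""
            m, k = A.shape
            k2, n = B.shape
            assert k == k2
            C = np.zeros((m, n))
            for kk in range(k):
                col = A[:, kk]                    # vector load of one operand
                for j in range(n):
                    splat = np.full(m, B[kk, j])  # replicate one scalar element
                    C[:, j] += col * splat        # multiply-add: partial product
            return C

        A, B = np.random.rand(4, 3), np.random.rand(3, 5)
        assert np.allclose(matmul_load_and_splat(A, B), A @ B)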

  15. Evaluating Performance, Power, and Cooling in High Performance Computing (HPC) Data Centers

    SciTech Connect

    Evans, Jeffrey; Gupta, Sandeep; Karavanic, Karen; Marquez, Andres; Varsamopoulos, Georgios

    2012-01-24

    This chapter explores current research focused on developing our understanding of the interrelationships involved with HPC performance and energy management. The first section explores data center instrumentation, measurement, and performance analysis techniques, followed by a section focusing on work in data center thermal management and resource allocation. This is followed by an exploration of emerging techniques to identify application behavioral attributes that can provide clues and advice to HPC resource and energy management systems for the purpose of balancing HPC performance and energy efficiency.

  16. Measuring and tuning energy efficiency on large scale high performance computing platforms.

    SciTech Connect

    Laros, James H., III

    2011-08-01

    Recognition of the importance of power in the field of High Performance Computing, whether it be as an obstacle, expense or design consideration, has never been greater and more pervasive. While research has been conducted on many related aspects, there is a stark absence of work focused on large scale High Performance Computing. Part of the reason is the lack of measurement capability currently available on small or large platforms. Typically, research is conducted using coarse methods of measurement such as inserting a power meter between the power source and the platform, or fine grained measurements using custom instrumented boards (with obvious limitations in scale). To collect the measurements necessary to analyze real scientific computing applications at large scale, an in-situ measurement capability must exist on a large scale capability class platform. In response to this challenge, we exploit the unique power measurement capabilities of the Cray XT architecture to gain an understanding of power use and the effects of tuning. We apply these capabilities at the operating system level by deterministically halting cores when idle. At the application level, we gain an understanding of the power requirements of a range of important DOE/NNSA production scientific computing applications running at large scale (thousands of nodes), while simultaneously collecting current and voltage measurements on the hosting nodes. We examine the effects of both CPU and network bandwidth tuning and demonstrate energy savings opportunities of up to 39% with little or no impact on run-time performance. Capturing scale effects in our experimental results was key. Our results provide strong evidence that next generation large-scale platforms should not only approach CPU frequency scaling differently, but could also benefit from the capability to tune other platform components, such as the network, to achieve energy efficient performance.

  17. Toward a Performance/Resilience Tool for Hardware/Software Co-Design of High-Performance Computing Systems

    SciTech Connect

    Engelmann, Christian; Naughton, III, Thomas J

    2013-01-01

    xSim is a simulation-based performance investigation toolkit that permits running high-performance computing (HPC) applications in a controlled environment with millions of concurrent execution threads, while observing application performance in a simulated extreme-scale system for hardware/software co-design. The presented work details newly developed features for xSim that permit the injection of MPI process failures, the propagation/detection/notification of such failures within the simulation, and their handling using application-level checkpoint/restart. These new capabilities enable the observation of application behavior and performance under failure within a simulated future-generation HPC system using the most common fault handling technique.
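
    Application-level checkpoint/restart of the kind exercised by the failure injection described above can be sketched in a few lines of MPI C. This is a generic illustration, not xSim's API; the checkpoint file name, interval, and restart logic are assumptions.

        #include <mpi.h>
        #include <stdio.h>

        #define STEPS      1000
        #define CKPT_EVERY 100   /* checkpoint interval (illustrative) */

        int main(int argc, char **argv)
        {
            MPI_Init(&argc, &argv);
            int rank;
            MPI_Comm_rank(MPI_COMM_WORLD, &rank);

            char fname[64];
            snprintf(fname, sizeof fname, "ckpt.%d", rank);  /* one file per rank */

            /* restart: resume from the last checkpoint if one exists */
            int start = 0;
            double state = 0.0;
            FILE *f = fopen(fname, "rb");
            if (f) {
                fread(&start, sizeof start, 1, f);
                fread(&state, sizeof state, 1, f);
                fclose(f);
            }

            for (int step = start; step < STEPS; step++) {
                state += 1.0;                      /* stand-in for real work */
                if ((step + 1) % CKPT_EVERY == 0) {
                    /* all ranks checkpoint at the same step so a restart
                     * observes a globally consistent state */
                    MPI_Barrier(MPI_COMM_WORLD);
                    f = fopen(fname, "wb");
                    int next = step + 1;
                    fwrite(&next, sizeof next, 1, f);
                    fwrite(&state, sizeof state, 1, f);
                    fclose(f);
                }
            }
            MPI_Finalize();
            return 0;
        }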

  18. High Performance Computing at TJNAF | U.S. DOE Office of Science...

    Office of Science (SC)


  19. Exploring Infiniband Hardware Virtualization in OpenNebula towards Efficient High-Performance Computing

    SciTech Connect

    Pais Pitta de Lacerda Ruivo, Tiago; Bernabeu Altayo, Gerard; Garzoglio, Gabriele; Timm, Steven; Kim, Hyun-Woo; Noh, Seo-Young; Raicu, Ioan

    2014-11-11

    It has been widely accepted that software virtualization has a big negative impact on high-performance computing (HPC) application performance. This work explores the potential use of Infiniband hardware virtualization in an OpenNebula cloud towards the efficient support of MPI-based workloads. We have implemented, deployed, and tested an Infiniband network on the FermiCloud private Infrastructure-as-a-Service (IaaS) cloud. To avoid software virtualization and minimize the virtualization overhead, we employed a technique called Single Root Input/Output Virtualization (SR-IOV). Our solution spanned modifications to the Linux hypervisor as well as to the OpenNebula manager. We evaluated the performance of the hardware virtualization on up to 56 virtual machines connected by up to 8 DDR Infiniband network links, with micro-benchmarks (latency and bandwidth) as well as with an MPI-intensive application (the HPL Linpack benchmark).

  20. DOE High Performance Computing Operational Review (HPCOR): Enabling Data-Driven Scientific Discovery at HPC Facilities

    SciTech Connect

    Gerber, Richard; Allcock, William; Beggio, Chris; Campbell, Stuart; Cherry, Andrew; Cholia, Shreyas; Dart, Eli; England, Clay; Fahey, Tim; Foertter, Fernanda; Goldstone, Robin; Hick, Jason; Karelitz, David; Kelly, Kaki; Monroe, Laura; Prabhat; Skinner, David; White, Julia

    2014-10-17

    U.S. Department of Energy (DOE) High Performance Computing (HPC) facilities are on the verge of a paradigm shift in the way they deliver systems and services to science and engineering teams. Research projects are producing a wide variety of data at unprecedented scale and levels of complexity, with community-specific services that are part of the data collection and analysis workflow. On June 18-19, 2014, representatives from six DOE HPC centers met in Oakland, CA, at the DOE High Performance Computing Operational Review (HPCOR) to discuss how they can best provide facilities and services to enable large-scale data-driven scientific discovery at the DOE national laboratories. This report contains the findings from that review.

  1. High Performance Computing and Storage Requirements for Nuclear Physics: Target 2017

    SciTech Connect

    Gerber, Richard; Wasserman, Harvey

    2015-01-20

    In April 2014, NERSC, ASCR, and the DOE Office of Nuclear Physics (NP) held a review to characterize high performance computing (HPC) and storage requirements for NP research through 2017. This review is the 12th in a series of reviews held by NERSC and Office of Science program offices that began in 2009. It is the second for NP, and the final review in the second round of reviews that covered the six Office of Science program offices. This report is the result of that review.

  2. High-performance computational and geostatistical experiments for testing the capabilities of 3-d electrical tomography

    SciTech Connect

    Carle, S. F.; Daily, W. D.; Newmark, R. L.; Ramirez, A.; Tompson, A.

    1999-01-19

    This project explores the feasibility of combining geologic insight, geostatistics, and high-performance computing to analyze the capabilities of 3-D electrical resistance tomography (ERT). Geostatistical methods are used to characterize the spatial variability of geologic facies that control subsurface variability of permeability and electrical resistivity. Synthetic ERT data sets are generated from geostatistical realizations of alluvial facies architecture. The synthetic data sets enable comparison of the "truth" to inversion results, quantification of the ability to detect particular facies at particular locations, and sensitivity studies on inversion parameters.

  3. GridPACK Toolkit for Developing Power Grid Simulations on High Performance Computing Platforms

    SciTech Connect

    Palmer, Bruce J.; Perkins, William A.; Glass, Kevin A.; Chen, Yousu; Jin, Shuangshuang; Callahan, Charles D.

    2013-11-30

    This paper describes the GridPACK framework, which is designed to help power grid engineers develop modeling software capable of running on today's high-performance computers. The framework contains modules for setting up distributed power grid networks, assigning buses and branches with arbitrary behaviors to the network, creating distributed matrices and vectors, using parallel linear and non-linear solvers to solve algebraic equations, and mapping functionality to create matrices and vectors based on properties of the network. The framework also contains functionality to support I/O and error management.

  4. Investigating Operating System Noise in Extreme-Scale High-Performance Computing Systems using Simulation

    SciTech Connect

    Engelmann, Christian

    2013-01-01

    Hardware/software co-design for future-generation high-performance computing (HPC) systems aims at closing the gap between the peak capabilities of the hardware and the performance realized by applications (application-architecture performance gap). Performance profiling of architectures and applications is a crucial part of this iterative process. The work in this paper focuses on operating system (OS) noise as an additional factor to be considered for co-design. It represents the first step in including OS noise in HPC hardware/software co-design by adding a noise injection feature to an existing simulation-based co-design toolkit. It reuses an existing abstraction for OS noise with frequency (periodic recurrence) and period (duration of each occurrence) to enhance the processor model of the Extreme-scale Simulator (xSim) with synchronized and random OS noise simulation. The results demonstrate this capability by evaluating the impact of OS noise on MPI_Bcast() and MPI_Reduce() in a simulated future-generation HPC system with 2,097,152 compute nodes.
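
    The frequency/period noise abstraction described above can be mimicked with a toy model. A minimal sketch under our own assumptions (the numbers and function names are illustrative), showing how periodic noise inflates the runtime of a fixed amount of work:

        #include <stdio.h>

        /* OS noise abstracted as events recurring with a given frequency,
         * each stealing 'period_s' seconds of compute time (the paper's
         * abstraction; the numbers below are illustrative). */
        double noisy_runtime(double work_s, double freq_hz, double period_s)
        {
            double t = 0.0, done = 0.0;
            double gap = 1.0 / freq_hz;           /* time between noise events */
            while (done < work_s) {
                double chunk = work_s - done;
                if (chunk > gap) chunk = gap;     /* compute until next event */
                done += chunk;
                t += chunk;
                if (done < work_s) t += period_s; /* pay the noise cost */
            }
            return t;
        }

        int main(void)
        {
            /* 1 s of work, 10 Hz noise, 50 us per event: tiny overhead on one
             * node, but synchronized collectives can amplify the effect. */
            printf("runtime: %f s\n", noisy_runtime(1.0, 10.0, 50e-6));
            return 0;
        }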

  5. High performance graphics processor based computed tomography reconstruction algorithms for nuclear and other large scale applications.

    SciTech Connect

    Jimenez, Edward Steven

    2013-09-01

    The goal of this work is to develop a fast computed tomography (CT) reconstruction algorithm based on graphics processing units (GPU) that achieves significant improvement over traditional central processing unit (CPU) based implementations. The main challenge in developing a CT algorithm that is capable of handling very large datasets is parallelizing the algorithm in such a way that data transfer does not hinder performance of the reconstruction algorithm. General-purpose GPU (GPGPU) computing is a new technology that the Science and Technology (S&T) community is starting to adopt in many fields where CPU-based computing is the norm. GPGPU programming requires a new approach to algorithm development that utilizes massively multi-threaded environments. Multi-threaded algorithms in general are difficult to optimize because performance bottlenecks, such as memory latencies, occur that are non-existent in single-threaded algorithms. If an efficient GPU-based CT reconstruction algorithm can be developed, computational times could be improved by a factor of 20. Additionally, cost benefits will be realized as commodity graphics hardware could potentially replace expensive supercomputers and high-end workstations. This project will take advantage of the CUDA programming environment and attempt to parallelize the task in such a way that multiple slices of the reconstruction volume are computed simultaneously. This work will also take advantage of GPU memory by utilizing asynchronous memory transfers, GPU texture memory, and (when possible) pinned host memory so that the memory transfer bottleneck inherent to GPGPU is amortized. Additionally, this work will take advantage of GPU-specific hardware (i.e., fast texture memory, pixel pipelines, hardware interpolators, and a varying memory hierarchy) that will allow for additional performance improvements.

  6. Application of high performance computing for studying cyclic variability in dilute internal combustion engines

    SciTech Connect

    FINNEY, Charles E A; Edwards, Kevin Dean; Stoyanov, Miroslav K; Wagner, Robert M

    2015-01-01

    Combustion instabilities in dilute internal combustion engines are manifest in cyclic variability (CV) in engine performance measures such as integrated heat release or shaft work. Understanding the factors leading to CV is important in model-based control, especially with high dilution, where experimental studies have demonstrated that deterministic effects can become more prominent. Observation of enough consecutive engine cycles for significant statistical analysis is standard in experimental studies but is largely wanting in numerical simulations because of the computational time required to compute hundreds or thousands of consecutive cycles. We have proposed and begun implementation of an alternative approach that allows rapid simulation of long series of engine dynamics, based on a low-dimensional mapping of ensembles of single-cycle simulations which map input parameters to output engine performance. This paper details the use of Titan at the Oak Ridge Leadership Computing Facility to investigate CV in a gasoline direct-injected, spark-ignited engine with a moderately high rate of dilution achieved through external exhaust gas recirculation. The CONVERGE CFD software was used to perform single-cycle simulations with imposed variations of operating parameters and boundary conditions selected according to a sparse grid sampling of the parameter space. Using an uncertainty quantification technique, the sampling scheme is chosen similar to a design of experiments grid but uses functions designed to minimize the number of samples required to achieve a desired degree of accuracy. The simulations map input parameters to output metrics of engine performance for a single cycle, and by mapping over a large parameter space, results can be interpolated from within that space. This interpolation scheme forms the basis for a low-dimensional metamodel which can be used to mimic the dynamical behavior of corresponding high-dimensional simulations. Simulations of high-EGR spark
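
    The low-dimensional metamodel idea described above - sample single-cycle simulations across a parameter space, then interpolate rather than re-simulate - can be caricatured with a one-parameter surrogate. The linear interpolation and the sample table below are illustrative stand-ins for the sparse-grid machinery, and the numbers are made up:

        #include <stdio.h>

        /* Illustrative surrogate: pretend single-cycle simulations were run
         * at a few sampled EGR fractions, recording normalized integrated
         * heat release; new points are interpolated instead of re-simulated.
         * All values below are fabricated for illustration. */
        #define NSAMP 5
        static const double egr[NSAMP]  = {0.00, 0.10, 0.20, 0.30, 0.40};
        static const double heat[NSAMP] = {1.00, 0.98, 0.93, 0.80, 0.55};

        double surrogate(double x)
        {
            if (x <= egr[0]) return heat[0];
            for (int i = 1; i < NSAMP; i++)
                if (x <= egr[i]) {
                    double w = (x - egr[i-1]) / (egr[i] - egr[i-1]);
                    return heat[i-1] + w * (heat[i] - heat[i-1]);
                }
            return heat[NSAMP-1];
        }

        int main(void)
        {
            /* query the metamodel where no CFD cycle was ever run */
            printf("heat release at 25%% EGR: %.3f\n", surrogate(0.25));
            return 0;
        }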

  7. Report of the Snowmass T7 working group on high performance computing

    SciTech Connect

    Ko, K.; Ryne, R.; Spentzouris, P.

    2002-12-05

    The T7 Working Group on High Performance Computing (HPC) had more than 30 participants. During the three weeks at Snowmass there were about 30 presentations. This working group also had joint sessions with a number of other working groups, including E1 (Neutrino Factories and Muon Colliders), M1 (Muon Based Systems), M6 (High Intensity Proton Sources), T4 (Particle sources), T5 (Beam dynamics), and T8 (Advanced Accelerators). The topics that were discussed fall naturally into three areas: (1) HPC requirements for next-generation accelerator design, (2) state-of-the-art in HPC simulation of accelerator systems, and (3) applied mathematics and computer science activities related to the development of HPC tools that will be of use to the accelerator community (as well as other communities). This document summarizes the material mentioned above and includes recommendations for future HPC activities in the accelerator community. The relationship of those activities to the HENP/SciDAC project on 21st century accelerator simulation is also discussed.

  8. A Lightweight, High-performance I/O Management Package for Data-intensive Computing

    SciTech Connect

    Wang, Jun

    2011-06-22

    Our group has been working with ANL collaborators on the topic "bridging the gap between parallel file system and local file system" during the course of this project period. We visited Argonne National Lab (Dr. Robert Ross's group) for one week in the summer of 2007, reviewed our project progress, and planned the activities for the incoming years 2008-09. The PI met Dr. Robert Ross several times, such as at the HEC FSIO workshop 2008, SC'08, and SC'10. We explored the opportunities to develop a production system by leveraging our current prototype (SOGP+PVFS) into a new PVFS version, and we delivered the SOGP+PVFS code to the ANL PVFS2 group in 2008. We also discussed exploring a potential project on developing new parallel programming models and runtime systems for data-intensive scalable computing (DISC). The methodology is to evolve MPI towards DISC by incorporating some functions of the Google MapReduce parallel programming model. More recently, we have together been exploring how to leverage existing work to perform (1) coordination/aggregation of local I/O operations prior to movement over the WAN, (2) efficient bulk data movement over the WAN, and (3) latency-hiding techniques for latency-intensive operations. Since 2009, we have been applying Hadoop/MapReduce to some HEC applications with LANL scientists John Bent and Salman Habib. Another ongoing work is to improve checkpoint performance at the I/O forwarding layer for the Roadrunner supercomputer with James Nunez and Gary Grider at LANL. Two senior undergraduates from our research group did summer internships on high-performance file and storage system projects at LANL for three consecutive years starting in 2008. Both are now pursuing Ph.D. degrees in our group, will be in their 4th year of the Ph.D. program in Fall 2011, and will go to LANL to advance the two above-mentioned works during this winter break. Since 2009, we have been collaborating with several computer scientists (Gary Grider, John Bent, Parks Fields, James Nunez

  9. High-Performance Computing for Real-Time Grid Analysis and Operation

    SciTech Connect

    Huang, Zhenyu; Chen, Yousu; Chavarría-Miranda, Daniel

    2013-10-31

    Power grids worldwide are undergoing an unprecedented transition as a result of grid evolution meeting the information revolution. The grid evolution is largely driven by the desire for green energy. Emerging grid technologies such as renewable generation, smart loads, plug-in hybrid vehicles, and distributed generation provide opportunities to generate energy from green sources and to manage energy use for better system efficiency. With utility companies actively deploying these technologies, a high level of penetration of these new technologies is expected in the next 5-10 years, bringing in a level of intermittency, uncertainty, and complexity that the grid has not seen and was not designed for. On the other hand, the information infrastructure in the power grid is being revolutionized with large-scale deployment of sensors and meters in both the transmission and distribution networks. The future grid will have two-way flows of both electrons and information. The challenge is how to take advantage of the information revolution: pull the large amount of data in, process it in real time, and put information out to manage grid evolution. Without addressing this challenge, the opportunities in grid evolution will remain unfulfilled. This transition poses grand challenges in grid modeling, simulation, and information presentation. The computational complexity of underlying power grid modeling and simulation will significantly increase in the next decade due to increased model size and a decreased time window allowed to compute model solutions. High-performance computing is essential to enable this transition. The essential technical barrier is to vastly increase the computational speed so that operation response time can be reduced from minutes to seconds and sub-seconds. The speed at which key functions such as state estimation and contingency analysis are conducted (typically every 3-5 minutes) needs to be dramatically increased so that the analysis of contingencies is both

  10. Subsurface Multiphase Flow and Multicomponent Reactive Transport Modeling using High-Performance Computing

    SciTech Connect

    Hammond, Glenn E.; Lichtner, Peter C.; Lu, Chuan

    2007-07-16

    Numerical modeling has become a critical tool to the U.S. Department of Energy for evaluating the environmental impact of alternative energy sources and remediation strategies for legacy waste sites. Unfortunately, the physical and chemical complexity of many sites overwhelms the capabilities of even the most state-of-the-art groundwater models. Of particular concern are the representation of highly heterogeneous stratified rock/soil layers in the subsurface and the biological and geochemical interactions of chemical species within multiple fluid phases. Clearly, there is a need for higher-resolution modeling (i.e., more spatial, temporal, and chemical degrees of freedom) and increasingly mechanistic descriptions of subsurface physicochemical processes. We present SciDAC-funded research being performed in the development of PFLOTRAN, a parallel multiphase flow and multicomponent reactive transport model. Written in Fortran90, PFLOTRAN is founded upon PETSc data structures and solvers. We are employing PFLOTRAN in the simulation of uranium transport at the Hanford 300 Area, a contaminated site of major concern to the Department of Energy, the State of Washington, and other government agencies. By leveraging the billions of degrees of freedom available through high-performance computation using tens of thousands of processors, we can better characterize the release of uranium into groundwater and its subsequent transport to the Columbia River, and thereby better understand and evaluate the effectiveness of various proposed remediation strategies.

  11. Implementing the Data Center Energy Productivity Metric in a High Performance Computing Data Center

    SciTech Connect

    Sego, Landon H.; Marquez, Andres; Rawson, Andrew; Cader, Tahir; Fox, Kevin M.; Gustafson, William I.; Mundy, Christopher J.

    2013-06-30

    As data centers proliferate in size and number, the improvement of their energy efficiency and productivity has become an economic and environmental imperative. Making these improvements requires metrics that are robust, interpretable, and practical. We discuss the properties of a number of the proposed metrics of energy efficiency and productivity. In particular, we focus on the Data Center Energy Productivity (DCeP) metric, which is the ratio of useful work produced by the data center to the energy consumed performing that work. We describe our approach for using DCeP as the principal outcome of a designed experiment using a highly instrumented, high-performance computing data center. We found that DCeP was successful in clearly distinguishing different operational states in the data center, thereby validating its utility as a metric for identifying configurations of hardware and software that would improve energy productivity. We also discuss some of the challenges and benefits associated with implementing the DCeP metric, and we examine the efficacy of the metric in making comparisons within a data center and between data centers.
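
    The metric itself is the ratio the abstract states; written out, with our own symbols:

        \mathrm{DCeP} = \frac{W_{\text{useful}}}{E_{\text{consumed}}}

    where W_useful is the useful work produced over the assessment window and E_consumed is the total data center energy consumed producing it.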

  12. A High Performance Computing Network and System Simulator for the Power Grid: NGNS^2

    SciTech Connect

    Villa, Oreste; Tumeo, Antonino; Ciraci, Selim; Daily, Jeffrey A.; Fuller, Jason C.

    2012-11-11

    Designing and planning next generation power grid systems composed of large power distribution networks, monitoring and control networks, autonomous generators, and consumers of power requires advanced simulation infrastructures. The objective is to predict and analyze in time the behavior of networks of systems under unexpected events such as loss of connectivity, malicious attacks, and power loss scenarios. This ultimately allows one to answer questions such as: What could happen to the power grid if ...? We want to be able to answer as many questions as possible in the shortest possible time for the largest possible systems. In this paper we present a new High Performance Computing (HPC) oriented simulation infrastructure named the Next Generation Network and System Simulator (NGNS2). NGNS2 allows for the distribution of a single simulation among multiple computing elements by using MPI and OpenMP threads. NGNS2 provides the extensive configuration, fault-tolerance, and load-balancing capabilities needed to simulate large and dynamic systems for long periods of time. We show preliminary results of the simulator running approximately two million simulated entities both on a 64-node commodity Infiniband cluster and on a 48-core SMP workstation.
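
    The MPI-plus-OpenMP distribution described above follows the standard hybrid pattern: partition entities across MPI ranks, then across OpenMP threads within each rank. The skeleton below is a textbook sketch, not NGNS2's actual code; the entity count and the per-entity work are placeholders.

        #include <mpi.h>
        #include <omp.h>
        #include <stdio.h>

        #define ENTITIES 2000000   /* simulated entities (illustrative) */

        int main(int argc, char **argv)
        {
            int provided, rank, size;
            /* threads make no MPI calls inside parallel regions here,
             * so FUNNELED support suffices */
            MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
            MPI_Comm_rank(MPI_COMM_WORLD, &rank);
            MPI_Comm_size(MPI_COMM_WORLD, &size);

            /* block-partition entities across MPI ranks ... */
            int per_rank = ENTITIES / size;
            int lo = rank * per_rank;
            int hi = (rank == size - 1) ? ENTITIES : lo + per_rank;

            /* ... and across OpenMP threads within each rank */
            double local = 0.0;
            #pragma omp parallel for reduction(+:local)
            for (int e = lo; e < hi; e++)
                local += 1e-6 * e;   /* stand-in for one entity's update */

            double global;
            MPI_Reduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
            if (rank == 0) printf("aggregate state: %f\n", global);

            MPI_Finalize();
            return 0;
        }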

  13. Development of high performance scientific components for interoperability of computing packages

    SciTech Connect

    Gulabani, Teena Pratap

    2008-01-01

    Three major high performance quantum chemistry computational packages, NWChem, GAMESS, and MPQC, have been developed by different research efforts following different design patterns. The goal is to achieve interoperability among these packages by overcoming the challenges caused by the different communication patterns and software designs of each of these packages. A chemistry algorithm is hard and time-consuming to develop; integration of large quantum chemistry packages will allow resource sharing and thus avoid reinvention of the wheel. Creating connections between these incompatible packages is the major motivation of the proposed work. This interoperability is achieved by bringing the benefits of Component-Based Software Engineering through a plug-and-play component framework called the Common Component Architecture (CCA). In this thesis, I present a strategy and process used for interfacing two widely used and important computational chemistry methodologies: Quantum Mechanics and Molecular Mechanics. To show the feasibility of the proposed approach, the Tuning and Analysis Utility (TAU) has been coupled with the NWChem code and its CCA components. Results show that the overhead is negligible when compared to the ease and potential of organizing and coping with large-scale software applications.

  14. Hyperspectral Aquatic Radiative Transfer Modeling Using a High-Performance Cluster Computing-Based Approach

    SciTech Connect

    Filippi, Anthony M; Bhaduri, Budhendra L; Naughton, III, Thomas J; King, Amy L; Scott, Stephen L; Guneralp, Inci

    2012-01-01

    For aquatic studies, radiative transfer (RT) modeling can be used to compute hyperspectral above-surface remote sensing reflectance that can be utilized for inverse model development. Inverse models can provide bathymetry and inherent- and bottom-optical property estimation. Because measured oceanic field/organic datasets are often spatio-temporally sparse, synthetic data generation is useful in yielding sufficiently large datasets for inversion model development; however, these forward-modeled data are computationally expensive and time-consuming to generate. This study establishes the magnitude of wall-clock-time savings achieved for performing large, aquatic RT batch-runs using parallel computing versus a sequential approach. Given 2,600 simulations and identical compute-node characteristics, the sequential architecture required ~100 hours until termination, whereas the parallel approach required only ~2.5 hours (42 compute nodes), a 40x speed-up. Tools developed for this parallel execution are discussed.
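
    The reported numbers imply near-ideal parallel efficiency; the efficiency figure below is our arithmetic, not the paper's:

        S = \frac{T_{\text{seq}}}{T_{\text{par}}} = \frac{100\ \text{h}}{2.5\ \text{h}} = 40,
        \qquad
        \eta = \frac{S}{P} = \frac{40}{42} \approx 0.95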

  15. Hyperspectral Aquatic Radiative Transfer Modeling Using a High-Performance Cluster Computing Based Approach

    SciTech Connect

    Filippi, Anthony; Bhaduri, Budhendra L; Naughton, III, Thomas J; King, Amy L; Scott, Stephen L; Guneralp, Inci

    2012-01-01

    For aquatic studies, radiative transfer (RT) modeling can be used to compute hyperspectral above-surface remote sensing reflectance that can be utilized for inverse model development. Inverse models can provide bathymetry and inherent- and bottom-optical property estimation. Because measured oceanic field/organic datasets are often spatio-temporally sparse, synthetic data generation is useful in yielding sufficiently large datasets for inversion model development; however, these forward-modeled data are computationally expensive and time-consuming to generate. This study establishes the magnitude of wall-clock-time savings achieved for performing large, aquatic RT batch-runs using parallel computing versus a sequential approach. Given 2,600 simulations and identical compute-node characteristics, the sequential architecture required ~100 hours until termination, whereas the parallel approach required only ~2.5 hours (42 compute nodes), a 40x speed-up. Tools developed for this parallel execution are discussed.

  16. The Essential Role of New Network Services for High Performance Distributed Computing - PARENG.CivilComp.2011.

    U.S. Department of Energy (DOE) - all webpages (Extended Search)

    Second International Conference on Parallel, Distributed, Grid and Cloud Computing for Engineering, 12-15 April 2011, Ajaccio, Corsica, France. In "Trends in Parallel, Distributed, Grid and Cloud Computing for Engineering," edited by P. Iványi and B.H.V. Topping, Civil-Comp Press. Network Services for High Performance Distributed Computing and Data Management, W. E. Johnston, C. Guok, J. Metzger, and B. Tierney, ESnet and Lawrence Berkeley National Laboratory, Berkeley, California, U.S.A.

  17. National cyber defense high performance computing and analysis : concepts, planning and roadmap.

    SciTech Connect

    Hamlet, Jason R.; Keliiaa, Curtis M.

    2010-09-01

    There is a national cyber dilemma that threatens the very fabric of government, commercial and private use operations worldwide. Much is written about 'what' the problem is, and though the basis for this paper is an assessment of the problem space, we target the 'how' solution space of the wide-area national information infrastructure through the advancement of science, technology, evaluation and analysis with actionable results intended to produce a more secure national information infrastructure and a comprehensive national cyber defense capability. This cybersecurity High Performance Computing (HPC) analysis concepts, planning and roadmap activity was conducted as an assessment of cybersecurity analysis as a fertile area of research and investment for high value cybersecurity wide-area solutions. This report and a related SAND2010-4765 Assessment of Current Cybersecurity Practices in the Public Domain: Cyber Indications and Warnings Domain report are intended to provoke discussion throughout a broad audience about developing a cohesive HPC centric solution to wide-area cybersecurity problems.

  18. Subsurface Multiphase Flow and Multicomponent Reactive Transport Modeling using High-Performance Computing

    SciTech Connect

    Hammond, Glenn E.; Lichtner, Peter C.; Lu, Chuan

    2007-08-01

    Numerical modeling has become a critical tool to the Department of Energy for evaluating the environmental impact of alternative energy sources and remediation strategies for legacy waste sites. Unfortunately, the physical and chemical complexity of many sites overwhelms the capabilities of even the most state-of-the-art groundwater models. Of particular concern are the representation of highly heterogeneous stratified rock/soil layers in the subsurface and the biological and geochemical interactions of chemical species within multiple fluid phases. Clearly, there is a need for higher-resolution modeling (i.e., more spatial, temporal, and chemical degrees of freedom) and increasingly mechanistic descriptions of subsurface physicochemical processes. We present research being performed in the development of PFLOTRAN, a parallel multiphase flow and multicomponent reactive transport model. Written in Fortran90, PFLOTRAN is founded upon PETSc data structures and solvers and has exhibited impressive strong scalability on up to 4000 processors on the ORNL Cray XT3. We are employing PFLOTRAN in the simulation of uranium transport at the Hanford 300 Area, a contaminated site of major concern to the Department of Energy, the State of Washington, and other government agencies, where overly simplistic historical modeling erroneously predicted decade-scale removal times for uranium by ambient groundwater flow. By leveraging the billions of degrees of freedom available through high-performance computation using tens of thousands of processors, we can better characterize the release of uranium into groundwater and its subsequent transport to the Columbia River, and thereby better understand and evaluate the effectiveness of various proposed remediation strategies.

  19. High-Performance Computer Modeling of the Cosmos-Iridium Collision

    SciTech Connect

    Olivier, S; Cook, K; Fasenfest, B; Jefferson, D; Jiang, M; Leek, J; Levatin, J; Nikolaev, S; Pertica, A; Phillion, D; Springer, K; De Vries, W

    2009-08-28

    This paper describes the application of a new, integrated modeling and simulation framework, encompassing the space situational awareness (SSA) enterprise, to the recent Cosmos-Iridium collision. This framework is based on a flexible, scalable architecture to enable efficient simulation of the current SSA enterprise, and to accommodate future advancements in SSA systems. In particular, the code is designed to take advantage of massively parallel, high-performance computer systems available, for example, at Lawrence Livermore National Laboratory. We describe the application of this framework to the recent collision of the Cosmos and Iridium satellites, including (1) detailed hydrodynamic modeling of the satellite collision and resulting debris generation, (2) orbital propagation of the simulated debris and analysis of the increased risk to other satellites, (3) calculation of the radar and optical signatures of the simulated debris and modeling of debris detection with space surveillance radar and optical systems, (4) determination of simulated debris orbits from modeled space surveillance observations and analysis of the resulting orbital accuracy, and (5) comparison of these modeling and simulation results with Space Surveillance Network observations. We also discuss the use of this integrated modeling and simulation framework to analyze the risks and consequences of future satellite collisions and to assess strategies for mitigating or avoiding future incidents, including the addition of new sensor systems, used in conjunction with the Space Surveillance Network, for improving space situational awareness.

  20. iSSH v. Auditd: Intrusion Detection in High Performance Computing

    SciTech Connect

    Karns, David M.; Protin, Kathryn S.; Wolf, Justin G.

    2012-07-30

    The goal is to provide insight into intrusions in high performance computing, focusing on tracking intruders' motions through the system. Current tools, such as pattern matching, do not provide sufficient tracking capabilities. We tested two tools: an instrumented version of SSH (iSSH) and the Linux Auditing Framework (Auditd). First discussed is Instrumented Secure Shell (iSSH), a version of SSH developed at Lawrence Berkeley National Laboratory. Its goal is to audit user activity within a computer system to increase security. Its capabilities include keystroke logging, recording user names and authentication information, and catching suspicious remote and local commands. Strengths of iSSH are: (1) good for keystroke logging, making it easier to track malicious users by catching suspicious commands; (2) works with Bro to send alerts and could be configured to send pages to system administrators; and (3) creates visibility into SSH sessions. Weaknesses are: (1) relatively new, so not very well documented; and (2) no capability to see if files have been edited, moved, or copied within the system. Second, we discuss Auditd, the user component of the Linux Auditing System. It creates logs of user behavior and monitors system calls and file accesses. Its goal is to improve system security by keeping track of users' actions within the system. Strengths of Auditd are: (1) very thorough logs; (2) a wider variety of tracking abilities than iSSH; and (3) older, so better documented. Weaknesses are: (1) the logs record everything, not just malicious behavior; (2) the size of the logs can lead to overflowing directories; and (3) this level of logging leads to many false alarms. Auditd is better documented than iSSH, which would help administrators during setup and troubleshooting. iSSH has a cleaner notification system, but its logs are not as detailed as Auditd's. From our performance testing: (1) file transfer speed using SCP is increased when using iSSH; and (2) Network benchmarks

  1. LIAR -- A computer program for the modeling and simulation of high performance linacs

    SciTech Connect

    Assmann, R.; Adolphsen, C.; Bane, K.; Emma, P.; Raubenheimer, T.; Siemann, R.; Thompson, K.; Zimmermann, F.

    1997-04-01

    The computer program LIAR (LInear Accelerator Research Code) is a numerical modeling and simulation tool for high performance linacs. Among other applications, it addresses the needs of state-of-the-art linear colliders, where low-emittance, high-intensity beams must be accelerated to energies in the 0.05-1 TeV range. LIAR is designed to be used for a variety of different projects. LIAR allows the study of single- and multi-particle beam dynamics in linear accelerators. It calculates emittance dilutions due to wakefield deflections, linear and non-linear dispersion, and chromatic effects in the presence of multiple accelerator imperfections. Both single-bunch and multi-bunch beams can be simulated. Several basic and advanced optimization schemes are implemented. Present limitations arise from the incomplete treatment of bending magnets and sextupoles. A major objective of the LIAR project is to provide an open programming platform for the accelerator physics community. Due to its design, LIAR allows straightforward access to its internal FORTRAN data structures. The program can easily be extended, and its interactive command language ensures maximum ease of use. Presently, versions of LIAR are compiled for UNIX and MS Windows operating systems. An interface for the graphical visualization of results is provided. Scientific graphs can be saved in the PS and EPS file formats. In addition, a Mathematica interface has been developed. LIAR now contains more than 40,000 lines of source code in more than 130 subroutines. This report describes the theoretical basis of the program, provides a reference for existing features, and explains how to add further commands. The LIAR home page and the ONLINE version of this manual can be accessed under: http://www.slac.stanford.edu/grp/arb/rwa/liar.htm.

  2. High Performance Computing at TJNAF | U.S. DOE Office of Science (SC)

    Office of Science (SC)


  3. HIGH PERFORMANCE COMPUTING ANNUAL REPORT

    U.S. Department of Energy (DOE) - all webpages (Extended Search)

    PERFORMANCE COMPUTING ANNUAL REPORT - Scanning this code with an iPhone or iPad will provide access to SNLSimMagic, an augmented reality iOS application that can be downloaded to the device. You can also download it directly onto your device from the Apple App Store. Readers with the application can use their mobile devices to scan images in this document that show the Augmented Reality icon, and an associated movie clip will be played on their device. SNLSimMagic was developed at Sandia

  4. High performance systems

    SciTech Connect

    Vigil, M.B.

    1995-03-01

    This document provides a written compilation of the presentations and viewgraphs from the 1994 Conference on High Speed Computing, "High Performance Systems," held at Gleneden Beach, Oregon, on April 18 through 21, 1994.

  5. Technologies and tools for high-performance distributed computing. Final report

    SciTech Connect

    Karonis, Nicholas T.

    2000-05-01

    In this project we studied the practical use of the MPI message-passing interface in advanced distributed computing environments. We built on the existing software infrastructure provided by the Globus Toolkit(TM), the MPICH portable implementation of MPI, and the MPICH-G integration of MPICH with Globus. As a result of this project we replaced MPICH-G with its successor MPICH-G2, which is also an integration of MPICH with Globus. MPICH-G2 delivers significant improvements in message-passing performance when compared to its predecessor MPICH-G, and it was based on superior software design principles, resulting in a software base in which the functional extensions and improvements we made were much easier to implement. Using Globus services, we replaced the default implementation of MPI's collective operations in MPICH-G2 with more efficient multilevel topology-aware collective operations which, in turn, led to the development of a new timing methodology for broadcasts [8]. MPICH-G2 was extended to include client/server functionality from the MPI-2 standard [23] to facilitate remote visualization applications and, through the use of MPI idioms, MPICH-G2 provided application-level control of quality-of-service parameters as well as application-level discovery of underlying Grid-topology information. Finally, MPICH-G2 was successfully used in a number of applications, including an award-winning, record-setting computation in numerical relativity. In the sections that follow we describe in detail the accomplishments of this project, present experimental results quantifying the performance improvements, and conclude with a discussion of our application experiences. This project resulted in a significant increase in the utility of MPICH-G2.
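
    A multilevel topology-aware collective of the kind described above replaces a flat broadcast with one stage per network level. Below is a minimal two-level sketch using plain MPI communicator splitting; the color function standing in for Globus topology discovery, and the four-ranks-per-site grouping, are assumptions, not MPICH-G2's implementation.

        #include <mpi.h>
        #include <stdio.h>

        /* Two-level topology-aware broadcast: first among one representative
         * per site (the expensive WAN hops happen once), then within each
         * site over the fast local network. Site discovery is faked with a
         * simple color function here. */
        int main(int argc, char **argv)
        {
            MPI_Init(&argc, &argv);
            int rank;
            MPI_Comm_rank(MPI_COMM_WORLD, &rank);

            int site = rank / 4;        /* pretend every 4 ranks share a site */
            MPI_Comm local;             /* communicator within a site */
            MPI_Comm_split(MPI_COMM_WORLD, site, rank, &local);

            int lrank;
            MPI_Comm_rank(local, &lrank);
            MPI_Comm leaders;           /* one representative per site */
            MPI_Comm_split(MPI_COMM_WORLD, lrank == 0 ? 0 : MPI_UNDEFINED,
                           rank, &leaders);

            double payload = (rank == 0) ? 3.14 : 0.0;
            if (leaders != MPI_COMM_NULL)            /* stage 1: across sites */
                MPI_Bcast(&payload, 1, MPI_DOUBLE, 0, leaders);
            MPI_Bcast(&payload, 1, MPI_DOUBLE, 0, local); /* stage 2: in-site */

            printf("rank %d got %f\n", rank, payload);
            MPI_Finalize();
            return 0;
        }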

  6. Fair share on high performance computing systems : what does fair really mean?

    SciTech Connect

    Clearwater, Scott Harvey; Kleban, Stephen David

    2003-03-01

    We report on a performance evaluation of a Fair Share system at the ASCI Blue Mountain supercomputer cluster. We study the impacts of share allocation under Fair Share on wait times and expansion factor. We also measure the Service Ratio, a typical figure of merit for Fair Share systems, with respect to a number of job parameters. We conclude that Fair Share does little to alter important performance metrics such as expansion factor. This leads to the question of what Fair Share means on cluster machines. The essential difference between Fair Share on a uni-processor and on a cluster is that the workload on a cluster is not fungible in space or time. We find that cluster machines must be highly utilized and must support checkpointing in order for Fair Share to function in a manner closer to the spirit in which it was originally developed.
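
    The expansion factor studied above is conventionally defined as response time relative to ideal run time; a hedged statement of the usual definition, which the abstract does not spell out:

        X = \frac{T_{\text{wait}} + T_{\text{run}}}{T_{\text{run}}} \ge 1

    so X = 1 means a job never waited, and Fair Share leaving X largely unchanged means waits stayed proportionally the same.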

  7. Money for Research, Not for Energy Bills: Finding Energy and Cost Savings in High Performance Computer Facility Designs

    SciTech Connect

    Drewmark Communications; Sartor, Dale; Wilson, Mark

    2010-07-01

    High-performance computing facilities in the United States consume an enormous amount of electricity, cutting into research budgets and challenging public- and private-sector efforts to reduce energy consumption and meet environmental goals. However, these facilities can greatly reduce their energy demand through energy-efficient design of the facility itself. Using a case study of a facility under design, this article discusses strategies and technologies that can be used to help achieve energy reductions.

  8. Coordinated Fault-Tolerance for High-Performance Computing Final Project Report

    SciTech Connect

    Panda, Dhabaleswar Kumar; Beckman, Pete

    2011-07-28

    implementations on top of existing publish-subscribe tools. We enhanced the intrinsic fault-tolerance capabilities of representative implementations of a variety of key HPC software subsystems and integrated them with the FTB. Targeted software subsystems included: MPI communication libraries, checkpoint/restart libraries, resource managers and job schedulers, and system monitoring tools. Leveraging the aforementioned infrastructure, as well as developing and utilizing additional tools, we examined issues associated with expanded, end-to-end fault response from both system and application viewpoints. From the standpoint of system operations, we investigated log and root cause analysis, anomaly detection and fault prediction, and generalized notification mechanisms. Our applications work included libraries for fault-tolerant linear algebra, application frameworks for coupled multiphysics applications, and external frameworks to support monitoring and response for general applications. Our final goal was to engage the high-end computing community to increase awareness of tools and issues around coordinated end-to-end fault management.

  9. High-Performance Computation of Distributed-Memory Parallel 3D Voronoi and Delaunay Tessellation

    SciTech Connect

    Peterka, Tom; Morozov, Dmitriy; Phillips, Carolyn

    2014-11-14

    Computing a Voronoi or Delaunay tessellation from a set of points is a core part of the analysis of many simulated and measured datasets: N-body simulations, molecular dynamics codes, and LIDAR point clouds are just a few examples. Such computational geometry methods are common in data analysis and visualization; but as the scale of simulations and observations surpasses billions of particles, the existing serial and shared-memory algorithms no longer suffice. A distributed-memory scalable parallel algorithm is the only feasible approach. The primary contribution of this paper is a new parallel Delaunay and Voronoi tessellation algorithm that automatically determines which neighbor points need to be exchanged among the subdomains of a spatial decomposition. Other contributions include periodic and wall boundary conditions, comparison of our method using two popular serial libraries, and application to numerous science datasets.

  10. HPGMG 1.0: A Benchmark for Ranking High Performance Computing Systems

    SciTech Connect

    Adams, Mark; Brown, Jed; Shalf, John; Straalen, Brian Van; Strohmaier, Erich; Williams, Sam

    2014-05-05

    This document provides an overview of the benchmark HPGMG for ranking large scale general purpose computers for use on the Top500 list [8]. We provide a rationale for the need for a replacement for the current metric HPL, some background on the Top500 list, and the challenges of developing such a metric; we discuss our design philosophy and methodology and give an overview of the specification of the benchmark. The primary documentation, with maintained details on the specification, can be found at hpgmg.org, and the Wiki and the benchmark code itself can be found in the repository https://bitbucket.org/hpgmg/hpgmg.

  11. Complex matrix multiplication operations with data pre-conditioning in a high performance computing architecture

    DOEpatents

    Eichenberger, Alexandre E; Gschwind, Michael K; Gunnels, John A

    2014-02-11

    Mechanisms for performing a complex matrix multiplication operation are provided. A vector load operation is performed to load a first vector operand of the complex matrix multiplication operation to a first target vector register. The first vector operand comprises the real and imaginary parts of a first complex vector value. A complex load-and-splat operation is performed to load a second complex vector value of a second vector operand and replicate the second complex vector value within a second target vector register. The second complex vector value has real and imaginary parts. A cross multiply-add operation is performed on elements of the first target vector register and elements of the second target vector register to generate a partial product of the complex matrix multiplication operation. The partial product is accumulated with other partial products, and the resulting accumulated partial product is stored in a result vector register.
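
    The cross multiply-add in the claim above expands the usual complex product. Below is a plain-C sketch with interleaved real/imaginary storage; the layout and names are illustrative assumptions, not the patented registers.

        #include <stdio.h>

        #define N 2   /* complex elements per vector (illustrative) */

        /* acc += a * b over interleaved complex vectors:
         * (ar + ai*i)(br + bi*i) = (ar*br - ai*bi) + (ar*bi + ai*br)*i.
         * The four scalar products are the "cross multiply" of the claim. */
        void cmadd(const double a[2*N], const double b[2*N], double acc[2*N])
        {
            for (int k = 0; k < N; k++) {
                double ar = a[2*k], ai = a[2*k+1];
                double br = b[2*k], bi = b[2*k+1];
                acc[2*k]   += ar*br - ai*bi;   /* real part of partial product */
                acc[2*k+1] += ar*bi + ai*br;   /* imaginary part */
            }
        }

        int main(void)
        {
            /* splat of one complex value (1+2i) against a vector operand */
            double a[2*N]   = {1, 2, 1, 2};    /* (1+2i) replicated */
            double b[2*N]   = {3, 4, 5, 6};    /* (3+4i), (5+6i) */
            double acc[2*N] = {0};
            cmadd(a, b, acc);                  /* -5+10i, -7+16i */
            printf("(%g%+gi) (%g%+gi)\n", acc[0], acc[1], acc[2], acc[3]);
            return 0;
        }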

  12. Harnessing the Department of Energy’s High-Performance Computing Expertise to Strengthen the U.S. Chemical Enterprise

    SciTech Connect

    Dixon, David A.; Dupuis, Michel; Garrett, Bruce C.; Neaton, Jeffrey B.; Plata, Charity; Tarr, Matthew A.; Tomb, Jean-Francois; Golab, Joseph T.

    2012-01-17

    High-performance computing (HPC) is one area where the DOE has developed extensive expertise and capability. However, this expertise currently is not properly shared with or used by the private sector to speed product development, enable industry to move rapidly into new areas, and improve product quality. Such use would lead to substantial competitive advantages in global markets and yield important economic returns for the United States. To stimulate the dissemination of DOE's HPC expertise, the Council for Chemical Research (CCR) and the DOE jointly held a workshop on this topic. Four important energy topic areas were chosen as the focus of the meeting: Biomass/Bioenergy, Catalytic Materials, Energy Storage, and Photovoltaics. Academic, industrial, and government experts in these topic areas participated in the workshop to identify industry needs, evaluate the current state of expertise, offer proposed actions and strategies, and forecast the expected benefits of implementing those strategies.

  13. High Performance Computing

    U.S. Department of Energy (DOE) - all webpages (Extended Search)

    activities span repeated lifetimes of supercomputing systems and infrastructure: Defining Future Environments Communication and collaborations with industry and academia to follow...

  14. High Performance Computing

    U.S. Department of Energy (DOE) - all webpages (Extended Search)

    when possible - Automatically using optimization methods. CONSTRUCTING REDUCED SCHEMES: GENETIC ALGORITHM PRINCIPLE Initial population FITNESS EVALUATION of each individual F ...

  15. Application of High Performance Computing for Simulating Cycle-to-Cycle Variation in Dual-Fuel Combustion Engines

    DOE PAGES [OSTI]

    Jupudi, Ravichandra S.; Finney, Charles E.A.; Primus, Roy; Wijeyakulasuriya, Sameera; Klingbeil, Adam E.; Tamma, Bhaskar; Stoyanov, Miroslav K.

    2016-04-05

    Interest in operational cost reduction is driving engine manufacturers to consider lower-cost fuel substitution in heavy-duty diesel engines. These dual-fuel (DF) engines could be operated either in diesel-only mode or with premixed natural gas (NG) ignited by a pilot flame of compression-ignited, direct-injected diesel fuel. One promising application is that of large-bore, medium-speed engines such as those used in locomotives. With realistic natural gas substitution levels in the fleet of locomotives currently in service, such fuel substitution could result in billions of dollars of savings annually in the US alone. However, under certain conditions, dual-fuel operation can result in increased cycle-to-cycle variability (CCV) during combustion, resulting in variations in cylinder pressure and work extraction. In certain situations, the CCV of dual-fuel operation can be notably higher than that of diesel-only combustion under similar operating conditions. Excessive CCV can limit the NG substitution rate and operating range of a dual-fuel engine by increasing emissions and reducing engine stability, reliability, and fuel efficiency via incomplete natural gas combustion. Running multiple engine cycles in series to simulate CCV can be quite time consuming. Hence, innovative modelling techniques and large computing resources are needed to investigate the factors affecting CCV in dual-fuel engines. This paper discusses the use of the High Performance Computing resource Titan, at the Oak Ridge Leadership Computing Facility at Oak Ridge National Laboratory, to investigate cycle-to-cycle combustion variability of a dual-fuel engine. The CONVERGE CFD software was used to simulate multiple, parallel single cycles of dual-fuel combustion with perturbed operating parameters and boundary conditions. These perturbations are imposed according to a sparse-grid sampling of the parameter space. The sampling scheme chosen is similar to a design of experiments method

  16. SU-E-T-531: Performance Evaluation of Multithreaded Geant4 for Proton Therapy Dose Calculations in a High Performance Computing Facility

    SciTech Connect

    Shin, J; Coss, D; McMurry, J; Farr, J [St. Jude Children's Research Hospital, Memphis, TN (United States); Faddegon, B [UC San Francisco, San Francisco, CA (United States)

    2014-06-01

    Purpose: To evaluate the efficiency of multithreaded Geant4 (Geant4-MT, version 10.0) for proton Monte Carlo dose calculations using a high performance computing facility. Methods: Geant4-MT was used to calculate 3D dose distributions in 1 x 1 x 1 mm3 voxels in a water phantom and a patient's head with a 150 MeV proton beam covering approximately 55 cm2 in the water phantom. Three timestamps were measured on the fly to separately analyze the required time for initialization (which cannot be parallelized), the processing time of individual threads, and the completion time. Scalability of averaged processing time per thread was calculated as a function of thread number (1, 100, 150, and 200) for both 1 M and 50 M histories. The total memory usage was recorded. Results: Simulations with 50 M histories were fastest with 100 threads, taking approximately 1.3 hours and 6 hours for the water phantom and the CT data, respectively, with better than 1.0% statistical uncertainty. The calculations show 1/N scalability in the event loops for both cases. The gains from parallel calculations started to decrease with 150 threads. The memory usage increases linearly with the number of threads. No critical failures were observed during the simulations. Conclusion: Multithreading in Geant4-MT decreased simulation time in proton dose distribution calculations by factors of 64 and 54 at a near-optimal 100 threads for the water phantom and the patient's data, respectively. Further simulations will be done to determine the efficiency at the optimal thread number. Considering the trend of computer architecture development, utilizing Geant4-MT for radiotherapy simulations is an excellent cost-effective alternative to a distributed batch queuing system. However, because the scalability depends highly on simulation details, i.e., the ratio of the processing time of one event to the waiting time to access the shared event queue, a performance evaluation as described is recommended.
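
    The observed behavior matches a serial-initialization-plus-parallel-event-loop model. A hedged Amdahl-style statement, in our notation rather than the abstract's:

        T(N) \approx T_{\text{init}} + \frac{T_{\text{events}}}{N},
        \qquad
        M(N) \approx M_0 + N m

    with speedup flattening once T_init and contention on the shared event queue dominate (as seen beyond 100 threads), and memory growing linearly with the per-thread increment m.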

  17. DOE High Performance Computing for Manufacturing (HPC4Mfg) Program Seeks To Fund New Proposals To Jumpstart Energy Technologies

    Energy.gov [DOE]

    A new U.S. Department of Energy (DOE) program designed to spur the use of high performance supercomputers to advance U.S. manufacturing is now seeking a second round of proposals from industry to compete for approximately $3 million in new funding.

  18. Lawrence Livermore National Laboratories Perspective on Code Development and High Performance Computing Resources in Support of the National HED/ICF Effort

    SciTech Connect

    Clouse, C. J.; Edwards, M. J.; McCoy, M. G.; Marinak, M. M.; Verdon, C. P.

    2015-07-07

    Through its Advanced Scientific Computing (ASC) and Inertial Confinement Fusion (ICF) code development efforts, Lawrence Livermore National Laboratory (LLNL) provides a world-leading numerical simulation capability for the national HED/ICF program in support of the Stockpile Stewardship Program (SSP). In addition, the ASC effort provides the high performance computing platform capabilities upon which these codes are run. LLNL remains committed to, and will work with, the national HED/ICF program community to help ensure that numerical simulation needs are met and to make those capabilities available, consistent with programmatic priorities and available resources.

  19. Using High Performance Computing to Understand Roles of Labile and Nonlabile U(VI) on Hanford 300 Area Plume Longevity

    SciTech Connect

    Lichtner, Peter C.; Hammond, Glenn E.

    2012-07-28

    Evolution of a hexavalent uranium [U(VI)] plume at the Hanford 300 Area bordering the Columbia River is investigated to evaluate the roles of labile and nonlabile forms of U(VI) in the longevity of the plume. A high-fidelity, three-dimensional, field-scale, reactive flow and transport model is used to represent the system. The Richards equation, coupled to multicomponent reactive transport equations, is solved for times up to 100 years, taking into account rapid fluctuations in the Columbia River stage that result in pulse releases of U(VI) into the river. The peta-scale computer code PFLOTRAN, developed under a DOE SciDAC-2 project, is employed in the simulations and executed on ORNL's Cray XT5 supercomputer Jaguar. Labile U(VI) is represented in the model through surface complexation reactions and its nonlabile form through dissolution of metatorbernite, used as a surrogate mineral. Initial conditions are constructed corresponding to the U(VI) plume already in place, to avoid uncertainties associated with the lack of historical data for the waste stream. The cumulative U(VI) flux into the river is compared for cases of equilibrium and multirate sorption models and for no sorption. The sensitivity of the U(VI) flux into the river to the initial plume configuration is investigated. The presence of nonlabile U(VI) was found to be essential in explaining the longevity of the U(VI) plume and the prolonged high U(VI) concentrations at the site, which exceed the EPA MCL for uranium.

  20. Final report for "High performance computing for advanced national electric power grid modeling and integration of solar generation resources", LDRD Project No. 149016.

    SciTech Connect

    Reno, Matthew J.; Riehm, Andrew Charles; Hoekstra, Robert John; Munoz-Ramirez, Karina; Stamp, Jason Edwin; Phillips, Laurence R.; Adams, Brian M.; Russo, Thomas V.; Oldfield, Ron A.; McLendon, William Clarence, III; Nelson, Jeffrey Scott; Hansen, Clifford W.; Richardson, Bryan T.; Stein, Joshua S.; Schoenwald, David Alan; Wolfenbarger, Paul R.

    2011-02-01

    Design and operation of the electric power grid (EPG) relies heavily on computational models. High-fidelity, full-order models are used to study transient phenomena on only a small part of the network. Reduced-order dynamic and power flow models are used when analyses involving thousands of nodes are required, because of the computational demands of simulating large numbers of nodes. The level of complexity of the future EPG will dramatically increase due to large-scale deployment of variable renewable generation, active load and distributed generation resources, adaptive protection and control systems, and price-responsive demand. High-fidelity modeling of this future grid will require significant advances in coupled, multi-scale tools and their use on high performance computing (HPC) platforms. This LDRD report demonstrates SNL's capability to apply HPC resources to three tasks: (1) high-fidelity, large-scale modeling of power system dynamics; (2) statistical assessment of grid security via Monte Carlo simulations of cyber attacks; and (3) development of models to predict the variability of solar resources at locations where little or no ground-based measurement is available.

  1. Geant4 Computing Performance Benchmarking and Monitoring

    DOE PAGES [OSTI]

    Dotti, Andrea; Elvira, V. Daniel; Folger, Gunter; Genser, Krzysztof; Jun, Soon Yung; Kowalkowski, James B.; Paterno, Marc

    2015-12-23

    Performance evaluation and analysis of large scale computing applications is essential for optimal use of resources. As detector simulation is one of the most compute intensive tasks and Geant4 is the simulation toolkit most widely used in contemporary high energy physics (HEP) experiments, it is important to monitor Geant4 through its development cycle for changes in computing performance and to identify problems and opportunities for code improvements. All Geant4 development and public releases are being profiled with a set of applications that utilize different input event samples, physics parameters, and detector configurations. Results from multiple benchmarking runs are compared to previous public and development reference releases to monitor CPU and memory usage. Observed changes are evaluated and correlated with code modifications. Besides the full summary of call stack and memory footprint, a detailed call graph analysis is available to Geant4 developers for further analysis. The set of software tools used in the performance evaluation procedure, both in sequential and multi-threaded modes, include FAST, IgProf and Open|Speedshop. In conclusion, the scalability of the CPU time and memory performance in multi-threaded application is evaluated by measuring event throughput and memory gain as a function of the number of threads for selected event samples.

  2. Geant4 Computing Performance Benchmarking and Monitoring

    SciTech Connect

    Dotti, Andrea; Elvira, V. Daniel; Folger, Gunter; Genser, Krzysztof; Jun, Soon Yung; Kowalkowski, James B.; Paterno, Marc

    2015-12-23

    Performance evaluation and analysis of large scale computing applications is essential for optimal use of resources. As detector simulation is one of the most compute intensive tasks and Geant4 is the simulation toolkit most widely used in contemporary high energy physics (HEP) experiments, it is important to monitor Geant4 through its development cycle for changes in computing performance and to identify problems and opportunities for code improvements. All Geant4 development and public releases are being profiled with a set of applications that utilize different input event samples, physics parameters, and detector configurations. Results from multiple benchmarking runs are compared to previous public and development reference releases to monitor CPU and memory usage. Observed changes are evaluated and correlated with code modifications. Besides the full summary of call stack and memory footprint, a detailed call graph analysis is available to Geant4 developers for further analysis. The set of software tools used in the performance evaluation procedure, both in sequential and multi-threaded modes, include FAST, IgProf and Open|Speedshop. In conclusion, the scalability of the CPU time and memory performance in multi-threaded application is evaluated by measuring event throughput and memory gain as a function of the number of threads for selected event samples.
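
    The throughput and memory-gain metrics in the closing sentence reduce to simple arithmetic over profiling output. A minimal sketch in Python, with placeholder numbers rather than Geant4 measurements:

    ```python
    # Sketch of the scalability bookkeeping described above. The run data
    # below are placeholders, not Geant4 measurements; real inputs would come
    # from profiling runs (events processed, wall time, resident memory).

    def event_throughput(n_events, wall_time_s):
        """Events processed per second for one run."""
        return n_events / wall_time_s

    def memory_gain(n_threads, rss_one_process_mb, rss_mt_mb):
        """Memory of N independent sequential processes relative to one
        N-thread process; values > 1 mean multi-threading saves memory."""
        return (n_threads * rss_one_process_mb) / rss_mt_mb

    if __name__ == "__main__":
        # (threads, events, wall seconds, multi-threaded RSS in MB)
        runs = [(1, 1000, 820.0, 950.0), (2, 2000, 840.0, 1150.0),
                (4, 4000, 880.0, 1500.0), (8, 8000, 930.0, 2200.0)]
        for threads, events, wall_s, rss_mb in runs:
            print(f"{threads} threads: {event_throughput(events, wall_s):.2f} evt/s, "
                  f"memory gain {memory_gain(threads, 950.0, rss_mb):.2f}x")
    ```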

  3. High Performance Sustainable Buildings

    U.S. Department of Energy (DOE) - all webpages (Extended Search)

    science and bioscience capabilities. Occupational Medicine will become a High Performance Sustainable Building in 2013. On the former County landfill, a photovoltaic array field...

  4. High Performance Network Monitoring

    SciTech Connect

    Martinez, Jesse E

    2012-08-10

    Network monitoring requires substantial data and error analysis to overcome issues with clusters. Zenoss and Splunk help monitor system log messages that report cluster issues to monitoring services. The Infiniband infrastructure on a number of clusters was upgraded to ibmon2, which requires different filters to report errors to system administrators. The focus for this summer was to: (1) implement ibmon2 filters on monitoring boxes to report system errors to system administrators using Zenoss and Splunk; (2) modify and improve scripts for monitoring and administrative usage; (3) learn more about networks, including services and maintenance for high performance computing systems; and (4) gain life experience working with professionals in real-world situations. Filters were created for clusters running ibmon2 v1.0.0-1; ten filters are currently implemented for ibmon2 using Python. The filters watch thresholds on port counters; above certain counts, they report errors to on-call system administrators and modify the grid to show the local host with the issue.
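
    A minimal sketch of this kind of threshold filter, assuming hypothetical counter names and a print-based notify hook (a production filter would forward alerts to Zenoss/Splunk and page the on-call administrator):

    ```python
    # Threshold filter over Infiniband port error counters, in the spirit of
    # the ibmon2 filters described above. Counter names and limits are
    # illustrative assumptions, not the actual ibmon2 configuration.

    THRESHOLDS = {
        "symbol_error_counter": 10,
        "link_downed_counter": 1,
        "port_rcv_errors": 25,
    }

    def notify(host, port, counter, value, limit):
        # Stand-in for the Zenoss/Splunk alerting path.
        print(f"ALERT {host} port {port}: {counter}={value} exceeds {limit}")

    def filter_counters(host, port, counters):
        """Report every counter whose value crosses its configured threshold."""
        for name, limit in THRESHOLDS.items():
            if counters.get(name, 0) > limit:
                notify(host, port, name, counters[name], limit)

    if __name__ == "__main__":
        filter_counters("node012", 1, {"symbol_error_counter": 42,
                                       "link_downed_counter": 0})
    ```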

  5. Using High Performance Computing to Examine the Processes of Neurogenesis Underlying Pattern Separation and Completion of Episodic Information.

    SciTech Connect

    Aimone, James Bradley; Bernard, Michael Lewis; Vineyard, Craig Michael; Verzi, Stephen Joseph.

    2014-10-01

    Adult neurogenesis in the hippocampus region of the brain is a neurobiological process that is believed to contribute to the brain's advanced abilities in complex pattern recognition and cognition. Here, we describe how realistic scale simulations of the neurogenesis process can offer both a unique perspective on the biological relevance of this process and confer computational insights that are suggestive of novel machine learning techniques. First, supercomputer based scaling studies of the neurogenesis process demonstrate how a small fraction of adult-born neurons have a uniquely larger impact in biologically realistic scaled networks. Second, we describe a novel technical approach by which the information content of ensembles of neurons can be estimated. Finally, we illustrate several examples of broader algorithmic impact of neurogenesis, including both extending existing machine learning approaches and novel approaches for intelligent sensing.

  6. System Software and Tools for High Performance Computing Environments: A report on the findings of the Pasadena Workshop, April 14--16, 1992

    SciTech Connect

    Sterling, T.; Messina, P.; Chen, M.

    1993-04-01

    The Pasadena Workshop on System Software and Tools for High Performance Computing Environments was held at the Jet Propulsion Laboratory from April 14 through April 16, 1992. The workshop was sponsored by a number of Federal agencies committed to the advancement of high performance computing (HPC) both as a means to advance their respective missions and as a national resource to enhance American productivity and competitiveness. Over a hundred experts in related fields from industry, academia, and government were invited to participate in this effort to assess the current status of software technology in support of HPC systems. The overall objectives of the workshop were to understand the requirements and current limitations of HPC software technology and to contribute to a basis for establishing new directions in research and development for software technology in HPC environments. This report includes reports written by the participants of the workshop's seven working groups. Materials presented at the workshop are reproduced in appendices. Additional chapters summarize the findings and analyze their implications for future directions in HPC software technology development.

  7. High Performance Sustainable Building

    Directives, Delegations, and Other Requirements [Office of Management (MA)]

    2011-11-09

    This Guide highlights the DOE O 413.3B drivers for incorporating high performance sustainable building (HPSB) principles into Critical Decisions 1 through 4 and provides guidance for implementing the Order's HPSB requirements.

  8. High Performance Sustainable Building

    Directives, Delegations, and Other Requirements [Office of Management (MA)]

    2011-11-09

    This Guide provides approaches for implementing the High Performance Sustainable Building (HPSB) requirements of DOE Order 413.3B, Program and Project Management for the Acquisition of Capital Assets. Supersedes DOE G 413.3-6.

  9. High Performance Sustainable Buildings

    U.S. Department of Energy (DOE) - all webpages (Extended Search)

    Buildings Goal 3: High Performance Sustainable Buildings Maintaining the conditions of a building improves the health of not only the surrounding ecosystems, but also the well-being of its occupants. Energy Conservation» Efficient Water Use & Management» High Performance Sustainable Buildings» Greening Transportation» Green Purchasing & Green Technology» Pollution Prevention» Science Serving Sustainability» ENVIRONMENTAL SUSTAINABILITY GOALS at LANL The Radiological Laboratory

  10. Application of high performance computing to automotive design and manufacturing: Composite materials modeling task technical manual for constitutive models for glass fiber-polymer matrix composites

    SciTech Connect

    Simunovic, S; Zacharia, T

    1997-11-01

    This report provides a theoretical background for three constitutive models for a continuous strand mat (CSM) glass fiber-thermoset polymer matrix composite. The models were developed during fiscal years 1994 through 1997 as a part of the Cooperative Research and Development Agreement, "Application of High-Performance Computing to Automotive Design and Manufacturing." The report gives the full derivation of the constitutive relations in the framework of the continuum program DYNA3D; the models have been used for the simulation and impact analysis of CSM composite tubes. The analysis of simulation and experimental results shows that the model based on the strain tensor split yields the most accurate results of the three implemented models. The parameters used in the models and their derivation from physical tests are documented.

  11. Economic Model For a Return on Investment Analysis of United States Government High Performance Computing (HPC) Research and Development (R & D) Investment

    SciTech Connect

    Joseph, Earl C.; Conway, Steve; Dekate, Chirag

    2013-09-30

    This study investigated how high-performance computing (HPC) investments can improve economic success and increase scientific innovation. This research focused on the common good and provided uses for DOE, other government agencies, industry, and academia. The study created two unique economic models and an innovation index: (1) a macroeconomic model that depicts the way HPC investments result in economic advancements in the form of ROI in revenue (GDP), profits (and cost savings), and jobs; (2) a macroeconomic model that depicts the way HPC investments result in basic and applied innovations, looking at variations by sector, industry, country, and organization size; and (3) a new innovation index that provides a means of measuring and comparing innovation levels. Key findings of the pilot study include: IDC collected the required data across a broad set of organizations, with enough detail to create these models and the innovation index. The research also developed an expansive list of HPC success stories.
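
    The headline quantity in such a model is straightforward: dollars of attributed revenue (or profit) per dollar of HPC investment. A toy sketch, with placeholder figures rather than IDC data:

    ```python
    # Toy ROI arithmetic in the spirit of the macroeconomic model described
    # above. All figures are placeholders; the study's models attribute
    # revenue/profit to HPC-enabled projects before dividing by investment.

    def hpc_roi(investment, attributed_revenue, attributed_profit):
        """Revenue and profit returned per dollar of HPC investment."""
        return attributed_revenue / investment, attributed_profit / investment

    if __name__ == "__main__":
        rev, profit = hpc_roi(investment=1.0e6,
                              attributed_revenue=2.5e8,
                              attributed_profit=3.0e7)
        print(f"${rev:.0f} revenue and ${profit:.0f} profit per $1 invested")
    ```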

  12. High Performance Sustainable Building

    Directives, Delegations, and Other Requirements [Office of Management (MA)]

    2008-06-20

    The guide supports DOE O 413.3A and provides useful information on the incorporation of high performance sustainable building principles into building-related General Plant Projects and Institutional General Plant Projects at DOE sites. Canceled by DOE G 413.3-6A. Does not cancel other directives.

  13. Task-parallel message passing interface implementation of Autodock4 for docking of very large databases of compounds using high-performance super-computers

    SciTech Connect

    Collignon, Barbara C; Schultz, Roland; Smith, Jeremy C; Baudry, Jerome Y

    2011-01-01

    A message passing interface (MPI)-based implementation (Autodock4.lga.MPI) of the grid-based docking program Autodock4 has been developed to allow simultaneous and independent docking of multiple compounds on up to thousands of central processing units (CPUs) using the Lamarckian genetic algorithm. The MPI version reads a single binary file containing precalculated grids that represent the protein-ligand interactions, i.e., van der Waals, electrostatic, and desolvation potentials, and needs only two input parameter files for the entire docking run. In comparison, the serial version of Autodock4 reads ASCII grid files and requires one parameter file per compound. The modifications performed result in significantly reduced input/output activity compared with the serial version. Autodock4.lga.MPI scales up to 8192 CPUs with a maximal overhead of 16.3%, of which two thirds is due to input/output operations and one third originates from MPI operations. The optimal docking strategy, which minimizes docking CPU time without lowering the quality of the database enrichments, comprises the docking of ligands preordered from the most to the least flexible and the assignment of the number of energy evaluations as a function of the number of rotatable bonds. In 24 h, on 8192 high-performance computing CPUs, the present MPI version would allow docking to a rigid protein of about 300K small flexible compounds or 11 million rigid compounds.
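
    The task-parallel pattern is easy to illustrate. A minimal sketch using mpi4py, with a dummy dock_compound() standing in for the actual docking kernel (this mirrors the pattern described, not the Autodock4.lga.MPI source):

    ```python
    # Embarrassingly parallel docking: compounds are divided among MPI ranks,
    # each rank docks its share independently, results are gathered at rank 0.

    from mpi4py import MPI

    def dock_compound(compound_id):
        """Placeholder for one independent docking run; returns a score."""
        return float(compound_id % 7)  # dummy score

    def main():
        comm = MPI.COMM_WORLD
        rank, size = comm.Get_rank(), comm.Get_size()
        n_compounds = 1000
        # Static cyclic assignment: compound i is docked by rank i % size.
        my_scores = {i: dock_compound(i) for i in range(rank, n_compounds, size)}
        parts = comm.gather(my_scores, root=0)
        if rank == 0:
            merged = {k: v for part in parts for k, v in part.items()}
            print(f"docked {len(merged)} compounds on {size} ranks")

    if __name__ == "__main__":
        main()
    ```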

  14. High Performance Tools And Technologies

    SciTech Connect

    Collette, M R; Corey, I R; Johnson, J R

    2005-01-24

    The goal of this project was to evaluate the capability and limits of current scientific simulation development tools and technologies, with specific focus on their suitability for use with the next generation of scientific parallel applications and High Performance Computing (HPC) platforms. The opinions expressed in this document are those of the authors, and reflect the authors' current understanding of the functionality of the many tools investigated. As a deliverable for this effort, we present this report describing our findings, along with an associated spreadsheet outlining current capabilities and characteristics of leading and emerging tools in the high performance computing arena. The first chapter summarizes our findings (which are detailed in the other chapters) and presents our conclusions, remarks, and anticipations for the future. In the second chapter, we detail how various teams in our local high performance community utilize HPC tools and technologies, and mention some common concerns they have about them. In the third chapter, we review the platforms currently or potentially available on which these tools and technologies can be used to help in software development. Subsequent chapters attempt to provide an exhaustive overview of the available parallel software development tools and technologies, including their strong and weak points and future concerns. We categorize them as debuggers, memory checkers, performance analysis tools, communication libraries, data visualization programs, and other parallel development aides. The last chapter contains our closing information. Included at the end of this paper is a table of the discussed development tools and their operational environment.

  15. A Comparison of Library Tracking Methods in High Performance

    U.S. Department of Energy (DOE) - all webpages (Extended Search)

    Library Tracking Methods in High Performance Computing Computer System Cluster and Networking Summer Institute 2013 Poster Seminar William Rosenberger (New Mexico Tech), Dennis...

  16. High Performance Window Retrofit

    SciTech Connect

    Shrestha, Som S; Hun, Diana E; Desjarlais, Andre Omer

    2013-12-01

    The US Department of Energy (DOE) Office of Energy Efficiency and Renewable Energy (EERE) and Traco partnered to develop cost-effective, high-performance windows for commercial buildings. The main performance requirement for these windows was an R-value of at least 5 ft2·°F·h/Btu. This project seeks to quantify the potential energy savings from installing these windows in commercial buildings that are at least 20 years old. To this end, we are conducting evaluations at a two-story test facility that is representative of a commercial building from the 1980s, and are gathering measurements on the performance of its windows before and after double-pane, clear-glazed units are upgraded with R5 windows. Additionally, we will use these data to calibrate EnergyPlus models that will allow us to extrapolate results to other climates. Findings from this project will provide empirical data on the benefits of high-performance windows, which will help promote their adoption in new and existing commercial buildings. This report describes the experimental setup and includes some of the field and simulation results.

  17. High Performance Buildings Database

    DOE Data Explorer

    The High Performance Buildings Database is a shared resource for the building industry, a unique central repository of in-depth information and data on high-performance, green building projects across the United States and abroad. The database includes information on the energy use, environmental performance, design process, finances, and other aspects of each project. Members of the design and construction teams are listed, as are sources for additional information. In total, up to twelve screens of detailed information are provided for each project profile. Projects range in size from small single-family homes or tenant fit-outs within buildings to large commercial and institutional buildings and even entire campuses. The database is a data repository as well. A series of Web-based data-entry templates allows anyone to enter information about a building project into the database. Once a project has been submitted, each of the partner organizations can review the entry and choose whether or not to publish that particular project on its own Web site.

  18. Misleading Performance Claims in Parallel Computations

    SciTech Connect

    Bailey, David H.

    2009-05-29

    In a previous humorous note entitled 'Twelve Ways to Fool the Masses,' I outlined twelve common ways in which performance figures for technical computer systems can be distorted. In this paper and accompanying conference talk, I give a reprise of these twelve 'methods' and give some actual examples that have appeared in peer-reviewed literature in years past. I then propose guidelines for reporting performance, the adoption of which would raise the level of professionalism and reduce the level of confusion, not only in the world of device simulation but also in the larger arena of technical computing.

  19. High-performance steels

    SciTech Connect

    Barsom, J.M.

    1996-03-01

    Steel is the material of choice in structures such as storage tanks, gas and oil distribution pipelines, high-rise buildings, and bridges because of its strength, ductility, and fracture toughness, as well as its repairability and recyclability. Furthermore, these properties are continually being improved via advances in steelmaking, casting, rolling, and chemistry. Developments in steelmaking have led to alloys having low sulfur, sulfide shape control, and low hydrogen. They provide reduced chemical segregation, higher fracture toughness, better through-thickness and weld heat-affected zone properties, and lower susceptibility to hydrogen cracking. Processing has moved beyond traditional practices to designed combinations of controlled rolling and cooling known as thermomechanical control processes (TMCP). In fact, chemical composition control and TMCP now enable such precise adjustment of final properties that these alloys are now known as high-performance steels (HPS), engineered materials having properties tailored for specific applications.

  20. Connecting Performance Analysis and Visualization to Advance Extreme Scale Computing

    SciTech Connect

    Bremer, Peer-Timo; Mohr, Bernd; Schulz, Martin; Pascucci, Valerio; Gamblin, Todd; Brunst, Holger

    2015-07-29

    The characterization, modeling, analysis, and tuning of software performance has been a central topic in High Performance Computing (HPC) since its early beginnings. The overall goal is to make HPC software run faster on particular hardware, either through better scheduling, on-node resource utilization, or more efficient distributed communication.

  1. Evaluating iterative reconstruction performance in computed tomography

    SciTech Connect

    Chen, Baiyu; Solomon, Justin; Ramirez Giraldo, Juan Carlos; Samei, Ehsan

    2014-12-15

    Purpose: Iterative reconstruction (IR) offers notable advantages in computed tomography (CT). However, its performance characterization is complicated by its potentially nonlinear behavior, impacting performance in terms of specific tasks. This study aimed to evaluate the performance of IR with both task-specific and task-generic strategies. Methods: The performance of IR in CT was mathematically assessed with an observer model that predicted the detection accuracy in terms of the detectability index (d′). d′ was calculated based on the properties of the image noise and resolution, the observer, and the detection task. The characterizations of image noise and resolution were extended to accommodate the nonlinearity of IR. A library of tasks was mathematically modeled at a range of sizes (radius 1–4 mm), contrast levels (10–100 HU), and edge profiles (sharp and soft). Unique d′ values were calculated for each task with respect to five radiation exposure levels (volume CT dose index, CTDIvol: 3.4–64.8 mGy) and four reconstruction algorithms (filtered backprojection reconstruction, FBP; iterative reconstruction in imaging space, IRIS; and sinogram affirmed iterative reconstruction with strengths of 3 and 5, SAFIRE3 and SAFIRE5; all provided by Siemens Healthcare, Forchheim, Germany). The d′ values were translated into the areas under the receiver operating characteristic curve (AUC) to represent human observer performance. For each task and reconstruction algorithm, a threshold dose was derived as the minimum dose required to achieve a threshold AUC of 0.9. A task-specific dose reduction potential of IR was calculated as the difference between the threshold doses for IR and FBP. A task-generic comparison was further made between IR and FBP in terms of the percent of all tasks yielding an AUC higher than the threshold. Results: IR required less dose than FBP to achieve the threshold AUC. In general, SAFIRE5 showed the most significant dose reduction
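
    For reference, a common non-prewhitening form of the detectability index, computed from the task function $W$, the task transfer function (TTF), and the noise power spectrum (NPS); the study's exact observer model may differ in details such as eye filtering:

    $$ d'^2 = \frac{\left[\iint \lvert W(u,v)\rvert^2\,\mathrm{TTF}^2(u,v)\,du\,dv\right]^2}{\iint \lvert W(u,v)\rvert^2\,\mathrm{TTF}^2(u,v)\,\mathrm{NPS}(u,v)\,du\,dv} $$

    The nonlinearity of IR enters through TTF and NPS, which must be measured under task-specific contrast and dose conditions rather than assumed constant, as the abstract notes.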

  2. Elucidating geochemical response of shallow heterogeneous aquifers to CO2 leakage using high-performance computing: Implications for monitoring of CO2 sequestration

    SciTech Connect

    Navarre-Sitchler, Alexis K.; Maxwell, Reed M.; Siirila, Erica R.; Hammond, Glenn E.; Lichtner, Peter C.

    2013-03-01

    Predicting and quantifying impacts of potential carbon dioxide (CO2) leakage into shallow aquifers that overlie geologic CO2 storage formations is an important part of developing reliable carbon storage techniques. Leakage of CO2 through fractures, faults or faulty wellbores can reduce groundwater pH, inducing geochemical reactions that release solutes into the groundwater and pose a risk of degrading groundwater quality. In order to help quantify this risk, predictions of metal concentrations are needed during geologic storage of CO2. Here, we present regional-scale reactive transport simulations, at relatively fine-scale, of CO2 leakage into shallow aquifers run on the PFLOTRAN platform using high-performance computing. Multiple realizations of heterogeneous permeability distributions were generated using standard geostatistical methods. Increased statistical anisotropy of the permeability field resulted in more lateral and vertical spreading of the plume of impacted water, leading to increased Pb2+ (lead) concentrations and lower pH at a well down gradient of the CO2 leak. Pb2+ concentrations were higher in simulations where calcite was the source of Pb2+ compared to galena. The low solubility of galena effectively buffered the Pb2+ concentrations as galena reached saturation under reducing conditions along the flow path. In all cases, Pb2+ concentrations remained below the maximum contaminant level set by the EPA. Results from this study, compared to natural variability observed in aquifers, suggest that bicarbonate (HCO3) concentrations may be a better geochemical indicator of a CO2 leak under the conditions simulated here.
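
    A minimal sketch of how one realization of such a statistically anisotropic, lognormal permeability field might be generated (Gaussian smoothing of white noise is a simple surrogate for the standard geostatistical simulators; parameters are illustrative, not the paper's values):

    ```python
    # One realization of an anisotropic, lognormally distributed permeability
    # field: smooth white noise with different correlation lengths per axis,
    # normalize, then exponentiate around a geometric-mean permeability.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def permeability_field(shape=(128, 64), geo_mean=1e-12, sigma_ln=1.0,
                           corr=(12.0, 3.0), seed=0):
        """corr sets correlation lengths (grid cells) per axis; unequal values
        give the statistical anisotropy discussed above."""
        rng = np.random.default_rng(seed)
        field = gaussian_filter(rng.standard_normal(shape), sigma=corr)
        field /= field.std()                        # normalize to unit variance
        return geo_mean * np.exp(sigma_ln * field)  # lognormal k in m^2

    if __name__ == "__main__":
        k = permeability_field()
        print(k.shape, float(k.min()), float(k.max()))
    ```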

  3. Construction of Blaze at the University of Illinois at Chicago: A Shared, High-Performance, Visual Computer for Next-Generation Cyberinfrastructure-Accelerated Scientific, Engineering, Medical and Public Policy Research

    SciTech Connect

    Brown, Maxine D.; Leigh, Jason

    2014-02-17

    The Blaze high-performance visual computing system serves the high-performance computing research and education needs of the University of Illinois at Chicago (UIC). Blaze consists of a state-of-the-art, networked computer cluster and an ultra-high-resolution visualization system called CAVE2(TM) that is currently not available anywhere else in Illinois. This system is connected via a high-speed 100-Gigabit network to the State of Illinois' I-WIRE optical network, as well as to national and international high-speed networks such as Internet2 and the Global Lambda Integrated Facility. This enables Blaze to serve as an on-ramp to national cyberinfrastructure, such as the National Science Foundation's Blue Waters petascale computer at the National Center for Supercomputing Applications at the University of Illinois at Urbana-Champaign and the Department of Energy's Argonne Leadership Computing Facility (ALCF) at Argonne National Laboratory. DOE award # DE-SC005067, leveraged with NSF award #CNS-0959053 for "Development of the Next-Generation CAVE Virtual Environment (NG-CAVE)," enabled us to create a first-of-its-kind high-performance visual computing system. The UIC Electronic Visualization Laboratory (EVL) worked with two U.S. companies to advance their commercial products and maintain U.S. leadership in the global information technology economy. New applications enabled by the CAVE2/Blaze visual computing system are advancing scientific research and education in the U.S. and globally, and helping to train the next-generation workforce.

  4. Computational Tools to Assess Turbine Biological Performance

    SciTech Connect

    Richmond, Marshall C.; Serkowski, John A.; Rakowski, Cynthia L.; Strickler, Brad; Weisbeck, Molly; Dotson, Curtis L.

    2014-07-24

    Public Utility District No. 2 of Grant County (GCPUD) operates the Priest Rapids Dam (PRD), a hydroelectric facility on the Columbia River in Washington State. The dam contains 10 Kaplan-type turbine units that are now more than 50 years old. Plans are underway to refit these aging turbines with new runners. The Columbia River at PRD is a migratory pathway for several species of juvenile and adult salmonids, so passage of fish through the dam is a major consideration when upgrading the turbines. In this paper, a method for turbine biological performance assessment (BioPA) is demonstrated. Using this method, a suite of biological performance indicators is computed based on simulated data from a CFD model of a proposed turbine design. Each performance indicator is a measure of the probability of exposure to a certain dose of an injury mechanism. Using known relationships between the dose of an injury mechanism and frequency of injury (dose–response) from laboratory or field studies, the likelihood of fish injury for a turbine design can be computed from the performance indicator. By comparing the values of the indicators from proposed designs, the engineer can identify the more-promising alternatives. We present an application of the BioPA method for baseline risk assessment calculations for the existing Kaplan turbines at PRD that will be used as the minimum biological performance that a proposed new design must achieve.
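
    The BioPA calculation described folds an exposure distribution through a dose-response relationship. A minimal sketch, with a hypothetical log-logistic dose-response curve (the parameters are illustrative, not values from the study):

    ```python
    # Estimate an injury probability indicator from sampled doses of one
    # injury mechanism (e.g., strain rate along simulated fish trajectories
    # through a CFD turbine model) and a dose-response curve.

    import numpy as np

    def injury_probability(doses, d50, slope):
        """Mean injury probability over dose samples (log-logistic response)."""
        p = 1.0 / (1.0 + (d50 / np.maximum(doses, 1e-12)) ** slope)
        return float(p.mean())

    if __name__ == "__main__":
        rng = np.random.default_rng(1)
        doses = rng.lognormal(mean=3.0, sigma=0.8, size=10_000)  # exposure samples
        print(f"P(injury) ~= {injury_probability(doses, d50=60.0, slope=3.0):.4f}")
    ```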

  5. High Performance Energy Management

    U.S. Department of Energy (DOE) - all webpages (Extended Search)

    Performance Energy Management Reduce energy use and meet your business objectives By applying continuous improvement practices similar to Lean and Six Sigma, the BPA Energy Smart...

  6. High Performance Window Attachments

    Energy.gov [DOE] (indexed site)

    Statement: * A wide range of residential window attachments are available, but they ... to model wide range of window coverings * Performed window coverings ...

  7. Salishan: Conference on High Speed Computing

    U.S. Department of Energy (DOE) - all webpages (Extended Search)

    Salishan Conference Bringing together the high-speed computing community Los Alamos National Laboratory Salishan Conference Menu Salishan Conference The Salishan Conference on High-Speed Computing was founded in 1981 as a means of getting experts in computer architecture, languages, and algorithms together to improve communications, develop collaborations, solve problems of mutual interest, and provide effective leadership in the field of high-speed computing. Organized by a Tri-Lab committee

  8. Using High Performance Libraries and Tools

    U.S. Department of Energy (DOE) - all webpages (Extended Search)

    High Performance Libraries and Tools Using High Performance Libraries and Tools Memkind Library on Edison The memkind library is a user extensible heap manager built on top of jemalloc which enables control of memory characteristics and a partitioning of the heap between kinds of memory (including user defined kinds of memory). This library can be used to simulate the benefit of the high bandwidth memory that will be available on KNL system on the dual socket Edison compute nodes (the two

  9. Thermoelectrics Partnership: High Performance Thermoelectric...

    Energy.gov [DOE] (indexed site)

    70shakouri2011p.pdf (856.16 KB) More Documents & Publications High Performance Zintl Phase TE Materials with Embedded Particles High performance Zintl phase TE materials with ...

  10. Connecting HPC and High Performance

    U.S. Department of Energy (DOE) - all webpages (Extended Search)

    HPC and High Performance Networks for Scientists and Researchers SC15 Austin, Texas November 18, 2015 1 Agenda 2 * Welcome and introductions * BoF Goals * Overview of National Research & Education Networks at work Globally * Discuss needs, challenges for leveraging HPC and high-performance networks * HPC/HTC pre-SC15 ESnet/GEANT/Internet2 survey results overview * Next steps discussion * Closing and Thank You BoF: Connecting HPC and High Performance Networks for Scientists and Researchers

  11. Exploration of multi-block polymer morphologies using high performance...

    U.S. Department of Energy (DOE) - all webpages (Extended Search)

    Exploration of multi-block polymer morphologies using high performance computing Modern material design increasingly relies on controlling small scale morphologies. Multi-block...

  12. Continuous Monitoring And Cyber Security For High Performance...

    Office of Scientific and Technical Information (OSTI)

    Continuous Monitoring And Cyber Security For High Performance Computing Malin, Alex B. Los Alamos National Laboratory; Van Heule, Graham K. Los Alamos National Laboratory...

  13. Illustrating the future prediction of performance based on computer...

    Office of Scientific and Technical Information (OSTI)

    Illustrating the future prediction of performance based on computer code, physical experiments, and critical performance parameter samples Citation Details In-Document Search ...

  14. Virtual Design Studio (VDS) - Development of an Integrated Computer Simulation Environment for Performance Based Design of Very-Low Energy and High IEQ Buildings

    SciTech Connect

    Chen, Yixing; Zhang, Jianshun; Pelken, Michael; Gu, Lixing; Rice, Danial; Meng, Zhaozhou; Semahegn, Shewangizaw; Feng, Wei; Ling, Francesca; Shi, Jun; Henderson, Hugh

    2013-09-01

    Executive Summary: The objective of this study was to develop a "Virtual Design Studio (VDS)": a software platform for integrated, coordinated, and optimized design of green building systems with low energy consumption, high indoor environmental quality (IEQ), and a high level of sustainability. The VDS is intended to assist collaborating architects, engineers, and project management team members from the early phases through the detailed building design stages. It can be used to plan design tasks and workflow, and to evaluate the potential impacts of various green building strategies on building performance, using state-of-the-art simulation tools as well as industrial/professional standards and guidelines for green building system design. Engaged in the development of VDS was a multi-disciplinary research team that included architects, engineers, and software developers. Based on a review and analysis of how existing professional practices in building systems design operate, particularly those used in the U.S., Germany, and the UK, a generic process for performance-based building design, construction, and operation was proposed. It divides the whole process into five distinct stages: Assess, Define, Design, Apply, and Monitoring (ADDAM). The current VDS is focused on the first three stages. The VDS considers building design as a multi-dimensional process, involving multiple design teams, design factors, and design stages. The intersection among these three dimensions defines a specific design task in terms of "who", "what" and "when". It also considers building design as a multi-objective process that aims to enhance five aspects of performance for green building systems: site sustainability, materials and resource efficiency, water utilization efficiency, energy efficiency and impacts on the atmospheric environment, and IEQ. The current VDS development has been limited to energy efficiency and IEQ performance, with particular focus

  15. High Performance Home Cost Performance Trade-Offs: Production...

    Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)

    High Performance Home Cost Performance Trade-Offs: Production Builders - Building America Top Innovation High Performance Home Cost Performance Trade-Offs: Production Builders - ...

  16. High energy neutron Computed Tomography developed

    U.S. Department of Energy (DOE) - all webpages (Extended Search)

    LANSCE now has a high-energy neutron imaging capability that can be deployed on WNR flight paths for unclassified and classified objects. May 9, 2014. Neutron tomography horizontal "slice" of a tungsten and polyethylene test object containing tungsten carbide BBs.

  17. Hydro Review: Computational Tools to Assess Turbine Biological Performance

    Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)

    This review covers the BioPA method used to analyze the biological performance of proposed designs to help ensure the safety of fish passing through the turbines at the Priest Rapids Dam in Grant County, Washington. Computational Tools to Assess Turbine Biological Performance (483.71 KB)

  18. A High Performance Computing Platform for Performing High-Volume...

    Office of Scientific and Technical Information (OSTI)

    developed external to the core simulation engine without consideration for ease of use. This has created a technical gap for applying HPC-based tools to today's power grid studies. ...

  19. Performance comparison of desktop multiprocessing and workstation cluster computing

    SciTech Connect

    Crandall, P.E.; Sumithasri, E.V.; Clement, M.A.

    1996-12-31

    This paper describes our initial findings regarding the performance trade-offs between cluster computing, where the participating processors are independent machines connected by a high speed switch, and desktop multiprocessing, where the processors reside within a single workstation and share a common memory. While interprocessor communication time has typically been cited as the limiting factor on performance in the cluster, bus and memory contention have had similar effects in shared memory systems. The advent of high speed interconnects and improved bus and memory access speeds have enhanced the performance curves of both platforms. We present comparisons of the execution times of three applications with varying levels of data dependencies (numerical integration, matrix multiplication, and Jacobi iteration) across three environments: the PVM distributed memory model, the PVM shared memory model, and the Solaris threads package.

  20. Computing

    U.S. Department of Energy (DOE) - all webpages (Extended Search)

    Providing world-class high performance computing capability that enables unsurpassed solutions to complex problems of strategic national interest. Since 1978, Los Alamos has won 137 of the prestigious R&D 100 Awards, and the Laboratory was honored for industry collaboration in the 2016 HPCwire Awards.

  1. Performing an allreduce operation on a plurality of compute nodes of a parallel computer

    DOEpatents

    Faraj, Ahmad

    2012-04-17

    Methods, apparatus, and products are disclosed for performing an allreduce operation on a plurality of compute nodes of a parallel computer. Each compute node includes at least two processing cores. Each processing core has contribution data for the allreduce operation. Performing an allreduce operation on a plurality of compute nodes of a parallel computer includes: establishing one or more logical rings among the compute nodes, each logical ring including at least one processing core from each compute node; performing, for each logical ring, a global allreduce operation using the contribution data for the processing cores included in that logical ring, yielding a global allreduce result for each processing core included in that logical ring; and performing, for each compute node, a local allreduce operation using the global allreduce results for each processing core on that compute node.
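
    The two-phase structure described (a global allreduce within each logical ring, followed by a local allreduce on each node) is easy to mirror in a few lines. A pure-Python sketch with summation as the reduction operator; the data layout and ring construction are simplified illustrations, not the patented implementation:

    ```python
    # Two-phase allreduce over a (nodes x cores) grid of contributions:
    # logical ring r holds core r of every node; each ring reduces globally,
    # then each node combines its cores' ring results locally.

    def ring_allreduce(values):
        """Every member of one logical ring obtains the ring-wide sum."""
        total = sum(values)
        return [total] * len(values)

    def allreduce(contrib):
        """contrib[node][core] -> value; returns the per-node global result."""
        n_nodes, n_cores = len(contrib), len(contrib[0])
        # Phase 1: global allreduce within each logical ring.
        ring_results = [ring_allreduce([contrib[n][r] for n in range(n_nodes)])
                        for r in range(n_cores)]
        # Phase 2: local allreduce across the cores of each node.
        return [sum(ring_results[r][n] for r in range(n_cores))
                for n in range(n_nodes)]

    if __name__ == "__main__":
        contrib = [[1, 2], [3, 4], [5, 6]]  # 3 nodes x 2 cores
        print(allreduce(contrib))           # every node obtains 21
    ```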

  2. High Performance and Sustainable Buildings Guidance | Department...

    Office of Environmental Management (EM)

    High Performance and Sustainable Buildings Guidance High Performance and Sustainable Buildings Guidance High Performance and Sustainable Buildings Guidance (192.76 KB) More ...

  3. Illustrating the future prediction of performance based on computer...

    Office of Scientific and Technical Information (OSTI)

    Illustrating the future prediction of performance based on computer code, physical ... Citation Details In-Document Search Title: Illustrating the future prediction of ...

  4. Computing in high-energy physics

    DOE PAGES [OSTI]

    Mount, Richard P.

    2016-05-31

    I present a very personalized journey through more than three decades of computing for experimental high-energy physics, pointing out the enduring lessons that I learned. This is followed by a vision of how the computing environment will evolve in the coming ten years and the technical challenges that this will bring. I then address the scale and cost of high-energy physics software and examine the many current and future challenges, particularly those of management, funding and software-lifecycle management. Lastly, I describe recent developments aimed at improving the overall coherence of high-energy physics software.

  5. Teuchos C++ memory management classes, idioms, and related topics, the complete reference : a comprehensive strategy for safe and efficient memory management in C++ for high performance computing.

    SciTech Connect

    Bartlett, Roscoe Ainsworth

    2010-05-01

    The ubiquitous use of raw pointers in higher-level code is the primary cause of all memory usage problems and memory leaks in C++ programs. This paper describes what might be considered a radical approach to the problem, which is to encapsulate the use of all raw pointers and all raw calls to new and delete in higher-level C++ code. Instead, a set of cooperating template classes developed in the Trilinos package Teuchos are used to encapsulate every use of raw C++ pointers in every use case where it appears in high-level code. Included in the set of memory management classes is the typical reference-counted smart pointer class similar to boost::shared_ptr (and therefore C++0x std::shared_ptr). However, what is missing in boost and the new standard library are non-reference-counted classes for the remaining use cases where raw C++ pointers would need to be used. These classes have a debug build mode where nearly all programmer errors are caught and gracefully reported at runtime. The default optimized build mode strips all runtime checks and allows the code to perform as efficiently as raw C++ pointers with reasonable usage. Also included is a novel approach for dealing with the circular references problem that imparts little extra overhead and is almost completely invisible to most of the code (unlike the boost and therefore C++0x approach). Rather than being a radical approach, encapsulating all raw C++ pointers is simply the logical progression of a trend in the C++ development and standards community that started with std::auto_ptr and is continued (but not finished) with std::shared_ptr in C++0x. Using the Teuchos reference-counted memory management classes allows one to remove unnecessary constraints in the use of objects by removing arbitrary lifetime ordering constraints, which are a type of unnecessary coupling [23]. The code one writes with these classes will be more likely to be correct on first writing, will be less likely to contain silent (but deadly) memory

  6. INL High Performance Building Strategy

    SciTech Connect

    Jennifer D. Morton

    2010-02-01

    High performance buildings, also known as sustainable buildings and green buildings, are resource efficient structures that minimize the impact on the environment by using less energy and water, reduce solid waste and pollutants, and limit the depletion of natural resources while also providing a thermally and visually comfortable working environment that increases productivity for building occupants. As Idaho National Laboratory (INL) becomes the nation’s premier nuclear energy research laboratory, the physical infrastructure will be established to help accomplish this mission. This infrastructure, particularly the buildings, should incorporate high performance sustainable design features in order to be environmentally responsible and reflect an image of progressiveness and innovation to the public and prospective employees. Additionally, INL is a large consumer of energy that contributes to both carbon emissions and resource inefficiency. In the current climate of rising energy prices and political pressure for carbon reduction, this guide will help new construction project teams to design facilities that are sustainable and reduce energy costs, thereby reducing carbon emissions. With these concerns in mind, the recommendations described in the INL High Performance Building Strategy (previously called the INL Green Building Strategy) are intended to form the INL foundation for high performance building standards. This revised strategy incorporates the latest federal and DOE orders (Executive Order [EO] 13514, “Federal Leadership in Environmental, Energy, and Economic Performance” [2009], EO 13423, “Strengthening Federal Environmental, Energy, and Transportation Management” [2007], and DOE Order 430.2B, “Departmental Energy, Renewable Energy, and Transportation Management” [2008]), the latest guidelines, trends, and observations in high performance building construction, and the latest changes to the Leadership in Energy and Environmental Design

  7. High Performance Photovoltaic Project Overview

    SciTech Connect

    Symko-Davies, M.; McConnell, R.

    2005-01-01

    The High-Performance Photovoltaic (HiPerf PV) Project was initiated by the U.S. Department of Energy to substantially increase the viability of photovoltaics (PV) for cost-competitive applications so that PV can contribute significantly to our energy supply and environment in the 21st century. To accomplish this, the National Center for Photovoltaics (NCPV) directs in-house and subcontracted research in high-performance polycrystalline thin-film and multijunction concentrator devices. In this paper, we describe the recent research accomplishments in the in-house directed efforts and the research efforts under way in the subcontracted area.

  8. Scalable Computer Performance and Analysis (Hierarchical INTegration)

    Energy Science and Technology Software Center

    1999-09-02

    HINT is a program for measuring the performance of a wide variety of scalable computer systems. It is capable of demonstrating the benefits of using more memory or processing power, and of improving communications within the system. HINT can be used to measure an existing system, while the associated program ANALYTIC HINT can be used to explain the measurements or as a design tool for proposed systems.

  9. High Performance Outdoor Lighting Accelerator

    Energy.gov [DOE]

    Hosted by the U.S. Department of Energy (DOE)’s Weatherization and Intergovernmental Programs Office (WIPO), this webinar covered the expansion of the Better Buildings platform to include the newest initiative for the public sector: the High Performance Outdoor Lighting Accelerator (HPOLA).

  10. High Performance Bulk Thermoelectric Materials

    SciTech Connect

    Ren, Zhifeng

    2013-03-31

    Over more than 13 years, we have carried out research on the electron pairing symmetry of superconductors, on the growth and field emission properties of carbon nanotubes and semiconducting nanowires, on high performance thermoelectric materials, and on other interesting materials. As a result of this research, we have published 104 papers and have educated six undergraduate students, twenty graduate students, nine postdocs, nine visitors, and one technician.

  11. Computer modeling of 12-inch actuator shock test performance

    SciTech Connect

    Bell, R.G.; Baca, T.J.

    1990-01-01

    The 12-Inch Horizontal Actuator is a high velocity mechanical shock simulation testing device used for component development shock testing at Sandia National Laboratories. This machine is pneumatically driven and propels a ram sled along a track, where a programmed shock pulse is generated during an impact with another sled containing the test item. Computer models have been developed which allow prediction of: (1) actuator performance, in terms of the pressure settings that produce the desired initial thrust of the ram sled; (2) sled brake performance, which is needed to ensure stopping the sleds within the length of the sled track; and (3) the shock pulse amplitude and duration parameters associated with the desired shock test input. Experimental results are presented which verify the accuracy of the models. The modeling effort has significantly improved test efficiency by reducing the number of calibration tests required to develop the required shock loading conditions. Successful development of these computer models demonstrates the great potential of computer applications in improving the quality of unique test capabilities such as the 12-Inch Actuator. 5 refs., 7 figs., 3 tabs.

  12. High-Performance Nanostructured Coating

    Energy.gov [DOE]

    The High-Performance Nanostructured Coating fact sheet details a SunShot project led by a University of California, San Diego research team working to develop a new high-temperature spectrally selective coating (SSC) for receiver surfaces. These receiver surfaces, used in concentrating solar power systems, rely on high-temperature SSCs to effectively absorb solar energy without emitting much blackbody radiation. The optical properties of the SSC directly determine the efficiency and maximum attainable temperature of solar receivers, which in turn influence the power-conversion efficiency and overall system cost.

  13. Liquid Cooling v. Air Cooling Evaluation in the Maui High-Performance...

    Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)

    Study: Innovative Energy Efficiency Approaches in NOAA's Environmental Security Computing Center in Fairmont, West Virginia High-Performance Computing Data Center Metering Protocol

  14. Software Synthesis for High Productivity Exascale Computing

    SciTech Connect

    Bodik, Rastislav

    2010-09-01

    Over the three years of our project, we accomplished three key milestones. First, we demonstrated how ideas from generative programming and software synthesis can help support the development of bulk-synchronous distributed memory kernels. These ideas are realized in a new language called MSL, a C-like language that combines synthesis features with high level notations for array manipulation and bulk-synchronous parallelism to simplify the semantic analysis required for synthesis. We also demonstrated that these high level notations map easily to low level C code and showed that the performance of this generated code matches that of handwritten Fortran. Second, we introduced the idea of solver-aided domain-specific languages (SDSLs), an emerging class of computer-aided programming systems. SDSLs ease the construction of programs by automating tasks such as verification, debugging, synthesis, and non-deterministic execution, and are implemented by translating the DSL program into logical constraints. We then developed a symbolic virtual machine called Rosette, which simplifies the construction of such SDSLs and their compilers, and used Rosette to build SynthCL, a subset of OpenCL that supports synthesis. Third, we developed novel numeric algorithms that move as little data as possible, either between levels of a memory hierarchy or between parallel processors over a network. We achieved progress in three aspects of this problem. First, we determined lower bounds on communication. Second, we compared these lower bounds to widely used versions of these algorithms, and noted that these widely used algorithms usually communicate asymptotically more than is necessary. Third, we identified or invented new algorithms for most linear algebra problems that do attain these lower bounds, and demonstrated large speed-ups in theory and practice.
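
    The communication lower bounds referenced in the third milestone are of the classical Hong-Kung form. For conventional $O(n^3)$ dense matrix multiplication on a machine with fast memory of size $M$, the number of words moved between fast and slow memory satisfies

    $$ W = \Omega\!\left(\frac{n^{3}}{\sqrt{M}}\right), $$

    and blocked algorithms with block size $b \approx \sqrt{M/3}$ attain this bound, which is why widely used unblocked variants communicate asymptotically more than necessary.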

  15. Computing and Computational Sciences Directorate - Computer Science...

    U.S. Department of Energy (DOE) - all webpages (Extended Search)

    Computer Science and Mathematics Division The Computer Science and Mathematics Division (CSMD) is ORNL's premier source of basic and applied research in high-performance computing, ...

  16. High performance image processing of SPRINT

    SciTech Connect

    DeGroot, T.

    1994-11-15

    This talk describes computed tomography (CT) reconstruction using filtered back-projection on SPRINT parallel computers. CT is a computationally intensive task, typically requiring several minutes to reconstruct a 512x512 image. SPRINT and other parallel computers can be applied to CT reconstruction to reduce computation time from minutes to seconds. SPRINT is a family of massively parallel computers developed at LLNL. SPRINT-2.5 is a 128-node multiprocessor whose performance can exceed twice that of a Cray Y-MP. SPRINT-3 will be 10 times faster. The talk describes the parallel algorithms for filtered back-projection and their execution on SPRINT parallel computers.
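
    For context, the standard parallel-beam filtered back-projection that such implementations parallelize: each projection $p_\theta$ is ramp-filtered, then smeared back across the image and integrated over angles,

    $$ f(x,y) = \int_{0}^{\pi} q_{\theta}\left(x\cos\theta + y\sin\theta\right)\,d\theta, \qquad q_{\theta} = \mathcal{F}^{-1}\!\left\{\lvert\omega\rvert\,\mathcal{F}\{p_{\theta}\}\right\}. $$

    Each filtered projection can be back-projected independently, which is what makes the algorithm a natural fit for massively parallel machines such as SPRINT.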

  17. Large Scale Computing and Storage Requirements for High Energy...

    U.S. Department of Energy (DOE) - all webpages (Extended Search)

    Large Scale Computing and Storage Requirements for High Energy Physics. An HEP ASCR ...

  18. Computationally Efficient Modeling of High-Efficiency Clean Combustion...

    Energy.gov [DOE] (indexed site)

    More Documents & Publications Computationally Efficient Modeling of High-Efficiency Clean Combustion Engines Computationally Efficient Modeling of High-Efficiency Clean Combustion ...

  19. Performance Models for Split-execution Computing Systems

    SciTech Connect

    Humble, Travis S; McCaskey, Alex; Schrock, Jonathan; Seddiqi, Hadayat; Britt, Keith A; Imam, Neena

    2016-01-01

    Split-execution computing leverages the capabilities of multiple computational models to solve problems, but splitting program execution across different computational models incurs costs associated with the translation between domains. We analyze the performance of a split-execution computing system developed from conventional and quantum processing units (QPUs) by using behavioral models that track resource usage. We focus on asymmetric processing models built using conventional CPUs and a family of special-purpose QPUs that employ quantum computing principles. Our performance models account for the translation of a classical optimization problem into the physical representation required by the quantum processor while also accounting for hardware limitations and conventional processor speed and memory. We conclude that the bottleneck in this split-execution computing system lies at the quantum-classical interface and that the primary time cost is independent of quantum processor behavior.
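
    A schematic version of the kind of behavioral model described (the paper's models are more detailed): the end-to-end time for one problem instance decomposes as

    $$ T_{\text{total}} = T_{\text{translate}} + R\,\left(T_{\text{anneal}} + T_{\text{readout}}\right) + T_{\text{post}}, $$

    where $R$ is the number of samples drawn from the QPU. The conclusion that the bottleneck lies at the quantum-classical interface corresponds to $T_{\text{translate}}$ dominating, a term that is independent of quantum processor behavior.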

  20. High-Performance Computing at Los

    U.S. Department of Energy (DOE) - all webpages (Extended Search)

    Billion inserts-per-second data milestone reached for supercomputing tool LOS ALAMOS, N.M., May 29, 2014-At Los Alamos, a supercomputer epicenter where "big data set" really means ...

  1. High Performance Computing for Manufacturing Partnership | GE...

    U.S. Department of Energy (DOE) - all webpages (Extended Search)

    GE, US DOE Partner on HPC4Mfg projects to deliver new capabilities in 3D Printing and higher jet engine efficiency

  2. High Performance Computing Facility Operational Assessment, CY...

    Office of Scientific and Technical Information (OSTI)

    costs. General Electric used Jaguar to calculate the unsteady flow through turbo machinery to learn what efficiencies the traditional steady flow assumption is hiding from...

  3. DOE High Performance Computing Operational Review (HPCOR)

    U.S. Department of Energy (DOE) - all webpages (Extended Search)

  4. BG/Q Performance Counters | Argonne Leadership Computing Facility

    U.S. Department of Energy (DOE) - all webpages (Extended Search)

    Documentation page for the BG/Q performance counters (BGPM) on Argonne Leadership Computing Facility systems, listed alongside related performance tools and APIs such as TAU, HPCToolkit, HPCTW, mpiP, gprof, Darshan, PAPI, Openspeedshop, and Scalasca.

  5. Building America Webinar: High Performance Enclosure Strategies...

    Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)

    Next Gen Advanced Framing for High Performance Homes Integrated System Solutions Building ... - August 13, 2014 - Next Gen Advanced Framing for High Performance Homes Integrated ...

  6. Using High Performance Libraries and Tools

    U.S. Department of Energy (DOE) - all webpages (Extended Search)

    High Performance Libraries and Tools Using High Performance Libraries and Tools Memkind Library on Edison The memkind library is a user extensible heap manager built on top of ...

  7. Routing performance analysis and optimization within a massively parallel computer

    DOEpatents

    Archer, Charles Jens; Peters, Amanda; Pinnow, Kurt Walter; Swartz, Brent Allen

    2013-04-16

    An apparatus, program product and method optimize the operation of a massively parallel computer system by, in part, receiving actual performance data concerning an application executed by the plurality of interconnected nodes, and analyzing the actual performance data to identify an actual performance pattern. A desired performance pattern may be determined for the application, and an algorithm may be selected from among a plurality of algorithms stored within a memory, the algorithm being configured to achieve the desired performance pattern based on the actual performance data.
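
    A toy sketch of the selection step this abstract describes: observed performance is reduced to a numeric pattern, and the stored algorithm whose expected pattern lies closest to the desired one is chosen. The algorithm names and all numbers below are invented for illustration.

```python
# Illustrative only: choose the stored routing algorithm whose expected
# performance pattern is closest to the desired pattern.
def distance(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

# Expected (latency, link utilization, hotspot ratio) per algorithm;
# names and values invented for this sketch.
ALGORITHMS = {
    "deterministic": (1.0, 0.4, 0.9),
    "adaptive":      (1.3, 0.7, 0.3),
    "randomized":    (1.6, 0.8, 0.2),
}

def select_algorithm(desired):
    return min(ALGORITHMS, key=lambda name: distance(ALGORITHMS[name], desired))

actual  = (1.8, 0.5, 0.8)   # measured pattern that triggered re-selection
desired = (1.2, 0.7, 0.3)   # desired pattern: balanced links, few hotspots
print(select_algorithm(desired))   # -> adaptive
```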

  8. Systems, Methods and Computer Readable Media for Modeling Cell Performance

    U.S. Department of Energy (DOE) - all webpages (Extended Search)

    Systems, Methods and Computer Readable Media for Modeling Cell Performance Fade, Kinetic Performance, and Capacity Loss of Rechargeable Electrochemical Devices - Energy Innovation Portal (Energy Storage), Idaho National Laboratory. Publications: CellSage Fact Sheet (3,409 KB); CellSage battery metrics ...

  9. Ultra-high resolution computed tomography imaging

    DOEpatents

    Paulus, Michael J.; Sari-Sarraf, Hamed; Tobin, Jr., Kenneth William; Gleason, Shaun S.; Thomas, Jr., Clarence E.

    2002-01-01

    A method for ultra-high resolution computed tomography imaging, comprising the steps of: focusing a high energy particle beam, for example x-rays or gamma-rays, onto a target object; acquiring a 2-dimensional projection data set representative of the target object; generating a corrected projection data set by applying a deconvolution algorithm, having an experimentally determined transfer function, to the 2-dimensional data set; storing the corrected projection data set; incrementally rotating the target object through an angle of approximately 180 degrees, and after each incremental rotation, repeating the radiating, acquiring, generating and storing steps; and, after the rotating step, applying a cone-beam algorithm, for example a modified tomographic reconstruction algorithm, to the corrected projection data sets to generate a 3-dimensional image. The size of the spot focus of the beam is reduced to not greater than approximately 1 micron, and even to not greater than approximately 0.5 microns.
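
    The correction step, deconvolving each 2-dimensional projection with the experimentally determined transfer function, can be sketched as a Wiener-style FFT deconvolution; the specific regularization and the synthetic data below are assumptions for illustration, not details from the patent.

```python
import numpy as np

def deconvolve_projection(projection, H, eps=1e-3):
    """Wiener-style deconvolution of one 2-D projection, where H is the
    measured transfer function in the frequency domain (same shape);
    eps regularizes frequency bins where H is near zero."""
    P = np.fft.fft2(projection)
    corrected = P * np.conj(H) / (np.abs(H) ** 2 + eps)
    return np.real(np.fft.ifft2(corrected))

# Acquire, correct, and store projections while rotating through ~180
# degrees; a cone-beam reconstruction would then run over the stack.
shape = (64, 64)
H = np.fft.fft2(np.exp(-np.hypot(*np.indices(shape)) / 8))  # toy transfer function
stack = [deconvolve_projection(np.random.rand(*shape), H) for _ in range(180)]
```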

  10. The NetLogger Methodology for High Performance Distributed Systems

    U.S. Department of Energy (DOE) - all webpages (Extended Search)

    NetLogger Methodology for High Performance Distributed Systems Performance Analysis Brian Tierney, William Johnston, Brian Crowley, Gary Hoo, Chris Brooks, Dan Gunter Computing Sciences Directorate Lawrence Berkeley National Laboratory University of California, Berkeley, CA, 94720 Abstract We describe a methodology that enables the real-time diagnosis of performance problems in complex high-performance distributed systems. The methodology includes tools for generating precision event logs that

  11. High Performance Factory Built Housing

    Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)

    Performance Factory Built Housing 2015 Building Technologies Office Peer Review Jordan Dentz, jdentz@levypartnership.com ARIES The Levy Partnership, Inc. Project Summary ...

  12. Energy Department Announces Ten New Projects to Apply High-Performance

    Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)

    Computing to Manufacturing Challenges | Department of Energy Ten New Projects to Apply High-Performance Computing to Manufacturing Challenges Energy Department Announces Ten New Projects to Apply High-Performance Computing to Manufacturing Challenges February 17, 2016 - 9:30am Addthis The Energy Department today announced $3 million for ten new projects that will enable private-sector companies to use high-performance computing resources at the department's national laboratories to tackle

  13. High-Performance Phylogeny Reconstruction

    SciTech Connect

    Tiffani L. Williams

    2004-11-10

    Under the Alfred P. Sloan Fellowship in Computational Biology, I have been afforded the opportunity to study phylogenetics--one of the most important and exciting disciplines in computational biology. A phylogeny depicts an evolutionary relationship among a set of organisms (or taxa). Typically, a phylogeny is represented by a binary tree, where modern organisms are placed at the leaves and ancestral organisms occupy internal nodes, with the edges of the tree denoting evolutionary relationships. The task of phylogenetics is to infer this tree from observations upon present-day organisms. Reconstructing phylogenies is a major component of modern research programs in many areas of biology and medicine, but it is enormously expensive. The most commonly used techniques attempt to solve NP-hard problems such as maximum likelihood and maximum parsimony, typically by bounded searches through an exponentially-sized tree-space. For example, there are over 13 billion possible trees for 13 organisms. Phylogenetic heuristics that quickly and accurately analyze large amounts of data will revolutionize the biological field. This final report highlights my activities in phylogenetics during the two-year postdoctoral period at the University of New Mexico under Prof. Bernard Moret; specifically, it summarizes my scientific, community, and professional activities as an Alfred P. Sloan Postdoctoral Fellow in Computational Biology.
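
    The tree-space figure quoted above follows from the standard count of unrooted binary trees on n labeled taxa, (2n-5)!!; a quick check of the arithmetic:

```python
# Number of distinct unrooted binary trees on n labeled taxa: (2n-5)!!
def num_unrooted_trees(n):
    count = 1
    for k in range(3, 2 * n - 4, 2):   # product 3 * 5 * ... * (2n-5)
        count *= k
    return count

print(num_unrooted_trees(13))   # 13749310575, i.e. "over 13 billion"
```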

  14. A Comprehensive Look at High Performance Parallel I/O

    U.S. Department of Energy (DOE) - all webpages (Extended Search)

    A Comprehensive Look at High Performance Parallel I/O. Book Signing @ SC14! Nov. 18, 5 p.m. in Booth 1939. November 10, 2014. Contact: Linda Vu, +1 510 495 2402, lvu@lbl.gov. In the 1990s, high performance computing (HPC) made a dramatic transition to massively parallel processors. As this model solidified over the next 20 years, supercomputing performance increased from gigaflops (billions of calculations per second) to

  15. Large Scale Computing and Storage Requirements for High Energy Physics

    SciTech Connect

    Gerber, Richard A.; Wasserman, Harvey

    2010-11-24

    The National Energy Research Scientific Computing Center (NERSC) is the leading scientific computing facility for the Department of Energy's Office of Science, providing high-performance computing (HPC) resources to more than 3,000 researchers working on about 400 projects. NERSC provides large-scale computing resources and, crucially, the support and expertise needed for scientists to make effective use of them. In November 2009, NERSC, DOE's Office of Advanced Scientific Computing Research (ASCR), and DOE's Office of High Energy Physics (HEP) held a workshop to characterize the HPC resources needed at NERSC to support HEP research through the next three to five years. The effort is part of NERSC's legacy of anticipating users' needs and deploying resources to meet those demands. The workshop revealed several key points, in addition to achieving its goal of collecting and characterizing computing requirements. The chief findings: (1) Science teams need access to a significant increase in computational resources to meet their research goals; (2) Research teams need to be able to read, write, transfer, store online, archive, analyze, and share huge volumes of data; (3) Science teams need guidance and support to implement their codes on future architectures; and (4) Projects need predictable, rapid turnaround of their computational jobs to meet mission-critical time constraints. This report expands upon these key points and includes others. It also presents a number of case studies as representative of the research conducted within HEP. Workshop participants were asked to codify their requirements in this case study format, summarizing their science goals, methods of solution, current and three-to-five year computing requirements, and software and support needs. Participants were also asked to describe their strategy for computing in the highly parallel, multi-core environment that is expected to dominate HPC architectures over the next few years. The report includes

  16. FPGAs in High Performance Computing: Results from Two LDRD Projects.

    SciTech Connect

    Underwood, Keith D; Ulmer, Craig D.; Thompson, David; Hemmert, Karl Scott

    2006-11-01

    Field programmable gate arrays (FPGAs) have been used as alternative computational devices for over a decade; however, they have not been used for traditional scientific computing due to their perceived lack of floating-point performance. In recent years, there has been a surge of interest in alternatives to traditional microprocessors for high performance computing. Sandia National Labs began two projects to determine whether FPGAs would be a suitable alternative to microprocessors for high performance scientific computing and, if so, how they should be integrated into the system. We present results that indicate that FPGAs could have a significant impact on future systems. FPGAs have the potential to deliver order-of-magnitude performance wins on several key algorithms; however, there are serious questions as to whether the system integration challenge can be met. Furthermore, there remain challenges in FPGA programming and system-level reliability when using FPGA devices. Acknowledgment: Arun Rodrigues provided valuable support and assistance in the use of the Structural Simulation Toolkit within an FPGA context. Curtis Janssen and Steve Plimpton provided valuable insights into the workings of two Sandia applications (MPQC and LAMMPS, respectively).

  17. Computing at JLab

    U.S. Department of Energy (DOE) - all webpages (Extended Search)

    JLab: Accelerator Controls, CAD, CDEV, CODA, Computer Center, High Performance Computing, Scientific Computing, JLab Computer Silo ...

  18. NUG 2013 User Day: Trends, Discovery, and Innovation in High Performance

    U.S. Department of Energy (DOE) - all webpages (Extended Search)

    Computing Home » For Users » NERSC Users Group » Annual Meetings » NUG 2013 » User Day NUG 2013 User Day: Trends, Discovery, and Innovation in High Performance Computing Wednesday, Feb. 13 Berkeley Lab Building 50 Auditorium Live streaming: http://hosting.epresence.tv/LBL/1.aspx 8:45 - Welcome: Kathy Yelick, Berkeley Lab Associate Director for Computing Sciences Trends 9:00 - The Future of High Performance Scientific Computing, Kathy Yelick, Berkeley Lab Associate Director for Computing

  19. Building America Webinar: High Performance Space Conditioning...

    Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)

    versus mini-splits being used in high performance (high R value enclosure/low air leakage) houses, often configured as a simplified distribution system (one heat source per floor). ...

  20. Heterogeneous high throughput scientific computing with APM X-Gene and Intel Xeon Phi

    DOE PAGES [OSTI]

    Abdurachmanov, David; Bockelman, Brian; Elmer, Peter; Eulisse, Giulio; Knight, Robert; Muzaffar, Shahzad

    2015-01-01

    Electrical power requirements will be a constraint on the future growth of Distributed High Throughput Computing (DHTC) as used by High Energy Physics. Performance-per-watt is a critical metric for the evaluation of computer architectures for cost-efficient computing. Additionally, future performance growth will come from heterogeneous, many-core, and high computing density platforms with specialized processors. In this paper, we examine the Intel Xeon Phi Many Integrated Cores (MIC) co-processor and Applied Micro X-Gene ARMv8 64-bit low-power server system-on-a-chip (SoC) solutions for scientific computing applications. As a result, we report our experience on software porting, performance and energy efficiency and evaluate the potential for use of such technologies in the context of distributed computing systems such as the Worldwide LHC Computing Grid (WLCG).

  1. Maximizing sparse matrix vector product performance in MIMD computers

    SciTech Connect

    McLay, R.T.; Kohli, H.S.; Swift, S.L.; Carey, G.F.

    1994-12-31

    A considerable component of the computational effort involved in conjugate gradient solution of structured sparse matrix systems is expended during the Matrix-Vector Product (MVP), and hence it is the focus of most efforts at improving performance. Such efforts are hindered on MIMD machines due to constraints on memory, cache and speed of memory-CPU data transfer. This paper describes a strategy for maximizing the performance of the local computations associated with the MVP. The method focuses on single-stride memory access, and the efficient use of cache by pre-loading it with data that is re-used while bypassing it for other data. The algorithm is designed to behave optimally for varying grid sizes and number of unknowns per gridpoint. Results from an assembly language implementation of the strategy on the iPSC/860 show a significant improvement over the performance using FORTRAN.
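
    A plain-Python stand-in for the kernel being optimized: the compressed sparse row (CSR) matrix-vector product, whose values and column indices are traversed with exactly the single-stride access pattern the paper targets (the paper's actual implementation was hand-tuned assembly).

```python
import numpy as np

def csr_matvec(values, col_idx, row_ptr, x):
    """y = A @ x with A in CSR form: values and col_idx are traversed
    with unit stride, the cache-friendly access pattern."""
    y = np.zeros(len(row_ptr) - 1)
    for i in range(len(y)):
        for k in range(row_ptr[i], row_ptr[i + 1]):
            y[i] += values[k] * x[col_idx[k]]
    return y

# 3x3 example matrix [[4,0,1],[0,3,0],[2,0,5]] in CSR form.
values  = np.array([4.0, 1.0, 3.0, 2.0, 5.0])
col_idx = np.array([0, 2, 1, 0, 2])
row_ptr = np.array([0, 2, 3, 5])
print(csr_matvec(values, col_idx, row_ptr, np.ones(3)))  # [5. 3. 7.]
```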

  2. High Performance Binderless Electrodes for Rechargeable Lithium...

    U.S. Department of Energy (DOE) - all webpages (Extended Search)

    High Performance Binderless Electrodes for Rechargeable Lithium Batteries National ... Electrode for fast-charging Lithium Ion Batteries, Accelerating Innovation Webinar ...

  3. Large Scale Production Computing and Storage Requirements for High Energy

    U.S. Department of Energy (DOE) - all webpages (Extended Search)

    Physics: Target 2017 Large Scale Production Computing and Storage Requirements for High Energy Physics: Target 2017 The NERSC Program Requirements Review "Large Scale Computing and Storage Requirements for High Energy Physics" is organized by the Department of Energy's Office of High Energy Physics (HEP), Office of Advanced Scientific Computing Research (ASCR), and the National Energy Research Scientific Computing Center (NERSC). The review's goal is to characterize

  4. Performing an allreduce operation on a plurality of compute nodes of a parallel computer

    DOEpatents

    Faraj, Ahmad

    2013-02-12

    Methods, apparatus, and products are disclosed for performing an allreduce operation on a plurality of compute nodes of a parallel computer, each node including at least two processing cores, that include: performing, for each node, a local reduction operation using allreduce contribution data for the cores of that node, yielding, for each node, a local reduction result for one or more representative cores for that node; establishing one or more logical rings among the nodes, each logical ring including only one of the representative cores from each node; performing, for each logical ring, a global allreduce operation using the local reduction result for the representative cores included in that logical ring, yielding a global allreduce result for each representative core included in that logical ring; and performing, for each node, a local broadcast operation using the global allreduce results for each representative core on that node.
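
    The three claimed phases, a per-node local reduction, a logical-ring allreduce among one representative core per node, and a final local broadcast, can be simulated with plain Python lists standing in for nodes and cores; the structure follows the abstract, with the details simplified.

```python
# Simulate the three claimed phases; nodes[i][j] is the allreduce
# contribution of core j on node i (values invented).
nodes = [[1, 2], [3, 4], [5, 6]]          # 3 nodes, 2 cores each

# Phase 1: local reduction on each node.
local = [sum(cores) for cores in nodes]

# Phase 2: logical-ring allreduce among one representative core per
# node: each step, every representative forwards what it last received.
n = len(nodes)
acc, recv = local[:], local[:]
for _ in range(n - 1):
    recv = [recv[(i - 1) % n] for i in range(n)]   # pass around the ring
    acc = [a + r for a, r in zip(acc, recv)]

# Phase 3: local broadcast of the global result to every core.
result = [[acc[i]] * len(cores) for i, cores in enumerate(nodes)]
print(result)   # every core holds 21 = 1+2+3+4+5+6
```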

  5. PPPL and Princeton join high-performance software project | Princeton

    U.S. Department of Energy (DOE) - all webpages (Extended Search)

    Plasma Physics Lab and Princeton join high-performance software project. By John Greenwald, July 22, 2016. Co-principal investigators William Tang and Bei Wang (Photo by Elle Starkman/Office of Communications). Princeton University and the U.S. Department of Energy's Princeton Plasma Physics Laboratory (PPPL) are participating in the accelerated development of a modern high-performance computing

  6. PPPL and Princeton join high-performance software project | Princeton

    U.S. Department of Energy (DOE) - all webpages (Extended Search)

    Plasma Physics Lab and Princeton join high-performance software project. By John Greenwald, July 22, 2016. Co-principal investigators William Tang and Bei Wang (Photo by Elle Starkman/Office of Communications). Princeton University and the U.S. Department of Energy's Princeton Plasma Physics Laboratory (PPPL) are participating in the accelerated development of a modern high-performance computing

  7. Liquid Cooling v. Air Cooling Evaluation in the Maui High-Performance

    Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)

    Computing Center | Department of Energy Liquid Cooling v. Air Cooling Evaluation in the Maui High-Performance Computing Center Liquid Cooling v. Air Cooling Evaluation in the Maui High-Performance Computing Center Study evaluates the energy efficiency of a new, liquid-cooled computing system applied in a retrofit project compared to the previously used air-cooled system. Download the study. (1.25 MB) More Documents & Publications Energy Efficiency Opportunities in Federal High

  8. High Temperature Fuel Cell Performance of Sulfonated Poly(phenylene) Proton Conducting Polymers

    Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)

    High Temperature Fuel Cell Performance of Sulfonated Poly(phenylene) Proton Conducting Polymers | Department of Energy. Presentation

  9. Performing an allreduce operation on a plurality of compute nodes of a parallel computer

    DOEpatents

    Faraj, Ahmad

    2013-07-09

    Methods, apparatus, and products are disclosed for performing an allreduce operation on a plurality of compute nodes of a parallel computer, each node including at least two processing cores, that include: establishing, for each node, a plurality of logical rings, each ring including a different set of at least one core on that node, each ring including the cores on at least two of the nodes; iteratively for each node: assigning each core of that node to one of the rings established for that node to which the core has not previously been assigned, and performing, for each ring for that node, a global allreduce operation using contribution data for the cores assigned to that ring or any global allreduce results from previous global allreduce operations, yielding current global allreduce results for each core; and performing, for each node, a local allreduce operation using the global allreduce results.

  10. HIGH-PERFORMANCE COATING MATERIALS

    SciTech Connect

    SUGAMA,T.

    2007-01-01

    Corrosion, erosion, oxidation, and fouling by scale deposits impose critical issues in selecting the metal components used at geothermal power plants operating at brine temperatures up to 300 C. Replacing these components is very costly and time consuming. Currently, components made of titanium alloy and stainless steel commonly are employed for dealing with these problems. However, another major consideration in using these metals is not only that they are considerably more expensive than carbon steel, but also the susceptibility of corrosion-preventing passive oxide layers that develop on their outermost surface sites to reactions with brine-induced scales, such as silicate, silica, and calcite. Such reactions lead to the formation of strong interfacial bonds between the scales and oxide layers, causing the accumulation of multiple layers of scales, and the impairment of the plant component's function and efficacy; furthermore, a substantial amount of time is entailed in removing them. This cleaning operation essential for reusing the components is one of the factors causing the increase in the plant's maintenance costs. If inexpensive carbon steel components could be coated and lined with cost-effective high-hydrothermal temperature stable, anti-corrosion, -oxidation, and -fouling materials, this would improve the power plant's economic factors by engendering a considerable reduction in capital investment, and a decrease in the costs of operations and maintenance through optimized maintenance schedules.

  11. Functionalized High Performance Polymer Membranes for Separation...

    U.S. Department of Energy (DOE) - all webpages (Extended Search)

    Functionalized High Performance Polymer Membranes for Separation of Carbon Dioxide and Methane Previous Next List Natalia Blinova and Frantisek Svec, J. Mater. Chem. A, 2, 600-604...

  12. High performance carbon nanocomposites for ultracapacitors

    DOEpatents

    Lu, Wen

    2012-10-02

    The present invention relates to composite electrodes for electrochemical devices, particularly to carbon nanotube composite electrodes for high performance electrochemical devices, such as ultracapacitors.

  13. Building America Webinar: High Performance Enclosure Strategies...

    Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)

    Strategies: Part II, New Construction - August 13, 2014 - Introduction This presentation is the Introduction to the Building America webinar, High Performance Enclosure Strategies...

  14. Building America Webinar: High Performance Enclosure Strategies...

    Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)

    High Performance Enclosure Strategies, Part II, on August 13, 2014. BAwebinarbscbaker81314.pdf (1.03 MB) More Documents & Publications Cladding Attachment Over Thick ...

  15. High Performance Sustainable Building - DOE Directives, Delegations...

    U.S. Department of Energy (DOE) - all webpages (Extended Search)

    by Adam Pugh Functional areas: Program Management, Project Management This Guide provides approaches for implementing the High Performance Sustainable Building (HPSB) requirements...

  16. Building America Webinar: High Performance Building Enclosures...

    Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)

    Building America Webinar: High Performance Building Enclosures: Part I, Existing Homes The webinar, presented on May 21, 2014, focused on specific Building America projects that ...

  17. Building America Webinar: High Performance Space Conditioning...

    Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)

    II - Design Options for Locating Ducts within Conditioned Space Building America Webinar: High Performance Space Conditioning Systems, Part II - Design Options for Locating Ducts ...

  18. Method of making a high performance ultracapacitor

    DOEpatents

    Farahmandi, C. Joseph; Dispennette, John M.

    2000-07-26

    A high performance double layer capacitor having an electric double layer formed in the interface between activated carbon and an electrolyte is disclosed. The high performance double layer capacitor includes a pair of aluminum impregnated carbon composite electrodes having an evenly distributed and continuous path of aluminum impregnated within an activated carbon fiber preform saturated with a high performance electrolytic solution. The high performance double layer capacitor is capable of delivering at least 5 Wh/kg of useful energy at power ratings of at least 600 W/kg.

  19. Strategy Guideline: High Performance Residential Lighting

    SciTech Connect

    Holton, J.

    2012-02-01

    The Strategy Guideline: High Performance Residential Lighting has been developed to provide a tool for the understanding and application of high performance lighting in the home. The high performance lighting strategies featured in this guide are drawn from recent advances in commercial lighting for application to typical spaces found in residential buildings. This guide offers strategies to greatly reduce lighting energy use through the application of high quality fluorescent and light emitting diode (LED) technologies. It is important to note that these strategies not only save energy in the home but also serve to satisfy the homeowner's expectations for high quality lighting.

  20. Energy Proportionality and Performance in Data Parallel Computing Clusters

    SciTech Connect

    Kim, Jinoh; Chou, Jerry; Rotem, Doron

    2011-02-14

    Energy consumption in datacenters has recently become a major concern due to the rising operational costs and scalability issues. Recent solutions to this problem propose the principle of energy proportionality, i.e., the amount of energy consumed by the server nodes must be proportional to the amount of work performed. For data parallelism and fault tolerance purposes, most common file systems used in MapReduce-type clusters maintain a set of replicas for each data block. A covering set is a group of nodes that together contain at least one replica of the data blocks needed for performing computing tasks. In this work, we develop and analyze algorithms to maintain energy proportionality by discovering a covering set that minimizes energy consumption while placing the remaining nodes in low-power standby mode. Our algorithms can also discover covering sets in heterogeneous computing environments. In order to allow more data parallelism, we generalize our algorithms so that they can discover k-covering sets, i.e., sets of nodes that together contain at least k replicas of the data blocks. Our experimental results show that we can achieve substantial energy savings without significant performance loss in diverse cluster configurations and working environments.
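
    A greedy sketch of the covering-set idea (the paper's own algorithms are more sophisticated): activate nodes until every block has at least one replica in the active set, then leave the remaining nodes in standby. The replica placement below is invented.

```python
# block -> nodes holding a replica (illustrative 3-way replication).
replicas = {
    "b1": {"n1", "n2", "n3"},
    "b2": {"n2", "n4", "n5"},
    "b3": {"n1", "n4", "n6"},
    "b4": {"n3", "n5", "n6"},
}

def greedy_covering_set(replicas):
    """Greedily pick the node covering the most still-uncovered blocks."""
    uncovered, chosen = set(replicas), set()
    all_nodes = {n for ns in replicas.values() for n in ns}
    while uncovered:
        best = max(all_nodes,
                   key=lambda n: sum(n in replicas[b] for b in uncovered))
        chosen.add(best)
        uncovered = {b for b in uncovered if best not in replicas[b]}
    return chosen

cover = greedy_covering_set(replicas)
standby = {n for ns in replicas.values() for n in ns} - cover
print("active:", cover, "| low-power standby:", standby)
```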

  1. High Level Computational Chemistry Approaches to the Prediction...

    Energy.gov [DOE] (indexed site)

    Presentation on the High Level Computational Chemistry given at the DOE Theory Focus Session on Hydrogen Storage Materials on May 18, 2006. storagetheorysessiondixon.pdf (692.3 ...

  2. High Performance Descriptive Semantic Analysis of Semantic Graph Databases

    SciTech Connect

    Joslyn, Cliff A.; Adolf, Robert D.; al-Saffar, Sinan; Feo, John T.; Haglin, David J.; Mackey, Greg E.; Mizell, David W.

    2011-06-02

    As semantic graph database technology grows to address components ranging from extant large triple stores to SPARQL endpoints over SQL-structured relational databases, it will become increasingly important to be able to understand their inherent semantic structure, whether codified in explicit ontologies or not. Our group is researching novel methods for what we call descriptive semantic analysis of RDF triplestores, to serve purposes of analysis, interpretation, visualization, and optimization. But data size and computational complexity make it increasingly necessary to bring high performance computational resources to bear on this task. Our research group built a novel high performance hybrid system comprising computational capability for semantic graph database processing utilizing the large multi-threaded architecture of the Cray XMT platform, conventional servers, and large data stores. In this paper we describe that architecture and our methods, and present the results of our analyses of basic properties, connected components, namespace interaction, and typed paths for the Billion Triple Challenge 2010 dataset.

  3. High-reliability computing for the smarter planet

    SciTech Connect

    Quinn, Heather M; Graham, Paul; Manuzzato, Andrea; Dehon, Andre; Carter, Nicholas

    2010-01-01

    The geometric rate of improvement of transistor size and integrated circuit performance, known as Moore's Law, has been an engine of growth for our economy, enabling new products and services, creating new value and wealth, increasing safety, and removing menial tasks from our daily lives. Affordable, highly integrated components have enabled both life-saving technologies and rich entertainment applications. Anti-lock brakes, insulin monitors, and GPS-enabled emergency response systems save lives. Cell phones, internet appliances, virtual worlds, realistic video games, and mp3 players enrich our lives and connect us together. Over the past 40 years of silicon scaling, the increasing capabilities of inexpensive computation have transformed our society through automation and ubiquitous communications. In this paper, we will present the concept of the smarter planet, how reliability failures affect current systems, and methods that can be used to increase the reliable adoption of new automation in the future. We will illustrate these issues using a number of different electronic devices in a couple of different scenarios. Recently IBM has been presenting the idea of a 'smarter planet.' In smarter planet documents, IBM discusses increased computer automation of roadways, banking, healthcare, and infrastructure, as automation could create more efficient systems. A necessary component of the smarter planet concept is to ensure that these new systems have very high reliability. Even extremely rare reliability problems can easily escalate to problematic scenarios when systems are deployed at very large scales. For life-critical systems, such as automobiles, infrastructure, medical implantables, and avionic systems, unmitigated failures could be dangerous. As more automation moves into these types of critical systems, reliability failures will need to be managed. As computer automation continues to increase in our society, the need for greater radiation reliability grows with it.

  4. High Performance Plastic DSSC | ANSER Center | Argonne-Northwestern...

    U.S. Department of Energy (DOE) - all webpages (Extended Search)

    High Performance Plastic DSSC - ANSER Research Highlights ...

  5. High Level Computational Chemistry Approaches to the Prediction of

    Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)

    Energetic Properties of Chemical Hydrogen Storage Systems | Department of Energy Level Computational Chemistry Approaches to the Prediction of Energetic Properties of Chemical Hydrogen Storage Systems High Level Computational Chemistry Approaches to the Prediction of Energetic Properties of Chemical Hydrogen Storage Systems Presentation on the High Level Computational Chemistry given at the DOE Theory Focus Session on Hydrogen Storage Materials on May 18, 2006.

  6. High Performance Colloidal Nanocrystals | Department of Energy

    Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)

    Colloidal Nanocrystals High Performance Colloidal Nanocrystals Lead Performer: Lumisyn, LLC - Rochester, NY DOE Total Funding: $149,997 Project Term: February 22, 2016 - November 21, 2016 Funding Type: SBIR PROJECT OBJECTIVE Lumisyn has created a novel class of high-efficiency, nontoxic nanocrystals that overcome many longstanding problems with other alternatives but require improvements before being commercialized. This project will develop a model of the factors that contribute to high

  7. Strategy Guideline. Partnering for High Performance Homes

    SciTech Connect

    Prahl, Duncan

    2013-01-01

    High performance houses require a high degree of coordination and have significant interdependencies between various systems in order to perform properly, meet customer expectations, and minimize risks for the builder. Responsibility for the key performance attributes is shared across the project team and can be well coordinated through advanced partnering strategies. For high performance homes, traditional partnerships need to be matured to the next level and be expanded to all members of the project team including trades, suppliers, manufacturers, HERS raters, designers, architects, and building officials as appropriate. This guide is intended for use by all parties associated in the design and construction of high performance homes. It serves as a starting point and features initial tools and resources for teams to collaborate to continually improve the energy efficiency and durability of new houses.

  8. Optimization and Performance Modeling of Stencil Computations on Modern Microprocessors

    SciTech Connect

    Datta, Kaushik; Kamil, Shoaib; Williams, Samuel; Oliker, Leonid; Shalf, John; Yelick, Katherine

    2007-06-01

    Stencil-based kernels constitute the core of many important scientific applications on block-structured grids. Unfortunately, these codes achieve a low fraction of peak performance, due primarily to the disparity between processor and main memory speeds. In this paper, we explore the impact of trends in memory subsystems on a variety of stencil optimization techniques and develop performance models to analytically guide our optimizations. Our work targets cache reuse methodologies across single and multiple stencil sweeps, examining cache-aware algorithms as well as cache-oblivious techniques on the Intel Itanium2, AMD Opteron, and IBM Power5. Additionally, we consider stencil computations on the heterogeneous multicore design of the Cell processor, a machine with an explicitly managed memory hierarchy. Overall our work represents one of the most extensive analyses of stencil optimizations and performance modeling to date. Results demonstrate that recent trends in memory system organization have reduced the efficacy of traditional cache-blocking optimizations. We also show that a cache-aware implementation is significantly faster than a cache-oblivious approach, while the explicitly managed memory on Cell enables the highest overall efficiency: Cell attains 88% of algorithmic peak while the best competing cache-based processor achieves only 54% of algorithmic peak performance.
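
    A minimal cache-blocked 2-D stencil sweep of the kind the study examines; the blocking factor is the machine-dependent tuning parameter, and the value used here is only a placeholder.

```python
import numpy as np

def stencil_sweep_blocked(a, block=64):
    """One 5-point Jacobi-style sweep over the interior of `a`,
    visiting the grid in cache-sized tiles rather than full rows."""
    out = a.copy()
    n, m = a.shape
    for ii in range(1, n - 1, block):
        for jj in range(1, m - 1, block):
            i1 = min(ii + block, n - 1)
            j1 = min(jj + block, m - 1)
            out[ii:i1, jj:j1] = 0.25 * (
                a[ii-1:i1-1, jj:j1] + a[ii+1:i1+1, jj:j1]
                + a[ii:i1, jj-1:j1-1] + a[ii:i1, jj+1:j1+1])
    return out

grid = np.random.rand(512, 512)
grid = stencil_sweep_blocked(grid)   # one blocked relaxation sweep
```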

  9. Computing

    U.S. Department of Energy (DOE) - all webpages (Extended Search)

    Computing and Storage Requirements for FES. J. Candy, General Atomics, San Diego, CA. Presented at DOE Technical Program Review, Hilton Washington DC/Rockville, Rockville, MD, 19-20 March 2013. Drift waves and tokamak plasma turbulence: role in the context of fusion research. * Plasma performance: In tokamak plasmas, performance is limited by turbulent radial transport of both energy and particles. * Gradient-driven: This turbulent

  10. TAP Webinar: High Performance Outdoor Lighting Accelerator

    Energy.gov [DOE]

    Hosted by the Technical Assistance Program (TAP), this webinar will cover the recently announced expansion of the Better Buildings platform, the High Performance Outdoor Lighting Accelerator (HPOLA).

  11. Durham County- High-Performance Building Policy

    Office of Energy Efficiency and Renewable Energy (EERE)

    Durham County adopted a resolution in October 2008 that requires new non-school public buildings and facilities to meet high-performance standards. New construction of public buildings and...

  12. High Performance Green Building Partnership Consortia | Department...

    Energy.gov [DOE] (indexed site)

    The High-Performance Green Building Partnership Consortia are groups from the public and private sectors recognized by the U.S. Department of Energy (DOE) for their commitment to ...

  13. Performing a global barrier operation in a parallel computer

    DOEpatents

    Archer, Charles J; Blocksome, Michael A; Ratterman, Joseph D; Smith, Brian E

    2014-12-09

    Executing computing tasks on a parallel computer that includes compute nodes coupled for data communications, where each compute node executes tasks, with one task on each compute node designated as a master task, including: for each task on each compute node until all master tasks have joined a global barrier: determining whether the task is a master task; if the task is not a master task, joining a single local barrier; if the task is a master task, joining the global barrier and the single local barrier only after all other tasks on the compute node have joined the single local barrier.
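
    The claimed scheme maps naturally onto nested barriers: every task joins its node-local barrier, and only the designated master then joins the global barrier. A small threaded simulation (two nodes, two tasks per node; the thread-to-node mapping is an illustrative assumption):

```python
import threading

N_NODES, TASKS_PER_NODE = 2, 2
global_barrier = threading.Barrier(N_NODES)   # joined by masters only
local_barriers = [threading.Barrier(TASKS_PER_NODE) for _ in range(N_NODES)]

def task(node, rank):
    is_master = (rank == 0)          # one master task per compute node
    local_barriers[node].wait()      # every task joins the local barrier
    if is_master:
        global_barrier.wait()        # master joins the global barrier only
                                     # after all local tasks have arrived
    print(f"node {node} task {rank} past barrier")

threads = [threading.Thread(target=task, args=(n, r))
           for n in range(N_NODES) for r in range(TASKS_PER_NODE)]
for t in threads: t.start()
for t in threads: t.join()
```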

  14. Low latency, high bandwidth data communications between compute nodes in a parallel computer

    DOEpatents

    Archer, Charles J.; Blocksome, Michael A.; Ratterman, Joseph D.; Smith, Brian E.

    2010-11-02

    Methods, parallel computers, and computer program products are disclosed for low latency, high bandwidth data communications between compute nodes in a parallel computer. Embodiments include receiving, by an origin direct memory access (`DMA`) engine of an origin compute node, data for transfer to a target compute node; sending, by the origin DMA engine of the origin compute node to a target DMA engine on the target compute node, a request to send (`RTS`) message; transferring, by the origin DMA engine, a predetermined portion of the data to the target compute node using a memory FIFO operation; determining, by the origin DMA engine, whether an acknowledgement of the RTS message has been received from the target DMA engine; if an acknowledgement of the RTS message has not been received, transferring, by the origin DMA engine, another predetermined portion of the data to the target compute node using a memory FIFO operation; and if the acknowledgement of the RTS message has been received by the origin DMA engine, transferring, by the origin DMA engine, any remaining portion of the data to the target compute node using a direct put operation.
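
    A simplified sequential model of the claimed protocol: after the RTS is sent, memory-FIFO chunks flow while waiting for the acknowledgement, and whatever remains once the ACK arrives moves in a single direct put. The chunk size and ACK timing below are invented.

```python
# Simplified model of the claimed transfer protocol (not real DMA code).
CHUNK = 4  # bytes per memory-FIFO packet (illustrative)

def transfer(data, ack_after_chunks=3):
    """Send RTS, stream FIFO chunks until the ACK 'arrives' (modeled as
    a chunk count), then move the remainder with one direct put."""
    fifo_chunks, offset = [], 0
    while offset < len(data):
        if len(fifo_chunks) >= ack_after_chunks:   # ACK received
            return fifo_chunks, data[offset:]      # direct put remainder
        fifo_chunks.append(data[offset:offset + CHUNK])
        offset += CHUNK
    return fifo_chunks, b""   # all data fit in FIFO chunks before the ACK

fifo, put = transfer(b"0123456789abcdef")
print(fifo, put)   # [b'0123', b'4567', b'89ab'] b'cdef'
```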

  15. High Performance Dielectrics - Energy Innovation Portal

    U.S. Department of Energy (DOE) - all webpages (Extended Search)

    High Performance Dielectrics - Sandia National Laboratories. Publications: Market Sheet (342 KB). Technology Marketing Summary: Current dielectric materials are limited and unable to meet all operating, temperature, response frequency, size, and reliability requirements needed for uncooled high-reliability electronics. To address this problem, scientists at Sandia have developed a method for producing dielectric materials using engineered

  16. Performance Tools & APIs | Argonne Leadership Computing Facility

    U.S. Department of Energy (DOE) - all webpages (Extended Search)

    Performance Plans: November 10, 2016 - Combined Fiscal Year (FY) 2016 Annual Performance Results and FYs 2017 and 2018 Annual Performance Plan; November 13, 2015 - Combined FY 2015 Annual Performance Results and FYs 2016 and 2017 Annual Performance Plan; November 6,

  17. Performance characteristics of recently developed high-performance heat pipes

    SciTech Connect

    Schlitt, R.

    1995-01-01

    For future space projects such as Earth orbiting platforms, space stations, but also Moon or Mars bases, the need to manage waste heat up to 100 kW has been identified. For this purpose, large heat pipe radiators have been proposed with heat pipe lengths of 15 m and heat transport capabilities up to 4 kW. It is demonstrated that conventional axially grooved heat pipes can be improved to provide 1 kW·m heat transport capability. Higher heat loads can be handled only by highly composite wick designs with large liquid cross sections and circumferential grooves in the evaporator. With these high-performance heat pipes, heat transfer coefficients of about 200 kW/m²K and transport capabilities of 2 kW over 15 m can be reached. Configurations with liquid fillets and axially tapered liquid channels are proposed to improve the ability of the highly composite wick to prime.

  18. High Performance and Sustainable Buildings Guidance

    Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)

    HIGH PERFORMANCE and SUSTAINABLE BUILDINGS GUIDANCE Final (12/1/08) PURPOSE The Interagency Sustainability Working Group (ISWG), as a subcommittee of the Steering Committee established by Executive Order (E.O.) 13423, initiated development of the following guidance to assist agencies in meeting the high performance and sustainable buildings goals of E.O. 13423, section 2(f). E.O. 13423, sec. 2(f) states "In implementing the policy set forth in section 1 of this order, the head of each

  19. Building America Roadmap to High Performance Homes

    Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)

    Building America Technical Update Meeting - April 29, 2013 Building America Roadmap to High Performance Homes Eric Werling Building America Coordinator Denver, CO April 29, 2013 Building Technology Office U.S. Department of Energy EERE's National Mission Mission: To create American leadership in the global transition to a clean energy economy 1) High-Impact Research, Development, and Demonstration to Make Clean Energy as Affordable and Convenient as Traditional Forms of Energy 2) Breaking Down

  20. CRITICAL ISSUES IN HIGH END COMPUTING - FINAL REPORT

    SciTech Connect

    Corones, James

    2013-09-23

    High-End computing (HEC) has been a driver for advances in science and engineering for the past four decades. Increasingly HEC has become a significant element in the national security, economic vitality, and competitiveness of the United States. Advances in HEC provide results that cut across traditional disciplinary and organizational boundaries. This program provides opportunities to share information about HEC systems and computational techniques across multiple disciplines and organizations through conferences and exhibitions of HEC advances held in Washington DC so that mission agency staff, scientists, and industry can come together with White House, Congressional and Legislative staff in an environment conducive to the sharing of technical information, accomplishments, goals, and plans. A common thread across this series of conferences is the understanding of computational science and applied mathematics techniques across a diverse set of application areas of interest to the Nation. The specific objectives of this program are: Program Objective 1. To provide opportunities to share information about advances in high-end computing systems and computational techniques between mission critical agencies, agency laboratories, academics, and industry. Program Objective 2. To gather pertinent data, address specific topics of wide interest to mission critical agencies. Program Objective 3. To promote a continuing discussion of critical issues in high-end computing. Program Objective 4. To provide a venue where a multidisciplinary scientific audience can discuss the difficulties applying computational science techniques to specific problems and can specify future research that, if successful, will eliminate these problems.

  1. Strategy Guideline: Partnering for High Performance Homes

    SciTech Connect

    Prahl, D.

    2013-01-01

    High performance houses require a high degree of coordination and have significant interdependencies between various systems in order to perform properly, meet customer expectations, and minimize risks for the builder. Responsibility for the key performance attributes is shared across the project team and can be well coordinated through advanced partnering strategies. For high performance homes, traditional partnerships need to be matured to the next level and be expanded to all members of the project team including trades, suppliers, manufacturers, HERS raters, designers, architects, and building officials as appropriate. In an environment where the builder is the only source of communication between trades and consultants and where relationships are, in general, adversarial as opposed to cooperative, the chances that any one building system will fail are greater. Furthermore, it is much harder for the builder to identify and capitalize on synergistic opportunities. Partnering can help bridge the cross-functional aspects of the systems approach and achieve performance-based criteria. Critical success factors for partnering include support from top management, mutual trust, effective and open communication, effective coordination around common goals, team building, appropriate use of an outside facilitator, a partnering charter, progress toward common goals, an effective problem-solving process, long-term commitment, continuous improvement, and a positive experience for all involved.

  2. Commercial Buildings High Performance Rooftop Unit Challenge

    SciTech Connect

    2011-12-16

    The U.S. Department of Energy (DOE) and the Commercial Building Energy Alliances (CBEAs) are releasing a new design specification for high performance rooftop air conditioning units (RTUs). Manufacturers who develop RTUs based on this new specification will find strong interest from the commercial sector due to the energy and financial savings.

  3. High Performance Builder Spotlight: Imagine Homes

    SciTech Connect

    2011-01-01

    Imagine Homes, working with the DOE's Building America research team member IBACOS, has developed a system that can be replicated by other contractors to build affordable, high-performance homes. Imagine Homes has used the system to produce more than 70 Builders Challenge-certified homes per year in San Antonio over the past five years.

  4. Project materials [Commercial High Performance Buildings Project

    SciTech Connect

    2001-01-01

    The Consortium for High Performance Buildings (ChiPB) is an outgrowth of DOE's Commercial Whole Buildings Roadmapping initiatives. It is a team-driven public/private partnership that seeks to enable and demonstrate the benefit of buildings that are designed, built and operated to be energy efficient, environmentally sustainable, of superior quality, and cost effective.

  5. High Performance Window Attachments | Department of Energy

    Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)

    Window Attachments High Performance Window Attachments Emerging Technologies Project for the 2013 Building Technologies Office's Program Peer Review emrgtech20_curcija_040413.pdf (2.09 MB) More Documents & Publications Fenestration Software Tools Advanced Facades, Daylighting, and Complex Fenestration Systems OpenStudio - 2013 Peer Review

  6. Strategy Guideline. High Performance Residential Lighting

    SciTech Connect

    Holton, J.

    2012-02-01

    This report has been developed to provide a tool for the understanding and application of high performance lighting in the home. The strategies featured in this guide are drawn from recent advances in commercial lighting for application to typical spaces found in residential buildings. This guide offers strategies to greatly reduce lighting energy use through the application of high quality fluorescent and light emitting diode (LED) technologies. It is important to note that these strategies not only save energy in the home but also serve to satisfy the homeowner’s expectations for high quality lighting.

  7. High voltage electric substation performance in earthquakes

    SciTech Connect

    Eidinger, J.; Ostrom, D.; Matsuda, E.

    1995-12-31

    This paper examines the performance of several types of high voltage substation equipment in past earthquakes. Damage data is provided in chart form. This data is then developed into a tool for estimating the performance of a substation subjected to an earthquake. First, suggestions are made about the development of equipment class fragility curves that represent the expected earthquake performance of different voltages and types of equipment. Second, suggestions are made about how damage to individual pieces of equipment at a substation likely affects the post-earthquake performance of the substation as a whole. Finally, estimates are provided as to how quickly a substation, at various levels of damage, can be restored to operational service after the earthquake.
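
    Equipment-class fragility curves of the kind suggested here are conventionally modeled as lognormal in peak ground acceleration (PGA); a generic evaluation, with parameter values that are placeholders rather than the paper's fitted data:

```python
from math import erf, log, sqrt

def fragility(pga, median=0.5, beta=0.4):
    """P(damage | PGA) for a lognormal fragility curve.

    median: PGA (in g) at 50% probability of damage; beta: lognormal
    standard deviation. Both values here are illustrative placeholders."""
    return 0.5 * (1 + erf(log(pga / median) / (beta * sqrt(2))))

for pga in (0.1, 0.3, 0.5, 0.7):
    print(f"PGA {pga:.1f} g -> P(damage) = {fragility(pga):.2f}")
```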

  8. High performance anode for advanced Li batteries

    SciTech Connect

    Lake, Carla

    2015-11-02

    The overall objective of this Phase I SBIR effort was to advance the manufacturing technology for ASI’s Si-CNF high-performance anode by creating a framework for large volume production and utilization of low-cost Si-coated carbon nanofibers (Si-CNF) for the battery industry. This project explores the use of nano-structured silicon which is deposited on a nano-scale carbon filament to achieve the benefits of high cycle life and high charge capacity without the consequent fading of, or failure in, the capacity resulting from stress-induced fracturing of the Si particles and de-coupling from the electrode. ASI’s patented coating process distinguishes itself from others in that it is highly reproducible, readily scalable, and results in a Si-CNF composite structure containing 25-30% silicon, with a compositionally graded Si-CNF interface that significantly improves cycling stability and enhances adhesion of silicon to the carbon fiber support. In Phase I, the team demonstrated that production of the Si-CNF anode material can successfully be transitioned from a static bench-scale reactor into a fluidized bed reactor. In addition, ASI made significant progress in the development of low cost, quick testing methods which can be performed on silicon coated CNFs as a means of quality control. To date, weight change, density, and cycling performance were the key metrics used to validate the high performance anode material. Under this effort, ASI made strides to establish a quality control protocol for the large volume production of Si-CNFs and has identified several key technical thrusts for future work. Using the results of this Phase I effort as a foundation, ASI has defined a path forward to commercialize and deliver high volume and low-cost production of Si-CNF material for anodes in Li-ion batteries.

  9. High Performance Commercial Fenestration Framing Systems

    SciTech Connect

    Mike Manteghi; Sneh Kumar; Joshua Early; Bhaskar Adusumalli

    2010-01-31

    A major objective of the U.S. Department of Energy is to have a zero energy commercial building by the year 2025. Windows have a major influence on the energy performance of the building envelope as they control over 55% of building energy load, and represent one important area where technologies can be developed to save energy. Aluminum framing systems are used in over 80% of commercial fenestration products (i.e. windows, curtain walls, store fronts, etc.). Aluminum framing systems are often required in commercial buildings because of their inherent good structural properties and long service life, which is required from commercial and architectural frames. At the same time, they are lightweight and durable, requiring very little maintenance, and offer design flexibility. An additional benefit of aluminum framing systems is their relatively low cost and easy manufacturability. Aluminum, being an easily recyclable material, also offers sustainable features. However, from an energy efficiency point of view, aluminum frames have lower thermal performance due to the very high thermal conductivity of aluminum. Fenestration systems constructed of aluminum alloys therefore have lower performance in terms of being an effective barrier to energy transfer (heat loss or gain). Despite the lower energy performance, aluminum is the choice material for commercial framing systems and dominates the commercial/architectural fenestration market because of the reasons mentioned above. In addition, there is no other cost effective and energy efficient replacement material available to take the place of aluminum in the commercial/architectural market. Hence it is imperative to improve the performance of aluminum framing systems to improve the energy performance of commercial fenestration systems and in turn reduce the energy consumption of commercial buildings and achieve zero energy building by 2025. The objective of this project was to develop high performance, energy efficient commercial

  10. NREL: Computational Science Home Page

    U.S. Department of Energy (DOE) - all webpages (Extended Search)

    high-performance computing, computational science, applied mathematics, scientific data management, visualization, and informatics. NREL is home to the largest high performance...

  11. Proceedings from the conference on high speed computing: High speed computing and national security

    SciTech Connect

    Hirons, K.P.; Vigil, M.; Carlson, R.

    1997-07-01

    This meeting covered the following topics: technologies/national needs/policies: past, present and future; information warfare; crisis management/massive data systems; risk assessment/vulnerabilities; Internet law/privacy and rights of society; challenges to effective ASCI programmatic use of 100 TFLOPs systems; and new computing technologies.

  12. Highly-Accessible Catalysts for Durable High-Power Performance

    Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)

    Highly-Accessible Catalysts for Durable High-Power Performance This presentation does not contain any proprietary, confidential, or otherwise restricted information Anusorn Kongkanand (PI) General Motors, Fuel Cell Activities DOE Catalyst Working Group at Argonne National Lab July 27, 2016 FC144 2 Energy Environ. Sci., 2014. Exceptional Durability of ORR Activity with Dealloyed PtNi/HSC and PtCo/HSC FC087 2011-2014 * Meeting DOE ORR durability in MEA. Validated at multiple sites. * Need thicker

  13. High-Performance Energy Applications and Systems

    SciTech Connect

    Miller, Barton

    2014-05-19

    The Paradyn project has a history of developing algorithms, techniques, and software that push the cutting edge of tool technology for high-end computing systems. Under this funding, we are working on a three-year agenda to make substantial new advances in support of new and emerging Petascale systems. The overall goal for this work is to address the steady increase in complexity of these petascale systems. Our work covers two key areas: (1) The analysis, instrumentation and control of binary programs. Work in this area falls under the general framework of the Dyninst API tool kits. (2) Infrastructure for building tools and applications at extreme scale. Work in this area falls under the general framework of the MRNet scalability framework. Note that work done under this funding is closely related to work done under a contemporaneous grant, “Foundational Tools for Petascale Computing”, SC0003922/FG02-10ER25940, UW PRJ27NU.

  14. Reducing power consumption while performing collective operations on a plurality of compute nodes

    DOEpatents

    Archer, Charles J.; Blocksome, Michael A.; Peters, Amanda E.; Ratterman, Joseph D.; Smith, Brian E.

    2011-10-18

    Methods, apparatus, and products are disclosed for reducing power consumption while performing collective operations on a plurality of compute nodes that include: receiving, by each compute node, instructions to perform a type of collective operation; selecting, by each compute node from a plurality of collective operations for the collective operation type, a particular collective operation in dependence upon power consumption characteristics for each of the plurality of collective operations; and executing, by each compute node, the selected collective operation.
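
    The selection step in the claims amounts to choosing, per collective type, the implementation with the best power profile; a toy sketch with invented characteristics, minimizing energy as power times time:

```python
# Power (W) and time (s) characteristics per implementation; every
# name and number below is invented for illustration.
COLLECTIVES = {
    "allreduce": {
        "ring":      {"power": 40.0, "time": 1.2},
        "tree":      {"power": 55.0, "time": 0.8},
        "butterfly": {"power": 70.0, "time": 0.6},
    },
}

def select_collective(op_type, max_power=None):
    """Pick the implementation minimizing energy (power * time),
    optionally respecting a per-node power cap."""
    candidates = COLLECTIVES[op_type]
    if max_power is not None:
        candidates = {k: v for k, v in candidates.items()
                      if v["power"] <= max_power}
    return min(candidates,
               key=lambda k: candidates[k]["power"] * candidates[k]["time"])

print(select_collective("allreduce"))                 # butterfly (42 J)
print(select_collective("allreduce", max_power=60))   # tree (44 J)
```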

  15. High Performance Leasing Strategies for State and Local Governments...

    Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)

    High Performance Leasing Strategies for State and Local Governments High Performance Leasing Strategies for State and Local Governments Presentation for the SEE Action Series: High ...

  16. Design and Development of High-Performance Polymer Fuel Cell...

    Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)

    Design and Development of High-Performance Polymer Fuel Cell Membranes Design and Development of High-Performance Polymer Fuel Cell Membranes A presentation to the High Temperature ...

  17. High Performance Preconditioners and Linear Solvers

    Energy Science and Technology Software Center

    2006-07-27

    Hypre is a software library focused on the solution of large, sparse linear systems of equations on massively parallel computers.

  18. High Performance Walls in Hot-Dry Climates (Technical Report...

    Office of Scientific and Technical Information (OSTI)

    High Performance Walls in Hot-Dry Climates. High performance walls represent a high priority ...

  19. Performance Refactoring of Instrumentation, Measurement, and Analysis Technologies for Petascale Computing. The PRIMA Project

    SciTech Connect

    Malony, Allen D.; Wolf, Felix G.

    2014-01-31

    The growing number of cores provided by today's high-end computing systems presents substantial challenges to application developers in their pursuit of parallel efficiency. To find the most effective optimization strategy, application developers need insight into the runtime behavior of their code. The University of Oregon (UO) and the Juelich Supercomputing Centre of Forschungszentrum Juelich (FZJ) develop the performance analysis tools TAU and Scalasca, respectively, which allow high-performance computing (HPC) users to collect and analyze relevant performance data even at very large scales. TAU and Scalasca are considered among the most advanced parallel performance systems available, and are used extensively across HPC centers in the U.S., Germany, and around the world. The TAU and Scalasca groups share a heritage of parallel performance tool research and partnership throughout the past fifteen years. Indeed, the close interactions of the two groups resulted in a cross-fertilization of tool ideas and technologies that pushed TAU and Scalasca to what they are today. It also produced two performance systems with an increasing degree of functional overlap. While each tool has its specific analysis focus, the tools were implementing measurement infrastructures that were substantially similar. Because each tool provides complementary performance analysis, sharing of measurement results is valuable to provide the user with more facets to understand performance behavior. However, each measurement system was producing performance data in different formats, requiring data interoperability tools to be created. A common measurement and instrumentation system was needed to more closely integrate TAU and Scalasca and to avoid the duplication of development and maintenance effort. The PRIMA (Performance Refactoring of Instrumentation, Measurement, and Analysis) project was proposed over three years ago as a joint international effort between UO and FZJ to accomplish ...

  1. Mira Performance Boot Camp 2014 | Argonne Leadership Computing...

    U.S. Department of Energy (DOE) - all webpages (Extended Search)

    Mira Performance Boot Camp 2014. This event has expired; see the agenda for links to presentations. Ready to take your code to the next level? At the Mira Performance Boot Camp, ...

  2. DOE High Performance Concentrator PV Project

    SciTech Connect

    McConnell, R.; Symko-Davies, M.

    2005-08-01

    Much in demand are next-generation photovoltaic (PV) technologies that can be used economically to make a large-scale impact on world electricity production. The U.S. Department of Energy (DOE) initiated the High-Performance Photovoltaic (HiPerf PV) Project to substantially increase the viability of PV for cost-competitive applications so that PV can contribute significantly to both our energy supply and environment. To accomplish such results, the National Center for Photovoltaics (NCPV) directs in-house and subcontracted research in high-performance polycrystalline thin-film and multijunction concentrator devices with the goal of enabling progress of high-efficiency technologies toward commercial-prototype products. We will describe the details of the subcontractor and in-house progress in exploring and accelerating pathways of III-V multijunction concentrator solar cells and systems toward their long-term goals. By 2020, we anticipate that this project will have demonstrated 33% system efficiency and a system price of $1.00/Wp for concentrator PV systems using III-V multijunction solar cells with efficiencies over 41%.

  3. High performance robotic traverse of desert terrain.

    SciTech Connect

    Whittaker, William

    2004-09-01

    This report presents tentative innovations to enable unmanned vehicle guidance for a class of off-road traverse at sustained speeds greater than 30 miles per hour. Analyses and field trials suggest that even greater navigation speeds might be achieved. The performance calls for innovation in mapping, perception, planning and inertial-referenced stabilization of components, hosted aboard capable locomotion. The innovations are motivated by the challenge of autonomous ground vehicle traverse of 250 miles of desert terrain in less than 10 hours, averaging 30 miles per hour. GPS coverage is assumed to be available with localized blackouts. Terrain and vegetation are assumed to be akin to that of the Mojave Desert. This terrain is interlaced with networks of unimproved roads and trails, which are a key to achieving the high performance mapping, planning and navigation that is presented here.

  4. High Performance Piezoelectric Actuated Gimbal (HIERAX)

    SciTech Connect

    Charles Tschaggeny; Warren Jones; Eberhard Bamberg

    2007-04-01

    This paper presents a 3-axis gimbal whose three rotational axes are actuated by a novel drive system: linear piezoelectric motors whose linear output is converted to rotation by using drive disks. Advantages of this technology are: fast response, high accelerations, dither-free actuation and backlash-free positioning. The gimbal was developed to house a laser range finder for the purpose of tracking and guiding unmanned aerial vehicles during landing maneuvers. The tilt axis was built and the test results indicate excellent performance that meets design specifications.

  5. Automatic Performance Collection (AutoPerf) | Argonne Leadership Computing Facility

    U.S. Department of Energy (DOE) - all webpages (Extended Search)

    A library for the automatic collection of hardware performance counter and MPI information is available on ALCF BG/Q machines (Mira, Cetus, Vesta). This library transparently collects performance data from running jobs and saves it into files at job completion. AutoPerf is enabled by default on Cetus, Mira, and Vesta; no action is needed to utilize the library. Executables compiled or linked on these machines will automatically be ...

  6. High-performance laboratories and cleanrooms

    SciTech Connect

    Tschudi, William; Sartor, Dale; Mills, Evan; Xu, Tengfang

    2002-07-01

    The California Energy Commission sponsored this roadmap to guide energy efficiency research and deployment for high performance cleanrooms and laboratories. Industries and institutions utilizing these building types (termed high-tech buildings) have played an important part in the vitality of the California economy. This roadmap's key objective is to present a multi-year agenda to prioritize and coordinate research efforts. It also addresses delivery mechanisms to get the research products into the market. Because of the importance to the California economy, it is appropriate and important for California to take the lead in assessing the energy efficiency research needs, opportunities, and priorities for this market. In addition to the importance to California's economy, energy demand for this market segment is large and growing (estimated at 9,400 GWh for 1996; Mills et al. 1996). With their 24-hour continuous operation, high-tech facilities are a major contributor to the peak electrical demand. Laboratories and cleanrooms constitute the high-tech building market, and although each building type has its unique features, they are similar in that they are extremely energy intensive, involve special environmental considerations, have very high ventilation requirements, and are subject to regulations, primarily safety driven, that tend to have adverse energy implications. High-tech buildings have largely been overlooked in past energy efficiency research. Many industries and institutions utilize laboratories and cleanrooms; those operating cleanrooms in California include semiconductor manufacturing, semiconductor suppliers, pharmaceutical, biotechnology, disk drive manufacturing, flat panel displays, automotive, aerospace, food, hospitals, medical devices, universities, and federal research facilities.

  7. Performing a local reduction operation on a parallel computer

    DOEpatents

    Blocksome, Michael A; Faraj, Daniel A

    2013-06-04

    A parallel computer including compute nodes, each including two reduction processing cores, a network write processing core, and a network read processing core, each processing core assigned an input buffer. Copying, in interleaved chunks by the reduction processing cores, contents of the reduction processing cores' input buffers to an interleaved buffer in shared memory; copying, by one of the reduction processing cores, contents of the network write processing core's input buffer to shared memory; copying, by another of the reduction processing cores, contents of the network read processing core's input buffer to shared memory; and locally reducing in parallel by the reduction processing cores: the contents of the reduction processing core's input buffer; every other interleaved chunk of the interleaved buffer; the copied contents of the network write processing core's input buffer; and the copied contents of the network read processing core's input buffer.
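
    A sequential toy model of the data movement the claims describe: two reduction cores interleave their input buffers into shared memory in chunks, the network cores' buffers are copied over, and everything is reduced locally. The buffer sizes, chunk size, and the use of a sum as the reduction operator are illustrative assumptions, not details from the patent:

```python
import numpy as np

CHUNK = 4
red0 = np.arange(16.0)          # reduction core 0 input buffer
red1 = np.arange(16.0) * 2      # reduction core 1 input buffer
netw = np.ones(16)              # network-write core input buffer
netr = np.full(16, 3.0)         # network-read core input buffer

# Interleave red0/red1 into a shared buffer in CHUNK-sized pieces
# (in the patent the two reduction cores do this concurrently).
shared = np.empty(32)
for i in range(0, 16, CHUNK):
    shared[2 * i : 2 * i + CHUNK] = red0[i : i + CHUNK]
    shared[2 * i + CHUNK : 2 * i + 2 * CHUNK] = red1[i : i + CHUNK]

# Local reduction (here a sum) over all four copied sources.
total = shared.sum() + netw.sum() + netr.sum()
print(total)  # 424.0
```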

  8. Performing a local reduction operation on a parallel computer

    SciTech Connect

    Blocksome, Michael A.; Faraj, Daniel A.

    2012-12-11

    A parallel computer including compute nodes, each including two reduction processing cores, a network write processing core, and a network read processing core, each processing core assigned an input buffer. Copying, in interleaved chunks by the reduction processing cores, contents of the reduction processing cores' input buffers to an interleaved buffer in shared memory; copying, by one of the reduction processing cores, contents of the network write processing core's input buffer to shared memory; copying, by another of the reduction processing cores, contents of the network read processing core's input buffer to shared memory; and locally reducing in parallel by the reduction processing cores: the contents of the reduction processing core's input buffer; every other interleaved chunk of the interleaved buffer; the copied contents of the network write processing core's input buffer; and the copied contents of the network read processing core's input buffer.

  9. High-Performance Leasing for State and Local Governments

    SciTech Connect

    Existing Commercial Buildings Working Group

    2012-05-23

    Describes the value of high-performance leasing and how states can lead by example by using high-performance leases in their facilities and encourage high-performance leasing in the private sector.

  10. Mira Performance Boot Camp 2015 | Argonne Leadership Computing...

    U.S. Department of Energy (DOE) - all webpages (Extended Search)

    Agenda excerpt: Performance Vis - Holger Brunst, Dresden University of Technology; 10:00 - 10:30 a.m.: TAU Performance System - Sameer Shende, ParaTools, Inc.; 10:30 a.m. - 12:00 p.m.: Hands-on ...

  11. Webinar: ENERGY STAR Hot Water Systems for High Performance Homes...

    Energy Saver

    Webinar: ENERGY STAR Hot Water Systems for High Performance Homes. This presentation is from the Building America research ...

  12. LBNL: High Performance Active Perimeter Building Systems - 2015...

    Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)

    LBNL: High Performance Active Perimeter Building Systems - 2015 Peer Review. Presenter: Eleanor Lee, LBNL. View the Presentation: LBNL: High Performance Active Perimeter ...

  13. Print-based Manufacturing of Integrated, Low Cost, High Performance...

    Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)

    Print-based Manufacturing of Integrated, Low Cost, High Performance SSL Luminaires. Lead ...

  14. Memorandum of American High-Performance Buildings Coalition DOE...

    Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)

    Memorandum of American High-Performance Buildings Coalition DOE Meeting, August 19, 2013. This ...

  15. Materials Modeling for High-Performance Radiation Detectors ...

    Office of Scientific and Technical Information (OSTI)

    Technical Report: Materials Modeling for High-Performance Radiation Detectors ...

  16. Natural Refrigerant High-Performance Heat Pump for Commercial...

    Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)

    Natural Refrigerant High-Performance Heat Pump for Commercial Applications. Credit: S-RAM. Lead ...

  17. Energy Design Guidelines for High Performance Schools: Hot and...

    Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)

    Energy Design Guidelines for High Performance Schools: Hot and Humid Climates. School districts around the ...

  18. Business Metrics for High-Performance Homes: A Colorado Springs...

    Office of Scientific and Technical Information (OSTI)

    Technical Report: Business Metrics for High-Performance Homes: A Colorado Springs Case Study ...

  19. Enhanced High Temperature Performance of NOx Storage/Reduction...

    Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)

    More Documents & Publications: Enhanced High Temperature Performance of NOx Storage/Reduction (NSR) Materials; Enhanced High Temperature Performance of NOx Storage/Reduction (NSR) ...

  20. Enhanced High Temperature Performance of NOx Storage/Reduction...

    Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)

    More Documents & Publications: Enhanced High and Low Temperature Performance of NOx Reduction Materials; Enhanced High Temperature Performance of NOx Storage/Reduction (NSR) ...

  1. USABC Development of Advanced High-Performance Batteries for...

    Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)

    USABC Development of Advanced High-Performance Batteries for EV Applications. 2012 DOE Hydrogen and Fuel Cells ...

  2. Reduced Call-Backs with High Performance Production Builders...

    Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)

    Reduced Call-Backs with High Performance Production Builders - Building America Top Innovation ...

  3. Integrated Design: A High-Performance Solution for Affordable...

    Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)

    Integrated Design: A High-Performance Solution for Affordable Housing. ARIES lab houses; photo courtesy of The Levy ...

  4. New rocket propellant and motor design offer high-performance...

    U.S. Department of Energy (DOE) - all webpages (Extended Search)

    New rocket propellant and motor design offer high performance and safety. Scientists recently flight tested ...

  5. Federal Leadership in High Performance and Sustainable Buildings...

    Energy.gov [DOE] (indexed site)

    and operation of High-Performance and Sustainable Buildings. Federal Leadership in High Performance and Sustainable Buildings Memorandum of Understanding (148.11 KB) ...

  6. Halide and Oxy-halide Eutectic Systems for High Performance High...

    Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)

    Halide and Oxy-halide Eutectic Systems for High Performance High Temperature Heat Transfer Fluids ...

  7. High-performance Si microwire photovoltaics

    SciTech Connect

    Kelzenberg, Michael D.; Turner-Evans, Daniel B.; Putnam, Morgan C.; Boettcher, Shannon W.; Briggs, Ryan M.; Baek, Jae Y.; Lewis, Nathan S.; Atwater, Harry A.

    2011-01-07

    Crystalline Si wires, grown by the vapor-liquid-solid (VLS) process, have emerged as promising candidate materials for low-cost, thin-film photovoltaics. Here, we demonstrate VLS-grown Si microwires that have suitable electrical properties for high-performance photovoltaic applications, including long minority-carrier diffusion lengths (Ln ≥ 30 μm) and low surface recombination velocities (S ≤ 70 cm s-1). Single-wire radial p-n junction solar cells were fabricated with amorphous silicon and silicon nitride surface coatings, achieving up to 9.0% apparent photovoltaic efficiency, and exhibiting up to ~600 mV open-circuit voltage with over 80% fill factor. Projective single-wire measurements and optoelectronic simulations suggest that large-area Si wire-array solar cells have the potential to exceed 17% energy-conversion efficiency, offering a promising route toward cost-effective crystalline Si photovoltaics.

  8. High-performance, high-volume fly ash concrete

    SciTech Connect

    2008-01-15

    This booklet offers the construction professional an in-depth description of the use of high-volume fly ash in concrete. Emphasis is placed on the need for increased utilization of coal-fired power plant byproducts in lieu of Portland cement materials to eliminate increased CO2 emissions during the production of cement. Also addressed is the dramatic increase in concrete performance with the use of 50+ percent fly ash volume. The booklet contains numerous color and black and white photos, charts of test results, mixtures and comparisons, and several HVFA case studies.

  9. High performance internal reforming unit for high temperature fuel cells

    DOEpatents

    Ma, Zhiwen; Venkataraman, Ramakrishnan; Novacco, Lawrence J.

    2008-10-07

    A fuel reformer having an enclosure with first and second opposing surfaces, a sidewall connecting the first and second opposing surfaces and an inlet port and an outlet port in the sidewall. A plate assembly supporting a catalyst and baffles are also disposed in the enclosure. A main baffle extends into the enclosure from a point of the sidewall between the inlet and outlet ports. The main baffle cooperates with the enclosure and the plate assembly to establish a path for the flow of fuel gas through the reformer from the inlet port to the outlet port. At least a first directing baffle extends in the enclosure from one of the sidewall and the main baffle and cooperates with the plate assembly and the enclosure to alter the gas flow path. A graded catalyst loading pattern has been defined to optimize thermal management of the internal-reforming high-temperature fuel cell and thereby achieve high cell performance.

  10. Improving network performance on multicore systems: Impact of core affinities on high throughput flows

    U.S. Department of Energy (DOE) - all webpages (Extended Search)

    Improving network performance on multicore systems: Impact of core affinities on high throughput flows. Future Generation Computer Systems. Nathan Hanford, Vishal Ahuja, Matthew Farrens, Dipak Ghosal (Department of Computer Science, University of California, Davis, CA, United States); Mehmet Balman, Eric Pouyoul, Brian Tierney (Energy Sciences ...

  11. High-Tech Tools Tackle Wind Farm Performance - News Feature | NREL

    U.S. Department of Energy (DOE) - all webpages (Extended Search)

    High-Tech Tools Tackle Wind Farm Performance. September 20, 2012. [Photo: a computer simulation showing how wind flows through a group of wind turbines.] NREL's Steve Hammond, director of the Computational Science Center, and Kenny Gruchalla, senior scientist, discuss a 3D model of wind plant aerodynamics that shows low-velocity wakes and the resulting impact on downstream turbines.

  12. Computer analysis of sodium cold trap design and performance [LMFBR]

    SciTech Connect

    McPheeters, C.C.; Raue, D.J.

    1983-11-01

    Normal steam-side corrosion of steam-generator tubes in Liquid Metal Fast Breeder Reactors (LMFBRs) results in liberation of hydrogen, and most of this hydrogen diffuses through the tubes into the heat-transfer sodium and must be removed by the purification system. Cold traps are normally used to purify sodium, and they operate by cooling the sodium to temperatures near the melting point, where soluble impurities including hydrogen and oxygen precipitate as NaH and Na2O, respectively. A computer model was developed to simulate the processes that occur in sodium cold traps. The Model for Analyzing Sodium Cold Traps (MASCOT) simulates any desired configuration of mesh arrangements and dimensions and calculates pressure drops and flow distributions, temperature profiles, impurity concentration profiles, and impurity mass distributions.
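
    MASCOT's equations are not reproduced in this record. As a minimal sketch of the underlying idea, namely that the trap works by cooling sodium until impurity solubility drops and the excess precipitates, the following Python fragment uses a generic Arrhenius-type solubility fit; the constants A and B are illustrative placeholders, not MASCOT's fitted values:

```python
def h_solubility_wppm(T_kelvin, A=6.5, B=3000.0):
    """Saturated hydrogen concentration in sodium (wppm) from an
    Arrhenius-type fit, log10(S) = A - B/T. A and B are
    illustrative placeholders, not MASCOT's constants."""
    return 10 ** (A - B / T_kelvin)

T_loop, T_trap = 700.0, 420.0        # hot-leg vs. cold-trap temperature (K)
c_in = h_solubility_wppm(T_loop)     # assume sodium arrives saturated
c_out = h_solubility_wppm(T_trap)    # trap exit limited by solubility
print(f"removable fraction: {1 - c_out / c_in:.2%}")
```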

  13. Computing Sciences

    U.S. Department of Energy (DOE) - all webpages (Extended Search)

    Computational Research Division: the Computational Research Division conducts research and development in mathematical modeling and simulation, algorithm design, data storage, management and analysis, computer system architecture and high-performance software implementation. Scientific Networking ...

  14. Federal Leadership in High Performance and Sustainable Buildings Memorandum

    Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)

    Federal Leadership in High Performance and Sustainable Buildings Memorandum of Understanding. With this Memorandum of Understanding (MOU), signatory agencies commit to federal leadership in the design, construction, and operation of High-Performance and Sustainable Buildings.

  15. Measuring Human Performance within Computer Security Incident Response Teams

    SciTech Connect

    McClain, Jonathan T.; Silva, Austin Ray; Avina, Glory Emmanuel; Forsythe, James C.

    2015-09-01

    Human performance has become a pertinent issue within cyber security. However, this research has been stymied by the limited availability of expert cyber security professionals. This is partly attributable to the ongoing workload faced by cyber security professionals, which is compounded by the limited number of qualified personnel and turnover of personnel across organizations. Additionally, it is difficult to conduct research, and particularly, openly published research, due to the sensitivity inherent to cyber operations at most organizations. As an alternative, the current research has focused on data collection during cyber security training exercises. These events draw individuals with a range of knowledge and experience extending from seasoned professionals to recent college graduates to college students. The current paper describes research involving data collection at two separate cyber security exercises. This data collection involved multiple measures which included behavioral performance based on human-machine transactions and questionnaire-based assessments of cyber security experience.

  16. Computational Earth Science

    U.S. Department of Energy (DOE) - all webpages (Extended Search)

    We develop and apply a range of high-performance computational methods and software tools to Earth science projects in support of environmental ...

  17. Building America Webinar: High Performance Enclosure Strategies: Part II,

    Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)

    Building America Webinar: High Performance Enclosure Strategies, Part II: New Construction - August 13, 2014. This presentation, Next Gen Advanced Framing for High Performance Homes - Integrated System Solutions ...

  18. High-Performance Secure Database Access Technologies for HEP Grids

    SciTech Connect

    Matthew Vranicar; John Weicher

    2006-04-17

    The Large Hadron Collider (LHC) at the CERN Laboratory will become the largest scientific instrument in the world when it starts operations in 2007. Large Scale Analysis Computer Systems (computational grids) are required to extract rare signals of new physics from petabytes of LHC detector data. In addition to file-based event data, LHC data processing applications require access to large amounts of data in relational databases: detector conditions, calibrations, etc. U.S. high energy physicists demand efficient performance of grid computing applications in LHC physics research where world-wide remote participation is vital to their success. To empower physicists with data-intensive analysis capabilities, a whole hyperinfrastructure of distributed databases cross-cuts a multi-tier hierarchy of computational grids. The crosscutting allows separation of concerns across both the global environment of a federation of computational grids and the local environment of a physicist's computer used for analysis. Very few efforts are ongoing in the area of database and grid integration research. Most of these are outside of the U.S. and rely on traditional approaches to secure database access via an extraneous security layer separate from the database system core, preventing efficient data transfers. Our findings are shared by the Database Access and Integration Services Working Group of the Global Grid Forum, which states that "Research and development activities relating to the Grid have generally focused on applications where data is stored in files. However, in many scientific and commercial domains, database management systems have a central role in data storage, access, organization, authorization, etc, for numerous applications." There is a clear opportunity for a technological breakthrough, requiring innovative steps to provide high-performance secure database access technologies for grid computing. We believe that an innovative database architecture where the ...

  19. TOWARD HIGHLY SECURE AND AUTONOMIC COMPUTING SYSTEMS: A HIERARCHICAL APPROACH

    SciTech Connect

    Lee, Hsien-Hsin S

    2010-05-11

    The overall objective of this research project is to develop novel architectural techniques as well as system software to achieve a highly secure and intrusion-tolerant computing system. Such a system will be autonomous, self-adapting, introspective, with self-healing capability under the circumstances of improper operations, abnormal workloads, and malicious attacks. The scope of this research includes: (1) system-wide, unified introspection techniques for autonomic systems; (2) secure information-flow microarchitecture; (3) memory-centric security architecture; (4) authentication control and its implication to security; (5) digital rights management; (6) microarchitectural denial-of-service attacks on shared resources. During the period of the project, we developed several architectural techniques and system software for achieving a robust, secure, and reliable computing system toward our goal.

  20. Computational Human Performance Modeling For Alarm System Design

    SciTech Connect

    Jacques Hugo

    2012-07-01

    The introduction of new technologies like adaptive automation systems and advanced alarm processing and presentation techniques in nuclear power plants is already having an impact on the safety and effectiveness of plant operations and also on the role of the control room operator. This impact is expected to escalate dramatically as more and more nuclear power utilities embark on upgrade projects in order to extend the lifetime of their plants. One of the most visible impacts in control rooms will be the need to replace aging alarm systems. Because most of these alarm systems use obsolete technologies, the methods, techniques and tools that were used to design the previous generation of alarm system designs are no longer effective and need to be updated. The same applies to the need to analyze and redefine operators' alarm-handling tasks. In the past, methods for analyzing human tasks and workload have relied on crude, paper-based methods that often lacked traceability. New approaches are needed to allow analysts to model and represent the new concepts of alarm operation and human-system interaction. State-of-the-art task simulation tools are now available that offer a cost-effective and efficient method for examining the effect of operator performance in different conditions and operational scenarios. A discrete event simulation system was used by human factors researchers at the Idaho National Laboratory to develop a generic alarm handling model to examine operator performance with a simulated modern alarm system. It allowed analysts to evaluate alarm generation patterns as well as critical task times and human workload predicted by the system.
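
    The INL model itself is not published in this record. As a minimal sketch of the discrete-event approach it describes, the following Python fragment simulates alarms arriving at random and a single operator acknowledging them in order, then reports the mean queueing delay; all arrival and handling rates are invented for illustration:

```python
import heapq, random

random.seed(1)
events, t = [], 0.0
for _ in range(200):                      # schedule 200 alarm arrivals
    t += random.expovariate(1 / 30.0)     # mean 30 s between alarms
    heapq.heappush(events, (t, "alarm"))

busy_until, delays = 0.0, []
while events:
    now, _ = heapq.heappop(events)
    start = max(now, busy_until)          # wait if the operator is busy
    delays.append(start - now)
    busy_until = start + random.expovariate(1 / 20.0)  # ~20 s to handle

print(f"mean acknowledgement delay: {sum(delays) / len(delays):.1f} s")
```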

  1. High-performance commercial building systems

    SciTech Connect

    Selkowitz, Stephen

    2003-10-01

    This report summarizes key technical accomplishments resulting from the three-year PIER-funded R&D program, ''High Performance Commercial Building Systems'' (HPCBS). The program targets the commercial building sector in California, an end-use sector that accounts for about one-third of all California electricity consumption and an even larger fraction of peak demand, at a cost of over $10B/year. Commercial buildings also have a major impact on occupant health, comfort and productivity. Building design and operations practices that influence energy use are deeply ingrained in a fragmented, risk-averse industry that is slow to change. Although California's aggressive standards efforts have resulted in new buildings designed to use less energy than those constructed 20 years ago, the actual savings realized are still well below technical and economic potentials. The broad goal of this program is to develop and deploy a set of energy-saving technologies, strategies, and techniques, and improve processes for designing, commissioning, and operating commercial buildings, while improving health, comfort, and performance of occupants, all in a manner consistent with sound economic investment practices. Results are to be broadly applicable to the commercial sector for different building sizes and types, e.g. offices and schools, for different classes of ownership, both public and private, and for owner-occupied as well as speculative buildings. The program aims to facilitate significant electricity use savings in the California commercial sector by 2015, while assuring that these savings are affordable and promote high quality indoor environments. The five linked technical program elements contain 14 projects with 41 distinct R&D tasks. Collectively they form a comprehensive Research, Development, and Demonstration (RD&D) program with the potential to capture large savings in the commercial building sector, providing significant economic benefits to building owners and ...

  2. UltraSciencenet: High- Performance Network Research Test-Bed

    SciTech Connect

    Rao, Nageswara S; Wing, William R; Poole, Stephen W; Hicks, Susan Elaine; DeNap, Frank A; Carter, Steven M; Wu, Qishi

    2009-04-01

    The high-performance networking requirements for next generation large-scale applications belong to two broad classes: (a) high bandwidths, typically multiples of 10 Gbps, to support bulk data transfers, and (b) stable bandwidths, typically at much lower rates, to support computational steering, remote visualization, and remote control of instrumentation. Current Internet technologies, however, are severely limited in meeting these demands because such bulk bandwidths are available only in the backbone, and stable control channels are hard to realize over shared connections. The UltraScience Net (USN) facilitates the development of such technologies by providing dynamic, cross-country dedicated 10 Gbps channels for large data transfers, and 150 Mbps channels for interactive and control operations. Contributions of the USN project are two-fold: (a) Infrastructure Technologies for a Network Experimental Facility: USN developed and/or demonstrated a number of infrastructure technologies needed for a national-scale network experimental facility. Compared to the Internet, USN's data-plane is different in that it can be partitioned into isolated layer-1 or layer-2 connections, and its control-plane is different in the ability of users and applications to set up and tear down channels as needed. Its design required several new components including a Virtual Private Network infrastructure, a bandwidth and channel scheduler, and a dynamic signaling daemon. The control-plane employs a centralized scheduler to compute the channel allocations and a signaling daemon to generate configuration signals to switches. In a nutshell, USN demonstrated the ability to build and operate a stable national-scale switched network. (b) Structured Network Research Experiments: A number of network research experiments have been conducted on USN that cannot be easily supported over existing network facilities, including test-beds and production networks. It settled an open matter by demonstrating ...
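
    USN's actual scheduler is not described in detail here. As a minimal sketch of centralized channel admission of the kind the abstract mentions, the fragment below grants a dedicated-channel request only if every link on the path has spare capacity for the whole time window; the link names and capacities are hypothetical:

```python
# Hypothetical 10 Gbps links; USN's real topology is not reproduced here.
LINK_CAPACITY_GBPS = {"ORNL-ATL": 10.0, "ATL-CHI": 10.0, "CHI-SEA": 10.0}
reservations = []   # tuples of (link, start, end, gbps)

def grant(path, start, end, gbps):
    """Admit a channel request iff every link on the path has spare
    capacity throughout [start, end); if so, record the reservation."""
    for link in path:
        used = sum(g for (l, s, e, g) in reservations
                   if l == link and s < end and e > start)
        if used + gbps > LINK_CAPACITY_GBPS[link]:
            return False
    reservations.extend((link, start, end, gbps) for link in path)
    return True

print(grant(["ORNL-ATL", "ATL-CHI"], 0, 3600, 10.0))   # True
print(grant(["ATL-CHI", "CHI-SEA"], 1800, 5400, 1.0))  # False: ATL-CHI full
```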

  3. Building America Webinar: High Performance Enclosure Strategies...

    Energy.gov [DOE] (indexed site)

    performance of the building enclosure, reduce the cost of energy-efficient construction, and simplify the construction process, all while accommodating higher levels of insulation. ...

  4. DOE High Performance Computing for Manufacturing (HPC4Mfg) Program...

    Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)

    HPC systems, but also for experts in the use of these systems to solve complex problems." ... laboratories will play a key role in solving manufacturing challenges and ...

  5. High Performance Computing for Manufacturing Partnership | GE Global Research

    U.S. Department of Energy (DOE) - all webpages (Extended Search)

    GE, US DOE Partner on HPC4Mfg projects to deliver new capabilities in 3D Printing and higher jet engine efficiency. NISKAYUNA, NY, February 17, ...

  6. Energy Efficiency Opportunities in Federal High Performance Computing...

    Energy.gov [DOE] (indexed site)

    Generator Block Heater Modification Package: typical simple payback is 2.5 years. Generator Block Heater Modification Package - Equip ...

  7. Toward Codesign in High Performance Computing Systems - 06386705...

    U.S. Department of Energy (DOE) - all webpages (Extended Search)

    Toward Codesign in High Performance Computing Systems ... Santa Clara, CA, USA, sparker@nvidia.com; J. Shalf, Law ...

  8. High-Performance Computing at Los Alamos announces milestone...

    U.S. Department of Energy (DOE) - all webpages (Extended Search)

    Billion inserts-per-second data milestone reached for supercomputing tool. LOS ALAMOS, N.M., May 29, 2014 - At Los Alamos, a supercomputer epicenter where "big data set" really means ...

  9. NNSA High-Performance Computing Achievements | National Nuclear...

    National Nuclear Security Administration (NNSA)

    ASCI Q was located at Los Alamos and built by Compaq, with a speed of 20 teraFLOPS. It enabled the first million-atom simulation in biology for the molecular mechanism of the ...

  10. High Performance Parallel Computing of Flows in Complex Geometries...

    U.S. Department of Energy (DOE) - all webpages (Extended Search)

    Authors: Gicquel, L.Y.M., Gourdain, N., Boussuge, J.F., Deniau, H., Staffelbach, G., Wolf, P., Poinsot, T. Efficient numerical tools taking advantage of the ever increasing ...

  11. Toward a new metric for ranking high performance computing systems...

    Office of Scientific and Technical Information (OSTI)


  12. 100 supercomputers later, Los Alamos high-performance computing...

    U.S. Department of Energy (DOE) - all webpages (Extended Search)

    Serial Number 1 of the iconic Cray-1, and a Thinking Machines CM-5, with its lightning bolt footprint and fat-tree interconnect. "The fat-tree today seems an obvious topology, but ...

  13. A Generalized Portable SHMEM Library for High Performance Computing

    SciTech Connect

    Parzyszek, K.; Nieplocha, J.; Kendall, R.A.

    2000-09-15

    This paper describes a portable one-sided communication library GPSHMEM that follows the interfaces of the successful SHMEM library introduced by Cray Research Inc. for their distributed memory systems: the Cray T3D and T3E. The portability is achieved by relying on ARMCI, a low-level communication library developed to support one-sided communication in distributed array libraries and compiler run-time systems, and the MPI message passing interface. The paper discusses implementation, requirements, and initial experience with GPSHMEM.
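
    As a toy, in-process model of the one-sided semantics GPSHMEM portably provides, the sketch below lets a "PE" write directly into another PE's symmetric buffer without the target's participation. Real GPSHMEM calls operate across distributed memory over ARMCI or MPI; these function names only mimic the SHMEM style and are not the library's API:

```python
import numpy as np

NPES = 4
symmetric_heap = [np.zeros(8) for _ in range(NPES)]  # one buffer per PE

def toy_put(target_pe, offset, values):
    """One-sided put: write into the target PE's symmetric buffer."""
    symmetric_heap[target_pe][offset:offset + len(values)] = values

def toy_get(source_pe, offset, n):
    """One-sided get: read from the source PE's symmetric buffer."""
    return symmetric_heap[source_pe][offset:offset + n].copy()

toy_put(2, 0, np.array([1.0, 2.0, 3.0]))  # "PE 0" writes into PE 2
print(toy_get(2, 0, 3))                    # [1. 2. 3.]
```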

  14. Nuclear Forces and High-Performance Computing: The Perfect Match...

    Office of Scientific and Technical Information (OSTI)

    DOE Contract Number: W-7405-ENG-48 Resource Type: Conference ... ELEMENTARY PARTICLES AND FIELDS; GLUONS; NUCLEAR FORCES; NUCLEAR PHYSICS; QUANTUM CHROMODYNAMICS; QUANTUM FIELD THEORY...

  15. Webinar "Applying High Performance Computing to Engine Design...

    U.S. Department of Energy (DOE) - all webpages (Extended Search)

    Application areas listed include: combustion; powertrain research; building design; construction; manufacturing; ... nuclear fuel cycle; reactors; energy usage; energy storage; batteries ...

  16. Evaluation of distributed ANSYS for high performance computing...

    Office of Scientific and Technical Information (OSTI)

    DOE Contract Number: AC04-94AL85000 Resource Type: Conference Resource Relation: Conference: Proposed for presentation at the Seventh Biennial Tri-Laboratory Engineering Conference ...

  17. Toward Codesign in High Performance Computing Systems - 06386705.pdf

    U.S. Department of Energy (DOE) - all webpages (Extended Search)

    Toward Codesign in High Performance Computing Systems [Invited Special Session paper]. Richard F. Barrett, Sandia National Laboratories, Albuquerque, NM, USA, rfbarre@sandia.gov; Sudip S. Dosanjh, Sandia National Laboratories, Albuquerque, NM, USA, ssdosan@sandia.gov; Michael A. Heroux, Sandia N ...

  18. John Shalf Gives Talk at San Francisco High Performance Computing...

    U.S. Department of Energy (DOE) - all webpages (Extended Search)

    Center serves more than 6,000 scientists at national laboratories and universities researching a wide range of problems in combustion, climate modeling, fusion energy, materials...

  19. Coordinated Fault Tolerance for High-Performance Computing

    SciTech Connect

    Dongarra, Jack; Bosilca, George; et al.

    2013-04-08

    Our work to meet our goal of end-to-end fault tolerance has focused on two areas: (1) improving fault tolerance in various software currently available and widely used throughout the HEC domain and (2) using fault information exchange and coordination to achieve holistic, systemwide fault tolerance and understanding how to design and implement interfaces for integrating fault tolerance features for multiple layers of the software stack—from the application, math libraries, and programming language runtime to other common system software such as jobs schedulers, resource managers, and monitoring tools.

  20. Bill Carlson IDA Center for Computing Sciences Making High Performance

    U.S. Department of Energy (DOE) - all webpages (Extended Search)


  1. Computational Proteomics: High-throughput Analysis for Systems Biology

    SciTech Connect

    Cannon, William R.; Webb-Robertson, Bobbie-Jo M.

    2007-01-03

    High-throughput (HTP) proteomics is a rapidly developing field that offers the global profiling of proteins from a biological system. The HTP technological advances are fueling a revolution in biology, enabling analyses at the scales of entire systems (e.g., whole cells, tumors, or environmental communities). However, simply identifying the proteins in a cell is insufficient for understanding the underlying complexity and operating mechanisms of the overall system. Systems level investigations are relying more and more on computational analyses, especially in the field of proteomics generating large-scale global data.

  2. High Fidelity Evaluation of Tidal Turbine Performance for Industry...

    U.S. Department of Energy (DOE) - all webpages (Extended Search)

    High Fidelity Evaluation of Tidal Turbine Performance for Industry Partner - Sandia Energy ...

  3. Enhanced High and Low Temperature Performance of NOx Reduction...

    Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)

    Enhanced High and Low Temperature Performance of NOx Reduction Materials. 2013 DOE Hydrogen and Fuel Cells Program and ...

  4. High performance Zintl phase TE materials with embedded nanoparticles...

    Energy.gov [DOE] (indexed site)

    Performance of Zintl-phase thermoelectric materials with embedded particles is evaluated ... Partnership: High Performance Thermoelectric Waste Heat Recovery System Based on ...

  5. Project Profile: Development and Performance Evaluation of High...

    Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)

    Project Profile: Development and Performance Evaluation of High Temperature Concrete for Thermal Energy Storage for Solar Power Generation ...

  6. Computing Sciences Staff Help East Bay High Schoolers Upgrade...

    U.S. Department of Energy (DOE) - all webpages (Extended Search)

    To help students from underrepresented groups learn about careers in a variety of IT fields, the Laney College Computer Information Systems Department offered its Upgrade: Computer Science Program. ...

  7. Prospects for Accelerated Development of High Performance Structural Materials

    SciTech Connect

    Zinkle, Steven J; Ghoniem, Nasr M.

    2011-01-01

    We present an overview of key aspects for development of steels for fission and fusion energy applications, by linking material fabrication to thermo-mechanical properties through a physical understanding of microstructure evolution. Numerous design constraints (e.g. reduced activation, low ductile-brittle transition temperature, low neutron-induced swelling, good creep resistance, and weldability) need to be considered, which in turn can be controlled through material composition and processing techniques. Recent progress in the development of high-performance steels for fossil and fusion energy systems is summarized, along with progress in multiscale modeling of mechanical behavior in metals. Prospects for future design of optimum structural steels in nuclear applications by utilization of the hierarchy of multiscale experimental and computational strategies are briefly described.

  8. Bedford Farmhouse High Performance Retrofit Prototype

    SciTech Connect

    2010-04-26

    In this case study, Building Science Corporation partnered with Habitat for Humanity of Greater Lowell on a retrofit of a mid-19th century farmhouse into affordable housing meeting Building America performance standards.

  9. High Performance Electrolyzers for Hybrid Thermochemical Cycles

    SciTech Connect

    Dr. John W. Weidner

    2009-05-10

    Extensive electrolyzer testing was performed at the University of South Carolina (USC). Emphasis was given to understanding water transport under various operating (i.e., temperature, membrane pressure differential and current density) and design (i.e., membrane thickness) conditions when it became apparent that water transport plays a deciding role in cell voltage. A mathematical model was developed to further understand the mechanisms of water and SO2 transport, and to predict the effect of operating and design parameters on electrolyzer performance.

  10. High Performance Storage System Scalability: Architecture, Implementation, and Experience

    SciTech Connect

    Watson, R W

    2005-01-05

    The High Performance Storage System (HPSS) provides scalable hierarchical storage management (HSM), archive, and file system services. Its design, implementation and current dominant use are focused on HSM and archive services. It is also a general-purpose, global, shared, parallel file system, potentially useful in other application domains. When HPSS design and implementation began over a decade ago, scientific computing power and storage capabilities at a site, such as a DOE national laboratory, were measured in a few 10s of gigaops, data archived in HSMs in a few 10s of terabytes at most, data throughput rates to an HSM in a few megabytes/s, and daily throughput with the HSM in a few gigabytes/day. At that time, the DOE national laboratories and IBM HPSS design team recognized that we were headed for a data storage explosion driven by computing power rising to teraops/petaops, requiring data stored in HSMs to rise to petabytes and beyond, data transfer rates with the HSM to rise to gigabytes/s and higher, and daily throughput with a HSM in 10s of terabytes/day. This paper discusses HPSS architectural, implementation and deployment experiences that contributed to its success in meeting the above orders-of-magnitude scaling targets. We also discuss areas that need additional attention as we continue significant scaling into the future.
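
    A quick back-of-envelope check on the scaling targets quoted above: the sustained bandwidth implied by a given daily volume moved through the HSM. This is plain arithmetic with no HPSS-specific assumptions:

```python
def sustained_mb_per_s(tb_per_day):
    """Sustained transfer rate (MB/s) needed to move a given
    volume (TB) through the HSM in one 86,400-second day."""
    return tb_per_day * 1e12 / 86400 / 1e6

for tb in (0.005, 1.0, 10.0, 50.0):  # a few GB/day up to 10s of TB/day
    print(f"{tb:6.3f} TB/day -> {sustained_mb_per_s(tb):8.1f} MB/s")
```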

  11. Systems, methods and computer-readable media to model kinetic performance of rechargeable electrochemical devices

    DOEpatents

    Gering, Kevin L.

    2013-01-01

    A system includes an electrochemical cell, monitoring hardware, and a computing system. The monitoring hardware samples performance characteristics of the electrochemical cell. The computing system determines cell information from the performance characteristics. The computing system also analyzes the cell information of the electrochemical cell with a Butler-Volmer (BV) expression modified to determine exchange current density of the electrochemical cell by including kinetic performance information related to pulse-time dependence, electrode surface availability, or a combination thereof. A set of sigmoid-based expressions may be included with the modified-BV expression to determine kinetic performance as a function of pulse time. The determined exchange current density may be used with the modified-BV expression, with or without the sigmoid expressions, to analyze other characteristics of the electrochemical cell. Model parameters can be defined in terms of cell aging, making the overall kinetics model amenable to predictive estimates of cell kinetic performance along the aging timeline.
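
    For context, the classical Butler-Volmer relation that the patented analysis modifies (the abstract does not give the exact modified form, so only the standard expression is reproduced here) is:

```latex
i \;=\; i_0\left[\exp\!\left(\frac{\alpha_a F \eta}{R T}\right)
        \;-\; \exp\!\left(-\frac{\alpha_c F \eta}{R T}\right)\right]
```

    where i is the current density, i_0 the exchange current density the system determines, α_a and α_c the anodic and cathodic transfer coefficients, η the overpotential, F Faraday's constant, R the gas constant, and T the temperature.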

  12. Seeking Information on Design and Construction of High-Performance...

    Energy Saver

    Seeking Information on Design and Construction of High-Performance Tenant Spaces. August 3, 2015. VIEW THE ...

  13. High Performance Home Building Guide for Habitat for Humanity Affiliates

    SciTech Connect

    Lindsey Marburger

    2010-10-01

    This guide covers basic principles of high performance Habitat construction, steps to achieving high performance Habitat construction, resources to help improve building practices, materials, etc., and affiliate profiles and recommendations.

  14. ARIES: Building America, High Performance Factory Built Housing...

    Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)

    ARIES: Building America, High Performance Factory Built Housing - 2015 Peer Review. Presenter: Jordan Dentz, Levy Partnership. View the Presentation ...

  15. Metaproteomics: Harnessing the power of high performance mass...

    Office of Scientific and Technical Information (OSTI)

    Journal Article: Metaproteomics: Harnessing the power of high performance mass ...

  16. DISCRETE EVENT SIMULATION OF OPTICAL SWITCH MATRIX PERFORMANCE IN COMPUTER NETWORKS

    SciTech Connect

    Imam, Neena; Poole, Stephen W

    2013-01-01

    In this paper, we present application of a Discrete Event Simulator (DES) for performance modeling of optical switching devices in computer networks. Network simulators are valuable tools in situations where one cannot investigate the system directly. This situation may arise if the system under study does not exist yet or the cost of studying the system directly is prohibitive. Most available network simulators are based on the paradigm of discrete-event-based simulation. As computer networks become increasingly larger and more complex, sophisticated DES tool chains have become available for both commercial and academic research. Some well-known simulators are NS2, NS3, OPNET, and OMNEST. For this research, we have applied OMNEST for the purpose of simulating multi-wavelength performance of optical switch matrices in computer interconnection networks. Our results suggest that the application of DES to computer interconnection networks provides valuable insight in device performance and aids in topology and system optimization.

  17. A High-Performance PHEV Battery Pack | Department of Energy

    Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)

    A High-Performance PHEV Battery Pack. 2012 DOE Hydrogen and Fuel Cells Program and Vehicle Technologies Program Annual Merit Review and Peer Evaluation Meeting. es002_alamgir_2012_p.pdf (1.57 MB). More Documents & Publications: Vehicle Technologies Office Merit Review 2013: A High-Performance PHEV Battery Pack; A High-Performance PHEV Battery Pack; Vehicle Technologies Office Merit Review 2016: A 12V Start-Stop Li Polymer Battery Pack

  18. ARIES: Building America, High Performance Factory Built Housing - 2015 Peer

    Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)

    ARIES: Building America, High Performance Factory Built Housing - 2015 Peer Review. Presenter: Jordan Dentz, Levy Partnership. View the Presentation (3.34 MB). More Documents & Publications: ARIES lab houses (photo courtesy of The Levy Partnership, Inc.); Integrated Design: A High-Performance Solution ...

  19. Enhanced High Temperature Performance of NOx Storage/Reduction (NSR)

    Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)

    Enhanced High Temperature Performance of NOx Storage/Reduction (NSR) Materials. 2012 DOE Hydrogen and Fuel Cells Program and Vehicle Technologies Program Annual Merit Review and Peer Evaluation Meeting. ace026_peden_2012_o.pdf (2.2 MB). More Documents & Publications: Enhanced High and Low Temperature Performance of NOx Reduction Materials; Enhanced High Temperature Performance of ...

  20. High Performance Sustainable Building Design RM | Department of Energy

    Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)

    The High Performance Sustainable Building Design (HPSBD) Review Module (RM) is a tool that assists DOE federal project review teams in evaluating technical sufficiency for projects that may incorporate the HPSBD Guiding Principles at CD-1 through CD-4, for both new construction and existing buildings. High Performance Sustainable Building Design RM (2.35 MB). More Documents & Publications ...

  1. NREL: Photovoltaics Research - High-Performance Photovoltaics

    U.S. Department of Energy (DOE) - all webpages (Extended Search)

    The dual-axis tracking modules use small mirrors to focus sunlight on high-efficiency multijunction cells... NREL is a national laboratory of the U.S. Department of Energy, Office of ...

  2. LBNL: High Performance Active Perimeter Building Systems - 2015 Peer Review

    Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)

    | Department of Energy High Performance Active Perimeter Building Systems - 2015 Peer Review LBNL: High Performance Active Perimeter Building Systems - 2015 Peer Review Presenter: Eleanor Lee, LBNL View the Presentation LBNL: High Performance Active Perimeter Building Systems - 2015 Peer Review (2 MB) More Documents & Publications FLEXLAB Connected Buildings Interoperability Vision Webinar 2015 DOE CONNECTED LIGHTING SYSTEMS PRESENTATIONS

  3. A system analysis computer model for the High Flux Isotope Reactor (HFIRSYS Version 1)

    SciTech Connect

    Sozer, M.C.

    1992-04-01

    A system transient analysis computer model (HFIRSYS) has been developed for the analysis of small-break loss of coolant accidents (LOCA) and operational transients. The computer model is based on the Advanced Continuous Simulation Language (ACSL), which produces the FORTRAN code automatically, provides integration routines such as Gear's stiff algorithm, and offers users numerous practical tools for generating eigenvalues, producing debug outputs, graphics capabilities, etc. The HFIRSYS computer code is structured in the form of the Modular Modeling System (MMS) code. Component modules from MMS and in-house developed modules were both used to configure HFIRSYS. A description of the High Flux Isotope Reactor, the theoretical bases for the modeled components of the system, and the verification and validation efforts are reported. The computer model performs satisfactorily, including cases in which the effects of structural elasticity on the system pressure are significant; however, its capabilities are limited to single-phase flow. Because of the modular structure, new component models from the Modular Modeling System can easily be added to HFIRSYS to analyze their effects on the system's behavior. The computer model is a versatile tool for studying various system transients. The intent of this report is not to serve as a user's manual, but to provide the theoretical bases and basic information about the computer model and the reactor.
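
    ACSL's stiff integrators were Gear-type backward differentiation formula (BDF) methods. A rough feel for why that matters in thermal-hydraulic transients can be had from a toy two-state model with fast and slow time constants (all parameters hypothetical, not HFIR values), integrated with SciPy's BDF solver:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative sketch only: a toy stiff two-state fuel/coolant thermal
# transient integrated with a BDF (Gear-type) method, the same class of
# stiff solver that ACSL supplied for HFIRSYS. All parameters are made up.
def rhs(t, y):
    T_fuel, T_cool = y
    q = 100.0                    # heat source (arbitrary units)
    k_fc, k_loss = 50.0, 0.5     # fast fuel->coolant, slow coolant->sink coupling
    dT_fuel = q - k_fc * (T_fuel - T_cool)
    dT_cool = k_fc * (T_fuel - T_cool) - k_loss * T_cool
    return [dT_fuel, dT_cool]

sol = solve_ivp(rhs, (0.0, 100.0), [300.0, 300.0], method="BDF", rtol=1e-8)
print("final temperatures:", sol.y[:, -1])
```

    The widely separated rate constants (50.0 vs. 0.5) would force an explicit integrator to take tiny steps over the whole interval; the BDF method steps through the slow phase cheaply.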

  4. High Performance Green LEDs by Homoepitaxial

    SciTech Connect

    Wetzel, Christian; Schubert, E Fred

    2009-11-22

    This work's objective was the development of processes to double or triple the light output power from green and deep green (525 - 555 nm) AlGaInN light emitting diode (LED) dies within 3 years in reference to the Lumileds Luxeon II. The project paid particular effort to all aspects of the internal generation efficiency of light. LEDs in this spectral region show the highest potential for significant performance boosts and enable the realization of phosphor-free white LEDs comprised by red-green-blue LED modules. Such modules will perform at and outperform the efficacy target projections for white-light LED systems in the Department of Energy's accelerated roadmap of the SSL initiative.

  5. Systems, methods and computer-readable media for modeling cell performance fade of rechargeable electrochemical devices

    DOEpatents

    Gering, Kevin L

    2013-08-27

    A system includes an electrochemical cell, monitoring hardware, and a computing system. The monitoring hardware periodically samples performance characteristics of the electrochemical cell. The computing system determines cell information from the performance characteristics of the electrochemical cell. The computing system also develops a mechanistic level model of the electrochemical cell to determine performance fade characteristics of the electrochemical cell, and analyzes the mechanistic level model to estimate performance fade characteristics over aging of a similar electrochemical cell. The mechanistic level model uses first constant-current pulses applied to the electrochemical cell at a first aging period and at three or more current values bracketing a first exchange current density. The mechanistic level model also is based on second constant-current pulses applied to the electrochemical cell at a second aging period and at three or more current values bracketing a second exchange current density.
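
    For small overpotentials the Butler-Volmer relation linearizes to eta ≈ RT·i/(nF·i0), so pulses at currents bracketing the exchange current density let one estimate i0 from the fitted slope. A hedged sketch of that one step (illustrative numbers, not the patented model's data or procedure):

```python
import numpy as np

# Hedged sketch, not the patented model: estimate an exchange current
# density i0 from small constant-current pulses by fitting the linearized
# Butler-Volmer relation  eta ~= (R*T / (n*F*i0)) * i.  All data are made up.
R, T, n, F = 8.314, 298.15, 1, 96485.0

i_pulse = np.array([-0.02, -0.01, 0.01, 0.02])      # A/cm^2, bracketing i0
eta = np.array([-0.0105, -0.0052, 0.0051, 0.0104])  # V, measured overpotentials

slope = np.polyfit(i_pulse, eta, 1)[0]   # charge-transfer resistance, ohm*cm^2
i0 = R * T / (n * F * slope)
print(f"estimated exchange current density: {i0:.4e} A/cm^2")
```

    Repeating the fit at successive aging periods, as the abstract describes, tracks how i0 degrades and hence how performance fades.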

  6. High-performance commercial building facades

    SciTech Connect

    Lee, Eleanor; Selkowitz, Stephen; Bazjanac, Vladimir; Inkarojrit, Vorapat; Kohler, Christian

    2002-06-01

    This study focuses on advanced building facades that use daylighting, sun control, ventilation systems, and dynamic systems. A quick perusal of the leading architectural magazines, or a discussion in most architectural firms today, will eventually lead to mention of some of the innovative new buildings that are being constructed with all-glass facades. Most of these buildings are appearing in Europe, although interestingly U.S. A/E firms often have a leading role in their design. This "emerging technology" of heavily glazed facades is often associated with buildings whose design goals include energy efficiency, sustainability, and a "green" image. While there are a number of new books on the subject with impressive photos and drawings, there is little critical examination of the actual performance of such buildings, and a generally poor understanding as to whether they achieve their performance goals, or even what those goals might be. Even if the building "works," it is often dangerous to take a design solution from one climate and location and transport it to a new one without a good causal understanding of how the systems work. In addition, there is a wide range of existing and emerging glazing and fenestration technologies in use in these buildings, many of which break new ground with respect to innovative structural use of glass. It is unclear how well many of these designs would work as currently formulated in California locations dominated by intense sunlight and seismic events. Finally, the costs of these systems are higher than those of normal facades, but claims of energy and productivity savings are used to justify some of them. Once again these claims, while plausible, are largely unsupported. There have been major advances in glazing and facade technology over the past 30 years and we expect to see continued innovation and product development. It is critical in this process to be able to understand which performance goals are being met by current

  7. Project Profile: High-Performance Nanostructured Coating

    Office of Energy Efficiency and Renewable Energy (EERE)

    The University of California San Diego, under the 2012 SunShot Concentrating Solar Power (CSP) R&D funding opportunity announcement (FOA), is developing a new low-cost and scalable process for fabricating spectrally selective coatings (SSCs) to be used in solar absorbers for high-temperature CSP systems.

  8. Large-scale optimization-based non-negative computational framework for diffusion equations: Parallel implementation and performance studies

    DOE PAGES [OSTI]

    Chang, Justin; Karra, Satish; Nakshatrala, Kalyana B.

    2016-07-26

    It is well-known that the standard Galerkin formulation, which is often the formulation of choice under the finite element method for solving self-adjoint diffusion equations, does not satisfy maximum principles or the non-negative constraint for anisotropic diffusion equations. Recently, optimization-based methodologies that satisfy maximum principles and the non-negative constraint for steady-state and transient diffusion-type equations have been proposed. To date, these methodologies have been tested only on small-scale academic problems. The purpose of this paper is to systematically study the performance of the non-negative methodology in the context of high performance computing (HPC). PETSc and TAO libraries are, respectively, used for the parallel environment and optimization solvers. For large-scale problems, it is important for computational scientists to understand the computational performance of current algorithms available in these scientific libraries. The numerical experiments are conducted on state-of-the-art HPC systems, and a single-core performance model is used to better characterize the efficiency of the solvers. Furthermore, our studies indicate that the proposed non-negative computational framework for diffusion-type equations exhibits excellent strong scaling for real-world large-scale problems.
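
    The essence of the optimization-based fix can be shown on a toy problem: instead of solving the discrete system K u = f directly, one solves the same system as a bound-constrained problem with u >= 0. A minimal sketch (plain SciPy rather than the paper's PETSc/TAO framework; the 1D mesh and source term are made up):

```python
import numpy as np
from scipy.optimize import lsq_linear

# Toy sketch of the idea (not the PETSc/TAO framework): enforce the
# non-negative constraint on a discrete diffusion solve by recasting
# K u = f as a bound-constrained least-squares problem with u >= 0.
N, h = 20, 1.0 / 21
K = (np.diag(2.0 * np.ones(N)) - np.diag(np.ones(N - 1), 1)
     - np.diag(np.ones(N - 1), -1)) / h**2        # 1D Laplacian stiffness matrix
f = np.sin(np.linspace(0, 3 * np.pi, N)) / h      # sign-changing source

u_plain = np.linalg.solve(K, f)                    # may dip below zero
u_nneg = lsq_linear(K, f, bounds=(0.0, np.inf)).x  # optimization-based solve

print("min of unconstrained solution:", u_plain.min())
print("min of constrained solution:  ", u_nneg.min())
```

    The paper's contribution is doing this at scale, where the cost and scaling of the bound-constrained optimization solver, not the toy formulation, become the central questions.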

  9. High Thermoelectric Performance in Copper Telluride

    DOE PAGES [OSTI]

    He, Ying; Zhang, Tiansong; Shi, Xun; Wei, Su-Huai; Chen, Lidong

    2015-06-21

    Recently, Cu2-δS and Cu2-δSe were reported to have an ultralow thermal conductivity and high thermoelectric figure of merit zT. Thus, as a member of the copper chalcogenide group, Cu2-δTe is expected to possess superior zTs because Te is heavier and less ionic. However, the zT value is low in Cu2Te sintered using spark plasma sintering, which is typically used to fabricate high-density bulk samples. In addition, the extra sintering processes may change the samples' compositions as well as their physical properties, especially for Cu2Te, which has many stable and meta-stable phases as well as weaker ionic bonding between Cu and Te as compared with Cu2S and Cu2Se. In this study, high-density Cu2Te samples were obtained using direct annealing without a sintering process. In the absence of sintering processes, the samples' compositions could be well controlled, leading to substantially reduced carrier concentrations that are close to the optimal value. The electrical transports were optimized, and the thermal conductivity was considerably reduced. The zT values were significantly improved, to 1.1 at 1000 K, a nearly 100% improvement. Furthermore, this method saves substantial time and cost during the sample's growth. The study demonstrates that Cu2-δX (X = S, Se, and Te) is the only existing system to show high zTs in the series of compounds composed of three sequential primary group elements.
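
    For reference, the figure of merit quoted here is zT = S²σT/κ, with Seebeck coefficient S, electrical conductivity σ, absolute temperature T, and thermal conductivity κ. A quick arithmetic check with assumed transport values (not the paper's measured data) shows how a zT near 1.1 arises:

```python
# Worked evaluation of zT = S^2 * sigma * T / kappa.
# Illustrative numbers only, not values reported in the paper.
S = 200e-6       # Seebeck coefficient, V/K
sigma = 5e4      # electrical conductivity, S/m
kappa = 1.8      # thermal conductivity, W/(m*K)
T = 1000.0       # temperature, K

zT = S**2 * sigma * T / kappa
print(f"zT = {zT:.2f}")   # ~1.11 for these assumed values
```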

  10. High Thermoelectric Performance in Copper Telluride

    SciTech Connect

    He, Ying; Zhang, Tiansong; Shi, Xun; Wei, Su-Huai; Chen, Lidong

    2015-06-21

    Recently, Cu2-δS and Cu2-δSe were reported to have an ultralow thermal conductivity and high thermoelectric figure of merit zT. Thus, as a member of the copper chalcogenide group, Cu2-δTe is expected to possess superior zTs because Te is heavier and less ionic. However, the zT value is low in Cu2Te sintered using spark plasma sintering, which is typically used to fabricate high-density bulk samples. In addition, the extra sintering processes may change the samples' compositions as well as their physical properties, especially for Cu2Te, which has many stable and meta-stable phases as well as weaker ionic bonding between Cu and Te as compared with Cu2S and Cu2Se. In this study, high-density Cu2Te samples were obtained using direct annealing without a sintering process. In the absence of sintering processes, the samples' compositions could be well controlled, leading to substantially reduced carrier concentrations that are close to the optimal value. The electrical transports were optimized, and the thermal conductivity was considerably reduced. The zT values were significantly improved, to 1.1 at 1000 K, a nearly 100% improvement. Furthermore, this method saves substantial time and cost during the sample's growth. The study demonstrates that Cu2-δX (X = S, Se, and Te) is the only existing system to show high zTs in the series of compounds composed of three sequential primary group elements.

  11. Memorandum of American High-Performance Buildings Coalition DOE Meeting

    Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)

    August 19, 2013 | Department of Energy Memorandum of American High-Performance Buildings Coalition DOE Meeting August 19, 2013 Memorandum of American High-Performance Buildings Coalition DOE Meeting August 19, 2013 This memorandum is intended to provide a summary of a meeting between the American High-Performance Buildings Coalition (AHBPC), a coalition of industry organizations committed to promoting performance-based energy efficiency and sustainable building standards developed through

  12. Global optimization algorithms to compute thermodynamic equilibria in large complex systems with performance considerations

    DOE PAGES [OSTI]

    Piro, M. H. A.; Simunovic, S.

    2016-03-17

    Several global optimization methods are reviewed that attempt to ensure that the integral Gibbs energy of a closed isothermal isobaric system is a global minimum to satisfy the necessary and sufficient conditions for thermodynamic equilibrium. In particular, the integral Gibbs energy function of a multicomponent system containing non-ideal phases may be highly non-linear and non-convex, which makes finding a global minimum a challenge. Consequently, a poor numerical approach may lead one to the false belief of equilibrium. Furthermore, confirming that one reaches a global minimum and that this is achieved with satisfactory computational performance becomes increasingly more challenging in systems containing many chemical elements and a correspondingly large number of species and phases. Several numerical methods that have been used for this specific purpose are reviewed with a benchmark study of three of the more promising methods using five case studies of varying complexity. A modification of the conventional Branch and Bound method is presented that is well suited to a wide array of thermodynamic applications, including complex phases with many constituents and sublattices, and ionic phases that must adhere to charge neutrality constraints. Also, a novel method is presented that efficiently solves the system of linear equations by exploiting the unique structure of the Hessian matrix, which reduces the calculation from an O(N³) operation to an O(N) operation. As a result, this combined approach demonstrates efficiency, reliability and capabilities that are favorable for integration of thermodynamic computations into multi-physics codes with inherent performance considerations.
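
    The O(N³)-to-O(N) reduction comes from exploiting structure rather than factoring a dense Hessian. A generic illustration of the idea (the paper exploits its own specific Hessian structure; here a diagonal-plus-rank-one matrix is solved in O(N) with the Sherman-Morrison identity):

```python
import numpy as np

# Hedged illustration of exploiting Hessian structure: if H = D + u u^T with
# D diagonal, then H x = b can be solved in O(N) via the Sherman-Morrison
# formula instead of an O(N^3) dense factorization. (The paper uses its own
# specific structure; this is only the generic principle.)
rng = np.random.default_rng(0)
N = 1000
d = rng.uniform(1.0, 2.0, N)      # diagonal of D
u = rng.standard_normal(N)
b = rng.standard_normal(N)

# O(N) structured solve: x = D^-1 b - (u . D^-1 b) / (1 + u . D^-1 u) * D^-1 u
Dinv_b, Dinv_u = b / d, u / d
x = Dinv_b - (u @ Dinv_b) / (1.0 + u @ Dinv_u) * Dinv_u

# Verify against the dense O(N^3) solve.
H = np.diag(d) + np.outer(u, u)
print("max error vs dense solve:", np.abs(x - np.linalg.solve(H, b)).max())
```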

  13. Performing three-dimensional neutral particle transport calculations on tera scale computers

    SciTech Connect

    Woodward, C S; Brown, P N; Chang, B; Dorr, M R; Hanebutte, U R

    1999-01-12

    A scalable, parallel code system to perform neutral particle transport calculations in three dimensions is presented. To utilize the hyper-cluster architecture of emerging terascale computers, the parallel code combines MPI message passing with complementary parallel programming paradigms. The code's capabilities are demonstrated by a shielding calculation containing over 14 billion unknowns. This calculation was accomplished on the IBM SP "ASCI Blue-Pacific" computer located at Lawrence Livermore National Laboratory (LLNL).
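
    The message-passing pattern underlying such domain-decomposed transport codes can be sketched in a few lines. A minimal, hypothetical halo exchange using mpi4py (not the LLNL code itself; the slab layout and names are invented for illustration):

```python
from mpi4py import MPI

# Minimal message-passing sketch (illustrative only): each rank owns a slab
# of cells and exchanges boundary values with its neighbors, the basic
# communication pattern behind domain-decomposed transport sweeps.
comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

local = [float(rank)] * 10                    # this rank's slab of unknowns
left = rank - 1 if rank > 0 else MPI.PROC_NULL
right = rank + 1 if rank < size - 1 else MPI.PROC_NULL

# Exchange halo cells with both neighbors (sendrecv avoids deadlock).
recv_left = comm.sendrecv(local[0], dest=left, source=left)
recv_right = comm.sendrecv(local[-1], dest=right, source=right)
print(f"rank {rank}: halos = ({recv_left}, {recv_right})")
```

    Run with, e.g., `mpiexec -n 4 python halo_demo.py` (the filename is illustrative).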

  14. A high performance field-reversed configuration

    SciTech Connect

    Binderbauer, M. W.; Tajima, T.; Steinhauer, L. C.; Garate, E.; Tuszewski, M.; Smirnov, A.; Gota, H.; Barnes, D.; Deng, B. H.; Thompson, M. C.; Trask, E.; Yang, X.; Putvinski, S.; Rostoker, N.; Andow, R.; Aefsky, S.; Bolte, N.; Bui, D. Q.; Ceccherini, F.; Clary, R.; and others

    2015-05-15

    Conventional field-reversed configurations (FRCs), high-beta, prolate compact toroids embedded in poloidal magnetic fields, face notable stability and confinement concerns. These can be ameliorated by various control techniques, such as introducing a significant fast ion population. Indeed, adding neutral beam injection into the FRC over the past half-decade has contributed to striking improvements in confinement and stability. Further, the addition of electrically biased plasma guns at the ends, magnetic end plugs, and advanced surface conditioning led to dramatic reductions in turbulence-driven losses and greatly improved stability. Together, these enabled the build-up of a well-confined and dominant fast-ion population. Under such conditions, highly reproducible, macroscopically stable hot FRCs (with total plasma temperature of ∼1 keV) with record lifetimes were achieved. These accomplishments point to the prospect of advanced, beam-driven FRCs as an intriguing path toward fusion reactors. This paper reviews key results and presents context for further interpretation.

  15. computers

    National Nuclear Security Administration (NNSA)

    Each successive generation of computing system has provided greater computing power and energy efficiency.

    CTS-1 clusters will support NNSA's Life Extension Program and...

  16. Computing Sciences Staff Help East Bay High Schoolers Upgrade their Summer

    U.S. Department of Energy (DOE) - all webpages (Extended Search)

    Computing Sciences Staff Help East Bay High Schoolers Upgrade their Summer Computing Sciences Staff Help East Bay High Schoolers Upgrade their Summer August 6, 2015 Jon Bashor, jbashor@lbl.gov, +1 510 486 5849 To help prepare students from underrepresented groups learn about careers in a variety of IT fields, the Laney College Computer Information Systems Department offered its Upgrade: Computer Science Program. Thirty-eight students from 10 East Bay high schools registered for the eight-week

  17. NextGen Advanced Framing for High Performance Homes

    Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)

    NextGen Advanced Framing for High Performance Homes: Integrated System Solutions. Vladimir Kochkin, Division Director, Applied Engineering, Home Innovation Research Labs. A high performance home is efficient, comfortable, and durable, with structural performance, moisture performance, other attributes (UV, etc.), and cost-effectiveness as a system. A systems approach: don't simply add the new to the old; find efficiencies in the new system; offset cost increases; combine

  18. Development of Alternative and Durable High Performance Cathode Supports

    Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)

    for PEM Fuel Cells | Department of Energy Alternative and Durable High Performance Cathode Supports for PEM Fuel Cells Development of Alternative and Durable High Performance Cathode Supports for PEM Fuel Cells Part of a $100 million fuel cell award announced by DOE Secretary Bodman on Oct. 25, 2006. 3_pnnl.pdf (21.99 KB) More Documents & Publications Development of Alternative and Durable High Performance Cathode Supports for PEM Fuel Cells Fuel Cell Kickoff Meeting Agenda 2015 Pathways

  19. NRC Leadership Expectations and Practices for Sustaining a High Performing

    Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)

    Organization | Department of Energy NRC Leadership Expectations and Practices for Sustaining a High Performing Organization NRC Leadership Expectations and Practices for Sustaining a High Performing Organization May 16, 2012 Presenter: William C. Ostendorff, NRC Commissioner Topics Covered: NRC Mission Safety Culture NRC Oversight NRC Inspection Program Technical Qualification Continuous Learning NRC Leadership Expectations and Practices for Sustaining a High Performing Organization (4.15

  20. Funding Opportunity: Building America High Performance Housing Innovation |

    Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)

    Department of Energy Opportunity: Building America High Performance Housing Innovation Funding Opportunity: Building America High Performance Housing Innovation November 19, 2015 - 11:51am Addthis The Building Technologies Office (BTO) Residential Buildings Integration Program has announced the availability of $5.5 million for Funding Opportunity Announcement (FOA) DE-FOA-0001395, "Building America Industry Partnerships for High Performance Housing Innovation." DOE seeks to fund up

  1. High Performance Leasing Strategies for State and Local Governments |

    Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)

    Department of Energy High Performance Leasing Strategies for State and Local Governments High Performance Leasing Strategies for State and Local Governments Presentation for the SEE Action Series: High Performance Leasing Strategies for State and Local Governments webinar, presented on January 26, 2013 as part of the U.S. Department of Energy's Technical Assistance Program (TAP). Presentation (5.98 MB) Transcript (93 KB) More Documents & Publications

  2. Building America Webinar: High-Performance Enclosure Strategies, Part I:

    Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)

    Unvented Roof Systems and Innovative Advanced Framing Strategies | Department of Energy High-Performance Enclosure Strategies, Part I: Unvented Roof Systems and Innovative Advanced Framing Strategies Building America Webinar: High-Performance Enclosure Strategies, Part I: Unvented Roof Systems and Innovative Advanced Framing Strategies This webinar, held on February 12, 2015, focused on methods to design and build roof and wall systems for high performance homes that optimize energy and

  3. Building America Webinar: Ventilation Strategies for High Performance

    Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)

    Homes, Part I: Application-Specific Ventilation Guidelines | Department of Energy Ventilation Strategies for High Performance Homes, Part I: Application-Specific Ventilation Guidelines Building America Webinar: Ventilation Strategies for High Performance Homes, Part I: Application-Specific Ventilation Guidelines This webinar, held on Aug. 26, 2015, covered what makes high-performance homes different from a ventilation perspective and how they might need to be treated differently than

  4. Building America's Top Innovations Advance High Performance Homes |

    Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)

    Department of Energy Building America's Top Innovations Advance High Performance Homes Building America's Top Innovations Advance High Performance Homes Innovations sponsored by the U.S. Department of Energy's (DOE) Building America program and its teams of building science experts continue to have a transforming impact, leading our nation's home building industry to high-performance homes. Building America researchers have worked directly with more than 300 U.S. production home builders and

  5. Reduced Call-Backs with High Performance Production Builders - Building

    Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)

    America Top Innovation | Department of Energy Reduced Call-Backs with High Performance Production Builders - Building America Top Innovation Reduced Call-Backs with High Performance Production Builders - Building America Top Innovation Photo of a home with a fence. Engaging production builders to build high-performance homes is key to successfully transforming the market. For this Top Innovation, Building America has effectively addressed this challenge by demonstrating the compelling

  6. Text-Alternative Version of High Performance Space Conditioning Systems:

    Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)

    Part II | Department of Energy II Text-Alternative Version of High Performance Space Conditioning Systems: Part II High Performance Space Conditioning Systems: Part II November 18, 2014 William Zoeller, Stephen Winter Associates Dave Mallay, Home Innovation Research Labs Jordan Dentz, The Levy Partnership Francis Conlin, High Performance Building Solutions Hello everyone! I am Gail Werren with the National Renewable Energy Laboratory, and I'd like to welcome you to today's webinar hosted by

  7. High Performance Builder Spotlight: Green Coast Enterprises - New Orleans,

    Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)

    Louisiana | Department of Energy High Performance Builder Spotlight: Green Coast Enterprises - New Orleans, Louisiana High Performance Builder Spotlight: Green Coast Enterprises - New Orleans, Louisiana This four-page case study describes Green Coast Enterprises' efforts to rebuild hurricane-ravaged New Orleans through Project Home Again. green_coast_enterprises.pdf (3 MB) More Documents & Publications High Performance Builder Spotlight: Green Coast Enterprises - New Orleans, Louisiana

  8. High Performance Without Increased Cost: Urbane Homes, Louisville, KY -

    Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)

    Building America Top Innovation | Department of Energy High Performance Without Increased Cost: Urbane Homes, Louisville, KY - Building America Top Innovation High Performance Without Increased Cost: Urbane Homes, Louisville, KY - Building America Top Innovation Photo of a Housing Award logo with a home. This Top Innovation highlights Building America field projects that demonstrated minimal or cost-neutral impacts for high-performance homes and that have significantly influenced the housing

  9. High-Performance Affordable Housing with Habitat for Humanity - Building

    Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)

    America Top Innovation | Department of Energy High-Performance Affordable Housing with Habitat for Humanity - Building America Top Innovation High-Performance Affordable Housing with Habitat for Humanity - Building America Top Innovation Photo of people building ENERGY STAR homes. High-performance homes provide compelling benefits for all homeowners, but no sector is better served than affordable housing. These are the homeowners that most need the reduced costs of ownership and maintenance

  10. The Gadonanotubes: Structural Origin of their High-Performance...

    Office of Scientific and Technical Information (OSTI)

    Title: The Gadonanotubes: Structural Origin of their High-Performance MRI Contrast Agent Behavior Authors: Ma, Qing; Jebb, Meghan; Tweedle, Michael F.; Wilson, Lon J. (NWU) ...

  11. Building America Webinar: High-Performance Enclosure Strategies...

    Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)

    Advanced Framing Strategies Building America Webinar: High-Performance Enclosure Strategies, Part I: Unvented Roof Systems and Innovative Advanced Framing Strategies This ...

  12. Building America Webinar: Ventilation Strategies for High Performance...

    Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)

    Building America Webinar: High-Performance Enclosure Strategies, Part I: Unvented Roof Systems and Innovative Advanced Framing Strategies Building America Webinar: Retrofit ...

  13. Rethinking the idealized morphology in high-performance organic...

    U.S. Department of Energy (DOE) - all webpages (Extended Search)

    Rethinking the idealized morphology in high-performance organic photovoltaics December 9, 2011 Tweet EmailPrint Traditionally, organic photovoltaic (OPV) active layers are viewed...

  14. OLEDWORKS DEVELOPS INNOVATIVE HIGH-PERFORMANCE DEPOSITION TECHNOLOGY...

    Energy Saver

    high-performance deposition technology that addresses two major aspects of this manufacturing cost: the expense of organic materials per area of useable product, and the...

  15. Affordable High Performance in Production Homes: Artistic Homes...

    Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)

    extraordinary impact, demonstrating the mainstream builder's business case for adopting ... that demonstrate how high performance homes can be affordable for the mainstream market. ...

  16. 'Catch and Suppress' Control of Instabilities in High Performance...

    U.S. Department of Energy (DOE) - all webpages (Extended Search)

    'Catch and Suppress' Control of Instabilities in High Performance Fusion Plasmas Fusion Energy Sciences (FES) FES Home About Research Facilities Science Highlights Benefits of FES ...

  17. Technology Transfer Webinar on November 12: High-Performance...

    Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)

    Technology Transfer Webinar on November 12: High-Performance Hybrid Simulation/Measurement-Based Tools for Proactive Operator Decision-Support Technology Transfer Webinar on...

  18. Moderate Doping Leads to High Performance of Semiconductor/Insulator...

    Office of Scientific and Technical Information (OSTI)

    Title: Moderate Doping Leads to High Performance of Semiconductor/Insulator Polymer Blend Transistors Authors: Lu, Guanghao ; Blakesley, James ; Himmelberger, Scott ; Pingel, ...

  19. High Performance Without Increased Cost: Urbane Homes, Louisville...

    Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)

    In this profile, Urbane Homes of Louisville, KY, worked with Building America team National Association of Home Builders Research Center to build its first high performance home at ...

  20. DOE ZERH Webinar: High-Performance Home Sales Training, Part...

    Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)

    number of other green and high-performance home programs, these skills will be critical. ... DOE ZERH Webinar: Technical Resources for Marketing and Selling Zero Energy Ready Homes ...

  1. High-Performance Home Technologies: Solar Thermal & Photovoltaic...

    Energy.gov [DOE] (indexed site)

    High-Performance Home Technologies: Solar Thermal & Photovoltaic Systems (9.49 MB) More Documents & Publications Building America Whole-House Solutions for New Homes: John Wesley ...

  2. High-Performance Home Technologies: Solar Thermal & Photovoltaic...

    Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)

    Solar Thermal & Photovoltaic Systems; Volume 6 Building America Best Practices Series High-Performance Home Technologies: Solar Thermal & Photovoltaic Systems; Volume 6 ...

  3. Overcoming Processing Cost Barriers of High-Performance Lithium...

    Energy.gov [DOE] (indexed site)

    Lithium-Ion Battery Electrodes Vehicle Technologies Office Merit Review 2014: Overcoming Processing Cost Barriers of High-Performance Lithium-Ion Battery Electrodes ...

  4. High Performance Zintl Phase TE Materials with Embedded Particles...

    Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)

    Presents results from embedding nanoparticles in magnesium silicide alloy matrix ... Zintl Phase Materials with Embedded Nanoparticles High performance Zintl phase TE ...

  5. High-Performance Thermoelectric Devices Based on Abundant Silicide...

    Energy.gov [DOE] (indexed site)

    Development of high-performance thermoelectric devices for vehicle waste heat recovery will include fundamental research to use abundant promising low-cost thermoelectric ...

  6. Direct Probe Mounted High-Performance Amplifiers for Pulsed Measuremen...

    Office of Scientific and Technical Information (OSTI)

    Direct Probe Mounted High-Performance Amplifiers for Pulsed Measurement Citation Details ... Visit OSTI to utilize additional information resources in energy science and technology. A ...

  7. Direct Probe Mounted High-Performance Amplifiers for Pulsed Measuremen...

    Office of Scientific and Technical Information (OSTI)

    Direct Probe Mounted High-Performance Amplifiers for Pulsed Measurement Citation Details ... Country of Publication: United States Language: English Subject: Materials Science(36) ...

  8. Enhanced High Temperature Performance of NOx Storage/Reduction...

    Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)

    (LNT) Materials Enhanced High Temperature Performance of NOx Storage/Reduction (NSR) Materials Deactivation Mechanisms of Base Metal/Zeolite Urea Selective Catalytic Reduction...

  9. High Performance Mica-based Compressive Seals for Solid Oxide...

    U.S. Department of Energy (DOE) - all webpages (Extended Search)

    High Performance Mica-based Compressive Seals for Solid Oxide Fuel Cells Pacific Northwest National Laboratory Contact PNNL About This Technology In their work, PNNL researchers...

  10. High Performance Photovoltaic Project: Identifying Critical Paths; Preprint

    SciTech Connect

    Symko-Davies, M.; Zweibel, K.; Benner, J.; Sheldon, P.; Noufi, R.; Kurtz, S.; Coutts, T.; Hulstrom, R.

    2001-10-01

    Presented at the 2001 NCPV Program Review Meeting: Describes recent research accomplishments in in-house and subcontracted work in the High-Performance PV Project.

  11. DOE Announces Webinars on High Performance Enclosure Strategies...

    Energy Saver

    for Buildings, Fuel Cell Forklifts and Energy Management, and More DOE Announces Webinars on High Performance Enclosure Strategies for Buildings, Fuel Cell Forklifts and Energy ...

  12. Development of Alternative and Durable High Performance Cathode...

    Energy.gov [DOE] (indexed site)

    Development of Alternative and Durable High Performance Cathode Supports for PEM Fuel Cells Fuel Cell Kickoff Meeting Agenda Energy Storage Systems 2012 Peer Review Presentations - ...

  13. Threshold-based queuing system for performance analysis of cloud computing system with dynamic scaling

    SciTech Connect

    Shorgin, Sergey Ya.; Pechinkin, Alexander V.; Samouylov, Konstantin E.; Gaidamaka, Yuliya V.; Gudkova, Irina A.; Sopin, Eduard S.

    2015-03-10

    Cloud computing is a promising technology to manage and improve the utilization of computing center resources and to deliver various computing and IT services. For the purpose of energy saving, there is no need to operate many servers unnecessarily under light loads, and they are switched off. On the other hand, some servers should be switched on under heavy loads to prevent very long delays. Thus, waiting times and system operating cost can be maintained at an acceptable level by dynamically adding or removing servers. One more fact that should be taken into account is the significant server setup cost and activation time. For better energy efficiency, a cloud computing system should not react to instantaneous increases or decreases in load. That is the main motivation for using queuing systems with hysteresis to model cloud computing systems. In the paper, we provide a model of a cloud computing system in terms of a multiple-server, threshold-based, infinite-capacity queuing system with hysteresis and non-instantaneous server activation. For the proposed model, we develop a method for computing the steady-state probabilities that allow a number of performance measures to be estimated.
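
    The effect of hysteresis is easiest to see in a toy simulation: servers are added only when the queue exceeds a high threshold and removed only when it falls below a low one, so the gap between the two thresholds absorbs short load fluctuations. A sketch with made-up traffic (the paper develops an analytic queuing model, not a simulation):

```python
import random

# Toy sketch of threshold-based scaling with hysteresis (illustrative only).
# A server is added when the queue exceeds HI and removed when it falls
# below LO; the LO..HI gap keeps brief spikes from causing on/off cycling.
random.seed(42)
LO, HI = 2, 10
servers, queue, transitions = 1, 0, 0

for step in range(100_000):
    queue += random.choice((0, 0, 3))   # bursty arrivals, mean 1 job per step
    queue -= min(queue, servers)        # each active server finishes one job
    if queue > HI:
        servers += 1
        transitions += 1
    elif queue < LO and servers > 1:
        servers -= 1
        transitions += 1

print(f"servers: {servers}, queue: {queue}, on/off transitions: {transitions}")
```

    Narrowing the LO..HI gap in this sketch sharply increases the transition count, which is exactly the setup-cost penalty the hysteresis model is designed to avoid.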

  14. Quantitative evaluation of wrist posture and typing performance: A comparative study of 4 computer keyboards

    SciTech Connect

    Burastero, S.

    1994-05-01

    The present study focuses on an ergonomic evaluation of 4 computer keyboards, based on subjective analyses of operator comfort and on a quantitative analysis of typing performance and wrist posture during typing. The objectives of this study are (1) to quantify differences in the wrist posture and in typing performance when the four different keyboards are used, and (2) to analyze the subjective preferences of the subjects for alternative keyboards compared to the standard flat keyboard with respect to the quantitative measurements.

  15. Chemically Bonded Phosphorus/Graphene Hybrid as a High Performance...

    Office of Scientific and Technical Information (OSTI)

    Room temperature sodium-ion batteries are of great interest for high-energy-density energy ... anode for high performance sodium-ion batteries through a facile ball-milling of red ...

  16. User's manual for the vertical axis wind turbine performance computer code DARTER

    SciTech Connect

    Klimas, P. C.; French, R. E.

    1980-05-01

    The computer code DARTER (DARrieus Turbine Elemental Reynolds number) is an aerodynamic performance/loads prediction scheme based upon the conservation of momentum principle. It is the latest evolution in a sequence which began with a model developed by Templin of NRC, Canada, and progressed through the Sandia National Laboratories-developed SIMOSS (SImple MOmentum, Single Streamtube) and DART (DARrieus Turbine) to DARTER.
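
    In the simplest single-streamtube picture behind this family of codes, the conservation-of-momentum principle reduces to the actuator-disk relation CT = 4a(1 - a) between the thrust coefficient CT and the induction factor a. A minimal sketch of solving it (illustrative only, not DARTER itself; CT is an assumed value):

```python
# Hedged sketch of the momentum principle behind single-streamtube codes:
# solve CT = 4 a (1 - a) for the induction factor a by fixed-point
# iteration. Illustrative only; not the DARTER implementation.
CT = 0.6          # assumed rotor thrust coefficient
a = 0.0
for _ in range(100):
    a = 0.25 * CT / (1.0 - a)   # rearranged from CT = 4 a (1 - a)

print(f"induction factor a = {a:.4f}")
print(f"velocity at the rotor = {1.0 - a:.4f} x freestream")
```

    Multiple-streamtube and elemental-Reynolds-number refinements, as in DARTER, apply this balance per streamtube with blade-element aerodynamics rather than once for the whole rotor.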

  17. High-Performance I/O: HDF5 for Lattice QCD

    SciTech Connect

    Kurth, Thorsten; Pochinsky, Andrew; Sarje, Abhinav; Syritsyn, Sergey; Walker-Loud, Andre

    2015-01-01

    Practitioners of lattice QCD/QFT have been some of the primary pioneer users of state-of-the-art high-performance computing systems, and contribute to the stress testing of such new machines as soon as they become available. As with all aspects of high-performance computing, I/O is becoming an increasingly specialized component of these systems. In order to take advantage of the latest available high-performance I/O infrastructure, to ensure reliability and backwards compatibility of data files, and to help unify the data structures used in lattice codes, we have incorporated parallel HDF5 I/O into the SciDAC-supported USQCD software stack. Here we present the design and implementation of this I/O framework. Our HDF5 implementation outperforms optimized QIO at the 10-20% level and leaves room for further improvement by utilizing appropriate dataset chunking.
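
    Chunking is the tunable mentioned in the closing sentence: HDF5 stores a chunked dataset as independently addressable blocks, so a well-chosen chunk shape lets readers touch only the slabs they need. A small serial h5py sketch of the declaration (file, dataset, and shapes are made up; the paper's framework is parallel HDF5 inside the USQCD stack, not this code):

```python
import numpy as np
import h5py

# Sketch of HDF5 dataset chunking with h5py (serial, illustrative only).
lattice = np.random.rand(16, 16, 16, 32)   # toy 4D field, not real QCD data

with h5py.File("field.h5", "w") as f:
    # Chunk along the last axis so each chunk is one contiguous timeslice
    # slab; a reader can then fetch one slice without touching the rest.
    f.create_dataset("phi", data=lattice,
                     chunks=(16, 16, 16, 1), compression="gzip")

with h5py.File("field.h5", "r") as f:
    timeslice = f["phi"][:, :, :, 0]       # reads a single chunk column
    print("timeslice shape:", timeslice.shape)
```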

  18. A computational study of x-ray emission from high-Z x-ray sources...

    Office of Scientific and Technical Information (OSTI)

    A computational study of x-ray emission from high-Z x-ray sources on the National Ignition Facility laser Citation Details In-Document Search Title: A computational study of x-ray ...

  19. Scalable Computational Methods for the Analysis of High-Throughput Biological Data

    SciTech Connect

    Langston, Michael A

    2012-09-06

    The primary focus of this research project is elucidating genetic regulatory mechanisms that control an organism's responses to low-dose ionizing radiation. Although low doses (at most ten centigrays) are not lethal to humans, they elicit a highly complex physiological response, with the ultimate outcome in terms of risk to human health unknown. The tools of molecular biology and computational science will be harnessed to study coordinated changes in gene expression that orchestrate the mechanisms a cell uses to manage the radiation stimulus. High performance implementations of novel algorithms that exploit the principles of fixed-parameter tractability will be used to extract gene sets suggestive of co-regulation. Genomic mining will be performed to scrutinize, winnow and highlight the most promising gene sets for more detailed investigation. The overall goal is to increase our understanding of the health risks associated with exposures to low levels of radiation.
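
    Fixed-parameter tractability confines the combinatorial explosion to a small parameter k rather than the input size. The textbook example of the bounded-search-tree principle is k-vertex-cover, decidable in O(2^k · |E|) time; a compact sketch follows (the project's actual gene-set algorithms are clique-oriented, so this is only an illustration of the principle, on an invented graph):

```python
# Classic FPT bounded search tree for vertex cover: does the graph have a
# vertex cover of size <= k? Branching on an uncovered edge gives a search
# tree of depth k, hence O(2^k * |E|) total work.
def has_vertex_cover(edges, k):
    if not edges:
        return True            # every edge is covered
    if k == 0:
        return False           # edges remain but no budget left
    u, v = next(iter(edges))
    # Any cover must contain u or v; try both branches.
    without_u = {e for e in edges if u not in e}
    without_v = {e for e in edges if v not in e}
    return has_vertex_cover(without_u, k - 1) or has_vertex_cover(without_v, k - 1)

edges = {(1, 2), (1, 3), (2, 4), (3, 4), (4, 5)}
print(has_vertex_cover(edges, 2))  # True: {1, 4} covers every edge
```

    The runtime is exponential only in k, so for small parameters the method scales to very large graphs, which is the property the abstract's high performance implementations exploit.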

  20. Building America Webinar: High Performance Enclosure Strategies: Part II,

    Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)

    New Construction - August 13, 2014 - Cladding Attachment Over Thick Exterior Rigid Insulation | Department of Energy Cladding Attachment Over Thick Exterior Rigid Insulation Building America Webinar: High Performance Enclosure Strategies: Part II, New Construction - August 13, 2014 - Cladding Attachment Over Thick Exterior Rigid Insulation This presentation, Cladding Attachment Over Thick Rigid Exterior Insulation, was delivered at the Building America webinar, High Performance Enclosure

  1. Building America Webinar: High Performance Space Conditioning Systems, Part

    Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)

    II - Air Distribution Retrofit Strategies for Affordable Housing | Department of Energy Air Distribution Retrofit Strategies for Affordable Housing Building America Webinar: High Performance Space Conditioning Systems, Part II - Air Distribution Retrofit Strategies for Affordable Housing Jordan Dentz, Advanced Residential Integrated Energy Solutions (ARIES), and Francis Conlin, High Performance Building Solutions, Inc., presenting Air Distribution Retrofit Strategies for Affordable Housing.

  2. Guiding Market Introduction of High-Performance SSL Products | Department

    Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)

    of Energy Guiding Market Introduction of High-Performance SSL Products Guiding Market Introduction of High-Performance SSL Products 2014 DOE Solid-State Lighting Program Fact Sheet guidingmarket_factsheet_sept2013.pdf (2.19 MB) More Documents & Publications LED T8 Replacement Lamps Solid State Lighting: GATEWAY and CALiPER Emerging Lighting Technology

    3. Fluorescent Pigments for High-Performance Cool Roofing and Facades |

    Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)

    Department of Energy Fluorescent Pigments for High-Performance Cool Roofing and Facades Fluorescent Pigments for High-Performance Cool Roofing and Facades Addthis 1 of 3 PPG Industries and Lawrence Berkeley National Laboratory are partnering to develop a new class of dark-colored pigments for cool metal roof and façade coatings that incorporate near-infrared fluorescence and reflectance to improve energy performance. Image: PPG Industries 2 of 3 Berkeley Lab Heat Island Group physicist Paul

  4. High Performance Walls in Hot-Dry Climates

    SciTech Connect

    Hoeschele, M.; Springer, D.; Dakin, B.; German, A.

    2015-01-01

    High performance walls represent a high priority measure for moving the next generation of new homes to the Zero Net Energy performance level. The primary goal in improving wall thermal performance revolves around increasing the wall framing from 2x4 to 2x6, adding more cavity and exterior rigid insulation, achieving insulation installation criteria meeting ENERGY STAR's thermal bypass checklist, and reducing the amount of wood penetrating the wall cavity.

  5. Innovative High-Performance Deposition Technology for Low-Cost

    Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)

    Manufacturing of OLED Lighting | Department of Energy High-Performance Deposition Technology for Low-Cost Manufacturing of OLED Lighting Innovative High-Performance Deposition Technology for Low-Cost Manufacturing of OLED Lighting Lead Performer: OLEDWorks, LLC - Rochester, NY DOE Total Funding: $1,046,452 Cost Share: $1,046,452 Project Term: October 1, 2013 - March 31, 2017 Funding Opportunity: SSL Manufacturing R&D Funding Opportunity Announcement (FOA) DE-FOA-000079 Project Objective

  6. Integrated Design: A High-Performance Solution for Affordable Housing |

    Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)

    Department of Energy Integrated Design: A High-Performance Solution for Affordable Housing Integrated Design: A High-Performance Solution for Affordable Housing ARIES lab houses. Photo courtesy of The Levy Partnership, Inc. Lead Performer: The Levy Partnership, Inc., New York, NY. Partners: Habitat for Humanity International/Habitat Research Foundation, Atlanta, GA; Columbia County Habitat, NY; Habitat of Newburgh, NY; Habitat Greater

  7. Computer-Aided Design of Materials for use under High Temperature Operating Condition

    SciTech Connect

    Rajagopal, K. R.; Rao, I. J.

    2010-01-31

    The procedures in place for producing materials in order to optimize their performance with respect to creep characteristics, oxidation resistance, elevation of melting point, thermal and electrical conductivity, and other thermal and electrical properties are essentially trial-and-error experimentation that tends to be tremendously time consuming and expensive. A computational approach has been developed that can replace these trial-and-error procedures so that one can efficiently design and engineer materials for the application in question, leading to enhanced performance of the material, a significant decrease in costs, and a shorter time to produce such materials. The work has relevance to the design and manufacture of turbine blades operating at high operating temperatures; the development of armor and missile heads; corrosion-resistant tanks and containers; better conductors of electricity; and the numerous other applications that are envisaged for specially structured nanocrystalline solids. A robust thermodynamic framework is developed within which the computational approach is formulated. The procedure takes into account microstructural features such as the dislocation density, lattice mismatch, stacking faults, volume fractions of inclusions, interfacial area, etc. A robust model for single crystal superalloys that takes into account the microstructure of the alloy within the context of a continuum model is developed. Having developed the model, we then implement it in a computational scheme using the software ABAQUS/STANDARD. The results of the simulation are compared against experimental data in realistic geometries.

  8. PERFORMANCE

    U.S. Department of Energy (DOE) - all webpages (Extended Search)

    Jee Choi Kent Czechowski Cong Hou Chris McClanahan David S. Noble, Jr. Richard (Rich) Vuduc Salishan Conference on High-Speed Computing Gleneden Beach, Oregon -...

  9. Rebuilding It Better: Greensburg, Kansas, High Performance Buildings

    Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)

    Meeting Energy Savings Goals (Brochure) | Department of Energy Rebuilding It Better: Greensburg, Kansas, High Performance Buildings Meeting Energy Savings Goals (Brochure) Rebuilding It Better: Greensburg, Kansas, High Performance Buildings Meeting Energy Savings Goals (Brochure) This fact sheet provides a summary of how NREL's technical assistance in Greensburg, Kansas, helped the town rebuild green after recovering from a tornado in May 2007. Rebuilding It Better: Greensburg, Kansas, High

  10. Final Report- Low Cost High Performance Nanostructured Spectrally Selective Coating

    Office of Energy Efficiency and Renewable Energy (EERE)

    Solar absorbing coating is a key enabling technology for achieving high-temperature, high-efficiency concentrating solar power operation. A high-performance solar absorbing material must simultaneously meet all three of the following stringent requirements: high thermal efficiency (usually measured by a figure of merit), high-temperature durability, and oxidation resistance. The objective of this research is to employ a highly scalable process to fabricate and coat black oxide nanoparticles onto a solar absorber surface to achieve ultra-high thermal efficiency.

  11. computers

    National Nuclear Security Administration (NNSA)

    Retired computers used for cybersecurity research at Sandia National...

  12. PORST: a computer code to analyze the performance of retrofitted steam turbines

    SciTech Connect

    Lee, C.; Hwang, I.T.

    1980-09-01

    The computer code PORST was developed to analyze the performance of a retrofitted steam turbine that is converted from a generation-only unit to a cogenerating unit for purposes of district heating. Two retrofit schemes are considered: one converts a condensing turbine to a backpressure unit; the other allows the crossover extraction of steam between turbine cylinders. The code can analyze the performance of a turbine operating at: (1) valve-wide-open condition before retrofit, (2) partial load before retrofit, (3) valve-wide-open after retrofit, and (4) partial load after retrofit.

  13. Computer Modeling of Chemical and Geochemical Processes in High...

    U.S. Department of Energy (DOE) - all webpages (Extended Search)

    within Sandia's Defense Waste Management Programs located in Carlsbad, New Mexico. ... These parameters were developed through research performed within Sandia's Defense Waste ...

  14. Local Option- Property Tax Credit for High Performance Buildings

    Energy.gov [DOE]

    Similar to Maryland's Local Option Property Tax Credit for Renewable Energy, Title 9 of Maryland's property tax code creates an optional property tax credit for high performance buildings. This...

  15. A Comprehensive Look at High Performance Parallel I/O

    U.S. Department of Energy (DOE) - all webpages (Extended Search)

    In this era of "big data," high performance parallel IO-the way disk drives efficiently read and write information on HPC systems-is extremely important. Yet the last book to ...

  16. Anne Arundel County- High Performance Dwelling Property Tax Credit

    Office of Energy Efficiency and Renewable Energy (EERE)

    The state of Maryland permits local governments (Md Code: Property Tax § 9-242) to offer property tax credits for high performance buildings if they choose to do so. In October 2010 Anne Arundel...

  17. Montgomery County- High Performance Building Property Tax Credit

    Office of Energy Efficiency and Renewable Energy (EERE)

    The state of Maryland permits local governments (Md Code: Property Tax § 9-242) to offer property tax credits for high performance buildings if they choose to do so. Montgomery County has...

  18. Howard County- High Performance and Green Building Property Tax Credit

    Office of Energy Efficiency and Renewable Energy (EERE)

    The state of Maryland permits local governments (Md Code: Property Tax § 9-242) to offer property tax credits for high performance buildings and energy conservation devices (Md Code: Property Tax §...

  19. Development of Alternative and Durable High Performance Cathode...

    Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)

    Supports for PEM Fuel Cells Development of Alternative and Durable High Performance Cathode Supports for PEM Fuel Cells Part of a $100 million fuel cell award announced by DOE ...

  20. Advanced High-Performance Batteries for Electric Vehicle (EV...

    Energy.gov [DOE] (indexed site)

    High-Performance Batteries for Electric Vehicle (EV) Applications Ionel C. Stefan, Principal Investigator Amprius, Inc. June 6-10, 2016 ES241 This presentation does not contain any ...

    1. A High-Performance Recycling Solution for Polystyrene Achieved...

      Office of Scientific and Technical Information (OSTI)

      A High-Performance Recycling Solution for Polystyrene Achieved by the Synthesis of Renewable Poly(thioether) Networks Derived from D-Limonene Citation Details In-Document Search ...

    2. Energy Design Guidelines for High Performance Schools: Tropical Island Climates

      SciTech Connect

      2004-11-01

      Design guidelines outline high performance principles for the new or retrofit design of K-12 schools in tropical island climates. By incorporating energy improvements into construction or renovation plans, schools can reduce energy consumption and costs.

    3. Building America Webinar: High Performance Space Conditioning Systems, Part

      Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)

      I | Department of Energy I Building America Webinar: High Performance Space Conditioning Systems, Part I The webinar on Oct. 23, 2014, focused on strategies to improve the performance of HVAC systems for low load homes and home performance retrofits. Presenters and specific topics for this webinar will be: * Andrew Poerschke, IBACOS, presenting Simplified Space Conditioning in Low-load Homes. The presentation will focus on what is "simple" when it comes to space conditioning?

    4. Building America Webinar: High Performance Space Conditioning Systems, Part

      Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)

      II | Department of Energy II Building America Webinar: High Performance Space Conditioning Systems, Part II The webinar on Nov. 18, 2014, continued the series on strategies to improve the performance of HVAC systems for low load homes and home performance retrofits. Presenters and specific topics for this webinar included: William Zoeller, Consortium for Advanced Residential Retrofit (CARB), presented Design Options for Locating Ducts within Conditioned Space. The presentation provided an

    5. Building America Webinar: High Performance Enclosure Strategies: Part II,

      Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)

      New Construction - August 13, 2014 - Introduction | Department of Energy Introduction Building America Webinar: High Performance Enclosure Strategies: Part II, New Construction - August 13, 2014 - Introduction This presentation is the Introduction to the Building America webinar, High Performance Enclosure Strategies, Part II, held on August 13, 2014. BA webinar_intro_8_13_14.pdf (969.17 KB) More Documents & Publications Building America Webinar: Retrofitting Central Space Conditioning

    6. High-Performance Refrigerator Using Novel Rotating Heat Exchanger |

      Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)

      Department of Energy High-Performance Refrigerator Using Novel Rotating Heat Exchanger High-Performance Refrigerator Using Novel Rotating Heat Exchanger Rotating heat exchangers installed in appliances and heat pumps have the potential to reduce energy costs and refrigerant charge in a compact space. Sandia-developed rotating heat exchanger

    7. Seven NNSS buildings achieve High Performance Sustainable Building status |

      National Nuclear Security Administration (NNSA)

      National Nuclear Security Administration | (NNSA) Seven NNSS buildings achieve High Performance Sustainable Building status Monday, March 21, 2016 - 2:15pm Nevada Support Facility (NSF), Nevada National Security Site administrative headquarters. Nevada National Security Site (NNSS) - The National Nuclear Security Administration announced the award today of seven High Performance Sustainable Building (HPSB) plaques to the NNSS team for seven "green" buildings. The buildings are:

    8. Centerra Earns High Performance Rating for Savannah River Site Security

      Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)

      Operations | Department of Energy Centerra Earns High Performance Rating for Savannah River Site Security Operations Centerra Earns High Performance Rating for Savannah River Site Security Operations January 27, 2016 - 12:30pm Addthis Centerra protective force personnel conduct a vehicle inspection to prevent the introduction of prohibited items into a limited area at SRS. The canine team is trained to detect the presence of explosives and assists with vehicle and package inspections.

    9. Reliable, High Performance Transistors on Flexible Substrates - Energy

      U.S. Department of Energy (DOE) - all webpages (Extended Search)

      Innovation Portal: Advanced Materials. Reliable, High Performance Transistors on Flexible Substrates. Lawrence Berkeley National Laboratory. Contact LBL. About This Technology. Publications: "Backplanes for Conformal Electronics and Sensors," Nano Lett., 2011, 11, 5408-5413 (924 KB). Technology Marketing Summary: Researchers at Berkeley Lab have produced uniform, high performance transistors on mechanically

    10. National Best Practices Manual for Building High Performance Schools |

      Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)

      Department of Energy National Best Practices Manual for Building High Performance Schools National Best Practices Manual for Building High Performance Schools The Best Practices Manual was written as a part of the promotional effort for EnergySmart Schools, provided by the US Department of Energy, to educate school districts around the country about energy efficiency and renewable energy. nationalbestpracticesmanual31545.pdf (8.84 MB) More Documents & Publications Building

    11. Building America Roadmap to High Performance Homes | Department of Energy

      Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)

      Roadmap to High Performance Homes Building America Roadmap to High Performance Homes This presentation was delivered at the U.S. Department of Energy Building America Technical Update meeting on April 29-30, 2013, in Denver, Colorado. ba_roadmap_highperformance_werling.pdf (2.82 MB) More Documents & Publications Update on U.S. Department of Energy Building America Program Goals Update on U.S. Department of Energy Building America Program Goals Collective Impact for Zero Net Energy Homes

    12. Building America Webinar: High Performance Building Enclosures: Part I, Existing Homes

      Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)

      The webinar, presented on May 21, 2014, focused on specific Building America projects that have implemented technical solutions to retrofit building enclosures to reduce energy use and improve durability. Presenters answered tough questions such as: How can builders deal with increasing exterior foundation ...

    13. Building America Webinar: High Performance Enclosure Strategies: Part II, New Construction

      Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)

      The webinar is the second in the series on designing and constructing high performance building enclosures and will focus on effective strategies to address moisture and thermal needs. Peter Baker, Building Science Corporation, will discuss results of three years of laboratory and field exposure testing that examined the ...

    14. Vehicle Technologies Office Merit Review 2016: Advanced High-Performance Batteries for Electric Vehicle (EV) Applications

      Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)

      Presentation given by Amprius at the 2016 DOE Vehicle Technologies Office and Hydrogen and Fuel Cells Program Annual Merit Review and Peer Evaluation Meeting about batteries. es241_stefan_2016_p_web.pdf (739.96 KB)

    15. Text-Alternative Version of High Performance Space Conditioning Systems: Part I

      Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)

      October 21, 2014. Andrew Poerschke, Research Initiatives Specialist, IBACOS; Kohta Ueno, Senior Associate, Building Science Corporation. Gail: Hello everyone. I am Gail Werren with the National Renewable Energy Laboratory. And I'd like to welcome you to today's webinar hosted by the Building America Program. We are excited to have ...

    16. Webinar: ENERGY STAR Hot Water Systems for High Performance Homes

      Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)

      This presentation is from the Building America research team BA-PIRC webinar on September 30, 2011, providing information about how to achieve energy savings from solar water heating, dedicated electric heat pump water heating, and gas tankless systems. es_hot_water_systems.pdf (7.66 MB)

    17. In-Service Design & Performance Prediction of Advanced Fusion Material Systems by Computational Modeling and Simulation

      SciTech Connect

      G. R. Odette; G. E. Lucas

      2005-11-15

      This final report on "In-Service Design & Performance Prediction of Advanced Fusion Material Systems by Computational Modeling and Simulation" (DE-FG03-01ER54632) consists of a series of summaries of work that has been published, presented at meetings, or both. It briefly describes results on the following topics: 1) A Transport and Fate Model for Helium and Helium Management; 2) Atomistic Studies of Point Defect Energetics, Dynamics and Interactions; 3) Multiscale Modeling of Fracture, consisting of: 3a) A Micromechanical Model of the Master Curve (MC) Universal Fracture Toughness-Temperature Curve Relation, KJc(T - To); 3b) An Embrittlement ΔTo Prediction Model for the Irradiation Hardening Dominated Regime; 3c) Non-hardening Irradiation Assisted Thermal and Helium Embrittlement of 8Cr Tempered Martensitic Steels: Compilation and Analysis of Existing Data; 3d) A Model for the KJc(T) of a High Strength NFA MA957; 3e) Cracked Body Size and Geometry Effects of Measured and Effective Fracture Toughness - Model Based MC and To Evaluations of F82H and Eurofer 97; 3f) Size and Geometry Effects on the Effective Toughness of Cracked Fusion Structures; 4) Modeling the Multiscale Mechanics of Flow Localization-Ductility Loss in Irradiation Damaged BCC Alloys; and 5) A Universal Relation Between Indentation Hardness and True Stress-Strain Constitutive Behavior. Further details can be found in the cited references and presentations, which generally can be accessed on the internet or provided upon request to the authors. Finally, it is noted that this effort was integrated with our base program in fusion materials, also funded by the DOE OFES.
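
      As context for the Master Curve relation named in topic 3a above (this is the standardized ASTM E1921 form, quoted here for orientation rather than taken from the report itself), the median fracture toughness of a 1T fracture specimen is commonly written as

      K_{Jc(\mathrm{med})}(T) = 30 + 70\,\exp\!\left[0.019\,(T - T_0)\right] \quad \mathrm{MPa\,\sqrt{m}}, \qquad T,\ T_0\ \mathrm{in}\ ^\circ\mathrm{C},

      where T_0 is the reference temperature at which the median toughness equals 100 MPa·√m. The embrittlement model of topic 3b predicts the irradiation-induced shift \Delta T_0, which translates this single universal curve along the temperature axis.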

    18. SCALE: A modular code system for performing standardized computer analyses for licensing evaluation

      SciTech Connect

      1997-03-01

      This Manual represents Revision 5 of the user documentation for the modular code system referred to as SCALE. The history of the SCALE code system dates back to 1969 when the current Computational Physics and Engineering Division at Oak Ridge National Laboratory (ORNL) began providing the transportation package certification staff at the U.S. Atomic Energy Commission with computational support in the use of the new KENO code for performing criticality safety assessments with the statistical Monte Carlo method. From 1969 to 1976 the certification staff relied on the ORNL staff to assist them in the correct use of codes and data for criticality, shielding, and heat transfer analyses of transportation packages. However, the certification staff learned that, with only occasional use of the codes, it was difficult to become proficient in performing the calculations often needed for an independent safety review. Thus, shortly after the move of the certification staff to the U.S. Nuclear Regulatory Commission (NRC), the NRC staff proposed the development of an easy-to-use analysis system that provided the technical capabilities of the individual modules with which they were familiar. With this proposal, the concept of the Standardized Computer Analyses for Licensing Evaluation (SCALE) code system was born. This manual covers an array of modules written for the SCALE package, consisting of drivers, system libraries, cross section and materials properties libraries, input/output routines, storage modules, and help files.
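
      The generation-based Monte Carlo approach that KENO pioneered for criticality safety can be sketched in miniature. The Python toy below is not SCALE or KENO code; it is a minimal illustration, with invented one-group cross sections, of how fission generations are tracked in a bare 1-D slab and how k-eff emerges as the ratio of neutrons produced to neutrons started per generation.

      import math
      import random

      # Toy one-group constants -- invented for illustration, not SCALE/KENO data.
      SIG_T = 1.0        # total macroscopic cross section (1/cm)
      P_SCATTER = 0.60   # probability that a collision scatters the neutron
      P_FISSION = 0.25   # probability that a collision causes fission
      NU = 2.5           # mean neutrons released per fission
      HALF_WIDTH = 4.0   # slab half-width (cm); vacuum (leakage) beyond

      def run_generation(sites):
          """Track one fission generation; return the banked fission sites."""
          bank = []
          for x in sites:
              while True:
                  mu = random.uniform(-1.0, 1.0)                  # isotropic direction cosine
                  x += mu * (-math.log(random.random()) / SIG_T)  # sample free flight
                  if abs(x) > HALF_WIDTH:                         # neutron leaks out
                      break
                  r = random.random()
                  if r < P_SCATTER:
                      continue                                    # scattered: keep flying
                  if r < P_SCATTER + P_FISSION:                   # fission: bank progeny
                      n = int(NU) + (random.random() < NU % 1.0)  # 2 or 3 neutrons
                      bank.extend([x] * n)
                  break                                           # fission or capture ends history
          return bank

      def estimate_keff(n_hist=2000, n_gen=60, n_skip=15):
          """Average per-generation multiplication over the active generations."""
          sites = [random.uniform(-HALF_WIDTH, HALF_WIDTH) for _ in range(n_hist)]
          k_active = []
          for gen in range(n_gen):
              bank = run_generation(sites)
              if not bank:            # deeply subcritical toy case: nothing to iterate on
                  return 0.0
              if gen >= n_skip:       # skip early generations while the source converges
                  k_active.append(len(bank) / len(sites))
              sites = [random.choice(bank) for _ in range(n_hist)]  # renormalize source
          return sum(k_active) / len(k_active)

      if __name__ == "__main__":
          print(f"toy k-eff estimate: {estimate_keff():.3f}")

      Production codes such as KENO add the real physics this sketch omits (continuous-energy or multigroup cross sections from evaluated libraries, 3-D geometry, anisotropic scattering, variance estimation), but the skip-then-average power iteration over fission generations shown here is the same basic scheme.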

    19. High-pressure X-ray diffraction, Raman, and computational studies of MgCl2 up to 1 Mbar: ...

      Office of Scientific and Technical Information (OSTI)


    20. High-Performance Photovoltaic Project: Identifying Critical Pathways; Kickoff Meeting

      SciTech Connect

      Symko-Davis, M.

      2001-11-07

      The High Performance Photovoltaic Project held a kickoff meeting in October 2001. This booklet contains the presentations given by subcontractors and in-house teams at that meeting. The areas of subcontracted research under the HiPer project include Polycrystalline Thin Films and Multijunction Concentrators. The in-house teams in this initiative will focus on three areas: (1) the High-Performance Thin-Film Team, which leads the investigation of tandem structures and low-flux concentrators; (2) the High-Efficiency Concepts and Concentrators Team, an expansion of an existing team that leads the development of high-flux concentrators; and (3) the Thin-Film Process Integration Team, which will perform fundamental process and characterization research to resolve the complex issues of making thin-film multijunction devices.
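
      As standard background for the concentrator work described above (textbook optics, not taken from the meeting booklet), the geometric concentration ratio of a concentrator is C = A_{\mathrm{aperture}} / A_{\mathrm{receiver}}, and thermodynamics bounds it by

      C_{\max} = \frac{n^2}{\sin^2 \theta_s} \approx 4.6 \times 10^4\, n^2,

      where \theta_s \approx 0.267^\circ is the half-angle subtended by the Sun and n is the refractive index of the medium at the receiver. The project's "low-flux" and "high-flux" designations refer to operating points orders of magnitude below this limit.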