National Library of Energy BETA

Sample records for high performance computing

  1. High Performance Computing

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

High-Performance Computing INL's high-performance computing center provides general use scientific computing capabilities to support the lab's efforts in advanced...

  2. Sandia Energy - High Performance Computing

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

High Performance Computing Home Energy Research Advanced Scientific Computing Research (ASCR) High Performance Computing ...

  3. Introduction to High Performance Computing

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

Introduction to High Performance Computing Introduction to High Performance Computing June 10, 2013 Downloads Download File Gerber-HPC-2.pdf...

  4. Software and High Performance Computing

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Software and High Performance Computing Software and High Performance Computing Providing world-class high performance computing capability that enables unsurpassed solutions to complex problems of strategic national interest Contact thumbnail of Kathleen McDonald Head of Intellectual Property, Business Development Executive Kathleen McDonald Richard P. Feynman Center for Innovation (505) 667-5844 Email Software Computational physics, computer science, applied mathematics, statistics and the

  5. Thrusts in High Performance Computing

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

Thrusts in High Performance Computing Science at Scale Petaflops to Exaflops Science through Volume Thousands to Millions of Simulations Science in Data Petabytes to ...

  6. Presentation: High Performance Computing Applications

    Office of Energy Efficiency and Renewable Energy (EERE)

    A briefing to the Secretary's Energy Advisory Board on High Performance Computing Applications delivered by Frederick H. Streitz, Lawrence Livermore National Laboratory.

  7. Software and High Performance Computing

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Computational physics, computer science, applied mathematics, statistics and the ... a fully operational supercomputing environment Providing Current Capability Scientific ...

  8. in High Performance Computing Computer System, Cluster, and Networking...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    iSSH v. Auditd: Intrusion Detection in High Performance Computing Computer System, Cluster, and Networking Summer Institute David Karns, New Mexico State University Katy Protin,...

  9. High Performance Computing Student Career Resources

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

High Performance Computing Student Career Resources Explore the multiple dimensions of a career at Los Alamos Lab: work with the best minds on the planet in an inclusive environment that is rich in intellectual vitality and opportunities for growth. Contact Us Student Liaison Josephine Kilde (505) 667-5086 Email High Performance Computing Capabilities The High Performance Computing (HPC) Division supports the Laboratory mission by managing world-class Supercomputing Centers. Our

  10. SciTech Connect: "high performance computing"

    Office of Scientific and Technical Information (OSTI)

    Advanced Search Term Search Semantic Search Advanced Search All Fields: "high performance computing" Semantic Semantic Term Title: Full Text: Bibliographic Data: Creator ...

  11. Introduction to High Performance Computing Using GPUs

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    HPC Using GPUs Introduction to High Performance Computing Using GPUs July 11, 2013 NERSC, NVIDIA, and The Portland Group presented a one-day workshop "Introduction to High Performance Computing Using GPUs" on July 11, 2013 in Room 250 of Sutardja Dai Hall on the University of California, Berkeley, campus. Registration was free and open to all NERSC users; Berkeley Lab Researchers; UC students, faculty, and staff; and users of the Oak Ridge Leadership Computing Facility. This workshop

  12. High Performance Computing Data Center (Fact Sheet)

    SciTech Connect (OSTI)

    Not Available

    2014-08-01

    This two-page fact sheet describes the new High Performance Computing Data Center in the ESIF and talks about some of the capabilities and unique features of the center.

  13. High Performance Computing Data Center (Fact Sheet)

    SciTech Connect (OSTI)

    Not Available

    2012-08-01

    This two-page fact sheet describes the new High Performance Computing Data Center being built in the ESIF and talks about some of the capabilities and unique features of the center.

  14. OCIO Technology Summit: High Performance Computing

    Office of Energy Efficiency and Renewable Energy (EERE)

    Last week, the Office of the Chief Information Officer sponsored a Technology Summit on High Performance Computing (HPC), hosted by the Chief Technology Officer.  This was the eleventh in a series...

  15. Collaboration to advance high-performance computing

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Collaboration to advance high-performance computing Collaboration to advance high-performance computing LANL and EMC will enhance, design, build, test, and deploy new cutting-edge technologies to meet some of the most difficult information technology challenges. December 21, 2011 Los Alamos National Laboratory sits on top of a once-remote mesa in northern New Mexico with the Jemez mountains as a backdrop to research and innovation covering multi-disciplines from bioscience, sustainable energy

  16. Debugging a high performance computing program

    DOE Patents [OSTI]

    Gooding, Thomas M.

    2014-08-19

    Methods, apparatus, and computer program products are disclosed for debugging a high performance computing program by gathering lists of addresses of calling instructions for a plurality of threads of execution of the program, assigning the threads to groups in dependence upon the addresses, and displaying the groups to identify defective threads.
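The grouping step the abstract describes can be sketched in a few lines (a minimal illustration under assumed inputs, not the patented implementation; the thread IDs and addresses below are hypothetical):

```python
from collections import defaultdict

def group_threads_by_call_site(thread_call_addrs):
    """Group thread IDs by the address of the calling instruction each
    thread was gathered at, so outlier (possibly defective) threads
    stand out from the large groups making normal progress."""
    groups = defaultdict(list)
    for thread_id, addr in thread_call_addrs.items():
        groups[addr].append(thread_id)
    # Smallest groups first: a lone thread stopped at an unusual
    # address is the most likely defect candidate.
    return sorted(groups.items(), key=lambda kv: len(kv[1]))

# Hypothetical snapshot: thread 7 is stopped at a different call site.
snapshot = {0: 0x4005F0, 1: 0x4005F0, 2: 0x4005F0, 7: 0x400A10}
for addr, threads in group_threads_by_call_site(snapshot):
    print(hex(addr), threads)
```

Displaying the groups sorted by size puts the suspicious stragglers at the top, which is the point of the technique when a run has thousands of threads.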

  17. Debugging a high performance computing program

    DOE Patents [OSTI]

    Gooding, Thomas M.

    2013-08-20

    Methods, apparatus, and computer program products are disclosed for debugging a high performance computing program by gathering lists of addresses of calling instructions for a plurality of threads of execution of the program, assigning the threads to groups in dependence upon the addresses, and displaying the groups to identify defective threads.

  18. Fermilab | Science at Fermilab | Computing | High-performance...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

Lattice QCD Farm at the Grid Computing Center at Fermilab. Computing High-performance Computing A workstation computer ...

  19. Climate Modeling using High-Performance Computing

    SciTech Connect (OSTI)

    Mirin, A A

    2007-02-05

    The Center for Applied Scientific Computing (CASC) and the LLNL Climate and Carbon Science Group of Energy and Environment (E and E) are working together to improve predictions of future climate by applying the best available computational methods and computer resources to this problem. Over the last decade, researchers at the Lawrence Livermore National Laboratory (LLNL) have developed a number of climate models that provide state-of-the-art simulations on a wide variety of massively parallel computers. We are now developing and applying a second generation of high-performance climate models. Through the addition of relevant physical processes, we are developing an earth systems modeling capability as well.

  20. High Performance Computing at the Oak Ridge Leadership Computing Facility

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

High Performance Computing at the Oak Ridge Leadership Computing Facility Outline * Our Mission * Computer Systems: Present, Past, Future * Challenges Along the Way * Resources for Users Our Mission * World's most powerful computing facility * Nation's largest concentration of open source materials research * $1.3B budget * 4,250 employees * 3,900 research guests annually * $350 million invested in modernization * Nation's most diverse energy

  1. High-performance computing for airborne applications

    SciTech Connect (OSTI)

    Quinn, Heather M; Manuzzato, Andrea; Fairbanks, Tom; Dallmann, Nicholas; Desgeorges, Rose

    2010-06-28

Recently, there have been attempts to move common satellite tasks to unmanned aerial vehicles (UAVs). UAVs are significantly cheaper to buy than satellites and easier to deploy on an as-needed basis. The more benign radiation environment also allows for an aggressive adoption of state-of-the-art commercial computational devices, which increases the amount of data that can be collected. There are a number of commercial computing devices currently available that are well-suited to high-performance computing. These devices range from specialized computational devices, such as field-programmable gate arrays (FPGAs) and digital signal processors (DSPs), to traditional computing platforms, such as microprocessors. Even though the radiation environment is relatively benign, these devices could be susceptible to single-event effects. In this paper, we will present radiation data for high-performance computing devices in an accelerated neutron environment. These devices include a multi-core digital signal processor, two field-programmable gate arrays, and a microprocessor. From these results, we found that all of these devices are suitable for many airplane environments without reliability problems.

  2. High Performance Computing | Argonne National Laboratory

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

High Performance Computing A visualization of a simulated collision event in the ATLAS detector. This simulation, containing a Z boson and five hadronic jets, is an example of an event that is too complex to be simulated in bulk using ordinary PC-based computing grids.

  3. OSTIblog Articles in the High-performance computing Topic | OSTI...

    Office of Scientific and Technical Information (OSTI)

    Research, ASCR, climate change, earth systems modeling, High-performance computing, ... ORNL's National Center for Computational Sciences... Related Topics: High-performance ...

  4. Nuclear Forces and High-Performance Computing: The Perfect Match...

    Office of Scientific and Technical Information (OSTI)

    Conference: Nuclear Forces and High-Performance Computing: The Perfect Match Citation Details In-Document Search Title: Nuclear Forces and High-Performance Computing: The Perfect ...

  5. High Performance Computing Richard F. BARRETT

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

Role of Co-design in High Performance Computing Richard F. BARRETT (a,1), Shekhar BORKAR (b), Sudip S. DOSANJH (c), Simon D. HAMMOND (a), Michael A. HEROUX (a), X. Sharon HU (d), Justin LUITJENS (e), Steven G. PARKER (e), John SHALF (c), and Li TANG (d); (a) Sandia National Laboratories, Albuquerque, NM, USA; (b) Intel Corporation; (c) Lawrence Berkeley National Laboratory, Berkeley, CA, USA; (d) University of Notre Dame, South Bend, IN, USA; (e) Nvidia, Inc., Santa Clara, CA, USA. Abstract. Preparations for Exascale

  6. Computational Performance of Ultra-High-Resolution Capability...

    Office of Scientific and Technical Information (OSTI)

    Computational Performance of Ultra-High-Resolution Capability in the Community Earth System Model Citation Details In-Document Search Title: Computational Performance of ...

  7. High Performance Computing Facility Operational Assessment, CY...

    Office of Scientific and Technical Information (OSTI)

    At 2.33 petaflops peak performance, the Cray XT Jaguar delivered more than 1.4 billion core hours in calendar year (CY) 2011 to researchers around the world for computational ...

  8. Energy Efficiency Opportunities in Federal High Performance Computing...

    Broader source: Energy.gov (indexed) [DOE]

Efficiency Opportunities in Federal High Performance Computing Data Centers Prepared for ... EEMs for HPC Data Centers ...

  9. High-performance computer system installed at Los Alamos National

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

Laboratory High-performance computer system installed at Lab High-performance computer system installed at Los Alamos National Laboratory New high-performance computer system, called Wolf, will be used for unclassified research. June 17, 2014 The Wolf computer system modernizes mid-tier resources for Los Alamos scientists. Contact Nancy Ambrosiano Communications Office (505) 667-0471 Email "This machine

  10. High-Performance Computing at Los

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

Performance Computing at Los Alamos announces milestone for key/value middleware May 26, 2014 Billion inserts-per-second data milestone reached for supercomputing tool LOS ALAMOS, N.M., May 29, 2014 - At Los Alamos, a supercomputer epicenter where "big data set" really means something, a data middleware project has achieved a milestone for specialized information organization and storage. The Multi-dimensional Hashed Indexed Middleware (MDHIM) project at Los Alamos National Laboratory

  11. Bill Carlson IDA Center for Computing Sciences Making High Performance

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    ... Integration Establish correlation between database tables and data structures in memory. ... ctime(t->b)); t++; High Performance computing is in trouble Not because of performance ...

  12. High-Performance Computing Data Center Metering Protocol | Department of

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Energy High-Performance Computing Data Center Metering Protocol High-Performance Computing Data Center Metering Protocol Guide details the methods for measurement in High-Performance Computing (HPC) data center facilities and documents system strategies that have been used in Department of Energy data centers to increase data center energy efficiency. Download the guide. (1.34 MB) More Documents & Publications Liquid Cooling v. Air Cooling Evaluation in the Maui High-Performance

  13. High-performance computer system installed at Los Alamos National

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

Laboratory High-performance computer system installed at Los Alamos National Laboratory Alumni Link: Opportunities, News and Resources for Former Employees Latest Issue: September 2015 High-performance computer system installed at Los Alamos National Laboratory New high-performance computer system, called Wolf, will be used for unclassified research September 2, 2014 New insights to changing the atomic structure of metals The Wolf computer system modernizes

  14. Energy Efficiency Opportunities in Federal High Performance Computing Data

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Centers | Department of Energy Efficiency Opportunities in Federal High Performance Computing Data Centers Energy Efficiency Opportunities in Federal High Performance Computing Data Centers Case study describes an outline of energy efficiency opportunities in federal high-performance computing data centers. Download the case study. (1.05 MB) More Documents & Publications Case Study: Opportunities to Improve Energy Efficiency in Three Federal Data Centers Case Study: Innovative Energy

  15. High-Performance Computing for Advanced Smart Grid Applications...

    Office of Scientific and Technical Information (OSTI)

    Title: High-Performance Computing for Advanced Smart Grid Applications The power grid is becoming far more complex as a result of the grid evolution meeting an information ...

  16. NNSA Awards Contract for High-Performance Computers | National...

    National Nuclear Security Administration (NNSA)

    Awards Contract for High-Performance Computers October 02, 2007 Contract Highlights Efforts to Integrate Nuclear Weapons Complex WASHINGTON, D.C. -- The Department of Energy's ...

  17. Webinar "Applying High Performance Computing to Engine Design...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Webinar "Applying High Performance Computing to Engine Design Using Supercomputers" Share ... Study Benefits of Bioenergy Crop Integration Video: Biofuel technology at Argonne

  18. high performance computing | National Nuclear Security Administration

    National Nuclear Security Administration (NNSA)

    Livermore National Laboratory (LLNL), announced her retirement last week after 15 years of leading Livermore's Computation Directorate. "Dona has successfully led a ...

  19. High Performance Computing Data Center Metering Protocol

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    1.5% of all electricity used in the US at that time. The report then suggested that the overall consumption would rise ... computers utilized by end users, and servers and ...

  20. DOE ASSESSMENT SEAB Recommendations Related to High Performance Computing

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

DOE ASSESSMENT SEAB Recommendations Related to High Performance Computing 1. Introduction The Department of Energy (DOE) is planning to develop and deliver capable exascale computing systems by 2023-24. These systems are expected to have a one-hundred to one-thousand-fold increase in sustained performance over today's computing capabilities, capabilities critical to enabling the next-generation computing for national security, science, engineering, and large- scale data analytics needed to

  1. High Performance Computational Biology: A Distributed computing Perspective (2010 JGI/ANL HPC Workshop)

    ScienceCinema (OSTI)

    Konerding, David [Google, Inc

    2011-06-08

    David Konerding from Google, Inc. gives a presentation on "High Performance Computational Biology: A Distributed Computing Perspective" at the JGI/Argonne HPC Workshop on January 26, 2010.

  2. 100 supercomputers later, Los Alamos high-performance computing still

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

supports national security mission High-performance computing supports national security 100 supercomputers later, Los Alamos high-performance computing still supports national security mission Los Alamos National Laboratory has deployed 100 supercomputers in the last 60 years. November 12, 2014 1952 MANIAC-I supercomputer Contact Nancy Ambrosiano Communications Office (505) 667-0471 Email "Computing power for our Laboratory's national security mission is a

  3. High-performance computing of electron microstructures

    SciTech Connect (OSTI)

    Bishop, A. [Los Alamos National Lab., NM (United States); Birnir, B.; Galdrikian, B.; Wang, L. [Univ. of California, Santa Barbara, CA (United States)

    1998-12-01

    This is the final report of a three-year, Laboratory Directed Research and Development (LDRD) project at the Los Alamos National Laboratory (LANL). The project was a collaboration between the Quantum Institute at the University of California-Santa Barbara (UCSB) and the Condensed Matter and Statistical Physics Group at LANL. The project objective, which was successfully accomplished, was to model quantum properties of semiconductor nanostructures that were fabricated and measured at UCSB using dedicated molecular-beam epitaxy and free-electron laser facilities. A nonperturbative dynamic quantum theory was developed for systems driven by time-periodic external fields. For such systems, dynamic energy spectra of electrons and photons and their corresponding wave functions were obtained. The results are in good agreement with experimental investigations. The algorithms developed are ideally suited for massively parallel computing facilities and provide a fundamental advance in the ability to predict quantum-well properties and guide their engineering. This is a definite step forward in the development of nonlinear optical devices.

  4. Intro - High Performance Computing for 2015 HPC Annual Report

    SciTech Connect (OSTI)

    Klitsner, Tom

    2015-10-01

    The recent Executive Order creating the National Strategic Computing Initiative (NSCI) recognizes the value of high performance computing for economic competitiveness and scientific discovery and commits to accelerate delivery of exascale computing. The HPC programs at Sandia –the NNSA ASC program and Sandia’s Institutional HPC Program– are focused on ensuring that Sandia has the resources necessary to deliver computation in the national interest.

  5. High-Performance Computing and Visualization | Energy Systems Integration |

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

NREL High-Performance Computing and Visualization High-performance computing (HPC) and visualization at NREL propel technology innovation as a research tool by which scientists and engineers find new ways to tackle our nation's energy challenges - challenges that cannot be addressed through traditional experimentation alone. These research efforts will save time and money and significantly improve the likelihood of breakthroughs

  6. Continuous Monitoring And Cyber Security For High Performance Computing

    Office of Scientific and Technical Information (OSTI)

    (Conference) | SciTech Connect Conference: Continuous Monitoring And Cyber Security For High Performance Computing Citation Details In-Document Search Title: Continuous Monitoring And Cyber Security For High Performance Computing Authors: Malin, Alex B. [1] ; Van Heule, Graham K. [1] + Show Author Affiliations Los Alamos National Laboratory Publication Date: 2013-08-02 OSTI Identifier: 1089452 Report Number(s): LA-UR-13-21921 DOE Contract Number: AC52-06NA25396 Resource Type: Conference

  7. A Component Architecture for High-Performance Scientific Computing

    SciTech Connect (OSTI)

    Bernholdt, D E; Allan, B A; Armstrong, R; Bertrand, F; Chiu, K; Dahlgren, T L; Damevski, K; Elwasif, W R; Epperly, T W; Govindaraju, M; Katz, D S; Kohl, J A; Krishnan, M; Kumfert, G; Larson, J W; Lefantzi, S; Lewis, M J; Malony, A D; McInnes, L C; Nieplocha, J; Norris, B; Parker, S G; Ray, J; Shende, S; Windus, T L; Zhou, S

    2004-12-14

    The Common Component Architecture (CCA) provides a means for software developers to manage the complexity of large-scale scientific simulations and to move toward a plug-and-play environment for high-performance computing. In the scientific computing context, component models also promote collaboration using independently developed software, thereby allowing particular individuals or groups to focus on the aspects of greatest interest to them. The CCA supports parallel and distributed computing as well as local high-performance connections between components in a language-independent manner. The design places minimal requirements on components and thus facilitates the integration of existing code into the CCA environment. The CCA model imposes minimal overhead to minimize the impact on application performance. The focus on high performance distinguishes the CCA from most other component models. The CCA is being applied within an increasing range of disciplines, including combustion research, global climate simulation, and computational chemistry.

  8. LANL installs high-performance computer system | National Nuclear Security

    National Nuclear Security Administration (NNSA)

    Administration | (NNSA) LANL installs high-performance computer system Friday, June 20, 2014 - 10:29am Los Alamos National Laboratory recently installed a new high-performance computer system, called Wolf, which will be used for unclassified research. Wolf will help modernize mid-tier resources available to the lab and can be used to advance many fields of science. Wolf, manufactured by Cray Inc., has 616 compute nodes, each with two 8-core 2.6 GHz Intel "Sandybridge" processors,

  9. High-Performance Computing for Advanced Smart Grid Applications

    SciTech Connect (OSTI)

    Huang, Zhenyu; Chen, Yousu

    2012-07-06

The power grid is becoming far more complex as a result of the grid evolution meeting an information revolution. Due to the penetration of smart grid technologies, the grid is evolving at an unprecedented speed, and the information infrastructure is fundamentally improved with a large number of smart meters and sensors that produce several orders of magnitude larger amounts of data. How to pull data in, perform analysis, and put information out in a real-time manner is a fundamental challenge in smart grid operation and planning. The future power grid requires high performance computing to be one of the foundational technologies in developing the algorithms and tools for the significantly increased complexity. New techniques and computational capabilities are required to meet the demands for higher reliability and better asset utilization, including advanced algorithms and computing hardware for large-scale modeling, simulation, and analysis. This chapter summarizes the computational challenges in smart grid and the need for high performance computing, and presents examples of how high performance computing might be used for future smart grid operation and planning.

  10. High performance computing and communications: FY 1997 implementation plan

    SciTech Connect (OSTI)

    1996-12-01

The High Performance Computing and Communications (HPCC) Program was formally authorized by passage, with bipartisan support, of the High-Performance Computing Act of 1991, signed on December 9, 1991. The original Program, in which eight Federal agencies participated, has now grown to twelve agencies. This Plan provides a detailed description of the agencies' FY 1996 HPCC accomplishments and FY 1997 HPCC plans. Section 3 of this Plan provides an overview of the HPCC Program. Section 4 contains more detailed definitions of the Program Component Areas, with an emphasis on the overall directions and milestones planned for each PCA. Appendix A provides a detailed look at HPCC Program activities within each agency.

  11. High Performance Parallel Computing of Flows in Complex Geometries |

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

Argonne Leadership Computing Facility Geometries Authors: Gicquel, L.Y.M., Gourdain, N., Boussuge, J.F., Deniau, H., Staffelbach, G., Wolf, P., Poinsot, T. Efficient numerical tools taking advantage of the ever-increasing power of high-performance computers become key elements in the fields of energy supply and transportation, not only from a purely scientific point of view, but also at the design stage in industry. Indeed, flow phenomena that occur in or around the industrial

  12. High Performance Parallel Computing of Flows in Complex Geometries: I.

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

Methods | Argonne Leadership Computing Facility I. Methods Authors: Gourdain, N., Gicquel, L., Montagnac, M., Vermorel, O., Gazaix, M., Staffelbach, G., Garcia, M., Boussuge, J-F, Poinsot, T. Efficient numerical tools coupled with high-performance computers have become a key element of the design process in the fields of energy supply and transportation. However, flow phenomena that occur in complex systems such as gas turbines and aircraft are still not understood mainly because of the

  13. The role of interpreters in high performance computing

    SciTech Connect (OSTI)

    Naumann, Axel; Canal, Philippe; /Fermilab

    2008-01-01

Compiled code is fast, interpreted code is slow. There is not much we can do about it, and it's the reason why interpreter use in high performance computing is usually restricted to job submission. We show where interpreters make sense even in the context of analysis code, and what aspects have to be taken into account to make this combination a success.

  14. High performance computing and communications: FY 1996 implementation plan

    SciTech Connect (OSTI)

    1995-05-16

The High Performance Computing and Communications (HPCC) Program was formally authorized by passage of the High Performance Computing Act of 1991, signed on December 9, 1991. Twelve federal agencies, in collaboration with scientists and managers from US industry, universities, and research laboratories, have developed the Program to meet the challenges of advancing computing and associated communications technologies and practices. This plan provides a detailed description of the agencies' HPCC implementation plans for FY 1995 and FY 1996. This Implementation Plan contains three additional sections. Section 3 provides an overview of the HPCC Program definition and organization. Section 4 contains a breakdown of the five major components of the HPCC Program, with an emphasis on the overall directions and milestones planned for each one. Section 5 provides a detailed look at HPCC Program activities within each agency.

  15. Toward a new metric for ranking high performance computing systems...

    Office of Scientific and Technical Information (OSTI)

    performance for a growing collection of important science and engineering applications. ... performance and expect to drive computer system design and implementation in ...

  16. A directory service for configuring high-performance distributed computations

    SciTech Connect (OSTI)

    Fitzgerald, S.; Kesselman, C.; Foster, I.

    1997-08-01

    High-performance execution in distributed computing environments often requires careful selection and configuration not only of computers, networks, and other resources but also of the protocols and algorithms used by applications. Selection and configuration in turn require access to accurate, up-to-date information on the structure and state of available resources. Unfortunately, no standard mechanism exists for organizing or accessing such information. Consequently, different tools and applications adopt ad hoc mechanisms, or they compromise their portability and performance by using default configurations. We propose a Metacomputing Directory Service that provides efficient and scalable access to diverse, dynamic, and distributed information about resource structure and state. We define an extensible data model to represent required information and present a scalable, high-performance, distributed implementation. The data representation and application programming interface are adopted from the Lightweight Directory Access Protocol; the data model and implementation are new. We use the Globus distributed computing toolkit to illustrate how this directory service enables the development of more flexible and efficient distributed computing services and applications.
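The resource-selection problem the abstract motivates can be illustrated with a toy attribute-matching lookup (an illustrative sketch only, not the Metacomputing Directory Service or LDAP API; every record and field name here is invented):

```python
# Toy in-memory "directory" of resource records, loosely modeled on
# LDAP-style attribute maps (all fields are invented for illustration).
resources = [
    {"type": "compute", "name": "node17", "cpus": 16, "state": "free"},
    {"type": "compute", "name": "node18", "cpus": 8,  "state": "busy"},
    {"type": "network", "name": "hsnet0", "bandwidth_gbps": 100},
]

def search(directory, **attrs):
    """Return records whose attributes match every given filter, the
    way an application might select resources before configuring its
    protocols and algorithms around them."""
    return [r for r in directory
            if all(r.get(k) == v for k, v in attrs.items())]

free_compute = search(resources, type="compute", state="free")
```

A real directory service adds hierarchy, distribution, caching, and staleness handling; the point of the sketch is only that selection reduces to querying structured, up-to-date attributes rather than relying on hard-coded default configurations.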

  17. SC15 High Performance Computing (HPC) Transforms Batteries - Joint Center

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    for Energy Storage Research September 21, 2015, Videos SC15 High Performance Computing (HPC) Transforms Batteries A new breakthrough battery, one that has significantly higher energy, lasts longer, and is cheaper and safer, will likely be impossible without a new material discovery. Kristin Persson and other JCESR scientists at Lawrence Berkeley National Laboratory are taking some of the guesswork out of the discovery process with the Electrolyte Genome Project. Electrolyte Genome

  18. 100 supercomputers later, Los Alamos high-performance computing still

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    supports national security mission 100 supercomputers later Alumni Link: Opportunities, News and Resources for Former Employees 100 supercomputers later, Los Alamos high-performance computing still supports national security mission Los Alamos National Laboratory has deployed 100 supercomputers in the last 60 years January 1, 2015 1952 MANIAC-I supercomputer Contact Linda Anderman Email From the 1952

  19. Transforming Power Grid Operations via High Performance Computing

    SciTech Connect (OSTI)

    Huang, Zhenyu; Nieplocha, Jarek

    2008-07-31

    Past power grid blackout events revealed the inadequacy of grid operations in responding to adverse situations, partially due to low computational efficiency in grid operation functions. High performance computing (HPC) provides a promising solution to this problem. HPC applications in power grid computation are also becoming necessary as the computer industry shifts from the traditional single-processor environment to multi-processor computing platforms. HPC applications to power grid operations are multi-fold. HPC can improve today's grid operation functions, such as state estimation and contingency analysis, and reduce the solution time from minutes to seconds, comparable to SCADA measurement cycles. HPC also enables the integration of dynamic analysis into real-time grid operations. Dynamic state estimation, look-ahead dynamic simulation, and real-time dynamic contingency analysis can be implemented and would be three key dynamic functions in future control centers. HPC applications call for better decision support tools, which also need HPC support to handle large volumes of data and large numbers of cases. Given the complexity of the grid and the sheer number of possible configurations, HPC is considered an indispensable element of next-generation control centers.
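
Much of the contingency-analysis speedup comes from the fact that each outage case can be evaluated independently. A minimal sketch of that parallel structure, with an invented stand-in for the per-case power-flow solve (the function name and toy severity metric are illustrative only):

```python
# Hedged sketch: "N-1" contingency screening is naturally parallel because
# each outage case is independent. evaluate_contingency is a toy stand-in,
# not a real power-flow solver.
from concurrent.futures import ThreadPoolExecutor

def evaluate_contingency(case):
    """Stand-in for the power-flow solve of one branch-outage case."""
    branch_id, base_load = case
    severity = base_load / (branch_id + 1)  # toy severity metric
    return branch_id, severity

cases = [(b, 100.0) for b in range(8)]       # 8 hypothetical outage cases
with ThreadPoolExecutor(max_workers=4) as pool:
    results = dict(pool.map(evaluate_contingency, cases))

worst = max(results, key=results.get)        # outage with highest severity
```

On a real platform the workers would be MPI ranks or nodes rather than threads, but the fan-out/fan-in pattern is the same.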

  20. High Performance Computing with Harness over InfiniBand

    SciTech Connect (OSTI)

    Valentini, Alessandro; Di Biagio, Christian; Batino, Fabrizio; Pennella, Guido; Palma, Fabrizio; Engelmann, Christian

    2009-01-01

    Harness is an adaptable, plug-in-based middleware framework able to support distributed parallel computing. At present it is based on the Ethernet protocol, which cannot guarantee high throughput or real-time (deterministic) performance. In recent years, both research and industry have developed new network architectures (InfiniBand, Myrinet, iWARP, etc.) to overcome these limits. This paper concerns the integration of Harness with InfiniBand, focusing on two solutions: IP over InfiniBand (IPoIB) and the Socket Direct Protocol (SDP). They allow the Harness middleware to take advantage of the enhanced features provided by the InfiniBand Architecture.

  1. High performance computing and communications: FY 1995 implementation plan

    SciTech Connect (OSTI)

    1994-04-01

    The High Performance Computing and Communications (HPCC) Program was formally established following passage of the High Performance Computing Act of 1991 signed on December 9, 1991. Ten federal agencies in collaboration with scientists and managers from US industry, universities, and laboratories have developed the HPCC Program to meet the challenges of advancing computing and associated communications technologies and practices. This plan provides a detailed description of the agencies' HPCC implementation plans for FY 1994 and FY 1995. This Implementation Plan contains three additional sections. Section 3 provides an overview of the HPCC Program definition and organization. Section 4 contains a breakdown of the five major components of the HPCC Program, with an emphasis on the overall directions and milestones planned for each one. Section 5 provides a detailed look at HPCC Program activities within each agency. Although the Department of Education is an official HPCC agency, its current funding and reporting of crosscut activities goes through the Committee on Education and Health Resources, not the HPCC Program. For this reason the Implementation Plan covers nine HPCC agencies.

  2. High Performance Computing - Power Application Programming Interface Specification.

    SciTech Connect (OSTI)

    Laros, James H.,; Kelly, Suzanne M.; Pedretti, Kevin; Grant, Ryan; Olivier, Stephen Lecler; Levenhagen, Michael J.; DeBonis, David

    2014-08-01

    Measuring and controlling the power and energy consumption of high performance computing systems by various components in the software stack is an active research area [13, 3, 5, 10, 4, 21, 19, 16, 7, 17, 20, 18, 11, 1, 6, 14, 12]. Implementations in lower level software layers are beginning to emerge in some production systems, which is very welcome. To be most effective, a portable interface to measurement and control features would significantly facilitate participation by all levels of the software stack. We present a proposal for a standard power Application Programming Interface (API) that endeavors to cover the entire software space, from generic hardware interfaces to the input from the computer facility manager.
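
To make the idea of a portable measurement-and-control interface concrete, here is a hypothetical sketch. The class and method names are invented for illustration and are not taken from the actual Power API specification the report proposes.

```python
# Hypothetical per-component power handle, in the spirit of a portable
# measurement/control API. All names here are invented illustrations,
# NOT the Power API specification.
import time

class PowerInterface:
    """Toy handle for one measurable/controllable component."""

    def __init__(self, component):
        self.component = component
        self.cap_watts = None

    def read_watts(self):
        # Stand-in for reading a hardware energy/power counter.
        return 95.0

    def set_cap(self, watts):
        # Stand-in for a facility- or scheduler-imposed power cap.
        self.cap_watts = watts

node_cpu = PowerInterface("node0.cpu")
node_cpu.set_cap(120.0)                         # control path
sample = (time.time(), node_cpu.read_watts())   # measurement path
```

The point of such an interface is that every layer, from facility manager to runtime to application, programs against the same handle rather than vendor-specific counters.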

  3. In the OSTI Collections: High-Performance Computing | OSTI, US...

    Office of Scientific and Technical Information (OSTI)

    ... approach to exascale-computing resilience, but choosing one approach now would ... opportunities for low-power, high-resilience technology, aiming for an early ...

  4. High-performance Computing Applied to Semantic Databases

    SciTech Connect (OSTI)

    Goodman, Eric L.; Jimenez, Edward; Mizell, David W.; al-Saffar, Sinan; Adolf, Robert D.; Haglin, David J.

    2011-06-02

    To date, the application of high-performance computing resources to Semantic Web data has largely focused on commodity hardware and distributed memory platforms. In this paper we make the case that more specialized hardware can offer superior scaling and close to an order of magnitude improvement in performance. In particular we examine the Cray XMT. Its key characteristics, a large, global shared-memory, and processors with a memory-latency tolerant design, offer an environment conducive to programming for the Semantic Web and have engendered results that far surpass current state of the art. We examine three fundamental pieces requisite for a fully functioning semantic database: dictionary encoding, RDFS inference, and query processing. We show scaling up to 512 processors (the largest configuration we had available), and the ability to process 20 billion triples completely in-memory.
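
Of the three pieces the abstract names, dictionary encoding is the simplest to sketch: each distinct term string is mapped to an integer id, so triples become compact integer tuples. A minimal illustration (not the paper's Cray XMT implementation):

```python
# Dictionary encoding of RDF triples: map each distinct term string to a
# small integer id, yielding compact integer triples. Example terms are
# made up for illustration.
def build_dictionary(triples):
    ids, encoded = {}, []
    for triple in triples:
        encoded.append(tuple(ids.setdefault(term, len(ids)) for term in triple))
    return ids, encoded

triples = [
    ("ex:alice", "foaf:knows", "ex:bob"),
    ("ex:bob", "foaf:knows", "ex:alice"),
]
ids, encoded = build_dictionary(triples)
# encoded == [(0, 1, 2), (2, 1, 0)]
```

At the scale the paper targets (tens of billions of triples), building this table in parallel without contention is the hard part, which is where the XMT's shared memory helps.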

  5. High-performance computing applied to semantic databases.

    SciTech Connect (OSTI)

    al-Saffar, Sinan; Jimenez, Edward Steven, Jr.; Adolf, Robert; Haglin, David; Goodman, Eric L.; Mizell, David

    2010-12-01

    To date, the application of high-performance computing resources to Semantic Web data has largely focused on commodity hardware and distributed memory platforms. In this paper we make the case that more specialized hardware can offer superior scaling and close to an order of magnitude improvement in performance. In particular we examine the Cray XMT. Its key characteristics, a large, global shared-memory, and processors with a memory-latency tolerant design, offer an environment conducive to programming for the Semantic Web and have engendered results that far surpass current state of the art. We examine three fundamental pieces requisite for a fully functioning semantic database: dictionary encoding, RDFS inference, and query processing. We show scaling up to 512 processors (the largest configuration we had available), and the ability to process 20 billion triples completely in-memory.

  6. Multicore Challenges and Benefits for High Performance Scientific Computing

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Nielsen, Ida M.B.; Janssen, Curtis L.

    2008-01-01

    Until recently, performance gains in processors were achieved largely by improvements in clock speeds and instruction level parallelism. Thus, applications could obtain performance increases with relatively minor changes by upgrading to the latest generation of computing hardware. Currently, however, processor performance improvements are realized by using multicore technology and hardware support for multiple threads within each core, and taking full advantage of this technology to improve the performance of applications requires exposure of extreme levels of software parallelism. We will here discuss the architecture of parallel computers constructed from many multicore chips as well as techniques for managing the complexity of programming such computers, including the hybrid message-passing/multi-threading programming model. We will illustrate these ideas with a hybrid distributed memory matrix multiply and a quantum chemistry algorithm for energy computation using Møller–Plesset perturbation theory.
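
The hybrid model can be suggested in miniature: row blocks of the product would be distributed across message-passing ranks, while each rank multiplies its block with a thread pool. The sketch below shows only the threaded half, as a pure-Python stand-in rather than the paper's MPI+threads code:

```python
# Threaded row-wise matrix multiply: the intra-node half of a hybrid
# message-passing/multi-threading scheme. Pure-Python illustration only.
from concurrent.futures import ThreadPoolExecutor

def matmul_row(row, B):
    """One output row of A @ B, computed from one row of A."""
    n_cols = len(B[0])
    return [sum(row[k] * B[k][j] for k in range(len(B))) for j in range(n_cols)]

def threaded_matmul(A, B, workers=4):
    # Each row of the result is an independent task for the thread pool;
    # in the hybrid model, A's rows would first be partitioned across ranks.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(lambda r: matmul_row(r, B), A))

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
C = threaded_matmul(A, B)  # [[19, 22], [43, 50]]
```

(In CPython the GIL limits the speedup of this toy; real codes use compiled kernels or OpenMP threads inside each MPI rank.)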

  7. Toward a new metric for ranking high performance computing systems...

    Office of Scientific and Technical Information (OSTI)

    as a true measure of system performance for a growing collection of important science and engineering applications. In this paper we describe a new high performance conjugate...
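
The snippet refers to a conjugate-gradient-based benchmark (this appears to be HPCG). As a rough illustration of the kernel such a metric exercises, here is a plain, unpreconditioned conjugate gradient solve on a tiny dense symmetric positive-definite system; the benchmark itself uses a preconditioned sparse solve.

```python
# Plain conjugate gradient for a symmetric positive-definite system Ax = b.
# Illustrative only: the benchmark uses a preconditioned sparse variant.
def conjugate_gradient(A, b, tol=1e-10, max_iter=100):
    n = len(b)
    x = [0.0] * n
    r = b[:]                         # residual b - A x, with x = 0
    p = r[:]                         # initial search direction
    rs = sum(ri * ri for ri in r)
    for _ in range(max_iter):
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rs / sum(p[i] * Ap[i] for i in range(n))
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new < tol:
            break
        p = [r[i] + (rs_new / rs) * p[i] for i in range(n)]
        rs = rs_new
    return x

x = conjugate_gradient([[4.0, 1.0], [1.0, 3.0]], [1.0, 2.0])
```

The exact solution of this 2x2 system is (1/11, 7/11), and CG reaches it in two iterations; the sparse matrix-vector products and reductions inside this loop are what stress memory and network rather than peak floating-point rate.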

  8. Power/energy use cases for high performance computing.

    SciTech Connect (OSTI)

    Laros, James H.,; Kelly, Suzanne M; Hammond, Steven; Elmore, Ryan; Munch, Kristin

    2013-12-01

    Power and energy have been identified as a first-order challenge for future extreme-scale high performance computing (HPC) systems. In practice the breakthroughs will need to be provided by the hardware vendors, but making the best use of their solutions in an HPC environment will likely require periodic tuning by facility operators and software components. This document describes the actions and interactions needed to maximize power resources. It strives to cover the entire operational space that an HPC system occupies. The descriptions are presented as formal use cases, as documented in the Unified Modeling Language Specification [1]. The document is intended to provide a common understanding to the HPC community of the necessary management and control capabilities. Assuming a common understanding can be achieved, the next step will be to develop a set of Application Programming Interfaces (APIs) that hardware vendors and software developers could use to steer power consumption.

  9. High performance parallel computers for science: New developments at the Fermilab advanced computer program

    SciTech Connect (OSTI)

    Nash, T.; Areti, H.; Atac, R.; Biel, J.; Cook, A.; Deppe, J.; Edel, M.; Fischler, M.; Gaines, I.; Hance, R.

    1988-08-01

    Fermilab's Advanced Computer Program (ACP) has been developing highly cost-effective, yet practical, parallel computers for high energy physics since 1984. The ACP's latest developments are proceeding in two directions. A Second Generation ACP Multiprocessor System for experiments will include $3500 RISC processors, each with performance over 15 VAX MIPS. To support such high performance, the new system allows parallel I/O, parallel interprocess communication, and parallel host processes. The ACP Multi-Array Processor has been developed for theoretical physics. Each $4000 node is a FORTRAN- or C-programmable pipelined 20 MFlops (peak), 10 MByte single-board computer. These are plugged into a 16-port crossbar switch crate which handles both inter- and intra-crate communication. The crates are connected in a hypercube. Site-oriented applications like lattice gauge theory are supported by system software called CANOPY, which makes the hardware virtually transparent to users. A 256-node, 5 GFlop system is under construction. 10 refs., 7 figs.

  10. High Performance Computing at TJNAF| U.S. DOE Office of Science...

    Office of Science (SC) Website

    Applications of Nuclear Science Archives High Performance Computing at TJNAF Print Text ... collaboration with other institutions, computer scientists and physicists are exploiting ...

  11. Arthur B. (Barney) Maccabe Computer Science Department Center for High Performance Computing

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Linux never has been and never will be "Extreme" Arthur B. (Barney) Maccabe, Computer Science Department, Center for High Performance Computing, The University of New Mexico, Salishan, April 23, 2003. This talk was prepared on a Debian Linux box (http://www.debian.org) using OpenOffice (http://www.openoffice.org). Outline: ● My background: lightweight operating systems ● Linux and world domination ● Adapting to innovative technologies ●

  12. DOE Science Showcase - High-Performance Computing | OSTI, US Dept of Energy

    Office of Scientific and Technical Information (OSTI)

    Office of Scientific and Technical Information High-Performance Computing Supercomputers or massively parallel high-performance computers (HPCs) are machines that employ very large numbers of processors in parallel to address scientific and engineering challenges. HPCs carry out trillions or even quadrillions of calculations each second - current high-performance computers are powerful enough to simulate some of the most complex physical, biological, and chemical phenomena. High-performance

  13. John Shalf Gives Talk at San Francisco High Performance Computing Meetup

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    John Shalf Gives Talk at San Francisco High Performance Computing Meetup September 17, 2014 XBD200503 00083 John Shalf In his role as NERSC's chief technology officer, John Shalf gave a talk on "Converging Interconnect Requirements for HPC and Warehouse Scale Computing" at the San Francisco High Performance Computing Meetup. The Sept 17 meeting was held at GeekdomSF in downtown San Francisco. The group, which describes

  14. ALCF summer students gain experience with high-performance computing...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    of computing that my textbooks couldn't keep up with," said Brown, who is majoring in computer science and computer game design. "Getting exposed to many-core machines and...

  15. A secure communications infrastructure for high-performance distributed computing

    SciTech Connect (OSTI)

    Foster, I.; Koenig, G.; Tuecke, S.

    1997-08-01

    Applications that use high-speed networks to connect geographically distributed supercomputers, databases, and scientific instruments may operate over open networks and access valuable resources. Hence, they can require mechanisms for ensuring the integrity and confidentiality of communications and for authenticating both users and resources. Security solutions developed for traditional client-server applications do not provide direct support for the program structures, programming tools, and performance requirements encountered in these applications. The authors address these requirements via a security-enhanced version of the Nexus communication library, which they use to provide secure versions of parallel libraries and languages, including the Message Passing Interface. These tools permit a fine degree of control over what, where, and when security mechanisms are applied. In particular, a single application can mix secure and nonsecure communication, allowing the programmer to make fine-grained security/performance tradeoffs. The authors present performance results that quantify the performance of their infrastructure.

  16. Department of Energy: MICS (Mathematical Information, and Computational Sciences Division). High performance computing and communications program

    SciTech Connect (OSTI)

    1996-06-01

    This document is intended to serve two purposes. Its first purpose is that of a program status report of the considerable progress that the Department of Energy (DOE) has made since 1993, the time of the last such report (DOE/ER-0536, "The DOE Program in HPCC"), toward achieving the goals of the High Performance Computing and Communications (HPCC) Program. The second purpose is that of a summary report of the many research programs administered by the Mathematical, Information, and Computational Sciences (MICS) Division of the Office of Energy Research under the auspices of the HPCC Program and to provide, wherever relevant, easy access to pertinent information about MICS-Division activities via universal resource locators (URLs) on the World Wide Web (WWW). The information pointed to by the URLs is updated frequently, and the interested reader is urged to access the WWW for the latest information.

  17. Department of Energy Mathematical, Information, and Computational Sciences Division: High Performance Computing and Communications Program

    SciTech Connect (OSTI)

    1996-11-01

    This document is intended to serve two purposes. Its first purpose is that of a program status report of the considerable progress that the Department of Energy (DOE) has made since 1993, the time of the last such report (DOE/ER-0536, The DOE Program in HPCC), toward achieving the goals of the High Performance Computing and Communications (HPCC) Program. The second purpose is that of a summary report of the many research programs administered by the Mathematical, Information, and Computational Sciences (MICS) Division of the Office of Energy Research under the auspices of the HPCC Program and to provide, wherever relevant, easy access to pertinent information about MICS-Division activities via universal resource locators (URLs) on the World Wide Web (WWW).

  18. High Performance Computing Facility Operational Assessment, FY 2010 Oak Ridge Leadership Computing Facility

    SciTech Connect (OSTI)

    Bland, Arthur S Buddy; Hack, James J; Baker, Ann E; Barker, Ashley D; Boudwin, Kathlyn J.; Kendall, Ricky A; Messer, Bronson; Rogers, James H; Shipman, Galen M; White, Julia C

    2010-08-01

    Oak Ridge National Laboratory's (ORNL's) Cray XT5 supercomputer, Jaguar, kicked off the era of petascale scientific computing in 2008 with applications that sustained more than a thousand trillion floating point calculations per second - or 1 petaflop. Jaguar continues to grow even more powerful as it helps researchers broaden the boundaries of knowledge in virtually every domain of computational science, including weather and climate, nuclear energy, geosciences, combustion, bioenergy, fusion, and materials science. Their insights promise to broaden our knowledge in areas that are vitally important to the Department of Energy (DOE) and the nation as a whole, particularly energy assurance and climate change. The science of the 21st century, however, will demand further revolutions in computing, supercomputers capable of a million trillion calculations a second - 1 exaflop - and beyond. These systems will allow investigators to continue attacking global challenges through modeling and simulation and to unravel longstanding scientific questions. Creating such systems will also require new approaches to daunting challenges. High-performance systems of the future will need to be codesigned for scientific and engineering applications with best-in-class communications networks and data-management infrastructures and teams of skilled researchers able to take full advantage of these new resources. The Oak Ridge Leadership Computing Facility (OLCF) provides the nation's most powerful open resource for capability computing, with a sustainable path that will maintain and extend national leadership for DOE's Office of Science (SC). The OLCF has engaged a world-class team to support petascale science and to take a dramatic step forward, fielding new capabilities for high-end science. 
This report highlights the successful delivery and operation of a petascale system and shows how the OLCF fosters application development teams, developing cutting-edge tools and resources for next

  19. Towards an Abstraction-Friendly Programming Model for High Productivity and High Performance Computing

    SciTech Connect (OSTI)

    Liao, C; Quinlan, D; Panas, T

    2009-10-06

    General-purpose languages, such as C++, permit the construction of various high-level abstractions to hide redundant, low-level details and accelerate programming productivity. Example abstractions include functions, data structures, classes, templates, and so on. However, the use of abstractions significantly impedes static code analyses and optimizations, including parallelization, applied to the abstractions' complex implementations. As a result, there is a common perception that performance is inversely proportional to the level of abstraction. On the other hand, programming large-scale, possibly heterogeneous high-performance computing systems is notoriously difficult, and programmers are unlikely to abandon the help of high-level abstractions when solving real-world, complex problems. Therefore, the need for programming models balancing both programming productivity and execution performance has reached a new level of criticality. We are exploring a novel abstraction-friendly programming model in order to support high-productivity and high-performance computing. We believe that standard or domain-specific semantics associated with high-level abstractions can be exploited to aid compiler analyses and optimizations, thus helping to achieve high performance without losing high productivity. We encode representative abstractions and their useful semantics into an abstraction specification file. Meanwhile, an accessible, source-to-source compiler infrastructure (the ROSE compiler) is used to facilitate recognizing high-level abstractions and utilizing their semantics for more optimization opportunities. Our initial work has shown that recognizing abstractions and knowing their semantics within a compiler can dramatically extend the applicability of existing optimizations, including automatic parallelization. Moreover, a new set of optimizations has become possible within an abstraction-friendly and semantics-aware programming model.
In the future, we will

  20. DOE Science Showcase - High-Performance Computing | OSTI, US...

    Office of Scientific and Technical Information (OSTI)

    DOE Computing, Energy.gov DOE Office of Science Advanced Scientific Computing Research ... SciTech Connect National Library of EnergyBeta Science.gov Ciencia.Science.gov ...

  1. Webinar: High Performance Computing For Manufacturing Spring Solicitation, March 24, 2016

    Broader source: Energy.gov [DOE]

    The Energy Department's Lawrence Livermore National Laboratory will be hosting an informational webinar on the High Performance Computing for Manufacturing (HPC4Mfg) spring solicitation on March...

  2. Webinar: High Performance Computing For Manufacturing Spring Solicitation, April 5, 2016

    Broader source: Energy.gov [DOE]

    The Energy Department's Lawrence Livermore National Laboratory will be hosting an informational webinar on the High Performance Computing for Manufacturing (HPC4Mfg) spring solicitation on April...

  3. Energy Efficiency Opportunities in Federal High Performance Computing...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Case Study: Opportunities to Improve Energy Efficiency in Three Federal Data Centers Case Study: Innovative Energy Efficiency Approaches in NOAA's Environmental Security Computing ...

  4. Introduction to High Performance Computers Richard Gerber NERSC User Services

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    What are the main parts of a computer? Merit Badge Requirements: "4. Explain the following to your counselor: a. The five major parts of a computer." The Boy Scouts of America offer a Computers merit badge. What are the "5 major parts"? Sources disagree: eHow.com, Answers.com, Fluther.com, Yahoo!, and Wikipedia variously list the CPU, motherboard, RAM, monitor, power supply, hard drive, printer, storage, removable media, video card, keyboard/mouse, and secondary storage.

  5. High-performance, distributed computing software libraries and services

    Energy Science and Technology Software Center (OSTI)

    2002-01-24

    The Globus toolkit provides basic Grid software infrastructure (i.e. middleware), to facilitate the development of applications which securely integrate geographically separated resources, including computers, storage systems, instruments, immersive environments, etc.

  6. Simulation and High-Performance Computing | Department of Energy

    Office of Environmental Management (EM)

    and Former Under Secretary for Science What are the key facts? China's Tianhe-1A machine is now the world's most powerful computer, 40% faster than the fastest ...

  7. High Performance Computing Modeling Advances Accelerator Science for High-Energy Physics

    SciTech Connect (OSTI)

    Amundson, James; Macridin, Alexandru; Spentzouris, Panagiotis

    2014-07-28

    The development and optimization of particle accelerators are essential for advancing our understanding of the properties of matter, energy, space, and time. Particle accelerators are complex devices whose behavior involves many physical effects on multiple scales. Therefore, advanced computational tools utilizing high-performance computing are essential for accurately modeling them. In the past decade, the US Department of Energy's SciDAC program has produced accelerator-modeling tools that have been employed to tackle some of the most difficult accelerator science problems. The authors discuss the Synergia framework and its applications to high-intensity particle accelerator physics. Synergia is an accelerator simulation package capable of handling the entire spectrum of beam dynamics simulations. The authors present Synergia's design principles and its performance on HPC platforms.

  8. High-Performance Computing Modeling Advances Accelerator Science for High-Energy Physics

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Amundson, James; Macridin, Alexandru; Spentzouris, Panagiotis

    2014-11-01

    The development and optimization of particle accelerators are essential for advancing our understanding of the properties of matter, energy, space and time. Particle accelerators are complex devices whose behavior involves many physical effects on multiple scales. Therefore, advanced computational tools utilizing high-performance computing (HPC) are essential for accurately modeling them. In the past decade, the DOE SciDAC program has produced such accelerator-modeling tools, which have been employed to tackle some of the most difficult accelerator science problems. In this article we discuss the Synergia beam-dynamics framework and its applications to high-intensity particle accelerator physics. Synergia is an accelerator simulation package capable of handling the entire spectrum of beam dynamics simulations. We present the design principles, key physical and numerical models in Synergia and its performance on HPC platforms. Finally, we present the results of Synergia applications for the Fermilab proton source upgrade, known as the Proton Improvement Plan (PIP).

  9. High Performance Computing Modeling Advances Accelerator Science for High-Energy Physics

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Amundson, James; Macridin, Alexandru; Spentzouris, Panagiotis

    2014-07-28

    The development and optimization of particle accelerators are essential for advancing our understanding of the properties of matter, energy, space, and time. Particle accelerators are complex devices whose behavior involves many physical effects on multiple scales. Therefore, advanced computational tools utilizing high-performance computing are essential for accurately modeling them. In the past decade, the US Department of Energy's SciDAC program has produced accelerator-modeling tools that have been employed to tackle some of the most difficult accelerator science problems. The authors discuss the Synergia framework and its applications to high-intensity particle accelerator physics. Synergia is an accelerator simulation package capable of handling the entire spectrum of beam dynamics simulations. The authors present Synergia's design principles and its performance on HPC platforms.

  10. High-Performance Computing Modeling Advances Accelerator Science for High-Energy Physics

    SciTech Connect (OSTI)

    Amundson, James; Macridin, Alexandru; Spentzouris, Panagiotis

    2014-11-01

    The development and optimization of particle accelerators are essential for advancing our understanding of the properties of matter, energy, space and time. Particle accelerators are complex devices whose behavior involves many physical effects on multiple scales. Therefore, advanced computational tools utilizing high-performance computing (HPC) are essential for accurately modeling them. In the past decade, the DOE SciDAC program has produced such accelerator-modeling tools, which have been employed to tackle some of the most difficult accelerator science problems. In this article we discuss the Synergia beam-dynamics framework and its applications to high-intensity particle accelerator physics. Synergia is an accelerator simulation package capable of handling the entire spectrum of beam dynamics simulations. We present the design principles, key physical and numerical models in Synergia and its performance on HPC platforms. Finally, we present the results of Synergia applications for the Fermilab proton source upgrade, known as the Proton Improvement Plan (PIP).

  11. High Performance Parallel Computing of Flows in Complex Geometries: II.

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Applications | Argonne Leadership Computing Facility II. Applications Authors: Gourdain, N., Gicquel, L., Staffelbach, G., Vermorel, O., Duchaine, F., Boussuge, J-F, Poinsot, T. Present regulations in terms of pollutant emissions, noise and economical constraints, require new approaches and designs in the fields of energy supply and transportation. It is now well established that the next breakthrough will come from a better understanding of unsteady flow effects and by considering the

  12. Scalable File Systems for High Performance Computing Final Report

    SciTech Connect (OSTI)

    Brandt, S A

    2007-10-03

    Simulations of mode I interlaminar fracture toughness tests of a carbon-reinforced composite material (BMS 8-212) were conducted with LSDYNA. The fracture toughness tests were performed by U.C. Berkeley. The simulations were performed to investigate the validity and practicality of employing decohesive elements to represent interlaminar bond failures that are prevalent in carbon-fiber composite structure penetration events. The simulations employed a decohesive element formulation that was verified on a simple two-element model before being employed to perform the full model simulations. Care was required during the simulations to ensure that the explicit time integration of LSDYNA duplicated the near-steady-state testing conditions. In general, this study validated the use of decohesive elements to represent the interlaminar bond failures seen in carbon-fiber composite structures, but the practicality of employing the elements to represent those failures during penetration events was not established.

  13. High performance computing and communications grand challenges program

    SciTech Connect (OSTI)

    Solomon, J.E.; Barr, A.; Chandy, K.M.; Goddard, W.A., III; Kesselman, C.

    1994-10-01

    The so-called protein folding problem has numerous aspects; principally, it concerns the de novo prediction of three-dimensional (3D) structure from the protein's primary amino acid sequence, and the kinetics of the protein folding process. Our current project focuses on the 3D structure prediction problem, which has proved to be an elusive goal of molecular biology and biochemistry. The number of local energy minima is exponential in the number of amino acids in the protein. All current methods of 3D structure prediction attempt to alleviate this problem by imposing various constraints that effectively limit the volume of conformational space which must be searched. Our Grand Challenge project consists of two elements: (1) a hierarchical methodology for 3D protein structure prediction; and (2) development of a parallel computing environment, the Protein Folding Workbench, for carrying out a variety of protein structure prediction/modeling computations. During the first three years of this project, we are focusing on the use of two proteins selected from the Brookhaven Protein Data Bank (PDB) of known structure to provide validation of our prediction algorithms and their software implementation, both serial and parallel. Both proteins, protein L from Peptostreptococcus magnus and streptococcal protein G, are known to bind to IgG, and both have an α + β sandwich conformation. Although both proteins bind to IgG, they do so at different sites on the immunoglobulin, and it is of considerable biological interest to understand structurally why this is so. 12 refs., 1 fig.

  14. High Performance Computing Facility Operational Assessment, FY 2011 Oak Ridge Leadership Computing Facility

    SciTech Connect (OSTI)

    Baker, Ann E; Bland, Arthur S Buddy; Hack, James J; Barker, Ashley D; Boudwin, Kathlyn J.; Kendall, Ricky A; Messer, Bronson; Rogers, James H; Shipman, Galen M; Wells, Jack C; White, Julia C

    2011-08-01

    Oak Ridge National Laboratory's Leadership Computing Facility (OLCF) continues to deliver the most powerful resources in the U.S. for open science. At 2.33 petaflops peak performance, the Cray XT Jaguar delivered more than 1.5 billion core hours in calendar year (CY) 2010 to researchers around the world for computational simulations relevant to national and energy security; advancing the frontiers of knowledge in physical sciences and areas of biological, medical, environmental, and computer sciences; and providing world-class research facilities for the nation's science enterprise. Scientific achievements by OLCF users range from collaboration with university experimentalists to produce a working supercapacitor that uses atom-thick sheets of carbon materials to finely determining the resolution requirements for simulations of coal gasifiers and their components, thus laying the foundation for development of commercial-scale gasifiers. OLCF users are pushing the boundaries with software applications sustaining more than one petaflop of performance in the quest to illuminate the fundamental nature of electronic devices. Other teams of researchers are working to resolve predictive capabilities of climate models, to refine and validate genome sequencing, and to explore the most fundamental materials in nature - quarks and gluons - and their unique properties. Details of these scientific endeavors - not possible without access to leadership-class computing resources - are detailed in Section 4 of this report and in the INCITE in Review. Effective operations of the OLCF play a key role in the scientific missions and accomplishments of its users. This Operational Assessment Report (OAR) will delineate the policies, procedures, and innovations implemented by the OLCF to continue delivering a petaflop-scale resource for cutting-edge research. The 2010 operational assessment of the OLCF yielded recommendations that have been addressed (Reference Section 1) and where

  15. Reliable High Performance Peta- and Exa-Scale Computing

    SciTech Connect (OSTI)

    Bronevetsky, G

    2012-04-02

    As supercomputers become larger and more powerful, they are growing increasingly complex. This is reflected both in the exponentially increasing number of components in HPC systems (LLNL is currently installing the 1.6 million core Sequoia system) and in the wide variety of software and hardware components that a typical system includes. At this scale it becomes infeasible to make each component sufficiently reliable to prevent regular faults somewhere in the system, or to account for all possible cross-component interactions. The resulting faults and instability cause HPC applications to crash, perform sub-optimally, or even produce erroneous results. As supercomputers continue to approach exascale performance and full-system reliability becomes prohibitively expensive, we will require novel techniques to bridge the gap between the lower reliability provided by hardware systems and users' unchanging need for consistent performance and reliable results. Previous research on HPC system reliability has developed various techniques for tolerating and detecting faults. However, these techniques have seen very limited real applicability because of our poor understanding of how real systems are affected by complex faults such as soft-fault-induced bit flips or performance degradations. Prior work has generally focused on analyzing the behavior of entire software/hardware systems, both during normal operation and in the face of faults. Because such behaviors are extremely complex, these studies have only produced coarse behavioral models of limited sets of software/hardware stacks. Since this provides little insight into the many different system stacks and applications used in practice, this work has had little real-world impact. My project addresses this problem by developing a modular methodology to analyze the behavior of applications and systems during both normal and faulty operation.

  16. Scientific Grand Challenges: Forefront Questions in Nuclear Science and the Role of High Performance Computing

    SciTech Connect (OSTI)

    Khaleel, Mohammad A.

    2009-10-01

    This report is an account of the deliberations and conclusions of the workshop on "Forefront Questions in Nuclear Science and the Role of High Performance Computing" held January 26-28, 2009, co-sponsored by the U.S. Department of Energy (DOE) Office of Nuclear Physics (ONP) and the DOE Office of Advanced Scientific Computing Research (ASCR). Representatives from the national and international nuclear physics communities, as well as from the high performance computing community, participated. The purpose of this workshop was to 1) identify forefront scientific challenges in nuclear physics and then determine which, if any, of these could be aided by high performance computing at the extreme scale; 2) establish how and why new high performance computing capabilities could address issues at the frontiers of nuclear science; 3) provide nuclear physicists the opportunity to influence the development of high performance computing; and 4) provide the nuclear physics community with plans for development of future high performance computing capability by DOE ASCR.

  17. A High Performance Computing Platform for Performing High-Volume Studies With Windows-based Power Grid Tools

    SciTech Connect (OSTI)

    Chen, Yousu; Huang, Zhenyu

    2014-08-31

    Serial Windows-based programs are widely used in power utilities. For applications that require high-volume simulations, the single-CPU runtime can be on the order of days or weeks. The lengthy runtime, along with the availability of low-cost hardware, is leading utilities to seriously consider High Performance Computing (HPC) techniques. However, the vast majority of HPC computers are still Linux-based, and many HPC applications have been custom developed external to the core simulation engine without consideration for ease of use. This has created a technical gap for applying HPC-based tools to today's power grid studies. To fill this gap and accelerate the acceptance and adoption of HPC for power grid applications, this paper presents a prototype of a generic HPC platform for running Windows-based power grid programs in a Linux-based HPC environment. The preliminary results show that the runtime can be reduced from weeks to hours, improving work efficiency.

  18. Energy Department Announces Ten New Projects to Apply High-Performance Computing to Manufacturing Challenges

    Broader source: Energy.gov [DOE]

    The Energy Department today announced $3 million for ten new projects that will enable private-sector companies to use high-performance computing resources at the department's national laboratories to tackle major manufacturing challenges.

  19. Energy Department Announces $3 Million for Industry Access to High Performance Computing

    Broader source: Energy.gov [DOE]

    The Energy Department today announced up to $3 million in available funding for manufacturers to use high-performance computing resources at the Department's national laboratories to tackle major manufacturing challenges.

  20. High Performance Computing for Sequence Analysis (2010 JGI/ANL HPC Workshop)

    ScienceCinema (OSTI)

    Oehmen, Chris [PNNL]

    2011-06-08

    Chris Oehmen of the Pacific Northwest National Laboratory gives a presentation on "High Performance Computing for Sequence Analysis" at the JGI/Argonne HPC Workshop on January 25, 2010.

  1. High Performance Computing for Sequence Analysis (2010 JGI/ANL HPC Workshop)

    SciTech Connect (OSTI)

    Oehmen, Chris [PNNL]

    2010-01-25

    Chris Oehmen of the Pacific Northwest National Laboratory gives a presentation on "High Performance Computing for Sequence Analysis" at the JGI/Argonne HPC Workshop on January 25, 2010.

  2. Workshop on programming languages for high performance computing (HPCWPL): final report.

    SciTech Connect (OSTI)

    Murphy, Richard C.

    2007-05-01

    This report summarizes the deliberations and conclusions of the Workshop on Programming Languages for High Performance Computing (HPCWPL) held at the Sandia CSRI facility in Albuquerque, NM on December 12-13, 2006.

  3. Vehicle Technologies Office Merit Review 2013: Accelerating Predictive Simulation of IC Engines with High Performance Computing

    Office of Energy Efficiency and Renewable Energy (EERE)

    Presentation given by Oak Ridge National Laboratory at the 2013 DOE Hydrogen and Fuel Cells Program and Vehicle Technologies Program Annual Merit Review and Peer Evaluation Meeting about simulating internal combustion engines using high performance computing.

  4. An evaluation of Java's I/O capabilities for high-performance computing.

    SciTech Connect (OSTI)

    Dickens, P. M.; Thakur, R.

    2000-11-10

    Java is quickly becoming the preferred language for writing distributed applications because of its inherent support for programming on distributed platforms. In particular, Java provides compile-time and run-time security, automatic garbage collection, inherent support for multithreading, support for persistent objects and object migration, and portability. Given these significant advantages of Java, there is a growing interest in using Java for high-performance computing applications. To be successful in the high-performance computing domain, however, Java must have the capability to efficiently handle the significant I/O requirements commonly found in high-performance computing applications. While there has been significant research in high-performance I/O using languages such as C, C++, and Fortran, there has been relatively little research into the I/O capabilities of Java. In this paper, we evaluate the I/O capabilities of Java for high-performance computing. We examine several approaches that attempt to provide high-performance I/O, many of which are not obvious at first glance, and investigate their performance in both parallel and multithreaded environments. We also provide suggestions for expanding the I/O capabilities of Java to better support the needs of high-performance computing applications.

  5. NREL Selects Partners for New High Performance Computer Data Center - News

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Releases | NREL Selects Partners for New High Performance Computer Data Center NREL to work with HP and Intel to create one of the world's most energy efficient data centers. September 5, 2012 The U.S. Department of Energy's National Renewable Energy Laboratory (NREL) has selected HP and Intel to provide a new energy-efficient high performance computer (HPC) system dedicated to energy systems integration, renewable energy research, and energy efficiency technologies. The new center will

  6. Webinar "Applying High Performance Computing to Engine Design Using

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Supercomputers" | Argonne National Laboratory Webinar "Applying High Performance Computing to Engine Design Using Supercomputers" Share Description Video from the February 25, 2016 Convergent Science/Argonne National Laboratory webinar "Applying High Performance Computing to Engine Design using Supercomputers," featuring Janardhan Kodavasal of Argonne National Laboratory Speakers Janardhan Kodavasal, Argonne National Laboratory Duration 52:26 Topic Energy Energy

  7. High-Performance Computing at Los Alamos announces milestone for key/value

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    middleware High-Performance Computing at Los Alamos announces milestone for key/value middleware. Billion inserts-per-second data milestone reached for supercomputing tool. May 26, 2014. Contact Nancy Ambrosiano Communications Office (505) 667-0471 Email "This milestone was achieved by a

  8. DOE High Performance Computing for Manufacturing (HPC4Mfg) Program Seeks To

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Fund New Proposals To Jumpstart Energy Technologies | Department of Energy High Performance Computing for Manufacturing (HPC4Mfg) Program Seeks To Fund New Proposals To Jumpstart Energy Technologies DOE High Performance Computing for Manufacturing (HPC4Mfg) Program Seeks To Fund New Proposals To Jumpstart Energy Technologies March 18, 2016 - 3:31pm Addthis News release from Lawrence Livermore National Laboratory, March 17 2016 LIVERMORE, Calif - A new U.S. Department of Energy (DOE) program

  9. Process for selecting NEAMS applications for access to Idaho National Laboratory high performance computing resources

    SciTech Connect (OSTI)

    Michael Pernice

    2010-09-01

    INL has agreed to provide participants in the Nuclear Energy Advanced Modeling and Simulation (NEAMS) program with access to its high performance computing (HPC) resources under sponsorship of the Enabling Computational Technologies (ECT) program element. This report documents the process used to select applications and the software stack in place at INL.

  10. An Approach to Integrate a Space-Time GIS Data Model with High Performance Computers

    SciTech Connect (OSTI)

    Wang, Dali; Zhao, Ziliang; Shaw, Shih-Lung

    2011-01-01

    In this paper, we describe an approach to integrating a Space-Time GIS data model on a high performance computing platform. The Space-Time GIS data model was developed in a desktop computing environment. We use the Space-Time GIS data model to generate a GIS module, which organizes a series of remote sensing data. We are in the process of porting the GIS module into an HPC environment, in which the GIS modules handle large datasets directly via a parallel file system. Although this is an ongoing project, the authors hope this effort can inspire further discussions on the integration of GIS on high performance computing platforms.

  11. High Performance Computing Facility Operational Assessment, CY 2011 Oak Ridge Leadership Computing Facility

    SciTech Connect (OSTI)

    Baker, Ann E; Barker, Ashley D; Bland, Arthur S Buddy; Boudwin, Kathlyn J.; Hack, James J; Kendall, Ricky A; Messer, Bronson; Rogers, James H; Shipman, Galen M; Wells, Jack C; White, Julia C; Hudson, Douglas L

    2012-02-01

    Oak Ridge National Laboratory's Leadership Computing Facility (OLCF) continues to deliver the most powerful resources in the U.S. for open science. At 2.33 petaflops peak performance, the Cray XT Jaguar delivered more than 1.4 billion core hours in calendar year (CY) 2011 to researchers around the world for computational simulations relevant to national and energy security; advancing the frontiers of knowledge in physical sciences and areas of biological, medical, environmental, and computer sciences; and providing world-class research facilities for the nation's science enterprise. Users reported more than 670 publications this year arising from their use of OLCF resources. Of these, we report in this review the 300 that are consistent with the guidance provided. Scientific achievements by OLCF users span all scales, from atomic to molecular to large-scale structures. At the atomic scale, researchers discovered that the anomalously long half-life of Carbon-14 can be explained by calculating, for the first time, the very complex three-body interactions between all the neutrons and protons in the nucleus. At the molecular scale, researchers combined experimental results from LBL's light source and simulations on Jaguar to discover how DNA replication continues past a damaged site so a mutation can be repaired later. Other researchers combined experimental results from ORNL's Spallation Neutron Source and simulations on Jaguar to reveal the molecular structure of ligno-cellulosic material used in bioethanol production. This year, Jaguar has been used to perform billion-cell CFD calculations to develop shock wave compression turbomachinery as a means to meet DOE goals for reducing carbon sequestration costs. General Electric used Jaguar to calculate the unsteady flow through turbomachinery to learn what efficiencies the traditional steady-flow assumption hides from designers. Even a 1% improvement in turbine design can save the nation billions of gallons of

  12. Chapter 9: Enabling Capabilities for Science and Energy | High-Performance Computing Capabilities and Allocations Supplemental Information

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Capabilities and Allocations | User Facility Statistics | Examples and Case Studies. Quadrennial Technology Review 2015, Chapter 9: Enabling Capabilities for Science and Energy, High Performance Computing Capabilities and Resource Allocations. The Department of Energy (DOE) laboratories integrate high performance computing (HPC) capabilities into their energy, science, and national security missions.

  13. High performance computing in chemistry and massively parallel computers: A simple transition?

    SciTech Connect (OSTI)

    Kendall, R.A.

    1993-03-01

    A review of the various problems facing any software developer targeting massively parallel processing (MPP) systems is presented. Issues specific to computational chemistry application software will be also outlined. Computational chemistry software ported to and designed for the Intel Touchstone Delta Supercomputer will be discussed. Recommendations for future directions will also be made.

  14. Failure detection in high-performance clusters and computers using chaotic map computations

    SciTech Connect (OSTI)

    Rao, Nageswara S.

    2015-09-01

    A programmable medium includes a processing unit capable of independent operation in a machine that is capable of executing 10^18 floating point operations per second. The processing unit is in communication with a memory element and an interconnect that couples computing nodes. The programmable medium includes a logical unit configured to execute arithmetic, comparative, and/or logical functions. The processing unit is configured to detect computing component failures, memory element failures, and/or interconnect failures by executing programming threads that generate one or more chaotic map trajectories. The central processing unit or graphical processing unit is configured to detect a computing component failure, memory element failure, and/or an interconnect failure through an automated comparison of signal trajectories generated by the chaotic maps.
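    The trajectory-comparison idea above can be sketched in a few lines: chaotic maps amplify tiny state perturbations exponentially, so two components running an identical computation from the same seed should agree exactly, and any divergence flags a fault. The logistic map, the tolerance, and the in-process "nodes" below are illustrative assumptions, not the patented implementation.

```python
def logistic_trajectory(x0, r=3.99, steps=50):
    """Iterate the logistic map x -> r*x*(1-x); chaos amplifies tiny errors."""
    x = x0
    traj = []
    for _ in range(steps):
        x = r * x * (1.0 - x)
        traj.append(x)
    return traj

def trajectories_diverge(t1, t2, tol=1e-9):
    """Healthy components running identical code produce identical
    trajectories; an early perturbation (e.g. a bit flip) grows
    exponentially and soon exceeds any small tolerance."""
    return any(abs(a - b) > tol for a, b in zip(t1, t2))

# Two healthy "nodes" computing from the same seed agree exactly.
ref = logistic_trajectory(0.123456789)
ok = logistic_trajectory(0.123456789)
# A faulty node whose state picked up an ulp-scale error diverges quickly.
faulty = logistic_trajectory(0.123456789 + 1e-15)

print(trajectories_diverge(ref, ok))      # False
print(trajectories_diverge(ref, faulty))  # True
```

    In a real deployment the trajectories would be computed on separate CPUs, memory banks, or interconnect paths and compared centrally; the exponential error growth is what lets a short trajectory expose a single transient fault.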

  15. Investigating methods of supporting dynamically linked executables on high performance computing platforms.

    SciTech Connect (OSTI)

    Kelly, Suzanne Marie; Laros, James H., III; Pedretti, Kevin Thomas Tauke; Levenhagen, Michael J.

    2009-09-01

    Shared libraries have become ubiquitous and are used to achieve great resource efficiencies on many platforms. The same properties that enable efficiencies on time-shared computers and convenience on small clusters prove to be great obstacles to scalability on large clusters and High Performance Computing platforms. In addition, lightweight operating systems such as Catamount have historically not supported the use of shared libraries specifically because they hinder scalability. In this report we outline the methods of supporting shared libraries on High Performance Computing platforms using lightweight kernels that we investigated. The considerations necessary to evaluate utility in this area are many and sometimes conflicting. While our initial path forward has been determined based on this evaluation, we consider this effort ongoing and remain prepared to re-evaluate any technology that might provide a scalable solution. This report is an evaluation of a range of possible methods of supporting dynamically linked executables on capability-class High Performance Computing platforms. Efforts are ongoing and extensive testing at scale is necessary to evaluate performance. While performance is a critical driving factor, supporting whatever method is used in a production environment is an equally important and challenging task.

  16. HPC4Mfg: Boosting American Competitiveness in Clean Energy Manufacturing through High Performance Computing

    Broader source: Energy.gov [DOE]

    Higher efficiency jet engines to save fuel; stronger fiberglass made with less energy for wind turbines and lightweight vehicles; next generation semiconductor devices for more efficient data centers: these are just a few of the manufacturing challenges that the Energy Department's ten new High Performance Computing for Manufacturing (HPC4Mfg) projects will tackle over the next year.

  17. Energy Department's High Performance Computing for Manufacturing Program Seeks to Fund New Industry Proposals

    Broader source: Energy.gov [DOE]

    The U.S. Department of Energy (DOE) is seeking concept proposals from qualified U.S. manufacturers to participate in short-term, collaborative projects. Selectees will be given access to High Performance Computing facilities and will work with experienced DOE National Laboratories staff in addressing challenges in U.S. manufacturing.

  18. High performance computing and communications: Advancing the frontiers of information technology

    SciTech Connect (OSTI)

    1997-12-31

    This report, which supplements the President's Fiscal Year 1997 Budget, describes the interagency High Performance Computing and Communications (HPCC) Program. The HPCC Program will celebrate its fifth anniversary in October 1996 with an impressive array of accomplishments to its credit. Over its five-year history, the HPCC Program has focused on developing high performance computing and communications technologies that can be applied to computation-intensive applications. Major highlights for FY 1996: (1) High performance computing systems enable practical solutions to complex problems with accuracies not possible five years ago; (2) HPCC-funded research in very large scale networking techniques has been instrumental in the evolution of the Internet, which continues exponential growth in size, speed, and availability of information; (3) The combination of hardware capability measured in gigaflop/s, networking technology measured in gigabit/s, and new computational science techniques for modeling phenomena has demonstrated that very large scale accurate scientific calculations can be executed across heterogeneous parallel processing systems located thousands of miles apart; (4) Federal investments in HPCC software R and D support researchers who pioneered the development of parallel languages and compilers, high performance mathematical, engineering, and scientific libraries, and software tools, technologies that allow scientists to use powerful parallel systems to focus on Federal agency mission applications; and (5) HPCC support for virtual environments has enabled the development of immersive technologies, where researchers can explore and manipulate multi-dimensional scientific and engineering problems. Educational programs fostered by the HPCC Program have brought into classrooms new science and engineering curricula designed to teach computational science. This document contains a small sample of the significant HPCC Program accomplishments in FY 1996.

  19. An Overview of High Performance Computing and Challenges for the Future

    ScienceCinema (OSTI)

    Google Tech Talks

    2009-09-01

    In this talk we examine how high performance computing has changed over the last 10 years and look toward the future in terms of trends. These changes have had, and will continue to have, a major impact on our software. A new generation of software libraries and algorithms is needed for the effective and reliable use of (wide area) dynamic, distributed and parallel environments. Some of the software and algorithm challenges have already been encountered, such as management of communication and memory hierarchies through a combination of compile-time and run-time techniques, but the increased scale of computation, depth of memory hierarchies, range of latencies, and increased run-time environment variability will make these problems much harder. We will focus on the redesign of software to fit multicore architectures. Speaker: Jack Dongarra, University of Tennessee, Oak Ridge National Laboratory, University of Manchester. Jack Dongarra received a Bachelor of Science in Mathematics from Chicago State University in 1972 and a Master of Science in Computer Science from the Illinois Institute of Technology in 1973. He received his Ph.D. in Applied Mathematics from the University of New Mexico in 1980. He worked at the Argonne National Laboratory until 1989, becoming a senior scientist. He now holds an appointment as University Distinguished Professor of Computer Science in the Electrical Engineering and Computer Science Department at the University of Tennessee, has the position of Distinguished Research Staff member in the Computer Science and Mathematics Division at Oak Ridge National Laboratory (ORNL), is a Turing Fellow in the Computer Science and Mathematics Schools at the University of Manchester, and is an Adjunct Professor in the Computer Science Department at Rice University. He specializes in numerical algorithms in linear algebra, parallel computing, the use of advanced-computer architectures, programming methodology, and tools for parallel computers. His research

  20. Cloud object store for archive storage of high performance computing data using decoupling middleware

    DOE Patents [OSTI]

    Bent, John M.; Faibish, Sorin; Grider, Gary

    2015-06-30

    Cloud object storage is enabled for archived data, such as checkpoints and results, of high performance computing applications using a middleware process. A plurality of archived files, such as checkpoint files and results, generated by a plurality of processes in a parallel computing system are stored by obtaining the plurality of archived files from the parallel computing system; converting the plurality of archived files to objects using a log structured file system middleware process; and providing the objects for storage in a cloud object storage system. The plurality of processes may run, for example, on a plurality of compute nodes. The log structured file system middleware process may be embodied, for example, as a Parallel Log-Structured File System (PLFS). The log structured file system middleware process optionally executes on a burst buffer node.
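    The conversion step described above, files in, objects out, can be sketched compactly. The real system uses PLFS middleware and an actual cloud object store; in this illustrative sketch a plain dict stands in for the object store, the "files" are in-memory byte strings, and the key-flattening and checksum scheme are assumptions of the example, not the patent.

```python
import hashlib

def files_to_objects(files):
    """Convert a mapping of {path: bytes} into object-store records.
    Each object gets a flat key derived from the file path plus basic
    metadata, mimicking how archive middleware repackages checkpoint
    files for a flat object namespace."""
    objects = {}
    for path, data in files.items():
        key = path.strip("/").replace("/", "%2F")  # flatten the namespace
        objects[key] = {
            "data": data,
            "length": len(data),
            "md5": hashlib.md5(data).hexdigest(),  # integrity check
        }
    return objects

# Checkpoint files written by two ranks of a parallel job (hypothetical paths).
checkpoints = {
    "/scratch/job42/ckpt.0": b"rank0-state",
    "/scratch/job42/ckpt.1": b"rank1-state",
}
store = files_to_objects(checkpoints)
print(sorted(store))  # ['scratch%2Fjob42%2Fckpt.0', 'scratch%2Fjob42%2Fckpt.1']
print(store["scratch%2Fjob42%2Fckpt.0"]["length"])  # 11
```

    In the patented design this translation runs inside the middleware (optionally on a burst buffer node), so compute nodes keep writing ordinary files while the archive tier sees only objects.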

  1. Cloud object store for checkpoints of high performance computing applications using decoupling middleware

    DOE Patents [OSTI]

    Bent, John M.; Faibish, Sorin; Grider, Gary

    2016-04-19

    Cloud object storage is enabled for checkpoints of high performance computing applications using a middleware process. A plurality of files, such as checkpoint files, generated by a plurality of processes in a parallel computing system are stored by obtaining said plurality of files from said parallel computing system; converting said plurality of files to objects using a log structured file system middleware process; and providing said objects for storage in a cloud object storage system. The plurality of processes may run, for example, on a plurality of compute nodes. The log structured file system middleware process may be embodied, for example, as a Parallel Log-Structured File System (PLFS). The log structured file system middleware process optionally executes on a burst buffer node.

  2. Integrated State Estimation and Contingency Analysis Software Implementation using High Performance Computing Techniques

    SciTech Connect (OSTI)

    Chen, Yousu; Glaesemann, Kurt R.; Rice, Mark J.; Huang, Zhenyu

    2015-12-31

    Power system simulation tools were traditionally developed in sequential mode, with codes optimized for single-core computing only. However, the increasing complexity of power grid models requires more intensive computation, and traditional simulation tools will soon be unable to meet grid operation requirements. Power system simulation tools therefore need to evolve accordingly to provide faster and better results for grid operations. This paper presents an integrated state estimation and contingency analysis software implementation using high performance computing techniques. The software is able to solve large-scale state estimation problems within one second and achieves a near-linear speedup of 9,800 with 10,000 cores for the contingency analysis application. A performance evaluation is presented to show its effectiveness.
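    The "near-linear speedup of 9,800 with 10,000 cores" quoted above corresponds to a parallel efficiency of 98%. The quick check below, including the serial-fraction estimate via Amdahl's law, is an illustrative calculation, not one taken from the paper.

```python
def efficiency(speedup, cores):
    """Parallel efficiency: achieved speedup divided by core count."""
    return speedup / cores

def amdahl_serial_fraction(speedup, cores):
    """Solve Amdahl's law S = 1 / (f + (1 - f)/N) for the serial fraction f."""
    return (cores / speedup - 1) / (cores - 1)

print(efficiency(9800, 10000))                       # 0.98
print(f"{amdahl_serial_fraction(9800, 10000):.2e}")  # 2.04e-06
```

    An implied serial fraction of about two parts per million is what makes contingency analysis such a good fit for this kind of scaling: each contingency case can be solved almost independently of the others.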

  3. OSTIblog Articles in the High-performance computing Topic | OSTI, US Dept

    Office of Scientific and Technical Information (OSTI)

    of Energy, Office of Scientific and Technical Information. High-performance computing topic: "ACME - Perfecting Earth System Models" by Kathy Chambers, 29 Oct 2014. Earth system modeling as we know it, and how it benefits climate change research, is about to transform with the newly launched Accelerated Climate Modeling for Energy (ACME) project sponsored by the Earth System Modeling program within the Department of Energy's (DOE) Office of Biological and Environmental Research. ACME is an

  4. High-Performance Computing for Alloy Development | netl.doe.gov

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    High-Performance Computing for Alloy Development alloy-development.jpg Tomorrow's fossil-fuel based power plants will achieve higher efficiencies by operating at higher pressures and temperatures and under harsher and more corrosive conditions. Unfortunately, conventional metals simply cannot withstand these extreme environments, so advanced alloys must be designed and fabricated to meet the needs of these advanced systems. The properties of metal alloys, which are mixtures of metallic elements,

  5. Opening Remarks from the Joint Genome Institute and Argonne Lab High Performance Computing Workshop (2010 JGI/ANL HPC Workshop)

    ScienceCinema (OSTI)

    Rubin, Eddy

    2011-06-03

    DOE JGI Director Eddy Rubin gives opening remarks at the JGI/Argonne High Performance Computing (HPC) Workshop on January 25, 2010.

  6. Opening Remarks from the Joint Genome Institute and Argonne Lab High Performance Computing Workshop (2010 JGI/ANL HPC Workshop)

    SciTech Connect (OSTI)

    Rubin, Eddy

    2010-01-25

    DOE JGI Director Eddy Rubin gives opening remarks at the JGI/Argonne High Performance Computing (HPC) Workshop on January 25, 2010.

  7. Towards Real-Time High Performance Computing For Power Grid Analysis

    SciTech Connect (OSTI)

    Hui, Peter SY; Lee, Barry; Chikkagoudar, Satish

    2012-11-16

    Real-time computing has traditionally been considered largely in the context of single-processor and embedded systems; indeed, the terms real-time computing, embedded systems, and control systems are often mentioned in closely related contexts. However, real-time computing in the context of multinode systems, specifically high-performance cluster-computing systems, remains relatively unexplored. Imposing real-time constraints on a parallel (cluster) computing environment introduces a variety of challenges with respect to the formal verification of the system's timing properties. In this paper, we give a motivating example that demonstrates the need for such a system (an application to estimate the electromechanical states of the power grid), and we introduce a formal method for verifying certain temporal properties within a system of parallel processes. We describe our work towards a full real-time implementation of the target application: our progress towards extracting a key mathematical kernel from the application, the formal process by which we analyze the intricate timing behavior of the processes on the cluster, and timing measurements taken on our test cluster to demonstrate the use of these concepts.

  8. Matrix multiplication operations with data pre-conditioning in a high performance computing architecture

    DOE Patents [OSTI]

    Eichenberger, Alexandre E; Gschwind, Michael K; Gunnels, John A

    2013-11-05

    Mechanisms for performing matrix multiplication operations with data pre-conditioning in a high performance computing architecture are provided. A vector load operation is performed to load a first vector operand of the matrix multiplication operation to a first target vector register. A load and splat operation is performed to load an element of a second vector operand and replicating the element to each of a plurality of elements of a second target vector register. A multiply add operation is performed on elements of the first target vector register and elements of the second target vector register to generate a partial product of the matrix multiplication operation. The partial product of the matrix multiplication operation is accumulated with other partial products of the matrix multiplication operation.
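
    As a concrete illustration of the load-and-splat pattern in this abstract, here is a pure-Python sketch in which plain lists stand in for vector registers; it mirrors the dataflow (vector load, splat, multiply-add accumulation of partial products), not any actual vector instruction set:

```python
# Rank-1-update matrix multiply built from the three operations the
# abstract names: vector load, load-and-splat, and multiply-add.
# Lists stand in for vector registers; this is an illustration only.

def splat(x, width):
    """Replicate one scalar across every element of a 'vector register'."""
    return [x] * width

def fused_multiply_add(acc, va, vb):
    """Elementwise multiply-add: acc + va * vb."""
    return [a + x * y for a, x, y in zip(acc, va, vb)]

def matmul(A, B):
    """A is n x k, B is k x m; returns the n x m product."""
    n, k, m = len(A), len(B), len(B[0])
    C = [[0.0] * m for _ in range(n)]
    for i in range(n):
        acc = [0.0] * m
        for p in range(k):
            va = B[p]                 # vector load: row p of B
            vb = splat(A[i][p], m)    # load-and-splat: element A[i][p]
            acc = fused_multiply_add(acc, va, vb)  # accumulate partial product
        C[i] = acc
    return C
```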

  9. Evaluating Performance, Power, and Cooling in High Performance Computing (HPC) Data Centers

    SciTech Connect (OSTI)

    Evans, Jeffrey; Gupta, Sandeep; Karavanic, Karen; Marquez, Andres; Varsamopoulos, Georgios

    2012-01-24

    This chapter explores current research focused on developing our understanding of the interrelationships involved with HPC performance and energy management. The first section explores data center instrumentation, measurement, and performance analysis techniques, followed by a section focusing on work in data center thermal management and resource allocation. This is followed by an exploration of emerging techniques to identify application behavioral attributes that can provide clues and advice to HPC resource and energy management systems for the purpose of balancing HPC performance and energy efficiency.

  10. Measuring and tuning energy efficiency on large scale high performance computing platforms.

    SciTech Connect (OSTI)

    Laros, James H., III

    2011-08-01

    Recognition of the importance of power in the field of High Performance Computing, whether it be as an obstacle, expense or design consideration, has never been greater and more pervasive. While research has been conducted on many related aspects, there is a stark absence of work focused on large scale High Performance Computing. Part of the reason is the lack of measurement capability currently available on small or large platforms. Typically, research is conducted using coarse methods of measurement such as inserting a power meter between the power source and the platform, or fine grained measurements using custom instrumented boards (with obvious limitations in scale). To collect the measurements necessary to analyze real scientific computing applications at large scale, an in-situ measurement capability must exist on a large scale capability class platform. In response to this challenge, we exploit the unique power measurement capabilities of the Cray XT architecture to gain an understanding of power use and the effects of tuning. We apply these capabilities at the operating system level by deterministically halting cores when idle. At the application level, we gain an understanding of the power requirements of a range of important DOE/NNSA production scientific computing applications running at large scale (thousands of nodes), while simultaneously collecting current and voltage measurements on the hosting nodes. We examine the effects of both CPU and network bandwidth tuning and demonstrate energy savings opportunities of up to 39% with little or no impact on run-time performance. Capturing scale effects in our experimental results was key. Our results provide strong evidence that next generation large-scale platforms should not only approach CPU frequency scaling differently, but could also benefit from the capability to tune other platform components, such as the network, to achieve energy efficient performance.

  11. Toward a Performance/Resilience Tool for Hardware/Software Co-Design of High-Performance Computing Systems

    SciTech Connect (OSTI)

    Engelmann, Christian; Naughton, III, Thomas J

    2013-01-01

    xSim is a simulation-based performance investigation toolkit that permits running high-performance computing (HPC) applications in a controlled environment with millions of concurrent execution threads, while observing application performance in a simulated extreme-scale system for hardware/software co-design. The presented work details newly developed features for xSim that permit the injection of MPI process failures, the propagation/detection/notification of such failures within the simulation, and their handling using application-level checkpoint/restart. These new capabilities enable the observation of application behavior and performance under failure within a simulated future-generation HPC system using the most common fault handling technique.

  12. Exploring Infiniband Hardware Virtualization in OpenNebula towards Efficient High-Performance Computing

    SciTech Connect (OSTI)

    Pais Pitta de Lacerda Ruivo, Tiago; Bernabeu Altayo, Gerard; Garzoglio, Gabriele; Timm, Steven; Kim, Hyun-Woo; Noh, Seo-Young; Raicu, Ioan

    2014-11-11

    It has been widely accepted that software virtualization has a significant negative impact on high-performance computing (HPC) application performance. This work explores the potential use of Infiniband hardware virtualization in an OpenNebula cloud towards the efficient support of MPI-based workloads. We have implemented, deployed, and tested an Infiniband network on the FermiCloud private Infrastructure-as-a-Service (IaaS) cloud. To avoid software virtualization and thereby minimize the virtualization overhead, we employed a technique called Single Root Input/Output Virtualization (SR-IOV). Our solution spanned modifications to the Linux hypervisor as well as the OpenNebula manager. We evaluated the performance of the hardware virtualization on up to 56 virtual machines connected by up to 8 DDR Infiniband network links, using micro-benchmarks (latency and bandwidth) as well as an MPI-intensive application (the HPL Linpack benchmark).

  13. DOE High Performance Computing Operational Review (HPCOR): Enabling Data-Driven Scientific Discovery at HPC Facilities

    SciTech Connect (OSTI)

    Gerber, Richard; Allcock, William; Beggio, Chris; Campbell, Stuart; Cherry, Andrew; Cholia, Shreyas; Dart, Eli; England, Clay; Fahey, Tim; Foertter, Fernanda; Goldstone, Robin; Hick, Jason; Karelitz, David; Kelly, Kaki; Monroe, Laura; Prabhat; Skinner, David; White, Julia

    2014-10-17

    U.S. Department of Energy (DOE) High Performance Computing (HPC) facilities are on the verge of a paradigm shift in the way they deliver systems and services to science and engineering teams. Research projects are producing a wide variety of data at unprecedented scale and complexity, with community-specific services that are part of the data collection and analysis workflow. On June 18-19, 2014, representatives from six DOE HPC centers met in Oakland, CA at the DOE High Performance Computing Operational Review (HPCOR) to discuss how they can best provide facilities and services to enable large-scale data-driven scientific discovery at the DOE national laboratories. This report contains findings from that review.

  14. High Performance Computing and Storage Requirements for Nuclear Physics: Target 2017

    SciTech Connect (OSTI)

    Gerber, Richard; Wasserman, Harvey

    2015-01-20

    In April 2014, NERSC, ASCR, and the DOE Office of Nuclear Physics (NP) held a review to characterize high performance computing (HPC) and storage requirements for NP research through 2017. This review is the 12th in a series of reviews held by NERSC and Office of Science program offices that began in 2009. It is the second for NP, and the final in the second round of reviews covering the six Office of Science program offices. This report is the result of that review.

  15. High-performance computational and geostatistical experiments for testing the capabilities of 3-d electrical tomography

    SciTech Connect (OSTI)

    Carle, S. F.; Daily, W. D.; Newmark, R. L.; Ramirez, A.; Tompson, A.

    1999-01-19

    This project explores the feasibility of combining geologic insight, geostatistics, and high-performance computing to analyze the capabilities of 3-D electrical resistance tomography (ERT). Geostatistical methods are used to characterize the spatial variability of the geologic facies that control subsurface variability of permeability and electrical resistivity. Synthetic ERT data sets are generated from geostatistical realizations of alluvial facies architecture. The synthetic data sets enable comparison of the "truth" to inversion results, quantification of the ability to detect particular facies at particular locations, and sensitivity studies on inversion parameters.

  16. GridPACK Toolkit for Developing Power Grid Simulations on High Performance Computing Platforms

    SciTech Connect (OSTI)

    Palmer, Bruce J.; Perkins, William A.; Glass, Kevin A.; Chen, Yousu; Jin, Shuangshuang; Callahan, Charles D.

    2013-11-30

    This paper describes the GridPACK framework, which is designed to help power grid engineers develop modeling software capable of running on today's high performance computers. The framework contains modules for setting up distributed power grid networks, assigning buses and branches with arbitrary behaviors to the network, creating distributed matrices and vectors, using parallel linear and non-linear solvers to solve algebraic equations, and mapping functionality to create matrices and vectors based on properties of the network. The framework also includes functionality to support I/O and to manage errors.

  17. Investigating Operating System Noise in Extreme-Scale High-Performance Computing Systems using Simulation

    SciTech Connect (OSTI)

    Engelmann, Christian

    2013-01-01

    Hardware/software co-design for future-generation high-performance computing (HPC) systems aims at closing the gap between the peak capabilities of the hardware and the performance realized by applications (application-architecture performance gap). Performance profiling of architectures and applications is a crucial part of this iterative process. The work in this paper focuses on operating system (OS) noise as an additional factor to be considered for co-design. It represents the first step in including OS noise in HPC hardware/software co-design by adding a noise injection feature to an existing simulation-based co-design toolkit. It reuses an existing abstraction for OS noise with frequency (periodic recurrence) and period (duration of each occurrence) to enhance the processor model of the Extreme-scale Simulator (xSim) with synchronized and random OS noise simulation. The results demonstrate this capability by evaluating the impact of OS noise on MPI_Bcast() and MPI_Reduce() in a simulated future-generation HPC system with 2,097,152 compute nodes.
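
    The OS-noise abstraction described in this abstract (frequency as periodic recurrence, period as duration of each occurrence) can be illustrated with a toy timing model. This sketch is illustrative only and is not the xSim processor model:

```python
# Toy model of synchronized periodic OS noise: a noise event recurs every
# `recurrence` time units and each occurrence detains the application for
# `duration` units. Returns the wall-clock time needed to finish `work`
# units of computation. Illustrative only, not the xSim implementation.

def elapsed_with_noise(work, recurrence, duration):
    t = 0.0            # wall-clock time
    remaining = work   # compute time still needed
    next_noise = recurrence
    while remaining > 0:
        run = min(remaining, next_noise - t)  # compute until next noise event
        t += run
        remaining -= run
        if remaining > 0:                     # noise event fires
            t += duration
            next_noise += recurrence
    return t
```

    With noise every 3 units lasting 1 unit, 10 units of work stretch to 14 units of wall-clock time; when the first noise event falls after the work completes, no overhead is added.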

  18. Application of high performance computing for studying cyclic variability in dilute internal combustion engines

    SciTech Connect (OSTI)

    Finney, Charles E A; Edwards, Kevin Dean; Stoyanov, Miroslav K; Wagner, Robert M

    2015-01-01

    Combustion instabilities in dilute internal combustion engines are manifest as cyclic variability (CV) in engine performance measures such as integrated heat release or shaft work. Understanding the factors leading to CV is important in model-based control, especially with high dilution, where experimental studies have demonstrated that deterministic effects can become more prominent. Observing enough consecutive engine cycles for significant statistical analysis is standard in experimental studies but largely wanting in numerical simulations because of the computational time required to compute hundreds or thousands of consecutive cycles. We have proposed and begun implementing an alternative approach that allows rapid simulation of long series of engine dynamics, based on a low-dimensional mapping of ensembles of single-cycle simulations that maps input parameters to output engine performance. This paper details the use of Titan at the Oak Ridge Leadership Computing Facility to investigate CV in a gasoline direct-injected spark-ignited engine with a moderately high rate of dilution achieved through external exhaust gas recirculation. The CONVERGE CFD software was used to perform single-cycle simulations with imposed variations of operating parameters and boundary conditions selected according to a sparse-grid sampling of the parameter space. Using an uncertainty quantification technique, the sampling scheme is chosen similarly to a design-of-experiments grid but uses functions designed to minimize the number of samples required to achieve a desired degree of accuracy. The simulations map input parameters to output metrics of engine performance for a single cycle, and by mapping over a large parameter space, results can be interpolated from within that space. This interpolation scheme forms the basis for a low-dimensional metamodel which can be used to mimic the dynamical behavior of corresponding high-dimensional simulations. Simulations of high-EGR spark
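
    The paper's metamodel rests on sparse-grid sampling; as a much-simplified illustration of the underlying idea (sample the expensive per-cycle simulation at a few parameter values, then interpolate outputs for unsampled inputs), consider this hypothetical one-parameter sketch. It is not the sparse-grid method of the paper:

```python
# Toy metamodel: store (parameter, simulated_output) samples from an
# expensive simulation, then answer queries at unsampled parameter
# values by linear interpolation. One-dimensional for illustration only.

import bisect

def build_metamodel(samples):
    """samples: list of (parameter, simulated_output), sorted by parameter."""
    xs = [x for x, _ in samples]
    ys = [y for _, y in samples]
    def predict(x):
        if x <= xs[0]:
            return ys[0]            # clamp below the sampled range
        if x >= xs[-1]:
            return ys[-1]           # clamp above the sampled range
        i = bisect.bisect_left(xs, x)
        w = (x - xs[i - 1]) / (xs[i] - xs[i - 1])  # interpolation weight
        return ys[i - 1] * (1 - w) + ys[i] * w
    return predict
```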

  19. High performance graphics processor based computed tomography reconstruction algorithms for nuclear and other large scale applications.

    SciTech Connect (OSTI)

    Jimenez, Edward Steven,

    2013-09-01

    The goal of this work is to develop a fast computed tomography (CT) reconstruction algorithm based on graphics processing units (GPUs) that achieves significant improvement over traditional central processing unit (CPU) based implementations. The main challenge in developing a CT algorithm capable of handling very large datasets is parallelizing the algorithm in such a way that data transfer does not hinder the performance of the reconstruction. General-Purpose computing on Graphics Processing Units (GPGPU) is a new technology that the Science and Technology (S&T) community is starting to adopt in many fields where CPU-based computing is the norm. GPGPU programming requires a new approach to algorithm development that utilizes massively multi-threaded environments. Multi-threaded algorithms are in general difficult to optimize, since performance bottlenecks arise, such as memory latencies, that are non-existent in single-threaded algorithms. If an efficient GPU-based CT reconstruction algorithm can be developed, computational times could be improved by a factor of 20. Additionally, cost benefits will be realized as commodity graphics hardware could potentially replace expensive supercomputers and high-end workstations. This project will take advantage of the CUDA programming environment and attempt to parallelize the task in such a way that multiple slices of the reconstruction volume are computed simultaneously. This work will also take advantage of GPU memory by utilizing asynchronous memory transfers, GPU texture memory, and (when possible) pinned host memory so that the memory transfer bottleneck inherent to GPGPU computing is amortized. Additionally, this work will take advantage of GPU-specific hardware (i.e., fast texture memory, pixel pipelines, hardware interpolators, and a varying memory hierarchy) that will allow for additional performance improvements.

  20. A Lightweight, High-performance I/O Management Package for Data-intensive Computing

    SciTech Connect (OSTI)

    Wang, Jun

    2011-06-22

    Our group has been working with ANL collaborators on the topic of "bridging the gap between parallel file system and local file system" during this project period. We visited Argonne National Lab (Dr. Robert Ross's group) for one week in the summer of 2007, reviewed our project progress, and planned the activities for the coming years 2008-09. The PI met Dr. Robert Ross several times, such as at the HEC FSIO workshop '08, SC'08, and SC'10. We explored the opportunities to develop a production system by leveraging our current prototype (SOGP+PVFS) into a new PVFS version, and delivered the SOGP+PVFS codes to the ANL PVFS2 group in 2008. We also discussed exploring a potential project on developing new parallel programming models and runtime systems for data-intensive scalable computing (DISC). The methodology is to evolve MPI towards DISC by incorporating some functions of the Google MapReduce parallel programming model. More recently, we have together been exploring how to leverage existing work to perform (1) coordination/aggregation of local I/O operations prior to movement over the WAN, (2) efficient bulk data movement over the WAN, and (3) latency-hiding techniques for latency-intensive operations. Since 2009, we have been applying Hadoop/MapReduce to some HEC applications with LANL scientists John Bent and Salman Habib. Another ongoing effort is to improve checkpoint performance at the I/O forwarding layer for the Roadrunner supercomputer with James Nuetz and Gary Grider at LANL. Two senior undergraduates from our research group did summer internships on high-performance file and storage system projects at LANL for three consecutive years starting in 2008. Both are now pursuing Ph.D. degrees in our group, will be in their fourth year of the PhD program in Fall 2011, and will go to LANL to advance the two above-mentioned efforts during this winter break. Since 2009, we have been collaborating with several computer scientists (Gary Grider, John Bent, Parks Fields, James Nunez

  1. High-Performance Computing for Real-Time Grid Analysis and Operation

    SciTech Connect (OSTI)

    Huang, Zhenyu; Chen, Yousu; Chavarría-Miranda, Daniel

    2013-10-31

    Power grids worldwide are undergoing an unprecedented transition as grid evolution meets the information revolution. The grid evolution is largely driven by the desire for green energy. Emerging grid technologies such as renewable generation, smart loads, plug-in hybrid vehicles, and distributed generation provide opportunities to generate energy from green sources and to manage energy use for better system efficiency. With utility companies actively deploying these technologies, a high level of penetration of these new technologies is expected in the next 5-10 years, bringing a level of intermittency, uncertainty, and complexity that the grid has not seen and was not designed for. On the other hand, the information infrastructure in the power grid is being revolutionized with large-scale deployment of sensors and meters in both the transmission and distribution networks. The future grid will have two-way flows of both electrons and information. The challenge is how to take advantage of the information revolution: pull the large amount of data in, process it in real time, and put information out to manage grid evolution. Without addressing this challenge, the opportunities in grid evolution will remain unfulfilled. This transition poses grand challenges in grid modeling, simulation, and information presentation. The computational complexity of power grid modeling and simulation will significantly increase in the next decade due to increased model sizes and decreased time windows allowed to compute model solutions. High-performance computing is essential to enable this transition. The essential technical barrier is to vastly increase computational speed so that operation response time can be reduced from minutes to seconds and sub-seconds. The speed at which key functions such as state estimation and contingency analysis are conducted (typically every 3-5 minutes) needs to be dramatically increased so that the analysis of contingencies is both

  2. Subsurface Multiphase Flow and Multicomponent Reactive Transport Modeling using High-Performance Computing

    SciTech Connect (OSTI)

    Hammond, Glenn E.; Lichtner, Peter C.; Lu, Chuan

    2007-07-16

    Numerical modeling has become a critical tool to the U.S. Department of Energy for evaluating the environmental impact of alternative energy sources and remediation strategies for legacy waste sites. Unfortunately, the physical and chemical complexity of many sites overwhelms the capabilities of even most state of the art groundwater models. Of particular concern are the representation of highly-heterogeneous stratified rock/soil layers in the subsurface and the biological and geochemical interactions of chemical species within multiple fluid phases. Clearly, there is a need for higher-resolution modeling (i.e. more spatial, temporal, and chemical degrees of freedom) and increasingly mechanistic descriptions of subsurface physicochemical processes. We present SciDAC-funded research being performed in the development of PFLOTRAN, a parallel multiphase flow and multicomponent reactive transport model. Written in Fortran90, PFLOTRAN is founded upon PETSc data structures and solvers. We are employing PFLOTRAN in the simulation of uranium transport at the Hanford 300 Area, a contaminated site of major concern to the Department of Energy, the State of Washington, and other government agencies. By leveraging the billions of degrees of freedom available through high-performance computation using tens of thousands of processors, we can better characterize the release of uranium into groundwater and its subsequent transport to the Columbia River, and thereby better understand and evaluate the effectiveness of various proposed remediation strategies.

  3. Implementing the Data Center Energy Productivity Metric in a High Performance Computing Data Center

    SciTech Connect (OSTI)

    Sego, Landon H.; Marquez, Andres; Rawson, Andrew; Cader, Tahir; Fox, Kevin M.; Gustafson, William I.; Mundy, Christopher J.

    2013-06-30

    As data centers proliferate in size and number, the improvement of their energy efficiency and productivity has become an economic and environmental imperative. Making these improvements requires metrics that are robust, interpretable, and practical. We discuss the properties of a number of the proposed metrics of energy efficiency and productivity. In particular, we focus on the Data Center Energy Productivity (DCeP) metric, which is the ratio of useful work produced by the data center to the energy consumed performing that work. We describe our approach for using DCeP as the principal outcome of a designed experiment using a highly instrumented, high-performance computing data center. We found that DCeP was successful in clearly distinguishing different operational states in the data center, thereby validating its utility as a metric for identifying configurations of hardware and software that would improve energy productivity. We also discuss some of the challenges and benefits associated with implementing the DCeP metric, and we examine the efficacy of the metric in making comparisons within a data center and between data centers.
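
    DCeP itself is a simple ratio, and a direct transcription is possible; the choice and weighting of "useful work" units (e.g., value-weighted completed jobs over kWh) is up to the operator and is assumed here:

```python
# Data Center Energy Productivity (DCeP): useful work produced by the
# data center divided by the energy consumed performing that work.
# The definition of "useful work" (e.g., value-weighted completed jobs)
# is operator-chosen; this function only encodes the ratio itself.

def dcep(useful_work, energy_consumed_kwh):
    """Return DCeP, in units of work per kWh."""
    if energy_consumed_kwh <= 0:
        raise ValueError("energy consumed must be positive")
    return useful_work / energy_consumed_kwh
```

    A configuration that completes more weighted work for the same energy, or the same work for less energy, scores a higher DCeP, which is what lets the metric distinguish operational states as described above.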

  4. Development of high performance scientific components for interoperability of computing packages

    SciTech Connect (OSTI)

    Gulabani, Teena Pratap

    2008-12-01

    Three major high performance quantum chemistry computational packages, NWChem, GAMESS, and MPQC, have been developed by different research efforts following different design patterns. The goal is to achieve interoperability among these packages by overcoming the challenges caused by the different communication patterns and software designs of each package. Chemistry algorithms are hard and time-consuming to develop; integration of large quantum chemistry packages will allow resource sharing and thus avoid reinvention of the wheel. Creating connections between these incompatible packages is the major motivation of the proposed work. This interoperability is achieved by bringing the benefits of Component Based Software Engineering through a plug-and-play component framework called the Common Component Architecture (CCA). In this thesis, I present a strategy and process used for interfacing two widely used and important computational chemistry methodologies: Quantum Mechanics and Molecular Mechanics. To show the feasibility of the proposed approach, the Tuning and Analysis Utility (TAU) has been coupled with the NWChem code and its CCA components. Results show that the overhead is negligible when compared to the ease and potential of organizing and coping with large-scale software applications.

  5. A High Performance Computing Network and System Simulator for the Power Grid: NGNS^2

    SciTech Connect (OSTI)

    Villa, Oreste; Tumeo, Antonino; Ciraci, Selim; Daily, Jeffrey A.; Fuller, Jason C.

    2012-11-11

    Designing and planning next-generation power grid systems composed of large power distribution networks, monitoring and control networks, and autonomous generators and consumers of power requires advanced simulation infrastructures. The objective is to predict and analyze in time the behavior of networks of systems under unexpected events such as loss of connectivity, malicious attacks, and power-loss scenarios. This ultimately allows one to answer questions such as: "What could happen to the power grid if ...?" We want to be able to answer as many questions as possible in the shortest possible time for the largest possible systems. In this paper we present a new High Performance Computing (HPC) oriented simulation infrastructure named the Next Generation Network and System Simulator (NGNS2). NGNS2 allows the distribution of a single simulation among multiple computing elements by using MPI and OpenMP threads. NGNS2 provides the extensive configuration, fault-tolerance, and load-balancing capabilities needed to simulate large and dynamic systems for long periods of time. We show preliminary results of the simulator running approximately two million simulated entities on both a 64-node commodity Infiniband cluster and a 48-core SMP workstation.

  6. In the OSTI Collections: High-Performance Computing | OSTI, US Dept of Energy

    Office of Scientific and Technical Information (OSTI)

    What's happening in one current research field can be guessed from these recent report title excerpts: "Global Simulation of Plasma Microturbulence at the Petascale &

  7. The Essential Role of New Network Services for High Performance Distributed Computing

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Second International Conference on Parallel, Distributed, Grid and Cloud Computing for Engineering 12-15 April 2011, Ajaccio - Corsica - France In "Trends in Parallel, Distributed, Grid and Cloud Computing for Engineering," Edited by: P. Iványi, B.H.V. Topping, Civil-Comp Press. Network Services for High Performance Distributed Computing and Data Management W. E. Johnston, C. Guok, J. Metzger, and B. Tierney ESnet and Lawrence Berkeley National Laboratory, Berkeley California, U.S.A

  8. National cyber defense high performance computing and analysis : concepts, planning and roadmap.

    SciTech Connect (OSTI)

    Hamlet, Jason R.; Keliiaa, Curtis M.

    2010-09-01

    There is a national cyber dilemma that threatens the very fabric of government, commercial, and private operations worldwide. Much is written about 'what' the problem is; though the basis for this paper is an assessment of the problem space, we target the 'how' solution space of the wide-area national information infrastructure through the advancement of science, technology, evaluation, and analysis, with actionable results intended to produce a more secure national information infrastructure and a comprehensive national cyber defense capability. This cybersecurity High Performance Computing (HPC) analysis concepts, planning, and roadmap activity was conducted as an assessment of cybersecurity analysis as a fertile area of research and investment for high-value, wide-area cybersecurity solutions. This report, together with the related report SAND2010-4765, Assessment of Current Cybersecurity Practices in the Public Domain: Cyber Indications and Warnings Domain, is intended to provoke discussion throughout a broad audience about developing a cohesive HPC-centric solution to wide-area cybersecurity problems.

  9. Subsurface Multiphase Flow and Multicomponent Reactive Transport Modeling using High-Performance Computing

    SciTech Connect (OSTI)

    Hammond, Glenn E.; Lichtner, Peter C.; Lu, Chuan

    2007-08-01

    Numerical modeling has become a critical tool to the Department of Energy for evaluating the environmental impact of alternative energy sources and remediation strategies for legacy waste sites. Unfortunately, the physical and chemical complexity of many sites overwhelms the capabilities of even most state of the art groundwater models. Of particular concern are the representation of highly-heterogeneous stratified rock/soil layers in the subsurface and the biological and geochemical interactions of chemical species within multiple fluid phases. Clearly, there is a need for higher-resolution modeling (i.e. more spatial, temporal, and chemical degrees of freedom) and increasingly mechanistic descriptions of subsurface physicochemical processes. We present research being performed in the development of PFLOTRAN, a parallel multiphase flow and multicomponent reactive transport model. Written in Fortran90, PFLOTRAN is founded upon PETSc data structures and solvers and has exhibited impressive strong scalability on up to 4000 processors on the ORNL Cray XT3. We are employing PFLOTRAN in the simulation of uranium transport at the Hanford 300 Area, a contaminated site of major concern to the Department of Energy, the State of Washington, and other government agencies where overly-simplistic historical modeling erroneously predicted decade removal times for uranium by ambient groundwater flow. By leveraging the billions of degrees of freedom available through high-performance computation using tens of thousands of processors, we can better characterize the release of uranium into groundwater and its subsequent transport to the Columbia River, and thereby better understand and evaluate the effectiveness of various proposed remediation strategies.

  10. High-Performance Computer Modeling of the Cosmos-Iridium Collision

    SciTech Connect (OSTI)

    Olivier, S; Cook, K; Fasenfest, B; Jefferson, D; Jiang, M; Leek, J; Levatin, J; Nikolaev, S; Pertica, A; Phillion, D; Springer, K; De Vries, W

    2009-08-28

    This paper describes the application of a new, integrated modeling and simulation framework, encompassing the space situational awareness (SSA) enterprise, to the recent Cosmos-Iridium collision. This framework is based on a flexible, scalable architecture to enable efficient simulation of the current SSA enterprise, and to accommodate future advancements in SSA systems. In particular, the code is designed to take advantage of massively parallel, high-performance computer systems available, for example, at Lawrence Livermore National Laboratory. We will describe the application of this framework to the recent collision of the Cosmos and Iridium satellites, including (1) detailed hydrodynamic modeling of the satellite collision and the resulting debris generation; (2) orbital propagation of the simulated debris and analysis of the increased risk to other satellites; (3) calculation of the radar and optical signatures of the simulated debris and modeling of debris detection with space surveillance radar and optical systems; (4) determination of simulated debris orbits from modeled space surveillance observations and analysis of the resulting orbital accuracy; and (5) comparison of these modeling and simulation results with Space Surveillance Network observations. We will also discuss the use of this integrated modeling and simulation framework to analyze the risks and consequences of future satellite collisions and to assess strategies for mitigating or avoiding future incidents, including the addition of new sensor systems, used in conjunction with the Space Surveillance Network, for improving space situational awareness.

  11. iSSH v. Auditd: Intrusion Detection in High Performance Computing

    SciTech Connect (OSTI)

    Karns, David M.; Protin, Kathryn S.; Wolf, Justin G.

    2012-07-30

    The goal is to provide insight into intrusions in high performance computing, focusing on tracking intruders' motions through the system. Current tools, such as pattern matching, do not provide sufficient tracking capabilities. We tested two tools: an instrumented version of SSH (iSSH) and the Linux Auditing Framework (Auditd). First discussed is Instrumented Secure Shell (iSSH), a version of SSH developed at Lawrence Berkeley National Laboratory whose goal is to audit user activity within a computer system to increase security. Its capabilities include keystroke logging, recording user names and authentication information, and catching suspicious remote and local commands. Strengths of iSSH are: (1) good for keystroke logging, making it easier to track malicious users by catching suspicious commands; (2) works with Bro to send alerts and could be configured to page systems administrators; and (3) creates visibility into SSH sessions. Weaknesses are: (1) relatively new, so not well documented; and (2) no capability to see whether files have been edited, moved, or copied within the system. Second, we discuss Auditd, the user-space component of the Linux Auditing System. It creates logs of user behavior and monitors system calls and file accesses; its goal is to improve system security by keeping track of users' actions within the system. Strengths of Auditd are: (1) very thorough logs; (2) a wider variety of tracking abilities than iSSH; and (3) older, so better documented. Weaknesses are: (1) the logs record everything, not just malicious behavior; (2) the size of the logs can lead to overflowing directories; and (3) this level of logging leads to many false alarms. Auditd is better documented than iSSH, which would help administrators during setup and troubleshooting. iSSH has a cleaner notification system, but its logs are not as detailed as Auditd's. From our performance testing: (1) file transfer speed using SCP is increased when using iSSH; and (2) Network benchmarks
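
    The report credits iSSH with catching suspicious commands in its keystroke logs. A minimal sketch of that idea in Python, with a hypothetical pattern list (the report does not enumerate the patterns it used):

    ```python
    # Substrings commonly associated with intrusions; illustrative only,
    # not taken from the report.
    SUSPICIOUS = ("wget ", "curl ", "nc -l", "chmod 777", "/etc/shadow")

    def flag_suspicious(log_lines):
        # Scan captured command lines (e.g. iSSH keystroke logs) and
        # return only those matching a suspicious pattern.
        return [line for line in log_lines
                if any(p in line for p in SUSPICIOUS)]
    ```

    Auditd's richer logs could be scanned the same way, at the cost of the false-alarm volume the report notes.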

  12. LIAR -- A computer program for the modeling and simulation of high performance linacs

    SciTech Connect (OSTI)

    Assmann, R.; Adolphsen, C.; Bane, K.; Emma, P.; Raubenheimer, T.; Siemann, R.; Thompson, K.; Zimmermann, F.

    1997-04-01

    The computer program LIAR (LInear Accelerator Research Code) is a numerical modeling and simulation tool for high performance linacs. Among other applications, it addresses the needs of state-of-the-art linear colliders, where low-emittance, high-intensity beams must be accelerated to energies in the 0.05-1 TeV range. LIAR is designed to be used for a variety of different projects. It allows the study of single- and multi-particle beam dynamics in linear accelerators and calculates emittance dilutions due to wakefield deflections, linear and non-linear dispersion, and chromatic effects in the presence of multiple accelerator imperfections. Both single-bunch and multi-bunch beams can be simulated. Several basic and advanced optimization schemes are implemented. Present limitations arise from the incomplete treatment of bending magnets and sextupoles. A major objective of the LIAR project is to provide an open programming platform for the accelerator physics community. Due to its design, LIAR allows straightforward access to its internal FORTRAN data structures. The program can easily be extended, and its interactive command language ensures maximum ease of use. Presently, versions of LIAR are compiled for UNIX and MS Windows operating systems. An interface for the graphical visualization of results is provided, and scientific graphs can be saved in the PS and EPS file formats. In addition, a Mathematica interface has been developed. LIAR now contains more than 40,000 lines of source code in more than 130 subroutines. This report describes the theoretical basis of the program, provides a reference for existing features, and explains how to add further commands. The LIAR home page and the online version of this manual can be accessed at http://www.slac.stanford.edu/grp/arb/rwa/liar.htm.

  13. High Performance Computing at TJNAF| U.S. DOE Office of Science (SC)

    Office of Science (SC) Website

  14. High performance systems

    SciTech Connect (OSTI)

    Vigil, M.B.

    1995-03-01

    This document compiles the presentations and viewgraphs from the 1994 Conference on High Speed Computing, "High Performance Systems," held at Gleneden Beach, Oregon, April 18-21, 1994.

  15. Technologies and tools for high-performance distributed computing. Final report

    SciTech Connect (OSTI)

    Karonis, Nicholas T.

    2000-05-01

    In this project we studied the practical use of the MPI message-passing interface in advanced distributed computing environments. We built on the existing software infrastructure provided by the Globus Toolkit(TM), the MPICH portable implementation of MPI, and the MPICH-G integration of MPICH with Globus. As a result of this project we replaced MPICH-G with its successor, MPICH-G2, also an integration of MPICH with Globus. MPICH-G2 delivers significant improvements in message-passing performance compared to its predecessor MPICH-G and was based on superior software design principles, yielding a software base in which the functional extensions and improvements we made were much easier to implement. Using Globus services, we replaced the default implementation of MPI's collective operations in MPICH-G2 with more efficient multilevel topology-aware collective operations, which in turn led to the development of a new timing methodology for broadcasts [8]. MPICH-G2 was extended to include client/server functionality from the MPI-2 standard [23] to facilitate remote visualization applications, and, through the use of MPI idioms, MPICH-G2 provided application-level control of quality-of-service parameters as well as application-level discovery of underlying Grid-topology information. Finally, MPICH-G2 was successfully used in a number of applications, including an award-winning, record-setting computation in numerical relativity. In the sections that follow we describe in detail the accomplishments of this project, present experimental results quantifying the performance improvements, and conclude with a discussion of our application experiences. This project resulted in a significant increase in the utility of MPICH-G2.
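
    The payoff of topology-aware collectives comes from minimizing traffic over slow wide-area links. A toy message count for a two-level broadcast, illustrating the idea rather than MPICH-G2's actual implementation:

    ```python
    def two_level_broadcast(sites):
        # sites: list of lists of ranks, one list per cluster/site.
        # Returns (wan_messages, lan_messages) for a multilevel broadcast:
        # the root sends once to a designated leader at every other site
        # (wide-area hops), then each leader fans out within its own site
        # (fast local-area hops). A flat broadcast would instead push one
        # wide-area message per remote rank.
        wan = len(sites) - 1                  # root -> one leader per remote site
        lan = sum(len(s) - 1 for s in sites)  # leader -> rest of its site
        return wan, lan
    ```

    With three sites of sizes 3, 2, and 4 and the root in the first site, only 2 messages cross the wide area, versus 6 for a flat broadcast reaching every remote rank directly.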

  16. Fair share on high performance computing systems : what does fair really mean?

    SciTech Connect (OSTI)

    Clearwater, Scott Harvey; Kleban, Stephen David

    2003-03-01

    We report on a performance evaluation of a Fair Share system at the ASCI Blue Mountain supercomputer cluster. We study the impacts of share allocation under Fair Share on wait times and expansion factor. We also measure the Service Ratio, a typical figure of merit for Fair Share systems, with respect to a number of job parameters. We conclude that Fair Share does little to alter important performance metrics such as expansion factor. This leads to the question of what Fair Share means on cluster machines. The essential difference between Fair Share on a uni-processor and a cluster is that the workload on a cluster is not fungible in space or time. We find that cluster machines must be highly utilized and support checkpointing in order for Fair Share to function more closely to the spirit in which it was originally developed.
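
    The metrics the study examines can be stated precisely. These are the conventional definitions, assumed here since the abstract does not spell them out:

    ```python
    def expansion_factor(wait_time, run_time):
        # Expansion factor (a.k.a. slowdown): total turnaround time divided
        # by the time the job actually ran. 1.0 means no queueing delay.
        return (wait_time + run_time) / run_time

    def service_ratio(delivered_share, allocated_share):
        # Service ratio: the fraction of the machine a user actually
        # received relative to the share the fair-share policy allocated.
        return delivered_share / allocated_share
    ```

    The study's conclusion, that Fair Share does little to alter the expansion factor, amounts to saying that varying share allocations barely moves the first quantity.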

  17. Money for Research, Not for Energy Bills: Finding Energy and Cost Savings in High Performance Computer Facility Designs

    SciTech Connect (OSTI)

    Drewmark Communications; Sartor, Dale; Wilson, Mark

    2010-07-01

    High-performance computing facilities in the United States consume an enormous amount of electricity, cutting into research budgets and challenging public- and private-sector efforts to reduce energy consumption and meet environmental goals. However, these facilities can greatly reduce their energy demand through energy-efficient design of the facility itself. Using a case study of a facility under design, this article discusses strategies and technologies that can be used to help achieve energy reductions.

  18. Coordinated Fault-Tolerance for High-Performance Computing Final Project Report

    SciTech Connect (OSTI)

    Panda, Dhabaleswar Kumar; Beckman, Pete

    2011-07-28

    implementations on top of existing publish-subscribe tools. We enhanced the intrinsic fault-tolerance capabilities of representative implementations of a variety of key HPC software subsystems and integrated them with the FTB. Targeted software subsystems included MPI communication libraries, checkpoint/restart libraries, resource managers and job schedulers, and system monitoring tools. Leveraging the aforementioned infrastructure, as well as developing and utilizing additional tools, we examined issues associated with expanded, end-to-end fault response from both system and application viewpoints. From the standpoint of system operations, we investigated log and root-cause analysis, anomaly detection and fault prediction, and generalized notification mechanisms. Our applications work included libraries for fault-tolerant linear algebra, application frameworks for coupled multiphysics applications, and external frameworks to support monitoring and response for general applications. Our final goal was to engage the high-end computing community to increase awareness of tools and issues around coordinated end-to-end fault management.

  19. High-Performance Computation of Distributed-Memory Parallel 3D Voronoi and Delaunay Tessellation

    SciTech Connect (OSTI)

    Peterka, Tom; Morozov, Dmitriy; Phillips, Carolyn

    2014-11-14

    Computing a Voronoi or Delaunay tessellation from a set of points is a core part of the analysis of many simulated and measured datasets: N-body simulations, molecular dynamics codes, and LIDAR point clouds are just a few examples. Such computational geometry methods are common in data analysis and visualization, but as the scale of simulations and observations surpasses billions of particles, the existing serial and shared-memory algorithms no longer suffice. A distributed-memory, scalable parallel algorithm is the only feasible approach. The primary contribution of this paper is a new parallel Delaunay and Voronoi tessellation algorithm that automatically determines which neighbor points need to be exchanged among the subdomains of a spatial decomposition. Other contributions include periodic and wall boundary conditions, comparison of our method with two popular serial libraries, and application to numerous science datasets.
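
    The core difficulty, deciding which points must be exchanged among subdomains, can be illustrated with a fixed-radius halo over a regular decomposition of the unit square. The paper's algorithm determines the needed neighbor points adaptively from the tessellation itself; this sketch is only the simplified, fixed-ghost-size stand-in:

    ```python
    def blocks_needing_point(pt, grid, ghost):
        # Regular grid x grid decomposition of the unit square.
        # Return every block whose bounds, extended outward by `ghost`,
        # contain `pt`: its owner block plus any neighbor that needs a
        # ghost copy to tessellate correctly near its boundary.
        x, y = pt
        size = 1.0 / grid
        dests = set()
        for bi in range(grid):
            for bj in range(grid):
                x0, y0 = bi * size, bj * size
                if (x0 - ghost <= x < x0 + size + ghost and
                        y0 - ghost <= y < y0 + size + ghost):
                    dests.add((bi, bj))
        return dests
    ```

    A point deep inside a block goes nowhere else; a point near a corner is replicated to all adjacent blocks.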

  20. Complex matrix multiplication operations with data pre-conditioning in a high performance computing architecture

    DOE Patents [OSTI]

    Eichenberger, Alexandre E; Gschwind, Michael K; Gunnels, John A

    2014-02-11

    Mechanisms for performing a complex matrix multiplication operation are provided. A vector load operation is performed to load a first vector operand of the complex matrix multiplication operation to a first target vector register. The first vector operand comprises a real and imaginary part of a first complex vector value. A complex load and splat operation is performed to load a second complex vector value of a second vector operand and replicate the second complex vector value within a second target vector register. The second complex vector value has a real and imaginary part. A cross multiply add operation is performed on elements of the first target vector register and elements of the second target vector register to generate a partial product of the complex matrix multiplication operation. The partial product is accumulated with other partial products and a resulting accumulated partial product is stored in a result vector register.
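
    A scalar sketch of the described sequence (vector load, complex load-and-splat, cross multiply-add), with the "vector registers" modeled as interleaved real/imaginary Python lists and the cross multiply-add collapsed into a single helper; this illustrates the arithmetic, not the patented hardware mechanism:

    ```python
    def complex_splat(re, im, n):
        # Replicate one complex value across a "register" of n complex
        # slots, interleaved as [re, im, re, im, ...].
        return [re, im] * n

    def cross_multiply_add(acc, a, b):
        # acc, a, b are interleaved [re, im, ...] vectors of equal length.
        # For each complex slot, accumulate the complex product a*b into
        # acc: (ar + ai*i)(br + bi*i) = (ar*br - ai*bi) + (ar*bi + ai*br)*i
        out = list(acc)
        for k in range(0, len(a), 2):
            ar, ai = a[k], a[k + 1]
            br, bi = b[k], b[k + 1]
            out[k] += ar * br - ai * bi       # real part
            out[k + 1] += ar * bi + ai * br   # imaginary part
        return out
    ```

    Accumulating such partial products over the inner dimension yields one tile of the complex matrix product.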

  1. HPGMG 1.0: A Benchmark for Ranking High Performance Computing Systems

    SciTech Connect (OSTI)

    Adams, Mark; Brown, Jed; Shalf, John; Straalen, Brian Van; Strohmaier, Erich; Williams, Sam

    2014-05-05

    This document provides an overview of the benchmark, HPGMG, for ranking large-scale general-purpose computers for use on the Top500 list [8]. We provide a rationale for the need for a replacement for the current metric, HPL; some background on the Top500 list and the challenges of developing such a metric; a discussion of our design philosophy and methodology; and an overview of the specification of the benchmark. The primary documentation, with maintained details of the specification, can be found at hpgmg.org; the wiki and the benchmark code itself can be found in the repository https://bitbucket.org/hpgmg/hpgmg.

  2. High Performance Computing

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    when possible - Automatically using optimization methods. CONSTRUCTING REDUCED SCHEMES: GENETIC ALGORITHM PRINCIPLE Initial population FITNESS EVALUATION of each individual F ...

  3. High Performance Computing

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    activities span repeated lifetimes of supercomputing systems and infrastructure: Defining Future Environments Communication and collaborations with industry and academia to follow...

  4. Harnessing the Department of Energy’s High-Performance Computing Expertise to Strengthen the U.S. Chemical Enterprise

    SciTech Connect (OSTI)

    Dixon, David A.; Dupuis, Michel; Garrett, Bruce C.; Neaton, Jeffrey B.; Plata, Charity; Tarr, Matthew A.; Tomb, Jean-Francois; Golab, Joseph T.

    2012-01-17

    High-performance computing (HPC) is one area where the DOE has developed extensive expertise and capability. However, this expertise currently is not properly shared with or used by the private sector to speed product development, enable industry to move rapidly into new areas, and improve product quality. Such use would lead to substantial competitive advantages in global markets and yield important economic returns for the United States. To stimulate the dissemination of DOE's HPC expertise, the Council for Chemical Research (CCR) and the DOE jointly held a workshop on this topic. Four important energy topic areas were chosen as the focus of the meeting: Biomass/Bioenergy, Catalytic Materials, Energy Storage, and Photovoltaics. Academic, industrial, and government experts in these topic areas participated in the workshop to identify industry needs, evaluate the current state of expertise, offer proposed actions and strategies, and forecast the expected benefits of implementing those strategies.

  5. SU-E-T-531: Performance Evaluation of Multithreaded Geant4 for Proton Therapy Dose Calculations in a High Performance Computing Facility

    SciTech Connect (OSTI)

    Shin, J; Coss, D; McMurry, J; Farr, J [St. Jude Children's Research Hospital, Memphis, TN (United States); Faddegon, B [UC San Francisco, San Francisco, CA (United States)

    2014-06-01

    Purpose: To evaluate the efficiency of multithreaded Geant4 (Geant4-MT, version 10.0) for proton Monte Carlo dose calculations using a high performance computing facility. Methods: Geant4-MT was used to calculate 3D dose distributions in 1 x 1 x 1 mm3 voxels in a water phantom and a patient's head with a 150 MeV proton beam covering approximately 5 x 5 cm2 in the water phantom. Three timestamps were measured on the fly to separately analyze the time required for initialization (which cannot be parallelized), the processing time of individual threads, and the completion time. Scalability of the averaged processing time per thread was calculated as a function of thread number (1, 100, 150, and 200) for both 1 M and 50 M histories. The total memory usage was recorded. Results: Simulations with 50 M histories were fastest with 100 threads, taking approximately 1.3 hours and 6 hours for the water phantom and the CT data, respectively, with better than 1.0% statistical uncertainty. The calculations show 1/N scalability in the event loops in both cases. The gains from parallel calculation started to decrease at 150 threads. Memory usage increases linearly with the number of threads. No critical failures were observed during the simulations. Conclusion: Multithreading in Geant4-MT decreased simulation time for proton dose distribution calculations by factors of 64 and 54 at a near-optimal 100 threads for the water phantom and the patient data, respectively. Further simulations will be done to determine the efficiency at the optimal thread number. Considering the trend of computer architecture development, utilizing Geant4-MT for radiotherapy simulations is an excellent cost-effective alternative to a distributed batch queuing system. However, because the scalability depends strongly on simulation details, i.e., the ratio of the processing time of one event to the waiting time to access the shared event queue, a performance evaluation as described here is recommended.
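
    The reported behavior, serial initialization plus a 1/N event loop, matches a simple Amdahl-style model. This sketch uses illustrative timing parameters, not the paper's measurements:

    ```python
    def simulated_wall_time(t_init, t_loop_serial, n_threads):
        # Amdahl-style model of a Geant4-MT run: initialization is serial
        # and cannot be parallelized, while the event loop scales as 1/N.
        return t_init + t_loop_serial / n_threads

    def speedup(t_init, t_loop_serial, n_threads):
        # Speedup over the single-threaded run under the same model.
        return (simulated_wall_time(t_init, t_loop_serial, 1)
                / simulated_wall_time(t_init, t_loop_serial, n_threads))
    ```

    With any nonzero serial fraction, speedup at 100 threads falls short of 100, consistent with the observed factors of 64 and 54, and gains flatten as the thread count grows.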

  6. Lawrence Livermore National Laboratories Perspective on Code Development and High Performance Computing Resources in Support of the National HED/ICF Effort

    SciTech Connect (OSTI)

    Clouse, C. J.; Edwards, M. J.; McCoy, M. G.; Marinak, M. M.; Verdon, C. P.

    2015-07-07

    Through its Advanced Scientific Computing (ASC) and Inertial Confinement Fusion (ICF) code development efforts, Lawrence Livermore National Laboratory (LLNL) provides a world-leading numerical simulation capability for the National HED/ICF program in support of the Stockpile Stewardship Program (SSP). In addition, the ASC effort provides the high performance computing platform capabilities upon which these codes are run. LLNL remains committed to, and will work with, the national HED/ICF program community to help ensure numerical simulation needs are met and to make those capabilities available, consistent with programmatic priorities and available resources.

  7. Final report for "High performance computing for advanced national electric power grid modeling and integration of solar generation resources", LDRD Project No. 149016.

    SciTech Connect (OSTI)

    Reno, Matthew J.; Riehm, Andrew Charles; Hoekstra, Robert John; Munoz-Ramirez, Karina; Stamp, Jason Edwin; Phillips, Laurence R.; Adams, Brian M.; Russo, Thomas V.; Oldfield, Ron A.; McLendon, William Clarence, III; Nelson, Jeffrey Scott; Hansen, Clifford W.; Richardson, Bryan T.; Stein, Joshua S.; Schoenwald, David Alan; Wolfenbarger, Paul R.

    2011-02-01

    Design and operation of the electric power grid (EPG) relies heavily on computational models. High-fidelity, full-order models are used to study transient phenomena on only a small part of the network. Reduced-order dynamic and power flow models are used when analyses involving thousands of nodes are required, because of the computational demands of simulating large numbers of nodes. The level of complexity of the future EPG will dramatically increase due to large-scale deployment of variable renewable generation, active load and distributed generation resources, adaptive protection and control systems, and price-responsive demand. High-fidelity modeling of this future grid will require significant advances in coupled, multi-scale tools and their use on high performance computing (HPC) platforms. This LDRD report demonstrates SNL's capability to apply HPC resources to three tasks: (1) high-fidelity, large-scale modeling of power system dynamics; (2) statistical assessment of grid security via Monte Carlo simulations of cyber attacks; and (3) development of models to predict the variability of solar resources at locations where little or no ground-based measurement is available.
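
    Task (2), statistical assessment via Monte Carlo simulation of cyber attacks, can be illustrated with a toy model: estimate the probability that a network stays connected when randomly chosen nodes are compromised. This is only a schematic stand-in for the grid-security analyses the report describes:

    ```python
    import random

    def mc_attack_survival(adj, n_compromised, trials, seed=0):
        # Monte Carlo estimate of the probability that the surviving
        # network remains connected after `n_compromised` randomly chosen
        # nodes are removed. adj: {node: [neighbors]}.
        rng = random.Random(seed)
        nodes = list(adj)
        ok = 0
        for _ in range(trials):
            down = set(rng.sample(nodes, n_compromised))
            alive = [n for n in nodes if n not in down]
            if not alive:
                continue
            # Breadth/depth-first search over surviving nodes.
            seen, stack = {alive[0]}, [alive[0]]
            while stack:
                for nb in adj[stack.pop()]:
                    if nb not in down and nb not in seen:
                        seen.add(nb)
                        stack.append(nb)
            ok += len(seen) == len(alive)
        return ok / trials
    ```

    On a 4-node ring, losing any single node leaves a connected path, while losing two opposite nodes disconnects the remainder, so the estimate sits strictly between 0 and 1.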

  8. Geant4 Computing Performance Benchmarking and Monitoring

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Dotti, Andrea; Elvira, V. Daniel; Folger, Gunter; Genser, Krzysztof; Jun, Soon Yung; Kowalkowski, James B.; Paterno, Marc

    2015-12-23

    Performance evaluation and analysis of large scale computing applications is essential for optimal use of resources. As detector simulation is one of the most compute intensive tasks and Geant4 is the simulation toolkit most widely used in contemporary high energy physics (HEP) experiments, it is important to monitor Geant4 through its development cycle for changes in computing performance and to identify problems and opportunities for code improvements. All Geant4 development and public releases are being profiled with a set of applications that utilize different input event samples, physics parameters, and detector configurations. Results from multiple benchmarking runs are compared to previous public and development reference releases to monitor CPU and memory usage. Observed changes are evaluated and correlated with code modifications. Besides the full summary of call stack and memory footprint, a detailed call graph analysis is available to Geant4 developers for further analysis. The set of software tools used in the performance evaluation procedure, both in sequential and multi-threaded modes, include FAST, IgProf and Open|Speedshop. In conclusion, the scalability of the CPU time and memory performance in multi-threaded application is evaluated by measuring event throughput and memory gain as a function of the number of threads for selected event samples.

  9. Geant4 Computing Performance Benchmarking and Monitoring

    SciTech Connect (OSTI)

    Dotti, Andrea; Elvira, V. Daniel; Folger, Gunter; Genser, Krzysztof; Jun, Soon Yung; Kowalkowski, James B.; Paterno, Marc

    2015-12-23

    Performance evaluation and analysis of large scale computing applications is essential for optimal use of resources. As detector simulation is one of the most compute intensive tasks and Geant4 is the simulation toolkit most widely used in contemporary high energy physics (HEP) experiments, it is important to monitor Geant4 through its development cycle for changes in computing performance and to identify problems and opportunities for code improvements. All Geant4 development and public releases are being profiled with a set of applications that utilize different input event samples, physics parameters, and detector configurations. Results from multiple benchmarking runs are compared to previous public and development reference releases to monitor CPU and memory usage. Observed changes are evaluated and correlated with code modifications. Besides the full summary of call stack and memory footprint, a detailed call graph analysis is available to Geant4 developers for further analysis. The set of software tools used in the performance evaluation procedure, both in sequential and multi-threaded modes, include FAST, IgProf and Open|Speedshop. In conclusion, the scalability of the CPU time and memory performance in multi-threaded application is evaluated by measuring event throughput and memory gain as a function of the number of threads for selected event samples.

  10. High Performance Sustainable Buildings

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    science and bioscience capabilities. Occupational Medicine will become a High Performance Sustainable Building in 2013. On the former County landfill, a photovoltaic array field...

  11. High Performance Network Monitoring

    SciTech Connect (OSTI)

    Martinez, Jesse E

    2012-08-10

    Network monitoring requires substantial data and error analysis to overcome issues with clusters. Zenoss and Splunk help monitor system log messages that report cluster issues to monitoring services. The InfiniBand infrastructure on a number of clusters was upgraded to ibmon2, which requires different filters to report errors to system administrators. The focus for this summer is to: (1) implement ibmon2 filters on monitoring boxes to report system errors to system administrators using Zenoss and Splunk; (2) modify and improve scripts for monitoring and administrative usage; (3) learn more about networks, including services and maintenance for high performance computing systems; and (4) gain real-world experience working with professionals. Filters were created for clusters running ibmon2 v1.0.0-1; ten filters are currently implemented for ibmon2 using Python. The filters check port counters against thresholds; above certain counts, they report errors to on-call system administrators and update the grid display to show the local host with the issue.
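
    The threshold idea behind the ibmon2 filters can be sketched as follows; the counter names and limits here are hypothetical, since the abstract does not list them:

    ```python
    def check_port_counters(counters, thresholds):
        # Compare InfiniBand-style port error counters against
        # per-counter thresholds and collect an alert for any counter
        # over its limit -- report only above a threshold, so routine
        # noise never pages an administrator.
        alerts = []
        for name, value in counters.items():
            limit = thresholds.get(name)
            if limit is not None and value > limit:
                alerts.append(f"{name}: {value} exceeds threshold {limit}")
        return alerts
    ```

    The returned alert strings would then be handed to whatever notification path (Zenoss, Splunk, paging) is in use.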

  12. Using High Performance Computing to Examine the Processes of Neurogenesis Underlying Pattern Separation and Completion of Episodic Information.

    SciTech Connect (OSTI)

    Aimone, James Bradley; Bernard, Michael Lewis; Vineyard, Craig Michael; Verzi, Stephen Joseph

    2014-10-01

    Adult neurogenesis in the hippocampus region of the brain is a neurobiological process that is believed to contribute to the brain's advanced abilities in complex pattern recognition and cognition. Here, we describe how realistic scale simulations of the neurogenesis process can offer both a unique perspective on the biological relevance of this process and confer computational insights that are suggestive of novel machine learning techniques. First, supercomputer based scaling studies of the neurogenesis process demonstrate how a small fraction of adult-born neurons have a uniquely larger impact in biologically realistic scaled networks. Second, we describe a novel technical approach by which the information content of ensembles of neurons can be estimated. Finally, we illustrate several examples of broader algorithmic impact of neurogenesis, including both extending existing machine learning approaches and novel approaches for intelligent sensing.

  13. System Software and Tools for High Performance Computing Environments: A report on the findings of the Pasadena Workshop, April 14--16, 1992

    SciTech Connect (OSTI)

    Sterling, T.; Messina, P.; Chen, M.

    1993-04-01

    The Pasadena Workshop on System Software and Tools for High Performance Computing Environments was held at the Jet Propulsion Laboratory from April 14 through April 16, 1992. The workshop was sponsored by a number of Federal agencies committed to the advancement of high performance computing (HPC) both as a means to advance their respective missions and as a national resource to enhance American productivity and competitiveness. Over a hundred experts in related fields from industry, academia, and government were invited to participate in this effort to assess the current status of software technology in support of HPC systems. The overall objectives of the workshop were to understand the requirements and current limitations of HPC software technology and to contribute to a basis for establishing new directions in research and development for software technology in HPC environments. This report includes reports written by the participants of the workshop's seven working groups. Materials presented at the workshop are reproduced in appendices. Additional chapters summarize the findings and analyze their implications for future directions in HPC software technology development.

  14. High Performance Sustainable Building

    Broader source: Directives, Delegations, and Requirements [Office of Management (MA)]

    2011-11-09

    This Guide provides approaches for implementing the High Performance Sustainable Building (HPSB) requirements of DOE Order 413.3B, Program and Project Management for the Acquisition of Capital Assets. Supersedes DOE G 413.3-6.

  15. High Performance Sustainable Building

    Broader source: Directives, Delegations, and Requirements [Office of Management (MA)]

    2011-11-09

    This Guide highlights the DOE O 413.3B drivers for incorporating high performance sustainable building (HPSB) principles into Critical Decisions 1 through 4 and provides guidance for implementing the Order's HPSB requirements.

  16. High Performance Sustainable Buildings

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Goal 3: High Performance Sustainable Buildings. Maintaining the conditions of a building improves the health of not only the surrounding ecosystems, but also the well-being of its occupants. The Radiological Laboratory

  17. Economic Model For a Return on Investment Analysis of United States Government High Performance Computing (HPC) Research and Development (R & D) Investment

    SciTech Connect (OSTI)

    Joseph, Earl C.; Conway, Steve; Dekate, Chirag

    2013-09-30

    This study investigated how high-performance computing (HPC) investments can improve economic success and increase scientific innovation. The research focused on the common good and provided uses for DOE, other government agencies, industry, and academia. The study created two unique economic models and an innovation index: (1) a macroeconomic model that depicts the way HPC investments result in economic advancements in the form of ROI in revenue (GDP), profits (and cost savings), and jobs; (2) a macroeconomic model that depicts the way HPC investments result in basic and applied innovations, looking at variations by sector, industry, country, and organization size; and (3) a new innovation index that provides a means of measuring and comparing innovation levels. Key findings of the pilot study include: IDC collected the required data across a broad set of organizations, with enough detail to create these models and the innovation index. The research also developed an expansive list of HPC success stories.

  18. Application of high performance computing to automotive design and manufacturing: Composite materials modeling task technical manual for constitutive models for glass fiber-polymer matrix composites

    SciTech Connect (OSTI)

    Simunovic, S; Zacharia, T

    1997-11-01

    This report provides a theoretical background for three constitutive models for a continuous strand mat (CSM) glass fiber-thermoset polymer matrix composite. The models were developed during fiscal years 1994 through 1997 as a part of the Cooperative Research and Development Agreement, "Application of High-Performance Computing to Automotive Design and Manufacturing." The constitutive relations are fully derived in the framework of the continuum program DYNA3D, and the models have been used for the simulation and impact analysis of CSM composite tubes. The analysis of simulation and experimental results shows that the model based on strain tensor split yields the most accurate results of the three implemented models. The parameters used in the models and their derivation from the physical tests are documented.

  19. High Performance Sustainable Building

    Broader source: Directives, Delegations, and Requirements [Office of Management (MA)]

    2008-06-20

    The guide supports DOE O 413.3A and provides useful information on the incorporation of high performance sustainable building principles into building-related General Plant Projects and Institutional General Plant Projects at DOE sites. Canceled by DOE G 413.3-6A. Does not cancel other directives.

  20. Task-parallel message passing interface implementation of Autodock4 for docking of very large databases of compounds using high-performance super-computers

    SciTech Connect (OSTI)

    Collignon, Barbara C; Schultz, Roland; Smith, Jeremy C; Baudry, Jerome Y

    2011-01-01

    A message passing interface (MPI)-based implementation (Autodock4.lga.MPI) of the grid-based docking program Autodock4 has been developed to allow simultaneous and independent docking of multiple compounds on up to thousands of central processing units (CPUs) using the Lamarckian genetic algorithm. The MPI version reads a single binary file containing precalculated grids that represent the protein-ligand interactions, i.e., van der Waals, electrostatic, and desolvation potentials, and needs only two input parameter files for the entire docking run. In comparison, the serial version of Autodock4 reads ASCII grid files and requires one parameter file per compound. The modifications performed result in significantly reduced input/output activity compared with the serial version. Autodock4.lga.MPI scales up to 8192 CPUs with a maximal overhead of 16.3%, of which two thirds is due to input/output operations and one third originates from MPI operations. The optimal docking strategy, which minimizes docking CPU time without lowering the quality of the database enrichments, comprises the docking of ligands preordered from the most to the least flexible and the assignment of the number of energy evaluations as a function of the number of rotatable bonds. In 24 h, on 8192 high-performance computing CPUs, the present MPI version would allow docking to a rigid protein of about 300K small flexible compounds or 11 million rigid compounds.
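
    The docking strategy in the abstract (preorder ligands from most to least flexible, then scale each ligand's energy-evaluation budget with its rotatable-bond count) can be sketched as a static task partition. This is an illustrative sketch, not Autodock4.lga.MPI's actual code: the ligand records, the linear budget rule, and the `partition_ligands` helper are all hypothetical, and a real run would distribute the work with MPI ranks rather than a dictionary.

```python
# Sketch of static task partitioning for embarrassingly parallel docking,
# using hypothetical ligands with known rotatable-bond counts.

def partition_ligands(ligands, n_ranks):
    """Preorder ligands from most to least flexible, then deal them out
    round-robin so every rank gets a similar mix of hard and easy tasks."""
    ordered = sorted(ligands, key=lambda lig: lig["rotatable_bonds"], reverse=True)
    return {rank: ordered[rank::n_ranks] for rank in range(n_ranks)}

def energy_evaluations(ligand, base=250_000, per_bond=50_000):
    """Assign the genetic algorithm's energy-evaluation budget as a
    function of rotatable-bond count (illustrative linear rule)."""
    return base + per_bond * ligand["rotatable_bonds"]

ligands = [{"name": f"lig{i}", "rotatable_bonds": i % 7} for i in range(20)]
work = partition_ligands(ligands, n_ranks=4)
for rank, tasks in work.items():
    budget = sum(energy_evaluations(lig) for lig in tasks)
    print(rank, len(tasks), budget)
```

    Because each docking task is independent, this kind of static assignment avoids any communication during the run, which is consistent with the low MPI overhead the abstract reports.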

  1. A Comparison of Library Tracking Methods in High Performance

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Library Tracking Methods in High Performance Computing Computer System Cluster and Networking Summer Institute 2013 Poster Seminar William Rosenberger (New Mexico Tech), Dennis...

  2. High Performance Window Retrofit

    SciTech Connect (OSTI)

    Shrestha, Som S; Hun, Diana E; Desjarlais, Andre Omer

    2013-12-01

    The US Department of Energy (DOE) Office of Energy Efficiency and Renewable Energy (EERE) and Traco partnered to develop cost-effective, high-performance windows for commercial buildings. The main performance requirement for these windows was that they needed to have an R-value of at least 5 ft²·°F·h/Btu. This project seeks to quantify the potential energy savings from installing these windows in commercial buildings that are at least 20 years old. To this end, we are conducting evaluations at a two-story test facility that is representative of a commercial building from the 1980s, and are gathering measurements on the performance of its windows before and after double-pane, clear-glazed units are upgraded with R5 windows. Additionally, we will use these data to calibrate EnergyPlus models that will allow us to extrapolate results to other climates. Findings from this project will provide empirical data on the benefits from high-performance windows, which will help promote their adoption in new and existing commercial buildings. This report describes the experimental setup, and includes some of the field and simulation results.
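
    As a rough illustration of why the R-5 target matters, conductive heat loss through a window scales with U = 1/R. The window area and temperature difference below are hypothetical, and real savings also depend on solar gain, infiltration, and climate, which is why the project calibrates EnergyPlus models against measurements.

```python
# Back-of-the-envelope conductive heat loss, Q = U * A * dT, comparing a
# typical older double-pane window (~R-2) with an R-5 window.
# Units: R in ft^2*F*h/Btu, A in ft^2, dT in F, Q in Btu/h.

def heat_loss_btu_per_hr(r_value, area_ft2, delta_t_f):
    u_value = 1.0 / r_value          # Btu / (ft^2 * F * h)
    return u_value * area_ft2 * delta_t_f

area, dT = 15.0, 40.0                # hypothetical window size and winter dT
q_r2 = heat_loss_btu_per_hr(2.0, area, dT)
q_r5 = heat_loss_btu_per_hr(5.0, area, dT)
print(q_r2, q_r5, 1 - q_r5 / q_r2)   # R-5 cuts conductive loss by 60%
```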

  3. High Performance Buildings Database

    DOE Data Explorer [Office of Scientific and Technical Information (OSTI)]

    The High Performance Buildings Database is a shared resource for the building industry, a unique central repository of in-depth information and data on high-performance, green building projects across the United States and abroad. The database includes information on the energy use, environmental performance, design process, finances, and other aspects of each project. Members of the design and construction teams are listed, as are sources for additional information. In total, up to twelve screens of detailed information are provided for each project profile. Projects range in size from small single-family homes or tenant fit-outs within buildings to large commercial and institutional buildings and even entire campuses. The database is a data repository as well. A series of Web-based data-entry templates allows anyone to enter information about a building project into the database. Once a project has been submitted, each of the partner organizations can review the entry and choose whether or not to publish that particular project on its own Web site.

  4. Misleading Performance Claims in Parallel Computations

    SciTech Connect (OSTI)

    Bailey, David H.

    2009-05-29

    In a previous humorous note entitled 'Twelve Ways to Fool the Masses,' I outlined twelve common ways in which performance figures for technical computer systems can be distorted. In this paper and accompanying conference talk, I give a reprise of these twelve 'methods' and give some actual examples that have appeared in peer-reviewed literature in years past. I then propose guidelines for reporting performance, the adoption of which would raise the level of professionalism and reduce the level of confusion, not only in the world of device simulation but also in the larger arena of technical computing.

  5. High-performance steels

    SciTech Connect (OSTI)

    Barsom, J.M.

    1996-03-01

    Steel is the material of choice in structures such as storage tanks, gas and oil distribution pipelines, high-rise buildings, and bridges because of its strength, ductility, and fracture toughness, as well as its repairability and recyclability. Furthermore, these properties are continually being improved via advances in steelmaking, casting, rolling, and chemistry. Developments in steelmaking have led to alloys having low sulfur, sulfide shape control, and low hydrogen. They provide reduced chemical segregation, higher fracture toughness, better through-thickness and weld heat-affected zone properties, and lower susceptibility to hydrogen cracking. Processing has moved beyond traditional practices to designed combinations of controlled rolling and cooling known as thermomechanical control processes (TMCP). In fact, chemical composition control and TMCP now enable such precise adjustment of final properties that these alloys are now known as high-performance steels (HPS), engineered materials having properties tailored for specific applications.

  6. Connecting Performance Analysis and Visualization to Advance Extreme Scale Computing

    SciTech Connect (OSTI)

    Bremer, Peer-Timo; Mohr, Bernd; Schulz, Martin; Pascucci, Valerio; Gamblin, Todd; Brunst, Holger

    2015-07-29

    The characterization, modeling, analysis, and tuning of software performance has been a central topic in High Performance Computing (HPC) since its early beginnings. The overall goal is to make HPC software run faster on particular hardware, either through better scheduling, on-node resource utilization, or more efficient distributed communication.

  7. Evaluating iterative reconstruction performance in computed tomography

    SciTech Connect (OSTI)

    Chen, Baiyu; Solomon, Justin; Ramirez Giraldo, Juan Carlos; Samei, Ehsan

    2014-12-15

    Purpose: Iterative reconstruction (IR) offers notable advantages in computed tomography (CT). However, its performance characterization is complicated by its potentially nonlinear behavior, impacting performance in terms of specific tasks. This study aimed to evaluate the performance of IR with both task-specific and task-generic strategies. Methods: The performance of IR in CT was mathematically assessed with an observer model that predicted the detection accuracy in terms of the detectability index (d′). d′ was calculated based on the properties of the image noise and resolution, the observer, and the detection task. The characterizations of image noise and resolution were extended to accommodate the nonlinearity of IR. A library of tasks was mathematically modeled at a range of sizes (radius 1–4 mm), contrast levels (10–100 HU), and edge profiles (sharp and soft). Unique d′ values were calculated for each task with respect to five radiation exposure levels (volume CT dose index, CTDIvol: 3.4–64.8 mGy) and four reconstruction algorithms (filtered backprojection reconstruction, FBP; iterative reconstruction in imaging space, IRIS; and sinogram affirmed iterative reconstruction with strengths of 3 and 5, SAFIRE3 and SAFIRE5; all provided by Siemens Healthcare, Forchheim, Germany). The d′ values were translated into the areas under the receiver operating characteristic curve (AUC) to represent human observer performance. For each task and reconstruction algorithm, a threshold dose was derived as the minimum dose required to achieve a threshold AUC of 0.9. A task-specific dose reduction potential of IR was calculated as the difference between the threshold doses for IR and FBP. A task-generic comparison was further made between IR and FBP in terms of the percent of all tasks yielding an AUC higher than the threshold. Results: IR required less dose than FBP to achieve the threshold AUC. In general, SAFIRE5 showed the most significant dose reduction
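
    The abstract's translation of d′ into AUC is, for an equal-variance Gaussian observer, the standard relation AUC = Φ(d′/√2), where Φ is the standard normal CDF. The study's exact observer model may differ, so the sketch below shows only this generic conversion, not the paper's pipeline.

```python
import math

def auc_from_dprime(d_prime):
    """AUC = Phi(d' / sqrt(2)) for an equal-variance Gaussian observer.
    With Phi(x) = 0.5 * (1 + erf(x / sqrt(2))), this simplifies to
    0.5 * (1 + erf(d' / 2))."""
    return 0.5 * (1.0 + math.erf(d_prime / 2.0))

# d' = 0 is chance performance (AUC 0.5); larger d' approaches AUC 1.0.
for dp in (0.0, 1.0, 2.0, 3.0):
    print(dp, round(auc_from_dprime(dp), 3))
```

    Under this mapping, the paper's threshold AUC of 0.9 corresponds to a specific d′, so the "threshold dose" is simply the lowest dose at which a task's d′ clears that value.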

  8. Elucidating geochemical response of shallow heterogeneous aquifers to CO2 leakage using high-performance computing: Implications for monitoring of CO2 sequestration

    SciTech Connect (OSTI)

    Navarre-Sitchler, Alexis K.; Maxwell, Reed M.; Siirila, Erica R.; Hammond, Glenn E.; Lichtner, Peter C.

    2013-03-01

    Predicting and quantifying impacts of potential carbon dioxide (CO2) leakage into shallow aquifers that overlie geologic CO2 storage formations is an important part of developing reliable carbon storage techniques. Leakage of CO2 through fractures, faults or faulty wellbores can reduce groundwater pH, inducing geochemical reactions that release solutes into the groundwater and pose a risk of degrading groundwater quality. In order to help quantify this risk, predictions of metal concentrations are needed during geologic storage of CO2. Here, we present regional-scale reactive transport simulations, at relatively fine-scale, of CO2 leakage into shallow aquifers run on the PFLOTRAN platform using high-performance computing. Multiple realizations of heterogeneous permeability distributions were generated using standard geostatistical methods. Increased statistical anisotropy of the permeability field resulted in more lateral and vertical spreading of the plume of impacted water, leading to increased Pb2+ (lead) concentrations and lower pH at a well down gradient of the CO2 leak. Pb2+ concentrations were higher in simulations where calcite was the source of Pb2+ compared to galena. The low solubility of galena effectively buffered the Pb2+ concentrations as galena reached saturation under reducing conditions along the flow path. In all cases, Pb2+ concentrations remained below the maximum contaminant level set by the EPA. Results from this study, compared to natural variability observed in aquifers, suggest that bicarbonate (HCO3) concentrations may be a better geochemical indicator of a CO2 leak under the conditions simulated here.

  9. Computational Tools to Assess Turbine Biological Performance

    SciTech Connect (OSTI)

    Richmond, Marshall C.; Serkowski, John A.; Rakowski, Cynthia L.; Strickler, Brad; Weisbeck, Molly; Dotson, Curtis L.

    2014-07-24

    Public Utility District No. 2 of Grant County (GCPUD) operates the Priest Rapids Dam (PRD), a hydroelectric facility on the Columbia River in Washington State. The dam contains 10 Kaplan-type turbine units that are now more than 50 years old. Plans are underway to refit these aging turbines with new runners. The Columbia River at PRD is a migratory pathway for several species of juvenile and adult salmonids, so passage of fish through the dam is a major consideration when upgrading the turbines. In this paper, a method for turbine biological performance assessment (BioPA) is demonstrated. Using this method, a suite of biological performance indicators is computed based on simulated data from a CFD model of a proposed turbine design. Each performance indicator is a measure of the probability of exposure to a certain dose of an injury mechanism. Using known relationships between the dose of an injury mechanism and frequency of injury (dose–response) from laboratory or field studies, the likelihood of fish injury for a turbine design can be computed from the performance indicator. By comparing the values of the indicators from proposed designs, the engineer can identify the more-promising alternatives. We present an application of the BioPA method for baseline risk assessment calculations for the existing Kaplan turbines at PRD that will be used as the minimum biological performance that a proposed new design must achieve.
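
    The BioPA chain described above (exposure distribution combined with dose-response yields injury likelihood) can be sketched with hypothetical numbers. In the real method the exposure probabilities come from CFD streamtraces through the turbine and the dose-response curves from laboratory or field studies; every value below is illustrative only.

```python
# Hedged sketch of a BioPA-style calculation: combine the probability of
# fish being exposed to each dose of an injury mechanism (hypothetical,
# standing in for CFD-derived data) with a dose-response relationship
# (hypothetical, standing in for laboratory dose-response data).

# P(exposure dose bin), e.g., shear-stress levels along turbine passage.
exposure = {"low": 0.80, "medium": 0.15, "high": 0.05}
# P(injury | dose bin), from a dose-response relationship.
dose_response = {"low": 0.001, "medium": 0.05, "high": 0.40}

def injury_probability(exposure, dose_response):
    """Total injury probability, marginalized over exposure dose bins."""
    return sum(p * dose_response[dose] for dose, p in exposure.items())

p = injury_probability(exposure, dose_response)
print(round(p, 4))  # lower is better when comparing runner designs
```

    Comparing this single number across candidate runner designs is what lets the engineer rank alternatives against the baseline Kaplan turbines.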

  10. Construction of Blaze at the University of Illinois at Chicago: A Shared, High-Performance, Visual Computer for Next-Generation Cyberinfrastructure-Accelerated Scientific, Engineering, Medical and Public Policy Research

    SciTech Connect (OSTI)

    Brown, Maxine D.; Leigh, Jason

    2014-02-17

    The Blaze high-performance visual computing system serves the high-performance computing research and education needs of the University of Illinois at Chicago (UIC). Blaze consists of a state-of-the-art, networked computer cluster and an ultra-high-resolution visualization system called CAVE2(TM) that is currently not available anywhere else in Illinois. This system is connected via a high-speed 100-Gigabit network to the State of Illinois' I-WIRE optical network, as well as to national and international high-speed networks, such as Internet2 and the Global Lambda Integrated Facility. This enables Blaze to serve as an on-ramp to national cyberinfrastructure, such as the National Science Foundation’s Blue Waters petascale computer at the National Center for Supercomputing Applications at the University of Illinois at Urbana-Champaign and the Department of Energy’s Argonne Leadership Computing Facility (ALCF) at Argonne National Laboratory. DOE award # DE-SC005067, leveraged with NSF award #CNS-0959053 for “Development of the Next-Generation CAVE Virtual Environment (NG-CAVE),” enabled us to create a first-of-its-kind high-performance visual computing system. The UIC Electronic Visualization Laboratory (EVL) worked with two U.S. companies to advance their commercial products and maintain U.S. leadership in the global information technology economy. New applications enabled by the CAVE2/Blaze visual computing system are advancing scientific research and education in the U.S. and globally, and helping to train the next-generation workforce.

  11. High Performance Energy Management

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Performance Energy Management Reduce energy use and meet your business objectives By applying continuous improvement practices similar to Lean and Six Sigma, the BPA Energy Smart...

  12. High Performance Window Attachments

    Broader source: Energy.gov (indexed) [DOE]

    Statement: * A wide range of residential window attachments are available, but they ... to model wide range of window coverings * Performed window coverings ...

  13. Using High Performance Libraries and Tools

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Using High Performance Libraries and Tools Memkind Library on Edison The memkind library is a user-extensible heap manager built on top of jemalloc that enables control of memory characteristics and partitioning of the heap between kinds of memory, including user-defined kinds. This library can be used to simulate the benefit of the high-bandwidth memory that will be available on the KNL system on the dual-socket Edison compute nodes (the two

  14. Salishan: Conference on High Speed Computing

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    ... of Notre Dame, and William Harrod, DARPA Exascale Ambitions What, me worry? : S > ... Systems (HPCS) (pdf), Robert Graybill, DARPA High-End Computing Revitalization (pdf), ...

  15. Connecting HPC and High Performance

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    HPC and High Performance Networks for Scientists and Researchers SC15 Austin, Texas November 18, 2015 1 Agenda 2 * Welcome and introductions * BoF Goals * Overview of National Research & Education Networks at work Globally * Discuss needs, challenges for leveraging HPC and high-performance networks * HPC/HTC pre-SC15 ESnet/GEANT/Internet2 survey results overview * Next steps discussion * Closing and Thank You BoF: Connecting HPC and High Performance Networks for Scientists and Researchers

  16. Thermoelectrics Partnership: High Performance Thermoelectric...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Embedded Nanoparticles Thermoelectrics Partnership: High Performance Thermoelectric Waste Heat Recovery System Based on Zintl Phase Materials with Embedded Nanoparticles 2011 DOE ...

  17. Exploration of multi-block polymer morphologies using high performance...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Exploration of multi-block polymer morphologies using high performance computing Modern material design increasingly relies on controlling small scale morphologies. Multi-block...

  18. Continuous Monitoring And Cyber Security For High Performance...

    Office of Scientific and Technical Information (OSTI)

    Continuous Monitoring And Cyber Security For High Performance Computing Malin, Alex B. Los Alamos National Laboratory; Van Heule, Graham K. Los Alamos National Laboratory...

  19. Illustrating the future prediction of performance based on computer...

    Office of Scientific and Technical Information (OSTI)

    Illustrating the future prediction of performance based on computer code, physical experiments, and critical performance parameter samples ...

  20. High Performance Home Cost Performance Trade-Offs: Production...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    High Performance Home Cost Performance Trade-Offs: Production Builders - Building America Top Innovation

  1. Virtual Design Studio (VDS) - Development of an Integrated Computer Simulation Environment for Performance Based Design of Very-Low Energy and High IEQ Buildings

    SciTech Connect (OSTI)

    Chen, Yixing; Zhang, Jianshun; Pelken, Michael; Gu, Lixing; Rice, Danial; Meng, Zhaozhou; Semahegn, Shewangizaw; Feng, Wei; Ling, Francesca; Shi, Jun; Henderson, Hugh

    2013-09-01

    Executive Summary The objective of this study was to develop a “Virtual Design Studio (VDS)”: a software platform for integrated, coordinated, and optimized design of green building systems with low energy consumption, high indoor environmental quality (IEQ), and a high level of sustainability. The VDS is intended to assist collaborating architects, engineers, and project management team members from the early phases through the detailed building design stages. It can be used to plan design tasks and workflow, and to evaluate the potential impacts of various green building strategies on building performance by using state-of-the-art simulation tools as well as industrial/professional standards and guidelines for green building system design. Engaged in the development of VDS was a multi-disciplinary research team that included architects, engineers, and software developers. Based on a review and analysis of how existing professional practices in building systems design operate, particularly those used in the U.S., Germany, and the UK, a generic process for performance-based building design, construction, and operation was proposed. It divides the whole process into five distinct stages: Assess, Define, Design, Apply, and Monitor (ADDAM). The current VDS is focused on the first three stages. The VDS considers building design as a multi-dimensional process, involving multiple design teams, design factors, and design stages. The intersection among these three dimensions defines a specific design task in terms of “who”, “what” and “when”. It also considers building design as a multi-objective process that aims to enhance five aspects of performance for green building systems: site sustainability, materials and resource efficiency, water utilization efficiency, energy efficiency and impacts on the atmospheric environment, and IEQ. The current VDS development has been limited to energy efficiency and IEQ performance, with particular focus

  2. High energy neutron Computed Tomography developed

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    High energy neutron Computed Tomography developed LANSCE now has a high-energy neutron imaging capability that can be deployed on WNR flight paths for unclassified and classified objects. May 9, 2014 Neutron tomography horizontal "slice" of a tungsten and polyethylene test object containing tungsten carbide BBs.

  3. Hydro Review: Computational Tools to Assess Turbine Biological Performance

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Hydro Review: Computational Tools to Assess Turbine Biological Performance This review covers the BioPA method used to analyze the biological performance of proposed designs to help ensure the safety of fish passing through the turbines at the Priest Rapids Dam in Grant County, Washington. Computational Tools to Assess Turbine Biological Performance (483.71 KB) More Documents & Publications

  4. Performance Tools & APIs | Argonne Leadership Computing Facility

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    & Profiling Performance Tools & APIs Tuning MPI on BGQ Tuning and Analysis Utilities (TAU) HPCToolkit HPCTW mpiP gprof Profiling Tools Darshan PAPI BGQ Performance Counters...

  5. A High Performance Computing Platform for Performing High-Volume...

    Office of Scientific and Technical Information (OSTI)

    developed external to the core simulation engine without consideration for ease of use. This has created a technical gap for applying HPC-based tools to today's power grid studies. ...

  6. Performance comparison of desktop multiprocessing and workstation cluster computing

    SciTech Connect (OSTI)

    Crandall, P.E.; Sumithasri, E.V.; Clement, M.A.

    1996-12-31

    This paper describes our initial findings regarding the performance trade-offs between cluster computing, where the participating processors are independent machines connected by a high-speed switch, and desktop multiprocessing, where the processors reside within a single workstation and share a common memory. While interprocessor communication time has typically been cited as the limiting force on performance in the cluster, bus and memory contention have had similar effects in shared memory systems. The advent of high-speed interconnects and improved bus and memory access speeds have enhanced the performance curves of both platforms. We present comparisons of the execution times of three applications with varying levels of data dependencies (numerical integration, matrix multiplication, and Jacobi iteration) across three environments: the PVM distributed memory model, the PVM shared memory model, and the Solaris threads package.
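
    Of the three benchmark applications, Jacobi iteration has the strongest data dependencies, since each sweep reads neighboring values from the previous sweep. A minimal serial sketch, with a hypothetical 1-D Laplace problem standing in for the paper's benchmark:

```python
# Minimal Jacobi iteration for a 1-D Laplace equation with fixed boundary
# values: each interior point is repeatedly replaced by the average of its
# neighbors' values from the PREVIOUS sweep. Because a sweep depends only
# on the previous iterate, the grid can be split across cluster nodes or
# threads, with only boundary exchanges needed between sweeps.

def jacobi_1d(u, sweeps):
    for _ in range(sweeps):
        u = [u[0]] + [(u[i - 1] + u[i + 1]) / 2.0
                      for i in range(1, len(u) - 1)] + [u[-1]]
    return u

# Boundary conditions 0 and 1; the interior relaxes toward the linear
# steady-state profile u[i] = i / (n - 1).
u = [0.0] * 9 + [1.0]
u = jacobi_1d(u, sweeps=2000)
print([round(x, 3) for x in u])
```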

  7. High Performance and Sustainable Buildings Guidance | Department...

    Office of Environmental Management (EM)

    High Performance and Sustainable Buildings Guidance High Performance and Sustainable Buildings Guidance (192.76 KB) More ...

  8. High Performance Sustainable Building Design RM | Department...

    Office of Environmental Management (EM)

    High Performance Sustainable Building Design RM The High Performance Sustainable Building Design (HPSBD) Review Module (RM) is a ...

  9. Performing an allreduce operation on a plurality of compute nodes of a parallel computer

    DOE Patents [OSTI]

    Faraj, Ahmad

    2012-04-17

    Methods, apparatus, and products are disclosed for performing an allreduce operation on a plurality of compute nodes of a parallel computer. Each compute node includes at least two processing cores. Each processing core has contribution data for the allreduce operation. Performing an allreduce operation on a plurality of compute nodes of a parallel computer includes: establishing one or more logical rings among the compute nodes, each logical ring including at least one processing core from each compute node; performing, for each logical ring, a global allreduce operation using the contribution data for the processing cores included in that logical ring, yielding a global allreduce result for each processing core included in that logical ring; and performing, for each compute node, a local allreduce operation using the global allreduce results for each processing core on that compute node.
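
    The patented two-phase scheme can be sketched in plain Python for a sum operation: group cores into logical rings spanning the nodes, let each ring compute a global reduction, then let each node combine its cores' ring results locally. This is a sketch with hypothetical contribution values and no real message passing; the patent's logical rings would pipeline data between cores rather than call `sum`.

```python
# Sketch of the two-phase allreduce described above. contributions[n][c]
# holds the data contributed by core c of compute node n.

def hierarchical_allreduce(contributions):
    n_nodes = len(contributions)
    n_cores = len(contributions[0])
    # Phase 1: one logical ring per core index, spanning all nodes; each
    # ring computes a global reduction of its members' contribution data.
    ring_results = [sum(contributions[n][c] for n in range(n_nodes))
                    for c in range(n_cores)]
    # Phase 2: each node locally reduces the ring results held by its own
    # cores, so every node ends up with the full allreduce result.
    return [sum(ring_results) for _ in range(n_nodes)]

contributions = [[1, 2], [3, 4], [5, 6]]      # 3 nodes x 2 cores
print(hierarchical_allreduce(contributions))  # every node holds 21
```

    The advantage of this layout is that the expensive inter-node traffic runs over several rings in parallel, while the final combine step stays inside each node's shared memory.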

  10. Illustrating the future prediction of performance based on computer...

    Office of Scientific and Technical Information (OSTI)

    Illustrating the future prediction of performance based on computer code, physical ... Citation Details In-Document Search Title: Illustrating the future prediction of ...

  11. INL High Performance Building Strategy

    SciTech Connect (OSTI)

    Jennifer D. Morton

    2010-02-01

    High performance buildings, also known as sustainable buildings and green buildings, are resource-efficient structures that minimize the impact on the environment by using less energy and water, reducing solid waste and pollutants, and limiting the depletion of natural resources, while also providing a thermally and visually comfortable working environment that increases productivity for building occupants. As Idaho National Laboratory (INL) becomes the nation’s premier nuclear energy research laboratory, the physical infrastructure will be established to help accomplish this mission. This infrastructure, particularly the buildings, should incorporate high performance sustainable design features in order to be environmentally responsible and reflect an image of progressiveness and innovation to the public and prospective employees. Additionally, INL is a large consumer of energy that contributes to both carbon emissions and resource inefficiency. In the current climate of rising energy prices and political pressure for carbon reduction, this guide will help new construction project teams to design facilities that are sustainable and reduce energy costs, thereby reducing carbon emissions. With these concerns in mind, the recommendations described in the INL High Performance Building Strategy (previously called the INL Green Building Strategy) are intended to form the INL foundation for high performance building standards. This revised strategy incorporates the latest federal and DOE orders (Executive Order [EO] 13514, “Federal Leadership in Environmental, Energy, and Economic Performance” [2009], EO 13423, “Strengthening Federal Environmental, Energy, and Transportation Management” [2007], and DOE Order 430.2B, “Departmental Energy, Renewable Energy, and Transportation Management” [2008]), the latest guidelines, trends, and observations in high performance building construction, and the latest changes to the Leadership in Energy and Environmental Design

  12. Teuchos C++ memory management classes, idioms, and related topics, the complete reference : a comprehensive strategy for safe and efficient memory management in C++ for high performance computing.

    SciTech Connect (OSTI)

    Bartlett, Roscoe Ainsworth

    2010-05-01

    The ubiquitous use of raw pointers in higher-level code is the primary cause of all memory usage problems and memory leaks in C++ programs. This paper describes what might be considered a radical approach to the problem, which is to encapsulate the use of all raw pointers and all raw calls to new and delete in higher-level C++ code. Instead, a set of cooperating template classes developed in the Trilinos package Teuchos are used to encapsulate every use of raw C++ pointers in every use case where it appears in high-level code. Included in the set of memory management classes is the typical reference-counted smart pointer class similar to boost::shared_ptr (and therefore C++0x std::shared_ptr). However, what is missing in boost and the new standard library are non-reference-counted classes for the remaining use cases where raw C++ pointers would need to be used. These classes have a debug build mode where nearly all programmer errors are caught and gracefully reported at runtime. The default optimized build mode strips all runtime checks and allows the code to perform as efficiently as raw C++ pointers with reasonable usage. Also included is a novel approach for dealing with the circular references problem that imparts little extra overhead and is almost completely invisible to most of the code (unlike the boost and therefore C++0x approach). Rather than being a radical approach, encapsulating all raw C++ pointers is simply the logical progression of a trend in the C++ development and standards community that started with std::auto_ptr and is continued (but not finished) with std::shared_ptr in C++0x. Using the Teuchos reference-counted memory management classes allows one to remove unnecessary constraints in the use of objects by removing arbitrary lifetime ordering constraints, which are a type of unnecessary coupling [23]. The code one writes with these classes will be more likely to be correct on first writing, will be less likely to contain silent (but deadly) memory

  13. High Performance Photovoltaic Project Overview

    SciTech Connect (OSTI)

    Symko-Davies, M.; McConnell, R.

    2005-01-01

    The High-Performance Photovoltaic (HiPerf PV) Project was initiated by the U.S. Department of Energy to substantially increase the viability of photovoltaics (PV) for cost-competitive applications so that PV can contribute significantly to our energy supply and environment in the 21st century. To accomplish this, the National Center for Photovoltaics (NCPV) directs in-house and subcontracted research in high-performance polycrystalline thin-film and multijunction concentrator devices. In this paper, we describe the recent research accomplishments in the in-house directed efforts and the research efforts under way in the subcontracted area.

  14. Computing in high-energy physics

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Mount, Richard P.

    2016-05-31

    I present a very personalized journey through more than three decades of computing for experimental high-energy physics, pointing out the enduring lessons that I learned. This is followed by a vision of how the computing environment will evolve in the coming ten years and the technical challenges that this will bring. I then address the scale and cost of high-energy physics software and examine the many current and future challenges, particularly those of management, funding and software-lifecycle management. Lastly, I describe recent developments aimed at improving the overall coherence of high-energy physics software.

  15. Scalable Computer Performance and Analysis (Hierarchical INTegration)

    Energy Science and Technology Software Center (OSTI)

    1999-09-02

    HINT is a program to measure a wide variety of scalable computer systems. It is capable of demonstrating the benefits of using more memory or processing power, and of improving communications within the system. HINT can be used for measurement of an existing system, while the associated program ANALYTIC HINT can be used to explain the measurements or as a design tool for proposed systems.

  16. High Performance Outdoor Lighting Accelerator

    Broader source: Energy.gov [DOE]

    Hosted by the U.S. Department of Energy (DOE)’s Weatherization and Intergovernmental Programs Office (WIPO), this webinar covered the expansion of the Better Buildings platform to include the newest initiative for the public sector: the High Performance Outdoor Lighting Accelerator (HPOLA).

  17. High Performance Bulk Thermoelectric Materials

    SciTech Connect (OSTI)

    Ren, Zhifeng

    2013-03-31

    Over the past 13-plus years, we have carried out research on the electron pairing symmetry of superconductors; the growth and field-emission properties of carbon nanotubes and semiconducting nanowires; high-performance thermoelectric materials; and other interesting materials. As a result of this research, we have published 104 papers and have educated six undergraduate students, twenty graduate students, nine postdocs, nine visitors, and one technician.

  18. High-Performance Nanostructured Coating

    Broader source: Energy.gov [DOE]

    The High-Performance Nanostructured Coating fact sheet details a SunShot project led by a University of California, San Diego research team working to develop a new high-temperature spectrally selective coating (SSC) for receiver surfaces. These receiver surfaces, used in concentrating solar power systems, rely on high-temperature SSCs to effectively absorb solar energy without emitting much blackbody radiation. The optical properties of the SSC directly determine the efficiency and maximum attainable temperature of solar receivers, which in turn influence the power-conversion efficiency and overall system cost.

  19. Liquid Cooling v. Air Cooling Evaluation in the Maui High-Performance...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Study: Innovative Energy Efficiency Approaches in NOAA's Environmental Security Computing Center in Fairmont, West Virginia High-Performance Computing Data Center Metering Protocol

  20. High Performance Sustainable Building Design RM

    Office of Environmental Management (EM)

    High Performance Sustainable Building Design Review Module March 2010 CD-0 O High 0 This ... Director HPSBD High Performance Sustainable Building Design IESNA Illuminating ...

  1. Software Synthesis for High Productivity Exascale Computing

    SciTech Connect (OSTI)

    Bodik, Rastislav

    2010-09-01

    Over the three years of our project, we accomplished three key milestones. First, we demonstrated how ideas from generative programming and software synthesis can support the development of bulk-synchronous distributed memory kernels. These ideas are realized in a new language called MSL, a C-like language that combines synthesis features with high-level notations for array manipulation and bulk-synchronous parallelism to simplify the semantic analysis required for synthesis. We also demonstrated that these high-level notations map easily to low-level C code, and showed that the performance of this generated code matches that of handwritten Fortran. Second, we introduced the idea of solver-aided domain-specific languages (SDSLs), an emerging class of computer-aided programming systems. SDSLs ease the construction of programs by automating tasks such as verification, debugging, synthesis, and non-deterministic execution, and are implemented by translating the DSL program into logical constraints. We then developed a symbolic virtual machine called Rosette, which simplifies the construction of such SDSLs and their compilers, and used Rosette to build SynthCL, a subset of OpenCL that supports synthesis. Third, we developed novel numeric algorithms that move as little data as possible, either between levels of a memory hierarchy or between parallel processors over a network. We made progress on three aspects of this problem: we determined lower bounds on communication; we compared these lower bounds to widely used versions of these algorithms and noted that the widely used algorithms usually communicate asymptotically more than is necessary; and we identified or invented new algorithms for most linear algebra problems that do attain these lower bounds, demonstrating large speed-ups in theory and practice.

  2. High performance image processing of SPRINT

    SciTech Connect (OSTI)

    DeGroot, T.

    1994-11-15

    This talk will describe computed tomography (CT) reconstruction using filtered back-projection on SPRINT parallel computers. CT is a computationally intensive task, typically requiring several minutes to reconstruct a 512x512 image. SPRINT and other parallel computers can be applied to CT reconstruction to reduce computation time from minutes to seconds. SPRINT is a family of massively parallel computers developed at LLNL. SPRINT-2.5 is a 128-node multiprocessor whose performance can exceed twice that of a Cray-Y/MP. SPRINT-3 will be 10 times faster. Described will be the parallel algorithms for filtered back-projection and their execution on SPRINT parallel computers.
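The heart of the reconstruction the abstract describes can be sketched in a few lines (a minimal, unfiltered parallel-beam back-projection; the ramp filter of true filtered back-projection and the SPRINT-specific parallel decomposition are omitted, and all names are illustrative):

```cpp
#include <cmath>
#include <vector>

// Unfiltered back-projection of a parallel-beam sinogram onto an N x N
// grid. sino[a][d] holds the projection at angle index a, detector bin d.
std::vector<double> backproject(const std::vector<std::vector<double>>& sino,
                                int N) {
    const double PI = 3.14159265358979323846;
    const int nAngles = (int)sino.size();
    const int nDet    = (int)sino[0].size();
    std::vector<double> img(N * N, 0.0);
    for (int a = 0; a < nAngles; ++a) {
        const double theta = PI * a / nAngles;
        const double c = std::cos(theta), s = std::sin(theta);
        for (int y = 0; y < N; ++y) {
            for (int x = 0; x < N; ++x) {
                // Project the pixel center onto the detector for this
                // angle and smear that detector reading back into the pixel.
                const double xc = x - (N - 1) / 2.0;
                const double yc = y - (N - 1) / 2.0;
                const int d = (int)std::lround(xc * c + yc * s
                                               + (nDet - 1) / 2.0);
                if (d >= 0 && d < nDet)
                    img[y * N + x] += sino[a][d] / nAngles;
            }
        }
    }
    return img;
}
```

The structure also shows why the task parallelizes so well on machines like SPRINT: each angle's contribution is independent, so angles (or image tiles) can be distributed across nodes and summed.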

  3. Computer Modeling of Chemical and Geochemical Processes in High...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Computer modeling of chemical and geochemical processes in high ionic strength solutions ... in brine Computer modeling of chemical and geochemical processes in high ionic ...

  4. Computationally Efficient Modeling of High-Efficiency Clean Combustion...

    Broader source: Energy.gov (indexed) [DOE]

    More Documents & Publications Computationally Efficient Modeling of High-Efficiency Clean Combustion Engines Computationally Efficient Modeling of High-Efficiency Clean Combustion ...

  5. Department of Defense High Performance Computing Modernization...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    ... CBU-115 separation from F-16 GBU-38 separation from B1B J. Keen, R. Moran, J. Dudley, J. Torres, Lt. J. Babcock, C. Cureton, and T. Eymann, AFSEO, Eglin AFB, FL; B. Jolly, J. ...

  6. DOE High Performance Computing Operational Review (HPCOR)

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    ... David Smith, Jack Deslippe, Shreyas Cholia, David Skinner, John Harney, Stuart Campbell, Rudy Garcia, Craig Ulmer, Ilana Stern. Co-Chairs: David Skinner, Stuart Campbell. ...

  7. High-Performance Computing at Los

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    a supercomputer epicenter where "big data set" really means something, a data ... Statistical analysis generally occurs over the entire data set. But more detailed analysis ...

  8. High Performance Computing for Manufacturing Partnership | GE...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    GE, US DOE Partner on HPC4Mfg projects to deliver new capabilities in 3D Printing and higher jet engine efficiency Click to email this to a friend (Opens in new window) Share on ...

  9. High Performance Computing Facility Operational Assessment, CY...

    Office of Scientific and Technical Information (OSTI)

    costs. General Electric used Jaguar to calculate the unsteady flow through turbomachinery to learn what efficiencies the traditional steady flow assumption is hiding from...

  10. Building America Webinar: High Performance Enclosure Strategies...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Next Gen Advanced Framing for High Performance Homes Integrated System Solutions Building ... - August 13, 2014 - Next Gen Advanced Framing for High Performance Homes Integrated ...

  11. Funding Opportunity: Building America High Performance Housing...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Opportunity: Building America High Performance Housing Innovation Funding Opportunity: Building America High Performance Housing Innovation November 19, 2015 - 11:51am Addthis The ...

  12. Automatic Performance Collection (AutoPerf) | Argonne Leadership Computing

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Facility Performance Tools & APIs Tuning MPI on BG/Q Tuning and Analysis Utilities (TAU) HPCToolkit HPCTW mpiP gprof Profiling Tools Darshan PAPI BG/Q Performance Counters BGPM Openspeedshop Scalasca BG/Q DGEMM Performance Automatic Performance Collection (AutoPerf) Software & Libraries IBM References Cooley Policies Automatic

  13. Routing performance analysis and optimization within a massively parallel computer

    DOE Patents [OSTI]

    Archer, Charles Jens; Peters, Amanda; Pinnow, Kurt Walter; Swartz, Brent Allen

    2013-04-16

    An apparatus, program product and method optimize the operation of a massively parallel computer system by, in part, receiving actual performance data concerning an application executed by the plurality of interconnected nodes, and analyzing the actual performance data to identify an actual performance pattern. A desired performance pattern may be determined for the application, and an algorithm may be selected from among a plurality of algorithms stored within a memory, the algorithm being configured to achieve the desired performance pattern based on the actual performance data.
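The flow the abstract describes (measure, classify into a pattern, pick a stored algorithm that matches) can be sketched abstractly; the pattern names and algorithm table below are illustrative, not taken from the patent:

```cpp
#include <algorithm>
#include <map>
#include <string>
#include <vector>

// Classify raw per-link message loads into a coarse traffic pattern:
// a large max/min spread suggests hot spots, otherwise traffic is even.
std::string identifyPattern(const std::vector<double>& linkLoads) {
    double lo = linkLoads[0], hi = linkLoads[0];
    for (double v : linkLoads) { lo = std::min(lo, v); hi = std::max(hi, v); }
    return (hi > 2.0 * lo) ? "hotspot" : "uniform";
}

// Table of stored routing algorithms keyed by the pattern they serve.
std::string selectAlgorithm(const std::string& actualPattern) {
    static const std::map<std::string, std::string> table = {
        {"hotspot", "adaptive-routing"},   // spread load around hot links
        {"uniform", "static-minimal"},     // cheap deterministic routes
    };
    return table.at(actualPattern);
}
```

In the real system the "pattern" would be richer than a max/min ratio, but the separation of measurement, classification, and table-driven selection is the essence of the claimed method.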

  14. Ultra-high resolution computed tomography imaging

    DOE Patents [OSTI]

    Paulus, Michael J.; Sari-Sarraf, Hamed; Tobin, Jr., Kenneth William; Gleason, Shaun S.; Thomas, Jr., Clarence E.

    2002-01-01

    A method for ultra-high resolution computed tomography imaging, comprising the steps of: focusing a high energy particle beam, for example x-rays or gamma-rays, onto a target object; acquiring a 2-dimensional projection data set representative of the target object; generating a corrected projection data set by applying a deconvolution algorithm, having an experimentally determined transfer function, to the 2-dimensional data set; storing the corrected projection data set; incrementally rotating the target object through an angle of approximately 180 degrees and, after each incremental rotation, repeating the radiating, acquiring, generating, and storing steps; and, after the rotating step, applying a cone-beam algorithm, for example a modified tomographic reconstruction algorithm, to the corrected projection data sets to generate a 3-dimensional image. The size of the spot focus of the beam is reduced to not greater than approximately 1 micron, and even to not greater than approximately 0.5 microns.

  15. High Performance Factory Built Housing

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Performance Factory Built Housing 2015 Building Technologies Office Peer Review Jordan Dentz, jdentz@levypartnership.com ARIES The Levy Partnership, Inc. Project Summary ...

  16. High-Performance Phylogeny Reconstruction

    SciTech Connect (OSTI)

    Tiffani L. Williams

    2004-11-10

    Under the Alfred P. Sloan Fellowship in Computational Biology, I have been afforded the opportunity to study phylogenetics, one of the most important and exciting disciplines in computational biology. A phylogeny depicts an evolutionary relationship among a set of organisms (or taxa). Typically, a phylogeny is represented by a binary tree, where modern organisms are placed at the leaves and ancestral organisms occupy internal nodes, with the edges of the tree denoting evolutionary relationships. The task of phylogenetics is to infer this tree from observations upon present-day organisms. Reconstructing phylogenies is a major component of modern research programs in many areas of biology and medicine, but it is enormously expensive. The most commonly used techniques attempt to solve NP-hard problems such as maximum likelihood and maximum parsimony, typically by bounded searches through an exponentially-sized tree-space. For example, there are over 13 billion possible trees for 13 organisms. Phylogenetic heuristics that quickly and accurately analyze large amounts of data will revolutionize the biological field. This final report highlights my activities in phylogenetics during the two-year postdoctoral period at the University of New Mexico under Prof. Bernard Moret. Specifically, it summarizes my scientific, community, and professional activities as an Alfred P. Sloan Postdoctoral Fellow in Computational Biology.

  17. A Comprehensive Look at High Performance Parallel I/O

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    A Comprehensive Look at High Performance Parallel I/O Book Signing @ SC14! Nov. 18, 5 p.m. in Booth 1939 November 10, 2014 Contact: Linda Vu, +1 510 495 2402, lvu@lbl.gov In the 1990s, high performance computing (HPC) made a dramatic transition to massively parallel processors. As this model solidified over the next 20 years, supercomputing performance increased from gigaflops (billions of calculations per second) to

  18. Building America Webinar: High Performance Space Conditioning...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    versus mini-splits being used in high performance (high R-value enclosure / low air leakage) houses, often configured as a simplified distribution system (one heat source per floor). ...

  19. High Performance Binderless Electrodes for Rechargeable Lithium...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    High Performance Binderless Electrodes for Rechargeable Lithium Batteries National ... Electrode for fast-charging Lithium Ion Batteries, Accelerating Innovation Webinar ...

  20. Thermoelectrics Partnership: High Performance Thermoelectric Waste Heat

    Broader source: Energy.gov (indexed) [DOE]

    Recovery System Based on Zintl Phase Materials with Embedded Nanoparticles | Department of Energy 70_shakouri_2011_p.pdf (856.16 KB) More Documents & Publications High Performance Zintl Phase TE Materials with Embedded Particles High performance Zintl phase TE materials with embedded nanoparticles High performance Zintl phase TE materials with embedded nanoparticles

  1. FPGAs in High Performance Computing: Results from Two LDRD Projects.

    SciTech Connect (OSTI)

    Underwood, Keith D; Ulmer, Craig D.; Thompson, David; Hemmert, Karl Scott

    2006-11-01

    Field programmable gate arrays (FPGAs) have been used as alternative computational devices for over a decade; however, they have not been used for traditional scientific computing due to their perceived lack of floating-point performance. In recent years, there has been a surge of interest in alternatives to traditional microprocessors for high performance computing. Sandia National Labs began two projects to determine whether FPGAs would be a suitable alternative to microprocessors for high performance scientific computing and, if so, how they should be integrated into the system. We present results that indicate that FPGAs could have a significant impact on future systems. FPGAs have the potential to have order of magnitude levels of performance wins on several key algorithms; however, there are serious questions as to whether the system integration challenge can be met. Furthermore, there remain challenges in FPGA programming and system level reliability when using FPGA devices. Acknowledgment: Arun Rodrigues provided valuable support and assistance in the use of the Structural Simulation Toolkit within an FPGA context. Curtis Janssen and Steve Plimpton provided valuable insights into the workings of two Sandia applications (MPQC and LAMMPS, respectively).

  2. Large Scale Computing and Storage Requirements for High Energy Physics

    SciTech Connect (OSTI)

    Gerber, Richard A.; Wasserman, Harvey

    2010-11-24

    The National Energy Research Scientific Computing Center (NERSC) is the leading scientific computing facility for the Department of Energy's Office of Science, providing high-performance computing (HPC) resources to more than 3,000 researchers working on about 400 projects. NERSC provides large-scale computing resources and, crucially, the support and expertise needed for scientists to make effective use of them. In November 2009, NERSC, DOE's Office of Advanced Scientific Computing Research (ASCR), and DOE's Office of High Energy Physics (HEP) held a workshop to characterize the HPC resources needed at NERSC to support HEP research through the next three to five years. The effort is part of NERSC's legacy of anticipating users' needs and deploying resources to meet those demands. The workshop revealed several key points, in addition to achieving its goal of collecting and characterizing computing requirements. The chief findings: (1) Science teams need access to a significant increase in computational resources to meet their research goals; (2) Research teams need to be able to read, write, transfer, store online, archive, analyze, and share huge volumes of data; (3) Science teams need guidance and support to implement their codes on future architectures; and (4) Projects need predictable, rapid turnaround of their computational jobs to meet mission-critical time constraints. This report expands upon these key points and includes others. It also presents a number of case studies as representative of the research conducted within HEP. Workshop participants were asked to codify their requirements in this case study format, summarizing their science goals, methods of solution, current and three-to-five year computing requirements, and software and support needs. Participants were also asked to describe their strategy for computing in the highly parallel, multi-core environment that is expected to dominate HPC architectures over the next few years.
The report includes

  3. NUG 2013 User Day: Trends, Discovery, and Innovation in High Performance

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Computing Home » For Users » NERSC Users Group » Annual Meetings » NUG 2013 » User Day NUG 2013 User Day: Trends, Discovery, and Innovation in High Performance Computing Wednesday, Feb. 13 Berkeley Lab Building 50 Auditorium Live streaming: http://hosting.epresence.tv/LBL/1.aspx 8:45 - Welcome: Kathy Yelick, Berkeley Lab Associate Director for Computing Sciences Trends 9:00 - The Future of High Performance Scientific Computing, Kathy Yelick, Berkeley Lab Associate Director for Computing

  4. Computing at JLab

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    JLab --- Accelerator Controls CAD CDEV CODA Computer Center High Performance Computing Scientific Computing JLab Computer Silo maintained by webmaster@jlab.org...

  5. Heterogeneous high throughput scientific computing with APM X-Gene and Intel Xeon Phi

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Abdurachmanov, David; Bockelman, Brian; Elmer, Peter; Eulisse, Giulio; Knight, Robert; Muzaffar, Shahzad

    2015-01-01

    Electrical power requirements will be a constraint on the future growth of Distributed High Throughput Computing (DHTC) as used by High Energy Physics. Performance-per-watt is a critical metric for the evaluation of computer architectures for cost-efficient computing. Additionally, future performance growth will come from heterogeneous, many-core, and high computing density platforms with specialized processors. In this paper, we examine the Intel Xeon Phi Many Integrated Cores (MIC) co-processor and Applied Micro X-Gene ARMv8 64-bit low-power server system-on-a-chip (SoC) solutions for scientific computing applications. As a result, we report our experience on software porting, performance, and energy efficiency, and evaluate the potential for use of such technologies in the context of distributed computing systems such as the Worldwide LHC Computing Grid (WLCG).

  6. PPPL and Princeton join high-performance software project | Princeton

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Plasma Physics Lab and Princeton join high-performance software project By John Greenwald July 22, 2016 Tweet Widget Google Plus One Share on Facebook Co-principal investigators William Tang and Bei Wang (Photo by Elle Starkman/Office of Communications) Co-principal investigators William Tang and Bei Wang Princeton University and the U.S. Department of Energy's Princeton Plasma Physics Laboratory (PPPL) are participating in the accelerated development of a modern high-performance computing

  7. PPPL and Princeton join high-performance software project | Princeton

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Plasma Physics Lab and Princeton join high-performance software project By John Greenwald July 22, 2016 Tweet Widget Google Plus One Share on Facebook Co-principal investigators William Tang and Bei Wang. (Photo by Elle Starkman/Office of Communications) Co-principal investigators William Tang and Bei Wang. Princeton University and the U.S. Department of Energy's Princeton Plasma Physics Laboratory (PPPL) are participating in the accelerated development of a modern high-performance computing

  8. Performing an allreduce operation on a plurality of compute nodes of a parallel computer

    DOE Patents [OSTI]

    Faraj, Ahmad

    2013-02-12

    Methods, apparatus, and products are disclosed for performing an allreduce operation on a plurality of compute nodes of a parallel computer, each node including at least two processing cores, that include: performing, for each node, a local reduction operation using allreduce contribution data for the cores of that node, yielding, for each node, a local reduction result for one or more representative cores for that node; establishing one or more logical rings among the nodes, each logical ring including only one of the representative cores from each node; performing, for each logical ring, a global allreduce operation using the local reduction result for the representative cores included in that logical ring, yielding a global allreduce result for each representative core included in that logical ring; and performing, for each node, a local broadcast operation using the global allreduce results for each representative core on that node.
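The three phases of the claimed method (local reduction, ring allreduce over representative cores, local broadcast) can be simulated sequentially for a sum operation; this is a sketch of the data flow, not the parallel implementation:

```cpp
#include <vector>

// Simulated sum-allreduce following the three phases described above.
// contributions[n][c] is the value held by core c of node n.
std::vector<std::vector<double>>
allreduceSum(const std::vector<std::vector<double>>& contributions) {
    const int nNodes = (int)contributions.size();

    // Phase 1: local reduction on each node; one representative core
    // per node ends up holding the node-local sum.
    std::vector<double> localSum(nNodes, 0.0);
    for (int n = 0; n < nNodes; ++n)
        for (double v : contributions[n]) localSum[n] += v;

    // Phase 2: logical ring over the representative cores. On each of
    // the nNodes - 1 steps, node n adds in the local sum that originated
    // `step` hops upstream, so every representative converges on the
    // global sum.
    std::vector<double> global = localSum;
    for (int step = 1; step < nNodes; ++step)
        for (int n = 0; n < nNodes; ++n)
            global[n] += localSum[(n + nNodes - step) % nNodes];

    // Phase 3: local broadcast of the global result back to every core.
    std::vector<std::vector<double>> result = contributions;
    for (int n = 0; n < nNodes; ++n)
        for (double& v : result[n]) v = global[n];
    return result;
}
```

Restricting the inter-node ring to one representative core per node is what keeps network traffic low: the other cores participate only in the cheap on-node reduction and broadcast.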

  9. High Temperature Fuel Cell Performance of Sulfonated Poly(phenylene) Proton Conducting Polymers

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Performance of Sulfonated Poly(phenylene) Proton Conducting Polymers | Department of Energy. Presentation

  10. Large Scale Production Computing and Storage Requirements for High Energy

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Physics: Target 2017 Large Scale Production Computing and Storage Requirements for High Energy Physics: Target 2017 HEPlogo.jpg The NERSC Program Requirements Review "Large Scale Computing and Storage Requirements for High Energy Physics" is organized by the Department of Energy's Office of High Energy Physics (HEP), Office of Advanced Scientific Computing Research (ASCR), and the National Energy Research Scientific Computing Center (NERSC). The review's goal is to characterize

  11. Department of Energy Laboratories, Researchers to Showcase High Performance

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Computing Expertise at SC07 Conference | U.S. DOE Office of Science (SC) Department of Energy Laboratories, Researchers to Showcase High Performance Computing Expertise at SC07 Conference News News Home Featured Articles Science Headlines 2016 2015 2014 2013 2012 2011 2010 2009 2008 2007 2006 2005 Science Highlights Presentations & Testimony News Archives Communications and Public Affairs Contact Information Office of Science U.S. Department of Energy 1000 Independence Ave., SW

  12. Liquid Cooling v. Air Cooling Evaluation in the Maui High-Performance

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Computing Center | Department of Energy Liquid Cooling v. Air Cooling Evaluation in the Maui High-Performance Computing Center Liquid Cooling v. Air Cooling Evaluation in the Maui High-Performance Computing Center Study evaluates the energy efficiency of a new, liquid-cooled computing system applied in a retrofit project compared to the previously used air-cooled system. Download the study. (1.25 MB) More Documents & Publications Energy Efficiency Opportunities in Federal High

  13. HIGH-PERFORMANCE COATING MATERIALS

    SciTech Connect (OSTI)

    SUGAMA,T.

    2007-01-01

    Corrosion, erosion, oxidation, and fouling by scale deposits pose critical issues in selecting the metal components used at geothermal power plants operating at brine temperatures up to 300 C. Replacing these components is very costly and time consuming. Currently, components made of titanium alloy and stainless steel are commonly employed to deal with these problems. However, another major consideration in using these metals is not only that they are considerably more expensive than carbon steel, but also that the corrosion-preventing passive oxide layers that develop on their outermost surfaces are susceptible to reactions with brine-induced scales, such as silicate, silica, and calcite. Such reactions lead to the formation of strong interfacial bonds between the scales and oxide layers, causing the accumulation of multiple layers of scale and impairing the function and efficacy of the plant components; furthermore, a substantial amount of time is entailed in removing them. This cleaning operation, essential for reusing the components, is one of the factors increasing the plant's maintenance costs. If inexpensive carbon steel components could be coated and lined with cost-effective materials that are stable at high hydrothermal temperatures and resist corrosion, oxidation, and fouling, this would improve the power plant's economics by engendering a considerable reduction in capital investment and a decrease in the costs of operations and maintenance through optimized maintenance schedules.

  14. Performing an allreduce operation on a plurality of compute nodes of a parallel computer

    DOE Patents [OSTI]

    Faraj, Ahmad

    2013-07-09

    Methods, apparatus, and products are disclosed for performing an allreduce operation on a plurality of compute nodes of a parallel computer, each node including at least two processing cores, that include: establishing, for each node, a plurality of logical rings, each ring including a different set of at least one core on that node, each ring including the cores on at least two of the nodes; iteratively for each node: assigning each core of that node to one of the rings established for that node to which the core has not previously been assigned, and performing, for each ring for that node, a global allreduce operation using contribution data for the cores assigned to that ring or any global allreduce results from previous global allreduce operations, yielding current global allreduce results for each core; and performing, for each node, a local allreduce operation using the global allreduce results.

  15. High Performance Valve Materials | Department of Energy

    Broader source: Energy.gov (indexed) [DOE]

    The High-Performance Green Building Partnership Consortia are groups from the public and private sectors recognized by the U.S. Department of Energy (DOE) for their commitment to high-performance green buildings. Groups that met specific qualifications outlined in the Energy Independence and Security Act of 2007 applied to be recognized as Consortia members through a Federal Register Notice. DOE recognized the following groups: Collaborative for High Performance Schools The

  16. Building America Webinar: High Performance Space Conditioning...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    II - Design Options for Locating Ducts within Conditioned Space Building America Webinar: High Performance Space Conditioning Systems, Part II - Design Options for Locating Ducts ...

  17. High performance carbon nanocomposites for ultracapacitors

    DOE Patents [OSTI]

    Lu, Wen

    2012-10-02

    The present invention relates to composite electrodes for electrochemical devices, particularly to carbon nanotube composite electrodes for high performance electrochemical devices, such as ultracapacitors.

  18. Building America Webinar: High Performance Enclosure Strategies...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    High Performance Enclosure Strategies, Part II, on August 13, 2014. BAwebinarbscbaker81314.pdf (1.03 MB) More Documents & Publications Cladding Attachment Over Thick ...

  19. Building America Webinar: High Performance Enclosure Strategies...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Strategies: Part II, New Construction - August 13, 2014 - Introduction This presentation is the Introduction to the Building America webinar, High Performance Enclosure Strategies...

  20. Functionalized High Performance Polymer Membranes for Separation...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Functionalized High Performance Polymer Membranes for Separation of Carbon Dioxide and Methane Previous Next List Natalia Blinova and Frantisek Svec, J. Mater. Chem. A, 2, 600-604...

  1. High Performance Sustainable Building - DOE Directives, Delegations...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    by Adam Pugh Functional areas: Program Management, Project Management This Guide provides approaches for implementing the High Performance Sustainable Building (HPSB) requirements...

  2. Building America Webinar: High Performance Building Enclosures...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Building America Webinar: High Performance Building Enclosures: Part I, Existing Homes The webinar, presented on May 21, 2014, focused on specific Building America projects that ...

  3. Method of making a high performance ultracapacitor

    DOE Patents [OSTI]

    Farahmandi, C. Joseph; Dispennette, John M.

    2000-07-26

    A high performance double layer capacitor having an electric double layer formed in the interface between activated carbon and an electrolyte is disclosed. The high performance double layer capacitor includes a pair of aluminum impregnated carbon composite electrodes having an evenly distributed and continuous path of aluminum impregnated within an activated carbon fiber preform saturated with a high performance electrolytic solution. The high performance double layer capacitor is capable of delivering at least 5 Wh/kg of useful energy at power ratings of at least 600 W/kg.

  4. Strategy Guideline: High Performance Residential Lighting

    SciTech Connect (OSTI)

    Holton, J.

    2012-02-01

    The Strategy Guideline: High Performance Residential Lighting has been developed to provide a tool for the understanding and application of high performance lighting in the home. The high performance lighting strategies featured in this guide are drawn from recent advances in commercial lighting for application to typical spaces found in residential buildings. This guide offers strategies to greatly reduce lighting energy use through the application of high quality fluorescent and light emitting diode (LED) technologies. It is important to note that these strategies not only save energy in the home but also serve to satisfy the homeowner's expectations for high quality lighting.

  5. Energy Proportionality and Performance in Data Parallel Computing Clusters

    SciTech Connect (OSTI)

    Kim, Jinoh; Chou, Jerry; Rotem, Doron

    2011-02-14

    Energy consumption in datacenters has recently become a major concern due to the rising operational costs and scalability issues. Recent solutions to this problem propose the principle of energy proportionality, i.e., the amount of energy consumed by the server nodes must be proportional to the amount of work performed. For data parallelism and fault tolerance purposes, most common file systems used in MapReduce-type clusters maintain a set of replicas for each data block. A covering set is a group of nodes that together contain at least one replica of the data blocks needed for performing computing tasks. In this work, we develop and analyze algorithms to maintain energy proportionality by discovering a covering set that minimizes energy consumption while placing the remaining nodes in low-power standby mode. Our algorithms can also discover covering sets in heterogeneous computing environments. In order to allow more data parallelism, we generalize our algorithms so that they can discover k-covering sets, i.e., sets of nodes that contain at least k replicas of the data blocks. Our experimental results show that we can achieve substantial energy savings without significant performance loss in diverse cluster configurations and working environments.
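The covering-set idea can be illustrated with a generic greedy set-cover sketch. The function name, data layout, and greedy heuristic are assumptions for illustration; they are not the authors' algorithms:

```python
# Hedged sketch of the covering-set idea above: given each data block's
# replica locations, greedily pick nodes until every block has at least one
# replica in the active set; the remaining nodes could go to low-power
# standby. This is plain greedy set cover, not the paper's algorithm.

def covering_set(replicas):
    """replicas: dict mapping block -> set of nodes holding a replica."""
    uncovered = set(replicas)
    active = set()
    while uncovered:
        # Pick the node that covers the most still-uncovered blocks.
        candidates = {n for b in uncovered for n in replicas[b]}
        best = max(candidates,
                   key=lambda n: sum(1 for b in uncovered if n in replicas[b]))
        active.add(best)
        uncovered -= {b for b in uncovered if best in replicas[b]}
    return active

blocks = {"b1": {"n1", "n2"}, "b2": {"n2", "n3"}, "b3": {"n3", "n1"}}
standby = {"n1", "n2", "n3"} - covering_set(blocks)
```

A real system would also weigh per-node power draw and load; the paper additionally generalizes to k-covering sets (at least k replicas active per block) and to heterogeneous nodes.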

  6. Large Scale Computing and Storage Requirements for High Energy Physics

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Large Scale Computing and Storage Requirements for High Energy Physics HEPFrontcover.png Large Scale Computing and Storage Requirements for High Energy Physics An HEP / ASCR / NERSC Workshop November 12-13, 2009 Report Large Scale Computing and Storage Requirements for High Energy Physics, Report of the Joint HEP / ASCR / NERSC Workshop conducted Nov. 12-13, 2009 https://www.nersc.gov/assets/HPC-Requirements-for-Science/HEPFrontcover.png Goals This workshop was organized by the Department of

  7. High Level Computational Chemistry Approaches to the Prediction...

    Broader source: Energy.gov (indexed) [DOE]

    Presentation on the High Level Computational Chemistry given at the DOE Theory Focus Session on Hydrogen Storage Materials on May 18, 2006. storagetheorysessiondixon.pdf (692.3 ...

  8. High Performance Plastic DSSC | ANSER Center | Argonne-Northwestern...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    High Performance Plastic DSSC (ANSER Research Highlights)...

  9. high-performance | netl.doe.gov

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    High-Performance Sorbents for Carbon Dioxide Capture from Air Project No.: DE-FE0002438 NETL has partnered with the Georgia Institute of Technology to perform a combined experimental and modeling study of air capture of CO2 using low-cost, high-capacity sorbents (a material used to absorb liquid or gas) including, but not limited to, mesoporous (material containing pores with diameters between 2 and 50 nanometers) or solids functionalized with hyperbranched amino-polymers (highly branched,

  10. Strategy Guideline. Partnering for High Performance Homes

    SciTech Connect (OSTI)

    Prahl, Duncan

    2013-01-01

    High performance houses require a high degree of coordination and have significant interdependencies between various systems in order to perform properly, meet customer expectations, and minimize risks for the builder. Responsibility for the key performance attributes is shared across the project team and can be well coordinated through advanced partnering strategies. For high performance homes, traditional partnerships need to be matured to the next level and be expanded to all members of the project team including trades, suppliers, manufacturers, HERS raters, designers, architects, and building officials as appropriate. This guide is intended for use by all parties associated in the design and construction of high performance homes. It serves as a starting point and features initial tools and resources for teams to collaborate to continually improve the energy efficiency and durability of new houses.

  11. Optimization and Performance Modeling of Stencil Computations on Modern Microprocessors

    SciTech Connect (OSTI)

    Datta, Kaushik; Kamil, Shoaib; Williams, Samuel; Oliker, Leonid; Shalf, John; Yelick, Katherine

    2007-06-01

    Stencil-based kernels constitute the core of many important scientific applications on block-structured grids. Unfortunately, these codes achieve a low fraction of peak performance, due primarily to the disparity between processor and main memory speeds. In this paper, we explore the impact of trends in memory subsystems on a variety of stencil optimization techniques and develop performance models to analytically guide our optimizations. Our work targets cache reuse methodologies across single and multiple stencil sweeps, examining cache-aware algorithms as well as cache-oblivious techniques on the Intel Itanium2, AMD Opteron, and IBM Power5. Additionally, we consider stencil computations on the heterogeneous multicore design of the Cell processor, a machine with an explicitly managed memory hierarchy. Overall our work represents one of the most extensive analyses of stencil optimizations and performance modeling to date. Results demonstrate that recent trends in memory system organization have reduced the efficacy of traditional cache-blocking optimizations. We also show that a cache-aware implementation is significantly faster than a cache-oblivious approach, while the explicitly managed memory on Cell enables the highest overall efficiency: Cell attains 88% of algorithmic peak while the best competing cache-based processor achieves only 54% of algorithmic peak performance.
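Cache blocking of the kind this work evaluates can be illustrated on a 1D 3-point stencil. This is a generic sketch, not the paper's kernels: the tile size, kernel, and names are invented, and the paper's experiments target 3D stencils on real cache hierarchies:

```python
# Illustrative cache-blocked 3-point stencil sweep: the grid is processed
# in fixed-size tiles so each tile's working set can stay resident in cache.
# Reads come from the input array (Jacobi-style), writes go to a copy.

def stencil_blocked(a, block=4):
    out = a[:]                                 # boundary points copied unchanged
    n = len(a)
    for start in range(1, n - 1, block):       # one cache-sized tile at a time
        for i in range(start, min(start + block, n - 1)):
            out[i] = (a[i - 1] + a[i] + a[i + 1]) / 3.0
    return out

stencil_blocked([0.0, 3.0, 6.0, 9.0, 12.0])
# interior points become 3-point averages: [0.0, 3.0, 6.0, 9.0, 12.0]
```

The paper's finding is that on modern memory systems the payoff of such blocking has shrunk for single sweeps, while explicitly managed memories (Cell) still reward hand-placed tiles.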

  12. High Level Computational Chemistry Approaches to the Prediction of

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    High Level Computational Chemistry Approaches to the Prediction of Energetic Properties of Chemical Hydrogen Storage Systems | Department of Energy. Presentation on the High Level Computational Chemistry given at the DOE Theory Focus Session on Hydrogen Storage Materials on May 18, 2006.

  13. Durham County- High-Performance Building Policy

    Office of Energy Efficiency and Renewable Energy (EERE)

    Durham County adopted a resolution in October 2008 that requires new non-school public buildings and facilities to meet high-performance standards. New construction of public buildings and...

  14. TAP Webinar: High Performance Outdoor Lighting Accelerator

    Broader source: Energy.gov [DOE]

    Hosted by the Technical Assistance Program (TAP), this webinar will cover the recently announced expansion of the Better Buildings platform —the High Performance Outdoor Lighting Accelerator (HPOLA).

  15. High Performance Green Building Partnership Consortia | Department...

    Broader source: Energy.gov (indexed) [DOE]

    The High-Performance Green Building Partnership Consortia are groups from the public and private sectors recognized by the U.S. Department of Energy (DOE) for their commitment to ...

  16. Computing

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Computing and Storage Requirements for FES J. Candy General Atomics, San Diego, CA Presented at DOE Technical Program Review Hilton Washington DC/Rockville Rockville, MD 19-20 March 2013 Drift waves and tokamak plasma turbulence Role in the context of fusion research * Plasma performance: In tokamak plasmas, performance is limited by turbulent radial transport of both energy and particles. * Gradient-driven: This turbulent

  17. Performing a global barrier operation in a parallel computer

    DOE Patents [OSTI]

    Archer, Charles J; Blocksome, Michael A; Ratterman, Joseph D; Smith, Brian E

    2014-12-09

    Executing computing tasks on a parallel computer that includes compute nodes coupled for data communications, where each compute node executes tasks, with one task on each compute node designated as a master task, including: for each task on each compute node until all master tasks have joined a global barrier: determining whether the task is a master task; if the task is not a master task, joining a single local barrier; if the task is a master task, joining the global barrier and the single local barrier only after all other tasks on the compute node have joined the single local barrier.
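The two-level scheme in this abstract can be simulated with Python threads standing in for tasks. This is a hedged sketch: the node/task counts, the use of `threading.Barrier`, and having rank 0 act as master are illustrative choices, not the patented mechanism:

```python
# Sketch of the hierarchical barrier described above: every task on a node
# first meets at that node's local barrier; only then does the node's
# designated master task join the global barrier shared by all masters.
import threading

N_NODES, TASKS_PER_NODE = 3, 4
global_barrier = threading.Barrier(N_NODES)           # one master per node
local_barriers = [threading.Barrier(TASKS_PER_NODE) for _ in range(N_NODES)]
done = []

def task(node, rank):
    local_barriers[node].wait()   # all tasks on the node join the local barrier
    if rank == 0:                 # rank 0 stands in for the master task
        global_barrier.wait()     # master joins the global barrier last
    done.append((node, rank))

threads = [threading.Thread(target=task, args=(n, r))
           for n in range(N_NODES) for r in range(TASKS_PER_NODE)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# all N_NODES * TASKS_PER_NODE tasks have passed both barrier levels
```

The point of the hierarchy is that only one task per node participates in the expensive global synchronization; the other tasks synchronize cheaply within the node.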

  18. High Performance Dielectrics - Energy Innovation Portal

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    High Performance Dielectrics (Sandia National Laboratories). Current dielectric materials are limited and unable to meet all operating temperature, response frequency, size, and reliability requirements needed for uncooled high-reliability electronics. To address this problem, scientists at Sandia have developed a method for producing dielectric materials using engineered

  19. Low latency, high bandwidth data communications between compute nodes in a parallel computer

    DOE Patents [OSTI]

    Archer, Charles J.; Blocksome, Michael A.; Ratterman, Joseph D.; Smith, Brian E.

    2010-11-02

    Methods, parallel computers, and computer program products are disclosed for low latency, high bandwidth data communications between compute nodes in a parallel computer. Embodiments include receiving, by an origin direct memory access (`DMA`) engine of an origin compute node, data for transfer to a target compute node; sending, by the origin DMA engine of the origin compute node to a target DMA engine on the target compute node, a request to send (`RTS`) message; transferring, by the origin DMA engine, a predetermined portion of the data to the target compute node using a memory FIFO operation; determining, by the origin DMA engine, whether an acknowledgement of the RTS message has been received from the target DMA engine; if an acknowledgement of the RTS message has not been received, transferring, by the origin DMA engine, another predetermined portion of the data to the target compute node using a memory FIFO operation; and if the acknowledgement of the RTS message has been received by the origin DMA engine, transferring, by the origin DMA engine, any remaining portion of the data to the target compute node using a direct put operation.
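The control flow of this protocol (eager FIFO chunks until the RTS acknowledgement arrives, then one direct put for the remainder) can be sketched in a few lines. The chunk size and the point at which the ack is observed are illustrative parameters, not values from the patent:

```python
# Simplified sketch of the protocol above: send fixed-size chunks via memory
# FIFO operations until the RTS acknowledgement is observed, then move the
# remaining data with a single direct put.

def transfer(data, chunk, ack_after):
    """ack_after: number of FIFO chunks sent before the ack is observed."""
    sent, ops = 0, []
    while sent < len(data):
        if len(ops) >= ack_after:             # ack received: one direct put
            ops.append(("direct_put", len(data) - sent))
            sent = len(data)
        else:                                  # no ack yet: another FIFO chunk
            n = min(chunk, len(data) - sent)
            ops.append(("memory_fifo", n))
            sent += n
    return ops

ops = transfer(bytes(1000), chunk=100, ack_after=3)
# three FIFO chunks of 100 bytes, then a direct put of the remaining 700
```

The latency win comes from not waiting for the ack before moving any data; the bandwidth win comes from finishing with a zero-copy direct put once the target is known to be ready.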

  20. Performance characteristics of recently developed high-performance heat pipes

    SciTech Connect (OSTI)

    Schlitt, R.

    1995-01-01

    For future space projects such as Earth orbiting platforms, space stations, but also Moon or Mars bases, the need to manage waste heat up to 100 kW has been identified. For this purpose large heat pipe radiators have been proposed with heat pipe lengths of 15 m and heat transport capabilities up to 4 kW. It is demonstrated that conventional axially grooved heat pipes can be improved to provide 1 kWm heat transport capability. Higher heat loads can be handled only by high-composite wick designs with large liquid cross sections and circumferential grooves in the evaporator. With these high-performance heat pipes, heat transfer coefficients of about 200 kW/m{sup 2}K and transport capabilities of 2 kW over 15 m can be reached. Configurations with liquid fillets and axially tapered liquid channels are proposed to improve the ability of the highly composite wick to prime.

  1. Building America Roadmap to High Performance Homes

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Building America Technical Update Meeting - April 29, 2013 Building America Roadmap to High Performance Homes Eric Werling Building America Coordinator Denver, CO April 29, 2013 Building Technology Office U.S. Department of Energy EERE's National Mission Mission: To create American leadership in the global transition to a clean energy economy 1) High-Impact Research, Development, and Demonstration to Make Clean Energy as Affordable and Convenient as Traditional Forms of Energy 2) Breaking Down

  2. High Performance Plastic DSSC | ANSER Center | Argonne-Northwestern

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    High Performance Plastic DSSC (ANSER Research Highlights)

  3. Strategy Guideline: Partnering for High Performance Homes

    SciTech Connect (OSTI)

    Prahl, D.

    2013-01-01

    High performance houses require a high degree of coordination and have significant interdependencies between various systems in order to perform properly, meet customer expectations, and minimize risks for the builder. Responsibility for the key performance attributes is shared across the project team and can be well coordinated through advanced partnering strategies. For high performance homes, traditional partnerships need to be matured to the next level and be expanded to all members of the project team including trades, suppliers, manufacturers, HERS raters, designers, architects, and building officials as appropriate. In an environment where the builder is the only source of communication between trades and consultants and where relationships are, in general, adversarial as opposed to cooperative, the chances that any one building system will fail are greater. Furthermore, it is much harder for the builder to identify and capitalize on synergistic opportunities. Partnering can help bridge the cross-functional aspects of the systems approach and achieve performance-based criteria. Critical success factors for partnering include support from top management, mutual trust, effective and open communication, effective coordination around common goals, team building, appropriate use of an outside facilitator, a partnering charter, progress toward common goals, an effective problem-solving process, long-term commitment, continuous improvement, and a positive experience for all involved.

  4. High Performance Builder Spotlight: Imagine Homes

    SciTech Connect (OSTI)

    2011-01-01

    Imagine Homes, working with the DOE's Building America research team member IBACOS, has developed a system that can be replicated by other contractors to build affordable, high-performance homes. Imagine Homes has used the system to produce more than 70 Builders Challenge-certified homes per year in San Antonio over the past five years.

  5. Project materials [Commercial High Performance Buildings Project

    SciTech Connect (OSTI)

    2001-01-01

    The Consortium for High Performance Buildings (ChiPB) is an outgrowth of DOE's Commercial Whole Buildings Roadmapping initiatives. It is a team-driven public/private partnership that seeks to enable and demonstrate the benefit of buildings that are designed, built and operated to be energy efficient, environmentally sustainable, superior quality, and cost effective.

  6. Commercial Buildings High Performance Rooftop Unit Challenge

    SciTech Connect (OSTI)

    2011-12-16

    The U.S. Department of Energy (DOE) and the Commercial Building Energy Alliances (CBEAs) are releasing a new design specification for high performance rooftop air conditioning units (RTUs). Manufacturers who develop RTUs based on this new specification will find strong interest from the commercial sector due to the energy and financial savings.

  7. Strategy Guideline. High Performance Residential Lighting

    SciTech Connect (OSTI)

    Holton, J.

    2012-02-01

    This report has been developed to provide a tool for the understanding and application of high performance lighting in the home. The strategies featured in this guide are drawn from recent advances in commercial lighting for application to typical spaces found in residential buildings. This guide offers strategies to greatly reduce lighting energy use through the application of high quality fluorescent and light emitting diode (LED) technologies. It is important to note that these strategies not only save energy in the home but also serve to satisfy the homeowner’s expectations for high quality lighting.

  8. High voltage electric substation performance in earthquakes

    SciTech Connect (OSTI)

    Eidinger, J.; Ostrom, D.; Matsuda, E.

    1995-12-31

    This paper examines the performance of several types of high voltage substation equipment in past earthquakes. Damage data is provided in chart form. This data is then developed into a tool for estimating the performance of a substation subjected to an earthquake. First, suggestions are made about the development of equipment class fragility curves that represent the expected earthquake performance of different voltages and types of equipment. Second, suggestions are made about how damage to individual pieces of equipment at a substation likely affects the post-earthquake performance of the substation as a whole. Finally, estimates are provided as to how quickly a substation, at various levels of damage, can be restored to operational service after the earthquake.

  9. High performance anode for advanced Li batteries

    SciTech Connect (OSTI)

    Lake, Carla

    2015-11-02

    The overall objective of this Phase I SBIR effort was to advance the manufacturing technology for ASI’s Si-CNF high-performance anode by creating a framework for large volume production and utilization of low-cost Si-coated carbon nanofibers (Si-CNF) for the battery industry. This project explores the use of nano-structured silicon which is deposited on a nano-scale carbon filament to achieve the benefits of high cycle life and high charge capacity without the consequent fading of, or failure in, capacity resulting from stress-induced fracturing of the Si particles and de-coupling from the electrode. ASI’s patented coating process distinguishes itself from others in that it is highly reproducible, readily scalable, and results in a Si-CNF composite structure containing 25-30% silicon, with a compositionally graded Si-CNF interface that significantly improves cycling stability and enhances adhesion of silicon to the carbon fiber support. In Phase I, the team demonstrated that the production of the Si-CNF anode material can successfully be transitioned from a static bench-scale reactor into a fluidized bed reactor. In addition, ASI made significant progress in the development of low-cost, quick testing methods which can be performed on silicon-coated CNFs as a means of quality control. To date, weight change, density, and cycling performance were the key metrics used to validate the high performance anode material. Under this effort, ASI made strides to establish a quality control protocol for the large volume production of Si-CNFs and has identified several key technical thrusts for future work. Using the results of this Phase I effort as a foundation, ASI has defined a path forward to commercialize and deliver high-volume, low-cost production of Si-CNF material for anodes in Li-ion batteries.

  10. High Performance Commercial Fenestration Framing Systems

    SciTech Connect (OSTI)

    Mike Manteghi; Sneh Kumar; Joshua Early; Bhaskar Adusumalli

    2010-01-31

    A major objective of the U.S. Department of Energy is to have a zero energy commercial building by the year 2025. Windows have a major influence on the energy performance of the building envelope as they control over 55% of building energy load, and represent one important area where technologies can be developed to save energy. Aluminum framing systems are used in over 80% of commercial fenestration products (i.e. windows, curtain walls, store fronts, etc.). Aluminum framing systems are often required in commercial buildings because of their inherent good structural properties and long service life, which is required from commercial and architectural frames. At the same time, they are lightweight and durable, requiring very little maintenance, and offer design flexibility. An additional benefit of aluminum framing systems is their relatively low cost and easy manufacturability. Aluminum, being an easily recyclable material, also offers sustainable features. However, from an energy efficiency point of view, aluminum frames have lower thermal performance due to the very high thermal conductivity of aluminum. Fenestration systems constructed of aluminum alloys therefore have lower performance in terms of being an effective barrier to energy transfer (heat loss or gain). Despite the lower energy performance, aluminum is the material of choice for commercial framing systems and dominates the commercial/architectural fenestration market for the reasons mentioned above. In addition, there is no other cost-effective and energy-efficient replacement material available to take the place of aluminum in the commercial/architectural market. Hence it is imperative to improve the performance of aluminum framing systems to improve the energy performance of commercial fenestration systems and in turn reduce the energy consumption of commercial buildings and achieve a zero energy building by 2025. The objective of this project was to develop high performance, energy efficient commercial

  11. CRITICAL ISSUES IN HIGH END COMPUTING - FINAL REPORT

    SciTech Connect (OSTI)

    Corones, James

    2013-09-23

    High-End computing (HEC) has been a driver for advances in science and engineering for the past four decades. Increasingly, HEC has become a significant element in the national security, economic vitality, and competitiveness of the United States. Advances in HEC provide results that cut across traditional disciplinary and organizational boundaries. This program provides opportunities to share information about HEC systems and computational techniques across multiple disciplines and organizations through conferences and exhibitions of HEC advances held in Washington DC, so that mission agency staff, scientists, and industry can come together with White House, Congressional, and Legislative staff in an environment conducive to the sharing of technical information, accomplishments, goals, and plans. A common thread across this series of conferences is the understanding of computational science and applied mathematics techniques across a diverse set of application areas of interest to the Nation. The specific objectives of this program are: Program Objective 1. To provide opportunities to share information about advances in high-end computing systems and computational techniques between mission critical agencies, agency laboratories, academics, and industry. Program Objective 2. To gather pertinent data and address specific topics of wide interest to mission critical agencies. Program Objective 3. To promote a continuing discussion of critical issues in high-end computing. Program Objective 4. To provide a venue where a multidisciplinary scientific audience can discuss the difficulties applying computational science techniques to specific problems and can specify future research that, if successful, will eliminate these problems.

  12. Design and Development of High-Performance Polymer Fuel Cell...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Design and Development of High-Performance Polymer Fuel Cell Membranes Design and Development of High-Performance Polymer Fuel Cell Membranes A presentation to the High Temperature ...

  13. High Performance Leasing Strategies for State and Local Governments...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    High Performance Leasing Strategies for State and Local Governments High Performance Leasing Strategies for State and Local Governments Presentation for the SEE Action Series: High ...

  14. BG/Q Performance Counters | Argonne Leadership Computing Facility

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Performance Tools & APIs Tuning MPI on BGQ Tuning and Analysis Utilities (TAU) HPCToolkit HPCTW mpiP gprof Profiling Tools Darshan PAPI BGQ Performance Counters BGPM...

  15. High-Performance Energy Applications and Systems

    SciTech Connect (OSTI)

    Miller, Barton

    2014-05-19

    The Paradyn project has a history of developing algorithms, techniques, and software that push the cutting edge of tool technology for high-end computing systems. Under this funding, we are working on a three-year agenda to make substantial new advances in support of new and emerging Petascale systems. The overall goal for this work is to address the steady increase in complexity of these petascale systems. Our work covers two key areas: (1) The analysis, instrumentation and control of binary programs. Work in this area falls under the general framework of the Dyninst API tool kits. (2) Infrastructure for building tools and applications at extreme scale. Work in this area falls under the general framework of the MRNet scalability framework. Note that work done under this funding is closely related to work done under a contemporaneous grant, “Foundational Tools for Petascale Computing”, SC0003922/FG02-10ER25940, UW PRJ27NU.

  16. NREL: Computational Science Home Page

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    high-performance computing, computational science, applied mathematics, scientific data management, visualization, and informatics. NREL is home to the largest high performance...

  17. High Performance Walls in Hot-Dry Climates (Technical Report...

    Office of Scientific and Technical Information (OSTI)

    Title: High Performance Walls in Hot-Dry Climates. High performance walls represent a high priority...

  18. Proceedings from the conference on high speed computing: High speed computing and national security

    SciTech Connect (OSTI)

    Hirons, K.P.; Vigil, M.; Carlson, R.

    1997-07-01

    This meeting covered the following topics: technologies/national needs/policies: past, present and future; information warfare; crisis management/massive data systems; risk assessment/vulnerabilities; Internet law/privacy and rights of society; challenges to effective ASCI programmatic use of 100 TFLOPs systems; and new computing technologies.

  19. High Performance Preconditioners and Linear Solvers

    Energy Science and Technology Software Center (OSTI)

    2006-07-27

    Hypre is a software library focused on the solution of large, sparse linear systems of equations on massively parallel computers.

  20. Reducing power consumption while performing collective operations on a plurality of compute nodes

    DOE Patents [OSTI]

    Archer, Charles J.; Blocksome, Michael A.; Peters, Amanda E.; Ratterman, Joseph D.; Smith, Brian E.

    2011-10-18

    Methods, apparatus, and products are disclosed for reducing power consumption while performing collective operations on a plurality of compute nodes that include: receiving, by each compute node, instructions to perform a type of collective operation; selecting, by each compute node from a plurality of collective operations for the collective operation type, a particular collective operation in dependence upon power consumption characteristics for each of the plurality of collective operations; and executing, by each compute node, the selected collective operation.
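The selection step in this abstract reduces to a lookup over modeled power costs per collective variant. The variant names and wattages below are invented for illustration; the patent does not specify them:

```python
# Hedged sketch of the per-node selection described above: from the known
# implementations of the requested collective operation type, pick the
# variant with the lowest modeled power draw. Table values are illustrative.

POWER_WATTS = {                       # hypothetical per-variant power model
    "allreduce": {"ring": 42.0, "tree": 55.0, "butterfly": 60.0},
    "broadcast": {"tree": 30.0, "flat": 48.0},
}

def select_collective(op_type):
    variants = POWER_WATTS[op_type]
    return min(variants, key=variants.get)

chosen = select_collective("allreduce")   # "ring": lowest modeled power
```

In the claimed method every compute node makes the same selection from its local power characteristics before executing the chosen operation, so no extra coordination round is needed.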

  1. Performance Refactoring of Instrumentation, Measurement, and Analysis Technologies for Petascale Computing. The PRIMA Project

    SciTech Connect (OSTI)

    Malony, Allen D.; Wolf, Felix G.

    2014-01-31

    The growing number of cores provided by today's high-end computing systems presents substantial challenges to application developers in their pursuit of parallel efficiency. To find the most effective optimization strategy, application developers need insight into the runtime behavior of their code. The University of Oregon (UO) and the Juelich Supercomputing Centre of Forschungszentrum Juelich (FZJ) develop the performance analysis tools TAU and Scalasca, respectively, which allow high-performance computing (HPC) users to collect and analyze relevant performance data even at very large scales. TAU and Scalasca are considered among the most advanced parallel performance systems available, and are used extensively across HPC centers in the U.S., Germany, and around the world. The TAU and Scalasca groups share a heritage of parallel performance tool research and partnership throughout the past fifteen years. Indeed, the close interactions of the two groups resulted in a cross-fertilization of tool ideas and technologies that pushed TAU and Scalasca to what they are today. It also produced two performance systems with an increasing degree of functional overlap. While each tool has its specific analysis focus, the tools were implementing measurement infrastructures that were substantially similar. Because each tool provides complementary performance analysis, sharing of measurement results is valuable to provide the user with more facets to understand performance behavior. However, each measurement system was producing performance data in different formats, requiring data interoperability tools to be created. A common measurement and instrumentation system was needed to more closely integrate TAU and Scalasca and to avoid the duplication of development and maintenance effort. The PRIMA (Performance Refactoring of Instrumentation, Measurement, and Analysis) project was proposed over three years ago as a joint international effort between UO and FZJ to accomplish

  2. Performance Refactoring of Instrumentation, Measurement, and Analysis Technologies for Petascale Computing: the PRIMA Project

    SciTech Connect (OSTI)

    Malony, Allen D.; Wolf, Felix G.

    2014-01-31

    The growing number of cores provided by today's high-end computing systems presents substantial challenges to application developers in their pursuit of parallel efficiency. To find the most effective optimization strategy, application developers need insight into the runtime behavior of their code. The University of Oregon (UO) and the Juelich Supercomputing Centre of Forschungszentrum Juelich (FZJ) develop the performance analysis tools TAU and Scalasca, respectively, which allow high-performance computing (HPC) users to collect and analyze relevant performance data even at very large scales. TAU and Scalasca are considered among the most advanced parallel performance systems available, and are used extensively across HPC centers in the U.S., Germany, and around the world. The TAU and Scalasca groups share a heritage of parallel performance tool research and partnership throughout the past fifteen years. Indeed, the close interactions of the two groups resulted in a cross-fertilization of tool ideas and technologies that pushed TAU and Scalasca to what they are today. It also produced two performance systems with an increasing degree of functional overlap. While each tool has its specific analysis focus, the tools implemented measurement infrastructures that were substantially similar. Because each tool provides complementary performance analysis, sharing measurement results is valuable for giving the user more facets with which to understand performance behavior. However, each measurement system produced performance data in different formats, requiring data interoperability tools to be created. A common measurement and instrumentation system was needed to more closely integrate TAU and Scalasca and to avoid the duplication of development and maintenance effort. The PRIMA (Performance Refactoring of Instrumentation, Measurement, and Analysis) project was proposed over three years ago as a joint international effort between UO and FZJ to accomplish these

  3. DOE High Performance Concentrator PV Project

    SciTech Connect (OSTI)

    McConnell, R.; Symko-Davies, M.

    2005-08-01

    Much in demand are next-generation photovoltaic (PV) technologies that can be used economically to make a large-scale impact on world electricity production. The U.S. Department of Energy (DOE) initiated the High-Performance Photovoltaic (HiPerf PV) Project to substantially increase the viability of PV for cost-competitive applications so that PV can contribute significantly to both our energy supply and environment. To accomplish such results, the National Center for Photovoltaics (NCPV) directs in-house and subcontracted research in high-performance polycrystalline thin-film and multijunction concentrator devices with the goal of enabling progress of high-efficiency technologies toward commercial-prototype products. We will describe the details of the subcontractor and in-house progress in exploring and accelerating pathways of III-V multijunction concentrator solar cells and systems toward their long-term goals. By 2020, we anticipate that this project will have demonstrated 33% system efficiency and a system price of $1.00/Wp for concentrator PV systems using III-V multijunction solar cells with efficiencies over 41%.

  4. High performance robotic traverse of desert terrain.

    SciTech Connect (OSTI)

    Whittaker, William

    2004-09-01

    This report presents tentative innovations to enable unmanned vehicle guidance for a class of off-road traverse at sustained speeds greater than 30 miles per hour. Analyses and field trials suggest that even greater navigation speeds might be achieved. The performance calls for innovation in mapping, perception, planning and inertial-referenced stabilization of components, hosted aboard capable locomotion. The innovations are motivated by the challenge of autonomous ground vehicle traverse of 250 miles of desert terrain in less than 10 hours, averaging 30 miles per hour. GPS coverage is assumed to be available with localized blackouts. Terrain and vegetation are assumed to be akin to that of the Mojave Desert. This terrain is interlaced with networks of unimproved roads and trails, which are a key to achieving the high performance mapping, planning and navigation that is presented here.

  5. Mira Performance Boot Camp 2014 | Argonne Leadership Computing...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Mira Performance Boot Camp 2014 See agenda for links to presentations. Ready to take your code to the next level? At the Mira Performance Boot Camp,...

  6. BG/Q DGEMM Performance | Argonne Leadership Computing Facility

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    BG/Q DGEMM Performance The table below represents the percentage of peak performance for a matrix-matrix multiply (BLAS3 dgemm) routine as it is implemented on a BG/Q PowerPC A2 core...
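    As a rough illustration of how such a percent-of-peak figure is derived: a BG/Q A2 core at 1.6 GHz with 4-wide fused multiply-add can retire up to 8 double-precision flops per cycle, i.e. 12.8 GFLOP/s peak. The sketch below assumes those figures; the function names and the example timing are illustrative.

```python
# Percent-of-peak for an n x n DGEMM on one BG/Q A2 core.
# Peak assumes 1.6 GHz x 4-wide QPX FMA = 8 flops/cycle = 12.8 GFLOP/s.
PEAK_GFLOPS = 1.6 * 8

def dgemm_gflops(n, seconds):
    """Achieved GFLOP/s: a dense n x n matrix multiply costs ~2*n^3 flops."""
    return 2.0 * n**3 / seconds / 1e9

def percent_of_peak(n, seconds):
    """Achieved rate as a percentage of single-core peak."""
    return 100.0 * dgemm_gflops(n, seconds) / PEAK_GFLOPS

# e.g. a 1000 x 1000 DGEMM finishing in 0.2 s runs at 10 GFLOP/s,
# about 78% of single-core peak.
```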

  7. High Performance Piezoelectric Actuated Gimbal (HIERAX)

    SciTech Connect (OSTI)

    Charles Tschaggeny; Warren Jones; Eberhard Bamberg

    2007-04-01

    This paper presents a 3-axis gimbal whose three rotational axes are actuated by a novel drive system: linear piezoelectric motors whose linear output is converted to rotation by using drive disks. Advantages of this technology are: fast response, high accelerations, dither-free actuation and backlash-free positioning. The gimbal was developed to house a laser range finder for the purpose of tracking and guiding unmanned aerial vehicles during landing maneuvers. The tilt axis was built and the test results indicate excellent performance that meets design specifications.

  8. High-performance laboratories and cleanrooms

    SciTech Connect (OSTI)

    Tschudi, William; Sartor, Dale; Mills, Evan; Xu, Tengfang

    2002-07-01

    The California Energy Commission sponsored this roadmap to guide energy efficiency research and deployment for high performance cleanrooms and laboratories. Industries and institutions utilizing these building types (termed high-tech buildings) have played an important part in the vitality of the California economy. This roadmap's key objective is to present a multi-year agenda to prioritize and coordinate research efforts. It also addresses delivery mechanisms to get the research products into the market. Because of the importance to the California economy, it is appropriate and important for California to take the lead in assessing the energy efficiency research needs, opportunities, and priorities for this market. In addition to the importance to California's economy, energy demand for this market segment is large and growing (estimated at 9,400 GWh for 1996, Mills et al. 1996). With their 24-hour continuous operation, high-tech facilities are a major contributor to peak electrical demand. Laboratories and cleanrooms constitute the high-tech building market, and although each building type has its unique features, they are similar in that they are extremely energy intensive, involve special environmental considerations, have very high ventilation requirements, and are subject to regulations--primarily safety driven--that tend to have adverse energy implications. High-tech buildings have largely been overlooked in past energy efficiency research. Many industries and institutions utilize laboratories and cleanrooms. As illustrated, there are many industries operating cleanrooms in California. These include semiconductor manufacturing, semiconductor suppliers, pharmaceutical, biotechnology, disk drive manufacturing, flat panel displays, automotive, aerospace, food, hospitals, medical devices, universities, and federal research facilities.

  9. High-Performance Leasing for State and Local Governments

    SciTech Connect (OSTI)

    Existing Commercial Buildings Working Group

    2012-05-23

    Describes the value of high-performance leasing and how states can lead by example by using high-performance leases in their facilities and encourage high-performance leasing in the private sector.

  10. Performing a local reduction operation on a parallel computer

    DOE Patents [OSTI]

    Blocksome, Michael A; Faraj, Daniel A

    2013-06-04

    A parallel computer including compute nodes, each including two reduction processing cores, a network write processing core, and a network read processing core, each processing core assigned an input buffer. Copying, in interleaved chunks by the reduction processing cores, contents of the reduction processing cores' input buffers to an interleaved buffer in shared memory; copying, by one of the reduction processing cores, contents of the network write processing core's input buffer to shared memory; copying, by another of the reduction processing cores, contents of the network read processing core's input buffer to shared memory; and locally reducing in parallel by the reduction processing cores: the contents of the reduction processing core's input buffer; every other interleaved chunk of the interleaved buffer; the copied contents of the network write processing core's input buffer; and the copied contents of the network read processing core's input buffer.
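    The copy-then-reduce scheme in the claim can be sketched in miniature as a sequential simulation. Buffer contents, the chunk size, and all names below are illustrative, not taken from the patent; the point is only the interleaved shared buffer plus the two copied network buffers feeding one element-wise local reduction.

```python
CHUNK = 4  # interleaving granularity (illustrative)

def interleave(buf_a, buf_b, chunk=CHUNK):
    """Two reduction cores copy their input buffers, in alternating
    chunks, into one interleaved buffer in shared memory.
    Assumes len(buf_a) == len(buf_b), a multiple of `chunk`."""
    out = []
    for i in range(0, len(buf_a), chunk):
        out.extend(buf_a[i:i + chunk])
        out.extend(buf_b[i:i + chunk])
    return out

def local_reduce(red_a, red_b, net_write, net_read):
    """Element-wise sum of all four cores' input buffers, read back
    through the interleaved shared buffer plus the copied network
    write/read buffers."""
    shared = interleave(red_a, red_b)
    total = []
    for i in range(len(red_a)):
        chunk_no, off = divmod(i, CHUNK)
        a = shared[2 * chunk_no * CHUNK + off]        # red_a's chunks
        b = shared[(2 * chunk_no + 1) * CHUNK + off]  # red_b's chunks
        total.append(a + b + net_write[i] + net_read[i])
    return total
```

The net effect is an ordinary element-wise sum; the interleaving only exists so the two reduction cores can each work on every other chunk in parallel.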

  11. Performing a local reduction operation on a parallel computer

    SciTech Connect (OSTI)

    Blocksome, Michael A.; Faraj, Daniel A.

    2012-12-11

    A parallel computer including compute nodes, each including two reduction processing cores, a network write processing core, and a network read processing core, each processing core assigned an input buffer. Copying, in interleaved chunks by the reduction processing cores, contents of the reduction processing cores' input buffers to an interleaved buffer in shared memory; copying, by one of the reduction processing cores, contents of the network write processing core's input buffer to shared memory; copying, by another of the reduction processing cores, contents of the network read processing core's input buffer to shared memory; and locally reducing in parallel by the reduction processing cores: the contents of the reduction processing core's input buffer; every other interleaved chunk of the interleaved buffer; the copied contents of the network write processing core's input buffer; and the copied contents of the network read processing core's input buffer.

  12. Mira Performance Boot Camp 2015 | Argonne Leadership Computing...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Performance Vis - Holger Brunst, Dresden University of Technology 10:00 - 10:30 a.m. TAU Performance System - Sameer Shende, ParaTools, Inc. 10:30 a.m. - 12:00 p.m. Hands-on...

  13. Materials Modeling for High-Performance Radiation Detectors ...

    Office of Scientific and Technical Information (OSTI)

    Technical Report: Materials Modeling for High-Performance Radiation Detectors Citation Details In-Document Search Title: Materials Modeling for High-Performance Radiation Detectors ...

  14. Natural Refrigerant High-Performance Heat Pump for Commercial...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Refrigerant High-Performance Heat Pump for Commercial Applications Natural Refrigerant High-Performance Heat Pump for Commercial Applications Credit: S-RAM Credit: S-RAM Lead ...

  15. LBNL: High Performance Active Perimeter Building Systems - 2015...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    LBNL: High Performance Active Perimeter Building Systems - 2015 Peer Review Presenter: Eleanor Lee, LBNL View the Presentation PDF icon LBNL: High Performance Active Perimeter ...

  16. Energy Design Guidelines for High Performance Schools: Hot and...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Design Guidelines for High Performance Schools: Hot and Humid Climates Energy Design Guidelines for High Performance Schools: Hot and Humid Climates School districts around the...

  17. USABC Development of Advanced High-Performance Batteries for...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Development of Advanced High-Performance Batteries for EV Applications USABC Development of Advanced High-Performance Batteries for EV Applications 2012 DOE Hydrogen and Fuel Cells ...

  18. Webinar: ENERGY STAR Hot Water Systems for High Performance Homes...

    Energy Savers [EERE]

    ENERGY STAR Hot Water Systems for High Performance Homes Webinar: ENERGY STAR Hot Water Systems for High Performance Homes This presentation is from the Building America research ...

  19. Memorandum of American High-Performance Buildings Coalition DOE...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Memorandum of American High-Performance Buildings Coalition DOE Meeting August 19, 2013 Memorandum of American High-Performance Buildings Coalition DOE Meeting August 19, 2013 This ...

  20. Enhanced High Temperature Performance of NOx Storage/Reduction...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    More Documents & Publications Enhanced High Temperature Performance of NOx Storage/Reduction (NSR) Materials Enhanced High Temperature Performance of NOx Storage/Reduction (NSR) ...

  1. Enhanced High Temperature Performance of NOx Storage/Reduction...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    More Documents & Publications Enhanced High and Low Temperature Performance of NOx Reduction Materials Enhanced High Temperature Performance of NOx Storage/Reduction (NSR) ...

  2. Business Metrics for High-Performance Homes: A Colorado Springs...

    Office of Scientific and Technical Information (OSTI)

    Technical Report: Business Metrics for High-Performance Homes: A Colorado Springs Case Study Citation Details In-Document Search Title: Business Metrics for High-Performance Homes: ...

  3. New rocket propellant and motor design offer high-performance...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    New rocket propellant and motor design offer high-performance and safety New rocket propellant and motor design offer high-performance and safety Scientists recently flight tested ...

  4. Federal Leadership in High Performance and Sustainable Buildings...

    Broader source: Energy.gov (indexed) [DOE]

    and operation of High-Performance and Sustainable Buildings. Federal Leadership in High Performance and Sustainable Buildings Memorandum of Understanding (148.11 KB) ...

  5. Reduced Call-Backs with High Performance Production Builders...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Reduced Call-Backs with High Performance Production Builders - Building America Top Innovation Reduced Call-Backs with High Performance Production Builders - Building America Top ...

  6. Integrated Design: A High-Performance Solution for Affordable...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Design: A High-Performance Solution for Affordable Housing Integrated Design: A High-Performance Solution for Affordable Housing ARIES lab houses. Photo courtesy of The Levy ...

  7. Building America Roadmap to High Performance Homes | Department...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Roadmap to High Performance Homes Building America Roadmap to High Performance Homes This presentation was delivered at the U.S. Department of Energy Building America Technical ...

  8. High-Performance Home Technologies: Solar Thermal & Photovoltaic...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    High-Performance Home Technologies: Solar Thermal & Photovoltaic Systems; Volume 6 Building America Best Practices Series High-Performance Home Technologies: Solar Thermal &...

  9. Halide and Oxy-halide Eutectic Systems for High Performance High...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Halide and Oxy-halide Eutectic Systems for High Performance High Temperature Heat Transfer Fluids Halide and Oxy-halide Eutectic Systems for High Performance High Temperature Heat ...

  10. High-performance Si microwire photovoltaics

    SciTech Connect (OSTI)

    Kelzenberg, Michael D.; Turner-Evans, Daniel B.; Putnam, Morgan C.; Boettcher, Shannon W.; Briggs, Ryan M.; Baek, Jae Y.; Lewis, Nathan S.; Atwater, Harry A.

    2011-01-07

    Crystalline Si wires, grown by the vapor-liquid-solid (VLS) process, have emerged as promising candidate materials for low-cost, thin-film photovoltaics. Here, we demonstrate VLS-grown Si microwires that have suitable electrical properties for high-performance photovoltaic applications, including long minority-carrier diffusion lengths (Ln ≥ 30 µm) and low surface recombination velocities (S ≤ 70 cm s⁻¹). Single-wire radial p-n junction solar cells were fabricated with amorphous silicon and silicon nitride surface coatings, achieving up to 9.0% apparent photovoltaic efficiency, and exhibiting up to ~600 mV open-circuit voltage with over 80% fill factor. Projective single-wire measurements and optoelectronic simulations suggest that large-area Si wire-array solar cells have the potential to exceed 17% energy-conversion efficiency, offering a promising route toward cost-effective crystalline Si photovoltaics.
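    The efficiency, open-circuit voltage, and fill factor quoted above are related by the standard expression η = Voc × Jsc × FF / Pin. A small sketch under one-sun AM1.5G illumination (100 mW/cm²); the Jsc value in the example is illustrative, chosen only to reproduce the quoted 9.0%, and is not taken from the paper.

```python
def efficiency_pct(voc_v, jsc_ma_per_cm2, ff, pin_mw_per_cm2=100.0):
    """PV conversion efficiency (%) from open-circuit voltage (V),
    short-circuit current density (mA/cm^2), fill factor (0-1), and
    incident power (mW/cm^2; 100 = one-sun AM1.5G).
    V * mA/cm^2 = mW/cm^2, so the units cancel directly."""
    return voc_v * jsc_ma_per_cm2 * ff / pin_mw_per_cm2 * 100.0

# With Voc = 0.6 V and FF = 0.80 as in the abstract, an illustrative
# Jsc of 18.75 mA/cm^2 yields the quoted 9.0% efficiency.
```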

  11. High-performance, high-volume fly ash concrete

    SciTech Connect (OSTI)

    2008-01-15

    This booklet offers the construction professional an in-depth description of the use of high-volume fly ash in concrete. Emphasis is placed on the need for increased utilization of coal-fired power plant byproducts in lieu of Portland cement materials to eliminate increased CO₂ emissions during the production of cement. Also addressed is the dramatic increase in concrete performance with the use of 50+ percent fly ash volume. The booklet contains numerous color and black and white photos, charts of test results, mixtures and comparisons, and several HVFA case studies.

  12. High performance internal reforming unit for high temperature fuel cells

    DOE Patents [OSTI]

    Ma, Zhiwen; Venkataraman, Ramakrishnan; Novacco, Lawrence J.

    2008-10-07

    A fuel reformer having an enclosure with first and second opposing surfaces, a sidewall connecting the first and second opposing surfaces and an inlet port and an outlet port in the sidewall. A plate assembly supporting a catalyst and baffles are also disposed in the enclosure. A main baffle extends into the enclosure from a point of the sidewall between the inlet and outlet ports. The main baffle cooperates with the enclosure and the plate assembly to establish a path for the flow of fuel gas through the reformer from the inlet port to the outlet port. At least a first directing baffle extends in the enclosure from one of the sidewall and the main baffle and cooperates with the plate assembly and the enclosure to alter the gas flow path. Desired graded catalyst loading pattern has been defined for optimized thermal management for the internal reforming high temperature fuel cells so as to achieve high cell performance.

  13. Computer analysis of sodium cold trap design and performance. [LMFBR]

    SciTech Connect (OSTI)

    McPheeters, C.C.; Raue, D.J.

    1983-11-01

    Normal steam-side corrosion of steam-generator tubes in Liquid Metal Fast Breeder Reactors (LMFBRs) results in liberation of hydrogen, and most of this hydrogen diffuses through the tubes into the heat-transfer sodium and must be removed by the purification system. Cold traps are normally used to purify sodium, and they operate by cooling the sodium to temperatures near the melting point, where soluble impurities including hydrogen and oxygen precipitate as NaH and Na₂O, respectively. A computer model was developed to simulate the processes that occur in sodium cold traps. The Model for Analyzing Sodium Cold Traps (MASCOT) simulates any desired configuration of mesh arrangements and dimensions and calculates pressure drops and flow distributions, temperature profiles, impurity concentration profiles, and impurity mass distributions.
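    The precipitation logic such a model rests on can be sketched as follows: an impurity deposits wherever its local concentration exceeds the temperature-dependent solubility. The log-solubility constants below are illustrative placeholders, not the correlations MASCOT actually uses.

```python
# Toy cold-trap principle: impurity precipitates where concentration
# exceeds solubility. Constants A, B are illustrative, not MASCOT's.
A, B = 6.5, 3000.0  # log10(solubility, wppm) = A - B/T

def solubility_ppm(temp_k):
    """Saturated impurity concentration (wppm) at temperature temp_k (K)."""
    return 10 ** (A - B / temp_k)

def supersaturation(conc_ppm, temp_k):
    """Driving force for precipitation; positive means the impurity
    deposits on the cold-trap mesh at this location."""
    return conc_ppm - solubility_ppm(temp_k)
```

Cooling the sodium lowers the solubility, which is why the trap's cold region collects NaH and Na₂O.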

  14. Improving network performance on multicore systems: Impact of core affinities on high throughput flows

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Future Generation Computer Systems - Improving network performance on multicore systems: Impact of core affinities on high throughput flows. Nathan Hanford, Vishal Ahuja, Matthew Farrens, Dipak Ghosal (Department of Computer Science, University of California, Davis, CA, United States); Mehmet Balman, Eric Pouyoul, Brian Tierney (Energy Sciences
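    Core-affinity experiments of the kind this paper studies are typically driven by pinning a process (or thread) to a chosen core and measuring throughput under different placements. A minimal Linux-only sketch using Python's os.sched_setaffinity; this is not the authors' tooling, just the underlying mechanism.

```python
import os

def pin_to_core(core_id):
    """Restrict the calling process to a single core (Linux only;
    pid 0 means the current process)."""
    os.sched_setaffinity(0, {core_id})

def current_affinity():
    """Set of core ids the calling process is allowed to run on."""
    return os.sched_getaffinity(0)

# Repeating a throughput measurement with the receiver pinned to cores
# that do or do not share a cache with the interrupt-handling core
# exposes the locality effects studied in work like this.
```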

  15. Federal Leadership in High Performance and Sustainable Buildings Memorandum

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    of Understanding | Department of Energy Leadership in High Performance and Sustainable Buildings Memorandum of Understanding Federal Leadership in High Performance and Sustainable Buildings Memorandum of Understanding With this Memorandum of Understanding (MOU), signatory agencies commit to federal leadership in the design, construction, and operation of High-Performance and Sustainable Buildings. Federal Leadership in High Performance and Sustainable Buildings Memorandum of Understanding

  16. Measuring Human Performance within Computer Security Incident Response Teams

    SciTech Connect (OSTI)

    McClain, Jonathan T.; Silva, Austin Ray; Avina, Glory Emmanuel; Forsythe, James C.

    2015-09-01

    Human performance has become a pertinent issue within cyber security. However, this research has been stymied by the limited availability of expert cyber security professionals. This is partly attributable to the ongoing workload faced by cyber security professionals, which is compounded by the limited number of qualified personnel and turnover of personnel across organizations. Additionally, it is difficult to conduct research, and particularly, openly published research, due to the sensitivity inherent to cyber operations at most organizations. As an alternative, the current research has focused on data collection during cyber security training exercises. These events draw individuals with a range of knowledge and experience extending from seasoned professionals to recent college graduates to college students. The current paper describes research involving data collection at two separate cyber security exercises. This data collection involved multiple measures which included behavioral performance based on human-machine transactions and questionnaire-based assessments of cyber security experience.

  17. Compute Nodes

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    low-overhead operating system optimized for high performance computing called "Cray Linux Environment" (CLE). This OS supports only a limited number of system calls and UNIX...

  18. Building America Webinar: High Performance Enclosure Strategies: Part II,

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    New Construction - August 13, 2014 - Next Gen Advanced Framing for High Performance Homes Integrated System Solutions | Department of Energy Next Gen Advanced Framing for High Performance Homes Integrated System Solutions Building America Webinar: High Performance Enclosure Strategies: Part II, New Construction - August 13, 2014 - Next Gen Advanced Framing for High Performance Homes Integrated System Solutions This presentation, Next Gen Advanced Framing for High Performance Homes -

  19. High-Performance Secure Database Access Technologies for HEP Grids

    SciTech Connect (OSTI)

    Matthew Vranicar; John Weicher

    2006-04-17

    The Large Hadron Collider (LHC) at the CERN Laboratory will become the largest scientific instrument in the world when it starts operations in 2007. Large Scale Analysis Computer Systems (computational grids) are required to extract rare signals of new physics from petabytes of LHC detector data. In addition to file-based event data, LHC data processing applications require access to large amounts of data in relational databases: detector conditions, calibrations, etc. U.S. high energy physicists demand efficient performance of grid computing applications in LHC physics research where world-wide remote participation is vital to their success. To empower physicists with data-intensive analysis capabilities a whole hyperinfrastructure of distributed databases cross-cuts a multi-tier hierarchy of computational grids. The crosscutting allows separation of concerns across both the global environment of a federation of computational grids and the local environment of a physicist’s computer used for analysis. Very few efforts are on-going in the area of database and grid integration research. Most of these are outside of the U.S. and rely on traditional approaches to secure database access via an extraneous security layer separate from the database system core, preventing efficient data transfers. Our findings are shared by the Database Access and Integration Services Working Group of the Global Grid Forum, who states that "Research and development activities relating to the Grid have generally focused on applications where data is stored in files. However, in many scientific and commercial domains, database management systems have a central role in data storage, access, organization, authorization, etc, for numerous applications.” There is a clear opportunity for a technological breakthrough, requiring innovative steps to provide high-performance secure database access technologies for grid computing. We believe that an innovative database architecture where the

  20. High-performance commercial building systems

    SciTech Connect (OSTI)

    Selkowitz, Stephen

    2003-10-01

    This report summarizes key technical accomplishments resulting from the three year PIER-funded R&D program, ''High Performance Commercial Building Systems'' (HPCBS). The program targets the commercial building sector in California, an end-use sector that accounts for about one-third of all California electricity consumption and an even larger fraction of peak demand, at a cost of over $10B/year. Commercial buildings also have a major impact on occupant health, comfort and productivity. Building design and operations practices that influence energy use are deeply engrained in a fragmented, risk-averse industry that is slow to change. Although California's aggressive standards efforts have resulted in new buildings designed to use less energy than those constructed 20 years ago, the actual savings realized are still well below technical and economic potentials. The broad goal of this program is to develop and deploy a set of energy-saving technologies, strategies, and techniques, and improve processes for designing, commissioning, and operating commercial buildings, while improving health, comfort, and performance of occupants, all in a manner consistent with sound economic investment practices. Results are to be broadly applicable to the commercial sector for different building sizes and types, e.g. offices and schools, for different classes of ownership, both public and private, and for owner-occupied as well as speculative buildings. The program aims to facilitate significant electricity use savings in the California commercial sector by 2015, while assuring that these savings are affordable and promote high quality indoor environments. The five linked technical program elements contain 14 projects with 41 distinct R&D tasks. Collectively they form a comprehensive Research, Development, and Demonstration (RD&D) program with the potential to capture large savings in the commercial building sector, providing significant economic benefits to building owners and

  1. Computational Human Performance Modeling For Alarm System Design

    SciTech Connect (OSTI)

    Jacques Hugo

    2012-07-01

    The introduction of new technologies like adaptive automation systems and advanced alarm processing and presentation techniques in nuclear power plants is already having an impact on the safety and effectiveness of plant operations and also the role of the control room operator. This impact is expected to escalate dramatically as more and more nuclear power utilities embark on upgrade projects in order to extend the lifetime of their plants. One of the most visible impacts in control rooms will be the need to replace aging alarm systems. Because most of these alarm systems use obsolete technologies, the methods, techniques and tools that were used to design the previous generation of alarm systems are no longer effective and need to be updated. The same applies to the need to analyze and redefine operators' alarm handling tasks. In the past, analyses of human tasks and workload have relied on crude, paper-based methods that often lacked traceability. New approaches are needed to allow analysts to model and represent the new concepts of alarm operation and human-system interaction. State-of-the-art task simulation tools are now available that offer a cost-effective and efficient method for examining the effect of operator performance in different conditions and operational scenarios. A discrete event simulation system was used by human factors researchers at the Idaho National Laboratory to develop a generic alarm handling model to examine the effect of operator performance with a simulated modern alarm system. It allowed analysts to evaluate alarm generation patterns as well as critical task times and human workload predicted by the system.
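    A toy version of such a simulation: a single operator handling a stream of alarms, modeled as a queue with exponential inter-arrival and handling times. The rates, distributions, and names below are illustrative; the tool described in the abstract is a full discrete-event simulation system, not this sketch.

```python
import random

def simulate_alarm_handling(n_alarms=200, mean_gap_s=5.0,
                            mean_handle_s=3.0, seed=42):
    """Single-operator alarm-handling queue. Alarms arrive with
    exponential inter-arrival gaps; the operator handles them one at a
    time. Returns the mean wait (seconds) before handling starts."""
    rng = random.Random(seed)
    t, arrivals = 0.0, []
    for _ in range(n_alarms):
        t += rng.expovariate(1.0 / mean_gap_s)
        arrivals.append(t)
    operator_free, waits = 0.0, []
    for arrival in arrivals:
        start = max(arrival, operator_free)   # queue if operator is busy
        waits.append(start - arrival)
        operator_free = start + rng.expovariate(1.0 / mean_handle_s)
    return sum(waits) / len(waits)
```

Shrinking mean_gap_s (a denser alarm burst) drives the mean wait up, which is exactly the kind of workload effect such models let analysts quantify before a control-room upgrade.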

  2. Building America Webinar: High Performance Enclosure Strategies...

    Broader source: Energy.gov (indexed) [DOE]

    performance of the building enclosure, reduce the cost of energy-efficient construction, and simplify the construction process, all while accommodating higher levels of insulation. ...

  3. High performance Zintl phase TE materials with embedded nanoparticles |

    Broader source: Energy.gov (indexed) [DOE]

    Department of Energy Performance of zintl phase thermoelectric materials with embedded particles are evaluated shakouri.pdf (2.3 MB) More Documents & Publications High performance Zintl phase TE materials with embedded nanoparticles High Performance Zintl Phase TE Materials with Embedded Particles Thermoelectrics Partnership: High Performance Thermoelectric Waste Heat Recovery System Based on Zintl Phase Materials with Embedded Nanoparticles

  4. UltraSciencenet: High- Performance Network Research Test-Bed

    SciTech Connect (OSTI)

    Rao, Nageswara S; Wing, William R; Poole, Stephen W; Hicks, Susan Elaine; DeNap, Frank A; Carter, Steven M; Wu, Qishi

    2009-04-01

    The high-performance networking requirements for next generation large-scale applications belong to two broad classes: (a) high bandwidths, typically multiples of 10Gbps, to support bulk data transfers, and (b) stable bandwidths, typically at much lower bandwidths, to support computational steering, remote visualization, and remote control of instrumentation. Current Internet technologies, however, are severely limited in meeting these demands because such bulk bandwidths are available only in the backbone, and stable control channels are hard to realize over shared connections. The UltraScience Net (USN) facilitates the development of such technologies by providing dynamic, cross-country dedicated 10Gbps channels for large data transfers, and 150 Mbps channels for interactive and control operations. Contributions of the USN project are two-fold: (a) Infrastructure Technologies for Network Experimental Facility: USN developed and/or demonstrated a number of infrastructure technologies needed for a national-scale network experimental facility. Compared to the Internet, USN's data-plane is different in that it can be partitioned into isolated layer-1 or layer-2 connections, and its control-plane is different in the ability of users and applications to set up and tear down channels as needed. Its design required several new components including a Virtual Private Network infrastructure, a bandwidth and channel scheduler, and a dynamic signaling daemon. The control-plane employs a centralized scheduler to compute the channel allocations and a signaling daemon to generate configuration signals to switches. In a nutshell, USN demonstrated the ability to build and operate a stable national-scale switched network. (b) Structured Network Research Experiments: A number of network research experiments have been conducted on USN that cannot be easily supported over existing network facilities, including test-beds and production networks. It settled an open matter by demonstrating
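    The core admission decision of a bandwidth-and-channel scheduler can be sketched as an interval check against link capacity: a request for a dedicated channel fits only if, at every instant of its lifetime, the bandwidth already reserved plus the request stays within the link. This is a simplification for illustration, not the algorithm USN actually implemented.

```python
def fits(reservations, start, end, bw, capacity):
    """Admission check for a dedicated-channel request on one link.
    reservations: list of (start, end, bandwidth) tuples already granted.
    The request (start, end, bw) fits iff total reserved bandwidth stays
    <= capacity at every point in [start, end)."""
    # Bandwidth usage only changes at reservation boundaries, so it
    # suffices to check those event times inside the request window.
    events = sorted({start, end} | {t for r in reservations for t in (r[0], r[1])})
    for t in events:
        if start <= t < end:
            used = sum(b for s, e, b in reservations if s <= t < e)
            if used + bw > capacity:
                return False
    return True
```

A real scheduler must additionally pick paths and time slots across many links; the per-link check above is the primitive it repeats.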

  5. Computational Earth Science

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    6 Computational Earth Science We develop and apply a range of high-performance computational methods and software tools to Earth science projects in support of environmental ...

  6. TOWARD HIGHLY SECURE AND AUTONOMIC COMPUTING SYSTEMS: A HIERARCHICAL APPROACH

    SciTech Connect (OSTI)

    Lee, Hsien-Hsin S

    2010-05-11

    The overall objective of this research project is to develop novel architectural techniques as well as system software to achieve a highly secure and intrusion-tolerant computing system. Such a system will be autonomous, self-adapting, and introspective, with self-healing capability under improper operations, abnormal workloads, and malicious attacks. The scope of this research includes: (1) System-wide, unified introspection techniques for autonomic systems, (2) Secure information-flow microarchitecture, (3) Memory-centric security architecture, (4) Authentication control and its implications for security, (5) Digital rights management, and (6) Microarchitectural denial-of-service attacks on shared resources. During the period of the project, we developed several architectural techniques and system software for achieving a robust, secure, and reliable computing system toward our goal.

  7. Enhanced High and Low Temperature Performance of NOx Reduction...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    and Low Temperature Performance of NOx Reduction Materials Enhanced High and Low Temperature Performance of NOx Reduction Materials 2013 DOE Hydrogen and Fuel Cells Program and ...

  8. High performance Zintl phase TE materials with embedded nanoparticles...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Zintl phase TE materials with embedded nanoparticles High performance Zintl phase TE materials with embedded nanoparticles Performance of zintl phase thermoelectric ...

  9. Project Profile: Development and Performance Evaluation of High...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Development and Performance Evaluation of High Temperature Concrete for Thermal Energy Storage for Solar Power Generation Project Profile: Development and Performance Evaluation of ...

  10. DOE High Performance Computing for Manufacturing (HPC4Mfg) Program...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    HPC systems, but also for experts in the use of these systems to solve complex problems." ... laboratories will play a key role in solving manufacturing challenges and ...

  11. Toward Codesign in High Performance Computing Systems - 06386705...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Toward Codesign in High Performance Computing System ... Santa Clara, CA, USA sparker@nvidia.com J. Shalf Law ...

  12. Evaluation of distributed ANSYS for high performance computing...

    Office of Scientific and Technical Information (OSTI)

    DOE Contract Number: AC04-94AL85000 Resource Type: Conference Resource Relation: Conference: Proposed for presentation at the Seventh Biennial Tri-Laboratory Engineering Conference ...

  13. Nuclear Forces and High-Performance Computing: The Perfect Match...

    Office of Scientific and Technical Information (OSTI)

    DOE Contract Number: W-7405-ENG-48 Resource Type: Conference ... ELEMENTARY PARTICLES AND FIELDS; GLUONS; NUCLEAR FORCES; NUCLEAR PHYSICS; QUANTUM CHROMODYNAMICS; QUANTUM FIELD THEORY...

  14. High-Performance Computing at Los Alamos announces milestone...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    a supercomputer epicenter where "big data set" really means something, a data ... Statistical analysis generally occurs over the entire data set. But more detailed analysis ...

  15. Toward Codesign in High Performance Computing Systems - 06386705.pdf

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Toward Codesign in High Performance Computing Systems [Invited Special Session paper] Richard F. Barrett, Sandia National Laboratories, Albuquerque, NM, USA, rfbarre@sandia.gov; Sudip S. Dosanjh, Sandia National Laboratories, Albuquerque, NM, USA, ssdosan@sandia.gov; Michael A. Heroux, Sandia N

  16. Coordinated Fault Tolerance for High-Performance Computing

    SciTech Connect (OSTI)

    Dongarra, Jack; Bosilca, George; et al.

    2013-04-08

    Our work to meet our goal of end-to-end fault tolerance has focused on two areas: (1) improving fault tolerance in various software currently available and widely used throughout the HEC domain, and (2) using fault information exchange and coordination to achieve holistic, systemwide fault tolerance and understanding how to design and implement interfaces for integrating fault tolerance features across multiple layers of the software stack—from the application, math libraries, and programming language runtime to other common system software such as job schedulers, resource managers, and monitoring tools.
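
    The fault-information exchange and coordination idea can be sketched as a tiny publish/subscribe backplane through which multiple layers of the software stack learn of the same fault (component names are hypothetical; the project's real interfaces are considerably richer):

```python
# Toy illustration of cross-layer fault-information exchange: one central
# channel fans a fault event out to every subscribed software layer.

class FaultBackplane:
    """Central channel that broadcasts fault events to all subscribers."""
    def __init__(self):
        self.subscribers = []
    def subscribe(self, handler):
        self.subscribers.append(handler)
    def publish(self, event):
        for handler in self.subscribers:
            handler(event)

log = []
backplane = FaultBackplane()
# The job scheduler stops placing work on the failed node...
backplane.subscribe(lambda e: log.append(f"scheduler: drain {e['node']}"))
# ...while the runtime reroutes communication around it.
backplane.subscribe(lambda e: log.append(f"runtime: reroute around {e['node']}"))

backplane.publish({"node": "node042", "kind": "ECC-failure"})
print(log)
```

    The point of the coordination is that each layer reacts to the same event in its own way, instead of each layer detecting (or missing) the fault independently.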

  17. High Performance Computing for Manufacturing Partnership | GE Global

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Research GE, US DOE Partner on HPC4Mfg projects to deliver new capabilities in 3D Printing and higher jet engine efficiency NISKAYUNA, NY, February 17,

  18. Toward a new metric for ranking high performance computing systems...

    Office of Scientific and Technical Information (OSTI)


  19. Report from the Next Generation High Performance Computing Task...

    Energy Savers [EERE]

    ... weather forecasting, finance, oil and gas exploration, and a host of other fields. ... designers and manufacturers to reduce costs and improve customer experiences. * ...

  20. John Shalf Gives Talk at San Francisco High Performance Computing...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Center serves more than 6,000 scientists at national laboratories and universities researching a wide range of problems in combustion, climate modeling, fusion energy, materials...

  1. A Generalized Portable SHMEM Library for High Performance Computing

    SciTech Connect (OSTI)

    Parzyszek, K.; Nieplocha, J.; Kendall, R.A.

    2000-09-15

    This paper describes a portable one-sided communication library GPSHMEM that follows the interfaces of the successful SHMEM library introduced by Cray Research Inc. for their distributed memory systems: the Cray T3D and T3E. The portability is achieved by relying on ARMCI, a low-level communication library developed to support one-sided communication in distributed array libraries and compiler run-time systems, and the MPI message passing interface. The paper discusses implementation, requirements, and initial experience with GPSHMEM.
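
    One-sided communication of the kind GPSHMEM provides lets one process read or write another's memory without the remote side issuing a matching receive. A toy single-process emulation of the put/get semantics (illustrative names only; the real GPSHMEM API follows Cray's SHMEM interfaces and runs over ARMCI and MPI):

```python
# Toy emulation of SHMEM-style one-sided put/get using plain dictionaries
# in one process. Real SHMEM operates on a "symmetric heap": memory with
# the same layout on every processing element (PE).

N_PES = 4
symmetric_heap = {pe: [0] * 8 for pe in range(N_PES)}

def shmem_put(dest_offset, values, target_pe):
    """One-sided write: the target PE does not participate in the transfer."""
    mem = symmetric_heap[target_pe]
    mem[dest_offset:dest_offset + len(values)] = values

def shmem_get(src_offset, nelems, target_pe):
    """One-sided read from a remote PE's symmetric heap."""
    mem = symmetric_heap[target_pe]
    return mem[src_offset:src_offset + nelems]

# PE 0 deposits data directly into PE 3's memory, then reads it back.
shmem_put(0, [7, 8, 9], target_pe=3)
print(shmem_get(0, 3, target_pe=3))  # [7, 8, 9]
```

    In a real distributed run, synchronization calls (e.g. barriers) are needed before the target PE can safely read data that another PE has put into its memory; the emulation above omits that entirely.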

  2. Prospects for Accelerated Development of High Performance Structural Materials

    SciTech Connect (OSTI)

    Zinkle, Steven J; Ghoniem, Nasr M.

    2011-01-01

    We present an overview of key aspects for development of steels for fission and fusion energy applications, by linking material fabrication to thermo-mechanical properties through a physical understanding of microstructure evolution. Numerous design constraints (e.g. reduced activation, low ductile-brittle transition temperature, low neutron-induced swelling, good creep resistance, and weldability) need to be considered, which in turn can be controlled through material composition and processing techniques. Recent progress in the development of high-performance steels for fossil and fusion energy systems is summarized, along with progress in multiscale modeling of mechanical behavior in metals. Prospects for future design of optimum structural steels in nuclear applications by utilization of the hierarchy of multiscale experimental and computational strategies are briefly described.

  3. Bedford Farmhouse High Performance Retrofit Prototype

    SciTech Connect (OSTI)

    2010-04-26

    In this case study, Building Science Corporation partnered with Habitat for Humanity of Greater Lowell on a retrofit of a mid-19th century farmhouse into affordable housing meeting Building America performance standards.

  4. High Performance Electrolyzers for Hybrid Thermochemical Cycles

    SciTech Connect (OSTI)

    Dr. John W. Weidner

    2009-05-10

    Extensive electrolyzer testing was performed at the University of South Carolina (USC). Emphasis was given to understanding water transport under various operating (i.e., temperature, membrane pressure differential and current density) and design (i.e., membrane thickness) conditions when it became apparent that water transport plays a deciding role in cell voltage. A mathematical model was developed to further understand the mechanisms of water and SO2 transport, and to predict the effect of operating and design parameters on electrolyzer performance.

  5. Computational Proteomics: High-throughput Analysis for Systems Biology

    SciTech Connect (OSTI)

    Cannon, William R.; Webb-Robertson, Bobbie-Jo M.

    2007-01-03

    High-throughput (HTP) proteomics is a rapidly developing field that offers the global profiling of proteins from a biological system. The HTP technological advances are fueling a revolution in biology, enabling analyses at the scale of entire systems (e.g., whole cells, tumors, or environmental communities). However, simply identifying the proteins in a cell is insufficient for understanding the underlying complexity and operating mechanisms of the overall system. Systems-level investigations rely more and more on computational analyses, especially in proteomics, which generates large-scale global data.

  6. High Performance and Sustainable Buildings Guidance

    Office of Environmental Management (EM)

    High Efficiency Microturbine with Integral Heat Recovery Introduction The U.S. economic market potential for distributed generation is significant. This market, however, remains mostly untapped in the commercial and small industrial buildings that are well suited for microturbines. Gas turbines have many advantages, including high power density, light weight, clean emissions, fuel flexibility, low vibration, low maintenance,

  7. Computing Sciences Staff Help East Bay High Schoolers Upgrade...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    from underrepresented groups learn about careers in a variety of IT fields, the Laney College Computer Information Systems Department offered its Upgrade: Computer Science Program. ...

  8. Guiding Market Introduction of High-Performance SSL Products...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Guiding Market Introduction of High-Performance SSL Products Guiding Market Introduction of High-Performance SSL Products 2014 DOE Solid-State Lighting Program Fact Sheet PDF icon...

  9. Seeking Information on Design and Construction of High-Performance...

    Energy Savers [EERE]

    Design and Construction of High-Performance Tenant Spaces Seeking Information on Design and Construction of High-Performance Tenant Spaces August 3, 2015 - 11:27am Addthis VIEW THE ...

  10. High Performance Home Building Guide for Habitat for Humanity Affiliates

    SciTech Connect (OSTI)

    Lindsey Marburger

    2010-10-01

    This guide covers basic principles of high performance Habitat construction, steps to achieving high performance Habitat construction, resources to help improve building practices, materials, etc., and affiliate profiles and recommendations.

  11. New GATEWAY Report Monitors LED System Performance in a High...

    Broader source: Energy.gov (indexed) [DOE]

    the light. The Yuma site is an extreme environment: high ambient temperatures and direct ... Performance in a High-Temperature Environment DOE Publishes GATEWAY Report on ...

  12. ARIES: Building America, High Performance Factory Built Housing...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    ARIES: Building America, High Performance Factory Built Housing - 2015 Peer Review Presenter: Jordan Dentz, Levy Partnership View the Presentation ARIES: Building America, High ...

  13. Metaproteomics: Harnessing the power of high performance mass...

    Office of Scientific and Technical Information (OSTI)

    Journal Article: Metaproteomics: Harnessing the power of high performance mass ...

  14. ARIES: Building America, High Performance Factory Built Housing - 2015 Peer

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Review | Department of Energy ARIES: Building America, High Performance Factory Built Housing - 2015 Peer Review ARIES: Building America, High Performance Factory Built Housing - 2015 Peer Review Presenter: Jordan Dentz, Levy Partnership View the Presentation ARIES: Building America, High Performance Factory Built Housing - 2015 Peer Review (3.34 MB) More Documents & Publications ARIES lab houses. Photo courtesy of The Levy Partnership, Inc. Integrated Design: A High-Performance Solution

  15. Systems, methods and computer-readable media to model kinetic performance of rechargeable electrochemical devices

    DOE Patents [OSTI]

    Gering, Kevin L.

    2013-01-01

    A system includes an electrochemical cell, monitoring hardware, and a computing system. The monitoring hardware samples performance characteristics of the electrochemical cell. The computing system determines cell information from the performance characteristics. The computing system also analyzes the cell information of the electrochemical cell with a Butler-Volmer (BV) expression modified to determine exchange current density of the electrochemical cell by including kinetic performance information related to pulse-time dependence, electrode surface availability, or a combination thereof. A set of sigmoid-based expressions may be included with the modified-BV expression to determine kinetic performance as a function of pulse time. The determined exchange current density may be used with the modified-BV expression, with or without the sigmoid expressions, to analyze other characteristics of the electrochemical cell. Model parameters can be defined in terms of cell aging, making the overall kinetics model amenable to predictive estimates of cell kinetic performance along the aging timeline.
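
    The Butler-Volmer (BV) relation referenced above takes the standard form i = i0·[exp(αa·F·η/(R·T)) − exp(−αc·F·η/(R·T))]. The patent's sigmoid-based pulse-time corrections are not reproduced here; a minimal numeric sketch of the unmodified expression, with illustrative parameter values:

```python
import math

# Standard Butler-Volmer kinetics. The exchange current density i0 and the
# transfer coefficients below are illustrative, not values from the patent.
F = 96485.0   # Faraday constant, C/mol
R = 8.314     # gas constant, J/(mol K)

def butler_volmer(eta, i0=1.0e-3, alpha_a=0.5, alpha_c=0.5, T=298.15):
    """Current density (A/cm^2) as a function of overpotential eta (V)."""
    f = F / (R * T)
    return i0 * (math.exp(alpha_a * f * eta) - math.exp(-alpha_c * f * eta))

print(butler_volmer(0.0))   # zero current at equilibrium (eta = 0)
print(butler_volmer(0.05))  # anodic branch, positive current
```

    The patented approach modifies this expression so that i0 itself becomes a function of pulse time and electrode surface availability, which is what the sigmoid-based terms capture.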

  16. A High-Performance PHEV Battery Pack | Department of Energy

    Broader source: Energy.gov (indexed) [DOE]

    1 DOE Hydrogen and Fuel Cells Program, and Vehicle Technologies Program Annual Merit Review and Peer Evaluation es002_alamgir_2011_p.pdf (628.94 KB) More Documents & Publications A High-Performance PHEV Battery Pack A High-Performance PHEV Battery Pack Vehicle Technologies Office Merit Review 2013: A High-Performance PHEV

  17. LBNL: High Performance Active Perimeter Building Systems - 2015 Peer Review

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    | Department of Energy High Performance Active Perimeter Building Systems - 2015 Peer Review LBNL: High Performance Active Perimeter Building Systems - 2015 Peer Review Presenter: Eleanor Lee, LBNL View the Presentation LBNL: High Performance Active Perimeter Building Systems - 2015 Peer Review (2 MB) More Documents & Publications FLEXLAB Connected Buildings Interoperability Vision Webinar 2015 DOE CONNECTED LIGHTING SYSTEMS PRESENTATIONS

  18. NREL: Photovoltaics Research - High-Performance Photovoltaics

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    The dual-axis tracking modules use small mirrors to focus sunlight on high-efficiency multijunction cells... NREL is a national laboratory of the U.S. Department of Energy, Office of ...

  19. High Performance Colloidal Nanocrystals | Department of Energy

    Broader source: Energy.gov (indexed) [DOE]

    Energy Through the High Penetration Solar Deployment program, DOE is funding solar projects that are accelerating the placement of solar photovoltaic (PV) systems into existing and newly designed distribution circuits in the electrical grid. The High Penetration Solar Deployment projects are working with teams that include utility partners to model, test, and evaluate solutions to mitigate the impact of large amounts of PV-generated electricity on the reliability and stability of the

  20. High Performance Green LEDs by Homoepitaxial

    SciTech Connect (OSTI)

    Wetzel, Christian; Schubert, E Fred

    2009-11-22

    This work's objective was the development of processes to double or triple the light output power from green and deep green (525 - 555 nm) AlGaInN light emitting diode (LED) dies within 3 years in reference to the Lumileds Luxeon II. The project paid particular effort to all aspects of the internal generation efficiency of light. LEDs in this spectral region show the highest potential for significant performance boosts and enable the realization of phosphor-free white LEDs comprised by red-green-blue LED modules. Such modules will perform at and outperform the efficacy target projections for white-light LED systems in the Department of Energy's accelerated roadmap of the SSL initiative.

  1. High-performance commercial building facades

    SciTech Connect (OSTI)

    Lee, Eleanor; Selkowitz, Stephen; Bazjanac, Vladimir; Inkarojrit, Vorapat; Kohler, Christian

    2002-06-01

    This study focuses on advanced building facades that use daylighting, sun control, ventilation systems, and dynamic systems. A quick perusal of the leading architectural magazines, or a discussion in most architectural firms today, will eventually lead to mention of some of the innovative new buildings that are being constructed with all-glass facades. Most of these buildings are appearing in Europe, although interestingly U.S. A/E firms often have a leading role in their design. This "emerging technology" of heavily glazed facades is often associated with buildings whose design goals include energy efficiency, sustainability, and a "green" image. While there are a number of new books on the subject with impressive photos and drawings, there is little critical examination of the actual performance of such buildings, and a generally poor understanding as to whether they achieve their performance goals, or even what those goals might be. Even if the building "works," it is often dangerous to take a design solution from one climate and location and transport it to a new one without a good causal understanding of how the systems work. In addition, there is a wide range of existing and emerging glazing and fenestration technologies in use in these buildings, many of which break new ground with respect to innovative structural use of glass. It is unclear how well many of these designs would work as currently formulated in California locations dominated by intense sunlight and seismic events. Finally, the costs of these systems are higher than those of normal facades, but claims of energy and productivity savings are used to justify some of them. Once again these claims, while plausible, are largely unsupported. There have been major advances in glazing and facade technology over the past 30 years and we expect to see continued innovation and product development. It is critical in this process to be able to understand which performance goals are being met by current

  2. Systems, methods and computer-readable media for modeling cell performance fade of rechargeable electrochemical devices

    DOE Patents [OSTI]

    Gering, Kevin L

    2013-08-27

    A system includes an electrochemical cell, monitoring hardware, and a computing system. The monitoring hardware periodically samples performance characteristics of the electrochemical cell. The computing system determines cell information from the performance characteristics of the electrochemical cell. The computing system also develops a mechanistic level model of the electrochemical cell to determine performance fade characteristics of the electrochemical cell and analyzes the mechanistic level model to estimate performance fade characteristics over aging of a similar electrochemical cell. The mechanistic level model uses first constant-current pulses applied to the electrochemical cell at a first aging period and at three or more current values bracketing a first exchange current density. The mechanistic level model also is based on second constant-current pulses applied to the electrochemical cell at a second aging period and at three or more current values bracketing the second exchange current density.

  3. High Thermoelectric Performance in Copper Telluride

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    He, Ying; Zhang, Tiansong; Shi, Xun; Wei, Su-Huai; Chen, Lidong

    2015-06-21

    Recently, Cu2-δS and Cu2-δSe were reported to have an ultralow thermal conductivity and high thermoelectric figure of merit zT. Thus, as a member of the copper chalcogenide group, Cu2-δTe is expected to possess superior zTs because Te is less ionic and heavy. However, the zT value is low in Cu2Te sintered using spark plasma sintering, which is typically used to fabricate high-density bulk samples. In addition, the extra sintering processes may change the samples’ compositions as well as their physical properties, especially for Cu2Te, which has many stable and meta-stable phases as well as weaker ionic bonding between Cu and Te as compared with Cu2S and Cu2Se. In this study, high-density Cu2Te samples were obtained using direct annealing without a sintering process. In the absence of sintering processes, the samples’ compositions could be well controlled, leading to substantially reduced carrier concentrations that are close to the optimal value. The electrical transports were optimized, and the thermal conductivity was considerably reduced. The zT values were significantly improved—to 1.1 at 1000 K—which is nearly a 100% improvement. Furthermore, this method saves substantial time and cost during the sample’s growth. The study demonstrates that Cu2-δX (X = S, Se and Te) is the only existing system to show high zTs in the series of compounds composed of three sequential primary group elements.
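
    The thermoelectric figure of merit quoted here is zT = S²σT/κ, where S is the Seebeck coefficient, σ the electrical conductivity, κ the thermal conductivity, and T the absolute temperature. A quick numeric sanity check (values illustrative, not taken from the paper):

```python
def figure_of_merit(seebeck_V_per_K, sigma_S_per_m, kappa_W_per_mK, T_K):
    """Thermoelectric figure of merit zT = S^2 * sigma * T / kappa."""
    return seebeck_V_per_K**2 * sigma_S_per_m * T_K / kappa_W_per_mK

# Illustrative numbers of the right order for a good thermoelectric:
# S = 200 uV/K, sigma = 5e4 S/m, kappa = 0.9 W/(m K), T = 500 K.
print(figure_of_merit(200e-6, 5e4, 0.9, 500))  # about 1.1
```

    Note that zT grows linearly with temperature for fixed material properties, which is one reason record values such as the 1.1 reported above are quoted at high T (here, 1000 K).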

  4. High Thermoelectric Performance in Copper Telluride

    SciTech Connect (OSTI)

    He, Ying; Zhang, Tiansong; Shi, Xun; Wei, Su-Huai; Chen, Lidong

    2015-06-21

    Recently, Cu2-δS and Cu2-δSe were reported to have an ultralow thermal conductivity and high thermoelectric figure of merit zT. Thus, as a member of the copper chalcogenide group, Cu2-δTe is expected to possess superior zTs because Te is less ionic and heavy. However, the zT value is low in Cu2Te sintered using spark plasma sintering, which is typically used to fabricate high-density bulk samples. In addition, the extra sintering processes may change the samples’ compositions as well as their physical properties, especially for Cu2Te, which has many stable and meta-stable phases as well as weaker ionic bonding between Cu and Te as compared with Cu2S and Cu2Se. In this study, high-density Cu2Te samples were obtained using direct annealing without a sintering process. In the absence of sintering processes, the samples’ compositions could be well controlled, leading to substantially reduced carrier concentrations that are close to the optimal value. The electrical transports were optimized, and the thermal conductivity was considerably reduced. The zT values were significantly improved—to 1.1 at 1000 K—which is nearly a 100% improvement. Furthermore, this method saves substantial time and cost during the sample’s growth. The study demonstrates that Cu2-δX (X = S, Se and Te) is the only existing system to show high zTs in the series of compounds composed of three sequential primary group elements.

  5. Memorandum of American High-Performance Buildings Coalition DOE Meeting

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    August 19, 2013 | Department of Energy Memorandum of American High-Performance Buildings Coalition DOE Meeting August 19, 2013 Memorandum of American High-Performance Buildings Coalition DOE Meeting August 19, 2013 This memorandum is intended to provide a summary of a meeting between the American High-Performance Buildings Coalition (AHBPC), a coalition of industry organizations committed to promoting performance-based energy efficiency and sustainable building standards developed through

  6. A system analysis computer model for the High Flux Isotope Reactor (HFIRSYS Version 1)

    SciTech Connect (OSTI)

    Sozer, M.C.

    1992-04-01

    A system transient analysis computer model (HFIRSYS) has been developed for analysis of small-break loss of coolant accidents (LOCA) and operational transients. The computer model is based on the Advanced Continuous Simulation Language (ACSL), which produces the FORTRAN code automatically, provides integration routines such as Gear's stiff algorithm, and offers users numerous practical tools for generating eigenvalues, producing debug output, graphics capabilities, etc. The HFIRSYS computer code is structured in the form of the Modular Modeling System (MMS) code. Component modules from MMS and in-house developed modules were both used to configure HFIRSYS. A description of the High Flux Isotope Reactor, theoretical bases for the modeled components of the system, and the verification and validation efforts are reported. The computer model performs satisfactorily, including cases in which the effect of structural elasticity on system pressure is significant; however, its capabilities are limited to single-phase flow. Because of the modular structure, new component models from the Modular Modeling System can easily be added to HFIRSYS to analyze their effects on the system's behavior. The computer model is a versatile tool for studying various system transients. The intent of this report is not to be a user's manual, but to provide the theoretical bases and basic information about the computer model and the reactor.

  7. A high performance field-reversed configuration

    SciTech Connect (OSTI)

    Binderbauer, M. W.; Tajima, T.; Steinhauer, L. C.; Garate, E.; Tuszewski, M.; Smirnov, A.; Gota, H.; Barnes, D.; Deng, B. H.; Thompson, M. C.; Trask, E.; Yang, X.; Putvinski, S.; Rostoker, N.; Andow, R.; Aefsky, S.; Bolte, N.; Bui, D. Q.; Ceccherini, F.; Clary, R.; and others

    2015-05-15

    Conventional field-reversed configurations (FRCs), high-beta, prolate compact toroids embedded in poloidal magnetic fields, face notable stability and confinement concerns. These can be ameliorated by various control techniques, such as introducing a significant fast ion population. Indeed, adding neutral beam injection into the FRC over the past half-decade has contributed to striking improvements in confinement and stability. Further, the addition of electrically biased plasma guns at the ends, magnetic end plugs, and advanced surface conditioning led to dramatic reductions in turbulence-driven losses and greatly improved stability. Together, these enabled the build-up of a well-confined and dominant fast-ion population. Under such conditions, highly reproducible, macroscopically stable hot FRCs (with total plasma temperature of ∼1 keV) with record lifetimes were achieved. These accomplishments point to the prospect of advanced, beam-driven FRCs as an intriguing path toward fusion reactors. This paper reviews key results and presents context for further interpretation.

  8. Large-scale optimization-based non-negative computational framework for diffusion equations: Parallel implementation and performance studies

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Chang, Justin; Karra, Satish; Nakshatrala, Kalyana B.

    2016-07-26

    It is well-known that the standard Galerkin formulation, which is often the formulation of choice under the finite element method for solving self-adjoint diffusion equations, does not meet maximum principles and the non-negative constraint for anisotropic diffusion equations. Recently, optimization-based methodologies that satisfy maximum principles and the non-negative constraint for steady-state and transient diffusion-type equations have been proposed. To date, these methodologies have been tested only on small-scale academic problems. The purpose of this paper is to systematically study the performance of the non-negative methodology in the context of high performance computing (HPC). PETSc and TAO libraries are, respectively, used for the parallel environment and optimization solvers. For large-scale problems, it is important for computational scientists to understand the computational performance of current algorithms available in these scientific libraries. The numerical experiments are conducted on the state-of-the-art HPC systems, and a single-core performance model is used to better characterize the efficiency of the solvers. Furthermore, our studies indicate that the proposed non-negative computational framework for diffusion-type equations exhibits excellent strong scaling for real-world large-scale problems.
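
    The optimization-based non-negative formulation can be illustrated on a toy problem: solve the discrete diffusion system A x = b as a bound-constrained quadratic program, min 0.5 x'Ax - b'x subject to x >= 0, here via projected gradient descent rather than the PETSc/TAO solvers the paper actually uses (a sketch only, with an illustrative right-hand side):

```python
import numpy as np

# Solve min 0.5 x'Ax - b'x subject to x >= 0 by projected gradient descent.
# This enforces the non-negative constraint that a plain Galerkin solve of
# A x = b can violate.

def solve_nonnegative(A, b, steps=5000, lr=None):
    if lr is None:
        lr = 1.0 / np.linalg.norm(A, 2)  # step size below 1/Lipschitz constant
    x = np.zeros_like(b)
    for _ in range(steps):
        x = np.maximum(x - lr * (A @ x - b), 0.0)  # gradient step, then project
    return x

# 1-D diffusion stencil with a sink term that drives the unconstrained
# solution negative near the right boundary.
A = np.array([[ 2.0, -1.0,  0.0],
              [-1.0,  2.0, -1.0],
              [ 0.0, -1.0,  2.0]])
b = np.array([1.0, 0.0, -1.0])
x = solve_nonnegative(A, b)
print(x)                      # every entry is >= 0
print(np.linalg.solve(A, b))  # unconstrained solution goes negative
```

    Projected gradient is the simplest bound-constrained solver; TAO provides far more efficient ones (e.g. Newton-type methods with active sets), which is what makes the large-scale HPC study meaningful.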

  9. Performing three-dimensional neutral particle transport calculations on tera scale computers

    SciTech Connect (OSTI)

    Woodward, C S; Brown, P N; Chang, B; Dorr, M R; Hanebutte, U R

    1999-01-12

    A scalable, parallel code system to perform neutral particle transport calculations in three dimensions is presented. To utilize the hyper-cluster architecture of emerging tera scale computers, the parallel code employs the MPI message passing paradigm. The code's capabilities are demonstrated by a shielding calculation containing over 14 billion unknowns. This calculation was accomplished on the IBM SP "ASCI Blue-Pacific" computer located at Lawrence Livermore National Laboratory (LLNL).

  10. Why High-performance Clouds are Best Kept In-house

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Why High-performance Clouds are Best Kept In-house March 28, 2011 Most commercial entities don't have the infrastructure to handle the intensive workloads of high-performance computing. And if they do, it will probably be more expensive - in one case, 10 times more expensive - for them to run dedicated services than for some agencies to run their own private clouds. At least that's the case for officials from Lawrence Berkeley National

  11. Frontiers in Planetary and Stellar Magnetism through High-Performance

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Computing | Argonne Leadership Computing Facility This image is a schematic of Earth's interior and some magnetic field lines from a model of planetary dynamo action. We will be carrying out super-high-resolution simulations of such processes via our INCITE award, allowing us for the first time to simulate planetary and astrophysical dynamo action in turbulent fluids with realistic fluid properties. Credit: Lorraine

  12. computers

    National Nuclear Security Administration (NNSA)

    Each successive generation of computing system has provided greater computing power and energy efficiency.

    CTS-1 clusters will support NNSA's Life Extension Program and...

  13. Reduced Call-Backs with High Performance Production Builders - Building

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    America Top Innovation | Department of Energy Reduced Call-Backs with High Performance Production Builders - Building America Top Innovation Reduced Call-Backs with High Performance Production Builders - Building America Top Innovation Photo of a home with a fence. Engaging production builders to build high-performance homes is key to successfully transforming the market. For this Top Innovation, Building America has effectively addressed this challenge by demonstrating the compelling

  14. Development of Alternative and Durable High Performance Cathode Supports

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    for PEM Fuel Cells | Department of Energy Alternative and Durable High Performance Cathode Supports for PEM Fuel Cells Development of Alternative and Durable High Performance Cathode Supports for PEM Fuel Cells Part of a $100 million fuel cell award announced by DOE Secretary Bodman on Oct. 25, 2006. 3_pnnl.pdf (21.99 KB) More Documents & Publications Development of Alternative and Durable High Performance Cathode Supports for PEM Fuel Cells Fuel Cell Kickoff Meeting Agenda 2015 Pathways

  15. NRC Leadership Expectations and Practices for Sustaining a High Performing

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Organization | Department of Energy NRC Leadership Expectations and Practices for Sustaining a High Performing Organization NRC Leadership Expectations and Practices for Sustaining a High Performing Organization May 16, 2012 Presenter: William C. Ostendorff, NRC Commissioner Topics Covered: NRC Mission Safety Culture NRC Oversight NRC Inspection Program Technical Qualification Continuous Learning NRC Leadership Expectations and Practices for Sustaining a High Performing Organization (4.15

  16. Funding Opportunity: Building America High Performance Housing Innovation |

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Department of Energy Opportunity: Building America High Performance Housing Innovation Funding Opportunity: Building America High Performance Housing Innovation November 19, 2015 - 11:51am Addthis The Building Technologies Office (BTO) Residential Buildings Integration Program has announced the availability of $5.5 million for Funding Opportunity Announcement (FOA) DE-FOA-0001395, "Building America Industry Partnerships for High Performance Housing Innovation." DOE seeks to fund up

  17. High Performance Leasing Strategies for State and Local Governments |

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Department of Energy High Performance Leasing Strategies for State and Local Governments High Performance Leasing Strategies for State and Local Governments Presentation for the SEE Action Series: High Performance Leasing Strategies for State and Local Governments webinar, presented on January 26, 2013 as part of the U.S. Department of Energy's Technical Assistance Program (TAP). Presentation (5.98 MB) Transcript (93 KB) More Documents & Publications

  18. Text-Alternative Version of High Performance Space Conditioning Systems:

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Part II | Department of Energy II Text-Alternative Version of High Performance Space Conditioning Systems: Part II High Performance Space Conditioning Systems: Part II November 18, 2014 William Zoeller, Stephen Winter Associates Dave Mallay, Home Innovation Research Labs Jordan Dentz, The Levy Partnership Francis Conlin, High Performance Building Solutions Hello everyone! I am Gail Werren with the National Renewable Energy Laboratory, and I'd like to welcome you to today's webinar hosted by

  19. High Performance Builder Spotlight: Green Coast Enterprises - New Orleans,

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Louisiana | Department of Energy High Performance Builder Spotlight: Green Coast Enterprises - New Orleans, Louisiana High Performance Builder Spotlight: Green Coast Enterprises - New Orleans, Louisiana This four-page case study describes Green Coast Enterprises' efforts to rebuild hurricane-ravaged New Orleans through Project Home Again. green_coast_enterprises.pdf (3 MB) More Documents & Publications High Performance Builder Spotlight: Green Coast Enterprises - New Orleans, Louisiana

  20. High Performance Without Increased Cost: Urbane Homes, Louisville, KY -

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Building America Top Innovation | Department of Energy High Performance Without Increased Cost: Urbane Homes, Louisville, KY - Building America Top Innovation High Performance Without Increased Cost: Urbane Homes, Louisville, KY - Building America Top Innovation Photo of a Housing Award logo with a home. This Top Innovation highlights Building America field projects that demonstrated minimal or cost-neutral impacts for high-performance homes and that have significantly influenced the housing

  1. Building America Webinar: High-Performance Enclosure Strategies, Part I:

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Unvented Roof Systems and Innovative Advanced Framing Strategies | Department of Energy High-Performance Enclosure Strategies, Part I: Unvented Roof Systems and Innovative Advanced Framing Strategies Building America Webinar: High-Performance Enclosure Strategies, Part I: Unvented Roof Systems and Innovative Advanced Framing Strategies This webinar, held on February 12, 2015, focused on methods to design and build roof and wall systems for high performance homes that optimize energy and

  2. Building America Webinar: Ventilation Strategies for High Performance

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Homes, Part I: Application-Specific Ventilation Guidelines | Department of Energy Ventilation Strategies for High Performance Homes, Part I: Application-Specific Ventilation Guidelines Building America Webinar: Ventilation Strategies for High Performance Homes, Part I: Application-Specific Ventilation Guidelines This webinar, held on Aug. 26, 2015, covered what makes high-performance homes different from a ventilation perspective and how they might need to be treated differently than

  3. Building America's Top Innovations Advance High Performance Homes |

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Department of Energy Building America's Top Innovations Advance High Performance Homes Building America's Top Innovations Advance High Performance Homes Innovations sponsored by the U.S. Department of Energy's (DOE) Building America program and its teams of building science experts continue to have a transforming impact, leading our nation's home building industry to high-performance homes. Building America researchers have worked directly with more than 300 U.S. production home builders and

  4. The Gadonanotubes: Structural Origin of their High-Performance...

    Office of Scientific and Technical Information (OSTI)

    Title: The Gadonanotubes: Structural Origin of their High-Performance MRI Contrast Agent Behavior Authors: Ma, Qing ; Jebb, Meghan ; Tweedle, Michael F. ; Wilson, Lon J. ...

  5. Building America Webinar: High-Performance Enclosure Strategies...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Advanced Framing Strategies Building America Webinar: High-Performance Enclosure Strategies, Part I: Unvented Roof Systems and Innovative Advanced Framing Strategies This ...

  6. Building America Webinar: Ventilation Strategies for High Performance...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Building America Webinar: High-Performance Enclosure Strategies, Part I: Unvented Roof Systems and Innovative Advanced Framing Strategies Building America Webinar: Retrofit ...

  7. Technology Transfer Webinar on November 12: High-Performance...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Technology Transfer Webinar on November 12: High-Performance Hybrid Simulation/Measurement-Based Tools for Proactive Operator Decision-Support Technology Transfer Webinar on...

  8. Moderate Doping Leads to High Performance of Semiconductor/Insulator...

    Office of Scientific and Technical Information (OSTI)

    Title: Moderate Doping Leads to High Performance of Semiconductor/Insulator Polymer Blend Transistors Authors: Lu, Guanghao ; Blakesley, James ; Himmelberger, Scott ; Pingel, ...

  9. ESCC Evening Discussion: High Performance Data Transfer Eli Dart...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    ESCC Evening Discussion: High Performance Data Transfer Eli Dart, Network Engineer ESnet Network Engineering Group Summer ESCC/Joint Techs Columbus, OH July 14, 2010 Lawrence ...

  10. High-Performance Home Technologies: Solar Thermal & Photovoltaic...

    Energy Savers [EERE]

    Solar Thermal & Photovoltaic Systems; Volume 6 Building America Best Practices Series High-Performance Home Technologies: Solar Thermal & Photovoltaic Systems; Volume 6 ...

  11. DOE ZERH Webinar: High-Performance Home Sales Training, Part...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    number of other green and high-performance home programs, these skills will be critical. ... DOE ZERH Webinar: Technical Resources for Marketing and Selling Zero Energy Ready Homes ...

  12. Overcoming Processing Cost Barriers of High-Performance Lithium...

    Broader source: Energy.gov (indexed) [DOE]

    Lithium-Ion Battery Electrodes Vehicle Technologies Office Merit Review 2014: Overcoming Processing Cost Barriers of High-Performance Lithium-Ion Battery Electrodes ...

  13. High Performance Zintl Phase TE Materials with Embedded Particles...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Presents results from embedding nanoparticles in magnesium silicide alloy matrix ... Zintl Phase Materials with Embedded Nanoparticles High performance Zintl phase TE ...

  14. Affordable High Performance in Production Homes: Artistic Homes...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    extraordinary impact, demonstrating the mainstream builder's business case for adopting ... that demonstrate how high performance homes can be affordable for the mainstream market. ...

  15. High Performance Without Increased Cost: Urbane Homes, Louisville...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    In this profile, Urbane Homes of Louisville, KY, worked with Building America team National Association of Home Builders Research Center to build its first high performance home at ...

  16. High Performance Mica-based Compressive Seals for Solid Oxide...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    High Performance Mica-based Compressive Seals for Solid Oxide Fuel Cells Pacific Northwest National Laboratory Contact PNNL About This Technology In their work, PNNL researchers...

  17. High Performance Photovoltaic Project: Identifying Critical Paths; Preprint

    SciTech Connect (OSTI)

    Symko-Davies, M.; Zweibel, K.; Benner, J.; Sheldon, P.; Noufi, R.; Kurtz, S.; Coutts, T.; Hulstrom, R.

    2001-10-01

    Presented at the 2001 NCPV Program Review Meeting: Describes recent research accomplishments in in-house and subcontracted work in the High-Performance PV Project.

  18. Development of Alternative and Durable High Performance Cathode...

    Broader source: Energy.gov (indexed) [DOE]

    Development of Alternative and Durable High Performance Cathode Supports for PEM Fuel Cells Fuel Cell Kickoff Meeting Agenda Energy Storage Systems 2012 Peer Review Presentations - ...

  19. Rethinking the idealized morphology in high-performance organic...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Rethinking the idealized morphology in high-performance organic photovoltaics December 9, 2011 Tweet EmailPrint Traditionally, organic photovoltaic (OPV) active layers are viewed...

  20. OLEDWORKS DEVELOPS INNOVATIVE HIGH-PERFORMANCE DEPOSITION TECHNOLOGY...

    Energy Savers [EERE]

    high-performance deposition technology that addresses two major aspects of this manufacturing cost: the expense of organic materials per area of useable product, and the...

  1. Enhanced High Temperature Performance of NOx Storage/Reduction...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    (LNT) Materials Enhanced High Temperature Performance of NOx Storage/Reduction (NSR) Materials Deactivation Mechanisms of Base Metal/Zeolite Urea Selective Catalytic Reduction...

  2. Direct Probe Mounted High-Performance Amplifiers for Pulsed Measuremen...

    Office of Scientific and Technical Information (OSTI)

    Direct Probe Mounted High-Performance Amplifiers for Pulsed Measurement Citation Details ... Visit OSTI to utilize additional information resources in energy science and technology. A ...

  3. Direct Probe Mounted High-Performance Amplifiers for Pulsed Measuremen...

    Office of Scientific and Technical Information (OSTI)

    Direct Probe Mounted High-Performance Amplifiers for Pulsed Measurement Citation Details ... Country of Publication: United States Language: English Subject: Materials Science(36) ...

  4. DOE Announces Webinars on High Performance Enclosure Strategies...

    Energy Savers [EERE]

    for Buildings, Fuel Cell Forklifts and Energy Management, and More DOE Announces Webinars on High Performance Enclosure Strategies for Buildings, Fuel Cell Forklifts and Energy ...

  5. Rebuilding It Better: Greensburg, Kansas, High Performance Buildings...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    REBUILDING IT BETTER: GREENSBURG, KANSAS High Performance Buildings Meeting Energy Savings Goals Rebuilding Green: From Vision to Reality Greensburg gathered a diverse group of ...

  6. Chemically Bonded Phosphorus/Graphene Hybrid as a High Performance...

    Office of Scientific and Technical Information (OSTI)

    Room temperature sodium-ion batteries are of great interest for high-energy-density energy ... anode for high performance sodium-ion batteries through a facile ball-milling of red ...

  7. Computing Sciences Staff Help East Bay High Schoolers Upgrade their Summer

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Computing Sciences Staff Help East Bay High Schoolers Upgrade their Summer Computing Sciences Staff Help East Bay High Schoolers Upgrade their Summer August 6, 2015 Jon Bashor, jbashor@lbl.gov, +1 510 486 5849 To help prepare students from underrepresented groups learn about careers in a variety of IT fields, the Laney College Computer Information Systems Department offered its Upgrade: Computer Science Program. Thirty-eight students from 10 East Bay high schools registered for the eight-week

  8. Quantitative evaluation of wrist posture and typing performance: A comparative study of 4 computer keyboards

    SciTech Connect (OSTI)

    Burastero, S.

    1994-05-01

    The present study focuses on an ergonomic evaluation of 4 computer keyboards, based on subjective analyses of operator comfort and on a quantitative analysis of typing performance and wrist posture during typing. The objectives of this study are (1) to quantify differences in the wrist posture and in typing performance when the four different keyboards are used, and (2) to analyze the subjective preferences of the subjects for alternative keyboards compared to the standard flat keyboard with respect to the quantitative measurements.

  9. User's manual for the vertical axis wind turbine performance computer code DARTER

    SciTech Connect (OSTI)

    Klimas, P. C.; French, R. E.

    1980-05-01

    The computer code DARTER (DARrieus Turbine, Elemental Reynolds number) is an aerodynamic performance/loads prediction scheme based upon the conservation of momentum principle. It is the latest evolution in a sequence which began with a model developed by Templin of NRC, Canada, and progressed through the Sandia National Laboratories-developed SIMOSS (SImple MOmentum, Single Streamtube) and DART (DARrieus Turbine) to DARTER.
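
    The conservation-of-momentum idea behind this single-streamtube family of models can be reduced to an actuator-disc balance: the rotor slows the free stream by an axial induction factor a, and the resulting power coefficient Cp = 4a(1-a)^2 peaks at the Betz limit, Cp = 16/27 at a = 1/3. The snippet below is a toy illustration of that momentum principle, not the DARTER algorithm itself.

```python
def power_coefficient(a):
    """Actuator-disc power coefficient for axial induction factor a."""
    return 4.0 * a * (1.0 - a) ** 2

# Scan induction factors on a fine grid; the optimum is the Betz limit.
best_a = max((i / 1000 for i in range(500)), key=power_coefficient)
print(best_a, power_coefficient(best_a))  # ~1/3 and ~16/27
```

    Streamtube codes like DARTER refine this balance per streamtube and per blade element, coupling it to airfoil data at the local Reynolds number, but the underlying momentum bookkeeping is the same.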

  10. High-Performance I/O: HDF5 for Lattice QCD

    SciTech Connect (OSTI)

    Kurth, Thorsten; Pochinsky, Andrew; Sarje, Abhinav; Syritsyn, Sergey; Walker-Loud, Andre

    2015-01-01

    Practitioners of lattice QCD/QFT have been some of the primary pioneer users of state-of-the-art high-performance computing systems, and contribute towards the stress tests of such new machines as soon as they become available. As with all aspects of high-performance computing, I/O is becoming an increasingly specialized component of these systems. In order to take advantage of the latest available high-performance I/O infrastructure, to ensure reliability and backwards compatibility of data files, and to help unify the data structures used in lattice codes, we have incorporated parallel HDF5 I/O into the SciDAC-supported USQCD software stack. Here we present the design and implementation of this I/O framework. Our HDF5 implementation outperforms optimized QIO at the 10-20% level and leaves room for further improvement by utilizing appropriate dataset chunking.
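
    Dataset chunking, named above as the remaining optimization lever, determines the unit of HDF5 I/O: each chunk is read and written as a whole, so a chunk shape that matches each rank's sublattice avoids read-modify-write contention in parallel writes. The arithmetic below is a hedged sketch of that sizing; the lattice dimensions and rank grid are hypothetical, not taken from the USQCD stack.

```python
# A 4D lattice field distributed over a grid of MPI ranks; if each
# rank owns exactly one chunk, concurrent writes never share a chunk.
lattice = (64, 64, 64, 128)          # global lattice (x, y, z, t), hypothetical
rank_grid = (4, 4, 4, 8)             # processor decomposition, hypothetical

chunk = tuple(l // r for l, r in zip(lattice, rank_grid))
n_chunks = 1
for r in rank_grid:
    n_chunks *= r

print(chunk)      # per-rank sublattice = natural HDF5 chunk shape
print(n_chunks)   # one chunk per rank
```

    In an actual parallel-HDF5 code this chunk shape would be passed when creating the dataset, with each rank writing a hyperslab aligned to its own chunk.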

  11. Project Profile: High Performance Reduction/Oxidation Metal Oxides for

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Thermochemical Energy Storage | Department of Energy Project Profile: High Performance Reduction/Oxidation Metal Oxides for Thermochemical Energy Storage Project Profile: High Performance Reduction/Oxidation Metal Oxides for Thermochemical Energy Storage Sandia National Laboratory Logo Sandia National Lab (Sandia), through the Concentrating Solar Power: Efficiently Leveraging Equilibrium Mechanisms for Engineering New Thermochemical Storage (CSP: ELEMENTS) funding program, is systematically

  12. Building America Webinar: High Performance Enclosure Strategies: Part II,

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    New Construction - August 13, 2014 - Cladding Attachment Over Thick Exterior Rigid Insulation | Department of Energy Cladding Attachment Over Thick Exterior Rigid Insulation Building America Webinar: High Performance Enclosure Strategies: Part II, New Construction - August 13, 2014 - Cladding Attachment Over Thick Exterior Rigid Insulation This presentation, Cladding Attachment Over Thick Rigid Exterior Insulation, was delivered at the Building America webinar, High Performance Enclosure

  13. Building America Webinar: High Performance Space Conditioning Systems, Part

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    II - Air Distribution Retrofit Strategies for Affordable Housing | Department of Energy Air Distribution Retrofit Strategies for Affordable Housing Building America Webinar: High Performance Space Conditioning Systems, Part II - Air Distribution Retrofit Strategies for Affordable Housing Jordan Dentz, Advanced Residential Integrated Energy Solutions (ARIES), and Francis Conlin, High Performance Building Solutions, Inc., presenting Air Distribution Retrofit Strategies for Affordable Housing.

  14. Fluorescent Pigments for High-Performance Cool Roofing and Facades |

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Department of Energy Fluorescent Pigments for High-Performance Cool Roofing and Facades Fluorescent Pigments for High-Performance Cool Roofing and Facades PPG Industries and Lawrence Berkeley National Laboratory are partnering to develop a new class of dark-colored pigments for cool metal roof and façade coatings that incorporate near-infrared fluorescence and reflectance to improve energy performance. Image: PPG Industries Berkeley Lab Heat Island Group physicist Paul

  15. Innovative High-Performance Deposition Technology for Low-Cost

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Manufacturing of OLED Lighting | Department of Energy Innovative High-Performance Deposition Technology for Low-Cost Manufacturing of OLED Lighting Innovative High-Performance Deposition Technology for Low-Cost Manufacturing of OLED Lighting Lead Performer: OLEDWorks, LLC - Rochester, NY DOE Total Funding: $1,046,452 Cost Share: $1,046,452 Project Term: October 1, 2013 - December 31, 2015 Funding Opportunity: SSL Manufacturing R&D Funding Opportunity Announcement (FOA) DE-FOA-000079

  16. Integrated Design: A High-Performance Solution for Affordable Housing |

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Department of Energy Integrated Design: A High-Performance Solution for Affordable Housing Integrated Design: A High-Performance Solution for Affordable Housing ARIES lab houses. Photo courtesy of The Levy Partnership, Inc. Lead Performer: The Levy Partnership, Inc.-New York, NY Partners: Habitat for Humanity International /Habitat Research Foundation, Atlanta, GA Columbia County Habitat, NY Habitat of Newburgh, NY Habitat Greater

  17. High Performance Walls in Hot-Dry Climates

    SciTech Connect (OSTI)

    Hoeschele, M.; Springer, D.; Dakin, B.; German, A.

    2015-01-01

    High performance walls represent a high priority measure for moving the next generation of new homes to the Zero Net Energy performance level. The primary goal in improving wall thermal performance revolves around increasing the wall framing from 2x4 to 2x6, adding more cavity and exterior rigid insulation, achieving insulation installation criteria meeting ENERGY STAR's thermal bypass checklist, and reducing the amount of wood penetrating the wall cavity.

  18. PERFORMANCE

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Jee Choi Kent Czechowski Cong Hou Chris McClanahan David S. Noble, Jr. Richard (Rich) Vuduc Salishan Conference on High-Speed Computing Gleneden Beach, Oregon -...

  19. Scalable Computational Methods for the Analysis of High-Throughput Biological Data

    SciTech Connect (OSTI)

    Langston, Michael A

    2012-09-06

    The primary focus of this research project is elucidating the genetic regulatory mechanisms that control an organism's responses to low-dose ionizing radiation. Although low doses (at most ten centigrays) are not lethal to humans, they elicit a highly complex physiological response, with the ultimate outcome in terms of risk to human health unknown. The tools of molecular biology and computational science will be harnessed to study coordinated changes in gene expression that orchestrate the mechanisms a cell uses to manage the radiation stimulus. High performance implementations of novel algorithms that exploit the principles of fixed-parameter tractability will be used to extract gene sets suggestive of co-regulation. Genomic mining will be performed to scrutinize, winnow, and highlight the most promising gene sets for more detailed investigation. The overall goal is to increase our understanding of the health risks associated with exposures to low levels of radiation.
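
    Fixed-parameter tractability, the algorithmic principle invoked above, bounds the exponential blow-up by a small parameter k rather than by the size of the input graph. The textbook example is vertex cover: branch on any uncovered edge, since one of its two endpoints must be in the cover, giving O(2^k * m) time. The sketch below shows that generic branching idea, not the project's gene-set algorithms.

```python
def vertex_cover(edges, k):
    """Return a vertex cover of size <= k, or None if none exists.
    Classic FPT branching: O(2^k * |edges|)."""
    if not edges:
        return set()
    if k == 0:
        return None
    u, v = edges[0]
    for pick in (u, v):               # branch: one endpoint must be chosen
        rest = [e for e in edges if pick not in e]
        sub = vertex_cover(rest, k - 1)
        if sub is not None:
            return sub | {pick}
    return None

# A 4-cycle needs 2 vertices to cover all of its edges
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
print(vertex_cover(edges, 1))  # None: no cover of size 1
print(vertex_cover(edges, 2))  # a cover of size 2, e.g. {0, 2}
```

    Clique-centric gene-set extraction reduces to vertex cover on the complement graph, which is why FPT vertex-cover kernels underpin this style of co-regulation analysis.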

  20. A computational study of x-ray emission from high-Z x-ray sources...

    Office of Scientific and Technical Information (OSTI)

    A computational study of x-ray emission from high-Z x-ray sources on the National Ignition Facility laser Citation Details In-Document Search Title: A computational study of x-ray ...

  1. Rebuilding It Better: Greensburg, Kansas, High Performance Buildings

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Meeting Energy Savings Goals (Brochure) | Department of Energy Rebuilding It Better: Greensburg, Kansas, High Performance Buildings Meeting Energy Savings Goals (Brochure) Rebuilding It Better: Greensburg, Kansas, High Performance Buildings Meeting Energy Savings Goals (Brochure) This fact sheet provides a summary of how NREL's technical assistance in Greensburg, Kansas, helped the town rebuild green after recovering from a tornado in May 2007. Rebuilding It Better: Greensburg, Kansas, High

  2. Final Report- Low Cost High Performance Nanostructured Spectrally Selective Coating

    Broader source: Energy.gov [DOE]

    Solar absorbing coating is a key enabling technology for achieving high-temperature, high-efficiency concentrating solar power operation. A high-performance solar absorbing material must simultaneously meet three stringent requirements: high thermal efficiency (usually measured by a figure of merit), high-temperature durability, and oxidation resistance. The objective of this research is to employ a highly scalable process to fabricate and coat black oxide nanoparticles onto the solar absorber surface to achieve ultra-high thermal efficiency.

  3. Computer-Aided Design of Materials for use under High Temperature Operating Condition

    SciTech Connect (OSTI)

    Rajagopal, K. R.; Rao, I. J.

    2010-01-31

    The procedures in place for producing materials that optimize performance with respect to creep characteristics, oxidation resistance, elevation of melting point, thermal and electrical conductivity, and other thermal and electrical properties are essentially trial-and-error experimentation, which tends to be tremendously time consuming and expensive. A computational approach has been developed that can replace these trial-and-error procedures, so that materials can be efficiently designed and engineered for the application in question, leading to enhanced material performance, significant cost reductions, and shorter development times. The work has relevance to the design and manufacture of turbine blades operating at high temperature; the development of armor and missile heads; corrosion-resistant tanks and containers; better electrical conductors; and the numerous other applications envisaged for specially structured nanocrystalline solids. A robust thermodynamic framework is developed within which the computational approach is formulated. The procedure takes into account microstructural features such as dislocation density, lattice mismatch, stacking faults, volume fractions of inclusions, and interfacial area. A robust model for single-crystal superalloys that accounts for the alloy's microstructure within the context of a continuum model is developed. Having developed the model, we then implement it in a computational scheme using the software ABAQUS/STANDARD. The results of the simulation are compared against experimental data in realistic geometries.

  4. Computing and Computational Sciences Directorate - Computer Science and

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Mathematics Division Computer Science and Mathematics Division The Computer Science and Mathematics Division (CSMD) is ORNL's premier source of basic and applied research in high-performance computing, applied mathematics, and intelligent systems. Our mission includes basic research in computational sciences and application of advanced computing systems, computational, mathematical and analysis techniques to the solution of scientific problems of national importance. We seek to work

  5. Local Option- Property Tax Credit for High Performance Buildings

    Broader source: Energy.gov [DOE]

    Similar to Maryland's Local Option Property Tax Credit for Renewable Energy, Title 9 of Maryland's property tax code creates an optional property tax credit for high performance buildings. This...

  6. Anne Arundel County- High Performance Dwelling Property Tax Credit

    Office of Energy Efficiency and Renewable Energy (EERE)

    The state of Maryland permits local governments (Md Code: Property Tax § 9-242) to offer property tax credits for high performance buildings if they choose to do so. In October 2010 Anne Arundel...

  7. Montgomery County- High Performance Building Property Tax Credit

    Office of Energy Efficiency and Renewable Energy (EERE)

    The state of Maryland permits local governments (Md Code: Property Tax § 9-242) to offer property tax credits for high performance buildings if they choose to do so. Montgomery County has...

  8. Howard County- High Performance and Green Building Property Tax Credit

    Office of Energy Efficiency and Renewable Energy (EERE)

    The state of Maryland permits local governments (Md Code: Property Tax § 9-242) to offer property tax credits for high performance buildings and energy conservation devices (Md Code: Property Tax §...

  9. A High-Performance Recycling Solution for Polystyrene Achieved...

    Office of Scientific and Technical Information (OSTI)

    A High-Performance Recycling Solution for Polystyrene Achieved by the Synthesis of Renewable Poly(thioether) Networks Derived from D-Limonene Citation Details In-Document Search ...

  10. A Comprehensive Look at High Performance Parallel I/O

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    In this era of "big data," high performance parallel I/O - the way disk drives efficiently read and write information on HPC systems - is extremely important. Yet the last book to ...

  11. Energy Design Guidelines for High Performance Schools: Tropical Island Climates

    SciTech Connect (OSTI)

    2004-11-01

    Design guidelines outline high performance principles for the new or retrofit design of K-12 schools in tropical island climates. By incorporating energy improvements into construction or renovation plans, schools can reduce energy consumption and costs.

  12. Development of Alternative and Durable High Performance Cathode...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Supports for PEM Fuel Cells Development of Alternative and Durable High Performance Cathode Supports for PEM Fuel Cells Part of a $100 million fuel cell award announced by DOE ...

  13. Building America Webinar: High Performance Space Conditioning Systems, Part

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    I | Department of Energy I Building America Webinar: High Performance Space Conditioning Systems, Part I The webinar on Oct. 23, 2014, focused on strategies to improve the performance of HVAC systems for low load homes and home performance retrofits. Presenters and specific topics for this webinar will be: * Andrew Poerschke, IBACOS, presenting Simplified Space Conditioning in Low-load Homes. The presentation will focus on what is "simple" when it comes to space conditioning?

  14. Building America Webinar: High Performance Space Conditioning Systems, Part II

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    The webinar on Nov. 18, 2014, continued the series on strategies to improve the performance of HVAC systems for low-load homes and home performance retrofits. Presenters and specific topics included: William Zoeller, Consortium for Advanced Residential Retrofit (CARB), who presented Design Options for Locating Ducts within Conditioned Space. The presentation provided an ...

  15. computers

    National Nuclear Security Administration (NNSA)

    Retired computers used for cybersecurity research at Sandia National...

  16. Seven NNSS buildings achieve High Performance Sustainable Building status

    National Nuclear Security Administration (NNSA)

    Monday, March 21, 2016 - 2:15pm. Nevada Support Facility (NSF), Nevada National Security Site administrative headquarters. Nevada National Security Site (NNSS) - The National Nuclear Security Administration announced the award today of seven High Performance Sustainable Building (HPSB) plaques to the NNSS team for seven "green" buildings. The buildings are: ...

  17. Project Profile: Development and Performance Evaluation of High Temperature Concrete for Thermal Energy Storage for Solar Power Generation

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    The University of Arkansas, under the Thermal Storage FOA, is developing a novel concrete material that can withstand operating ...

  18. Project Profile: Dish Stirling High-Performance Thermal Storage

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    -- This project is inactive -- Sandia National Laboratories (SNL) is working with the National Renewable Energy Laboratory (NREL) and the University of Connecticut, under the National Laboratory R&D competitive funding opportunity, to demonstrate key thermal energy storage (TES) system components for dish Stirling power ...

  19. Text-Alternative Version of High Performance Space Conditioning Systems: Part I

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    High Performance Space Conditioning Systems: Part I, October 21, 2014. Andrew Poerschke, Research Initiatives Specialist, IBACOS; Kohta Ueno, Senior Associate, Building Science Corporation. Gail: Hello everyone. I am Gail Werren with the National Renewable Energy Laboratory. And I'd like to welcome you to today's webinar hosted by the Building America Program. We are excited to have ...

  20. Reliable, High Performance Transistors on Flexible Substrates - Energy Innovation Portal

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Lawrence Berkeley National Laboratory. Publication: "Backplanes for Conformal Electronics and Sensors," Nano Lett., 2011, 11, 5408-5413 (924 KB). Researchers at Berkeley Lab have produced uniform, high performance transistors on mechanically ...

    1. Vehicle Technologies Office Merit Review 2016: Advanced High-Performance Batteries for Electric Vehicle (EV) Applications

      Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

      Presentation given by Amprius at the 2016 DOE Vehicle Technologies Office and Hydrogen and Fuel Cells Program Annual Merit Review and Peer Evaluation Meeting about batteries. es241_stefan_2016_p_web.pdf (739.96 KB)

    2. Building America Webinar: High Performance Enclosure Strategies: Part II, New Construction - August 13, 2014 - Introduction

      Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

      This presentation is the introduction to the Building America webinar, High Performance Enclosure Strategies, Part II, held on August 13, 2014. BA webinar_intro_8_13_14.pdf (969.17 KB)

    3. High-Performance Refrigerator Using Novel Rotating Heat Exchanger

      Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

      Rotating heat exchangers installed in appliances and heat pumps have the potential to reduce energy costs and refrigerant charge in a compact space. Sandia-developed rotating heat exchanger ...

    4. Building America Webinar: High Performance Building Enclosures: Part I, Existing Homes

      Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

      The webinar, presented on May 21, 2014, focused on specific Building America projects that have implemented technical solutions to retrofit building enclosures to reduce energy use and improve durability. Presenters answered tough questions such as: How can builders deal with increasing exterior foundation ...

    5. Building America Webinar: High Performance Enclosure Strategies: Part II, New Construction

      Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

      The webinar is the second in the series on designing and constructing high performance building enclosures, and will focus on effective strategies to address moisture and thermal needs. Peter Baker, Building Science Corporation, will discuss results of 3 years of laboratory and field exposure testing that examined the ...

    6. PORST: a computer code to analyze the performance of retrofitted steam turbines

      SciTech Connect (OSTI)

      Lee, C.; Hwang, I.T.

      1980-09-01

      The computer code PORST was developed to analyze the performance of a retrofitted steam turbine that is converted from a single generating to a cogenerating unit for purposes of district heating. Two retrofit schemes are considered: one converts a condensing turbine to a backpressure unit; the other allows the crossover extraction of steam between turbine cylinders. The code can analyze the performance of a turbine operating at: (1) valve-wide-open condition before retrofit, (2) partial load before retrofit, (3) valve-wide-open after retrofit, and (4) partial load after retrofit.
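
      The four analysis cases named in the abstract can be enumerated programmatically; this is an illustrative sketch with invented names, not part of the PORST code itself:

```python
from itertools import product

def porst_cases():
    """Enumerate the four operating conditions the abstract says PORST
    can analyze: valve-wide-open and partial load, each evaluated both
    before and after the cogeneration retrofit. Illustrative only."""
    retrofit_states = ("before retrofit", "after retrofit")
    load_points = ("valve-wide-open", "partial load")
    return [(load, state) for state, load in product(retrofit_states, load_points)]
```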

    7. Performance Characterization

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Thus, the high-fidelity modeling to come from exascale computing will provide major ... methods lest future performance be limited by the lethargic trends in memory bandwidth. ...

    8. Project Profile: High Performance Reflector Panels for CSP Assemblies

      Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

      PPG, under the CSP R&D FOA, is aiming to develop and commercialize large-area second-surface glass mirrors that are superior in value, cost, and performance to existing mirrors on the market today. ...

    9. In-Service Design & Performance Prediction of Advanced Fusion Material Systems by Computational Modeling and Simulation

      SciTech Connect (OSTI)

      G. R. Odette; G. E. Lucas

      2005-11-15

      This final report on "In-Service Design & Performance Prediction of Advanced Fusion Material Systems by Computational Modeling and Simulation" (DE-FG03-01ER54632) consists of a series of summaries of work that has been published, or presented at meetings, or both. It briefly describes results on the following topics: 1) A Transport and Fate Model for Helium and Helium Management; 2) Atomistic Studies of Point Defect Energetics, Dynamics and Interactions; 3) Multiscale Modeling of Fracture consisting of: 3a) A Micromechanical Model of the Master Curve (MC) Universal Fracture Toughness-Temperature Curve Relation, KJc(T - To), 3b) An Embrittlement DTo Prediction Model for the Irradiation Hardening Dominated Regime, 3c) Non-hardening Irradiation Assisted Thermal and Helium Embrittlement of 8Cr Tempered Martensitic Steels: Compilation and Analysis of Existing Data, 3d) A Model for the KJc(T) of a High Strength NFA MA957, 3e) Cracked Body Size and Geometry Effects of Measured and Effective Fracture Toughness-Model Based MC and To Evaluations of F82H and Eurofer 97, 3-f) Size and Geometry Effects on the Effective Toughness of Cracked Fusion Structures; 4) Modeling the Multiscale Mechanics of Flow Localization-Ductility Loss in Irradiation Damaged BCC Alloys; and 5) A Universal Relation Between Indentation Hardness and True Stress-Strain Constitutive Behavior. Further details can be found in the cited references or presentations that generally can be accessed on the internet, or provided upon request to the authors. Finally, it is noted that this effort was integrated with our base program in fusion materials, also funded by the DOE OFES.
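
      For context on item 3a, the Master Curve universal fracture toughness-temperature relation referred to there is conventionally written in the ASTM E1921 form (quoted here as background; the report's own micromechanical model may refine it):

```latex
K_{Jc(\mathrm{med})}(T) = 30 + 70\,\exp\bigl[0.019\,(T - T_{0})\bigr] \quad \text{MPa}\sqrt{\text{m}}
```

      Here T is the test temperature in degrees Celsius and T0 is the reference temperature at which the median toughness of a 1T specimen is 100 MPa-sqrt(m).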

    10. High-Resolution Computational Algorithms for Simulating Offshore Wind Farms

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)


    11. High-Performance Photovoltaic Project: Identifying Critical Pathways; Kickoff Meeting

      SciTech Connect (OSTI)

      Symko-Davis, M.

      2001-11-07

      The High Performance Photovoltaic Project held a Kickoff Meeting in October, 2001. This booklet contains the presentations given by subcontractors and in-house teams at that meeting. The areas of subcontracted research under the HiPer project include Polycrystalline Thin Films and Multijunction Concentrators. The in-house teams in this initiative will focus on three areas: (1) High-Performance Thin-Film Team-leads the investigation of tandem structures and low-flux concentrators, (2) High-Efficiency Concepts and Concentrators Team-an expansion of an existing team that leads the development of high-flux concentrators, and (3) Thin-Film Process Integration Team-will perform fundamental process and characterization research, to resolve the complex issues of making thin-film multijunction devices.

    12. SCALE: A modular code system for performing standardized computer analyses for licensing evaluation

      SciTech Connect (OSTI)

      1997-03-01

      This Manual represents Revision 5 of the user documentation for the modular code system referred to as SCALE. The history of the SCALE code system dates back to 1969 when the current Computational Physics and Engineering Division at Oak Ridge National Laboratory (ORNL) began providing the transportation package certification staff at the U.S. Atomic Energy Commission with computational support in the use of the new KENO code for performing criticality safety assessments with the statistical Monte Carlo method. From 1969 to 1976 the certification staff relied on the ORNL staff to assist them in the correct use of codes and data for criticality, shielding, and heat transfer analyses of transportation packages. However, the certification staff learned that, with only occasional use of the codes, it was difficult to become proficient in performing the calculations often needed for an independent safety review. Thus, shortly after the move of the certification staff to the U.S. Nuclear Regulatory Commission (NRC), the NRC staff proposed the development of an easy-to-use analysis system that provided the technical capabilities of the individual modules with which they were familiar. With this proposal, the concept of the Standardized Computer Analyses for Licensing Evaluation (SCALE) code system was born. This manual covers an array of modules written for the SCALE package, consisting of drivers, system libraries, cross section and materials properties libraries, input/output routines, storage modules, and help files.

    13. High-pressure X-ray diffraction, Raman, and computational studies of MgCl2 up to 1 Mbar

      Office of Scientific and Technical Information (OSTI)

      High-pressure X-ray diffraction, Raman, and computational studies of MgCl2 up to 1 Mbar: ...

    14. Expert Meeting: Recommended Approaches to Humidity Control in High Performance Homes

      SciTech Connect (OSTI)

      Rudd, A.

      2013-07-01

      The topic of this Building America expert meeting was 'Recommended Approaches to Humidity Control in High Performance Homes,' which was held on October 16, 2012, in Westford, MA, and brought together experts in the field of residential humidity control to address modeling issues for dehumidification. The presentations and discussions centered on computer simulation and field experience with these systems, with the goal of developing foundational information to support the development of a Building America Measure Guideline on this topic.

    15. Building America Expert Meeting: Recommended Approaches to Humidity Control in High Performance Homes

      Broader source: Energy.gov [DOE]

      The topic of this Building America expert meeting was "Recommended Approaches to Humidity Control in High Performance Homes," which was held on October 16, 2012, in Westford, MA, and brought together experts in the field of residential humidity control to address modeling issues for dehumidification. The presentations and discussions centered on computer simulation and field experience with these systems, with the goal of developing foundational information to support the development of a Building America Measure Guideline on this topic.

    16. Project Profile: High-Performance Nanostructured Coating

      Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

      -- This project is inactive -- The University of California San Diego, under the 2012 SunShot Concentrating Solar Power (CSP) R&D funding opportunity announcement (FOA), is developing a new low-cost and scalable process for ...

    17. Building America Webinar: High-Performance Enclosure Strategies, Part I: Unvented Roof Systems and Innovative Advanced Framing Strategies

      Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

      Vladimir Kochkin, Home Innovation Research Labs, will focus on approaches for climate zones 3-5 that increase energy performance and reduce moisture issues in walls. The presentation is based on the Builder's Guide to High Performance Walls, which will be published in 2015. Construction Guide: Energy Efficient, Durable Walls (2.15 MB)

    18. High Performance, Low Cost Hydrogen Generation from Renewable Energy

      Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

      2011 DOE Hydrogen and Fuel Cells Program and Vehicle Technologies Program Annual Merit Review and Peer Evaluation. pd071_ayers_2011_o.pdf (1.38 MB)

    19. Low latency, high bandwidth data communications between compute nodes in a parallel computer

      DOE Patents [OSTI]

      Blocksome, Michael A

      2014-04-01

      Methods, systems, and products are disclosed for data transfers between nodes in a parallel computer that include: receiving, by an origin DMA on an origin node, a buffer identifier for a buffer containing data for transfer to a target node; sending, by the origin DMA to the target node, a RTS message; transferring, by the origin DMA, a data portion to the target node using a memory FIFO operation that specifies one end of the buffer from which to begin transferring the data; receiving, by the origin DMA, an acknowledgement of the RTS message from the target node; and transferring, by the origin DMA in response to receiving the acknowledgement, any remaining data portion to the target node using a direct put operation that specifies the other end of the buffer from which to begin transferring the data, including initiating the direct put operation without invoking an origin processing core.
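
      The two-phase scheme in the abstract (an eager FIFO transfer from one end of the buffer overlapped with the RTS handshake, then a direct put of the remainder starting from the other end) can be sketched in miniature. This is a toy model with invented names, not the patented implementation:

```python
def two_ended_transfer(buffer: bytes, eager_bytes: int) -> bytes:
    """Toy model of the two-phase transfer: a memory-FIFO push sends
    the first portion from the front of the buffer while the RTS
    acknowledgement is in flight; once the ACK arrives, the remainder
    is delivered with a direct put that begins at the opposite end of
    the buffer. All names and sizes here are invented."""
    target = bytearray(len(buffer))

    # Phase 1 (overlapped with the RTS round trip): FIFO transfer
    # beginning at the front of the buffer.
    target[:eager_bytes] = buffer[:eager_bytes]

    # Phase 2 (after the ACK): direct put of the remaining portion,
    # walking in from the back of the buffer toward the middle.
    for i in range(len(buffer) - 1, eager_bytes - 1, -1):
        target[i] = buffer[i]

    return bytes(target)
```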

    20. Low latency, high bandwidth data communications between compute nodes in a parallel computer

      DOE Patents [OSTI]

      Blocksome, Michael A

      2013-07-02

      Methods, systems, and products are disclosed for data transfers between nodes in a parallel computer that include: receiving, by an origin DMA on an origin node, a buffer identifier for a buffer containing data for transfer to a target node; sending, by the origin DMA to the target node, a RTS message; transferring, by the origin DMA, a data portion to the target node using a memory FIFO operation that specifies one end of the buffer from which to begin transferring the data; receiving, by the origin DMA, an acknowledgement of the RTS message from the target node; and transferring, by the origin DMA in response to receiving the acknowledgement, any remaining data portion to the target node using a direct put operation that specifies the other end of the buffer from which to begin transferring the data, including initiating the direct put operation without invoking an origin processing core.

    1. Low latency, high bandwidth data communications between compute nodes in a parallel computer

      DOE Patents [OSTI]

      Blocksome, Michael A

      2014-04-22

      Methods, systems, and products are disclosed for data transfers between nodes in a parallel computer that include: receiving, by an origin DMA on an origin node, a buffer identifier for a buffer containing data for transfer to a target node; sending, by the origin DMA to the target node, a RTS message; transferring, by the origin DMA, a data portion to the target node using a memory FIFO operation that specifies one end of the buffer from which to begin transferring the data; receiving, by the origin DMA, an acknowledgement of the RTS message from the target node; and transferring, by the origin DMA in response to receiving the acknowledgement, any remaining data portion to the target node using a direct put operation that specifies the other end of the buffer from which to begin transferring the data, including initiating the direct put operation without invoking an origin processing core.

    2. High Performance Walls in Hot-Dry Climates

      SciTech Connect (OSTI)

      Hoeschele, Marc; Springer, David; Dakin, Bill; German, Alea

      2015-01-01

      High performance walls represent a high priority measure for moving the next generation of new homes to the Zero Net Energy performance level. The primary goals in improving wall thermal performance are increasing the wall framing from 2x4 to 2x6, adding more cavity and exterior rigid insulation, and achieving insulation installation criteria that meet ENERGY STAR's thermal bypass checklist. To support this activity, in 2013 the Pacific Gas & Electric Company initiated a project with Davis Energy Group (lead for the Building America team, Alliance for Residential Building Innovation) to solicit builder involvement in California in field demonstrations of high performance wall systems. Builders were given incentives and design support in exchange for providing site access for construction observation, cost information, and builder survey feedback. Information from the project was designed to feed into the 2016 Title 24 process, but also to serve as an initial mechanism to engage builders in more high performance construction strategies. This Building America project utilized information collected in the California project.
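
      As a rough illustration of why the 2x4-to-2x6 move plus exterior rigid insulation matters, whole-wall thermal resistance can be estimated with a parallel-path average. All R-values and the framing fraction below are generic assumptions, not project data:

```python
def whole_wall_r(cavity_r, rigid_r, framing_r, framing_fraction):
    """Approximate whole-wall R-value (h*ft2*F/Btu) by area-weighting
    the conductances of the framing and cavity paths, each in series
    with continuous exterior rigid insulation. Illustrative only."""
    u_cavity = 1.0 / (cavity_r + rigid_r)   # U-factor through insulation
    u_frame = 1.0 / (framing_r + rigid_r)   # U-factor through wood studs
    u = framing_fraction * u_frame + (1 - framing_fraction) * u_cavity
    return 1.0 / u

# Hypothetical 2x6 wall: R-21 cavity, R-4 rigid, ~R-6.8 framing, 23% framing
r_wall = whole_wall_r(21, 4, 6.8, 0.23)
```

The framing path drags the whole-wall value well below the nominal cavity R-value, which is why continuous exterior insulation (which covers the studs too) is emphasized.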

    3. Identifying Critical Pathways to High-Performance PV: Preprint

      SciTech Connect (OSTI)

      Symko-Davies, M.; Noufi, R.; Kurtz, S.

      2002-05-01

      This conference paper describes the High-Performance Photovoltaic (HiPerf PV) Project, initiated by the U.S. Department of Energy to substantially increase the viability of photovoltaics (PV) for cost-competitive applications so that PV can contribute significantly to our energy supply and our environment in the 21st century. To accomplish this, the NCPV directs in-house and subcontracted research in high-performance polycrystalline thin-film and multijunction concentrator devices. Details of subcontractor and in-house progress will be described toward identifying critical pathways to 25% polycrystalline thin-film tandem cells and developing multijunction concentrator modules to 33%.

    4. Energy Performance Testing of Asetek's RackCDU System at NREL's High Performance Computing Data Center

      SciTech Connect (OSTI)

      Sickinger, D.; Van Geet, O.; Ravenscroft, C.

      2014-11-01

      In this study, we report on the first tests of Asetek's RackCDU direct-to-chip liquid cooling system for servers at NREL's ESIF data center. The system was simple to install on the existing servers and integrated directly into the data center's existing hydronics system. The focus of this study was to explore the total cooling energy savings and potential for waste-heat recovery of this warm-water liquid cooling system. RackCDU captured up to 64% of server heat into the liquid stream at an outlet temperature of 89 degrees F, and 48% at outlet temperatures approaching 100 degrees F. This system was designed to capture heat from the CPUs only, indicating a potential for increased heat capture if memory cooling was included. Reduced temperatures inside the servers caused all fans to reduce power to the lowest possible BIOS setting, indicating further energy savings potential if additional fan control is included. Preliminary studies manually reducing fan speed (and even removing fans) validated this potential savings but could not be optimized for these working servers. The Asetek direct-to-chip liquid cooling system has been in operation with users for 16 months with no necessary maintenance and no leaks.
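
      The headline heat-capture percentages follow from a simple energy balance on the liquid loop; a minimal sketch, assuming a water-like coolant. The flow rate, temperatures, and server power below are made-up inputs, not NREL's measurements:

```python
def captured_fraction(flow_lpm, t_in_c, t_out_c, server_power_w):
    """Fraction of server power captured by the liquid loop, from the
    coolant flow rate and temperature rise. Assumes a water-like
    coolant (cp ~ 4186 J/(kg*K), density ~ 1 kg/L)."""
    cp = 4186.0                   # specific heat, J/(kg*K)
    mass_flow = flow_lpm / 60.0   # kg/s, at ~1 kg per liter
    q_liquid = mass_flow * cp * (t_out_c - t_in_c)  # heat to liquid, W
    return q_liquid / server_power_w

# Hypothetical example: 2 L/min, 25 C in, 32 C out, 1500 W server
frac = captured_fraction(2.0, 25.0, 32.0, 1500.0)
```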

    5. Large Scale Computing and Storage Requirements for High Energy...

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      for High Energy Physics Accelerator Physics P. Spentzouris, Fermilab Motivation ... Project-X http:www.er.doe.govhepHEPAPreportsP5Report%2006022008.pdf ComPASS The SciDAC2 ...

    6. Report of the Task Force on Next Generation High Performance...

      Office of Environmental Management (EM)

      computing ecosystem, which includes investment in people, and in mathematics, computer science, software engineering, basic sciences, and materials science and engineering. ...

    7. High performance materials in coal conversion utilization. Final report, October 1, 1993--September 30, 1996

      SciTech Connect (OSTI)

      McCay, T.D.; Boss, W.H.; Dahotre, N.

      1996-12-01

      This report describes the research conducted at the University of Tennessee Space Institute on high performance materials for use in corrosive environments. The work was supported by a US Department of Energy University Coal Research grant. Particular attention was given to the silicon carbide particulate reinforced alumina matrix ceramic composite manufactured by Lanxide Corporation as a potential tubular component in a coal-fired recuperative high-temperature air heater. Extensive testing was performed to determine the high temperature corrosion effects on the strength of the material. Computer modeling of the corrosion process was attempted, but the problem proved too complex and was not successful. To simplify the situation, a computer model was successfully produced showing the corrosion thermodynamics involved on a monolithic ceramic under the High Performance Power System (HIPPS) conditions (see Appendix A). To seal the material surface and thus protect the silicon carbide particulate from corrosive attack, a dense, non-porous alumina coating was applied to the material surface. The coating was induced by a defocused carbon dioxide laser beam. High temperature corrosion and strength tests proved the effectiveness of the coating. The carbon dioxide laser was also used to successfully join two pieces of the Lanxide material; however, resources did not allow for testing of the resulting joint.

    8. High-Performance Thermoelectric Devices Based on Abundant Silicide Materials for Vehicle Waste Heat Recovery

      Broader source: Energy.gov (indexed) [DOE]

      Development of high-performance thermoelectric devices for vehicle waste heat recovery will include fundamental research to use abundant, promising low-cost thermoelectric materials, thermal management and interface design, and metrology. shi.pdf (4.76

    9. High performance protection circuit for power electronics applications

      SciTech Connect (OSTI)

      Tudoran, Cristian D. Dădârlat, Dorin N.; Toşa, Nicoleta; Mişan, Ioan

      2015-12-23

      In this paper we present a high performance protection circuit designed for the power electronics applications where the load currents can increase rapidly and exceed the maximum allowed values, like in the case of high frequency induction heating inverters or high frequency plasma generators. The protection circuit is based on a microcontroller and can be adapted for use on single-phase or three-phase power systems. Its versatility comes from the fact that the circuit can communicate with the protected system, having the role of a “sensor” or it can interrupt the power supply for protection, in this case functioning as an external, independent protection circuit.
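
      The microcontroller logic the abstract describes (sample the load current, then either signal the protected system or interrupt its supply) might look schematically like this; the threshold and debounce count are invented for illustration:

```python
def protect(current_samples, limit_amps, trip_count=3):
    """Toy model of an overcurrent protection loop: trip the supply
    once the measured load current exceeds the limit for several
    consecutive samples (a simple debounce against transients).
    Returns the sample index at which the trip fires, or None if the
    current stayed within limits. Values are illustrative only."""
    consecutive_over = 0
    for i, amps in enumerate(current_samples):
        consecutive_over = consecutive_over + 1 if amps > limit_amps else 0
        if consecutive_over >= trip_count:
            return i  # here a real circuit would open the supply relay
    return None
```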

    10. High performance hybrid magnetic structure for biotechnology applications

      DOE Patents [OSTI]

      Humphries, David E.; Pollard, Martin J.; Elkin, Christopher J.

      2006-12-12

      The present disclosure provides a high performance hybrid magnetic structure made from a combination of permanent magnets and ferromagnetic pole materials which are assembled in a predetermined array. The hybrid magnetic structure provides for separation and other biotechnology applications involving holding, manipulation, or separation of magnetic or magnetizable molecular structures and targets. Also disclosed are: a method of assembling the hybrid magnetic plates, a high throughput protocol featuring the hybrid magnetic structure, and other embodiments of the ferromagnetic pole shape, attachment and adapter interfaces for adapting the use of the hybrid magnetic structure for use with liquid handling and other robots for use in high throughput processes.

    11. High performance hybrid magnetic structure for biotechnology applications

      DOE Patents [OSTI]

      Humphries, David E; Pollard, Martin J; Elkin, Christopher J

      2005-10-11

      The present disclosure provides a high performance hybrid magnetic structure made from a combination of permanent magnets and ferromagnetic pole materials which are assembled in a predetermined array. The hybrid magnetic structure provides means for separation and other biotechnology applications involving holding, manipulation, or separation of magnetizable molecular structures and targets. Also disclosed are: a method of assembling the hybrid magnetic plates, a high throughput protocol featuring the hybrid magnetic structure, and other embodiments of the ferromagnetic pole shape, attachment and adapter interfaces for adapting the use of the hybrid magnetic structure for use with liquid handling and other robots for use in high throughput processes.

    12. Theory, Modeling and Computation

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Theory, Modeling and Computation Theory, Modeling and Computation The sophistication of modeling and simulation will be enhanced not only by the wealth of data available from MaRIE but by the increased computational capacity made possible by the advent of extreme computing. CONTACT Jack Shlachter (505) 665-1888 Email Extreme Computing to Power Accurate Atomistic Simulations Advances in high-performance computing and theory allow longer and larger atomistic simulations than currently possible.

    13. Multijunction Photovoltaic Technologies for High-Performance Concentrators

      SciTech Connect (OSTI)

      McConnell, R.; Symko-Davies, M.

      2006-01-01

      Multijunction solar cells provide high-performance technology pathways leading to potentially low-cost electricity generated from concentrated sunlight. The National Center for Photovoltaics at the National Renewable Energy Laboratory has funded different III-V multijunction solar cell technologies and various solar concentration approaches. Within this group of projects, III-V solar cell efficiencies of 41% are close at hand and will likely be reported in these conference proceedings. Companies with well-developed solar concentrator structures foresee installed system costs of $3/watt--half of today's costs--within the next 2 to 5 years as these high-efficiency photovoltaic technologies are incorporated into their concentrator photovoltaic systems. These technology improvements are timely as new large-scale multi-megawatt markets, appropriate for high performance PV concentrators, open around the world.

    14. Multijunction Photovoltaic Technologies for High-Performance Concentrators: Preprint

      SciTech Connect (OSTI)

      McConnell, R.; Symko-Davies, M.

      2006-05-01

      Multijunction solar cells provide high-performance technology pathways leading to potentially low-cost electricity generated from concentrated sunlight. The National Center for Photovoltaics at the National Renewable Energy Laboratory has funded different III-V multijunction solar cell technologies and various solar concentration approaches. Within this group of projects, III-V solar cell efficiencies of 41% are close at hand and will likely be reported in these conference proceedings. Companies with well-developed solar concentrator structures foresee installed system costs of $3/watt--half of today's costs--within the next 2 to 5 years as these high-efficiency photovoltaic technologies are incorporated into their concentrator photovoltaic systems. These technology improvements are timely as new large-scale multi-megawatt markets, appropriate for high performance PV concentrators, open around the world.
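
      The figures in the two records above imply a striking power density at the cell. The 41% efficiency is from the abstract; the 500-sun concentration ratio and the 1000 W/m² reference irradiance are assumptions for this back-of-envelope sketch.

      ```python
      # Back-of-envelope: electrical output per unit cell area for a multijunction
      # concentrator cell. 41% efficiency is from the abstract; the concentration
      # ratio and one-sun irradiance are assumed reference values.
      ONE_SUN = 1000.0            # W/m^2, assumed direct-normal reference

      def cell_output_w_per_cm2(efficiency, concentration):
          """Electrical power density (W/cm^2) at the cell aperture."""
          w_per_m2 = efficiency * concentration * ONE_SUN
          return w_per_m2 / 1e4   # convert per-m^2 to per-cm^2

      p = cell_output_w_per_cm2(0.41, 500)
      print(f"{p:.1f} W/cm^2")
      ```

      Concentrating sunlight onto a small, expensive, high-efficiency cell is exactly the lever behind the projected drop in installed system cost: the optics are cheap per watt, and the cell area shrinks by the concentration ratio.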

    15. Experimental Evaluation of High Performance Integrated Heat Pump

      SciTech Connect (OSTI)

      Miller, William A; Berry, Robert; Durfee, Neal; Baxter, Van D

      2016-01-01

      Integrated heat pump (IHP) technology provides significant potential for energy savings and comfort improvement for residential buildings. In this study, we evaluate the performance of a high performance IHP that provides space heating, cooling, and water heating services. Experiments were conducted according to ASHRAE Standard 206-2013, in which 24 test conditions were identified to evaluate the IHP performance indices based on the airside performance. Empirical curve fits of the unit's compressor maps are used in conjunction with saturated condensing and evaporating refrigerant conditions to deduce the refrigerant mass flowrate, which, in turn, was used to evaluate the refrigerant side performance as a check on the airside performance. Heat pump (compressor, fans, and controls) and water pump power were measured separately per requirements of Standard 206. The system was charged per the system manufacturer's specifications. System test results are presented for each operating mode. The overall IHP performance metrics are determined from the test results per the Standard 206 calculation procedures.
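
      The compressor-map curve fit mentioned above is commonly expressed as the standard 10-coefficient polynomial (the AHRI 540 form) in saturated suction and discharge temperatures. The sketch below shows that form; the coefficients and operating temperatures are placeholders, not the fits from this study.

      ```python
      # Sketch of a compressor-map curve fit: mass flow rate as the standard
      # 10-coefficient polynomial (AHRI 540 form) in saturated suction (S) and
      # discharge (D) temperatures. Coefficients below are placeholders for
      # illustration, not the fits from the study.

      def map_mass_flow(coeffs, t_suction, t_discharge):
          """Evaluate m_dot = sum(c_i * term_i) over the 10 polynomial terms,
          in whatever units the fit was made in."""
          s, d = t_suction, t_discharge
          terms = [1, s, d, s*s, s*d, d*d, s**3, d*s*s, s*d*d, d**3]
          return sum(c * t for c, t in zip(coeffs, terms))

      # Placeholder coefficients, for illustration only
      coeffs = [120.0, 2.5, -0.8, 0.03, -0.01, 0.004, 1e-4, -5e-5, 2e-5, -1e-6]
      m_dot = map_mass_flow(coeffs, 45.0, 110.0)   # assumed saturation temps
      print(f"mass flow: {m_dot:.1f}")
      ```

      Given the mapped mass flow rate and the measured refrigerant states, an enthalpy balance over each coil yields the refrigerant-side capacities used as the cross-check on the airside measurements.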

    16. High performance capacitors using nano-structure multilayer materials fabrication

      DOE Patents [OSTI]

      Barbee, T.W. Jr.; Johnson, G.W.; O'Brien, D.W.

      1996-01-23

      A high performance capacitor is described which is fabricated from nano-structure multilayer materials, such as by controlled, reactive sputtering, and having very high energy-density, high specific energy and high voltage breakdown. The multilayer capacitors, for example, may be fabricated in a "notepad" configuration composed of 200-300 alternating layers of conductive and dielectric materials so as to have a thickness of 1 mm, width of 200 mm, and length of 300 mm, with terminals at each end of the layers suitable for brazing, thereby guaranteeing low contact resistance and high durability. The "notepad" capacitors may be stacked in single or multiple rows (series-parallel banks) to increase the voltage and energy density. 5 figs.
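
      The reason a nano-layered "notepad" stack reaches high energy density is that the alternating conductive/dielectric layers act as many thin parallel-plate capacitors wired in parallel. The sketch below works that out for a stack with the abstract's layer count and outer dimensions; the dielectric permittivity, layer thickness, and working voltage are assumptions for illustration.

      ```python
      # Rough energy-density estimate for a nano-layered "notepad" capacitor.
      # Layer count and outer dimensions are from the abstract; permittivity,
      # layer thickness, and voltage are illustrative assumptions.
      EPS0 = 8.854e-12                      # vacuum permittivity, F/m

      def stack_capacitance(n_dielectric, eps_r, area_m2, layer_thickness_m):
          """Capacitance of n identical thin parallel-plate layers in parallel."""
          return n_dielectric * EPS0 * eps_r * area_m2 / layer_thickness_m

      area = 0.200 * 0.300                  # 200 mm x 300 mm plate (abstract)
      c = stack_capacitance(n_dielectric=125, eps_r=20.0,
                            area_m2=area, layer_thickness_m=4e-6)
      v = 1000.0                            # assumed working voltage
      energy = 0.5 * c * v**2               # stored energy, J
      volume_cm3 = 0.1 * 20.0 * 30.0        # 1 mm x 200 mm x 300 mm (abstract)
      print(f"C = {c*1e6:.0f} uF, U = {energy:.0f} J, "
            f"{energy / volume_cm3:.1f} J/cm^3")
      ```

      Stacking the notepads in series raises the working voltage; adding parallel rows raises the capacitance, which is the series-parallel banking the abstract describes.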

    17. High performance capacitors using nano-structure multilayer materials fabrication

      DOE Patents [OSTI]

      Barbee, Jr., Troy W.; Johnson, Gary W.; O'Brien, Dennis W.

      1996-01-01

      A high performance capacitor fabricated from nano-structure multilayer materials, such as by controlled, reactive sputtering, and having very high energy-density, high specific energy and high voltage breakdown. The multilayer capacitors, for example, may be fabricated in a "notepad" configuration composed of 200-300 alternating layers of conductive and dielectric materials so as to have a thickness of 1 mm, width of 200 mm, and length of 300 mm, with terminals at each end of the layers suitable for brazing, thereby guaranteeing low contact resistance and high durability. The "notepad" capacitors may be stacked in single or multiple rows (series-parallel banks) to increase the voltage and energy density.

    18. High performance capacitors using nano-structure multilayer materials fabrication

      DOE Patents [OSTI]

      Barbee, T.W. Jr.; Johnson, G.W.; O'Brien, D.W.

      1995-05-09

      A high performance capacitor is fabricated from nano-structure multilayer materials, such as by controlled, reactive sputtering, and having very high energy-density, high specific energy and high voltage breakdown. The multilayer capacitors, for example, may be fabricated in a "notepad" configuration composed of 200-300 alternating layers of conductive and dielectric materials so as to have a thickness of 1 mm, width of 200 mm, and length of 300 mm, with terminals at each end of the layers suitable for brazing, thereby guaranteeing low contact resistance and high durability. The "notepad" capacitors may be stacked in single or multiple rows (series-parallel banks) to increase the voltage and energy density. 5 figs.

    19. High performance capacitors using nano-structure multilayer materials fabrication

      DOE Patents [OSTI]

      Barbee, Jr., Troy W.; Johnson, Gary W.; O'Brien, Dennis W.

      1995-01-01

      A high performance capacitor fabricated from nano-structure multilayer materials, such as by controlled, reactive sputtering, and having very high energy-density, high specific energy and high voltage breakdown. The multilayer capacitors, for example, may be fabricated in a "notepad" configuration composed of 200-300 alternating layers of conductive and dielectric materials so as to have a thickness of 1 mm, width of 200 mm, and length of 300 mm, with terminals at each end of the layers suitable for brazing, thereby guaranteeing low contact resistance and high durability. The "notepad" capacitors may be stacked in single or multiple rows (series-parallel banks) to increase the voltage and energy density.

    20. Designing high power targets with computational fluid dynamics (CFD)

      SciTech Connect (OSTI)

      Covrig, S. D.

      2013-11-07

      High power liquid hydrogen (LH2) targets, up to 850 W, have been widely used at Jefferson Lab for the 6 GeV physics program. The typical luminosity loss of a 20 cm long LH2 target was 20% for a beam current of 100 μA rastered on a square of side 2 mm on the target. The 35 cm long, 2500 W LH2 target for the Qweak experiment had a luminosity loss of 0.8% at 180 μA beam rastered on a square of side 4 mm at the target. The Qweak target was the highest power liquid hydrogen target in the world, with the lowest noise figure. The Qweak target was the first one designed with CFD at Jefferson Lab. A CFD facility is being established at Jefferson Lab to design, build and test a new generation of low noise high power targets.
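
      The power ratings quoted above are set largely by beam heating: each beam electron deposits roughly dE/dx × density × length of energy in the hydrogen. The sketch below checks the order of magnitude for the Qweak target; the current and length are from the abstract, while the stopping power and LH2 density are assumed typical values (~4 MeV cm²/g near minimum ionizing, 0.0708 g/cm³).

      ```python
      # Order-of-magnitude check on target beam heating:
      # P ~ I * dE/dx * density * length. Target length and beam current are
      # from the abstract; stopping power and LH2 density are assumed values.

      def beam_power_watts(current_a, dedx_mev_cm2_g, density_g_cm3, length_cm):
          """Beam heating in W: energy loss per electron (MeV) times current,
          using 1 MeV per elementary charge = 1e6 V."""
          energy_loss_mev = dedx_mev_cm2_g * density_g_cm3 * length_cm
          return current_a * energy_loss_mev * 1e6

      p = beam_power_watts(180e-6, 4.0, 0.0708, 35.0)
      print(f"beam heating: {p:.0f} W")
      ```

      This gives roughly 1.8 kW from the beam alone, comfortably within the 2500 W figure quoted for the target; the CFD design work addresses the harder problem of keeping the resulting density fluctuations, and hence the luminosity noise, small.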