National Library of Energy BETA

Sample records for high performance computer

  1. High Performance Computing

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

High-Performance Computing: INL's high-performance computing center provides general use scientific computing capabilities to support the lab's efforts in advanced...

  2. Sandia Energy - High Performance Computing

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

High Performance Computing Home Energy Research Advanced Scientific Computing Research (ASCR) High Performance Computing ...

  3. Introduction to High Performance Computing

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

Introduction to High Performance Computing June 10, 2013 Downloads: Gerber-HPC-2.pdf...

  4. Software and High Performance Computing

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

Software and High Performance Computing Providing world-class high performance computing capability that enables unsurpassed solutions to complex problems of strategic national interest Contact: Kathleen McDonald, Head of Intellectual Property, Business Development Executive, Richard P. Feynman Center for Innovation, (505) 667-5844, Email Software Computational physics, computer science, applied mathematics, statistics and the

  5. Thrusts in High Performance Computing

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

Thrusts in High Performance Computing: Science at Scale (Petaflops to Exaflops); Science through Volume (Thousands to Millions of Simulations); Science in Data (Petabytes to ...

  6. Presentation: High Performance Computing Applications

    Office of Energy Efficiency and Renewable Energy (EERE)

    A briefing to the Secretary's Energy Advisory Board on High Performance Computing Applications delivered by Frederick H. Streitz, Lawrence Livermore National Laboratory.

  7. Software and High Performance Computing

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Computational physics, computer science, applied mathematics, statistics and the ... a fully operational supercomputing environment Providing Current Capability Scientific ...

Intrusion Detection in High Performance Computing: Computer System, Cluster, and Networking...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    iSSH v. Auditd: Intrusion Detection in High Performance Computing Computer System, Cluster, and Networking Summer Institute David Karns, New Mexico State University Katy Protin,...

  9. High Performance Computing Student Career Resources

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    HPC » Students High Performance Computing Student Career Resources Explore the multiple dimensions of a career at Los Alamos Lab: work with the best minds on the planet in an inclusive environment that is rich in intellectual vitality and opportunities for growth. Contact Us Student Liaison Josephine Kilde (505) 667-5086 Email High Performance Computing Capabilities The High Performance Computing (HPC) Division supports the Laboratory mission by managing world-class Supercomputing Centers. Our

  10. SciTech Connect: "high performance computing"

    Office of Scientific and Technical Information (OSTI)

Advanced Search, All Fields: "high performance computing"; Semantic and Term search over Title, Full Text, Bibliographic Data, Creator ...

  11. Introduction to High Performance Computing Using GPUs

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    HPC Using GPUs Introduction to High Performance Computing Using GPUs July 11, 2013 NERSC, NVIDIA, and The Portland Group presented a one-day workshop "Introduction to High Performance Computing Using GPUs" on July 11, 2013 in Room 250 of Sutardja Dai Hall on the University of California, Berkeley, campus. Registration was free and open to all NERSC users; Berkeley Lab Researchers; UC students, faculty, and staff; and users of the Oak Ridge Leadership Computing Facility. This workshop

  12. High Performance Computing Data Center (Fact Sheet)

    SciTech Connect (OSTI)

    Not Available

    2014-08-01

    This two-page fact sheet describes the new High Performance Computing Data Center in the ESIF and talks about some of the capabilities and unique features of the center.

  13. High Performance Computing Data Center (Fact Sheet)

    SciTech Connect (OSTI)

    Not Available

    2012-08-01

    This two-page fact sheet describes the new High Performance Computing Data Center being built in the ESIF and talks about some of the capabilities and unique features of the center.

  14. OCIO Technology Summit: High Performance Computing

    Office of Energy Efficiency and Renewable Energy (EERE)

    Last week, the Office of the Chief Information Officer sponsored a Technology Summit on High Performance Computing (HPC), hosted by the Chief Technology Officer.  This was the eleventh in a series...

  15. Collaboration to advance high-performance computing

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Collaboration to advance high-performance computing Collaboration to advance high-performance computing LANL and EMC will enhance, design, build, test, and deploy new cutting-edge technologies to meet some of the most difficult information technology challenges. December 21, 2011 Los Alamos National Laboratory sits on top of a once-remote mesa in northern New Mexico with the Jemez mountains as a backdrop to research and innovation covering multi-disciplines from bioscience, sustainable energy

  16. Debugging a high performance computing program

    DOE Patents [OSTI]

    Gooding, Thomas M.

    2014-08-19

    Methods, apparatus, and computer program products are disclosed for debugging a high performance computing program by gathering lists of addresses of calling instructions for a plurality of threads of execution of the program, assigning the threads to groups in dependence upon the addresses, and displaying the groups to identify defective threads.
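
The grouping logic the patent describes (bucket threads by the address of their calling instruction, then inspect the outlier groups) can be sketched compactly. Below is a minimal illustration in Python; the input mapping and all names are hypothetical stand-ins for data a debugger would gather from a stopped HPC program:

```python
from collections import defaultdict

def group_threads_by_call_address(thread_addresses):
    """Bucket thread ids by the address of their calling instruction.

    thread_addresses: dict mapping thread id -> instruction address
    (hypothetical input; a real debugger would read these from the
    stopped program). Defective threads tend to show up as small,
    unusual groups.
    """
    groups = defaultdict(list)
    for tid, addr in thread_addresses.items():
        groups[addr].append(tid)
    return dict(groups)

# Example: six threads; thread 5 is stuck at an outlier address.
sample = {0: 0x4005F0, 1: 0x4005F0, 2: 0x4005F0,
          3: 0x4005F0, 4: 0x4005F0, 5: 0x400A3C}
for addr, tids in group_threads_by_call_address(sample).items():
    print(hex(addr), "->", tids)
```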

  17. Debugging a high performance computing program

    DOE Patents [OSTI]

    Gooding, Thomas M.

    2013-08-20

    Methods, apparatus, and computer program products are disclosed for debugging a high performance computing program by gathering lists of addresses of calling instructions for a plurality of threads of execution of the program, assigning the threads to groups in dependence upon the addresses, and displaying the groups to identify defective threads.

  18. Fermilab | Science at Fermilab | Computing | High-performance...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

Lattice QCD Farm at the Grid Computing Center at Fermilab. Computing High-performance Computing A workstation computer ...

  19. Climate Modeling using High-Performance Computing

    SciTech Connect (OSTI)

    Mirin, A A

    2007-02-05

    The Center for Applied Scientific Computing (CASC) and the LLNL Climate and Carbon Science Group of Energy and Environment (E and E) are working together to improve predictions of future climate by applying the best available computational methods and computer resources to this problem. Over the last decade, researchers at the Lawrence Livermore National Laboratory (LLNL) have developed a number of climate models that provide state-of-the-art simulations on a wide variety of massively parallel computers. We are now developing and applying a second generation of high-performance climate models. Through the addition of relevant physical processes, we are developing an earth systems modeling capability as well.

  20. High Performance Computing at the Oak Ridge Leadership Computing Facility

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

High Performance Computing at the Oak Ridge Leadership Computing Facility. Outline: Our Mission; Computer Systems: Present, Past, Future; Challenges Along the Way; Resources for Users. Our Mission: World's most powerful computing facility; Nation's largest concentration of open source materials research; $1.3B budget; 4,250 employees; 3,900 research guests annually; $350 million invested in modernization; Nation's most diverse energy

  1. High-performance computing for airborne applications

    SciTech Connect (OSTI)

    Quinn, Heather M; Manuzzato, Andrea; Fairbanks, Tom; Dallmann, Nicholas; Desgeorges, Rose

    2010-06-28

Recently, there have been attempts to move common satellite tasks to unmanned aerial vehicles (UAVs). UAVs are significantly cheaper to buy than satellites and easier to deploy on an as-needed basis. The more benign radiation environment also allows for an aggressive adoption of state-of-the-art commercial computational devices, which increases the amount of data that can be collected. There are a number of commercial computing devices currently available that are well-suited to high-performance computing. These devices range from specialized computational devices, such as field-programmable gate arrays (FPGAs) and digital signal processors (DSPs), to traditional computing platforms, such as microprocessors. Even though the radiation environment is relatively benign, these devices could be susceptible to single-event effects. In this paper, we will present radiation data for high-performance computing devices in an accelerated neutron environment. These devices include a multi-core digital signal processor, two field-programmable gate arrays, and a microprocessor. From these results, we found that all of these devices are suitable for many airplane environments without reliability problems.

  2. High Performance Computing | Argonne National Laboratory

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

High Performance Computing A visualization of a simulated collision event in the ATLAS detector. This simulation, containing a Z boson and five hadronic jets, is an example of an event that is too complex to be simulated in bulk using ordinary PC-based computing grids.

  3. OSTIblog Articles in the High-performance computing Topic | OSTI...

    Office of Scientific and Technical Information (OSTI)

    Research, ASCR, climate change, earth systems modeling, High-performance computing, ... ORNL's National Center for Computational Sciences... Related Topics: High-performance ...

  4. Nuclear Forces and High-Performance Computing: The Perfect Match...

    Office of Scientific and Technical Information (OSTI)

Conference: Nuclear Forces and High-Performance Computing: The Perfect Match ...

  5. High Performance Computing Richard F. BARRETT

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

Role of Co-design in High Performance Computing. Richard F. Barrett, Shekhar Borkar, Sudip S. Dosanjh, Simon D. Hammond, Michael A. Heroux, X. Sharon Hu, Justin Luitjens, Steven G. Parker, John Shalf, and Li Tang (Sandia National Laboratories, Albuquerque, NM, USA; Intel Corporation; Lawrence Berkeley National Laboratory, Berkeley, CA, USA; University of Notre Dame, South Bend, IN, USA; Nvidia, Inc., Santa Clara, CA, USA). Abstract. Preparations for Exascale

  6. Computational Performance of Ultra-High-Resolution Capability...

    Office of Scientific and Technical Information (OSTI)

Computational Performance of Ultra-High-Resolution Capability in the Community Earth System Model ...

  7. High Performance Computing Facility Operational Assessment, CY...

    Office of Scientific and Technical Information (OSTI)

    At 2.33 petaflops peak performance, the Cray XT Jaguar delivered more than 1.4 billion core hours in calendar year (CY) 2011 to researchers around the world for computational ...

  8. Energy Efficiency Opportunities in Federal High Performance Computing...

    Broader source: Energy.gov (indexed) [DOE]

Energy Efficiency Opportunities in Federal High Performance Computing Data Centers: Prepared for ... EEMs for HPC Data Centers ...

  9. High-performance computer system installed at Los Alamos National

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

High-performance computer system installed at Los Alamos National Laboratory. New high-performance computer system, called Wolf, will be used for unclassified research. June 17, 2014. The Wolf computer system modernizes mid-tier resources for Los Alamos scientists. Contact: Nancy Ambrosiano, Communications Office, (505) 667-0471, Email. "This machine

  10. High-Performance Computing at Los

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

High-Performance Computing at Los Alamos announces milestone for key/value middleware. May 26, 2014. Billion inserts-per-second data milestone reached for supercomputing tool. LOS ALAMOS, N.M., May 29, 2014: At Los Alamos, a supercomputer epicenter where "big data set" really means something, a data middleware project has achieved a milestone for specialized information organization and storage. The Multi-dimensional Hashed Indexed Middleware (MDHIM) project at Los Alamos National Laboratory

  11. Bill Carlson IDA Center for Computing Sciences Making High Performance

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

... Integration: Establish correlation between database tables and data structures in memory. ... High performance computing is in trouble, not because of performance ...

  12. High-Performance Computing Data Center Metering Protocol | Department of

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

High-Performance Computing Data Center Metering Protocol: Guide details the methods for measurement in High-Performance Computing (HPC) data center facilities and documents system strategies that have been used in Department of Energy data centers to increase data center energy efficiency. Download the guide. (1.34 MB) More Documents & Publications: Liquid Cooling v. Air Cooling Evaluation in the Maui High-Performance

  13. High-performance computer system installed at Los Alamos National

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

High-performance computer system installed at Los Alamos National Laboratory. New high-performance computer system, called Wolf, will be used for unclassified research. September 2, 2014. New insights to changing the atomic structure of metals. The Wolf computer system modernizes

  14. Energy Efficiency Opportunities in Federal High Performance Computing Data

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

Energy Efficiency Opportunities in Federal High Performance Computing Data Centers: Case study describes an outline of energy efficiency opportunities in federal high-performance computing data centers. Download the case study. (1.05 MB) More Documents & Publications: Case Study: Opportunities to Improve Energy Efficiency in Three Federal Data Centers; Case Study: Innovative Energy

  15. NNSA Awards Contract for High-Performance Computers | National...

    National Nuclear Security Administration (NNSA)

    Awards Contract for High-Performance Computers October 02, 2007 Contract Highlights Efforts to Integrate Nuclear Weapons Complex WASHINGTON, D.C. -- The Department of Energy's ...

  16. High-Performance Computing for Advanced Smart Grid Applications...

    Office of Scientific and Technical Information (OSTI)

    Title: High-Performance Computing for Advanced Smart Grid Applications The power grid is becoming far more complex as a result of the grid evolution meeting an information ...

  17. Webinar "Applying High Performance Computing to Engine Design...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Webinar "Applying High Performance Computing to Engine Design Using Supercomputers" Share ... Study Benefits of Bioenergy Crop Integration Video: Biofuel technology at Argonne

  18. high performance computing | National Nuclear Security Administration

    National Nuclear Security Administration (NNSA)

    Livermore National Laboratory (LLNL), announced her retirement last week after 15 years of leading Livermore's Computation Directorate. "Dona has successfully led a ...

  19. High Performance Computing Data Center Metering Protocol

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    1.5% of all electricity used in the US at that time. The report then suggested that the overall consumption would rise ... computers utilized by end users, and servers and ...

  20. DOE ASSESSMENT SEAB Recommendations Related to High Performance Computing

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

DOE ASSESSMENT SEAB Recommendations Related to High Performance Computing 1. Introduction The Department of Energy (DOE) is planning to develop and deliver capable exascale computing systems by 2023-24. These systems are expected to have a one-hundred- to one-thousand-fold increase in sustained performance over today's computing capabilities, capabilities critical to enabling the next-generation computing for national security, science, engineering, and large-scale data analytics needed to

  1. High Performance Computational Biology: A Distributed computing Perspective (2010 JGI/ANL HPC Workshop)

    ScienceCinema (OSTI)

Konerding, David [Google, Inc.]

    2011-06-08

    David Konerding from Google, Inc. gives a presentation on "High Performance Computational Biology: A Distributed Computing Perspective" at the JGI/Argonne HPC Workshop on January 26, 2010.

  2. 100 supercomputers later, Los Alamos high-performance computing still

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

100 supercomputers later, Los Alamos high-performance computing still supports national security mission. Los Alamos National Laboratory has deployed 100 supercomputers in the last 60 years. November 12, 2014. 1952 MANIAC-I supercomputer. Contact: Nancy Ambrosiano, Communications Office, (505) 667-0471, Email. "Computing power for our Laboratory's national security mission is a

  3. High-performance computing of electron microstructures

    SciTech Connect (OSTI)

    Bishop, A. [Los Alamos National Lab., NM (United States); Birnir, B.; Galdrikian, B.; Wang, L. [Univ. of California, Santa Barbara, CA (United States)

    1998-12-01

    This is the final report of a three-year, Laboratory Directed Research and Development (LDRD) project at the Los Alamos National Laboratory (LANL). The project was a collaboration between the Quantum Institute at the University of California-Santa Barbara (UCSB) and the Condensed Matter and Statistical Physics Group at LANL. The project objective, which was successfully accomplished, was to model quantum properties of semiconductor nanostructures that were fabricated and measured at UCSB using dedicated molecular-beam epitaxy and free-electron laser facilities. A nonperturbative dynamic quantum theory was developed for systems driven by time-periodic external fields. For such systems, dynamic energy spectra of electrons and photons and their corresponding wave functions were obtained. The results are in good agreement with experimental investigations. The algorithms developed are ideally suited for massively parallel computing facilities and provide a fundamental advance in the ability to predict quantum-well properties and guide their engineering. This is a definite step forward in the development of nonlinear optical devices.
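
The abstract's "nonperturbative dynamic quantum theory ... for systems driven by time-periodic external fields" is, in standard terminology, a Floquet-type treatment; as a sketch of the textbook form (not taken from the report itself), the quasienergies below play the role of the "dynamic energy spectra":

```latex
% Standard Floquet form for a time-periodic Hamiltonian,
% H(t + T) = H(t) with T = 2\pi/\omega:
\psi_\alpha(t) = e^{-i \epsilon_\alpha t/\hbar}\, \phi_\alpha(t),
\qquad \phi_\alpha(t + T) = \phi_\alpha(t),
% where the quasienergies \epsilon_\alpha and the periodic modes
% \phi_\alpha constitute the dynamic spectra and wave functions of the
% driven system.
```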

  4. Intro - High Performance Computing for 2015 HPC Annual Report

    SciTech Connect (OSTI)

    Klitsner, Tom

    2015-10-01

The recent Executive Order creating the National Strategic Computing Initiative (NSCI) recognizes the value of high performance computing for economic competitiveness and scientific discovery and commits to accelerate delivery of exascale computing. The HPC programs at Sandia (the NNSA ASC program and Sandia's Institutional HPC Program) are focused on ensuring that Sandia has the resources necessary to deliver computation in the national interest.

  5. High-Performance Computing and Visualization | Energy Systems Integration |

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

NREL High-Performance Computing and Visualization: High-performance computing (HPC) and visualization at NREL propel technology innovation as a research tool by which scientists and engineers find new ways to tackle our nation's energy challenges, challenges that cannot be addressed through traditional experimentation alone. These research efforts will save time and money and significantly improve the likelihood of breakthroughs

  6. Continuous Monitoring And Cyber Security For High Performance Computing

    Office of Scientific and Technical Information (OSTI)

Conference: Continuous Monitoring And Cyber Security For High Performance Computing. Authors: Malin, Alex B.; Van Heule, Graham K. (Los Alamos National Laboratory). Publication Date: 2013-08-02. OSTI Identifier: 1089452. Report Number(s): LA-UR-13-21921. DOE Contract Number: AC52-06NA25396. Resource Type: Conference

  7. A Component Architecture for High-Performance Scientific Computing

    SciTech Connect (OSTI)

    Bernholdt, D E; Allan, B A; Armstrong, R; Bertrand, F; Chiu, K; Dahlgren, T L; Damevski, K; Elwasif, W R; Epperly, T W; Govindaraju, M; Katz, D S; Kohl, J A; Krishnan, M; Kumfert, G; Larson, J W; Lefantzi, S; Lewis, M J; Malony, A D; McInnes, L C; Nieplocha, J; Norris, B; Parker, S G; Ray, J; Shende, S; Windus, T L; Zhou, S

    2004-12-14

    The Common Component Architecture (CCA) provides a means for software developers to manage the complexity of large-scale scientific simulations and to move toward a plug-and-play environment for high-performance computing. In the scientific computing context, component models also promote collaboration using independently developed software, thereby allowing particular individuals or groups to focus on the aspects of greatest interest to them. The CCA supports parallel and distributed computing as well as local high-performance connections between components in a language-independent manner. The design places minimal requirements on components and thus facilitates the integration of existing code into the CCA environment. The CCA model imposes minimal overhead to minimize the impact on application performance. The focus on high performance distinguishes the CCA from most other component models. The CCA is being applied within an increasing range of disciplines, including combustion research, global climate simulation, and computational chemistry.
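
The plug-and-play idea is easiest to see as a provides/uses port pattern: a component registers the ports it provides, and other components look up the ports they use, with the framework doing the wiring. Below is a toy sketch in Python; the real CCA specification defines this in SIDL/C++ terms, so every name here is illustrative only:

```python
class Framework:
    """Toy stand-in for a CCA-style framework that wires ports."""
    def __init__(self):
        self.provided = {}                    # port name -> implementation

    def add_provides_port(self, name, impl):
        self.provided[name] = impl            # a component offers a service

    def get_port(self, name):
        return self.provided[name]            # another component uses it

class LinearSolver:
    """Component providing a solver port."""
    def solve(self, rhs):
        return [x / 2.0 for x in rhs]         # stand-in computation

class Integrator:
    """Component that uses the solver port without knowing who provides it."""
    def __init__(self, framework):
        self.solver = framework.get_port("LinearSolverPort")

    def step(self, state):
        return self.solver.solve(state)

fw = Framework()
fw.add_provides_port("LinearSolverPort", LinearSolver())
print(Integrator(fw).step([2.0, 4.0]))        # -> [1.0, 2.0]
```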

  8. LANL installs high-performance computer system | National Nuclear Security

    National Nuclear Security Administration (NNSA)

LANL installs high-performance computer system. Friday, June 20, 2014 - 10:29am. Los Alamos National Laboratory recently installed a new high-performance computer system, called Wolf, which will be used for unclassified research. Wolf will help modernize mid-tier resources available to the lab and can be used to advance many fields of science. Wolf, manufactured by Cray Inc., has 616 compute nodes, each with two 8-core 2.6 GHz Intel "Sandybridge" processors,

  9. High-Performance Computing for Advanced Smart Grid Applications

    SciTech Connect (OSTI)

    Huang, Zhenyu; Chen, Yousu

    2012-07-06

The power grid is becoming far more complex as a result of the grid evolution meeting an information revolution. Due to the penetration of smart grid technologies, the grid is evolving at an unprecedented speed and the information infrastructure is fundamentally improved with a large number of smart meters and sensors that produce several orders of magnitude larger amounts of data. How to pull data in, perform analysis, and put information out in a real-time manner is a fundamental challenge in smart grid operation and planning. The future power grid requires high performance computing to be one of the foundational technologies in developing the algorithms and tools for the significantly increased complexity. New techniques and computational capabilities are required to meet the demands for higher reliability and better asset utilization, including advanced algorithms and computing hardware for large-scale modeling, simulation, and analysis. This chapter summarizes the computational challenges in smart grid and the need for high performance computing, and presents examples of how high performance computing might be used for future smart grid operation and planning.

  10. High performance computing and communications: FY 1997 implementation plan

    SciTech Connect (OSTI)

    1996-12-01

The High Performance Computing and Communications (HPCC) Program was formally authorized by passage, with bipartisan support, of the High-Performance Computing Act of 1991, signed on December 9, 1991. The original Program, in which eight Federal agencies participated, has now grown to twelve agencies. This Plan provides a detailed description of the agencies' FY 1996 HPCC accomplishments and FY 1997 HPCC plans. Section 3 of this Plan provides an overview of the HPCC Program. Section 4 contains more detailed definitions of the Program Component Areas, with an emphasis on the overall directions and milestones planned for each PCA. Appendix A provides a detailed look at HPCC Program activities within each agency.

  11. High Performance Parallel Computing of Flows in Complex Geometries |

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

Argonne Leadership Computing Facility. Authors: Gicquel, L.Y.M., Gourdain, N., Boussuge, J.F., Deniau, H., Staffelbach, G., Wolf, P., Poinsot, T. Efficient numerical tools, taking advantage of the ever-increasing power of high-performance computers, become key elements in the fields of energy supply and transportation, not only from a purely scientific point of view, but also at the design stage in industry. Indeed, flow phenomena that occur in or around the industrial

  12. High Performance Parallel Computing of Flows in Complex Geometries: I.

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

I. Methods | Argonne Leadership Computing Facility. Authors: Gourdain, N., Gicquel, L., Montagnac, M., Vermorel, O., Gazaix, M., Staffelbach, G., Garcia, M., Boussuge, J-F, Poinsot, T. Efficient numerical tools, coupled with high-performance computers, have become a key element of the design process in the fields of energy supply and transportation. However, flow phenomena that occur in complex systems such as gas turbines and aircraft are still not understood, mainly because of the

  13. The role of interpreters in high performance computing

    SciTech Connect (OSTI)

Naumann, Axel; Canal, Philippe (Fermilab)

    2008-01-01

Compiled code is fast; interpreted code is slow. There is not much we can do about it, and it is the reason why interpreter use in high performance computing is usually restricted to job submission. We show where interpreters make sense even in the context of analysis code, and what aspects have to be taken into account to make this combination a success.

  14. High performance computing and communications: FY 1996 implementation plan

    SciTech Connect (OSTI)

    1995-05-16

The High Performance Computing and Communications (HPCC) Program was formally authorized by passage of the High Performance Computing Act of 1991, signed on December 9, 1991. Twelve federal agencies, in collaboration with scientists and managers from US industry, universities, and research laboratories, have developed the Program to meet the challenges of advancing computing and associated communications technologies and practices. This plan provides a detailed description of the agencies' HPCC implementation plans for FY 1995 and FY 1996. This Implementation Plan contains three additional sections. Section 3 provides an overview of the HPCC Program definition and organization. Section 4 contains a breakdown of the five major components of the HPCC Program, with an emphasis on the overall directions and milestones planned for each one. Section 5 provides a detailed look at HPCC Program activities within each agency.

  15. Toward a new metric for ranking high performance computing systems...

    Office of Scientific and Technical Information (OSTI)

    performance for a growing collection of important science and engineering applications. ... performance and expect to drive computer system design and implementation in ...

  16. A directory service for configuring high-performance distributed computations

    SciTech Connect (OSTI)

    Fitzgerald, S.; Kesselman, C.; Foster, I.

    1997-08-01

    High-performance execution in distributed computing environments often requires careful selection and configuration not only of computers, networks, and other resources but also of the protocols and algorithms used by applications. Selection and configuration in turn require access to accurate, up-to-date information on the structure and state of available resources. Unfortunately, no standard mechanism exists for organizing or accessing such information. Consequently, different tools and applications adopt ad hoc mechanisms, or they compromise their portability and performance by using default configurations. We propose a Metacomputing Directory Service that provides efficient and scalable access to diverse, dynamic, and distributed information about resource structure and state. We define an extensible data model to represent required information and present a scalable, high-performance, distributed implementation. The data representation and application programming interface are adopted from the Lightweight Directory Access Protocol; the data model and implementation are new. We use the Globus distributed computing toolkit to illustrate how this directory service enables the development of more flexible and efficient distributed computing services and applications.
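
Since the data representation and API were adopted from LDAP, the flavor of a directory query can be shown with a modern LDAP client. Below is a sketch using the ldap3 Python library; the server address, base DN, object class, and attribute names are all hypothetical, and the original Metacomputing Directory Service predates this library:

```python
from ldap3 import Server, Connection, ALL

# Hypothetical directory of grid compute resources.
server = Server("ldap://mds.example.org", get_info=ALL)
conn = Connection(server, auto_bind=True)

# Ask the directory for machines with free nodes, analogous to the
# resource-selection queries the paper describes.
conn.search(search_base="o=grid",
            search_filter="(objectClass=computeResource)",
            attributes=["hostName", "freeNodes"])
for entry in conn.entries:
    print(entry.hostName, entry.freeNodes)
```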

  17. SC15 High Performance Computing (HPC) Transforms Batteries - Joint Center

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

for Energy Storage Research. September 21, 2015, Videos. SC15 High Performance Computing (HPC) Transforms Batteries. A new breakthrough battery, one that has significantly higher energy, lasts longer, and is cheaper and safer, will likely be impossible without a new material discovery. Kristin Persson and other JCESR scientists at Lawrence Berkeley National Laboratory are taking some of the guesswork out of the discovery process with the Electrolyte Genome Project. Electrolyte Genome

  18. 100 supercomputers later, Los Alamos high-performance computing still

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

100 supercomputers later, Los Alamos high-performance computing still supports national security mission. Los Alamos National Laboratory has deployed 100 supercomputers in the last 60 years. January 1, 2015. 1952 MANIAC-I supercomputer. Contact: Linda Anderman, Email. From the 1952

  19. Transforming Power Grid Operations via High Performance Computing

    SciTech Connect (OSTI)

    Huang, Zhenyu; Nieplocha, Jarek

    2008-07-31

Past power grid blackout events revealed the inadequacy of grid operations in responding to adverse situations, partially due to low computational efficiency in grid operation functions. High performance computing (HPC) provides a promising solution to this problem. HPC applications in power grid computation also become necessary to take advantage of parallel computing platforms, as the computer industry is undergoing a significant change from the traditional single-processor environment to an era of multi-processor computing platforms. HPC applications to power grid operations are multi-fold. HPC can improve today's grid operation functions like state estimation and contingency analysis and reduce the solution time from minutes to seconds, comparable to SCADA measurement cycles. HPC also enables the integration of dynamic analysis into real-time grid operations. Dynamic state estimation, look-ahead dynamic simulation, and real-time dynamic contingency analysis can be implemented and would be three key dynamic functions in future control centers. HPC applications call for better decision support tools, which also need HPC support to handle large volumes of data and large numbers of cases. Given the complexity of the grid and the sheer number of possible configurations, HPC is considered to be an indispensable element in the next generation of control centers.
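
Contingency analysis is a natural fit for HPC because each outage case is an independent solve. A minimal sketch of that parallel pattern in Python follows; the per-case "solve" is a placeholder, since a real implementation re-solves the network power-flow equations for every case:

```python
from concurrent.futures import ProcessPoolExecutor

def run_contingency(line):
    """Stand-in for one power-flow solve with transmission line `line`
    outaged; a real analysis would re-solve the network equations."""
    return line, (line % 97 == 0)            # pretend a few cases violate limits

if __name__ == "__main__":
    outages = range(1, 1001)                 # an N-1 contingency list
    with ProcessPoolExecutor() as pool:      # independent cases run in parallel
        results = pool.map(run_contingency, outages)
        violations = [line for line, bad in results if bad]
    print(f"{len(violations)} of 1000 contingencies violate limits")
```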

  20. High Performance Computing with Harness over InfiniBand

    SciTech Connect (OSTI)

    Valentini, Alessandro; Di Biagio, Christian; Batino, Fabrizio; Pennella, Guido; Palma, Fabrizio; Engelmann, Christian

    2009-01-01

Harness is an adaptable, plug-in-based middleware framework able to support distributed parallel computing. At present it is based on the Ethernet protocol, which cannot guarantee high-performance throughput or deterministic real-time behavior. In recent years, both research and industry have developed new network architectures (InfiniBand, Myrinet, iWARP, etc.) to overcome those limits. This paper concerns the integration of Harness with InfiniBand, focusing on two solutions: IP over InfiniBand (IPoIB) and Socket Direct Protocol (SDP) technology. They allow the Harness middleware to take advantage of the enhanced features provided by the InfiniBand Architecture.

  1. High performance computing and communications: FY 1995 implementation plan

    SciTech Connect (OSTI)

    1994-04-01

The High Performance Computing and Communications (HPCC) Program was formally established following passage of the High Performance Computing Act of 1991 signed on December 9, 1991. Ten federal agencies in collaboration with scientists and managers from US industry, universities, and laboratories have developed the HPCC Program to meet the challenges of advancing computing and associated communications technologies and practices. This plan provides a detailed description of the agencies' HPCC implementation plans for FY 1994 and FY 1995. This Implementation Plan contains three additional sections. Section 3 provides an overview of the HPCC Program definition and organization. Section 4 contains a breakdown of the five major components of the HPCC Program, with an emphasis on the overall directions and milestones planned for each one. Section 5 provides a detailed look at HPCC Program activities within each agency. Although the Department of Education is an official HPCC agency, its current funding and reporting of crosscut activities goes through the Committee on Education and Health Resources, not the HPCC Program. For this reason the Implementation Plan covers nine HPCC agencies.

  2. High Performance Computing - Power Application Programming Interface Specification.

    SciTech Connect (OSTI)

Laros, James H.; Kelly, Suzanne M.; Pedretti, Kevin; Grant, Ryan; Olivier, Stephen Lecler; Levenhagen, Michael J.; DeBonis, David

    2014-08-01

    Measuring and controlling the power and energy consumption of high performance computing systems by various components in the software stack is an active research area [13, 3, 5, 10, 4, 21, 19, 16, 7, 17, 20, 18, 11, 1, 6, 14, 12]. Implementations in lower level software layers are beginning to emerge in some production systems, which is very welcome. To be most effective, a portable interface to measurement and control features would significantly facilitate participation by all levels of the software stack. We present a proposal for a standard power Application Programming Interface (API) that endeavors to cover the entire software space, from generic hardware interfaces to the input from the computer facility manager.
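
As a loose illustration of the measure-and-control pattern such an API would standardize, consider the sketch below. The class and method names are hypothetical; the actual Power API is a C specification whose interfaces are not reproduced here:

```python
import random

class NodePower:
    """Hypothetical stand-in for one node's power sensor and cap
    control; the actual Power API is a C specification and none of
    its names are reproduced here."""
    def __init__(self):
        self.cap_watts = None                  # no cap until one is set

    def read_watts(self):
        return random.uniform(180.0, 260.0)    # simulated measurement

    def set_cap(self, watts):
        self.cap_watts = watts                 # simulated control knob

def enforce_budget(nodes, budget_watts):
    """Divide a facility-level power budget evenly and cap any node
    currently drawing more than its share."""
    per_node = budget_watts / len(nodes)
    for node in nodes:
        if node.read_watts() > per_node:
            node.set_cap(per_node)

nodes = [NodePower() for _ in range(4)]
enforce_budget(nodes, budget_watts=900.0)
print([n.cap_watts for n in nodes])
```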

  3. In the OSTI Collections: High-Performance Computing | OSTI, US...

    Office of Scientific and Technical Information (OSTI)

    ... approach to exascale-computing resilience, but choosing one approach now would ... opportunities for low-power, high-resilience technology, aiming for an early ...

  4. High-performance Computing Applied to Semantic Databases

    SciTech Connect (OSTI)

    Goodman, Eric L.; Jimenez, Edward; Mizell, David W.; al-Saffar, Sinan; Adolf, Robert D.; Haglin, David J.

    2011-06-02

To date, the application of high-performance computing resources to Semantic Web data has largely focused on commodity hardware and distributed memory platforms. In this paper we make the case that more specialized hardware can offer superior scaling and close to an order of magnitude improvement in performance. In particular we examine the Cray XMT. Its key characteristics, a large, global shared-memory, and processors with a memory-latency tolerant design, offer an environment conducive to programming for the Semantic Web and have engendered results that far surpass current state of the art. We examine three fundamental pieces requisite for a fully functioning semantic database: dictionary encoding, RDFS inference, and query processing. We show scaling up to 512 processors (the largest configuration we had available), and the ability to process 20 billion triples completely in-memory.
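
Of the three pieces, dictionary encoding is the simplest to illustrate: every string term is mapped to a compact integer id so that inference and joins operate on integers. Below is a serial toy version in Python; the paper's implementation is massively multithreaded on the XMT, and the terms here are made up:

```python
def encode_triples(triples):
    """Dictionary-encode RDF triples: map each string term to a compact
    integer id so that inference and joins operate on integers."""
    ids, encoded = {}, []
    for s, p, o in triples:
        encoded.append(tuple(ids.setdefault(t, len(ids)) for t in (s, p, o)))
    return ids, encoded

# Made-up terms purely for illustration.
triples = [("ex:alice", "rdf:type", "ex:Person"),
           ("ex:bob",   "rdf:type", "ex:Person"),
           ("ex:alice", "ex:knows", "ex:bob")]
dictionary, encoded = encode_triples(triples)
print(dictionary)   # term -> integer id
print(encoded)      # the same triples as integer tuples
```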

  5. High-performance computing applied to semantic databases.

    SciTech Connect (OSTI)

    al-Saffar, Sinan; Jimenez, Edward Steven, Jr.; Adolf, Robert; Haglin, David; Goodman, Eric L.; Mizell, David

    2010-12-01

To date, the application of high-performance computing resources to Semantic Web data has largely focused on commodity hardware and distributed memory platforms. In this paper we make the case that more specialized hardware can offer superior scaling and close to an order of magnitude improvement in performance. In particular we examine the Cray XMT. Its key characteristics, a large, global shared-memory, and processors with a memory-latency tolerant design, offer an environment conducive to programming for the Semantic Web and have engendered results that far surpass current state of the art. We examine three fundamental pieces requisite for a fully functioning semantic database: dictionary encoding, RDFS inference, and query processing. We show scaling up to 512 processors (the largest configuration we had available), and the ability to process 20 billion triples completely in-memory.

  6. Multicore Challenges and Benefits for High Performance Scientific Computing

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Nielsen, Ida M.B.; Janssen, Curtis L.

    2008-01-01

Until recently, performance gains in processors were achieved largely by improvements in clock speeds and instruction level parallelism. Thus, applications could obtain performance increases with relatively minor changes by upgrading to the latest generation of computing hardware. Currently, however, processor performance improvements are realized by using multicore technology and hardware support for multiple threads within each core, and taking full advantage of this technology to improve the performance of applications requires exposure of extreme levels of software parallelism. We will here discuss the architecture of parallel computers constructed from many multicore chips as well as techniques for managing the complexity of programming such computers, including the hybrid message-passing/multi-threading programming model. We will illustrate these ideas with a hybrid distributed memory matrix multiply and a quantum chemistry algorithm for energy computation using Møller–Plesset perturbation theory.
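
The hybrid message-passing/multi-threading model the authors illustrate with a distributed matrix multiply can be sketched as follows, assuming mpi4py and NumPy: ranks exchange data by message passing, while each rank multiplies its row block with a thread pool (NumPy releases the GIL inside the multiply). This is a model of the pattern, not the authors' code:

```python
# Run under an MPI launcher, e.g.: mpiexec -n 4 python hybrid_matmul.py
from concurrent.futures import ThreadPoolExecutor
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

n = 512                                        # assume n divisible by size
if rank == 0:
    A = np.random.rand(n, n)
    B = np.random.rand(n, n)
else:
    A, B = None, None

B = comm.bcast(B, root=0)                      # message passing: replicate B
A_block = comm.scatter(                        # distribute row blocks of A
    np.split(A, size) if rank == 0 else None, root=0)

def row_times_B(i):                            # threads within each rank;
    return A_block[i] @ B                      # NumPy releases the GIL here

with ThreadPoolExecutor() as pool:
    C_block = np.vstack(list(pool.map(row_times_B, range(len(A_block)))))

C_blocks = comm.gather(C_block, root=0)        # collect the result blocks
if rank == 0:
    print(np.allclose(np.vstack(C_blocks), A @ B))   # sanity check
```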

  7. Toward a new metric for ranking high performance computing systems...

    Office of Scientific and Technical Information (OSTI)

    as a true measure of system performance for a growing collection of important science and engineering applications. In this paper we describe a new high performance conjugate...

  8. Power/energy use cases for high performance computing.

    SciTech Connect (OSTI)

Laros, James H.; Kelly, Suzanne M.; Hammond, Steven; Elmore, Ryan; Munch, Kristin

    2013-12-01

Power and Energy have been identified as a first order challenge for future extreme scale high performance computing (HPC) systems. In practice the breakthroughs will need to be provided by the hardware vendors. But to make the best use of the solutions in an HPC environment, it will likely require periodic tuning by facility operators and software components. This document describes the actions and interactions needed to maximize power resources. It strives to cover the entire operational space that an HPC system occupies. The descriptions are presented as formal use cases, as documented in the Unified Modeling Language Specification [1]. The document is intended to provide a common understanding to the HPC community of the necessary management and control capabilities. Assuming a common understanding can be achieved, the next step will be to develop a set of Application Programming Interfaces (APIs) that hardware vendors and software developers could utilize to steer power consumption.

  9. High performance parallel computers for science: New developments at the Fermilab advanced computer program

    SciTech Connect (OSTI)

    Nash, T.; Areti, H.; Atac, R.; Biel, J.; Cook, A.; Deppe, J.; Edel, M.; Fischler, M.; Gaines, I.; Hance, R.

    1988-08-01

Fermilab's Advanced Computer Program (ACP) has been developing highly cost-effective, yet practical, parallel computers for high energy physics since 1984. The ACP's latest developments are proceeding in two directions. A Second Generation ACP Multiprocessor System for experiments will include $3500 RISC processors, each with performance over 15 VAX MIPS. To support such high performance, the new system allows parallel I/O, parallel interprocess communication, and parallel host processes. The ACP Multi-Array Processor has been developed for theoretical physics. Each $4000 node is a FORTRAN- or C-programmable pipelined 20 MFlops (peak), 10 MByte single-board computer. These are plugged into a 16-port crossbar switch crate which handles both inter- and intra-crate communication. The crates are connected in a hypercube. Site-oriented applications like lattice gauge theory are supported by system software called CANOPY, which makes the hardware virtually transparent to users. A 256-node, 5 GFlop system is under construction. 10 refs., 7 figs.
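
The hypercube interconnect mentioned at the end has a compact addressing rule: two crates are linked exactly when their binary addresses differ in one bit. A small sketch follows, illustrating the topology generically rather than the ACP's actual wiring:

```python
def hypercube_neighbors(node, dim):
    """Neighbors of `node` in a dim-dimensional hypercube: flip one
    address bit per dimension, so each node has exactly `dim` links."""
    return [node ^ (1 << d) for d in range(dim)]

# A 16-crate system is a 4-dimensional hypercube.
print(hypercube_neighbors(0b0000, 4))   # -> [1, 2, 4, 8]
```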

  10. High Performance Computing at TJNAF| U.S. DOE Office of Science...

    Office of Science (SC) Website

    Applications of Nuclear Science Archives High Performance Computing at TJNAF Print Text ... collaboration with other institutions, computer scientists and physicists are exploiting ...

  11. Arthur B. (Barney) Maccabe Computer Science Department Center for High Performance Computing

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

Linux never has been and never will be "Extreme". Arthur B. (Barney) Maccabe, Computer Science Department, Center for High Performance Computing, The University of New Mexico. Salishan, April 23, 2003. This talk was prepared on a Debian Linux box (http://www.debian.org) using OpenOffice (http://www.openoffice.org). Outline: My background: lightweight operating systems; Linux and world domination; Adapting to innovative technologies; ...

  12. DOE Science Showcase - High-Performance Computing | OSTI, US Dept of Energy

    Office of Scientific and Technical Information (OSTI)

High-Performance Computing: Supercomputers or massively parallel high-performance computers (HPCs) are machines that employ very large numbers of processors in parallel to address scientific and engineering challenges. HPCs carry out trillions or even quadrillions of calculations each second; current high-performance computers are powerful enough to simulate some of the most complex physical, biological, and chemical phenomena. High-performance

  13. John Shalf Gives Talk at San Francisco High Performance Computing Meetup

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

John Shalf Gives Talk at San Francisco High Performance Computing Meetup. September 17, 2014. In his role as NERSC's chief technology officer, John Shalf gave a talk on "Converging Interconnect Requirements for HPC and Warehouse Scale Computing" at the San Francisco High Performance Computing Meetup. The Sept 17 meeting was held at GeekdomSF in downtown San Francisco. The group, which describes

  14. ALCF summer students gain experience with high-performance computing...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    of computing that my textbooks couldn't keep up with," said Brown, who is majoring in computer science and computer game design. "Getting exposed to many-core machines and...

  15. A secure communications infrastructure for high-performance distributed computing

    SciTech Connect (OSTI)

    Foster, I.; Koenig, G.; Tuecke, S.

    1997-08-01

Applications that use high-speed networks to connect geographically distributed supercomputers, databases, and scientific instruments may operate over open networks and access valuable resources. Hence, they can require mechanisms for ensuring integrity and confidentiality of communications and for authenticating both users and resources. Security solutions developed for traditional client-server applications do not provide direct support for the program structures, programming tools, and performance requirements encountered in these applications. The authors address these requirements via a security-enhanced version of the Nexus communication library, which they use to provide secure versions of parallel libraries and languages, including the Message Passing Interface. These tools permit a fine degree of control over what, where, and when security mechanisms are applied. In particular, a single application can mix secure and nonsecure communication, allowing the programmer to make fine-grained security/performance tradeoffs. The authors present performance results that quantify the performance of their infrastructure.
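
The fine-grained tradeoff, a single application mixing secure and nonsecure channels, can be sketched with standard sockets: wrap only the channels that need protection in TLS. The sketch below uses Python's ssl module rather than the paper's Nexus library, and the host and ports are placeholders:

```python
import socket
import ssl

def open_channel(host, port, secure):
    """Open either a plain TCP channel or a TLS-wrapped one. Mixing
    the two within one application mirrors the per-channel
    security/performance tradeoff described above (host and ports
    are placeholders)."""
    sock = socket.create_connection((host, port))
    if secure:
        ctx = ssl.create_default_context()
        return ctx.wrap_socket(sock, server_hostname=host)
    return sock

# Illustrative only: encrypt control traffic, send bulk data in the clear.
# control = open_channel("hpc.example.org", 4433, secure=True)
# bulk    = open_channel("hpc.example.org", 5000, secure=False)
```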

  16. Department of Energy: MICS (Mathematical Information, and Computational Sciences Division). High performance computing and communications program

    SciTech Connect (OSTI)

    1996-06-01

This document is intended to serve two purposes. Its first purpose is that of a program status report of the considerable progress that the Department of Energy (DOE) has made since 1993, the time of the last such report (DOE/ER-0536, "The DOE Program in HPCC"), toward achieving the goals of the High Performance Computing and Communications (HPCC) Program. The second purpose is that of a summary report of the many research programs administered by the Mathematical, Information, and Computational Sciences (MICS) Division of the Office of Energy Research under the auspices of the HPCC Program and to provide, wherever relevant, easy access to pertinent information about MICS-Division activities via universal resource locators (URLs) on the World Wide Web (WWW). The information pointed to by the URL is updated frequently, and the interested reader is urged to access the WWW for the latest information.

  17. Department of Energy Mathematical, Information, and Computational Sciences Division: High Performance Computing and Communications Program

    SciTech Connect (OSTI)

    1996-11-01

    This document is intended to serve two purposes. Its first purpose is that of a program status report of the considerable progress that the Department of Energy (DOE) has made since 1993, the time of the last such report (DOE/ER-0536, The DOE Program in HPCC), toward achieving the goals of the High Performance Computing and Communications (HPCC) Program. The second purpose is that of a summary report of the many research programs administered by the Mathematical, Information, and Computational Sciences (MICS) Division of the Office of Energy Research under the auspices of the HPCC Program and to provide, wherever relevant, easy access to pertinent information about MICS-Division activities via universal resource locators (URLs) on the World Wide Web (WWW).

  18. High Performance Computing Facility Operational Assessment, FY 2010 Oak Ridge Leadership Computing Facility

    SciTech Connect (OSTI)

    Bland, Arthur S Buddy; Hack, James J; Baker, Ann E; Barker, Ashley D; Boudwin, Kathlyn J.; Kendall, Ricky A; Messer, Bronson; Rogers, James H; Shipman, Galen M; White, Julia C

    2010-08-01

    Oak Ridge National Laboratory's (ORNL's) Cray XT5 supercomputer, Jaguar, kicked off the era of petascale scientific computing in 2008 with applications that sustained more than a thousand trillion floating point calculations per second - or 1 petaflop. Jaguar continues to grow even more powerful as it helps researchers broaden the boundaries of knowledge in virtually every domain of computational science, including weather and climate, nuclear energy, geosciences, combustion, bioenergy, fusion, and materials science. Their insights promise to broaden our knowledge in areas that are vitally important to the Department of Energy (DOE) and the nation as a whole, particularly energy assurance and climate change. The science of the 21st century, however, will demand further revolutions in computing, supercomputers capable of a million trillion calculations a second - 1 exaflop - and beyond. These systems will allow investigators to continue attacking global challenges through modeling and simulation and to unravel longstanding scientific questions. Creating such systems will also require new approaches to daunting challenges. High-performance systems of the future will need to be codesigned for scientific and engineering applications with best-in-class communications networks and data-management infrastructures and teams of skilled researchers able to take full advantage of these new resources. The Oak Ridge Leadership Computing Facility (OLCF) provides the nation's most powerful open resource for capability computing, with a sustainable path that will maintain and extend national leadership for DOE's Office of Science (SC). The OLCF has engaged a world-class team to support petascale science and to take a dramatic step forward, fielding new capabilities for high-end science. This report highlights the successful delivery and operation of a petascale system and shows how the OLCF fosters application development teams, developing cutting-edge tools and resources for next

  19. Towards an Abstraction-Friendly Programming Model for High Productivity and High Performance Computing

    SciTech Connect (OSTI)

    Liao, C; Quinlan, D; Panas, T

    2009-10-06

General purpose languages, such as C++, permit the construction of various high level abstractions to hide redundant, low level details and accelerate programming productivity. Example abstractions include functions, data structures, classes, templates and so on. However, the use of abstractions significantly impedes static code analyses and optimizations, including parallelization, applied to the abstractions' complex implementations. As a result, there is a common perception that performance is inversely proportional to the level of abstraction. On the other hand, programming large scale, possibly heterogeneous high-performance computing systems is notoriously difficult and programmers are less likely to abandon the help from high level abstractions when solving real-world, complex problems. Therefore, the need for programming models balancing both programming productivity and execution performance has reached a new level of criticality. We are exploring a novel abstraction-friendly programming model in order to support high productivity and high performance computing. We believe that standard or domain-specific semantics associated with high level abstractions can be exploited to aid compiler analyses and optimizations, thus helping achieve high performance without losing high productivity. We encode representative abstractions and their useful semantics into an abstraction specification file. In the meantime, an accessible, source-to-source compiler infrastructure (the ROSE compiler) is used to facilitate recognizing high level abstractions and utilizing their semantics for more optimization opportunities. Our initial work has shown that recognizing abstractions and knowing their semantics within a compiler can dramatically extend the applicability of existing optimizations, including automatic parallelization. Moreover, a new set of optimizations have become possible within an abstraction-friendly and semantics-aware programming model. In the future, we will

  20. DOE Science Showcase - High-Performance Computing | OSTI, US...

    Office of Scientific and Technical Information (OSTI)

    DOE Computing, Energy.gov DOE Office of Science Advanced Scientific Computing Research ... SciTech Connect National Library of EnergyBeta Science.gov Ciencia.Science.gov ...

  1. Webinar: High Performance Computing For Manufacturing Spring Solicitation, March 24, 2016

    Broader source: Energy.gov [DOE]

    The Energy Department's Lawrence Livermore National Laboratory will be hosting an informational webinar on the High Performance Computing for Manufacturing (HPC4Mfg) spring solicitation on March...

  2. Webinar: High Performance Computing For Manufacturing Spring Solicitation, April 5, 2016

    Broader source: Energy.gov [DOE]

    The Energy Department's Lawrence Livermore National Laboratory will be hosting an informational webinar on the High Performance Computing for Manufacturing (HPC4Mfg) spring solicitation on April...

  3. Energy Efficiency Opportunities in Federal High Performance Computing...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Case Study: Opportunities to Improve Energy Efficiency in Three Federal Data Centers Case Study: Innovative Energy Efficiency Approaches in NOAA's Environmental Security Computing ...

  4. Introduction to High Performance Computers Richard Gerber NERSC User Services

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Slides from an introductory talk. The talk opens by asking what the main parts of a computer are, noting that the Boy Scouts of America offer a Computers Merit Badge whose requirements include explaining the five major parts of a computer to a counselor. It then compares the "5 major parts" according to eHow.com, Answers.com, Fluther.com, Yahoo!, and Wikipedia, whose answers variously include the CPU, RAM, power supply, hard drive or other storage, video card, motherboard, monitor, printer, keyboard, and mouse.

  5. High-performance, distributed computing software libraries and services

    Energy Science and Technology Software Center (OSTI)

    2002-01-24

    The Globus toolkit provides basic Grid software infrastructure (i.e. middleware), to facilitate the development of applications which securely integrate geographically separated resources, including computers, storage systems, instruments, immersive environments, etc.

  6. Simulation and High-Performance Computing | Department of Energy

    Office of Environmental Management (EM)

    What are the key facts? China's Tianhe-1A machine is now the world's most powerful computer, 40% faster than the fastest ...

  7. High Performance Computing Modeling Advances Accelerator Science for High-Energy Physics

    SciTech Connect (OSTI)

    Amundson, James; Macridin, Alexandru; Spentzouris, Panagiotis

    2014-07-28

    The development and optimization of particle accelerators are essential for advancing our understanding of the properties of matter, energy, space, and time. Particle accelerators are complex devices whose behavior involves many physical effects on multiple scales. Therefore, advanced computational tools utilizing high-performance computing are essential for accurately modeling them. In the past decade, the US Department of Energy's SciDAC program has produced accelerator-modeling tools that have been employed to tackle some of the most difficult accelerator science problems. The authors discuss the Synergia framework and its applications to high-intensity particle accelerator physics. Synergia is an accelerator simulation package capable of handling the entire spectrum of beam dynamics simulations. Our authors present Synergia's design principles and its performance on HPC platforms.

  8. High-Performance Computing Modeling Advances Accelerator Science for High-Energy Physics

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Amundson, James; Macridin, Alexandru; Spentzouris, Panagiotis

    2014-11-01

    The development and optimization of particle accelerators are essential for advancing our understanding of the properties of matter, energy, space, and time. Particle accelerators are complex devices whose behavior involves many physical effects on multiple scales. Therefore, advanced computational tools utilizing high-performance computing (HPC) are essential for accurately modeling them. In the past decade, the DOE SciDAC program has produced such accelerator-modeling tools, which have been employed to tackle some of the most difficult accelerator science problems. In this article we discuss the Synergia beam-dynamics framework and its applications to high-intensity particle accelerator physics. Synergia is an accelerator simulation package capable of handling the entire spectrum of beam dynamics simulations. We present the design principles, key physical and numerical models in Synergia, and its performance on HPC platforms. Finally, we present the results of Synergia applications for the Fermilab proton source upgrade, known as the Proton Improvement Plan (PIP).

  9. High Performance Computing Modeling Advances Accelerator Science for High-Energy Physics

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Amundson, James; Macridin, Alexandru; Spentzouris, Panagiotis

    2014-07-28

    The development and optimization of particle accelerators are essential for advancing our understanding of the properties of matter, energy, space, and time. Particle accelerators are complex devices whose behavior involves many physical effects on multiple scales. Therefore, advanced computational tools utilizing high-performance computing are essential for accurately modeling them. In the past decade, the US Department of Energy's SciDAC program has produced accelerator-modeling tools that have been employed to tackle some of the most difficult accelerator science problems. The authors discuss the Synergia framework and its applications to high-intensity particle accelerator physics. Synergia is an accelerator simulation package capable of handling the entire spectrum of beam dynamics simulations. The authors present Synergia's design principles and its performance on HPC platforms.

  10. High-Performance Computing Modeling Advances Accelerator Science for High-Energy Physics

    SciTech Connect (OSTI)

    Amundson, James; Macridin, Alexandru; Spentzouris, Panagiotis

    2014-11-01

    The development and optimization of particle accelerators are essential for advancing our understanding of the properties of matter, energy, space and time. Particle accelerators are complex devices whose behavior involves many physical effects on multiple scales. Therefore, advanced computational tools utilizing high-performance computing (HPC) are essential for accurately modeling them. In the past decade, the DOE SciDAC program has produced such accelerator-modeling tools, which have been employed to tackle some of the most difficult accelerator science problems. In this article we discuss the Synergia beam-dynamics framework and its applications to high-intensity particle accelerator physics. Synergia is an accelerator simulation package capable of handling the entire spectrum of beam dynamics simulations. We present the design principles, key physical and numerical models in Synergia and its performance on HPC platforms. Finally, we present the results of Synergia applications for the Fermilab proton source upgrade, known as the Proton Improvement Plan (PIP).

  11. High Performance Parallel Computing of Flows in Complex Geometries: II.

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Applications | Argonne Leadership Computing Facility. Authors: Gourdain, N., Gicquel, L., Staffelbach, G., Vermorel, O., Duchaine, F., Boussuge, J-F., Poinsot, T. Present regulations in terms of pollutant emissions, noise, and economic constraints require new approaches and designs in the fields of energy supply and transportation. It is now well established that the next breakthrough will come from a better understanding of unsteady flow effects and by considering the

  12. Scalable File Systems for High Performance Computing Final Report

    SciTech Connect (OSTI)

    Brandt, S A

    2007-10-03

    Simulations of mode I interlaminar fracture toughness tests of a carbon-reinforced composite material (BMS 8-212) were conducted with LS-DYNA. The fracture toughness tests were performed by U.C. Berkeley. The simulations were performed to investigate the validity and practicality of employing decohesive elements to represent the interlaminar bond failures that are prevalent in carbon-fiber composite structure penetration events. The simulations employed a decohesive element formulation that was verified on a simple two-element model before being used in the full model simulations. Care was required during the simulations to ensure that the explicit time integration of LS-DYNA duplicated the near-steady-state testing conditions. In general, this study validated the use of decohesive elements to represent the interlaminar bond failures seen in carbon-fiber composite structures, but the practicality of employing the elements to represent bond failures during penetration events was not established.

  13. High performance computing and communications grand challenges program

    SciTech Connect (OSTI)

    Solomon, J.E.; Barr, A.; Chandy, K.M.; Goddard, W.A., III; Kesselman, C.

    1994-10-01

    The so-called protein folding problem has numerous aspects; however, it is principally concerned with the de novo prediction of three-dimensional (3D) structure from the protein's primary amino acid sequence, and with the kinetics of the protein folding process. Our current project focuses on the 3D structure prediction problem, which has proved to be an elusive goal of molecular biology and biochemistry. The number of local energy minima is exponential in the number of amino acids in the protein. All current methods of 3D structure prediction attempt to alleviate this problem by imposing various constraints that effectively limit the volume of conformational space which must be searched. Our Grand Challenge project consists of two elements: (1) a hierarchical methodology for 3D protein structure prediction; and (2) development of a parallel computing environment, the Protein Folding Workbench, for carrying out a variety of protein structure prediction/modeling computations. During the first three years of this project, we are focusing on the use of two proteins selected from the Brookhaven Protein Data Bank (PDB) of known structure to provide validation of our prediction algorithms and their software implementation, both serial and parallel. Both proteins, protein L from Peptostreptococcus magnus and streptococcal protein G, are known to bind to IgG, and both have an alpha + beta sandwich conformation. Although both proteins bind to IgG, they do so at different sites on the immunoglobulin, and it is of considerable biological interest to understand structurally why this is so. 12 refs., 1 fig.

  14. High Performance Computing Facility Operational Assessment, FY 2011 Oak Ridge Leadership Computing Facility

    SciTech Connect (OSTI)

    Baker, Ann E; Bland, Arthur S Buddy; Hack, James J; Barker, Ashley D; Boudwin, Kathlyn J.; Kendall, Ricky A; Messer, Bronson; Rogers, James H; Shipman, Galen M; Wells, Jack C; White, Julia C

    2011-08-01

    Oak Ridge National Laboratory's Leadership Computing Facility (OLCF) continues to deliver the most powerful resources in the U.S. for open science. At 2.33 petaflops peak performance, the Cray XT Jaguar delivered more than 1.5 billion core hours in calendar year (CY) 2010 to researchers around the world for computational simulations relevant to national and energy security; advancing the frontiers of knowledge in physical sciences and areas of biological, medical, environmental, and computer sciences; and providing world-class research facilities for the nation's science enterprise. Scientific achievements by OLCF users range from collaborating with university experimentalists to produce a working supercapacitor that uses atom-thick sheets of carbon materials, to finely determining the resolution requirements for simulations of coal gasifiers and their components, thus laying the foundation for development of commercial-scale gasifiers. OLCF users are pushing the boundaries with software applications sustaining more than one petaflop of performance in the quest to illuminate the fundamental nature of electronic devices. Other teams of researchers are working to resolve predictive capabilities of climate models, to refine and validate genome sequencing, and to explore the most fundamental materials in nature - quarks and gluons - and their unique properties. Details of these scientific endeavors - not possible without access to leadership-class computing resources - are given in Section 4 of this report and in the INCITE in Review. Effective operations of the OLCF play a key role in the scientific missions and accomplishments of its users. This Operational Assessment Report (OAR) delineates the policies, procedures, and innovations implemented by the OLCF to continue delivering a petaflop-scale resource for cutting-edge research. The 2010 operational assessment of the OLCF yielded recommendations that have been addressed (Reference Section 1) and where

  15. Reliable High Performance Peta- and Exa-Scale Computing

    SciTech Connect (OSTI)

    Bronevetsky, G

    2012-04-02

    As supercomputers become larger and more powerful, they are growing increasingly complex. This is reflected both in the exponentially increasing numbers of components in HPC systems (LLNL is currently installing the 1.6-million-core Sequoia system) and in the wide variety of software and hardware components that a typical system includes. At this scale it becomes infeasible to make each component sufficiently reliable to prevent regular faults somewhere in the system, or to account for all possible cross-component interactions. The resulting faults and instability cause HPC applications to crash, perform sub-optimally, or even produce erroneous results. As supercomputers continue to approach exascale performance and full system reliability becomes prohibitively expensive, we will require novel techniques to bridge the gap between the lower reliability provided by hardware systems and users' unchanging need for consistent performance and reliable results. Previous research on HPC system reliability has developed various techniques for tolerating and detecting faults. However, these techniques have seen very limited real applicability because of our poor understanding of how real systems are affected by complex faults such as soft-fault-induced bit flips or performance degradations. Prior work has generally focused on analyzing the behavior of entire software/hardware systems, both during normal operation and in the face of faults; because such behaviors are extremely complex, these studies have only produced coarse behavioral models of limited sets of software/hardware stacks. Since this provides little insight into the many different system stacks and applications used in practice, this work has had little real-world impact. My project addresses this problem by developing a modular methodology to analyze the behavior of applications and systems during both normal and faulty

  16. Scientific Grand Challenges: Forefront Questions in Nuclear Science and the Role of High Performance Computing

    SciTech Connect (OSTI)

    Khaleel, Mohammad A.

    2009-10-01

    This report is an account of the deliberations and conclusions of the workshop on "Forefront Questions in Nuclear Science and the Role of High Performance Computing" held January 26-28, 2009, co-sponsored by the U.S. Department of Energy (DOE) Office of Nuclear Physics (ONP) and the DOE Office of Advanced Scientific Computing Research (ASCR). Representatives from the national and international nuclear physics communities, as well as from the high performance computing community, participated. The purpose of this workshop was to 1) identify forefront scientific challenges in nuclear physics and then determine which, if any, of these could be aided by high performance computing at the extreme scale; 2) establish how and why new high performance computing capabilities could address issues at the frontiers of nuclear science; 3) provide nuclear physicists the opportunity to influence the development of high performance computing; and 4) provide the nuclear physics community with plans for development of future high performance computing capability by DOE ASCR.

  17. A High Performance Computing Platform for Performing High-Volume Studies With Windows-based Power Grid Tools

    SciTech Connect (OSTI)

    Chen, Yousu; Huang, Zhenyu

    2014-08-31

    Serial Windows-based programs are widely used in power utilities. For applications that require high-volume simulations, the single-CPU runtime can be on the order of days or weeks. The lengthy runtime, along with the availability of low-cost hardware, is leading utilities to seriously consider High Performance Computing (HPC) techniques. However, the vast majority of HPC computers are still Linux-based, and many HPC applications have been custom developed external to the core simulation engine without consideration for ease of use. This has created a technical gap in applying HPC-based tools to today's power grid studies. To fill this gap and accelerate the acceptance and adoption of HPC for power grid applications, this paper presents a prototype of a generic HPC platform for running Windows-based power grid programs in a Linux-based HPC environment. The preliminary results show that the runtime can be reduced from weeks to hours, improving work efficiency.

  18. Workshop on programming languages for high performance computing (HPCWPL): final report.

    SciTech Connect (OSTI)

    Murphy, Richard C.

    2007-05-01

    This report summarizes the deliberations and conclusions of the Workshop on Programming Languages for High Performance Computing (HPCWPL) held at the Sandia CSRI facility in Albuquerque, NM on December 12-13, 2006.

  19. Energy Department Announces Ten New Projects to Apply High-Performance Computing to Manufacturing Challenges

    Broader source: Energy.gov [DOE]

    The Energy Department today announced $3 million for ten new projects that will enable private-sector companies to use high-performance computing resources at the department's national laboratories to tackle major manufacturing challenges.

  20. Energy Department Announces $3 Million for Industry Access to High Performance Computing

    Broader source: Energy.gov [DOE]

    The Energy Department today announced up to $3 million in available funding for manufacturers to use high-performance computing resources at the Department's national laboratories to tackle major manufacturing challenges.

  1. High Performance Computing for Sequence Analysis (2010 JGI/ANL HPC Workshop)

    ScienceCinema (OSTI)

    Oehmen, Chris [PNNL]

    2011-06-08

    Chris Oehmen of the Pacific Northwest National Laboratory gives a presentation on "High Performance Computing for Sequence Analysis" at the JGI/Argonne HPC Workshop on January 25, 2010.

  2. Vehicle Technologies Office Merit Review 2013: Accelerating Predictive Simulation of IC Engines with High Performance Computing

    Office of Energy Efficiency and Renewable Energy (EERE)

    Presentation given by Oak Ridge National Laboratory at the 2013 DOE Hydrogen and Fuel Cells Program and Vehicle Technologies Program Annual Merit Review and Peer Evaluation Meeting about simulating internal combustion engines using high performance computing.

  3. High Performance Computing for Sequence Analysis (2010 JGI/ANL HPC Workshop)

    SciTech Connect (OSTI)

    Oehmen, Chris [PNNL]

    2010-01-25

    Chris Oehmen of the Pacific Northwest National Laboratory gives a presentation on "High Performance Computing for Sequence Analysis" at the JGI/Argonne HPC Workshop on January 25, 2010.

  4. An evaluation of Java's I/O capabilities for high-performance computing.

    SciTech Connect (OSTI)

    Dickens, P. M.; Thakur, R.

    2000-11-10

    Java is quickly becoming the preferred language for writing distributed applications because of its inherent support for programming on distributed platforms. In particular, Java provides compile-time and run-time security, automatic garbage collection, inherent support for multithreading, support for persistent objects and object migration, and portability. Given these significant advantages, there is a growing interest in using Java for high-performance computing applications. To be successful in the high-performance computing domain, however, Java must have the capability to efficiently handle the significant I/O requirements commonly found in high-performance computing applications. While there has been significant research in high-performance I/O using languages such as C, C++, and Fortran, there has been relatively little research into the I/O capabilities of Java. In this paper, we evaluate the I/O capabilities of Java for high-performance computing. We examine several approaches that attempt to provide high-performance I/O (many of which are not obvious at first glance) and investigate their performance in both parallel and multithreaded environments. We also provide suggestions for expanding the I/O capabilities of Java to better support the needs of high-performance computing applications.

  5. High-Performance Computing at Los Alamos announces milestone for key/value

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    middleware. Billion inserts-per-second data milestone reached for supercomputing tool. May 26, 2014. Contact: Nancy Ambrosiano, Communications Office, (505) 667-0471. "This milestone was achieved by a

  6. NREL Selects Partners for New High Performance Computer Data Center - News

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Releases. NREL to work with HP and Intel to create one of the world's most energy-efficient data centers. September 5, 2012. The U.S. Department of Energy's National Renewable Energy Laboratory (NREL) has selected HP and Intel to provide a new energy-efficient high performance computer (HPC) system dedicated to energy systems integration, renewable energy research, and energy efficiency technologies. The new center will

  7. Webinar "Applying High Performance Computing to Engine Design Using

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Supercomputers" | Argonne National Laboratory Webinar "Applying High Performance Computing to Engine Design Using Supercomputers" Share Description Video from the February 25, 2016 Convergent Science/Argonne National Laboratory webinar "Applying High Performance Computing to Engine Design using Supercomputers," featuring Janardhan Kodavasal of Argonne National Laboratory Speakers Janardhan Kodavasal, Argonne National Laboratory Duration 52:26 Topic Energy Energy

  8. DOE High Performance Computing for Manufacturing (HPC4Mfg) Program Seeks To

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Fund New Proposals To Jumpstart Energy Technologies. March 18, 2016 - 3:31pm. News release from Lawrence Livermore National Laboratory, March 17, 2016. LIVERMORE, Calif. - A new U.S. Department of Energy (DOE) program

  9. Process for selecting NEAMS applications for access to Idaho National Laboratory high performance computing resources

    SciTech Connect (OSTI)

    Michael Pernice

    2010-09-01

    INL has agreed to provide participants in the Nuclear Energy Advanced Modeling and Simulation (NEAMS) program with access to its high performance computing (HPC) resources under sponsorship of the Enabling Computational Technologies (ECT) program element. This report documents the process used to select applications and the software stack in place at INL.

  10. An Approach to Integrate a Space-Time GIS Data Model with High Performance Computers

    SciTech Connect (OSTI)

    Wang, Dali; Zhao, Ziliang; Shaw, Shih-Lung

    2011-01-01

    In this paper, we describe an approach to integrating a Space-Time GIS data model on a high performance computing platform. The Space-Time GIS data model was developed in a desktop computing environment. We use the Space-Time GIS data model to generate a GIS module, which organizes a series of remote sensing data. We are in the process of porting the GIS module into an HPC environment, in which the module handles large datasets directly via a parallel file system. Although this is an ongoing project, the authors hope this effort can inspire further discussions on the integration of GIS on high performance computing platforms.

  11. High Performance Computing Facility Operational Assessment, CY 2011 Oak Ridge Leadership Computing Facility

    SciTech Connect (OSTI)

    Baker, Ann E; Barker, Ashley D; Bland, Arthur S Buddy; Boudwin, Kathlyn J.; Hack, James J; Kendall, Ricky A; Messer, Bronson; Rogers, James H; Shipman, Galen M; Wells, Jack C; White, Julia C; Hudson, Douglas L

    2012-02-01

    Oak Ridge National Laboratory's Leadership Computing Facility (OLCF) continues to deliver the most powerful resources in the U.S. for open science. At 2.33 petaflops peak performance, the Cray XT Jaguar delivered more than 1.4 billion core hours in calendar year (CY) 2011 to researchers around the world for computational simulations relevant to national and energy security; advancing the frontiers of knowledge in physical sciences and areas of biological, medical, environmental, and computer sciences; and providing world-class research facilities for the nation's science enterprise. Users reported more than 670 publications this year arising from their use of OLCF resources. Of these, we report the 300 in this review that are consistent with the guidance provided. Scientific achievements by OLCF users span all scales, from atomic to molecular to large-scale structures. At the atomic scale, researchers discovered that the anomalously long half-life of carbon-14 can be explained by calculating, for the first time, the very complex three-body interactions between all the neutrons and protons in the nucleus. At the molecular scale, researchers combined experimental results from LBL's light source and simulations on Jaguar to discover how DNA replication continues past a damaged site so a mutation can be repaired later. Other researchers combined experimental results from ORNL's Spallation Neutron Source and simulations on Jaguar to reveal the molecular structure of the ligno-cellulosic material used in bioethanol production. This year, Jaguar was used to run billion-cell CFD calculations to develop shock-wave-compression turbomachinery as a means to meet DOE goals for reducing carbon sequestration costs. General Electric used Jaguar to calculate the unsteady flow through turbomachinery to learn what efficiencies the traditional steady-flow assumption is hiding from designers. Even a 1% improvement in turbine design can save the nation billions of gallons of

  12. Chapter 9: Enabling Capabilities for Science and Energy | High-Performance Computing Capabilities and Allocations Supplemental Information

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Quadrennial Technology Review 2015, Chapter 9: Enabling Capabilities for Science and Energy - High Performance Computing Capabilities and Resource Allocations. The Department of Energy (DOE) laboratories integrate high performance computing (HPC) capabilities into their energy, science, and national security missions.

  13. High performance computing in chemistry and massively parallel computers: A simple transition?

    SciTech Connect (OSTI)

    Kendall, R.A.

    1993-03-01

    A review of the various problems facing any software developer targeting massively parallel processing (MPP) systems is presented. Issues specific to computational chemistry application software will also be outlined. Computational chemistry software ported to and designed for the Intel Touchstone Delta Supercomputer will be discussed. Recommendations for future directions will also be made.

  14. Failure detection in high-performance clusters and computers using chaotic map computations

    SciTech Connect (OSTI)

    Rao, Nageswara S.

    2015-09-01

    A programmable media includes a processing unit capable of independent operation in a machine that is capable of executing 10^18 floating point operations per second. The processing unit is in communication with a memory element and an interconnect that couples computing nodes. The programmable media includes a logical unit configured to execute arithmetic functions, comparative functions, and/or logical functions. The processing unit is configured to detect computing component failures, memory element failures and/or interconnect failures by executing programming threads that generate one or more chaotic map trajectories. The central processing unit or graphical processing unit is configured to detect a computing component failure, memory element failure and/or an interconnect failure through an automated comparison of signal trajectories generated by the chaotic maps.
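    The detection principle in the patent abstract above can be illustrated with a logistic map: components iterating the same chaotic map from a shared seed produce identical trajectories, and a fault-induced perturbation is amplified exponentially until it crosses a detection threshold. A minimal single-process sketch in Python, assuming an illustrative map, node names, and thresholds (none of these specifics are taken from the patent):

        # Healthy nodes iterating the same chaotic map from a shared seed produce
        # identical trajectories; a tiny fault-induced perturbation is amplified
        # exponentially and quickly crosses a detection threshold.
        def logistic_trajectory(x0, steps, fault_at=None):
            xs, x = [], x0
            for i in range(steps):
                x = 3.99 * x * (1.0 - x)      # logistic map in its chaotic regime
                if i == fault_at:
                    x += 1e-12                # model a tiny bit-flip-like error
                xs.append(x)
            return xs

        seed, steps = 0.123456789, 60
        reference = logistic_trajectory(seed, steps)
        nodes = {"node0": logistic_trajectory(seed, steps),
                 "node1": logistic_trajectory(seed, steps, fault_at=20)}
        for name, traj in nodes.items():
            bad = next((i for i, (a, b) in enumerate(zip(reference, traj))
                        if abs(a - b) > 1e-6), None)
            print(name, "healthy" if bad is None else f"fault detected near step {bad}")

    Because the map's sensitivity to initial conditions roughly doubles the error each step, even a 1e-12 perturbation becomes detectable within a few dozen iterations.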

  15. Investigating methods of supporting dynamically linked executables on high performance computing platforms.

    SciTech Connect (OSTI)

    Kelly, Suzanne Marie; Laros, James H., III; Pedretti, Kevin Thomas Tauke; Levenhagen, Michael J.

    2009-09-01

    Shared libraries have become ubiquitous and are used to achieve great resource efficiencies on many platforms. The same properties that enable efficiencies on time-shared computers and convenience on small clusters prove to be great obstacles to scalability on large clusters and High Performance Computing platforms. In addition, lightweight operating systems such as Catamount have historically not supported the use of shared libraries, specifically because they hinder scalability. In this report we outline the methods we investigated for supporting shared libraries on High Performance Computing platforms using lightweight kernels. The considerations necessary to evaluate utility in this area are many and sometimes conflicting. While our initial path forward has been determined based on this evaluation, we consider this effort ongoing and remain prepared to re-evaluate any technology that might provide a scalable solution. This report is an evaluation of a range of possible methods of supporting dynamically linked executables on capability-class High Performance Computing platforms. Efforts are ongoing, and extensive testing at scale is necessary to evaluate performance. While performance is a critical driving factor, supporting whatever method is used in a production environment is an equally important and challenging task.

  16. HPC4Mfg: Boosting American Competitiveness in Clean Energy Manufacturing through High Performance Computing

    Broader source: Energy.gov [DOE]

    Higher efficiency jet engines to save fuel; stronger fiberglass made with less energy for wind turbines and lightweight vehicles; next generation semiconductor devices for more efficient data centers: these are just a few of the manufacturing challenges that the Energy Department's ten new High Performance Computing for Manufacturing (HPC4Mfg) projects will tackle over the next year.

  17. Energy Department's High Performance Computing for Manufacturing Program Seeks to Fund New Industry Proposals

    Broader source: Energy.gov [DOE]

    The U.S. Department of Energy (DOE) is seeking concept proposals from qualified U.S. manufacturers to participate in short-term, collaborative projects. Selectees will be given access to High Performance Computing facilities and will work with experienced DOE National Laboratories staff in addressing challenges in U.S. manufacturing.

  18. High performance computing and communications: Advancing the frontiers of information technology

    SciTech Connect (OSTI)

    1997-12-31

    This report, which supplements the President's Fiscal Year 1997 Budget, describes the interagency High Performance Computing and Communications (HPCC) Program. The HPCC Program will celebrate its fifth anniversary in October 1996 with an impressive array of accomplishments to its credit. Over its five-year history, the HPCC Program has focused on developing high performance computing and communications technologies that can be applied to computation-intensive applications. Major highlights for FY 1996: (1) High performance computing systems enable practical solutions to complex problems with accuracies not possible five years ago; (2) HPCC-funded research in very large scale networking techniques has been instrumental in the evolution of the Internet, which continues exponential growth in size, speed, and availability of information; (3) The combination of hardware capability measured in gigaflop/s, networking technology measured in gigabit/s, and new computational science techniques for modeling phenomena has demonstrated that very large scale accurate scientific calculations can be executed across heterogeneous parallel processing systems located thousands of miles apart; (4) Federal investments in HPCC software R and D support researchers who pioneered the development of parallel languages and compilers, high performance mathematical, engineering, and scientific libraries, and software tools--technologies that allow scientists to use powerful parallel systems to focus on Federal agency mission applications; and (5) HPCC support for virtual environments has enabled the development of immersive technologies, where researchers can explore and manipulate multi-dimensional scientific and engineering problems. Educational programs fostered by the HPCC Program have brought into classrooms new science and engineering curricula designed to teach computational science. This document contains a small sample of the significant HPCC Program accomplishments in FY 1996.

  19. An Overview of High Performance Computing and Challenges for the Future

    ScienceCinema (OSTI)

    Google Tech Talks

    2009-09-01

    In this talk we examine how high performance computing has changed over the last 10 years and look toward the future in terms of trends. These changes have had, and will continue to have, a major impact on our software. A new generation of software libraries and algorithms is needed for the effective and reliable use of (wide area) dynamic, distributed, and parallel environments. Some of the software and algorithm challenges have already been encountered, such as management of communication and memory hierarchies through a combination of compile-time and run-time techniques, but the increased scale of computation, depth of memory hierarchies, range of latencies, and increased run-time environment variability will make these problems much harder. We will focus on the redesign of software to fit multicore architectures. Speaker: Jack Dongarra, University of Tennessee, Oak Ridge National Laboratory, University of Manchester. Jack Dongarra received a Bachelor of Science in Mathematics from Chicago State University in 1972 and a Master of Science in Computer Science from the Illinois Institute of Technology in 1973. He received his Ph.D. in Applied Mathematics from the University of New Mexico in 1980. He worked at the Argonne National Laboratory until 1989, becoming a senior scientist. He now holds an appointment as University Distinguished Professor of Computer Science in the Electrical Engineering and Computer Science Department at the University of Tennessee, is a Distinguished Research Staff member in the Computer Science and Mathematics Division at Oak Ridge National Laboratory (ORNL), a Turing Fellow in the Computer Science and Mathematics Schools at the University of Manchester, and an Adjunct Professor in the Computer Science Department at Rice University. He specializes in numerical algorithms in linear algebra, parallel computing, the use of advanced-computer architectures, programming methodology, and tools for parallel computers. His research

  20. Cloud object store for archive storage of high performance computing data using decoupling middleware

    DOE Patents [OSTI]

    Bent, John M.; Faibish, Sorin; Grider, Gary

    2015-06-30

    Cloud object storage is enabled for archived data, such as checkpoints and results, of high performance computing applications using a middleware process. A plurality of archived files, such as checkpoint files and results, generated by a plurality of processes in a parallel computing system are stored by obtaining the plurality of archived files from the parallel computing system; converting the plurality of archived files to objects using a log structured file system middleware process; and providing the objects for storage in a cloud object storage system. The plurality of processes may run, for example, on a plurality of compute nodes. The log structured file system middleware process may be embodied, for example, as a Parallel Log-Structured File System (PLFS). The log structured file system middleware process optionally executes on a burst buffer node.

  1. Cloud object store for checkpoints of high performance computing applications using decoupling middleware

    DOE Patents [OSTI]

    Bent, John M.; Faibish, Sorin; Grider, Gary

    2016-04-19

    Cloud object storage is enabled for checkpoints of high performance computing applications using a middleware process. A plurality of files, such as checkpoint files, generated by a plurality of processes in a parallel computing system are stored by obtaining said plurality of files from said parallel computing system; converting said plurality of files to objects using a log structured file system middleware process; and providing said objects for storage in a cloud object storage system. The plurality of processes may run, for example, on a plurality of compute nodes. The log structured file system middleware process may be embodied, for example, as a Parallel Log-Structured File System (PLFS). The log structured file system middleware process optionally executes on a burst buffer node.
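    The flow claimed in the two patents above (obtain checkpoint files from the parallel file system, convert them to objects, hand the objects to a cloud store) can be mocked in a few lines. A minimal sketch in Python, assuming a toy in-memory CloudStore class; this is not the PLFS API or any vendor interface, and real middleware would run on a burst buffer node against an S3-style store.

        import hashlib
        from pathlib import Path

        class CloudStore:
            """Toy stand-in for a cloud object store: a dict of key -> bytes."""
            def __init__(self):
                self.objects = {}
            def put(self, key, data):
                self.objects[key] = data

        def archive_checkpoints(checkpoint_dir, store, job_id):
            # One object per checkpoint file; the key encodes job and file identity
            # plus a content digest so a restart can locate and verify its files.
            for path in sorted(Path(checkpoint_dir).glob("*.chkpt")):
                data = path.read_bytes()
                key = f"{job_id}/{path.name}/{hashlib.sha1(data).hexdigest()[:8]}"
                store.put(key, data)

        # Usage (hypothetical paths): archive_checkpoints("/scratch/job42", CloudStore(), "job42")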

  2. Integrated State Estimation and Contingency Analysis Software Implementation using High Performance Computing Techniques

    SciTech Connect (OSTI)

    Chen, Yousu; Glaesemann, Kurt R.; Rice, Mark J.; Huang, Zhenyu

    2015-12-31

    Power system simulation tools have traditionally been developed in sequential mode, with codes optimized for single-core computing only. However, the increasing complexity of power grid models requires more intensive computation, and traditional simulation tools will soon be unable to meet grid operation requirements. Power system simulation tools therefore need to evolve to provide faster and better results for grid operations. This paper presents an integrated state estimation and contingency analysis software implementation using high performance computing techniques. The software is able to solve large state estimation problems within one second and achieves a near-linear speedup of 9,800 with 10,000 cores for the contingency analysis application. A performance evaluation is presented to show its effectiveness.
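    One reason the near-linear speedup reported above is plausible: contingency analysis is an independent solve per outage case, so it parallelizes as a simple map over cases. A minimal sketch in Python, assuming a placeholder solve_case in place of a real power-flow solver:

        from concurrent.futures import ProcessPoolExecutor

        def solve_case(outage_id):
            # Placeholder "power flow": flag every 7th contingency as a violation.
            return outage_id, (outage_id % 7 == 0)

        if __name__ == "__main__":
            contingencies = range(1, 101)        # independent N-1 outage cases
            with ProcessPoolExecutor() as pool:  # each case solved in parallel
                results = list(pool.map(solve_case, contingencies))
            flagged = [cid for cid, bad in results if bad]
            print(f"{len(flagged)} of {len(results)} cases flagged:", flagged)

    Because the cases share no state, scaling is limited mainly by how evenly the cases are distributed and by the cost of collecting results.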

  3. OSTIblog Articles in the High-performance computing Topic | OSTI, US Dept

    Office of Scientific and Technical Information (OSTI)

    of Energy. ACME - Perfecting Earth System Models, by Kathy Chambers, 29 Oct 2014. Earth system modeling as we know it, and how it benefits climate change research, is about to transform with the newly launched Accelerated Climate Modeling for Energy (ACME) project, sponsored by the Earth System Modeling program within the Department of Energy's (DOE) Office of Biological and Environmental Research. ACME is an

  4. High-Performance Computing for Alloy Development | netl.doe.gov

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Tomorrow's fossil-fuel based power plants will achieve higher efficiencies by operating at higher pressures and temperatures and under harsher and more corrosive conditions. Unfortunately, conventional metals simply cannot withstand these extreme environments, so advanced alloys must be designed and fabricated to meet the needs of these advanced systems. The properties of metal alloys, which are mixtures of metallic elements,

  5. Opening Remarks from the Joint Genome Institute and Argonne Lab High Performance Computing Workshop (2010 JGI/ANL HPC Workshop)

    ScienceCinema (OSTI)

    Rubin, Eddy

    2011-06-03

    DOE JGI Director Eddy Rubin gives opening remarks at the JGI/Argonne High Performance Computing (HPC) Workshop on January 25, 2010.

  6. Opening Remarks from the Joint Genome Institute and Argonne Lab High Performance Computing Workshop (2010 JGI/ANL HPC Workshop)

    SciTech Connect (OSTI)

    Rubin, Eddy

    2010-01-25

    DOE JGI Director Eddy Rubin gives opening remarks at the JGI/Argonne High Performance Computing (HPC) Workshop on January 25, 2010.

  7. Towards Real-Time High Performance Computing For Power Grid Analysis

    SciTech Connect (OSTI)

    Hui, Peter SY; Lee, Barry; Chikkagoudar, Satish

    2012-11-16

    Real-time computing has traditionally been considered largely in the context of single-processor and embedded systems; indeed, the terms real-time computing, embedded systems, and control systems are often mentioned in closely related contexts. However, real-time computing in the context of multi-node systems, specifically high-performance cluster-computing systems, remains relatively unexplored. Imposing real-time constraints on a parallel (cluster) computing environment introduces a variety of challenges with respect to the formal verification of the system's timing properties. In this paper, we give a motivating example to demonstrate the need for such a system (an application to estimate the electromechanical states of the power grid), and we introduce a formal method for performing verification of certain temporal properties within a system of parallel processes. We describe our work towards a full real-time implementation of the target application: our progress towards extracting a key mathematical kernel from the application, the formal process by which we analyze the intricate timing behavior of the processes on the cluster, and timing measurements taken on our test cluster to demonstrate use of these concepts.

  8. Matrix multiplication operations with data pre-conditioning in a high performance computing architecture

    DOE Patents [OSTI]

    Eichenberger, Alexandre E; Gschwind, Michael K; Gunnels, John A

    2013-11-05

    Mechanisms for performing matrix multiplication operations with data pre-conditioning in a high performance computing architecture are provided. A vector load operation is performed to load a first vector operand of the matrix multiplication operation into a first target vector register. A load-and-splat operation is performed to load an element of a second vector operand and replicate the element to each of a plurality of elements of a second target vector register. A multiply-add operation is performed on elements of the first target vector register and elements of the second target vector register to generate a partial product of the matrix multiplication operation. The partial product is accumulated with the other partial products of the matrix multiplication operation.
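    The three steps named above (vector load, load-and-splat, multiply-add) amount to building each output column from accumulated partial products. A minimal sketch in Python, assuming NumPy broadcasting as a stand-in for the hardware splat and fused multiply-add; register widths and loop order are simplified for clarity.

        import numpy as np

        def matmul_splat(A, B):
            m, k = A.shape
            _, n = B.shape
            C = np.zeros((m, n))
            for j in range(n):                 # build one output column at a time
                acc = np.zeros(m)              # accumulator "vector register"
                for p in range(k):
                    a_vec = A[:, p]            # vector load of a column of A
                    b_splat = B[p, j]          # load-and-splat: one scalar broadcast
                    acc += a_vec * b_splat     # multiply-add of one partial product
                C[:, j] = acc
            return C

        A, B = np.random.rand(4, 3), np.random.rand(3, 5)
        assert np.allclose(matmul_splat(A, B), A @ B)

    On real vector hardware the splat and the multiply-add each map to single instructions, which is what makes this data layout attractive.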

  9. Evaluating Performance, Power, and Cooling in High Performance Computing (HPC) Data Centers

    SciTech Connect (OSTI)

    Evans, Jeffrey; Sandeep, Gupta; Karavanic, Karen; Marquez, Andres; Varsamopoulos, Girogios

    2012-01-24

    This chapter explores current research focused on developing our understanding of the interrelationships involved with HPC performance and energy management. The first section explores data center instrumentation, measurement, and performance analysis techniques, followed by a section focusing on work in data center thermal management and resource allocation. This is followed by an exploration of emerging techniques to identify application behavioral attributes that can provide clues and advice to HPC resource and energy management systems for the purpose of balancing HPC performance and energy efficiency.

  10. Measuring and tuning energy efficiency on large scale high performance computing platforms.

    SciTech Connect (OSTI)

    Laros, James H., III

    2011-08-01

    Recognition of the importance of power in the field of High Performance Computing, whether as an obstacle, an expense, or a design consideration, has never been greater or more pervasive. While research has been conducted on many related aspects, there is a stark absence of work focused on large-scale High Performance Computing. Part of the reason is the lack of measurement capability currently available on small or large platforms. Typically, research is conducted using coarse methods of measurement, such as inserting a power meter between the power source and the platform, or fine-grained measurements using custom-instrumented boards (with obvious limitations in scale). To collect the measurements necessary to analyze real scientific computing applications at large scale, an in-situ measurement capability must exist on a large-scale capability-class platform. In response to this challenge, we exploit the unique power measurement capabilities of the Cray XT architecture to gain an understanding of power use and the effects of tuning. We apply these capabilities at the operating system level by deterministically halting cores when idle. At the application level, we gain an understanding of the power requirements of a range of important DOE/NNSA production scientific computing applications running at large scale (thousands of nodes), while simultaneously collecting current and voltage measurements on the hosting nodes. We examine the effects of both CPU and network bandwidth tuning and demonstrate energy savings opportunities of up to 39% with little or no impact on run-time performance. Capturing scale effects in our experimental results was key. Our results provide strong evidence that next-generation large-scale platforms should not only approach CPU frequency scaling differently, but could also benefit from the capability to tune other platform components, such as the network, to achieve energy-efficient performance.

  11. Toward a Performance/Resilience Tool for Hardware/Software Co-Design of High-Performance Computing Systems

    SciTech Connect (OSTI)

    Engelmann, Christian; Naughton, III, Thomas J

    2013-01-01

    xSim is a simulation-based performance investigation toolkit that permits running high-performance computing (HPC) applications in a controlled environment with millions of concurrent execution threads, while observing application performance in a simulated extreme-scale system for hardware/software co-design. The presented work details newly developed features for xSim that permit the injection of MPI process failures, the propagation/detection/notification of such failures within the simulation, and their handling using application-level checkpoint/restart. These new capabilities enable the observation of application behavior and performance under failure within a simulated future-generation HPC system using the most common fault handling technique.

  12. Exploring Infiniband Hardware Virtualization in OpenNebula towards Efficient High-Performance Computing

    SciTech Connect (OSTI)

    Pais Pitta de Lacerda Ruivo, Tiago; Bernabeu Altayo, Gerard; Garzoglio, Gabriele; Timm, Steven; Kim, Hyun-Woo; Noh, Seo-Young; Raicu, Ioan

    2014-11-11

    It has been widely accepted that software virtualization has a large negative impact on high-performance computing (HPC) application performance. This work explores the potential use of InfiniBand hardware virtualization in an OpenNebula cloud towards the efficient support of MPI-based workloads. We have implemented, deployed, and tested an InfiniBand network on the FermiCloud private Infrastructure-as-a-Service (IaaS) cloud. To avoid software virtualization and minimize the virtualization overhead, we employed a technique called Single Root Input/Output Virtualization (SR-IOV). Our solution spanned modifications to the Linux hypervisor as well as the OpenNebula manager. We evaluated the performance of the hardware virtualization on up to 56 virtual machines connected by up to 8 DDR InfiniBand network links, with micro-benchmarks (latency and bandwidth) as well as with an MPI-intensive application (the HPL Linpack benchmark).

  13. DOE High Performance Computing Operational Review (HPCOR): Enabling Data-Driven Scientific Discovery at HPC Facilities

    SciTech Connect (OSTI)

    Gerber, Richard; Allcock, William; Beggio, Chris; Campbell, Stuart; Cherry, Andrew; Cholia, Shreyas; Dart, Eli; England, Clay; Fahey, Tim; Foertter, Fernanda; Goldstone, Robin; Hick, Jason; Karelitz, David; Kelly, Kaki; Monroe, Laura; Prabhat,; Skinner, David; White, Julia

    2014-10-17

    U.S. Department of Energy (DOE) High Performance Computing (HPC) facilities are on the verge of a paradigm shift in the way they deliver systems and services to science and engineering teams. Research projects are producing a wide variety of data at unprecedented scale and level of complexity, with community-specific services that are part of the data collection and analysis workflow. On June 18-19, 2014, representatives from six DOE HPC centers met in Oakland, CA at the DOE High Performance Computing Operational Review (HPCOR) to discuss how they can best provide facilities and services to enable large-scale data-driven scientific discovery at the DOE national laboratories. This report contains findings from that review.

  14. High-performance computational and geostatistical experiments for testing the capabilities of 3-d electrical tomography

    SciTech Connect (OSTI)

    Carle, S. F.; Daily, W. D.; Newmark, R. L.; Ramirez, A.; Tompson, A.

    1999-01-19

    This project explores the feasibility of combining geologic insight, geostatistics, and high-performance computing to analyze the capabilities of 3-D electrical resistance tomography (ERT). Geostatistical methods are used to characterize the spatial variability of the geologic facies that control sub-surface variability of permeability and electrical resistivity. Synthetic ERT data sets are generated from geostatistical realizations of alluvial facies architecture. The synthetic data sets enable comparison of the "truth" to inversion results, quantification of the ability to detect particular facies at particular locations, and sensitivity studies on inversion parameters.

  15. GridPACK Toolkit for Developing Power Grid Simulations on High Performance Computing Platforms

    SciTech Connect (OSTI)

    Palmer, Bruce J.; Perkins, William A.; Glass, Kevin A.; Chen, Yousu; Jin, Shuangshuang; Callahan, Charles D.

    2013-11-30

    This paper describes the GridPACK framework, which is designed to help power grid engineers develop modeling software capable of running on today's high performance computers. The framework contains modules for setting up distributed power grid networks, assigning buses and branches with arbitrary behaviors to the network, creating distributed matrices and vectors, using parallel linear and non-linear solvers to solve algebraic equations, and mapping functionality to create matrices and vectors based on properties of the network. In addition, the framework contains functionality to support I/O and to manage errors.
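    The "mapping functionality" mentioned above, turning network properties into matrices, can be illustrated with the classic bus admittance matrix. A minimal serial sketch in Python, assuming a toy 3-bus network; GridPACK itself is a C++/MPI framework that builds the distributed, sparse equivalent, and nothing below is its actual API.

        import numpy as np

        # (from_bus, to_bus, series admittance) for a toy 3-bus network
        branches = [(0, 1, 5.0 - 15.0j), (1, 2, 4.0 - 12.0j), (0, 2, 2.0 - 6.0j)]
        n_bus = 3

        Y = np.zeros((n_bus, n_bus), dtype=complex)
        for f, t, y in branches:
            Y[f, f] += y       # each branch contributes to both diagonal terms...
            Y[t, t] += y
            Y[f, t] -= y       # ...and to the two off-diagonal terms
            Y[t, f] -= y

        print(np.round(Y, 2))

    The mapper pattern generalizes this: each network component reports its own matrix contribution, and the framework assembles and distributes the result.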

  16. High Performance Computing and Storage Requirements for Nuclear Physics: Target 2017

    SciTech Connect (OSTI)

    Gerber, Richard; Wasserman, Harvey

    2015-01-20

    In April 2014, NERSC, ASCR, and the DOE Office of Nuclear Physics (NP) held a review to characterize high performance computing (HPC) and storage requirements for NP research through 2017. This review is the 12th in a series of reviews held by NERSC and Office of Science program offices that began in 2009. It is the second for NP, and the final review in the second round that covered the six Office of Science program offices. This report is the result of that review.

  17. Investigating Operating System Noise in Extreme-Scale High-Performance Computing Systems using Simulation

    SciTech Connect (OSTI)

    Engelmann, Christian

    2013-01-01

    Hardware/software co-design for future-generation high-performance computing (HPC) systems aims at closing the gap between the peak capabilities of the hardware and the performance realized by applications (application-architecture performance gap). Performance profiling of architectures and applications is a crucial part of this iterative process. The work in this paper focuses on operating system (OS) noise as an additional factor to be considered for co-design. It represents the first step in including OS noise in HPC hardware/software co-design by adding a noise injection feature to an existing simulation-based co-design toolkit. It reuses an existing abstraction for OS noise with frequency (periodic recurrence) and period (duration of each occurrence) to enhance the processor model of the Extreme-scale Simulator (xSim) with synchronized and random OS noise simulation. The results demonstrate this capability by evaluating the impact of OS noise on MPI_Bcast() and MPI_Reduce() in a simulated future-generation HPC system with 2,097,152 compute nodes.
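    To see why a two-parameter noise model (how often an event recurs, how long it steals the core) matters at scale, note that a synchronized collective finishes only when the slowest of N processes does. A minimal sketch in Python, assuming a single periodic noise source with a random phase per process; the parameters are illustrative and are not taken from xSim.

        import random

        def noisy_phase(compute_us, period_us, noise_us):
            t = random.uniform(0, period_us)   # random phase of the noise source
            hits = 0
            while t < compute_us:              # count noise events landing inside
                hits += 1                      # this process's compute phase
                t += period_us
            return compute_us + hits * noise_us

        random.seed(1)
        compute, period, noise = 1000.0, 10000.0, 500.0   # microseconds
        for n_procs in (1, 64, 4096, 65536):
            slowest = max(noisy_phase(compute, period, noise) for _ in range(n_procs))
            print(f"{n_procs:>6} procs: step takes {slowest:6.1f} us "
                  f"({slowest / compute:.2f}x ideal)")

    A single process is usually lucky enough to miss the noise event, but as the process count grows, some rank is almost always hit, so the whole collective pays the penalty.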

  18. Application of high performance computing for studying cyclic variability in dilute internal combustion engines

    SciTech Connect (OSTI)

    FINNEY, Charles E A; Edwards, Kevin Dean; Stoyanov, Miroslav K; Wagner, Robert M

    2015-01-01

    Combustion instabilities in dilute internal combustion engines are manifest as cyclic variability (CV) in engine performance measures such as integrated heat release or shaft work. Understanding the factors leading to CV is important in model-based control, especially with high dilution, where experimental studies have demonstrated that deterministic effects can become more prominent. Observing enough consecutive engine cycles for significant statistical analysis is standard in experimental studies but is largely wanting in numerical simulations because of the computational time required to compute hundreds or thousands of consecutive cycles. We have proposed and begun implementing an alternative approach that allows rapid simulation of long series of engine dynamics, based on a low-dimensional mapping of ensembles of single-cycle simulations that maps input parameters to output engine performance. This paper details the use of Titan at the Oak Ridge Leadership Computing Facility to investigate CV in a gasoline direct-injected spark-ignited engine with a moderately high rate of dilution achieved through external exhaust gas recirculation. The CONVERGE CFD software was used to perform single-cycle simulations with imposed variations of operating parameters and boundary conditions selected according to a sparse-grid sampling of the parameter space. Using an uncertainty quantification technique, the sampling scheme is chosen similarly to a design-of-experiments grid but uses functions designed to minimize the number of samples required to achieve a desired degree of accuracy. The simulations map input parameters to output metrics of engine performance for a single cycle, and by mapping over a large parameter space, results can be interpolated from within that space. This interpolation scheme forms the basis for a low-dimensional metamodel which can mimic the dynamical behavior of corresponding high-dimensional simulations. Simulations of high-EGR spark
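    The map-then-interpolate idea above can be illustrated with an ordinary dense-grid surrogate; the paper's sparse-grid construction achieves the same end with far fewer samples. A minimal sketch in Python, assuming a synthetic "engine response" function in place of an actual CONVERGE run; the parameter names and ranges are invented for illustration.

        import numpy as np
        from scipy.interpolate import RegularGridInterpolator

        def engine_response(egr, spark):
            # Synthetic stand-in for a single-cycle CFD metric (e.g., heat release).
            return np.exp(-3.0 * egr) * np.cos(0.1 * spark)

        egr_pts = np.linspace(0.0, 0.3, 7)         # sampled EGR fractions
        spark_pts = np.linspace(-20.0, 0.0, 9)     # sampled spark timings (deg)
        E, S = np.meshgrid(egr_pts, spark_pts, indexing="ij")
        samples = engine_response(E, S)            # the "expensive" simulation runs

        surrogate = RegularGridInterpolator((egr_pts, spark_pts), samples)
        print("surrogate:", surrogate([[0.17, -7.5]])[0])   # a point never simulated
        print("truth    :", engine_response(0.17, -7.5))

    Once the surrogate is built, evaluating thousands of consecutive "cycles" costs interpolations rather than CFD solves, which is what makes long-series CV statistics tractable.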

  19. High performance graphics processor based computed tomography reconstruction algorithms for nuclear and other large scale applications.

    SciTech Connect (OSTI)

    Jimenez, Edward Steven,

    2013-09-01

    The goal of this work is to develop a fast computed tomography (CT) reconstruction algorithm based on graphics processing units (GPUs) that achieves significant improvement over traditional central processing unit (CPU) based implementations. The main challenge in developing a CT algorithm that is capable of handling very large datasets is parallelizing the algorithm in such a way that data transfer does not hinder performance of the reconstruction algorithm. General-Purpose Graphics Processing Unit (GPGPU) computing is a new technology that the Science and Technology (S&T) community is starting to adopt in many fields where CPU-based computing is the norm. GPGPU programming requires a new approach to algorithm development that utilizes massively multi-threaded environments. Multi-threaded algorithms in general are difficult to optimize, since performance bottlenecks occur that are non-existent in single-threaded algorithms, such as memory latencies. If an efficient GPU-based CT reconstruction algorithm can be developed, computational times could be improved by a factor of 20. Additionally, cost benefits will be realized as commodity graphics hardware could potentially replace expensive supercomputers and high-end workstations. This project will take advantage of the CUDA programming environment and attempt to parallelize the task in such a way that multiple slices of the reconstruction volume are computed simultaneously. This work will also take advantage of the GPU memory by utilizing asynchronous memory transfers, GPU texture memory, and (when possible) pinned host memory so that the memory transfer bottleneck inherent to GPGPU computing is amortized. Additionally, this work will take advantage of GPU-specific hardware (i.e., fast texture memory, pixel pipelines, hardware interpolators, and a varying memory hierarchy) that will allow for additional performance improvements.
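    The transfer-bottleneck argument above reduces to simple arithmetic: with asynchronous, double-buffered transfers, the steady-state cost per slice approaches max(copy, compute) rather than their sum. A minimal back-of-envelope model in Python, with all timings illustrative rather than measured:

        # With synchronous copies each slice costs copy + compute; with
        # double-buffered asynchronous copies the steady-state cost per slice
        # is max(copy, compute). All timings below are illustrative.
        n_slices = 512
        t_copy, t_compute = 1.8, 2.4   # milliseconds per slice

        t_sync = n_slices * (t_copy + t_compute)
        t_async = t_copy + n_slices * max(t_copy, t_compute)  # + pipeline fill

        print(f"synchronous : {t_sync:7.1f} ms")
        print(f"overlapped  : {t_async:7.1f} ms ({t_sync / t_async:.2f}x faster)")

    The closer copy and compute times are to each other, the closer overlapping gets to hiding the transfer entirely, which is why pinned host memory and asynchronous streams matter.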

  20. A Lightweight, High-performance I/O Management Package for Data-intensive Computing

    SciTech Connect (OSTI)

    Wang, Jun

    2011-06-22

    Our group has been working with ANL collaborators on the topic "bridging the gap between parallel file system and local file system" during the course of this project period. We visited Argonne National Lab (Dr. Robert Ross's group) for one week in the past summer, 2007. We looked over our current project progress and planned the activities for the incoming years 2008-09. The PI met Dr. Robert Ross several times, such as at the HEC FSIO workshop '08, SC'08, and SC'10. We explored the opportunities to develop a production system by leveraging our current prototype (SOGP+PVFS) into a new PVFS version. We delivered the SOGP+PVFS code to the ANL PVFS2 group in 2008. We also talked about exploring a potential project on developing new parallel programming models and runtime systems for data-intensive scalable computing (DISC). The methodology is to evolve MPI towards DISC by incorporating some functions of the Google MapReduce parallel programming model. More recently, we have together been exploring how to leverage existing work to perform (1) coordination/aggregation of local I/O operations prior to movement over the WAN, (2) efficient bulk data movement over the WAN, and (3) latency-hiding techniques for latency-intensive operations. Since 2009, we have been applying Hadoop/MapReduce to some HEC applications with LANL scientists John Bent and Salman Habib. Another ongoing effort is to improve checkpoint performance at the I/O forwarding layer for the Roadrunner supercomputer with James Nunez and Gary Grider at LANL. Two senior undergraduates from our research group did summer internships on high-performance file and storage system projects at LANL for three consecutive years starting in 2008. Both of them are now pursuing Ph.D. degrees in our group, will be fourth-year students in the PhD program in Fall 2011, and will go to LANL to advance the two above-mentioned efforts during this winter break. Since 2009, we have been collaborating with several computer scientists (Gary Grider, John Bent, Parks Fields, James Nunez

  1. High-Performance Computing for Real-Time Grid Analysis and Operation

    SciTech Connect (OSTI)

    Huang, Zhenyu; Chen, Yousu; Chavarría-Miranda, Daniel

    2013-10-31

    Power grids worldwide are undergoing an unprecedented transition as grid evolution meets the information revolution. The grid evolution is largely driven by the desire for green energy. Emerging grid technologies such as renewable generation, smart loads, plug-in hybrid vehicles, and distributed generation provide opportunities to generate energy from green sources and to manage energy use for better system efficiency. With utility companies actively deploying these technologies, a high level of penetration of these new technologies is expected in the next 5-10 years, bringing in a level of intermittency, uncertainty, and complexity that the grid has not seen and was not designed for. On the other hand, the information infrastructure in the power grid is being revolutionized with large-scale deployment of sensors and meters in both the transmission and distribution networks. The future grid will have two-way flows of both electrons and information. The challenge is how to take advantage of the information revolution: pull the large amount of data in, process it in real time, and put information out to manage grid evolution. Without addressing this challenge, the opportunities in grid evolution will remain unfulfilled. This transition poses grand challenges in grid modeling, simulation, and information presentation. The computational complexity of underlying power grid modeling and simulation will significantly increase in the next decade due to increased model size and a decreased time window allowed to compute model solutions. High-performance computing is essential to enable this transition. The essential technical barrier is to vastly increase the computational speed so operation response time can be reduced from minutes to seconds and sub-seconds. The speed at which key functions such as state estimation and contingency analysis are conducted (typically every 3-5 minutes) needs to be dramatically increased so that the analysis of contingencies is both
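
    Contingency analysis of the kind mentioned above is naturally parallel: each postulated outage is an independent power-flow solve. The following minimal Python sketch shows that structure; solve_power_flow() is a placeholder, not a real library call.

        # Sketch: parallel N-1 contingency screening across worker processes.
        from concurrent.futures import ProcessPoolExecutor

        def solve_power_flow(outage):
            """Placeholder for a full power-flow solve under one outage;
            returns the worst post-contingency line loading (per-unit)."""
            return 0.95

        def screen(outages, workers=32):
            with ProcessPoolExecutor(max_workers=workers) as pool:
                loadings = pool.map(solve_power_flow, outages)
                return [o for o, l in zip(outages, loadings) if l > 1.0]

        if __name__ == "__main__":
            violations = screen([f"line-{i}" for i in range(5000)])
            print(len(violations), "contingencies with overloads")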

  2. Implementing the Data Center Energy Productivity Metric in a High Performance Computing Data Center

    SciTech Connect (OSTI)

    Sego, Landon H.; Marquez, Andres; Rawson, Andrew; Cader, Tahir; Fox, Kevin M.; Gustafson, William I.; Mundy, Christopher J.

    2013-06-30

    As data centers grow in size and proliferate in number, the improvement of their energy efficiency and productivity has become an economic and environmental imperative. Making these improvements requires metrics that are robust, interpretable, and practical. We discuss the properties of a number of proposed metrics of energy efficiency and productivity. In particular, we focus on the Data Center Energy Productivity (DCeP) metric, which is the ratio of useful work produced by the data center to the energy consumed performing that work. We describe our approach for using DCeP as the principal outcome of a designed experiment using a highly instrumented, high-performance computing data center. We found that DCeP was successful in clearly distinguishing different operational states in the data center, thereby validating its utility as a metric for identifying configurations of hardware and software that would improve energy productivity. We also discuss some of the challenges and benefits associated with implementing the DCeP metric, and we examine the efficacy of the metric in making comparisons within a data center and between data centers.
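
    For concreteness, the ratio at the heart of DCeP can be sketched as below; the task-weighting scheme is a hypothetical example, since "useful work" must be defined per data center.

        # DCeP = useful work produced / energy consumed producing it.
        def dcep(tasks_completed, value_per_task, energy_kwh):
            """Data Center Energy Productivity over an assessment window."""
            useful_work = sum(n * v for n, v in zip(tasks_completed, value_per_task))
            return useful_work / energy_kwh

        # e.g., two job classes with different values, 40 MWh consumed:
        print(dcep([1200, 300], [1.0, 5.0], energy_kwh=40_000.0))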

  3. Subsurface Multiphase Flow and Multicomponent Reactive Transport Modeling using High-Performance Computing

    SciTech Connect (OSTI)

    Hammond, Glenn E.; Lichtner, Peter C.; Lu, Chuan

    2007-07-16

    Numerical modeling has become a critical tool for the U.S. Department of Energy in evaluating the environmental impact of alternative energy sources and remediation strategies for legacy waste sites. Unfortunately, the physical and chemical complexity of many sites overwhelms the capabilities of even the most state-of-the-art groundwater models. Of particular concern are the representation of highly heterogeneous, stratified rock/soil layers in the subsurface and the biological and geochemical interactions of chemical species within multiple fluid phases. Clearly, there is a need for higher-resolution modeling (i.e., more spatial, temporal, and chemical degrees of freedom) and increasingly mechanistic descriptions of subsurface physicochemical processes. We present SciDAC-funded research being performed in the development of PFLOTRAN, a parallel multiphase flow and multicomponent reactive transport model. Written in Fortran90, PFLOTRAN is founded upon PETSc data structures and solvers. We are employing PFLOTRAN in the simulation of uranium transport at the Hanford 300 Area, a contaminated site of major concern to the Department of Energy, the State of Washington, and other government agencies. By leveraging the billions of degrees of freedom available through high-performance computation using tens of thousands of processors, we can better characterize the release of uranium into groundwater and its subsequent transport to the Columbia River, and thereby better understand and evaluate the effectiveness of various proposed remediation strategies.

  4. Development of high performance scientific components for interoperability of computing packages

    SciTech Connect (OSTI)

    Gulabani, Teena Pratap

    2008-12-01

    Three major high performance quantum chemistry computational packages, NWChem, GAMESS, and MPQC, have been developed by different research efforts following different design patterns. The goal is to achieve interoperability among these packages by overcoming the challenges caused by the different communication patterns and software design of each package. Chemistry algorithms are hard and time-consuming to develop; integration of large quantum chemistry packages will allow resource sharing and thus avoid reinvention of the wheel. Creating connections between these incompatible packages is the major motivation of the proposed work. This interoperability is achieved by bringing the benefits of Component-Based Software Engineering through a plug-and-play component framework called the Common Component Architecture (CCA). In this thesis, I present a strategy and process used for interfacing two widely used and important computational chemistry methodologies: Quantum Mechanics and Molecular Mechanics. To show the feasibility of the proposed approach, the Tuning and Analysis Utility (TAU) has been coupled with the NWChem code and its CCA components. Results show that the overhead is negligible when compared to the ease and potential of organizing and coping with large-scale software applications.

  5. A High Performance Computing Network and System Simulator for the Power Grid: NGNS^2

    SciTech Connect (OSTI)

    Villa, Oreste; Tumeo, Antonino; Ciraci, Selim; Daily, Jeffrey A.; Fuller, Jason C.

    2012-11-11

    Designing and planning next-generation power grid systems composed of large power distribution networks, monitoring and control networks, autonomous generators, and consumers of power requires advanced simulation infrastructures. The objective is to predict and analyze in time the behavior of networks of systems under unexpected events such as loss of connectivity, malicious attacks, and power loss scenarios. This ultimately allows one to answer questions such as: What could happen to the power grid if .... We want to be able to answer as many questions as possible in the shortest possible time for the largest possible systems. In this paper we present a new High Performance Computing (HPC) oriented simulation infrastructure named the Next Generation Network and System Simulator (NGNS2). NGNS2 allows for the distribution of a single simulation among multiple computing elements by using MPI and OpenMP threads. NGNS2 provides the extensive configuration, fault-tolerance, and load-balancing capabilities needed to simulate large and dynamic systems for long periods of time. We show preliminary results of the simulator running approximately two million simulated entities both on a 64-node commodity InfiniBand cluster and on a 48-core SMP workstation.
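
    A minimal mpi4py sketch of the distribution idea described above (entities partitioned across ranks, advanced in lockstep) is shown below; the per-entity state and the aggregate exchange are hypothetical stand-ins, not the NGNS2 code.

        # Sketch: distribute simulated entities across MPI ranks and advance
        # them in lockstep, exchanging aggregate state each step.
        from mpi4py import MPI

        comm = MPI.COMM_WORLD
        rank, size = comm.Get_rank(), comm.Get_size()

        N_ENTITIES = 2_000_000                  # total simulated entities
        local_n = N_ENTITIES // size            # block decomposition
        state = [0.0] * local_n                 # hypothetical per-entity state

        for step in range(10):
            state = [x + 1.0 for x in state]    # local update of this rank's entities
            # Exchange aggregate information (placeholder for the coupling
            # between power, monitoring, and control networks).
            total = comm.allreduce(sum(state), op=MPI.SUM)
            if rank == 0 and step % 5 == 0:
                print(f"step {step}: system-wide aggregate {total:.1f}")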

  6. In the OSTI Collections: High-Performance Computing | OSTI, US Dept of Energy

    Office of Scientific and Technical Information (OSTI)

    Office of Scientific and Technical Information topic page, with sections on computing efficiently, programming efficiently, and correcting mistakes and avoiding failures, plus projections, references, research organizations, reports available through OSTI's SciTech Connect, and reports available through OSTI's DOepatents. What's happening in one current research field can be guessed from recent report title excerpts such as "Global Simulation of Plasma Microturbulence at the Petascale &...

  7. The Essential Role of New Network Services for High Performance Distributed Computing - PARENG.CivilComp.2011.

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Second International Conference on Parallel, Distributed, Grid and Cloud Computing for Engineering 12-15 April 2011, Ajaccio - Corsica - France In "Trends in Parallel, Distributed, Grid and Cloud Computing for Engineering," Edited by: P. Iványi, B.H.V. Topping, Civil-Comp Press. Network Services for High Performance Distributed Computing and Data Management W. E. Johnston, C. Guok, J. Metzger, and B. Tierney ESnet and Lawrence Berkeley National Laboratory, Berkeley California, U.S.A

  8. National cyber defense high performance computing and analysis : concepts, planning and roadmap.

    SciTech Connect (OSTI)

    Hamlet, Jason R.; Keliiaa, Curtis M.

    2010-09-01

    There is a national cyber dilemma that threatens the very fabric of government, commercial, and private-use operations worldwide. Much is written about 'what' the problem is, and though the basis for this paper is an assessment of the problem space, we target the 'how' solution space of the wide-area national information infrastructure through the advancement of science, technology, evaluation, and analysis, with actionable results intended to produce a more secure national information infrastructure and a comprehensive national cyber defense capability. This cybersecurity High Performance Computing (HPC) analysis concepts, planning, and roadmap activity was conducted as an assessment of cybersecurity analysis as a fertile area of research and investment for high-value, wide-area cybersecurity solutions. This report and the related report SAND2010-4765, Assessment of Current Cybersecurity Practices in the Public Domain: Cyber Indications and Warnings Domain, are intended to provoke discussion throughout a broad audience about developing a cohesive, HPC-centric solution to wide-area cybersecurity problems.

  9. Subsurface Multiphase Flow and Multicomponent Reactive Transport Modeling using High-Performance Computing

    SciTech Connect (OSTI)

    Hammond, Glenn E.; Lichtner, Peter C.; Lu, Chuan

    2007-08-01

    Numerical modeling has become a critical tool for the Department of Energy in evaluating the environmental impact of alternative energy sources and remediation strategies for legacy waste sites. Unfortunately, the physical and chemical complexity of many sites overwhelms the capabilities of even the most state-of-the-art groundwater models. Of particular concern are the representation of highly heterogeneous, stratified rock/soil layers in the subsurface and the biological and geochemical interactions of chemical species within multiple fluid phases. Clearly, there is a need for higher-resolution modeling (i.e., more spatial, temporal, and chemical degrees of freedom) and increasingly mechanistic descriptions of subsurface physicochemical processes. We present research being performed in the development of PFLOTRAN, a parallel multiphase flow and multicomponent reactive transport model. Written in Fortran90, PFLOTRAN is founded upon PETSc data structures and solvers and has exhibited impressive strong scalability on up to 4000 processors of the ORNL Cray XT3. We are employing PFLOTRAN in the simulation of uranium transport at the Hanford 300 Area, a contaminated site of major concern to the Department of Energy, the State of Washington, and other government agencies, where overly simplistic historical modeling erroneously predicted decade-long removal times for uranium by ambient groundwater flow. By leveraging the billions of degrees of freedom available through high-performance computation using tens of thousands of processors, we can better characterize the release of uranium into groundwater and its subsequent transport to the Columbia River, and thereby better understand and evaluate the effectiveness of various proposed remediation strategies.

  10. High-Performance Computer Modeling of the Cosmos-Iridium Collision

    SciTech Connect (OSTI)

    Olivier, S; Cook, K; Fasenfest, B; Jefferson, D; Jiang, M; Leek, J; Levatin, J; Nikolaev, S; Pertica, A; Phillion, D; Springer, K; De Vries, W

    2009-08-28

    This paper describes the application of a new, integrated modeling and simulation framework, encompassing the space situational awareness (SSA) enterprise, to the recent Cosmos-Iridium collision. This framework is based on a flexible, scalable architecture to enable efficient simulation of the current SSA enterprise, and to accommodate future advancements in SSA systems. In particular, the code is designed to take advantage of massively parallel, high-performance computer systems available, for example, at Lawrence Livermore National Laboratory. We will describe the application of this framework to the recent collision of the Cosmos and Iridium satellites, including (1) detailed hydrodynamic modeling of the satellite collision and resulting debris generation, (2) orbital propagation of the simulated debris and analysis of the increased risk to other satellites, (3) calculation of the radar and optical signatures of the simulated debris and modeling of debris detection with space surveillance radar and optical systems, (4) determination of simulated debris orbits from modeled space surveillance observations and analysis of the resulting orbital accuracy, and (5) comparison of these modeling and simulation results with Space Surveillance Network observations. We will also discuss the use of this integrated modeling and simulation framework to analyze the risks and consequences of future satellite collisions and to assess strategies for mitigating or avoiding future incidents, including the addition of new sensor systems, used in conjunction with the Space Surveillance Network, for improving space situational awareness.

  11. iSSH v. Auditd: Intrusion Detection in High Performance Computing

    SciTech Connect (OSTI)

    Karns, David M.; Protin, Kathryn S.; Wolf, Justin G.

    2012-07-30

    The goal is to provide insight into intrusions in high performance computing, focusing on tracking intruders' motions through the system. Current tools, such as pattern matching, do not provide sufficient tracking capabilities. We tested two tools: an instrumented version of SSH (iSSH) and the Linux Auditing Framework (Auditd). First discussed is Instrumented Secure Shell (iSSH), a version of SSH developed at Lawrence Berkeley National Laboratory whose goal is to audit user activity within a computer system to increase security. Capabilities are: keystroke logging, recording user names and authentication information, and catching suspicious remote and local commands. Strengths of iSSH are: (1) good for keystroke logging, making it easier to track malicious users by catching suspicious commands; (2) works with Bro to send alerts, and could be configured to send pages to system administrators; and (3) creates visibility into SSH sessions. Weaknesses are: (1) relatively new, so not very well documented; and (2) no capability to see if files have been edited, moved, or copied within the system. Second, we discuss Auditd, the user component of the Linux Auditing System. It creates logs of user behavior and monitors system calls and file accesses. Its goal is to improve system security by keeping track of users' actions within the system. Strengths of Auditd are: (1) very thorough logs; (2) a wider variety of tracking abilities than iSSH; and (3) older, so better documented. Weaknesses are: (1) the logs record everything, not just malicious behavior; (2) the size of the logs can lead to overflowing directories; and (3) this level of logging leads to a lot of false alarms. Auditd is better documented than iSSH, which would help administrators during setup and troubleshooting. iSSH has a cleaner notification system, but its logs are not as detailed as Auditd's. From our performance testing: (1) file transfer speed using SCP is increased when using iSSH; and (2) network benchmarks
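
    A hedged sketch of the kind of post-filtering that tames the false-alarm volume described above: flag only audit records whose executable matches a watchlist. The record layout follows the standard auditd line format, but the watched commands and log path are illustrative assumptions.

        # Reduce audit noise by alerting only on watchlisted executables.
        # Standard auditd lines look like:
        #   type=SYSCALL msg=audit(1343908063.093:66650): ... exe="/usr/bin/scp" ...
        import re

        WATCHLIST = {"/usr/bin/scp", "/bin/nc", "/usr/bin/wget"}  # hypothetical
        EXE_RE = re.compile(r'exe="([^"]+)"')

        def suspicious(line):
            m = EXE_RE.search(line)
            return bool(m) and m.group(1) in WATCHLIST

        with open("/var/log/audit/audit.log") as log:
            for line in log:
                if suspicious(line):
                    print("ALERT:", line.rstrip())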

  12. LIAR -- A computer program for the modeling and simulation of high performance linacs

    SciTech Connect (OSTI)

    Assmann, R.; Adolphsen, C.; Bane, K.; Emma, P.; Raubenheimer, T.; Siemann, R.; Thompson, K.; Zimmermann, F.

    1997-04-01

    The computer program LIAR (LInear Accelerator Research Code) is a numerical modeling and simulation tool for high performance linacs. Among other applications, it addresses the needs of state-of-the-art linear colliders where low-emittance, high-intensity beams must be accelerated to energies in the 0.05-1 TeV range. LIAR is designed to be used for a variety of different projects. LIAR allows the study of single- and multi-particle beam dynamics in linear accelerators. It calculates emittance dilutions due to wakefield deflections, linear and non-linear dispersion, and chromatic effects in the presence of multiple accelerator imperfections. Both single-bunch and multi-bunch beams can be simulated. Several basic and advanced optimization schemes are implemented. Present limitations arise from the incomplete treatment of bending magnets and sextupoles. A major objective of the LIAR project is to provide an open programming platform for the accelerator physics community. Due to its design, LIAR allows straightforward access to its internal FORTRAN data structures. The program can easily be extended, and its interactive command language ensures maximum ease of use. Presently, versions of LIAR are compiled for UNIX and MS Windows operating systems. An interface for the graphical visualization of results is provided. Scientific graphs can be saved in the PS and EPS file formats. In addition, a Mathematica interface has been developed. LIAR now contains more than 40,000 lines of source code in more than 130 subroutines. This report describes the theoretical basis of the program, provides a reference for existing features, and explains how to add further commands. The LIAR home page and the ONLINE version of this manual can be accessed under: http://www.slac.stanford.edu/grp/arb/rwa/liar.htm.

  13. High Performance Computing at TJNAF | U.S. DOE Office of Science (SC)

    Office of Science (SC) Website


  14. High performance systems

    SciTech Connect (OSTI)

    Vigil, M.B.

    1995-03-01

    This document provides a written compilation of the presentations and viewgraphs from the 1994 High Speed Computing Conference, "High Performance Systems," held at Gleneden Beach, Oregon, on April 18 through 21, 1994.

  15. Technologies and tools for high-performance distributed computing. Final report

    SciTech Connect (OSTI)

    Karonis, Nicholas T.

    2000-05-01

    In this project we studied the practical use of the MPI message-passing interface in advanced distributed computing environments. We built on the existing software infrastructure provided by the Globus Toolkit(TM), the MPICH portable implementation of MPI, and the MPICH-G integration of MPICH with Globus. As a result of this project we have replaced MPICH-G with its successor MPICH-G2, which is also an integration of MPICH with Globus. MPICH-G2 delivers significant improvements in message passing performance when compared to its predecessor MPICH-G, and was based on superior software design principles, resulting in a software base in which the functional extensions and improvements we made were much easier to implement. Using Globus services we replaced the default implementation of MPI's collective operations in MPICH-G2 with more efficient multilevel topology-aware collective operations which, in turn, led to the development of a new timing methodology for broadcasts [8]. MPICH-G2 was extended to include client/server functionality from the MPI-2 standard [23] to facilitate remote visualization applications and, through the use of MPI idioms, MPICH-G2 provided application-level control of quality-of-service parameters as well as application-level discovery of underlying Grid-topology information. Finally, MPICH-G2 was successfully used in a number of applications, including an award-winning, record-setting computation in numerical relativity. In the sections that follow we describe in detail the accomplishments of this project, we present experimental results quantifying the performance improvements, and conclude with a discussion of our applications experiences. This project resulted in a significant increase in the utility of MPICH-G2.
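
    The multilevel, topology-aware collectives mentioned above can be illustrated with communicator splitting; the sketch below is a generic two-level broadcast in mpi4py, not MPICH-G2's code, and the cluster assignment is a made-up function of rank (MPICH-G2 obtained it from Grid topology information).

        # Two-level, topology-aware broadcast: one hop over the slow
        # inter-cluster links, then a fan-out over the fast local links.
        from mpi4py import MPI

        world = MPI.COMM_WORLD
        rank = world.Get_rank()
        cluster_id = rank // 8                  # hypothetical: 8 ranks per cluster

        local = world.Split(color=cluster_id, key=rank)   # intra-cluster comm
        is_leader = local.Get_rank() == 0
        leaders = world.Split(color=0 if is_leader else MPI.UNDEFINED, key=rank)

        data = {"payload": 42} if rank == 0 else None
        if is_leader:
            data = leaders.bcast(data, root=0)  # WAN hop: once per cluster
        data = local.bcast(data, root=0)        # LAN fan-out within the cluster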

  16. Fair share on high performance computing systems : what does fair really mean?

    SciTech Connect (OSTI)

    Clearwater, Scott Harvey; Kleban, Stephen David

    2003-03-01

    We report on a performance evaluation of a Fair Share system at the ASCI Blue Mountain supercomputer cluster. We study the impacts of share allocation under Fair Share on wait times and expansion factor. We also measure the Service Ratio, a typical figure of merit for Fair Share systems, with respect to a number of job parameters. We conclude that Fair Share does little to alter important performance metrics such as expansion factor. This leads to the question of what Fair Share means on cluster machines. The essential difference between Fair Share on a uni-processor and a cluster is that the workload on a cluster is not fungible in space or time. We find that cluster machines must be highly utilized and support checkpointing in order for Fair Share to function more closely to the spirit in which it was originally developed.
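
    For reference, the expansion factor discussed above is conventionally defined as turnaround time divided by run time; the sketch below states that standard definition rather than quoting the paper.

        # Expansion factor: how much queueing stretched a job's turnaround.
        # 1.0 means the job never waited; larger values mean longer waits.
        def expansion_factor(wait_time, run_time):
            return (wait_time + run_time) / run_time

        print(expansion_factor(wait_time=3600.0, run_time=1800.0))  # -> 3.0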

  17. Money for Research, Not for Energy Bills: Finding Energy and Cost Savings in High Performance Computer Facility Designs

    SciTech Connect (OSTI)

    Drewmark Communications; Sartor, Dale; Wilson, Mark

    2010-07-01

    High-performance computing facilities in the United States consume an enormous amount of electricity, cutting into research budgets and challenging public- and private-sector efforts to reduce energy consumption and meet environmental goals. However, these facilities can greatly reduce their energy demand through energy-efficient design of the facility itself. Using a case study of a facility under design, this article discusses strategies and technologies that can be used to help achieve energy reductions.

  18. Coordinated Fault-Tolerance for High-Performance Computing Final Project Report

    SciTech Connect (OSTI)

    Panda, Dhabaleswar Kumar; Beckman, Pete

    2011-07-28

    implementations on top of existing publish-subscribe tools. We enhanced the intrinsic fault-tolerance capabilities of representative implementations of a variety of key HPC software subsystems and integrated them with the FTB; targeted software subsystems included MPI communication libraries, checkpoint/restart libraries, resource managers and job schedulers, and system monitoring tools. Leveraging the aforementioned infrastructure, as well as developing and utilizing additional tools, we examined issues associated with expanded, end-to-end fault response from both system and application viewpoints. From the standpoint of system operations, we investigated log and root-cause analysis, anomaly detection and fault prediction, and generalized notification mechanisms. Our applications work included libraries for fault-tolerant linear algebra, application frameworks for coupled multiphysics applications, and external frameworks to support monitoring and response for general applications. Our final goal was to engage the high-end computing community to increase awareness of tools and issues around coordinated end-to-end fault management.

  19. High-Performance Computation of Distributed-Memory Parallel 3D Voronoi and Delaunay Tessellation

    SciTech Connect (OSTI)

    Peterka, Tom; Morozov, Dmitriy; Phillips, Carolyn

    2014-11-14

    Computing a Voronoi or Delaunay tessellation from a set of points is a core part of the analysis of many simulated and measured datasets: N-body simulations, molecular dynamics codes, and LIDAR point clouds are just a few examples. Such computational geometry methods are common in data analysis and visualization, but as the scale of simulations and observations surpasses billions of particles, the existing serial and shared-memory algorithms no longer suffice. A distributed-memory scalable parallel algorithm is the only feasible approach. The primary contribution of this paper is a new parallel Delaunay and Voronoi tessellation algorithm that automatically determines which neighbor points need to be exchanged among the subdomains of a spatial decomposition. Other contributions include periodic and wall boundary conditions, a comparison of our method with two popular serial libraries, and application to numerous science datasets.
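
    A hedged serial illustration of the neighbor-point issue, using scipy: each block is triangulated together with ghost points borrowed from the adjacent block. Note that a fixed-width ghost zone, used here for simplicity, is not guaranteed to capture every needed neighbor; determining the required exchanges automatically is precisely what the paper's algorithm contributes.

        # Serial sketch: triangulate two blocks with overlapping ghost zones so
        # that cells near the block boundary come out correct.
        import numpy as np
        from scipy.spatial import Delaunay

        rng = np.random.default_rng(1)
        points = rng.random((10_000, 2))

        ghost = 0.05                                  # assumed ghost-zone width
        left = points[points[:, 0] < 0.5 + ghost]     # block 0 plus ghosts
        right = points[points[:, 0] >= 0.5 - ghost]   # block 1 plus ghosts

        tri_left, tri_right = Delaunay(left), Delaunay(right)
        print(len(tri_left.simplices), len(tri_right.simplices))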

  20. Complex matrix multiplication operations with data pre-conditioning in a high performance computing architecture

    DOE Patents [OSTI]

    Eichenberger, Alexandre E; Gschwind, Michael K; Gunnels, John A

    2014-02-11

    Mechanisms for performing a complex matrix multiplication operation are provided. A vector load operation is performed to load a first vector operand of the complex matrix multiplication operation to a first target vector register. The first vector operand comprises a real and imaginary part of a first complex vector value. A complex load and splat operation is performed to load a second complex vector value of a second vector operand and replicate the second complex vector value within a second target vector register. The second complex vector value has a real and imaginary part. A cross multiply add operation is performed on elements of the first target vector register and elements of the second target vector register to generate a partial product of the complex matrix multiplication operation. The partial product is accumulated with other partial products and a resulting accumulated partial product is stored in a result vector register.
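
    The data flow in the claim can be emulated in NumPy to make the arithmetic concrete; this is purely an illustration of the load-splat and cross-multiply-add pattern, not the patented hardware mechanism itself.

        # Complex values stored as interleaved (real, imag) pairs, as in the
        # vector registers described above.
        import numpy as np

        a = np.array([1.0, 2.0, 3.0, -1.0])   # register: (re0, im0, re1, im1)
        br, bi = 4.0, 0.5                      # one complex value from operand B

        # "Load and splat": replicate the complex value across the register.
        b = np.array([br, bi, br, bi])

        # "Cross multiply add": for pairs (ar, ai) and (br, bi),
        # real = ar*br - ai*bi, imag = ar*bi + ai*br.
        out = np.empty_like(a)
        out[0::2] = a[0::2] * b[0::2] - a[1::2] * b[1::2]
        out[1::2] = a[0::2] * b[1::2] + a[1::2] * b[0::2]

        # Check against native complex arithmetic:
        print(out, (a[0::2] + 1j * a[1::2]) * complex(br, bi))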

  1. HPGMG 1.0: A Benchmark for Ranking High Performance Computing Systems

    SciTech Connect (OSTI)

    Adams, Mark; Brown, Jed; Shalf, John; Straalen, Brian Van; Strohmaier, Erich; Williams, Sam

    2014-05-05

    This document provides an overview of the benchmark, HPGMG, for ranking large scale general purpose computers for use on the Top500 list [8]. We provide a rationale for the need for a replacement for the current metric HPL, some background on the Top500 list, and the challenges of developing such a metric; we discuss our design philosophy and methodology, and give an overview of the specification of the benchmark. The primary documentation with maintained details on the specification can be found at hpgmg.org, and the Wiki and the benchmark code itself can be found in the repository https://bitbucket.org/hpgmg/hpgmg.

  2. High Performance Computing

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Slide excerpt on constructing reduced chemical schemes automatically, when possible, using optimization methods, illustrating the genetic algorithm principle: an initial population is generated and the fitness of each individual is evaluated.

  3. High Performance Computing

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    activities span repeated lifetimes of supercomputing systems and infrastructure: Defining Future Environments - communication and collaborations with industry and academia to follow...

  4. Harnessing the Department of Energy’s High-Performance Computing Expertise to Strengthen the U.S. Chemical Enterprise

    SciTech Connect (OSTI)

    Dixon, David A.; Dupuis, Michel; Garrett, Bruce C.; Neaton, Jeffrey B.; Plata, Charity; Tarr, Matthew A.; Tomb, Jean-Francois; Golab, Joseph T.

    2012-01-17

    High-performance computing (HPC) is one area where the DOE has developed extensive expertise and capability. However, this expertise currently is not properly shared with or used by the private sector to speed product development, enable industry to move rapidly into new areas, and improve product quality. Such use would lead to substantial competitive advantages in global markets and yield important economic returns for the United States. To stimulate the dissemination of DOE's HPC expertise, the Council for Chemical Research (CCR) and the DOE jointly held a workshop on this topic. Four important energy topic areas were chosen as the focus of the meeting: Biomass/Bioenergy, Catalytic Materials, Energy Storage, and Photovoltaics. Academic, industrial, and government experts in these topic areas participated in the workshop to identify industry needs, evaluate the current state of expertise, offer proposed actions and strategies, and forecast the expected benefits of implementing those strategies.

  5. SU-E-T-531: Performance Evaluation of Multithreaded Geant4 for Proton Therapy Dose Calculations in a High Performance Computing Facility

    SciTech Connect (OSTI)

    Shin, J; Coss, D; McMurry, J; Farr, J [St. Jude Children's Research Hospital, Memphis, TN (United States); Faddegon, B [UC San Francisco, San Francisco, CA (United States)

    2014-06-01

    Purpose: To evaluate the efficiency of multithreaded Geant4 (Geant4-MT, version 10.0) for proton Monte Carlo dose calculations using a high performance computing facility. Methods: Geant4-MT was used to calculate 3D dose distributions in 1×1×1 mm3 voxels in a water phantom and in a patient's head with a 150 MeV proton beam covering approximately 5×5 cm2 in the water phantom. Three timestamps were measured on the fly to separately analyze the required time for initialization (which cannot be parallelized), the processing time of individual threads, and the completion time. Scalability of the averaged processing time per thread was calculated as a function of thread number (1, 100, 150, and 200) for both 1 M and 50 M histories. The total memory usage was recorded. Results: Simulations with 50 M histories were fastest with 100 threads, taking approximately 1.3 hours and 6 hours for the water phantom and the CT data, respectively, with better than 1.0% statistical uncertainty. The calculations show 1/N scalability in the event loops for both cases. The gains from parallel calculation started to decrease at 150 threads. Memory usage increases linearly with the number of threads. No critical failures were observed during the simulations. Conclusion: Multithreading in Geant4-MT decreased simulation time in proton dose distribution calculations by factors of 64 and 54 at a near-optimal 100 threads for the water phantom and the patient data, respectively. Further simulations will be done to determine the efficiency at the optimal thread number. Considering the trend of computer architecture development, utilizing Geant4-MT for radiotherapy simulations is an excellent cost-effective alternative to a distributed batch queuing system. However, because the scalability depends highly on simulation details, i.e., the ratio of the processing time of one event versus the waiting time to access the shared event queue, a performance evaluation as described here is recommended.
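
    The reported behavior (a serial initialization that cannot be parallelized plus an event loop scaling as 1/N, with gains tapering past 100 threads) follows the classic Amdahl pattern, sketched below with purely illustrative constants, not the paper's measured values.

        # Amdahl-style timing model: serial setup + parallelizable event loop.
        def wall_time_hours(threads, t_init_s=600.0, t_events_serial_s=400_000.0):
            return (t_init_s + t_events_serial_s / threads) / 3600.0

        for n in (1, 100, 150, 200):
            print(n, "threads:", round(wall_time_hours(n), 2), "hours")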

  6. Lawrence Livermore National Laboratories Perspective on Code Development and High Performance Computing Resources in Support of the National HED/ICF Effort

    SciTech Connect (OSTI)

    Clouse, C. J.; Edwards, M. J.; McCoy, M. G.; Marinak, M. M.; Verdon, C. P.

    2015-07-07

    Through its Advanced Scientific Computing (ASC) and Inertial Confinement Fusion (ICF) code development efforts, Lawrence Livermore National Laboratory (LLNL) provides a world-leading numerical simulation capability for the National HED/ICF program in support of the Stockpile Stewardship Program (SSP). In addition, the ASC effort provides the high performance computing platform capabilities upon which these codes are run. LLNL remains committed to, and will work with, the national HED/ICF program community to help ensure that numerical simulation needs are met and to make those capabilities available, consistent with programmatic priorities and available resources.

  7. Final report for "High performance computing for advanced national electric power grid modeling and integration of solar generation resources", LDRD Project No. 149016.

    SciTech Connect (OSTI)

    Reno, Matthew J.; Riehm, Andrew Charles; Hoekstra, Robert John; Munoz-Ramirez, Karina; Stamp, Jason Edwin; Phillips, Laurence R.; Adams, Brian M.; Russo, Thomas V.; Oldfield, Ron A.; McLendon, William Clarence, III; Nelson, Jeffrey Scott; Hansen, Clifford W.; Richardson, Bryan T.; Stein, Joshua S.; Schoenwald, David Alan; Wolfenbarger, Paul R.

    2011-02-01

    Design and operation of the electric power grid (EPG) relies heavily on computational models. High-fidelity, full-order models are used to study transient phenomena on only a small part of the network. Reduced-order dynamic and power flow models are used when analyses involving thousands of nodes are required, due to the computational demands of simulating large numbers of nodes. The level of complexity of the future EPG will dramatically increase due to large-scale deployment of variable renewable generation, active load and distributed generation resources, adaptive protection and control systems, and price-responsive demand. High-fidelity modeling of this future grid will require significant advances in coupled, multi-scale tools and their use on high performance computing (HPC) platforms. This LDRD report demonstrates SNL's capability to apply HPC resources to three tasks: (1) high-fidelity, large-scale modeling of power system dynamics; (2) statistical assessment of grid security via Monte-Carlo simulations of cyber attacks; and (3) development of models to predict variability of solar resources at locations where little or no ground-based measurements are available.

  8. Geant4 Computing Performance Benchmarking and Monitoring

    SciTech Connect (OSTI)

    Dotti, Andrea; Elvira, V. Daniel; Folger, Gunter; Genser, Krzysztof; Jun, Soon Yung; Kowalkowski, James B.; Paterno, Marc

    2015-12-23

    Performance evaluation and analysis of large scale computing applications is essential for optimal use of resources. As detector simulation is one of the most compute intensive tasks and Geant4 is the simulation toolkit most widely used in contemporary high energy physics (HEP) experiments, it is important to monitor Geant4 through its development cycle for changes in computing performance and to identify problems and opportunities for code improvements. All Geant4 development and public releases are being profiled with a set of applications that utilize different input event samples, physics parameters, and detector configurations. Results from multiple benchmarking runs are compared to previous public and development reference releases to monitor CPU and memory usage. Observed changes are evaluated and correlated with code modifications. Besides the full summary of call stack and memory footprint, a detailed call graph analysis is available to Geant4 developers for further analysis. The set of software tools used in the performance evaluation procedure, both in sequential and multi-threaded modes, include FAST, IgProf and Open|Speedshop. In conclusion, the scalability of the CPU time and memory performance in multi-threaded application is evaluated by measuring event throughput and memory gain as a function of the number of threads for selected event samples.

  9. Geant4 Computing Performance Benchmarking and Monitoring

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Dotti, Andrea; Elvira, V. Daniel; Folger, Gunter; Genser, Krzysztof; Jun, Soon Yung; Kowalkowski, James B.; Paterno, Marc

    2015-12-23

    Performance evaluation and analysis of large scale computing applications is essential for optimal use of resources. As detector simulation is one of the most compute intensive tasks and Geant4 is the simulation toolkit most widely used in contemporary high energy physics (HEP) experiments, it is important to monitor Geant4 through its development cycle for changes in computing performance and to identify problems and opportunities for code improvements. All Geant4 development and public releases are being profiled with a set of applications that utilize different input event samples, physics parameters, and detector configurations. Results from multiple benchmarking runs are compared to previous public and development reference releases to monitor CPU and memory usage. Observed changes are evaluated and correlated with code modifications. Besides the full summary of call stack and memory footprint, a detailed call graph analysis is available to Geant4 developers for further analysis. The set of software tools used in the performance evaluation procedure, both in sequential and multi-threaded modes, include FAST, IgProf and Open|Speedshop. In conclusion, the scalability of the CPU time and memory performance in multi-threaded application is evaluated by measuring event throughput and memory gain as a function of the number of threads for selected event samples.

  10. High Performance Sustainable Buildings

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    science and bioscience capabilities. Occupational Medicine will become a High Performance Sustainable Building in 2013. On the former County landfill, a photovoltaic array field...

  11. High Performance Network Monitoring

    SciTech Connect (OSTI)

    Martinez, Jesse E

    2012-08-10

    Network monitoring requires substantial use of data and error analysis to overcome issues with clusters. Zenoss and Splunk help to monitor system log messages that report issues about the clusters to monitoring services. The InfiniBand infrastructure on a number of clusters was upgraded to ibmon2, and ibmon2 requires different filters to report errors to system administrators. The focus for this summer was to: (1) implement ibmon2 filters on monitoring boxes to report system errors to system administrators using Zenoss and Splunk; (2) modify and improve scripts for monitoring and administrative usage; (3) learn more about networks, including services and maintenance for high performance computing systems; and (4) gain life experience working with professionals in real-world situations. Filters were created to account for clusters running ibmon2 v1.0.0-1; 10 filters are currently implemented for ibmon2 using Python. The filters look for thresholds on port counters; over certain counts, the filters report errors to on-call system administrators and modify the grid to show the local host with the issue.
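
    A minimal sketch of a port-counter threshold filter of the kind described above is shown below; the counter names are common InfiniBand error counters, but the record format, thresholds, and hand-off are assumptions, not ibmon2's actual interface.

        # Flag ports whose error counters exceed per-counter thresholds, then
        # hand the alert off to the monitoring stack (e.g., Zenoss/Splunk).
        THRESHOLDS = {"SymbolErrorCounter": 10, "LinkDownedCounter": 1,
                      "PortRcvErrors": 25}

        def over_threshold(counters):
            """Return (counter, value) pairs exceeding their thresholds."""
            return [(name, counters.get(name, 0))
                    for name, limit in THRESHOLDS.items()
                    if counters.get(name, 0) > limit]

        sample = {"SymbolErrorCounter": 42, "LinkDownedCounter": 0}
        for name, value in over_threshold(sample):
            print(f"ALERT host=node0123 {name}={value}")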

  12. Using High Performance Computing to Examine the Processes of Neurogenesis Underlying Pattern Separation and Completion of Episodic Information.

    SciTech Connect (OSTI)

    Aimone, James Bradley; Bernard, Michael Lewis; Vineyard, Craig Michael; Verzi, Stephen Joseph

    2014-10-01

    Adult neurogenesis in the hippocampus region of the brain is a neurobiological process that is believed to contribute to the brain's advanced abilities in complex pattern recognition and cognition. Here, we describe how realistic scale simulations of the neurogenesis process can offer both a unique perspective on the biological relevance of this process and confer computational insights that are suggestive of novel machine learning techniques. First, supercomputer based scaling studies of the neurogenesis process demonstrate how a small fraction of adult-born neurons have a uniquely larger impact in biologically realistic scaled networks. Second, we describe a novel technical approach by which the information content of ensembles of neurons can be estimated. Finally, we illustrate several examples of broader algorithmic impact of neurogenesis, including both extending existing machine learning approaches and novel approaches for intelligent sensing.

  13. System Software and Tools for High Performance Computing Environments: A report on the findings of the Pasadena Workshop, April 14--16, 1992

    SciTech Connect (OSTI)

    Sterling, T.; Messina, P.; Chen, M.

    1993-04-01

    The Pasadena Workshop on System Software and Tools for High Performance Computing Environments was held at the Jet Propulsion Laboratory from April 14 through April 16, 1992. The workshop was sponsored by a number of Federal agencies committed to the advancement of high performance computing (HPC) both as a means to advance their respective missions and as a national resource to enhance American productivity and competitiveness. Over a hundred experts in related fields from industry, academia, and government were invited to participate in this effort to assess the current status of software technology in support of HPC systems. The overall objectives of the workshop were to understand the requirements and current limitations of HPC software technology and to contribute to a basis for establishing new directions in research and development for software technology in HPC environments. This report includes reports written by the participants of the workshop's seven working groups. Materials presented at the workshop are reproduced in appendices. Additional chapters summarize the findings and analyze their implications for future directions in HPC software technology development.

  14. High Performance Sustainable Building

    Broader source: Directives, Delegations, and Requirements [Office of Management (MA)]

    2011-11-09

    This Guide provides approaches for implementing the High Performance Sustainable Building (HPSB) requirements of DOE Order 413.3B, Program and Project Management for the Acquisition of Capital Assets. Supersedes DOE G 413.3-6.

  15. High Performance Sustainable Building

    Broader source: Directives, Delegations, and Requirements [Office of Management (MA)]

    2011-11-09

    This Guide highlights the DOE O 413.3B drivers for incorporating high performance sustainable building (HPSB) principles into Critical Decisions 1 through 4 and provides guidance for implementing the Order's HPSB requirements.

  16. High Performance Sustainable Buildings

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Goal 3: High Performance Sustainable Buildings. Maintaining the conditions of a building improves the health of not only the surrounding ecosystems, but also the well-being of its occupants. (Part of the environmental sustainability goals at LANL, alongside energy conservation, efficient water use and management, greening transportation, green purchasing and green technology, pollution prevention, and science serving sustainability.)

  17. Application of high performance computing to automotive design and manufacturing: Composite materials modeling task technical manual for constitutive models for glass fiber-polymer matrix composites

    SciTech Connect (OSTI)

    Simunovic, S; Zacharia, T

    1997-11-01

    This report provides a theoretical background for three constitutive models for a continuous strand mat (CSM) glass fiber-thermoset polymer matrix composite. The models were developed during fiscal years 1994 through 1997 as part of the Cooperative Research and Development Agreement, "Application of High-Performance Computing to Automotive Design and Manufacturing." The report gives the full derivation of the constitutive relations in the framework of the continuum program DYNA3D; the models have been used for the simulation and impact analysis of CSM composite tubes. The analysis of simulation and experimental results shows that the model based on the strain tensor split yields the most accurate results of the three implemented models. The parameters used in the models and their derivation from the physical tests are documented.

  18. Economic Model For a Return on Investment Analysis of United States Government High Performance Computing (HPC) Research and Development (R & D) Investment

    SciTech Connect (OSTI)

    Joseph, Earl C.; Conway, Steve; Dekate, Chirag

    2013-09-30

    This study investigated how high-performance computing (HPC) investments can improve economic success and increase scientific innovation. This research focused on the common good and provided uses for DOE, other government agencies, industry, and academia. The study created two unique economic models and an innovation index: (1) a macroeconomic model that depicts the way HPC investments result in economic advancements in the form of ROI in revenue (GDP), profits (and cost savings), and jobs; (2) a macroeconomic model that depicts the way HPC investments result in basic and applied innovations, looking at variations by sector, industry, country, and organization size; and (3) a new innovation index that provides a means of measuring and comparing innovation levels. Key findings of the pilot study include: IDC collected the required data across a broad set of organizations, with enough detail to create these models and the innovation index. The research also developed an expansive list of HPC success stories.

  19. High Performance Sustainable Building

    Broader source: Directives, Delegations, and Requirements [Office of Management (MA)]

    2008-06-20

    The guide supports DOE O 413.3A and provides useful information on the incorporation of high performance sustainable building principles into building-related General Plant Projects and Institutional General Plant Projects at DOE sites. Canceled by DOE G 413.3-6A. Does not cancel other directives.

  20. Task-parallel message passing interface implementation of Autodock4 for docking of very large databases of compounds using high-performance super-computers

    SciTech Connect (OSTI)

    Collignon, Barbara C; Schultz, Roland; Smith, Jeremy C; Baudry, Jerome Y

    2011-01-01

    A message passing interface (MPI)-based implementation (Autodock4.lga.MPI) of the grid-based docking program Autodock4 has been developed to allow simultaneous and independent docking of multiple compounds on up to thousands of central processing units (CPUs) using the Lamarckian genetic algorithm. The MPI version reads a single binary file containing precalculated grids that represent the protein-ligand interactions, i.e., van der Waals, electrostatic, and desolvation potentials, and needs only two input parameter files for the entire docking run. In comparison, the serial version of Autodock4 reads ASCII grid files and requires one parameter file per compound. The modifications performed result in significantly reduced input/output activity compared with the serial version. Autodock4.lga.MPI scales up to 8192 CPUs with a maximal overhead of 16.3%, of which two thirds is due to input/output operations and one third originates from MPI operations. The optimal docking strategy, which minimizes docking CPU time without lowering the quality of the database enrichments, comprises the docking of ligands preordered from the most to the least flexible and the assignment of the number of energy evaluations as a function of the number of rotatable bonds. In 24 h, on 8192 high-performance computing CPUs, the present MPI version would allow docking to a rigid protein of about 300K small flexible compounds or 11 million rigid compounds.
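
    The dispatch strategy described above can be sketched in a few lines of mpi4py; dock_one() is a placeholder rather than Autodock4's interface, and the evaluation-budget scaling rule is a hypothetical stand-in for the paper's assignment.

        # Sketch: order ligands most-to-least flexible, scale the GA's
        # energy-evaluation budget with rotatable-bond count, and spread the
        # work across MPI ranks.
        from mpi4py import MPI

        def dock_one(ligand, n_evals):
            return f"{ligand['name']}: docked with {n_evals} evaluations"

        comm = MPI.COMM_WORLD
        rank, size = comm.Get_rank(), comm.Get_size()

        ligands = [{"name": f"cpd{i}", "rot_bonds": i % 12} for i in range(10_000)]
        ligands.sort(key=lambda lig: lig["rot_bonds"], reverse=True)  # flexible first

        results = []
        for i in range(rank, len(ligands), size):        # static round-robin
            lig = ligands[i]
            n_evals = 250_000 * (1 + lig["rot_bonds"])   # hypothetical scaling
            results.append(dock_one(lig, n_evals))

        all_results = comm.gather(results, root=0)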

  1. A Comparison of Library Tracking Methods in High Performance Computing

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Library Tracking Methods in High Performance Computing Computer System Cluster and Networking Summer Institute 2013 Poster Seminar William Rosenberger (New Mexico Tech), Dennis...

  2. High Performance Window Retrofit

    SciTech Connect (OSTI)

    Shrestha, Som S; Hun, Diana E; Desjarlais, Andre Omer

    2013-12-01

    The US Department of Energy (DOE) Office of Energy Efficiency and Renewable Energy (EERE) and Traco partnered to develop cost-effective, high-performance windows for commercial buildings. The main performance requirement for these windows was an R-value of at least 5 ft2 F h/Btu. This project seeks to quantify the potential energy savings from installing these windows in commercial buildings that are at least 20 years old. To this end, we are conducting evaluations at a two-story test facility that is representative of a commercial building from the 1980s, and are gathering measurements on the performance of its windows before and after double-pane, clear-glazed units are upgraded with R5 windows. Additionally, we will use these data to calibrate EnergyPlus models that will allow us to extrapolate results to other climates. Findings from this project will provide empirical data on the benefits of high-performance windows, which will help promote their adoption in new and existing commercial buildings. This report describes the experimental setup and includes some of the field and simulation results.
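
    A back-of-envelope view of why the R-value matters, using the standard heating-degree-day conduction estimate; every number below is illustrative and none is taken from the report's measurements.

        # Annual conduction loss through glazing: Q = A * HDD * 24 / R (IP units).
        def annual_savings_mmbtu(area_ft2, hdd, r_old, r_new):
            q = lambda r: area_ft2 * hdd * 24.0 / r      # Btu per year
            return (q(r_old) - q(r_new)) / 1e6

        # 2,000 ft2 of glazing in a 5,000 heating-degree-day climate,
        # upgrading roughly R-2 double-pane glass to R-5:
        print(annual_savings_mmbtu(2000.0, 5000.0, r_old=2.0, r_new=5.0), "MMBtu")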

  3. High Performance Buildings Database

    DOE Data Explorer [Office of Scientific and Technical Information (OSTI)]

    The High Performance Buildings Database is a shared resource for the building industry, a unique central repository of in-depth information and data on high-performance, green building projects across the United States and abroad. The database includes information on the energy use, environmental performance, design process, finances, and other aspects of each project. Members of the design and construction teams are listed, as are sources for additional information. In total, up to twelve screens of detailed information are provided for each project profile. Projects range in size from small single-family homes or tenant fit-outs within buildings to large commercial and institutional buildings and even entire campuses. The database is a data repository as well. A series of Web-based data-entry templates allows anyone to enter information about a building project into the database. Once a project has been submitted, each of the partner organizations can review the entry and choose whether or not to publish that particular project on its own Web site.

  4. Misleading Performance Claims in Parallel Computations

    SciTech Connect (OSTI)

    Bailey, David H.

    2009-05-29

    In a previous humorous note entitled 'Twelve Ways to Fool the Masses,' I outlined twelve common ways in which performance figures for technical computer systems can be distorted. In this paper and accompanying conference talk, I give a reprise of these twelve 'methods' and give some actual examples that have appeared in peer-reviewed literature in years past. I then propose guidelines for reporting performance, the adoption of which would raise the level of professionalism and reduce the level of confusion, not only in the world of device simulation but also in the larger arena of technical computing.

  5. High-performance steels

    SciTech Connect (OSTI)

    Barsom, J.M.

    1996-03-01

    Steel is the material of choice in structures such as storage tanks, gas and oil distribution pipelines, high-rise buildings, and bridges because of its strength, ductility, and fracture toughness, as well as its repairability and recyclability. Furthermore, these properties are continually being improved via advances in steelmaking, casting, rolling, and chemistry. Developments in steelmaking have led to alloys having low sulfur, sulfide shape control, and low hydrogen. They provide reduced chemical segregation, higher fracture toughness, better through-thickness and weld heat-affected zone properties, and lower susceptibility to hydrogen cracking. Processing has moved beyond traditional practices to designed combinations of controlled rolling and cooling known as thermomechanical control processes (TMCP). In fact, chemical composition control and TMCP now enable such precise adjustment of final properties that these alloys are now known as high-performance steels (HPS), engineered materials having properties tailored for specific applications.

  6. Connecting Performance Analysis and Visualization to Advance Extreme Scale Computing

    SciTech Connect (OSTI)

    Bremer, Peer-Timo; Mohr, Bernd; Schulz, Martin; Pascucci, Valerio; Gamblin, Todd; Brunst, Holger

    2015-07-29

    The characterization, modeling, analysis, and tuning of software performance has been a central topic in High Performance Computing (HPC) since its early beginnings. The overall goal is to make HPC software run faster on particular hardware, either through better scheduling, on-node resource utilization, or more efficient distributed communication.

  7. Evaluating iterative reconstruction performance in computed tomography

    SciTech Connect (OSTI)

    Chen, Baiyu; Solomon, Justin; Ramirez Giraldo, Juan Carlos; Samei, Ehsan

    2014-12-15

    Purpose: Iterative reconstruction (IR) offers notable advantages in computed tomography (CT). However, its performance characterization is complicated by its potentially nonlinear behavior, impacting performance in terms of specific tasks. This study aimed to evaluate the performance of IR with both task-specific and task-generic strategies. Methods: The performance of IR in CT was mathematically assessed with an observer model that predicted the detection accuracy in terms of the detectability index (d′). d′ was calculated based on the properties of the image noise and resolution, the observer, and the detection task. The characterizations of image noise and resolution were extended to accommodate the nonlinearity of IR. A library of tasks was mathematically modeled at a range of sizes (radius 1–4 mm), contrast levels (10–100 HU), and edge profiles (sharp and soft). Unique d′ values were calculated for each task with respect to five radiation exposure levels (volume CT dose index, CTDIvol: 3.4–64.8 mGy) and four reconstruction algorithms (filtered backprojection reconstruction, FBP; iterative reconstruction in imaging space, IRIS; and sinogram affirmed iterative reconstruction with strengths of 3 and 5, SAFIRE3 and SAFIRE5; all provided by Siemens Healthcare, Forchheim, Germany). The d′ values were translated into the areas under the receiver operating characteristic curve (AUC) to represent human observer performance. For each task and reconstruction algorithm, a threshold dose was derived as the minimum dose required to achieve a threshold AUC of 0.9. A task-specific dose reduction potential of IR was calculated as the difference between the threshold doses for IR and FBP. A task-generic comparison was further made between IR and FBP in terms of the percent of all tasks yielding an AUC higher than the threshold. Results: IR required less dose than FBP to achieve the threshold AUC. In general, SAFIRE5 showed the most significant dose reduction
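
    For background, one common non-prewhitening form of the detectability index used in such model-observer analyses is (stated from the general literature, not quoted from this paper):

        d'^2 = \frac{\left[\iint |W(u,v)|^2 \,\mathrm{TTF}^2(u,v)\, du\, dv\right]^2}
                    {\iint |W(u,v)|^2 \,\mathrm{TTF}^2(u,v)\,\mathrm{NPS}(u,v)\, du\, dv}

    where W is the Fourier-domain task function of the lesion, TTF the task transfer function (resolution), and NPS the noise power spectrum.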

  8. Elucidating geochemical response of shallow heterogeneous aquifers to CO2 leakage using high-performance computing: Implications for monitoring of CO2 sequestration

    SciTech Connect (OSTI)

    Navarre-Sitchler, Alexis K.; Maxwell, Reed M.; Siirila, Erica R.; Hammond, Glenn E.; Lichtner, Peter C.

    2013-03-01

    Predicting and quantifying impacts of potential carbon dioxide (CO2) leakage into shallow aquifers that overlie geologic CO2 storage formations is an important part of developing reliable carbon storage techniques. Leakage of CO2 through fractures, faults, or faulty wellbores can reduce groundwater pH, inducing geochemical reactions that release solutes into the groundwater and pose a risk of degrading groundwater quality. In order to help quantify this risk, predictions of metal concentrations are needed during geologic storage of CO2. Here, we present relatively fine-resolution, regional-scale reactive transport simulations of CO2 leakage into shallow aquifers, run on the PFLOTRAN platform using high-performance computing. Multiple realizations of heterogeneous permeability distributions were generated using standard geostatistical methods. Increased statistical anisotropy of the permeability field resulted in more lateral and vertical spreading of the plume of impacted water, leading to increased Pb2+ (lead) concentrations and lower pH at a well downgradient of the CO2 leak. Pb2+ concentrations were higher in simulations where calcite was the source of Pb2+ compared to galena. The low solubility of galena effectively buffered the Pb2+ concentrations as galena reached saturation under reducing conditions along the flow path. In all cases, Pb2+ concentrations remained below the maximum contaminant level set by the EPA. Results from this study, compared to natural variability observed in aquifers, suggest that bicarbonate (HCO3) concentrations may be a better geochemical indicator of a CO2 leak under the conditions simulated here.

  9. Construction of Blaze at the University of Illinois at Chicago: A Shared, High-Performance, Visual Computer for Next-Generation Cyberinfrastructure-Accelerated Scientific, Engineering, Medical and Public Policy Research

    SciTech Connect (OSTI)

    Brown, Maxine D.; Leigh, Jason

    2014-02-17

    The Blaze high-performance visual computing system serves the high-performance computing research and education needs of the University of Illinois at Chicago (UIC). Blaze consists of a state-of-the-art networked computer cluster and an ultra-high-resolution visualization system called CAVE2(TM) that is currently not available anywhere else in Illinois. This system is connected via a high-speed 100-Gigabit network to the State of Illinois' I-WIRE optical network, as well as to national and international high-speed networks such as Internet2 and the Global Lambda Integrated Facility. This enables Blaze to serve as an on-ramp to national cyberinfrastructure, such as the National Science Foundation’s Blue Waters petascale computer at the National Center for Supercomputing Applications at the University of Illinois at Urbana-Champaign and the Department of Energy’s Argonne Leadership Computing Facility (ALCF) at Argonne National Laboratory. DOE award # DE-SC005067, leveraged with NSF award #CNS-0959053 for “Development of the Next-Generation CAVE Virtual Environment (NG-CAVE),” enabled us to create a first-of-its-kind high-performance visual computing system. The UIC Electronic Visualization Laboratory (EVL) worked with two U.S. companies to advance their commercial products and maintain U.S. leadership in the global information technology economy. New applications are being enabled with the CAVE2/Blaze visual computing system that is advancing scientific research and education in the U.S. and globally, and helping train the next-generation workforce.

  10. Computational Tools to Assess Turbine Biological Performance

    SciTech Connect (OSTI)

    Richmond, Marshall C.; Serkowski, John A.; Rakowski, Cynthia L.; Strickler, Brad; Weisbeck, Molly; Dotson, Curtis L.

    2014-07-24

    Public Utility District No. 2 of Grant County (GCPUD) operates the Priest Rapids Dam (PRD), a hydroelectric facility on the Columbia River in Washington State. The dam contains 10 Kaplan-type turbine units that are now more than 50 years old. Plans are underway to refit these aging turbines with new runners. The Columbia River at PRD is a migratory pathway for several species of juvenile and adult salmonids, so passage of fish through the dam is a major consideration when upgrading the turbines. In this paper, a method for turbine biological performance assessment (BioPA) is demonstrated. Using this method, a suite of biological performance indicators is computed based on simulated data from a CFD model of a proposed turbine design. Each performance indicator is a measure of the probability of exposure to a certain dose of an injury mechanism. Using known relationships between the dose of an injury mechanism and frequency of injury (dose–response) from laboratory or field studies, the likelihood of fish injury for a turbine design can be computed from the performance indicator. By comparing the values of the indicators from proposed designs, the engineer can identify the more-promising alternatives. We present an application of the BioPA method for baseline risk assessment calculations for the existing Kaplan turbines at PRD that will be used as the minimum biological performance that a proposed new design must achieve.

  11. High Performance Energy Management

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    High Performance Energy Management: Reduce energy use and meet your business objectives. By applying continuous improvement practices similar to Lean and Six Sigma, the BPA Energy Smart...

  12. High Performance Window Attachments

    Broader source: Energy.gov (indexed) [DOE]

    Statement: * A wide range of residential window attachments are available, but they ... to model wide range of window coverings * Performed window coverings ...

  13. Using High Performance Libraries and Tools

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Using High Performance Libraries and Tools. Memkind Library on Edison: The memkind library is a user-extensible heap manager built on top of jemalloc that enables control of memory characteristics and partitioning of the heap between kinds of memory (including user-defined kinds). The library can be used to simulate the benefit of the high-bandwidth memory that will be available on the KNL system on the dual-socket Edison compute nodes (the two

  14. Salishan: Conference on High Speed Computing

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    ... of Notre Dame, and William Harrod, DARPA Exascale Ambitions What, me worry? : S > ... Systems (HPCS) (pdf), Robert Graybill, DARPA High-End Computing Revitalization (pdf), ...

  15. Connecting HPC and High Performance

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    HPC and High Performance Networks for Scientists and Researchers SC15 Austin, Texas November 18, 2015 1 Agenda 2 * Welcome and introductions * BoF Goals * Overview of National Research & Education Networks at work Globally * Discuss needs, challenges for leveraging HPC and high-performance networks * HPC/HTC pre-SC15 ESnet/GEANT/Internet2 survey results overview * Next steps discussion * Closing and Thank You BoF: Connecting HPC and High Performance Networks for Scientists and Researchers

  16. Thermoelectrics Partnership: High Performance Thermoelectric...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Thermoelectrics Partnership: High Performance Thermoelectric Waste Heat Recovery System Based on Zintl Phase Materials with Embedded Nanoparticles. 2011 DOE ...

  17. Continuous Monitoring And Cyber Security For High Performance...

    Office of Scientific and Technical Information (OSTI)

    Continuous Monitoring And Cyber Security For High Performance Computing. Malin, Alex B. (Los Alamos National Laboratory); Van Heule, Graham K. (Los Alamos National Laboratory)...

  18. Exploration of multi-block polymer morphologies using high performance...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Exploration of multi-block polymer morphologies using high performance computing Modern material design increasingly relies on controlling small scale morphologies. Multi-block...

  19. Illustrating the future prediction of performance based on computer...

    Office of Scientific and Technical Information (OSTI)

    Illustrating the future prediction of performance based on computer code, physical experiments, and critical performance parameter samples Citation Details In-Document Search ...

  20. High Performance Home Cost Performance Trade-Offs: Production...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    High Performance Home Cost Performance Trade-Offs: Production Builders - Building America Top Innovation ...

  1. Virtual Design Studio (VDS) - Development of an Integrated Computer Simulation Environment for Performance Based Design of Very-Low Energy and High IEQ Buildings

    SciTech Connect (OSTI)

    Chen, Yixing; Zhang, Jianshun; Pelken, Michael; Gu, Lixing; Rice, Danial; Meng, Zhaozhou; Semahegn, Shewangizaw; Feng, Wei; Ling, Francesca; Shi, Jun; Henderson, Hugh

    2013-09-01

    Executive Summary: The objective of this study was to develop a “Virtual Design Studio (VDS)”: a software platform for integrated, coordinated, and optimized design of green building systems with low energy consumption, high indoor environmental quality (IEQ), and a high level of sustainability. The VDS is intended to assist collaborating architects, engineers, and project management team members from the early phases through the detailed building design stages. It can be used to plan design tasks and workflow, and to evaluate the potential impacts of various green building strategies on building performance using state-of-the-art simulation tools as well as industrial/professional standards and guidelines for green building system design. Engaged in the development of VDS was a multi-disciplinary research team that included architects, engineers, and software developers. Based on the review and analysis of how existing professional practices in building systems design operate, particularly those used in the U.S., Germany, and the UK, a generic process for performance-based building design, construction, and operation was proposed. It divides the whole process into five stages: Assess, Define, Design, Apply, and Monitoring (ADDAM). The current VDS is focused on the first three stages. The VDS considers building design as a multi-dimensional process, involving multiple design teams, design factors, and design stages. The intersection among these three dimensions defines a specific design task in terms of “who”, “what” and “when”. It also considers building design as a multi-objective process that aims to enhance the five aspects of performance for green building systems: site sustainability, materials and resource efficiency, water utilization efficiency, energy efficiency and impacts to the atmospheric environment, and IEQ. The current VDS development has been limited to energy efficiency and IEQ performance, with particular focus

  2. High energy neutron Computed Tomography developed

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    High energy neutron Computed Tomography developed. LANSCE now has a high-energy neutron imaging capability that can be deployed on WNR flight paths for unclassified and classified objects. May 9, 2014. Neutron tomography horizontal "slice" of a tungsten and polyethylene test object containing tungsten carbide BBs.

  3. Hydro Review: Computational Tools to Assess Turbine Biological Performance

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Hydro Review: Computational Tools to Assess Turbine Biological Performance | Department of Energy. This review covers the BioPA method used to analyze the biological performance of proposed designs to help ensure the safety of fish passing through the turbines at the Priest Rapids Dam in Grant County, Washington. Computational Tools to Assess Turbine Biological Performance (483.71 KB)

  4. Performance Tools & APIs | Argonne Leadership Computing Facility

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

  5. A High Performance Computing Platform for Performing High-Volume...

    Office of Scientific and Technical Information (OSTI)

    developed external to the core simulation engine without consideration for ease of use. This has created a technical gap for applying HPC-based tools to today's power grid studies. ...

  6. Performance comparison of desktop multiprocessing and workstation cluster computing

    SciTech Connect (OSTI)

    Crandall, P.E.; Sumithasri, E.V.; Clement, M.A.

    1996-12-31

    This paper describes our initial findings regarding the performance trade-offs between cluster computing, where the participating processors are independent machines connected by a high-speed switch, and desktop multiprocessing, where the processors reside within a single workstation and share a common memory. While interprocessor communication time has typically been cited as the limiting factor on performance in the cluster, bus and memory contention have had similar effects in shared-memory systems. The advent of high-speed interconnects and improved bus and memory access speeds have enhanced the performance curves of both platforms. We present comparisons of the execution times of three applications with varying levels of data dependence (numerical integration, matrix multiplication, and Jacobi iteration) across three environments: the PVM distributed memory model, the PVM shared memory model, and the Solaris threads package.

  7. High Performance and Sustainable Buildings Guidance | Department...

    Office of Environmental Management (EM)

    High Performance and Sustainable Buildings Guidance (192.76 KB)

  8. High Performance Sustainable Building Design RM | Department...

    Office of Environmental Management (EM)

    High Performance Sustainable Building Design RM. The High Performance Sustainable Building Design (HPSBD) Review Module (RM) is a ...

  9. Performing an allreduce operation on a plurality of compute nodes of a parallel computer

    DOE Patents [OSTI]

    Faraj, Ahmad

    2012-04-17

    Methods, apparatus, and products are disclosed for performing an allreduce operation on a plurality of compute nodes of a parallel computer. Each compute node includes at least two processing cores. Each processing core has contribution data for the allreduce operation. Performing an allreduce operation on a plurality of compute nodes of a parallel computer includes: establishing one or more logical rings among the compute nodes, each logical ring including at least one processing core from each compute node; performing, for each logical ring, a global allreduce operation using the contribution data for the processing cores included in that logical ring, yielding a global allreduce result for each processing core included in that logical ring; and performing, for each compute node, a local allreduce operation using the global allreduce results for each processing core on that compute node.

  10. INL High Performance Building Strategy

    SciTech Connect (OSTI)

    Jennifer D. Morton

    2010-02-01

    High performance buildings, also known as sustainable buildings and green buildings, are resource-efficient structures that minimize their impact on the environment by using less energy and water, reducing solid waste and pollutants, and limiting the depletion of natural resources, while also providing a thermally and visually comfortable working environment that increases productivity for building occupants. As Idaho National Laboratory (INL) becomes the nation’s premier nuclear energy research laboratory, the physical infrastructure will be established to help accomplish this mission. This infrastructure, particularly the buildings, should incorporate high performance sustainable design features in order to be environmentally responsible and reflect an image of progressiveness and innovation to the public and prospective employees. Additionally, INL is a large consumer of energy that contributes to both carbon emissions and resource inefficiency. In the current climate of rising energy prices and political pressure for carbon reduction, this guide will help new construction project teams to design facilities that are sustainable and reduce energy costs, thereby reducing carbon emissions. With these concerns in mind, the recommendations described in the INL High Performance Building Strategy (previously called the INL Green Building Strategy) are intended to form the INL foundation for high performance building standards. This revised strategy incorporates the latest federal and DOE orders (Executive Order [EO] 13514, “Federal Leadership in Environmental, Energy, and Economic Performance” [2009], EO 13423, “Strengthening Federal Environmental, Energy, and Transportation Management” [2007], and DOE Order 430.2B, “Departmental Energy, Renewable Energy, and Transportation Management” [2008]), the latest guidelines, trends, and observations in high performance building construction, and the latest changes to the Leadership in Energy and Environmental Design

  11. Illustrating the future prediction of performance based on computer...

    Office of Scientific and Technical Information (OSTI)

    Illustrating the future prediction of performance based on computer code, physical ... Citation Details In-Document Search Title: Illustrating the future prediction of ...

  12. High Performance Photovoltaic Project Overview

    SciTech Connect (OSTI)

    Symko-Davies, M.; McConnell, R.

    2005-01-01

    The High-Performance Photovoltaic (HiPerf PV) Project was initiated by the U.S. Department of Energy to substantially increase the viability of photovoltaics (PV) for cost-competitive applications so that PV can contribute significantly to our energy supply and environment in the 21st century. To accomplish this, the National Center for Photovoltaics (NCPV) directs in-house and subcontracted research in high-performance polycrystalline thin-film and multijunction concentrator devices. In this paper, we describe the recent research accomplishments in the in-house directed efforts and the research efforts under way in the subcontracted area.

  13. Teuchos C++ memory management classes, idioms, and related topics, the complete reference : a comprehensive strategy for safe and efficient memory management in C++ for high performance computing.

    SciTech Connect (OSTI)

    Bartlett, Roscoe Ainsworth

    2010-05-01

    The ubiquitous use of raw pointers in higher-level code is the primary cause of all memory usage problems and memory leaks in C++ programs. This paper describes what might be considered a radical approach to the problem, which is to encapsulate the use of all raw pointers and all raw calls to new and delete in higher-level C++ code. Instead, a set of cooperating template classes developed in the Trilinos package Teuchos are used to encapsulate every use of raw C++ pointers in every use case where it appears in high-level code. Included in the set of memory management classes is the typical reference-counted smart pointer class similar to boost::shared_ptr (and therefore C++0x std::shared_ptr). However, what is missing in boost and the new standard library are non-reference-counted classes for the remaining use cases where raw C++ pointers would need to be used. These classes have a debug build mode where nearly all programmer errors are caught and gracefully reported at runtime. The default optimized build mode strips all runtime checks and allows the code to perform as efficiently as raw C++ pointers with reasonable usage. Also included is a novel approach for dealing with the circular references problem that imparts little extra overhead and is almost completely invisible to most of the code (unlike the boost and therefore C++0x approach). Rather than being a radical approach, encapsulating all raw C++ pointers is simply the logical progression of a trend in the C++ development and standards community that started with std::auto_ptr and is continued (but not finished) with std::shared_ptr in C++0x. Using the Teuchos reference-counted memory management classes allows one to remove unnecessary constraints in the use of objects by removing arbitrary lifetime ordering constraints, which are a type of unnecessary coupling [23]. The code one writes with these classes will be more likely to be correct on first writing, will be less likely to contain silent (but deadly) memory

  14. Computing in high-energy physics

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Mount, Richard P.

    2016-05-31

    I present a very personalized journey through more than three decades of computing for experimental high-energy physics, pointing out the enduring lessons that I learned. This is followed by a vision of how the computing environment will evolve in the coming ten years and the technical challenges that this will bring. I then address the scale and cost of high-energy physics software and examine the many current and future challenges, particularly those of management, funding and software-lifecycle management. Lastly, I describe recent developments aimed at improving the overall coherence of high-energy physics software.

  15. Scalable Computer Performance and Analysis (Hierarchical INTegration)

    Energy Science and Technology Software Center (OSTI)

    1999-09-02

    HINT is a program to measure the performance of a wide variety of scalable computer systems. It is capable of demonstrating the benefits of using more memory or processing power, and of improving communications within the system. HINT can be used for measurement of an existing system, while the associated program ANALYTIC HINT can be used to explain the measurements or as a design tool for proposed systems.

  16. High Performance Outdoor Lighting Accelerator

    Broader source: Energy.gov [DOE]

    Hosted by the U.S. Department of Energy (DOE)’s Weatherization and Intergovernmental Programs Office (WIPO), this webinar covered the expansion of the Better Buildings platform to include the newest initiative for the public sector: the High Performance Outdoor Lighting Accelerator (HPOLA).

  17. High Performance Bulk Thermoelectric Materials

    SciTech Connect (OSTI)

    Ren, Zhifeng

    2013-03-31

    Over more than 13 years, we have carried out research on the electron pairing symmetry of superconductors; the growth and field-emission properties of carbon nanotubes and semiconducting nanowires; high performance thermoelectric materials; and other interesting materials. As a result of this research, we have published 104 papers and have educated six undergraduate students, twenty graduate students, nine postdocs, nine visitors, and one technician.

  18. High-Performance Nanostructured Coating

    Broader source: Energy.gov [DOE]

    The High-Performance Nanostructured Coating fact sheet details a SunShot project led by a University of California, San Diego research team working to develop a new high-temperature spectrally selective coating (SSC) for receiver surfaces. These receiver surfaces, used in concentrating solar power systems, rely on high-temperature SSCs to effectively absorb solar energy without emitting much blackbody radiation. The optical properties of the SSC directly determine the efficiency and maximum attainable temperature of solar receivers, which in turn influence the power-conversion efficiency and overall system cost.

  19. Liquid Cooling v. Air Cooling Evaluation in the Maui High-Performance...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Study: Innovative Energy Efficiency Approaches in NOAA's Environmental Security Computing Center in Fairmont, West Virginia High-Performance Computing Data Center Metering Protocol

  20. High Performance Sustainable Building Design RM

    Office of Environmental Management (EM)

    High Performance Sustainable Building Design Review Module March 2010 CD-0 O High 0 This ... Director HPSBD High Performance Sustainable Building Design IESNA Illuminating ...

  1. Software Synthesis for High Productivity Exascale Computing

    SciTech Connect (OSTI)

    Bodik, Rastislav

    2010-09-01

    Over the three years of our project, we accomplished three key milestones. First, we demonstrated how ideas from generative programming and software synthesis can help support the development of bulk-synchronous distributed memory kernels. These ideas are realized in a new language called MSL, a C-like language that combines synthesis features with high-level notations for array manipulation and bulk-synchronous parallelism to simplify the semantic analysis required for synthesis. We also demonstrated that these high-level notations map easily to low-level C code, and showed that the performance of this generated code matches that of handwritten Fortran. Second, we introduced the idea of solver-aided domain-specific languages (SDSLs), an emerging class of computer-aided programming systems. SDSLs ease the construction of programs by automating tasks such as verification, debugging, synthesis, and non-deterministic execution; they are implemented by translating the DSL program into logical constraints. We then developed a symbolic virtual machine called Rosette, which simplifies the construction of such SDSLs and their compilers, and used Rosette to build SynthCL, a subset of OpenCL that supports synthesis. Third, we developed novel numeric algorithms that move as little data as possible, either between levels of a memory hierarchy or between parallel processors over a network. We made progress on three aspects of this problem: we determined lower bounds on communication; we compared these lower bounds to widely used versions of these algorithms and noted that the widely used algorithms usually communicate asymptotically more than is necessary; and we identified or invented new algorithms for most linear algebra problems that do attain these lower bounds, demonstrating large speed-ups in theory and practice.

  2. High performance image processing of SPRINT

    SciTech Connect (OSTI)

    DeGroot, T.

    1994-11-15

    This talk describes computed tomography (CT) reconstruction using filtered back-projection on SPRINT parallel computers. CT is a computationally intensive task, typically requiring several minutes to reconstruct a 512x512 image. SPRINT and other parallel computers can be applied to CT reconstruction to reduce computation time from minutes to seconds. SPRINT is a family of massively parallel computers developed at LLNL. SPRINT-2.5 is a 128-node multiprocessor whose performance can exceed twice that of a Cray Y-MP; SPRINT-3 will be 10 times faster. The talk also describes the parallel algorithms for filtered back-projection and their execution on SPRINT parallel computers.

  3. Computer Modeling of Chemical and Geochemical Processes in High...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Computer modeling of chemical and geochemical processes in high ionic strength solutions ... in brine Computer modeling of chemical and geochemical processes in high ionic ...

  4. Computationally Efficient Modeling of High-Efficiency Clean Combustion...

    Broader source: Energy.gov (indexed) [DOE]

  5. DOE High Performance Computing Operational Review (HPCOR)

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    ... David Smith, Jack Deslippe, Shreyas Cholia, David Skinner, John Harney, Stuart Campbell, Rudy Garcia, Craig Ulmer, Ilana Stern. Co-Chairs: David Skinner, Stuart Campbell. ...

  6. High-Performance Computing at Los

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    a supercomputer epicenter where "big data set" really means something, a data ... Statistical analysis generally occurs over the entire data set. But more detailed analysis ...

  7. Department of Defense High Performance Computing Modernization...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    ... CBU-115 separation from F-16 GBU-38 separation from B1B J. Keen, R. Moran, J. Dudley, J. Torres, Lt. J. Babcock, C. Cureton, and T. Eymann, AFSEO, Eglin AFB, FL; B. Jolly, J. ...

  8. High Performance Computing Facility Operational Assessment, CY...

    Office of Scientific and Technical Information (OSTI)

    costs. General Electric used Jaguar to calculate the unsteady flow through turbo machinery to learn what efficiencies the traditional steady flow assumption is hiding from...

  9. High Performance Computing for Manufacturing Parternship | GE...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    GE, US DOE Partner on HPC4Mfg projects to deliver new capabilities in 3D Printing and higher jet engine efficiency.

  10. Building America Webinar: High Performance Enclosure Strategies...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Next Gen Advanced Framing for High Performance Homes Integrated System Solutions Building ... - August 13, 2014

  11. Funding Opportunity: Building America High Performance Housing...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Funding Opportunity: Building America High Performance Housing Innovation. November 19, 2015 - 11:51am. The ...

  12. Automatic Performance Collection (AutoPerf) | Argonne Leadership Computing

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

  13. Routing performance analysis and optimization within a massively parallel computer

    DOE Patents [OSTI]

    Archer, Charles Jens; Peters, Amanda; Pinnow, Kurt Walter; Swartz, Brent Allen

    2013-04-16

    An apparatus, program product and method optimize the operation of a massively parallel computer system by, in part, receiving actual performance data concerning an application executed by the plurality of interconnected nodes, and analyzing the actual performance data to identify an actual performance pattern. A desired performance pattern may be determined for the application, and an algorithm may be selected from among a plurality of algorithms stored within a memory, the algorithm being configured to achieve the desired performance pattern based on the actual performance data.

  14. Ultra-high resolution computed tomography imaging

    DOE Patents [OSTI]

    Paulus, Michael J.; Sari-Sarraf, Hamed; Tobin, Jr., Kenneth William; Gleason, Shaun S.; Thomas, Jr., Clarence E.

    2002-01-01

    A method for ultra-high resolution computed tomography imaging, comprising the steps of: focusing a high energy particle beam, for example x-rays or gamma-rays, onto a target object; acquiring a 2-dimensional projection data set representative of the target object; generating a corrected projection data set by applying a deconvolution algorithm, with an experimentally determined transfer function, to the 2-dimensional data set; storing the corrected projection data set; incrementally rotating the target object through an angle of approximately 180 degrees and, after each incremental rotation, repeating the focusing, acquiring, generating, and storing steps; and, after the rotating step, applying a cone-beam algorithm, for example a modified tomographic reconstruction algorithm, to the corrected projection data sets to generate a 3-dimensional image. The size of the spot focus of the beam is reduced to not greater than approximately 1 micron, and even to not greater than approximately 0.5 microns.

  15. High Performance Factory Built Housing

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Performance Factory Built Housing 2015 Building Technologies Office Peer Review Jordan Dentz, jdentz@levypartnership.com ARIES The Levy Partnership, Inc. Project Summary ...

  16. High-Performance Phylogeny Reconstruction

    SciTech Connect (OSTI)

    Tiffani L. Williams

    2004-11-10

    Under the Alfred P. Sloan Fellowship in Computational Biology, I have been afforded the opportunity to study phylogenetics, one of the most important and exciting disciplines in computational biology. A phylogeny depicts an evolutionary relationship among a set of organisms (or taxa). Typically, a phylogeny is represented by a binary tree, where modern organisms are placed at the leaves and ancestral organisms occupy internal nodes, with the edges of the tree denoting evolutionary relationships. The task of phylogenetics is to infer this tree from observations upon present-day organisms. Reconstructing phylogenies is a major component of modern research programs in many areas of biology and medicine, but it is enormously expensive. The most commonly used techniques attempt to solve NP-hard problems such as maximum likelihood and maximum parsimony, typically by bounded searches through an exponentially-sized tree-space. For example, there are over 13 billion possible trees for 13 organisms. Phylogenetic heuristics that quickly and accurately analyze large amounts of data will revolutionize the biological field. This final report highlights my activities in phylogenetics during the two-year postdoctoral period at the University of New Mexico under Prof. Bernard Moret. Specifically, it summarizes my scientific, community, and professional activities as an Alfred P. Sloan Postdoctoral Fellow in Computational Biology.

  17. A Comprehensive Look at High Performance Parallel I/O

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    A Comprehensive Look at High Performance Parallel I/O. Book Signing @ SC14! Nov. 18, 5 p.m. in Booth 1939. November 10, 2014. Contact: Linda Vu, +1 510 495 2402, lvu@lbl.gov. In the 1990s, high performance computing (HPC) made a dramatic transition to massively parallel processors. As this model solidified over the next 20 years, supercomputing performance increased from gigaflops (billions of calculations per second) to

  18. Building America Webinar: High Performance Space Conditioning...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    versus mini-splits being used in high performance (high R-value enclosure / low air leakage) houses, often configured as a simplified distribution system (one heat source per floor). ...

  19. High Performance Binderless Electrodes for Rechargeable Lithium...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    High Performance Binderless Electrodes for Rechargeable Lithium Batteries National ... Electrode for fast-charging Lithium Ion Batteries, Accelerating Innovation Webinar ...

  20. Thermoelectrics Partnership: High Performance Thermoelectric Waste Heat

    Broader source: Energy.gov (indexed) [DOE]

    Recovery System Based on Zintl Phase Materials with Embedded Nanoparticles | Department of Energy. 70_shakouri_2011_p.pdf (856.16 KB)

  1. FPGAs in High Performance Computing: Results from Two LDRD Projects.

    SciTech Connect (OSTI)

    Underwood, Keith D.; Ulmer, Craig D.; Thompson, David; Hemmert, Karl Scott

    2006-11-01

    Field programmable gate arrays (FPGAs) have been used as alternative computational devices for over a decade; however, they have not been used for traditional scientific computing due to their perceived lack of floating-point performance. In recent years, there has been a surge of interest in alternatives to traditional microprocessors for high performance computing. Sandia National Labs began two projects to determine whether FPGAs would be a suitable alternative to microprocessors for high performance scientific computing and, if so, how they should be integrated into the system. We present results that indicate that FPGAs could have a significant impact on future systems. FPGAs have the potential to deliver order-of-magnitude performance wins on several key algorithms; however, there are serious questions as to whether the system integration challenge can be met. Furthermore, there remain challenges in FPGA programming and system-level reliability when using FPGA devices. Acknowledgment: Arun Rodrigues provided valuable support and assistance in the use of the Structural Simulation Toolkit within an FPGA context. Curtis Janssen and Steve Plimpton provided valuable insights into the workings of two Sandia applications (MPQC and LAMMPS, respectively).

  2. Large Scale Computing and Storage Requirements for High Energy Physics

    SciTech Connect (OSTI)

    Gerber, Richard A.; Wasserman, Harvey

    2010-11-24

    The National Energy Research Scientific Computing Center (NERSC) is the leading scientific computing facility for the Department of Energy's Office of Science, providing high-performance computing (HPC) resources to more than 3,000 researchers working on about 400 projects. NERSC provides large-scale computing resources and, crucially, the support and expertise needed for scientists to make effective use of them. In November 2009, NERSC, DOE's Office of Advanced Scientific Computing Research (ASCR), and DOE's Office of High Energy Physics (HEP) held a workshop to characterize the HPC resources needed at NERSC to support HEP research through the next three to five years. The effort is part of NERSC's legacy of anticipating users' needs and deploying resources to meet those demands. The workshop revealed several key points, in addition to achieving its goal of collecting and characterizing computing requirements. The chief findings: (1) Science teams need access to a significant increase in computational resources to meet their research goals; (2) Research teams need to be able to read, write, transfer, store online, archive, analyze, and share huge volumes of data; (3) Science teams need guidance and support to implement their codes on future architectures; and (4) Projects need predictable, rapid turnaround of their computational jobs to meet mission-critical time constraints. This report expands upon these key points and includes others. It also presents a number of case studies as representative of the research conducted within HEP. Workshop participants were asked to codify their requirements in this case study format, summarizing their science goals, methods of solution, current and three-to-five year computing requirements, and software and support needs. Participants were also asked to describe their strategy for computing in the highly parallel, multi-core environment that is expected to dominate HPC architectures over the next few years. The report includes

  3. NUG 2013 User Day: Trends, Discovery, and Innovation in High Performance

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Computing. NUG 2013 User Day: Trends, Discovery, and Innovation in High Performance Computing. Wednesday, Feb. 13, Berkeley Lab Building 50 Auditorium. Live streaming: http://hosting.epresence.tv/LBL/1.aspx. 8:45 - Welcome: Kathy Yelick, Berkeley Lab Associate Director for Computing Sciences. Trends: 9:00 - The Future of High Performance Scientific Computing, Kathy Yelick, Berkeley Lab Associate Director for Computing

  4. Computing at JLab

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    JLab --- Accelerator Controls, CAD, CDEV, CODA, Computer Center, High Performance Computing, Scientific Computing

  5. Heterogeneous high throughput scientific computing with APM X-Gene and Intel Xeon Phi

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Abdurachmanov, David; Bockelman, Brian; Elmer, Peter; Eulisse, Giulio; Knight, Robert; Muzaffar, Shahzad

    2015-01-01

    Electrical power requirements will be a constraint on the future growth of Distributed High Throughput Computing (DHTC) as used by High Energy Physics. Performance-per-watt is a critical metric for the evaluation of computer architectures for cost-efficient computing. Additionally, future performance growth will come from heterogeneous, many-core, and high computing density platforms with specialized processors. In this paper, we examine the Intel Xeon Phi Many Integrated Cores (MIC) co-processor and Applied Micro X-Gene ARMv8 64-bit low-power server system-on-a-chip (SoC) solutions for scientific computing applications. As a result, we report our experience on software porting, performance, and energy efficiency, and evaluate the potential for use of such technologies in the context of distributed computing systems such as the Worldwide LHC Computing Grid (WLCG).

  6. PPPL and Princeton join high-performance software project | Princeton

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Plasma Physics Lab and Princeton join high-performance software project. By John Greenwald, July 22, 2016. Co-principal investigators William Tang and Bei Wang (Photo by Elle Starkman/Office of Communications). Princeton University and the U.S. Department of Energy's Princeton Plasma Physics Laboratory (PPPL) are participating in the accelerated development of a modern high-performance computing

  8. Performing an allreduce operation on a plurality of compute nodes of a parallel computer

    DOE Patents [OSTI]

    Faraj, Ahmad

    2013-02-12

    Methods, apparatus, and products are disclosed for performing an allreduce operation on a plurality of compute nodes of a parallel computer, each node including at least two processing cores, that include: performing, for each node, a local reduction operation using allreduce contribution data for the cores of that node, yielding, for each node, a local reduction result for one or more representative cores for that node; establishing one or more logical rings among the nodes, each logical ring including only one of the representative cores from each node; performing, for each logical ring, a global allreduce operation using the local reduction result for the representative cores included in that logical ring, yielding a global allreduce result for each representative core included in that logical ring; and performing, for each node, a local broadcast operation using the global allreduce results for each representative core on that node.

  9. High Temperature Fuel Cell Performance of Sulfonated Poly(phenylene) Proton Conducting Polymers

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    High Temperature Fuel Cell Performance of Sulfonated Poly(phenylene) Proton Conducting Polymers | Department of Energy. Presentation

  10. Large Scale Production Computing and Storage Requirements for High Energy

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Physics: Target 2017. The NERSC Program Requirements Review "Large Scale Computing and Storage Requirements for High Energy Physics" is organized by the Department of Energy's Office of High Energy Physics (HEP), Office of Advanced Scientific Computing Research (ASCR), and the National Energy Research Scientific Computing Center (NERSC). The review's goal is to characterize