Sample records for high performance computer

  1. High Performance Computing in Bioinformatics

    E-Print Network [OSTI]

    Stamatakis, Alexandros

    High Performance Computing in Bioinformatics. Thomas Ludwig (t.ludwig@computer.org) and Alexandros Stamatakis, GCB'04. Part I: High Performance Computing (Ludwig); Part II: HPC in Bioinformatics (Stamatakis).

  2. High Performance Computing

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Gary Grider, HPC Division Leader: The High Performance Computing (HPC) Division supports the Laboratory mission by managing world-class...

  3. Sandia Energy - High Performance Computing

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Sandia Energy | Energy Research | Advanced Scientific Computing Research (ASCR) | High Performance Computing

  4. Computational biology and high performance computing

    E-Print Network [OSTI]

    Shoichet, Brian

    2011-01-01

    Computational Biology and High Performance Computing. Presenters: Manfred Zorn, Teresa ... High performance computing has become one of the ...

  5. High Performance Computing School COMSC

    E-Print Network [OSTI]

    Martin, Ralph R.

    This module aims to provide students with fundamental knowledge and understanding of techniques associated with High Performance Computing and its practical application, develops skills in analysing and evaluating High Performance Computing, and will be structured around ...

  6. High Performance Quantum Computing

    E-Print Network [OSTI]

    Simon J. Devitt; William J. Munro; Kae Nemoto

    2008-10-14

    The architecture scalability afforded by recent proposals of a large scale photonic based quantum computer, utilizing the theoretical developments of topological cluster states and the photonic chip, allows us to move on to a discussion of massively scaled Quantum Information Processing (QIP). In this letter we introduce the model for a secure and unsecured topological cluster mainframe. We consider the quantum analogue of High Performance Computing, where a dedicated server farm is utilized by many users to run algorithms and share quantum data. The scaling structure of photonics based topological cluster computing leads to an attractive future for server based QIP, where dedicated mainframes can be constructed and/or expanded to serve an increasingly hungry user base with the ideal resource for individual quantum information processing.

  7. High Performance Computing contributions to DoD Mission Success

    E-Print Network [OSTI]

    High Performance Computing contributions to DoD Mission Success, 2002. Approved for public release. Section 1: Introduction; Overview of the High Performance Computing Modernization Program.

  8. High Performance Computing

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

  9. High Performance Computing

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

  10. Army High Performance Computing Research Center

    E-Print Network [OSTI]

    Prinz, Friedrich B.

    Army High Performance Computing Research Center: applying advanced computational science to research challenges. http://me.stanford.edu/research/centers/ahpcrc

  11. Purchase of High Performance Computing (HPC) Central Compute Resources

    E-Print Network [OSTI]

    Shull, Kenneth R.

    Summarizes the High Performance Computing (HPC) compute resources that faculty engaged in research may purchase for running code on the Quest high performance computing system, and the installation cycles for new ...

  12. High performance computing Igal G. Rasin

    E-Print Network [OSTI]

    Adler, Joan

    Igal G. Rasin, Department of Chemical Engineering, Technion, Israel. A tutorial covering different parallelization techniques and tools used in high performance computing (HPC).

  13. HIGH PERFORMANCE COMPUTING TODAY Jack Dongarra

    E-Print Network [OSTI]

    Dongarra, Jack

    High Performance Computing Today. Jack Dongarra, Computer Science Department, University of Tennessee. ... a detailed and well-founded analysis of the state of high performance computing. This paper summarizes some of ... systems available for performing grid-based computing. Keywords: high performance computing, parallel ...

  14. Computational Biology and High Performance Computing 2000

    SciTech Connect (OSTI)

    Simon, Horst D.; Zorn, Manfred D.; Spengler, Sylvia J.; Shoichet, Brian K.; Stewart, Craig; Dubchak, Inna L.; Arkin, Adam P.

    2000-10-19

    The pace of extraordinary advances in molecular biology has accelerated in the past decade due in large part to discoveries coming from genome projects on human and model organisms. The advances in the genome project so far, happening well ahead of schedule and under budget, have exceeded any dreams by its protagonists, let alone formal expectations. Biologists expect the next phase of the genome project to be even more startling in terms of dramatic breakthroughs in our understanding of human biology, the biology of health and of disease. Only today can biologists begin to envision the necessary experimental, computational and theoretical steps necessary to exploit genome sequence information for its medical impact, its contribution to biotechnology and economic competitiveness, and its ultimate contribution to environmental quality. High performance computing has become one of the critical enabling technologies, which will help to translate this vision of future advances in biology into reality. Biologists are increasingly becoming aware of the potential of high performance computing. The goal of this tutorial is to introduce the exciting new developments in computational biology and genomics to the high performance computing community.

  15. Tutorial: High Performance Computing Igal G. Rasin

    E-Print Network [OSTI]

    Adler, Joan

    Tutorial: High Performance Computing. Igal G. Rasin, Department of Chemical Engineering, Technion, Israel. 27 Nisan 5769 (21.04.2009). Motivation: what is High Performance Computing?

  16. High-Performance Computing at Los Alamos

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    "... and back to the loosened semantics of key value stores," says Gary Grider, High Performance Computing division leader at Los Alamos. Computer simulations overall are scaling to...

  17. College of Engineering High Performance Computing Cluster

    E-Print Network [OSTI]

    Demirel, Melik C.

    College of Engineering High Performance Computing Cluster Policy and Procedures (COE-HPC-01). Courses must be registered as requiring high performance computing through the course identification/registration process; users of the College High Performance Computing system will need to register for system access by visiting http ...

  18. High Performance Computing at the Oak Ridge Leadership Computing Facility

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Outline: Our Mission; Computer Systems: Present, Past, Future; Challenges Along the...

  19. High-Performance Computing/Numerical (The International Journal of High Performance Computing Applications)

    E-Print Network [OSTI]

    Higham, Nicholas J.

    ... and barriers in the development of high-performance computing (HPC) algorithms and software ... Keywords: high-performance computing, numerical analysis, roadmap, applications and algorithms, software.

  20. iSSH v. Auditd: Intrusion Detection in High Performance Computing (Computer System, Cluster, and Networking Summer Institute)

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    iSSH v. Auditd: Intrusion Detection in High Performance Computing. Computer System, Cluster, and Networking Summer Institute. David Karns (New Mexico State University), Katy Protin, ...

  1. Math & Computational Sciences Division: High Performance Computing and Visualization

    E-Print Network [OSTI]

    Perkins, Richard A.

    Math & Computational Sciences Division, High Performance Computing and Visualization. Research and Development in Visual Analysis: Judith Devaney, Terrence Griffin, John ...

  2. High performance computing: Clusters, constellations, MPPs, and future directions

    E-Print Network [OSTI]

    Dongarra, Jack; Sterling, Thomas; Simon, Horst; Strohmaier, Erich

    2003-01-01

    ... and Jim Gray, "High Performance Computing: Crays, Clusters, ..." ... "The Marketplace of High-Performance Computing", Parallel ... High Performance Computing: Clusters, Constellations, MPPs, ...

  3. Middleware in Modern High Performance Computing System Architectures

    E-Print Network [OSTI]

    Engelmann, Christian

    Christian Engelmann, Hong Ong. A trend in modern high performance computing (HPC) system architectures employs "lean" compute nodes ... continue to reside on compute nodes. Keywords: High Performance Computing, Middleware, Lean Compute Node.

  4. High Performance Computing in Accelerator Science: Past Successes. Future Challenges

    E-Print Network [OSTI]

    Ryne, R.

    2013-01-01

    High Performance Computing in Accelerator Science: Past Successes, Future Challenges ... AC02-05CH11231 ...

  5. GeoComputational Intelligence and High-Performance Geospatial Computing

    E-Print Network [OSTI]

    Guan, Qingfeng

    2011-11-16

    GeoComputational Intelligence and High-Performance Geospatial Computing. Qingfeng (Gene) Guan, Ph.D., Center for Advanced Land Management Information Technologies, School of Natural Resources, University of Nebraska - Lincoln. GIS Day @ University of Kansas, Nov. 16th, 2011. Contents: 1. Computational Science and GeoComputation; 2. GeoComputational Intelligence: ANN-based Urban-CA model; 3. High-performance Geospatial Computing: Parallel Geostatistical Areal Interpolation, pRPL and pSLEUTH; 4. Conclusion.

  6. High Performance Computing Data Center (Fact Sheet)

    SciTech Connect (OSTI)

    Not Available

    2014-08-01

    This two-page fact sheet describes the new High Performance Computing Data Center in the ESIF and talks about some of the capabilities and unique features of the center.

  7. High Performance Computing Data Center (Fact Sheet)

    SciTech Connect (OSTI)

    Not Available

    2012-08-01

    This two-page fact sheet describes the new High Performance Computing Data Center being built in the ESIF and talks about some of the capabilities and unique features of the center.

  8. Design, Implementation and Performance of Exponential Integrators for High Performance Computing Applications

    E-Print Network [OSTI]

    Loffeld, John

    2013-01-01

    ... interest in high performance computing would help popularize ... Integrators for High Performance Computing Applications ... applied mathematics, high performance computing and computer ...

  9. Debugging a high performance computing program

    SciTech Connect (OSTI)

    Gooding, Thomas M.

    2013-08-20

    Methods, apparatus, and computer program products are disclosed for debugging a high performance computing program by gathering lists of addresses of calling instructions for a plurality of threads of execution of the program, assigning the threads to groups in dependence upon the addresses, and displaying the groups to identify defective threads.
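    The grouping step described above can be illustrated with a short, hypothetical C sketch (the data, names, and addresses below are invented for illustration and are not the patented implementation): threads are bucketed by the address of their calling instruction and each bucket is printed, so that threads landing at an unusual call site stand out as candidates for closer inspection.

      #include <stdio.h>

      /* Hypothetical sketch: group threads by the address of the calling
       * instruction observed for each thread, then print the groups.
       * Threads that fall into a small group are candidates for closer
       * inspection as potentially defective. */

      struct thread_sample {
          int           thread_id;
          unsigned long call_addr;   /* address of the calling instruction */
      };

      int main(void)
      {
          /* Example data standing in for addresses gathered from a debugger. */
          struct thread_sample samples[] = {
              { 0, 0x400a10UL }, { 1, 0x400a10UL }, { 2, 0x400a10UL },
              { 3, 0x400b24UL }, { 4, 0x400a10UL }, { 5, 0x400b24UL },
              { 6, 0x400c80UL }  /* lone thread at a different call site */
          };
          int n = (int)(sizeof samples / sizeof samples[0]);
          int grouped[16] = { 0 };

          for (int i = 0; i < n; i++) {
              if (grouped[i]) continue;
              printf("group at address 0x%lx: threads", samples[i].call_addr);
              for (int j = i; j < n; j++) {
                  if (!grouped[j] && samples[j].call_addr == samples[i].call_addr) {
                      grouped[j] = 1;
                      printf(" %d", samples[j].thread_id);
                  }
              }
              printf("\n");
          }
          return 0;
      }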

  10. Debugging a high performance computing program

    SciTech Connect (OSTI)

    Gooding, Thomas M.

    2014-08-19

    Methods, apparatus, and computer program products are disclosed for debugging a high performance computing program by gathering lists of addresses of calling instructions for a plurality of threads of execution of the program, assigning the threads to groups in dependence upon the addresses, and displaying the groups to identify defective threads.

  11. Climate Modeling using High-Performance Computing

    SciTech Connect (OSTI)

    Mirin, A A

    2007-02-05

    The Center for Applied Scientific Computing (CASC) and the LLNL Climate and Carbon Science Group of Energy and Environment (E and E) are working together to improve predictions of future climate by applying the best available computational methods and computer resources to this problem. Over the last decade, researchers at the Lawrence Livermore National Laboratory (LLNL) have developed a number of climate models that provide state-of-the-art simulations on a wide variety of massively parallel computers. We are now developing and applying a second generation of high-performance climate models. Through the addition of relevant physical processes, we are developing an earth systems modeling capability as well.

  12. High Performance Computing Managing world-class supercomputing centers

    E-Print Network [OSTI]

    The High Performance Computing (HPC) Division supports the Laboratory mission by managing world-class high performance computing, storage, and emerging data-intensive information science production systems.

  13. Managing Stakeholder Requirements in High Performance Computing Procurement

    E-Print Network [OSTI]

    Sommerville, Ian

    John Rooksby, Mark ..., Department of Management, Lancaster University. High Performance Computing (HPC) facilities are provided ... strategy can rigorously meet the demands of the potential users.

  14. NERSC 2011: High Performance Computing Facility Operational Assessment for the National Energy Research Scientific Computing Center

    E-Print Network [OSTI]

    Antypas, Katie

    2013-01-01

    NERSC 2011 High Performance Computing Facility Operational ... by providing high-performance computing, information, data, ... deep knowledge of high performance computing to overcome ...

  15. Energy Efficiency Opportunities in Federal High Performance Computing Data Centers

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Energy Efficiency Opportunities in Federal High Performance Computing Data Centers. Case study describes...

  16. High-Performance Computing at Los Alamos announces milestone for key-value middleware

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    High-Performance Computing at Los Alamos announces milestone for key-value middleware: billion inserts-per-second data milestone...

  17. John Shalf Gives Talk at San Francisco High Performance Computing Meetup

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    John Shalf Gives Talk at San Francisco High Performance Computing Meetup. September 17, 2014.

  18. Sandia National Laboratories: high-performance computing

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Sandian Re-Elected as President of the Association for Computing Machinery Special Interest Group on Graphics and Interactive Techniques. On July 8, 2014, in...

  19. High-performance computing for airborne applications

    SciTech Connect (OSTI)

    Quinn, Heather M [Los Alamos National Laboratory; Manuzzato, Andrea [Los Alamos National Laboratory; Fairbanks, Tom [Los Alamos National Laboratory; Dallmann, Nicholas [Los Alamos National Laboratory; Desgeorges, Rose [Los Alamos National Laboratory

    2010-06-28

    Recently, there have been attempts to move common satellite tasks to unmanned aerial vehicles (UAVs). UAVs are significantly cheaper to buy than satellites and easier to deploy on an as-needed basis. The more benign radiation environment also allows for an aggressive adoption of state-of-the-art commercial computational devices, which increases the amount of data that can be collected. There are a number of commercial computing devices currently available that are well-suited to high-performance computing. These devices range from specialized computational devices, such as field-programmable gate arrays (FPGAs) and digital signal processors (DSPs), to traditional computing platforms, such as microprocessors. Even though the radiation environment is relatively benign, these devices could be susceptible to single-event effects. In this paper, we present radiation data for high-performance computing devices in an accelerated neutron environment. These devices include a multi-core digital signal processor, two field-programmable gate arrays, and a microprocessor. From these results, we found that all of these devices are suitable for many airplane environments without reliability problems.

  20. High performance computing and numerical modelling

    E-Print Network [OSTI]

    2014-01-01

    Numerical methods play an ever more important role in astrophysics. This is especially true in theoretical works, but of course, even in purely observational projects, data analysis without massive use of computational methods has become unthinkable. The key utility of computer simulations comes from their ability to solve complex systems of equations that are either intractable with analytic techniques or only amenable to highly approximative treatments. Simulations are best viewed as a powerful complement to analytic reasoning, and as the method of choice to model systems that feature enormous physical complexity such as star formation in evolving galaxies, the topic of this 43rd Saas Fee Advanced Course. The organizers asked me to lecture about high performance computing and numerical modelling in this winter school, and to specifically cover the basics of numerically treating gravity and hydrodynamics in the context of galaxy evolution. This is still a vast field, and I necessarily had to select a subset ...

  1. High Performance Computing and Storage Requirements for Biological and Environmental Research Target 2017

    E-Print Network [OSTI]

    Gerber, Richard

    2014-01-01

    ... Journal of High Performance Computing Applications ... Target 2017: High Performance Computing and Storage ... to characterize High Performance Computing (HPC) ...

  2. C++ programming techniques for High Performance Computing on systems with non-uniform memory access

    E-Print Network [OSTI]

    Fiebig, Peter

    C++ programming techniques for High Performance Computing on systems with non-uniform memory access, without sacrificing performance. In High Performance Computing (HPC), shared-memory ccNUMA ...

  3. CENTER FOR HIGH PERFORMANCE COMPUTING Overview of CHPC

    E-Print Network [OSTI]

    Alvarado, Alejandro Sánchez

    Overview of CHPC. Julia Harrison, Associate Director, Center for High Performance Computing (julia.harrison@utah.edu), Spring 2009. http://www.chpc.utah.edu

  4. High Performance Computing with a Conservative Spectral Boltzmann Solver

    E-Print Network [OSTI]

    Jeffrey R. Haack and Irene ... the structure of the collisional formulation for high performance computing environments ... the locality in space ... on high performance computing resources. We also use the improved computational power of this method ...

  5. Java for High Performance Computing: Myth or Reality?

    E-Print Network [OSTI]

    Fraguela, Basilio B.

    Java for High Performance Computing: Myth or Reality? Guillermo López Taboada, Grupo de Arquitectura de ... Outline: Motivation; Java for High Performance Computing; Java HPC Codes; Performance Evaluation; Conclusions.

  6. Building Algorithm-Based Energy Efficient High Performance Computing Systems with Resilience

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Building Algorithm-Based Energy Efficient High Performance Computing Systems with Resilience Event Sponsor: Mathematics and Computing Science Seminar Start Date: May 12 2015 -...

  7. Geocomputation's future at the extremes: high performance computing and nanoclients

    E-Print Network [OSTI]

    Clarke, Keith

    K.C. Clarke. Keywords: high performance computing; tractability; geocomputation. E-mail: kclarke@geog.ucsb.edu

  8. M Sathia Narayanan MAE 609 High Performance Computing

    E-Print Network [OSTI]

    Krovi, Venkat

    M. Sathia Narayanan. MAE 609 High Performance Computing, M. Jones (instructor). Project report.

  9. High Performance Computing at Liberal Arts Colleges, Workshop 3

    E-Print Network [OSTI]

    Barr, Valerie

    High Performance Computing at Liberal Arts Colleges, Workshop 3. October 27, 2009.

  10. Software Reuse in High Performance Computing Shirley Browne

    E-Print Network [OSTI]

    Dongarra, Jack

    Shirley Browne, University of Tennessee, 107 Ayres Hall. High performance computing architectures in the form of distributed-memory multiprocessors have ... cost of programming applications to run on these machines. Economical use of high performance computing ...

  11. Advanced Environments and Tools for High Performance Computing

    E-Print Network [OSTI]

    Walker, David W.

    Advanced Environments and Tools for High Performance Computing: Problem-Solving Environments. The conference was chaired by Professor D. W. Walker ... managing distributed high performance computing resources is important for a PSE to meet the requirements ...

  12. Universal High Performance Computing ---We Have Just Begun

    E-Print Network [OSTI]

    California at Berkeley, University of

    Jerome A. Feldman, April 1994. ... At present, high performance computing is entirely different. Although there have been some commercial ... A prerequisite for Universal High Performance Computing (UHPC) is convergence ...

  13. Evaluating Parameter Sweep Workflows in High Performance Computing*

    E-Print Network [OSTI]

    Paris-Sud XI, Université de

    Fernando Chirigati, Vítor ... a large number of tasks that are submitted to High Performance Computing (HPC) environments ... Keywords: parameter sweep, High Performance Computing (HPC). Many scientific experiments are based ...

  14. Studying Code Development for High Performance Computing: The HPCS Program

    E-Print Network [OSTI]

    Basili, Victor R.

    Jeff Carver, Sima ... aimed at measuring the development time for programs written for high performance computers (HPC) ... The development of High-Performance Computing (HPC) programs (codes) is crucial to progress ...

  15. Software Reuse in High Performance Computing Shirley Browne

    E-Print Network [OSTI]

    Hawick, Ken

    Shirley Browne, University of Tennessee, 107 Ayres Hall. High performance computing architectures in the form of distributed-memory multiprocessors have become ... of programming applications to run on these machines. Economical use of high performance computing and subsequent ...

  16. Applying High Performance Computing to Analyzing by Probabilistic Model Checking

    E-Print Network [OSTI]

    Schneider, Carsten

    We report on the use of high performance computing in order to analyze mobile cellular ... with the probabilistic model checker PRISM.

  17. High Performance Computing (HPC) Central Storage Resources for Research Support

    E-Print Network [OSTI]

    Shahriar, Selim

    Effective for FY2011; revised March 7, 2011. Purpose: this memo summarizes High Performance Computing (HPC) central storage resources for research support. It also describes new applications and technologies related to research in high performance computing.

  18. Benchmarking: More Aspects of High Performance Computing

    SciTech Connect (OSTI)

    Rahul Ravindrudu

    2004-12-19

    The original HPL algorithm makes the assumption that all data can be fit entirely in the main memory. This assumption will obviously give good performance due to the absence of disk I/O. However, not all applications can fit their entire data in memory. Applications which require a fair amount of I/O to move data to and from main memory and secondary storage are more indicative of usage of a Massively Parallel Processor (MPP) system. Given this scenario, a well-designed I/O architecture will play a significant part in the performance of the MPP system on regular jobs, and this is not represented in the current benchmark. The modified HPL algorithm is hoped to be a step in filling this void. The most important factor in the performance of out-of-core algorithms is the actual I/O operations performed and their efficiency in transferring data to and from main memory and disk. Various methods for performing I/O operations were introduced in the report. The I/O method to use depends on the design of the out-of-core algorithm; conversely, the performance of the out-of-core algorithm is affected by the choice of I/O operations. This implies that good performance is achieved when I/O efficiency is closely tied to the out-of-core algorithm, so out-of-core algorithms must be designed from the start. It is easily observed in the timings for the various plots that I/O plays a significant part in the overall execution time. This leads to an important conclusion: retrofitting an existing code may not be the best choice. The right-looking algorithm selected for the LU factorization is a recursive algorithm and performs well when the entire dataset is in memory. At each stage of the loop the entire trailing submatrix is read into memory panel by panel, which gives a polynomial number of I/O reads and writes. If the left-looking algorithm were selected for the main loop, the number of I/O operations involved would be linear in the number of columns, due to the data access pattern of the left-looking factorization. The right-looking algorithm performs better for in-core data, but the left-looking algorithm will perform better for out-of-core data due to the reduced I/O operations. Hence the conclusion that out-of-core algorithms will perform better when designed from the start. The out-of-core and thread-based computations do not interact in this case, since I/O is not done by the threads. The performance of the thread-based computation does not depend on I/O, as it relies on BLAS routines that assume all the data to be in memory. This is the reason the out-of-core results and the OpenMP thread results were presented separately and no attempt to combine them was made. In general, the modified HPL performs better with larger block sizes, due to less I/O for the out-of-core part and better cache utilization for the thread-based computation.
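    To make the I/O-count argument above concrete, here is a minimal counting sketch in C (the panel count is an assumption for the example; this is not the modified HPL code): it tallies how many panel reads and writes a right-looking out-of-core LU loop performs when the trailing submatrix is streamed through memory panel by panel, showing the polynomial growth referred to above.

      #include <stdio.h>

      /* Counting sketch (not the modified HPL code): at step k of a
       * right-looking out-of-core LU loop, panel k is factored and every
       * trailing panel k+1..P-1 is read, updated, and written back.
       * The totals grow quadratically in the number of panels P. */

      int main(void)
      {
          int P = 32;                  /* assumed number of column panels */
          long reads = 0, writes = 0;

          for (int k = 0; k < P; k++) {
              reads++;  writes++;      /* factor panel k: one read, one write */
              for (int j = k + 1; j < P; j++) {
                  reads++;             /* read trailing panel j into memory */
                  writes++;            /* write updated panel j back to disk */
              }
          }
          printf("panels=%d  panel reads=%ld  panel writes=%ld\n", P, reads, writes);
          /* For P panels this is P + P*(P-1)/2 reads and the same number of
           * writes, i.e. O(P^2) I/O operations, which is why I/O efficiency
           * dominates the out-of-core performance discussed above. */
          return 0;
      }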

  19. High Performance Computing Environment, P.G. Senapathy Centre for Computing Resources, IIT Madras

    E-Print Network [OSTI]

    Krishnapura, Nagendra

    P.G. Senapathy Centre for Computing Resources, Computer Centre, IIT Madras. User-id requisition form for High Performance Computing on the Virgo Super Cluster. See ...iitm.ac.in for information on the High Performance Computing Environment; email suggestions/problems to hpce@iitm.ac.in.

  20. The High Performance Computing Juggling Act | Argonne Leadership...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Event Sponsor: Mathematics and Computing Science Seminar. Start Date: Feb 17, 2015, 11:00am. Building/Room: Building 240, Room 1404-1405...

  1. The Faculty of Arts and Sciences High Performance Computing Core

    E-Print Network [OSTI]

    O'Hern, Corey S.

    The Faculty of Arts and Sciences High Performance Computing Core: Advanced Computational Support. FAS HPC Center, 04/09/2010. Topics include Understanding Data Requirements (source: Adriana Corona).

  2. HPGMG 1.0: A Benchmark for Ranking High Performance Computing Systems

    E-Print Network [OSTI]

    Adams, Mark

    2014-01-01

    ... for Ranking High Performance Computing Systems. Mark F. Adams ... metric for ranking high performance computing systems. HPL ... metric for ranking high performance computing systems. When ...

  3. On the user-scheduler relationship in high-performance computing

    E-Print Network [OSTI]

    Lee, Cynthia Bailey

    2009-01-01

    Contents: 1.1 High-Performance Computing; 1.2 Problem ... Journal of High Performance Computing Applications, 19(4) ... IEEE Conference on High Performance Computing, Networking, ...

  4. MSIM 795/895: High Performance Computing and Simulation

    E-Print Network [OSTI]

    http://eng.odu.edu/msve. Course description: introduction to modern high performance computing platforms, including top ... their research area. Project presentations are required. Course topics: 1. Overview of high-performance computing ...

  5. High Performance Computational Biology: A Distributed computing Perspective (2010 JGI/ANL HPC Workshop)

    ScienceCinema (OSTI)

    Konerding, David [Google, Inc

    2011-06-08

    David Konerding from Google, Inc. gives a presentation on "High Performance Computational Biology: A Distributed Computing Perspective" at the JGI/Argonne HPC Workshop on January 26, 2010.

  6. MSIM 715/815: High Performance Computing and Simulation

    E-Print Network [OSTI]

    http://www.odu.edu/msve. Course description: introduction to modern high performance computing platforms, including top ... for their research problems. Project presentations and reports are required. Course topics: 1. Overview of high-performance ...

  7. High-Performance Computing at Los Alamos

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

  8. High Performance Computing Data Center Metering Protocol

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

  9. High Performance Computing Richard F. BARRETT

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

  10. Motor: A Virtual Machine for High Performance Computing Wojtek Goscinski and David Abramson

    E-Print Network [OSTI]

    Abramson, David

    Wojtek Goscinski and David Abramson. ... environments do not provide the necessary high performance computing abstractions required by e-Scientists, whilst retaining strong message passing performance.

  11. June 8, 2007 Advanced Fault Tolerance Solutions for High Performance Computing

    E-Print Network [OSTI]

    Engelmann, Christian

    June 8, 2007. Advanced Fault Tolerance Solutions for High Performance Computing. Christian Engelmann, Oak Ridge National Laboratory. Workshop on Trends, Technologies and Collaborative Opportunities in High ...

  12. Middleware in Modern High Performance Computing System Architectures (May 28, 2007)

    E-Print Network [OSTI]

    Engelmann, Christian

    Middleware in Modern High Performance Computing System Architectures, May 28, 2007. Christian Engelmann, Hong Ong, Stephen L. ... Talk outline ...

  13. High-Performance Computing for Advanced Smart Grid Applications

    SciTech Connect (OSTI)

    Huang, Zhenyu; Chen, Yousu

    2012-07-06

    The power grid is becoming far more complex as a result of the grid evolution meeting an information revolution. Due to the penetration of smart grid technologies, the grid is evolving at an unprecedented speed, and the information infrastructure is fundamentally improved with a large number of smart meters and sensors that produce several orders of magnitude larger amounts of data. How to pull data in, perform analysis, and put information out in a real-time manner is a fundamental challenge in smart grid operation and planning. The future power grid requires high performance computing to be one of the foundational technologies in developing the algorithms and tools for the significantly increased complexity. New techniques and computational capabilities are required to meet the demands for higher reliability and better asset utilization, including advanced algorithms and computing hardware for large-scale modeling, simulation, and analysis. This chapter summarizes the computational challenges in smart grid and the need for high performance computing, and presents examples of how high performance computing might be used for future smart grid operation and planning.

  14. Towards High Availability for High-Performance Computing System Services (March 14, 2007)

    E-Print Network [OSTI]

    Engelmann, Christian

    Towards High Availability for High-Performance Computing System Services: Accomplishments and Limitations. The University of Reading, Reading, UK, March 14, 2007.

  15. High performance computing and communications: FY 1997 implementation plan

    SciTech Connect (OSTI)

    NONE

    1996-12-01

    The High Performance Computing and Communications (HPCC) Program was formally authorized by passage, with bipartisan support, of the High-Performance Computing Act of 1991, signed on December 9, 1991. The original Program, in which eight Federal agencies participated, has now grown to twelve agencies. This Plan provides a detailed description of the agencies' FY 1996 HPCC accomplishments and FY 1997 HPCC plans. Section 3 of this Plan provides an overview of the HPCC Program. Section 4 contains more detailed definitions of the Program Component Areas, with an emphasis on the overall directions and milestones planned for each PCA. Appendix A provides a detailed look at HPCC Program activities within each agency.

  16. June 4, 2007 Advanced Fault Tolerance Solutions for High Performance Computing

    E-Print Network [OSTI]

    Engelmann, Christian

    June 4, 2007. Advanced Fault Tolerance Solutions for High Performance Computing. Christian Engelmann, Oak Ridge National Laboratory. Workshop on Trends, Technologies and Collaborative Opportunities in High ...

  17. Editorial for Advanced Theory and Practice for High Performance Computing and Communications Geoffrey Fox

    E-Print Network [OSTI]

    Editorial for Advanced Theory and Practice for High Performance Computing and Communications. I would like to thank Omer Rana ... International Conference on High Performance Computing and Communications (HPCC-09).

  18. Symmetric Active/Active High Availability for High-Performance Computing System Services

    E-Print Network [OSTI]

    He, Xubin "Ben"

    Christian ... aims to pave the way for high availability in high-performance computing (HPC) by focusing on efficient symmetric active/active high availability for multiple redundant head and service nodes running in virtual ...

  19. Titanium: A Java Dialect for High Performance Computing

    E-Print Network [OSTI]

    California at Berkeley, University of

    Titanium: A Java Dialect for High Performance Computing. U.C. Berkeley and LBNL, http://titanium.cs.berkeley.edu. Dan Bonachea (slides courtesy of Kathy Yelick). Titanium is designed for methods with structured grids, locally ...

  20. Toward Codesign in High Performance Computing Systems

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    ... for this work. References: [1] J. Ang et al., High Performance Computing: From Grids and Clouds to Exascale, chapter Exascale Compu...

  1. High Performance Computing linear algorithms for two-phase flow in porous media

    E-Print Network [OSTI]

    Paris-Sud XI, Université de

    Robert Eymard ... using High Performance Computing techniques. This implies handling the difficult problem of solving ...

  2. GOCE DATA ANALYSIS: REALIZATION OF THE INVARIANTS APPROACH IN A HIGH PERFORMANCE COMPUTING ENVIRONMENT

    E-Print Network [OSTI]

    Stuttgart, Universität

    GOCE Data Analysis: Realization of the Invariants Approach in a High Performance Computing Environment. ... implementation of the algorithms on high performance computing platforms.

  3. Finite-State Verification for High Performance Computing George S. Avrunin

    E-Print Network [OSTI]

    Avrunin, George S.

    George S. Avrunin, Department of ... (top500.org) reveals that high performance computing has become practically synonymous with parallel ...

  4. High performance computing and communications: FY 1996 implementation plan

    SciTech Connect (OSTI)

    NONE

    1995-05-16

    The High Performance Computing and Communications (HPCC) Program was formally authorized by passage of the High Performance Computing Act of 1991, signed on December 9, 1991. Twelve federal agencies, in collaboration with scientists and managers from US industry, universities, and research laboratories, have developed the Program to meet the challenges of advancing computing and associated communications technologies and practices. This plan provides a detailed description of the agencies' HPCC implementation plans for FY 1995 and FY 1996. This Implementation Plan contains three additional sections. Section 3 provides an overview of the HPCC Program definition and organization. Section 4 contains a breakdown of the five major components of the HPCC Program, with an emphasis on the overall directions and milestones planned for each one. Section 5 provides a detailed look at HPCC Program activities within each agency.

  5. Fermilab | Science at Fermilab | Computing | High-performance Computing

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

  6. High-Performance Computing (HPC) in the ...

    E-Print Network [OSTI]

    ... large energy consumption; 820 Gflops/kWatt; water cooling system minimizes failure rate ... Progress has been made, but many key challenges remain, such as power ...

  7. High Performance Computing Update, June 2009

    E-Print Network [OSTI]

    Sussex, University of

    A meeting was held in April with users and potential users of high performance computing systems; it considered a proposal from the Director ... application "advice" and a core system to host and manage high performance computing nodes (or clusters ...

  8. The design of linear algebra libraries for high performance computers

    SciTech Connect (OSTI)

    Dongarra, J.J. [Tennessee Univ., Knoxville, TN (United States). Dept. of Computer Science; [Oak Ridge National Lab., TN (United States); Walker, D.W. [Oak Ridge National Lab., TN (United States)

    1993-08-01

    This paper discusses the design of linear algebra libraries for high performance computers. Particular emphasis is placed on the development of scalable algorithms for MIMD distributed memory concurrent computers. A brief description of the EISPACK, LINPACK, and LAPACK libraries is given, followed by an outline of ScaLAPACK, which is a distributed memory version of LAPACK currently under development. The importance of block-partitioned algorithms in reducing the frequency of data movement between different levels of hierarchical memory is stressed. The use of such algorithms helps reduce the message startup costs on distributed memory concurrent computers. Other key ideas in our approach are the use of distributed versions of the Level 3 Basic Linear Algebra Subprograms (BLAS) as computational building blocks, and the use of Basic Linear Algebra Communication Subprograms (BLACS) as communication building blocks. Together the distributed BLAS and the BLACS can be used to construct higher-level algorithms, and hide many details of the parallelism from the application developer. The block-cyclic data distribution is described, and adopted as a good way of distributing block-partitioned matrices. Block-partitioned versions of the Cholesky and LU factorizations are presented, and optimization issues associated with the implementation of the LU factorization algorithm on distributed memory concurrent computers are discussed, together with its performance on the Intel Delta system. Finally, approaches to the design of library interfaces are reviewed.
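    As a small illustration of the block-cyclic distribution described above (a sketch with assumed block and process-grid sizes, not code from the library itself), the following C program computes which process in a Pr x Pc grid owns each matrix block under a 2-D block-cyclic mapping of the kind used by ScaLAPACK-style libraries.

      #include <stdio.h>

      /* Illustrative sketch of a 2-D block-cyclic data distribution:
       * block (bi, bj) of the matrix is owned by process
       * (bi mod Pr, bj mod Pc) in a Pr x Pc process grid.
       * Block and grid sizes here are assumptions for the example. */

      int main(void)
      {
          int Pr = 2, Pc = 3;          /* process grid dimensions */
          int nb = 64;                 /* block size in rows and columns */
          int n  = 512;                /* (square) matrix dimension */
          int nblocks = (n + nb - 1) / nb;

          for (int bi = 0; bi < nblocks; bi++) {
              for (int bj = 0; bj < nblocks; bj++) {
                  int owner_row = bi % Pr;
                  int owner_col = bj % Pc;
                  if (bi < 3 && bj < 3)   /* print a small corner of the map */
                      printf("block (%d,%d) -> process (%d,%d)\n",
                             bi, bj, owner_row, owner_col);
              }
          }
          return 0;
      }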

  9. Participation by Columbia Researchers in Shared Central High Performance Computing (HPC) Resources

    E-Print Network [OSTI]

    Champagne, Frances A.

    ... to create a shared central high performance computing (HPC) cluster at Columbia. Shared Research Computing Policy Advisory Committee (SRCPAC) Chair, Professor ...

  10. High-performance combinatorial algorithms

    E-Print Network [OSTI]

    Pinar, Ali

    2003-01-01

    ... mathematics, and high performance computing. The numerical ... algorithms on high performance computing platforms ... algorithms on high performance computing platforms, which ...

  11. High performance computing and communications: FY 1995 implementation plan

    SciTech Connect (OSTI)

    NONE

    1994-04-01

    The High Performance Computing and Communications (HPCC) Program was formally established following passage of the High Performance Computing Act of 1991, signed on December 9, 1991. Ten federal agencies, in collaboration with scientists and managers from US industry, universities, and laboratories, have developed the HPCC Program to meet the challenges of advancing computing and associated communications technologies and practices. This plan provides a detailed description of the agencies' HPCC implementation plans for FY 1994 and FY 1995. This Implementation Plan contains three additional sections. Section 3 provides an overview of the HPCC Program definition and organization. Section 4 contains a breakdown of the five major components of the HPCC Program, with an emphasis on the overall directions and milestones planned for each one. Section 5 provides a detailed look at HPCC Program activities within each agency. Although the Department of Education is an official HPCC agency, its current funding and reporting of crosscut activities goes through the Committee on Education and Health Resources, not the HPCC Program. For this reason the Implementation Plan covers nine HPCC agencies.

  12. High Performance Computing - Power Application Programming Interface Specification.

    SciTech Connect (OSTI)

    Laros, James H.; Kelly, Suzanne M.; Pedretti, Kevin; Grant, Ryan; Olivier, Stephen Lecler; Levenhagen, Michael J.; DeBonis, David

    2014-08-01

    Measuring and controlling the power and energy consumption of high performance computing systems by various components in the software stack is an active research area [13, 3, 5, 10, 4, 21, 19, 16, 7, 17, 20, 18, 11, 1, 6, 14, 12]. Implementations in lower level software layers are beginning to emerge in some production systems, which is very welcome. To be most effective, a portable interface to measurement and control features would significantly facilitate participation by all levels of the software stack. We present a proposal for a standard power Application Programming Interface (API) that endeavors to cover the entire software space, from generic hardware interfaces to the input from the computer facility manager.
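    As a hedged illustration of what a portable power measurement and control interface might look like, the C sketch below uses invented type and function names; it is not the actual Power API specification, only a minimal stand-in showing a get/set pattern over a named power domain.

      #include <stdio.h>

      /* Hypothetical sketch only: these types and functions are invented for
       * illustration and are NOT the Power API specification. */

      typedef struct {
          const char *name;      /* e.g. "node0.socket1" */
          double      watts_cap; /* current power cap for the domain */
      } power_domain_t;

      /* Return a simulated instantaneous power reading for the domain. */
      static double power_read_watts(const power_domain_t *d)
      {
          return d->watts_cap * 0.8;   /* stand-in for a hardware counter read */
      }

      /* Apply a new power cap; a real implementation would talk to firmware. */
      static int power_set_cap(power_domain_t *d, double watts)
      {
          if (watts <= 0.0) return -1;
          d->watts_cap = watts;
          return 0;
      }

      int main(void)
      {
          power_domain_t sock = { "node0.socket1", 120.0 };
          printf("%s drawing ~%.1f W\n", sock.name, power_read_watts(&sock));
          if (power_set_cap(&sock, 95.0) == 0)
              printf("cap on %s lowered to %.1f W\n", sock.name, sock.watts_cap);
          return 0;
      }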

  13. Graph 500 Performance on a Distributed-Memory Cluster REU Site: Interdisciplinary Program in High Performance Computing

    E-Print Network [OSTI]

    Gobbert, Matthias K.

    Traditional performance benchmarks for high-performance computers measure the speed of arithmetic operations ... the benchmark is intended to rank high-performance computers based on speed of memory retrieval ... the distributed-memory cluster tara in the UMBC High Performance Computing Facility (www.umbc.edu/hpcf). The cluster tara has 82 ...

  14. Event Services for High Performance Computing Greg Eisenhauer Fabian E. Bustamante Karsten Schwan

    E-Print Network [OSTI]

    Kuzmanovic, Aleksandar

    Greg Eisenhauer, Fabián E. Bustamante, Karsten Schwan. Abstract: The Internet and the Grid are changing the face of high performance computing. Rather than tightly ... computing has been a strong focus of research in high performance computing. This has resulted ...

  15. High Performance Computing in the Life/Medical Sciences (Virginia Bioinformatics Institute)

    E-Print Network [OSTI]

    Virginia Tech

    High Performance Computing in the Life/Medical Sciences, Virginia Bioinformatics Institute. A 2-week program in high performance computing and data-intensive computing; basic knowledge of relational databases ... Program dates: July 17-25; application deadline: March 30, 2012.

  16. High Performance Computing (HPC) Survey 1. Choose the category that best describes you

    E-Print Network [OSTI]

    Martin, Stephen John

    Survey excerpts: "Choose the category that best describes you"; "... on your (home or work) computer to access the High Performance Computing Facilities (HPC) (tick all ...)"; "... High Performance Computing (HPC) Facilities?" (Daily 26.7%, 23 responses; Weekly 27 ...).

  17. Java in the High Performance Computing Arena: Research, Practice and Experience

    E-Print Network [OSTI]

    Fraguela, Basilio B.

    Guillermo L. ... Interest in Java for High Performance Computing (HPC) is based on the appealing features of this language ... Keywords: Java, High Performance Computing, Performance Evaluation, Multi-core Architectures.

  18. Investigating the Mobility of Light Autonomous Tracked Vehicles Using a High Performance Computing

    E-Print Network [OSTI]

    ... limiting the scope and impact of high performance computing (HPC). This scenario is rapidly changing due ...

  19. Multi-scale problems, high performance computing and hybrid numerical methods

    E-Print Network [OSTI]

    Paris-Sud XI, Université de

    G. Balarac, G. ... The use of High Performance Computing (HPC) is no longer restricted to academia and scientific grand challenges ...

  20. Multi-scale problems, high performance computing and hybrid numerical methods

    E-Print Network [OSTI]

    Cottet, Georges-Henri

    G. Balarac, LEGI, CNRS and Université de Grenoble, BP 53, 38041 Grenoble.

  1. A scalable silicon photonic chip-scale optical switch for high performance computing systems

    E-Print Network [OSTI]

    Yoo, S. J. Ben

    A scalable silicon photonic chip-scale optical switch for scalable interconnect networks in high performance computing systems. The proposed ...

  2. High Performance Code Generation for Stencil Computation on Heterogeneous Multi-device

    E-Print Network [OSTI]

    Paris-Sud XI, Université de

    Abstract: Heterogeneous architectures have been widely used in the domain of high performance computing ... Keywords: multi-device, code generation, heterogeneous architectures.

  3. Power/energy use cases for high performance computing.

    SciTech Connect (OSTI)

    Laros, James H.; Kelly, Suzanne M.; Hammond, Steven [National Renewable Energy Laboratory]; Elmore, Ryan; Munch, Kristin

    2013-12-01

    Power and Energy have been identified as a first order challenge for future extreme scale high performance computing (HPC) systems. In practice the breakthroughs will need to be provided by the hardware vendors. But to make the best use of the solutions in an HPC environment, it will likely require periodic tuning by facility operators and software components. This document describes the actions and interactions needed to maximize power resources. It strives to cover the entire operational space that an HPC system occupies. The descriptions are presented as formal use cases, as documented in the Unified Modeling Language Specification [1]. The document is intended to provide a common understanding to the HPC community of the necessary management and control capabilities. Assuming a common understanding can be achieved, the next step will be to develop a set of Application Programming Interfaces (APIs) that hardware vendors and software developers could use to steer power consumption.

  4. Measuring Productivity on High Performance Computers (Marvin Zelkowitz)

    E-Print Network [OSTI]

    Basili, Victor R.

    Marvin Zelkowitz, Victor Basili, Sima ... Abstract: In the high performance computing domain, the speed of ... is of concern to high performance computing developers. In this paper we discuss the problems of defining ...

  5. High Performance Computing in Remote Sensing (Photogrammetric Engineering & Remote Sensing, January 2009)

    E-Print Network [OSTI]

    Plaza, Antonio J.

    Book review: High Performance Computing in Remote Sensing introduces the most recent advances in the incorporation of the high-performance computing (HPC) paradigm in remote sensing missions. Eighteen well ...

  6. A Pilot Study to Evaluate Development Effort for High Performance Computing

    E-Print Network [OSTI]

    Basili, Victor R.

    Victor Basili ... measuring the development time for programs written for high performance computers (HPC). To attack this relatively novel ... students in a graduate-level High Performance Computing class at the University of Maryland. We collected ...

  7. UNIVERSITY OF SOUTHERN CALIFORNIA CSCI 653 (High Performance Computing and Simulations) : Fall 2013

    E-Print Network [OSTI]

    Southern California, University of

    University of Southern California, CSCI 653 (High Performance Computing and Simulations), Fall 2013. My PhD work is in the area of resiliency for future Exascale high performance computing. Description of an HPCS application: simulation of a large-scale high performance computing system.

  8. PRESENT AND FUTURE OF HIGH PERFORMANCE COMPUTING Trends, Challenges, and Opportunities

    E-Print Network [OSTI]

    Ceragioli, Francesca

    Present and Future of High Performance Computing: Trends, Challenges, and Opportunities, November 17. The event presents the EPFL high performance computing facilities and resources, along with leading research activities of various groups in modeling and simulation through high performance computing.

  9. Modeling and Simulation Environment for Photonic Interconnection Networks in High Performance Computing

    E-Print Network [OSTI]

    Bergman, Keren

    We present system-level simulations and results for rack-scale photonic interconnection networks for high performance computing, at the scale of high performance computer clusters and warehouse-scale data centers, motivated by the power consumption [3], latency [4], and bandwidth challenges [5] of high performance computing (HPC).

  10. 1st International Workshop on High Performance Computing, Networking and Analytics for the Power Grid

    E-Print Network [OSTI]

    1st International Workshop on High Performance Computing, Networking and Analytics for the Power Grid. Presentations include a talk on transient stability and "Developing a Dynamic Model of Cascading Failure for High Performance Computing" (University of Vermont).

  11. For Immediate Release AUB to develop its high performance computing capacities in the

    E-Print Network [OSTI]

    Shihadeh, Alan

    For Immediate Release: AUB to develop its high performance computing capacities. AUB is taking steps to become a high performance computing center that will be able to process massive amounts of data on thousands of servers. According to Wikipedia, supercomputers, or high performance computing, play an important role in the field of computational science.

  12. Judging the Impact of Conference and Journal Publications in High Performance Computing

    E-Print Network [OSTI]

    Zhou, Yuanyuan

    On the dimensions that count most, conferences are superior to journals. This is particularly true in high performance computing, where many papers appear only in conferences and are never published in journals. The area of high performance computing is broad, and we divide venues accordingly.

  13. MAUI HIGH PERFORMANCE COMPUTING CENTER 550 Lipoa Parkway, Kihei-Maui, HI 96753

    E-Print Network [OSTI]

    Olsen, Stephen L.

    Maui High Performance Computing Center, 550 Lipoa Parkway, Kihei-Maui, HI 96753. Phone: (808) 879-5077; Fax: (808) 879-5018; E-mail: info@mhpcc.hpc.mil; URL: www.mhpcc.hpc.mil. This is the fourteenth annual edition of the Maui High Performance Computing Center's (MHPCC) Application Briefs.

  14. Third International Workshop on Software Engineering for High Performance Computing (HPC) Applications

    E-Print Network [OSTI]

    Carver, Jeffrey C.

    Third International Workshop on Software Engineering for High Performance Computing (HPC) Applications, covering application domains such as financial modeling. The TOP500 website (http://www.top500.org) lists the top 500 high performance computing systems. There are efforts to define new ways of measuring high performance computing systems that take into account more than low-level performance.

  15. A Multi-core High Performance Computing Framework for Probabilistic Solutions of

    E-Print Network [OSTI]

    Franchetti, Franz

    In this paper we develop a generally applicable high performance computing framework for Monte Carlo simulation of probabilistic solutions in distribution systems. Keywords: distribution systems, high performance computing, Monte Carlo simulation, probabilistic load flow, renewable energy.
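
    The record does not include the framework's code; the following is a minimal sketch of a multi-core Monte Carlo driver in the same spirit, where the per-scenario "load flow" is a toy placeholder rather than a real power-flow solver and the sampling model is invented for illustration.

```python
# Minimal sketch of a multi-core Monte Carlo driver of the kind the record
# describes. The "load flow" below is a toy placeholder, not a power-flow
# solver, and the sampling model is invented for illustration.
import numpy as np
from multiprocessing import Pool

def run_one_scenario(seed: int) -> float:
    rng = np.random.default_rng(seed)
    load = rng.normal(1.0, 0.1, size=32)        # random nodal loads (p.u.)
    wind = rng.weibull(2.0, size=32) * 0.2      # random renewable injections
    net = load - wind
    # Placeholder "solve": report the worst net nodal demand in this scenario.
    return float(net.max())

def monte_carlo(n_scenarios: int = 10_000, workers: int = 4):
    with Pool(workers) as pool:
        results = pool.map(run_one_scenario, range(n_scenarios))
    results = np.asarray(results)
    return results.mean(), np.quantile(results, 0.95)

if __name__ == "__main__":
    mean, q95 = monte_carlo()
    print(f"mean worst-case net load: {mean:.3f} p.u., "
          f"95th percentile: {q95:.3f} p.u.")
```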

  16. An AWGR based Low-Latency Optical Switch for Data Centers and High Performance Computing Systems

    E-Print Network [OSTI]

    Kolner, Brian H.

    An AWGR-based Low-Latency Optical Switch for Data Centers and High Performance Computing Systems. Abstract: an AWGR-based optical switch for data centers and high performance computing systems that builds upon several ...

  17. High performance computing and the simplex method Julian Hall, Qi Huangfu and Edmund Smith

    E-Print Network [OSTI]

    Hall, Julian

    High performance computing and the simplex method. Julian Hall, Qi Huangfu and Edmund Smith, School of Mathematics, University of Edinburgh, 12th April 2011.

  18. Optimizing performance per watt on GPUs in High Performance Computing: temperature, frequency and voltage effects

    E-Print Network [OSTI]

    Price, D C; Barsdell, B R; Babich, R; Greenhill, L J

    2014-01-01T23:59:59.000Z

    The magnitude of the real-time digital signal processing challenge attached to large radio astronomical antenna arrays motivates the use of high performance computing (HPC) systems. The need for high power efficiency (performance per watt) at remote observatory sites parallels that in HPC broadly, where efficiency is an emerging critical metric. We investigate how the performance per watt of graphics processing units (GPUs) is affected by temperature, core clock frequency and voltage. Our results highlight how the underlying physical processes that govern transistor operation affect power efficiency. In particular, we show experimentally that GPU power consumption grows non-linearly with both temperature and supply voltage, as predicted by physical transistor models. We show that lowering GPU supply voltage and increasing clock frequency while maintaining a low die temperature increases the power efficiency of an NVIDIA K20 GPU by up to 37-48% over default settings when running xGPU, a compute-bound code used in radio...
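
    The record does not state which tooling was used to monitor and adjust the GPU; one way to observe the quantities discussed (die temperature, power draw, clocks) is through NVIDIA's NVML Python bindings, sketched below. Changing application clocks generally requires administrative privileges, and NVML does not expose supply-voltage control, so the voltage experiments would need vendor-specific tools; the clock values in the comment are illustrative only.

```python
# Sketch of the kind of monitoring/tuning loop the study implies, using the
# NVML Python bindings (pynvml). Only temperature, power, and clocks are
# touched here; supply voltage is not exposed through NVML.
import time
import pynvml

pynvml.nvmlInit()
gpu = pynvml.nvmlDeviceGetHandleByIndex(0)

for _ in range(5):
    temp_c = pynvml.nvmlDeviceGetTemperature(gpu, pynvml.NVML_TEMPERATURE_GPU)
    power_w = pynvml.nvmlDeviceGetPowerUsage(gpu) / 1000.0   # NVML reports mW
    sm_mhz = pynvml.nvmlDeviceGetClockInfo(gpu, pynvml.NVML_CLOCK_SM)
    print(f"temperature {temp_c} C, power {power_w:.1f} W, SM clock {sm_mhz} MHz")
    time.sleep(1.0)

# Example clock adjustment (illustrative values; requires admin privileges):
# pynvml.nvmlDeviceSetApplicationsClocks(gpu, 2600, 758)  # mem MHz, SM MHz
pynvml.nvmlShutdown()
```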

  19. Money for Research, Not for Energy Bills: Finding Energy and Cost Savings in High Performance Computer Facility Designs

    E-Print Network [OSTI]

    Sartor, Dale

    2011-01-01T23:59:59.000Z

    International Journal of High Performance Computing Applications 22(2). High-performance computing facilities in the United States are both a model of high-performance computing and a showcase for energy and cost savings in facility design.

  20. A Comparative Study of Stochastic Unit Commitment and Security-Constrained Unit Commitment Using High Performance Computing

    E-Print Network [OSTI]

    Oren, Shmuel S.

    Anthony Papavasiliou and Shmuel S. Oren. Abstract: The large-scale problem is addressed by decomposition. The proposed algorithms are implemented in a high performance computing environment.

  1. Tech-X Corporation has accessed the high performance computing (HPC) facilities at the Science and Technology Facilities Council's (STFC)

    E-Print Network [OSTI]

    Zharkova, Valentina V.

    Tech-X Corporation has accessed the high performance computing (HPC) facilities at the Science and Technology Facilities Council (STFC), applying its high performance computing (HPC) and simulation technology through a research collaboratory.

  2. Towards an Abstraction-Friendly Programming Model for High Productivity and High Performance Computing

    SciTech Connect (OSTI)

    Liao, C; Quinlan, D; Panas, T

    2009-10-06T23:59:59.000Z

    General purpose languages, such as C++, permit the construction of various high level abstractions to hide redundant, low level details and accelerate programming productivity. Example abstractions include functions, data structures, classes, templates and so on. However, the use of abstractions significantly impedes static code analyses and optimizations, including parallelization, applied to the abstractions' complex implementations. As a result, there is a common perception that performance is inversely proportional to the level of abstraction. On the other hand, programming large scale, possibly heterogeneous high-performance computing systems is notoriously difficult and programmers are less likely to abandon the help from high level abstractions when solving real-world, complex problems. Therefore, the need for programming models balancing both programming productivity and execution performance has reached a new level of criticality. We are exploring a novel abstraction-friendly programming model in order to support high productivity and high performance computing. We believe that standard or domain-specific semantics associated with high level abstractions can be exploited to aid compiler analyses and optimizations, thus helping to achieve high performance without losing high productivity. We encode representative abstractions and their useful semantics into an abstraction specification file. In the meantime, an accessible, source-to-source compiler infrastructure (the ROSE compiler) is used to facilitate recognizing high level abstractions and utilizing their semantics for more optimization opportunities. Our initial work has shown that recognizing abstractions and knowing their semantics within a compiler can dramatically extend the applicability of existing optimizations, including automatic parallelization. Moreover, a new set of optimizations has become possible within an abstraction-friendly and semantics-aware programming model. In the future, we will apply our programming model to more large scale applications. In particular, we plan to classify and formalize more high level abstractions and semantics which are relevant to high performance computing. We will also investigate better ways to allow language designers, library developers and programmers to communicate abstraction and semantics information with each other.
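
    The ROSE-based implementation itself is not reproduced here; the sketch below only illustrates the underlying idea in plain Python: once an abstraction declares its semantics (here, "element-wise and side-effect-free"), a generic driver can legally parallelize it, and must stay conservative otherwise. The decorator and attribute names are invented for this illustration.

```python
# Illustration of the core idea (not the ROSE toolchain itself): declared
# semantics on an abstraction let a generic driver parallelize it safely.
from concurrent.futures import ProcessPoolExecutor

def elementwise(func):
    """Attach a semantic annotation that a tool or driver can inspect."""
    func.__semantics__ = {"elementwise": True, "side_effects": False}
    return func

@elementwise
def transform(x: float) -> float:
    return x * x + 1.0

def apply_abstraction(func, data, workers=4):
    sem = getattr(func, "__semantics__", {})
    if sem.get("elementwise") and not sem.get("side_effects"):
        # Semantics guarantee independence, so parallel execution is legal.
        with ProcessPoolExecutor(max_workers=workers) as pool:
            return list(pool.map(func, data))
    # Without semantic knowledge, fall back to conservative serial execution.
    return [func(x) for x in data]

if __name__ == "__main__":
    print(apply_abstraction(transform, range(8)))
```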

  3. High Performance Computing with Iceberg

    E-Print Network [OSTI]

    Martin, Stephen John

    High Performance Computing with Iceberg. Mike Griffiths and Bob Booth, November 2005, AP-Unix4, © University of Sheffield. Contents include a review of available resources and using FUSE to mount file systems on Iceberg.

  4. MIC-GPU: High-Performance Computing

    E-Print Network [OSTI]

    Mueller, Klaus

    MIC-GPU: High-Performance Computing (Medical Imaging 2011). Amdahl's Law governs the theoretical speedup: S = 1 / ((1 - P) + P/N), where P is the parallel fraction of the program and N is the number of processors.

  5. High Performance Computing at TJNAF| U.S. DOE Office of Science...

    Office of Science (SC) Website

    High Performance Computing at TJNAF, U.S. DOE Office of Science, Applications of Nuclear Science.

  6. A secure communications infrastructure for high-performance distributed computing

    SciTech Connect (OSTI)

    Foster, I.; Koenig, G.; Tuecke, S. [and others]

    1997-08-01T23:59:59.000Z

    Applications that use high-speed networks to connect geographically distributed supercomputers, databases, and scientific instruments may operate over open networks and access valuable resources. Hence, they can require mechanisms for ensuring integrity and confidentiality of communications and for authenticating both users and resources. Security solutions developed for traditional client-server applications do not provide direct support for the program structures, programming tools, and performance requirements encountered in these applications. The authors address these requirements via a security-enhanced version of the Nexus communication library, which they use to provide secure versions of parallel libraries and languages, including the Message Passing Interface. These tools permit a fine degree of control over what, where, and when security mechanisms are applied. In particular, a single application can mix secure and nonsecure communication, allowing the programmer to make fine-grained security/performance tradeoffs. The authors present performance results that quantify the performance of their infrastructure.
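
    The security-enhanced Nexus API is not spelled out in this record, so the sketch below illustrates only the general idea of per-channel security selection using standard Python sockets and TLS: control traffic is encrypted while bulk data is not, mirroring the fine-grained security/performance tradeoff described above. The endpoints and payloads are hypothetical.

```python
# Generic sketch of per-channel security selection (not the Nexus API):
# control traffic is wrapped in TLS, bulk data moves over a plain socket.
import socket
import ssl

def open_channel(host: str, port: int, secure: bool):
    sock = socket.create_connection((host, port))
    if secure:
        ctx = ssl.create_default_context()
        return ctx.wrap_socket(sock, server_hostname=host)
    return sock

def send_message(channel, payload: bytes):
    # Length-prefixed framing so the receiver knows how much to read.
    channel.sendall(len(payload).to_bytes(8, "big") + payload)

if __name__ == "__main__":
    # Hypothetical endpoints; in a real run these would be service addresses.
    control = open_channel("control.example.org", 443, secure=True)
    bulk = open_channel("data.example.org", 9000, secure=False)
    send_message(control, b"START job-42")
    send_message(bulk, b"\x00" * 1024)  # large payload, confidentiality not required
```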

  7. 2012 High Performance Computing Modernization Program Contributions to DoD Mission Success Computational Studies of Oxidation States, Polar Ferroelectric Films, and Defects

    E-Print Network [OSTI]

    Rappe, Andrew M.

    In this contribution, we review our recent High Performance Computing Modernization Program (HPCMP)-supported density functional theory studies of oxidation states, polar ferroelectric films, and defects.

  8. A Floating-point Accumulator for FPGA-based High Performance Computing Applications

    E-Print Network [OSTI]

    Zambreno, Joseph A.

    Abstract: A floating-point accumulator for FPGA-based high performance computing applications is proposed and evaluated. Compared to previous work, our accumulator uses a fixed-size circuit and can reduce an arbitrary number of input values.

  9. ENERGY-EFFICIENT RESOURCE MANAGEMENT FOR HIGH-PERFORMANCE COMPUTING PLATFORMS

    E-Print Network [OSTI]

    Qin, Xiao

    Energy-Efficient Resource Management for High-Performance Computing Platforms. Dissertation submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy, Auburn, Alabama, August 9, 2008.

  10. Durability Assessment of an Arch Dam using Inverse Analysis with Neural Networks and High Performance Computing.

    E-Print Network [OSTI]

    Coutinho, Alvaro L. G. A.

    Durability Assessment of an Arch Dam using Inverse Analysis with Neural Networks and High Performance Computing. E. M. R. Fairbairn, E. Goulart, A. L. G. A. Coutinho, N. F. F. Ebecken (COPPE). The inverse analysis identifies the viscoelastic parameters, with 3D FEM analysis using High Performance Computing (parallel and vector features) to run the simulations.

  11. NREL: Energy Systems Integration Facility - High-Performance Computing and

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)


  12. High Performance Computing Modeling Advances Accelerator Science for High Energy Physics

    SciTech Connect (OSTI)

    Amundson, James; Macridin, Alexandru; Spentzouris, Panagiotis

    2014-04-29T23:59:59.000Z

    The development and optimization of particle accelerators are essential for advancing our understanding of the properties of matter, energy, space and time. Particle accelerators are complex devices whose behavior involves many physical effects on multiple scales. Therefore, advanced computational tools utilizing high-performance computing (HPC) are essential for accurately modeling them. In the past decade, the DOE SciDAC program has produced such accelerator-modeling tools, which have been employed to tackle some of the most difficult accelerator science problems. In this article we discuss the Synergia beam-dynamics framework and its applications to high-intensity particle accelerator physics. Synergia is an accelerator simulation package capable of handling the entire spectrum of beam dynamics simulations. We present the design principles, key physical and numerical models in Synergia and its performance on HPC platforms. Finally, we present the results of Synergia applications for the Fermilab proton source upgrade, known as the Proton Improvement Plan (PIP).

  13. High-Performance Computing Modeling Advances Accelerator Science for High-Energy Physics

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Amundson, James [Fermi National Accelerator Laboratory (FNAL), Batavia, IL (United States)]; Macridin, Alexandru [Fermi National Accelerator Laboratory (FNAL), Batavia, IL (United States)]; Spentzouris, Panagiotis [Fermi National Accelerator Laboratory (FNAL), Batavia, IL (United States)]

    2014-11-01T23:59:59.000Z

    The development and optimization of particle accelerators are essential for advancing our understanding of the properties of matter, energy, space and time. Particle accelerators are complex devices whose behavior involves many physical effects on multiple scales. Therefore, advanced computational tools utilizing high-performance computing (HPC) are essential for accurately modeling them. In the past decade, the DOE SciDAC program has produced such accelerator-modeling tools, which have been employed to tackle some of the most difficult accelerator science problems. In this article we discuss the Synergia beam-dynamics framework and its applications to high-intensity particle accelerator physics. Synergia is an accelerator simulation package capable of handling the entire spectrum of beam dynamics simulations. We present the design principles, key physical and numerical models in Synergia and its performance on HPC platforms. Finally, we present the results of Synergia applications for the Fermilab proton source upgrade, known as the Proton Improvement Plan (PIP).

  14. High-Performance Computing Modeling Advances Accelerator Science for High-Energy Physics

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Amundson, James; Macridin, Alexandru; Spentzouris, Panagiotis

    2014-11-01T23:59:59.000Z

    The development and optimization of particle accelerators are essential for advancing our understanding of the properties of matter, energy, space and time. Particle accelerators are complex devices whose behavior involves many physical effects on multiple scales. Therefore, advanced computational tools utilizing high-performance computing (HPC) are essential for accurately modeling them. In the past decade, the DOE SciDAC program has produced such accelerator-modeling tools, which have been employed to tackle some of the most difficult accelerator science problems. In this article we discuss the Synergia beam-dynamics framework and its applications to high-intensity particle accelerator physics. Synergia is an accelerator simulation package capable of handling the entire spectrum of beam dynamics simulations. We present the design principles, key physical and numerical models in Synergia and its performance on HPC platforms. Finally, we present the results of Synergia applications for the Fermilab proton source upgrade, known as the Proton Improvement Plan (PIP).

  15. Department of Energy Mathematical, Information, and Computational Sciences Division: High Performance Computing and Communications Program

    SciTech Connect (OSTI)

    NONE

    1996-11-01T23:59:59.000Z

    This document is intended to serve two purposes. Its first purpose is that of a program status report of the considerable progress that the Department of Energy (DOE) has made since 1993, the time of the last such report (DOE/ER-0536, The DOE Program in HPCC), toward achieving the goals of the High Performance Computing and Communications (HPCC) Program. The second purpose is that of a summary report of the many research programs administered by the Mathematical, Information, and Computational Sciences (MICS) Division of the Office of Energy Research under the auspices of the HPCC Program and to provide, wherever relevant, easy access to pertinent information about MICS-Division activities via universal resource locators (URLs) on the World Wide Web (WWW).

  16. Department of Energy: MICS (Mathematical Information, and Computational Sciences Division). High performance computing and communications program

    SciTech Connect (OSTI)

    NONE

    1996-06-01T23:59:59.000Z

    This document is intended to serve two purposes. Its first purpose is that of a program status report of the considerable progress that the Department of Energy (DOE) has made since 1993, the time of the last such report (DOE/ER-0536, "The DOE Program in HPCC"), toward achieving the goals of the High Performance Computing and Communications (HPCC) Program. The second purpose is that of a summary report of the many research programs administered by the Mathematical, Information, and Computational Sciences (MICS) Division of the Office of Energy Research under the auspices of the HPCC Program and to provide, wherever relevant, easy access to pertinent information about MICS-Division activities via universal resource locators (URLs) on the World Wide Web (WWW). The information pointed to by the URL is updated frequently, and the interested reader is urged to access the WWW for the latest information.

  17. Proactive Resource Management for Failure Resilient High Performance Computing Clusters

    E-Print Network [OSTI]

    Xu, Cheng-Zhong

    Cheng-Zhong Xu, Department of Computer Science / Department of Electrical & Computer Engineering. Proactive resource management for failure-resilient clusters requires the simultaneous use and control of hundreds of thousands or even millions of processing, storage, and networking resources.

  18. High Performance Computing Facility Operational Assessment, FY 2010 Oak Ridge Leadership Computing Facility

    SciTech Connect (OSTI)

    Bland, Arthur S. Buddy [ORNL]; Hack, James J. [ORNL]; Baker, Ann E. [ORNL]; Barker, Ashley D. [ORNL]; Boudwin, Kathlyn J. [ORNL]; Kendall, Ricky A. [ORNL]; Messer, Bronson [ORNL]; Rogers, James H. [ORNL]; Shipman, Galen M. [ORNL]; White, Julia C. [ORNL]

    2010-08-01T23:59:59.000Z

    Oak Ridge National Laboratory's (ORNL's) Cray XT5 supercomputer, Jaguar, kicked off the era of petascale scientific computing in 2008 with applications that sustained more than a thousand trillion floating point calculations per second - or 1 petaflop. Jaguar continues to grow even more powerful as it helps researchers broaden the boundaries of knowledge in virtually every domain of computational science, including weather and climate, nuclear energy, geosciences, combustion, bioenergy, fusion, and materials science. Their insights promise to broaden our knowledge in areas that are vitally important to the Department of Energy (DOE) and the nation as a whole, particularly energy assurance and climate change. The science of the 21st century, however, will demand further revolutions in computing, supercomputers capable of a million trillion calculations a second - 1 exaflop - and beyond. These systems will allow investigators to continue attacking global challenges through modeling and simulation and to unravel longstanding scientific questions. Creating such systems will also require new approaches to daunting challenges. High-performance systems of the future will need to be codesigned for scientific and engineering applications with best-in-class communications networks and data-management infrastructures and teams of skilled researchers able to take full advantage of these new resources. The Oak Ridge Leadership Computing Facility (OLCF) provides the nation's most powerful open resource for capability computing, with a sustainable path that will maintain and extend national leadership for DOE's Office of Science (SC). The OLCF has engaged a world-class team to support petascale science and to take a dramatic step forward, fielding new capabilities for high-end science. This report highlights the successful delivery and operation of a petascale system and shows how the OLCF fosters application development teams, developing cutting-edge tools and resources for next-generation systems.

  19. A compression scheme for radio data in high performance computing

    E-Print Network [OSTI]

    Masui, Kiyoshi; Connor, Liam; Deng, Meiling; Fandino, Mateus; Höfer, Carolin; Halpern, Mark; Hanna, David; Hincks, Adam D; Hinshaw, Gary; Parra, Juan Mena; Newburgh, Laura B; Shaw, J Richard; Vanderlinde, Keith

    2015-01-01T23:59:59.000Z

    We present a procedure for efficiently compressing astronomical radio data for high performance applications. Integrated, post-correlation data are first passed through a nearly lossless rounding step which compares the precision of the data to a generalized and calibration-independent form of the radiometer equation. This allows the precision of the data to be reduced in a way that has an insignificant impact on the data. The newly developed Bitshuffle lossless compression algorithm is subsequently applied. When the algorithm is used in conjunction with the HDF5 library and data format, data produced by the CHIME Pathfinder telescope is compressed to 28% of its original size and decompression throughputs in excess of 1 GB/s are obtained on a single core.
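
    A rough sketch of the storage side of such a pipeline is shown below, using h5py with the Bitshuffle HDF5 filter provided by the hdf5plugin package. The rounding rule here is a crude stand-in; the paper derives the permitted precision from a generalized radiometer equation, which is not reproduced.

```python
# Sketch of compressing rounded data with the Bitshuffle HDF5 filter.
# The rounding rule is a placeholder, not the paper's radiometer-based rule.
import numpy as np
import h5py
import hdf5plugin

def round_to_precision(data: np.ndarray, sigma: np.ndarray, fraction=0.1):
    """Quantize so the rounding error is a small fraction of the noise level."""
    step = fraction * sigma
    return np.round(data / step) * step

rng = np.random.default_rng(0)
vis = rng.normal(0.0, 1.0, size=(1024, 256)).astype(np.float32)
noise = np.full(vis.shape, 1.0, dtype=np.float32)   # placeholder noise estimate

rounded = round_to_precision(vis, noise)

with h5py.File("vis.h5", "w") as f:
    f.create_dataset("visibilities", data=rounded,
                     chunks=(128, 256), **hdf5plugin.Bitshuffle())
```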

  20. Delivering High-Performance Computational Chemistry to Science

    E-Print Network [OSTI]

    These tools address a wide range of large, challenging scientific questions for the U.S. Department of Energy, solving scientific computational chemistry problems efficiently and making full use of available parallel computing through the Global Array Toolkit, which provides an efficient and portable shared-memory programming interface.

  1. Scalable System Virtualization in High Performance Computing Systems

    E-Print Network [OSTI]

    Maccabe, Barney

    Authors include Ron Brightwell (Sandia National Laboratories), with collaborators in the Department of Electrical and Computer Engineering, Northwestern University, and the Scalable Systems Lab, Department of Computer Science, University of New Mexico. Palacios is a new open-source virtual machine monitor.

  2. A High Performance Computing Platform for Performing High-Volume Studies With Windows-based Power Grid Tools

    SciTech Connect (OSTI)

    Chen, Yousu; Huang, Zhenyu

    2014-08-31T23:59:59.000Z

    Serial Windows-based programs are widely used in power utilities. For applications that require high volume simulations, the single CPU runtime can be on the order of days or weeks. The lengthy runtime, along with the availability of low cost hardware, is leading utilities to seriously consider High Performance Computing (HPC) techniques. However, the vast majority of the HPC computers are still Linux-based and many HPC applications have been custom developed external to the core simulation engine without consideration for ease of use. This has created a technical gap for applying HPC-based tools to today’s power grid studies. To fill this gap and accelerate the acceptance and adoption of HPC for power grid applications, this paper presents a prototype of generic HPC platform for running Windows-based power grid programs on Linux-based HPC environment. The preliminary results show that the runtime can be reduced from weeks to hours to improve work efficiency.
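
    The record does not describe the prototype's internals; one plausible sketch of the idea is shown below, fanning independent study cases across Linux cores and running the unmodified Windows executable under Wine. The use of Wine, the executable name, and the case-file layout are all assumptions made for illustration.

```python
# One plausible sketch (assumptions only): run each independent study case of
# a Windows-based grid program under Wine, dispatched across Linux cores.
import subprocess
from concurrent.futures import ProcessPoolExecutor

WINDOWS_TOOL = "powertool.exe"     # hypothetical Windows-based grid program

def run_case(case_file: str) -> int:
    result = subprocess.run(
        ["wine", WINDOWS_TOOL, "--input", case_file],
        capture_output=True, text=True)
    return result.returncode

if __name__ == "__main__":
    cases = [f"case_{i:04d}.dat" for i in range(100)]   # hypothetical inputs
    with ProcessPoolExecutor(max_workers=16) as pool:
        codes = list(pool.map(run_case, cases))
    print(f"{codes.count(0)} of {len(cases)} cases completed successfully")
```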

  3. The demand for high performance computing research has been significantly increasing over the past few years. Various

    E-Print Network [OSTI]

    Akhmedov, Azer

    The demand for high performance computing research has been significantly increasing over the past few years. The goal is to promote the effective use of High Performance Computing in the research environment. In addition, the facility has enabled cutting-edge computational materials research.

  4. Scientific Grand Challenges: Forefront Questions in Nuclear Science and the Role of High Performance Computing

    SciTech Connect (OSTI)

    Khaleel, Mohammad A.

    2009-10-01T23:59:59.000Z

    This report is an account of the deliberations and conclusions of the workshop on "Forefront Questions in Nuclear Science and the Role of High Performance Computing" held January 26-28, 2009, co-sponsored by the U.S. Department of Energy (DOE) Office of Nuclear Physics (ONP) and the DOE Office of Advanced Scientific Computing (ASCR). Representatives from the national and international nuclear physics communities, as well as from the high performance computing community, participated. The purpose of this workshop was to 1) identify forefront scientific challenges in nuclear physics and then determine which, if any, of these could be aided by high performance computing at the extreme scale; 2) establish how and why new high performance computing capabilities could address issues at the frontiers of nuclear science; 3) provide nuclear physicists the opportunity to influence the development of high performance computing; and 4) provide the nuclear physics community with plans for development of future high performance computing capability by DOE ASCR.

  5. VIP-FS: A Virtual, Parallel file System for High Performance Parallel and Distributed Computing *

    E-Print Network [OSTI]

    Kuzmanovic, Aleksandar

    Message-passing libraries only provide part of the support necessary for most high performance distributed computing applications; support for high speed parallel I/O is still lacking. In this paper, we present VIP-FS, a virtual, parallel file system.

  6. High Performance Computing for Sequence Analysis (2010 JGI/ANL HPC Workshop)

    ScienceCinema (OSTI)

    Oehmen, Chris [PNNL]

    2011-06-08T23:59:59.000Z

    Chris Oehmen of the Pacific Northwest National Laboratory gives a presentation on "High Performance Computing for Sequence Analysis" at the JGI/Argonne HPC Workshop on January 25, 2010.

  7. Vehicle Technologies Office Merit Review 2013: Accelerating Predictive Simulation of IC Engines with High Performance Computing

    Broader source: Energy.gov [DOE]

    Presentation given by Oak Ridge National Laboratory at the 2013 DOE Hydrogen and Fuel Cells Program and Vehicle Technologies Program Annual Merit Review and Peer Evaluation Meeting about simulating internal combustion engines using high performance computing.

  8. High Performance Computing for Sequence Analysis (2010 JGI/ANL HPC Workshop)

    SciTech Connect (OSTI)

    Oehmen, Chris [PNNL]

    2010-01-25T23:59:59.000Z

    Chris Oehmen of the Pacific Northwest National Laboratory gives a presentation on "High Performance Computing for Sequence Analysis" at the JGI/Argonne HPC Workshop on January 25, 2010.

  9. Workshop on programming languages for high performance computing (HPCWPL): final report.

    SciTech Connect (OSTI)

    Murphy, Richard C.

    2007-05-01T23:59:59.000Z

    This report summarizes the deliberations and conclusions of the Workshop on Programming Languages for High Performance Computing (HPCWPL) held at the Sandia CSRI facility in Albuquerque, NM on December 12-13, 2006.

  10. High-performance computer system installed at Los Alamos National

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)


  11. Power System Probabilistic and Security Analysis on Commodity High Performance Computing Systems

    E-Print Network [OSTI]

    Franchetti, Franz

    Modern power system infrastructures also require merging offline security analyses into online operation. We present tools for power system probabilistic and security analysis on commodity high performance computing systems, including a high performance Monte Carlo simulation.

  12. High performance computing and communications grand challenges program

    SciTech Connect (OSTI)

    Solomon, J.E.; Barr, A.; Chandy, K.M.; Goddard, W.A., III; Kesselman, C.

    1994-10-01T23:59:59.000Z

    The so-called protein folding problem has numerous aspects, however it is principally concerned with the de novo prediction of three-dimensional (3D) structure from the protein primary amino acid sequence, and with the kinetics of the protein folding process. Our current project focuses on the 3D structure prediction problem which has proved to be an elusive goal of molecular biology and biochemistry. The number of local energy minima is exponential in the number of amino acids in the protein. All current methods of 3D structure prediction attempt to alleviate this problem by imposing various constraints that effectively limit the volume of conformational space which must be searched. Our Grand Challenge project consists of two elements: (1) a hierarchical methodology for 3D protein structure prediction; and (2) development of a parallel computing environment, the Protein Folding Workbench, for carrying out a variety of protein structure prediction/modeling computations. During the first three years of this project, we are focusing on the use of two proteins selected from the Brookhaven Protein Data Base (PDB) of known structure to provide validation of our prediction algorithms and their software implementation, both serial and parallel. Both proteins, protein L from Peptostreptococcus magnus, and streptococcal protein G, are known to bind to IgG, and both have an alpha + beta sandwich conformation. Although both proteins bind to IgG, they do so at different sites on the immunoglobin and it is of considerable biological interest to understand structurally why this is so. 12 refs., 1 fig.

  13. Reliable High Performance Peta- and Exa-Scale Computing

    SciTech Connect (OSTI)

    Bronevetsky, G

    2012-04-02T23:59:59.000Z

    As supercomputers become larger and more powerful, they are growing increasingly complex. This is reflected both in the exponentially increasing numbers of components in HPC systems (LLNL is currently installing the 1.6 million core Sequoia system) as well as the wide variety of software and hardware components that a typical system includes. At this scale it becomes infeasible to make each component sufficiently reliable to prevent regular faults somewhere in the system or to account for all possible cross-component interactions. The resulting faults and instability cause HPC applications to crash, perform sub-optimally or even produce erroneous results. As supercomputers continue to approach Exascale performance and full system reliability becomes prohibitively expensive, we will require novel techniques to bridge the gap between the lower reliability provided by hardware systems and users' unchanging need for consistent performance and reliable results. Previous research on HPC system reliability has developed various techniques for tolerating and detecting various types of faults. However, these techniques have seen very limited real applicability because of our poor understanding of how real systems are affected by complex faults such as soft fault-induced bit flips or performance degradations. Prior work on such techniques has had very limited practical utility because it has generally focused on analyzing the behavior of entire software/hardware systems both during normal operation and in the face of faults. Because such behaviors are extremely complex, such studies have only produced coarse behavioral models of limited sets of software/hardware system stacks. Since this provides little insight into the many different system stacks and applications used in practice, this work has had little real-world impact. My project addresses this problem by developing a modular methodology to analyze the behavior of applications and systems during both normal and faulty operation. By synthesizing models of individual components into whole-system behavior models, my work is making it possible to automatically understand the behavior of arbitrary real-world systems to enable them to tolerate a wide range of system faults. My project is following a multi-pronged research strategy. Section II discusses my work on modeling the behavior of existing applications and systems. Section II.A discusses resilience in the face of soft faults and Section II.B looks at techniques to tolerate performance faults. Finally Section III presents an alternative approach that studies how a system should be designed from the ground up to make resilience natural and easy.

  14. Generalized Portable SHMEM Library for High Performance Computing

    SciTech Connect (OSTI)

    Krzysztof Parzyszek

    2003-08-05T23:59:59.000Z

    This dissertation describes the efforts to design and implement the Generalized Portable SHMEM library, GPSHMEM, as well as supplementary tools. There are two major components of the GPSHMEM project: the GPSHMEM library itself and the Fortran 77 source-to-source translator. The rest of this thesis is divided into two parts. Part I introduces the shared memory model and the distributed shared memory model. It explains the motivation behind GPSHMEM and presents its functionality and performance results. Part II is entirely devoted to the Fortran 77 translator, called fgpp. The need for such a tool is demonstrated, functionality goals are stated, and the design issues are presented along with the development of the solutions.

  15. Introduction to High Performance Computers Richard Gerber NERSC User Services

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)


  16. High Performance Computing Data Center Metering Protocol | Department of

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site


  17. Dan Bonachea, Computer Science Division, EECS, University of California, Berkeley Titanium: A High Performance

    E-Print Network [OSTI]

    California at Berkeley, University of

    Titanium: A High Performance Dialect of Java. Dan Bonachea, Computer Science Division, EECS, University of California, Berkeley. http://www.cs.berkeley.edu/projects/titanium. Titanium Group: Susan Graham, Katherine Yelick, Paul Hilfinger.

  18. Process for selecting NEAMS applications for access to Idaho National Laboratory high performance computing resources

    SciTech Connect (OSTI)

    Michael Pernice

    2010-09-01T23:59:59.000Z

    INL has agreed to provide participants in the Nuclear Energy Advanced Modeling and Simulation (NEAMS) program with access to its high performance computing (HPC) resources under sponsorship of the Enabling Computational Technologies (ECT) program element. This report documents the process used to select applications and the software stack in place at INL.

  19. 3rd Workshop on System-level Virtualization for High Performance Computing (HPCVirt) 2009, Nuremberg, Germany, March 30, 2009

    E-Print Network [OSTI]

    Engelmann, Christian

    3rd Workshop on System-level Virtualization for High Performance Computing (HPCVirt) 2009, Nuremberg, Germany, March 30, 2009. Outline: background work.

  20. Feb. 11, 2008 Advanced Fault Tolerance Solutions for High Performance Computing 1/47 Advanced Fault Tolerance Solutions

    E-Print Network [OSTI]

    Engelmann, Christian

    Advanced Fault Tolerance Solutions for High Performance Computing. Christian Engelmann, Oak Ridge National Laboratory, Feb. 11, 2008. ORNL is the nation's largest energy laboratory.

  1. February 13, 2008 Virtualized Environments for the Harness High Performance Computing Workbench 1/17 Virtualized Environments for the Harness

    E-Print Network [OSTI]

    Engelmann, Christian

    Virtualized Environments for the Harness High Performance Computing Workbench. Björn Könning and Christian Engelmann, February 13, 2008.

  2. Cluster Computing: High-Performance, High-Availability, and High-Throughput Processing on a Network of Computers

    E-Print Network [OSTI]

    Buyya, Rajkumar

    Dept. of Computer Science and Software Engineering, The University of Melbourne, Australia; Parallel and Distributed Systems Laboratory, Department of Computer Science. An early concept for cluster computing was developed in the 1960s by IBM as an alternative way of linking large mainframes.

  3. 2011 DoD High Performance Computing Modernization Program Users Group Conference A Web-based High-Throughput Tool for Next-Generation Sequence Annotation

    E-Print Network [OSTI]

    2011 DoD High Performance Computing Modernization Program Users Group Conference: A Web-based High-Throughput Tool for Next-Generation Sequence Annotation. Its two components are deployed on the Mana Linux cluster at the Maui High Performance Computing Center.

  4. Investigating methods of supporting dynamically linked executables on high performance computing platforms.

    SciTech Connect (OSTI)

    Kelly, Suzanne Marie; Laros, James H., III; Pedretti, Kevin Thomas Tauke; Levenhagen, Michael J.

    2009-09-01T23:59:59.000Z

    Shared libraries have become ubiquitous and are used to achieve great resource efficiencies on many platforms. The same properties that enable efficiencies on time-shared computers and convenience on small clusters prove to be great obstacles to scalability on large clusters and High Performance Computing platforms. In addition, Light Weight operating systems such as Catamount have historically not supported the use of shared libraries specifically because they hinder scalability. In this report we will outline the methods of supporting shared libraries on High Performance Computing platforms using Light Weight kernels that we investigated. The considerations necessary to evaluate utility in this area are many and sometimes conflicting. While our initial path forward has been determined based on this evaluation we consider this effort ongoing and remain prepared to re-evaluate any technology that might provide a scalable solution. This report is an evaluation of a range of possible methods of supporting dynamically linked executables on capability-class High Performance Computing platforms. Efforts are ongoing and extensive testing at scale is necessary to evaluate performance. While performance is a critical driving factor, supporting whatever method is used in a production environment is an equally important and challenging task.

  5. High-performance Computation and Visualization of Plasma Turbulence on Graphics Processors

    E-Print Network [OSTI]

    Varshney, Amitabh

    High-performance Computation and Visualization of Plasma Turbulence on Graphics Processors. Turbulence in plasma can lead to energy losses and various catastrophic events in thermonuclear fusion devices. These computations can be mapped efficiently to modern graphics processors, dramatically reducing cost while increasing both performance and interactivity.

  6. 1MAKING WORKFLOW APPLICATIONS WORK The International Journal of High Performance Computing Applications,

    E-Print Network [OSTI]

    Deelman, Ewa

    Making Workflow Applications Work. The International Journal of High Performance Computing Applications. Scientific workflows are being used to bring together these various resources and answer complex research questions.

  7. UNBC HPC Policy last revised: 2/1/2007, 2:49:29 PM UNBC Enhanced High Performance Computing Center

    E-Print Network [OSTI]

    Northern British Columbia, University of

    UNBC HPC Policy, last revised 2/1/2007. Membership Policies: The UNBC Enhanced High Performance Computing Center (called "UNBC HPC" hereafter) provides computing resources and services to members as grants and contracts are sought by Principal Investigators.

  8. Acts -- A collection of high performing software tools for scientific computing

    SciTech Connect (OSTI)

    Drummond, L.A.; Marques, O.A.

    2002-11-01T23:59:59.000Z

    During the past decades there has been a continuous growth in the number of physical and societal problems that have been successfully studied and solved by means of computational modeling and simulation. Further, many new discoveries depend on high performance computer simulations to satisfy their demands for large computational resources and short response time. The Advanced CompuTational Software (ACTS) Collection brings together a number of general-purpose computational tool development projects funded and supported by the U.S. Department of Energy (DOE). These tools make it easier for scientific code developers to write high performance applications for parallel computers. They tackle a number of computational issues that are common to a large number of scientific applications, mainly implementation of numerical algorithms, and support for code development, execution and optimization. The ACTS Collection promotes code portability, reusability, reduction of duplicate efforts, and tool maturity. This paper presents a brief introduction to the functionality available in ACTS. It also highlights the tools that are in demand by climate and weather modelers.

  9. High performance photonic reservoir computer based on a coherently driven passive cavity

    E-Print Network [OSTI]

    Vinckier, Quentin; Smerieri, Anteo; Vandoorne, Kristof; Bienstman, Peter; Haelterman, Marc; Massar, Serge

    2015-01-01T23:59:59.000Z

    Reservoir computing is a recent bio-inspired approach for processing time-dependent signals. It has enabled a breakthrough in analog information processing, with several experiments, both electronic and optical, demonstrating state-of-the-art performances for hard tasks such as speech recognition, time series prediction and nonlinear channel equalization. A proof-of-principle experiment using a linear optical circuit on a photonic chip to process digital signals was recently reported. Here we present the first implementation of a photonic reservoir computer based on a coherently driven passive fiber cavity processing analog signals. Our experiment surpasses all previous experiments on a wide variety of tasks, and also has lower power consumption. Furthermore, the analytical model describing our experiment is also of interest, as it arguably constitutes the simplest high performance reservoir computer algorithm introduced so far. The present experiment, given its remarkable performances, low energy consumption...
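
    As a point of comparison for the algorithmic side only (not a model of the fiber-cavity hardware), a minimal software reservoir computer, an echo-state-style network with a ridge-regression readout, can be written in a few lines; the task, sizes, and regularization below are arbitrary illustrative choices.

```python
# Minimal software analogue of reservoir computing: a random, fixed recurrent
# network driven by the input, with only a linear readout trained (ridge
# regression). This is an algorithmic sketch, not the photonic experiment.
import numpy as np

rng = np.random.default_rng(1)
n_res, n_steps = 200, 2000

# Fixed random reservoir and input weights (never trained).
W = rng.normal(0, 1, (n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # scale spectral radius below 1
w_in = rng.normal(0, 0.5, n_res)

u = rng.uniform(-1, 1, n_steps)                   # input sequence
y = np.roll(u, 3)                                 # task: recall the input 3 steps back

# Drive the reservoir and collect its states.
x = np.zeros(n_res)
states = np.zeros((n_steps, n_res))
for t in range(n_steps):
    x = np.tanh(W @ x + w_in * u[t])
    states[t] = x

# Train only the linear readout with ridge regression.
reg = 1e-4
w_out = np.linalg.solve(states.T @ states + reg * np.eye(n_res), states.T @ y)

pred = states @ w_out
print("NMSE:", np.mean((pred[100:] - y[100:]) ** 2) / np.var(y[100:]))
```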

  10. High performance computing and communications: Advancing the frontiers of information technology

    SciTech Connect (OSTI)

    NONE

    1997-12-31T23:59:59.000Z

    This report, which supplements the President's Fiscal Year 1997 Budget, describes the interagency High Performance Computing and Communications (HPCC) Program. The HPCC Program will celebrate its fifth anniversary in October 1996 with an impressive array of accomplishments to its credit. Over its five-year history, the HPCC Program has focused on developing high performance computing and communications technologies that can be applied to computation-intensive applications. Major highlights for FY 1996: (1) High performance computing systems enable practical solutions to complex problems with accuracies not possible five years ago; (2) HPCC-funded research in very large scale networking techniques has been instrumental in the evolution of the Internet, which continues exponential growth in size, speed, and availability of information; (3) The combination of hardware capability measured in gigaflop/s, networking technology measured in gigabit/s, and new computational science techniques for modeling phenomena has demonstrated that very large scale accurate scientific calculations can be executed across heterogeneous parallel processing systems located thousands of miles apart; (4) Federal investments in HPCC software R and D support researchers who pioneered the development of parallel languages and compilers, high performance mathematical, engineering, and scientific libraries, and software tools--technologies that allow scientists to use powerful parallel systems to focus on Federal agency mission applications; and (5) HPCC support for virtual environments has enabled the development of immersive technologies, where researchers can explore and manipulate multi-dimensional scientific and engineering problems. Educational programs fostered by the HPCC Program have brought into classrooms new science and engineering curricula designed to teach computational science. This document contains a small sample of the significant HPCC Program accomplishments in FY 1996.

  11. Opening Remarks from the Joint Genome Institute and Argonne Lab High Performance Computing Workshop (2010 JGI/ANL HPC Workshop)

    ScienceCinema (OSTI)

    Rubin, Eddy

    2011-06-03T23:59:59.000Z

    DOE JGI Director Eddy Rubin gives opening remarks at the JGI/Argonne High Performance Computing (HPC) Workshop on January 25, 2010.

  12. Opening Remarks from the Joint Genome Institute and Argonne Lab High Performance Computing Workshop (2010 JGI/ANL HPC Workshop)

    SciTech Connect (OSTI)

    Rubin, Eddy

    2010-01-25T23:59:59.000Z

    DOE JGI Director Eddy Rubin gives opening remarks at the JGI/Argonne High Performance Computing (HPC) Workshop on January 25, 2010.

  13. An Overview of High Performance Computing and Challenges for the Future

    ScienceCinema (OSTI)

    Google Tech Talks

    2009-09-01T23:59:59.000Z

    In this talk we examine how high performance computing has changed over the last 10 years and look toward the future in terms of trends. These changes have had and will continue to have a major impact on our software. A new generation of software libraries and algorithms are needed for the effective and reliable use of (wide area) dynamic, distributed and parallel environments. Some of the software and algorithm challenges have already been encountered, such as management of communication and memory hierarchies through a combination of compile-time and run-time techniques, but the increased scale of computation, depth of memory hierarchies, range of latencies, and increased run-time environment variability will make these problems much harder. We will focus on the redesign of software to fit multicore architectures. Speaker: Jack Dongarra, University of Tennessee, Oak Ridge National Laboratory, University of Manchester. Jack Dongarra received a Bachelor of Science in Mathematics from Chicago State University in 1972 and a Master of Science in Computer Science from the Illinois Institute of Technology in 1973. He received his Ph.D. in Applied Mathematics from the University of New Mexico in 1980. He worked at the Argonne National Laboratory until 1989, becoming a senior scientist. He now holds an appointment as University Distinguished Professor of Computer Science in the Electrical Engineering and Computer Science Department at the University of Tennessee, has the position of a Distinguished Research Staff member in the Computer Science and Mathematics Division at Oak Ridge National Laboratory (ORNL), Turing Fellow in the Computer Science and Mathematics Schools at the University of Manchester, and an Adjunct Professor in the Computer Science Department at Rice University. He specializes in numerical algorithms in linear algebra, parallel computing, the use of advanced computer architectures, programming methodology, and tools for parallel computers. His research includes the development, testing and documentation of high quality mathematical software. He has contributed to the design and implementation of the following open source software packages and systems: EISPACK, LINPACK, the BLAS, LAPACK, ScaLAPACK, Netlib, PVM, MPI, NetSolve, Top500, ATLAS, and PAPI. He has published approximately 200 articles, papers, reports and technical memoranda and he is coauthor of several books. He was awarded the IEEE Sid Fernbach Award in 2004 for his contributions in the application of high performance computers using innovative approaches. He is a Fellow of the AAAS, ACM, and the IEEE and a member of the National Academy of Engineering.

  14. An Overview of High Performance Computing and Challenges for the Future

    SciTech Connect (OSTI)

    Google Tech Talks

    2008-01-26T23:59:59.000Z

    In this talk we examine how high performance computing has changed over the last 10 years and look toward the future in terms of trends. These changes have had and will continue to have a major impact on our software. A new generation of software libraries and algorithms are needed for the effective and reliable use of (wide area) dynamic, distributed and parallel environments. Some of the software and algorithm challenges have already been encountered, such as management of communication and memory hierarchies through a combination of compile-time and run-time techniques, but the increased scale of computation, depth of memory hierarchies, range of latencies, and increased run-time environment variability will make these problems much harder. We will focus on the redesign of software to fit multicore architectures. Speaker: Jack Dongarra, University of Tennessee, Oak Ridge National Laboratory, University of Manchester. Jack Dongarra received a Bachelor of Science in Mathematics from Chicago State University in 1972 and a Master of Science in Computer Science from the Illinois Institute of Technology in 1973. He received his Ph.D. in Applied Mathematics from the University of New Mexico in 1980. He worked at the Argonne National Laboratory until 1989, becoming a senior scientist. He now holds an appointment as University Distinguished Professor of Computer Science in the Electrical Engineering and Computer Science Department at the University of Tennessee, has the position of a Distinguished Research Staff member in the Computer Science and Mathematics Division at Oak Ridge National Laboratory (ORNL), Turing Fellow in the Computer Science and Mathematics Schools at the University of Manchester, and an Adjunct Professor in the Computer Science Department at Rice University. He specializes in numerical algorithms in linear algebra, parallel computing, the use of advanced computer architectures, programming methodology, and tools for parallel computers. His research includes the development, testing and documentation of high quality mathematical software. He has contributed to the design and implementation of the following open source software packages and systems: EISPACK, LINPACK, the BLAS, LAPACK, ScaLAPACK, Netlib, PVM, MPI, NetSolve, Top500, ATLAS, and PAPI. He has published approximately 200 articles, papers, reports and technical memoranda and he is coauthor of several books. He was awarded the IEEE Sid Fernbach Award in 2004 for his contributions in the application of high performance computers using innovative approaches. He is a Fellow of the AAAS, ACM, and the IEEE and a member of the National Academy of Engineering.

  15. NERSC 2011: High Performance Computing Facility Operational Assessment for the National Energy Research Scientific Computing Center

    E-Print Network [OSTI]

    Antypas, Katie

    2013-01-01T23:59:59.000Z

    ... tested, and preventive maintenance is scheduled. Safety ... and perform preventive maintenance. Review and update ...

  16. Matrix multiplication operations with data pre-conditioning in a high performance computing architecture

    DOE Patents [OSTI]

    Eichenberger, Alexandre E; Gschwind, Michael K; Gunnels, John A

    2013-11-05T23:59:59.000Z

    Mechanisms for performing matrix multiplication operations with data pre-conditioning in a high performance computing architecture are provided. A vector load operation is performed to load a first vector operand of the matrix multiplication operation to a first target vector register. A load and splat operation is performed to load an element of a second vector operand and replicating the element to each of a plurality of elements of a second target vector register. A multiply add operation is performed on elements of the first target vector register and elements of the second target vector register to generate a partial product of the matrix multiplication operation. The partial product of the matrix multiplication operation is accumulated with other partial products of the matrix multiplication operation.
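    The load-and-splat pattern described above can be pictured in plain scalar code: one operand element is loaded once, conceptually replicated, and combined with a contiguous run of the other operand in repeated multiply-adds that accumulate partial products. The sketch below is only an illustration of that access pattern in standard C (function and array names are invented here), not the vectorized mechanism the patent describes.

      #include <stdio.h>

      #define N 4  /* small size so the example stays readable */

      /* C += A * B organized as rank-1 updates: for each (i, k), A[i][k] is
         loaded once and reused ("splat") against a contiguous row of B,
         accumulating partial products into a row of C with multiply-adds. */
      static void matmul_splat(double A[N][N], double B[N][N], double C[N][N])
      {
          for (int i = 0; i < N; i++)
              for (int k = 0; k < N; k++) {
                  double a_ik = A[i][k];             /* the "splatted" element */
                  for (int j = 0; j < N; j++)
                      C[i][j] += a_ik * B[k][j];     /* multiply-add step */
              }
      }

      int main(void)
      {
          double A[N][N] = {{1,2,3,4},{5,6,7,8},{9,10,11,12},{13,14,15,16}};
          double B[N][N] = {{1,0,0,0},{0,1,0,0},{0,0,1,0},{0,0,0,1}};  /* identity */
          double C[N][N] = {{0}};
          matmul_splat(A, B, C);
          printf("C[2][1] = %g (expect 10)\n", C[2][1]);
          return 0;
      }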

  17. Toward a Performance/Resilience Tool for Hardware/Software Co-Design of High-Performance Computing Systems

    SciTech Connect (OSTI)

    Engelmann, Christian [ORNL]; Naughton, Thomas J., III [ORNL]

    2013-01-01T23:59:59.000Z

    xSim is a simulation-based performance investigation toolkit that permits running high-performance computing (HPC) applications in a controlled environment with millions of concurrent execution threads, while observing application performance in a simulated extreme-scale system for hardware/software co-design. The presented work details newly developed features for xSim that permit the injection of MPI process failures, the propagation/detection/notification of such failures within the simulation, and their handling using application-level checkpoint/restart. These new capabilities enable the observation of application behavior and performance under failure within a simulated future-generation HPC system using the most common fault handling technique.

  18. 198 Int. J. High Performance Computing and Networking, Vol. 4, Nos. 3/4, 2006 Copyright 2006 Inderscience Enterprises Ltd.

    E-Print Network [OSTI]

    Shen, Jian - Department of Mathematics, Texas State University

    198 Int. J. High Performance Computing and Networking, Vol. 4, Nos. 3/4, 2006. Copyright © 2006 Inderscience Enterprises Ltd. Biographical notes: Xiao Chen ... (2006) 'Improved schemes for power-efficient broadcast in ad hoc networks', Int. J. High Performance Computing and Networking, Vol. 4, Nos. 3/4, pp. 198-206.

  19. Towards Real-Time High Performance Computing For Power Grid Analysis

    SciTech Connect (OSTI)

    Hui, Peter SY; Lee, Barry; Chikkagoudar, Satish

    2012-11-16T23:59:59.000Z

    Real-time computing has traditionally been considered largely in the context of single-processor and embedded systems, and indeed, the terms real-time computing, embedded systems, and control systems are often mentioned in closely related contexts. However, real-time computing in the context of multinode systems, specifically high-performance, cluster-computing systems, remains relatively unexplored. Imposing real-time constraints on a parallel (cluster) computing environment introduces a variety of challenges with respect to the formal verification of the system's timing properties. In this paper, we give a motivating example to demonstrate the need for such a system, an application to estimate the electromechanical states of the power grid, and we introduce a formal method for performing verification of certain temporal properties within a system of parallel processes. We describe our work towards a full real-time implementation of the target application, namely, our progress towards extracting a key mathematical kernel from the application, the formal process by which we analyze the intricate timing behavior of the processes on the cluster, as well as timing measurements taken on our test cluster to demonstrate use of these concepts.

  20. Astrocomp: a web service for the use of high performance computers in Astrophysics

    E-Print Network [OSTI]

    U. Becciani; R. Capuzzo Dolcetta; A. Costa; P. Di Matteo; P. Miocchi; V. Rosato

    2004-07-27T23:59:59.000Z

    Astrocomp is a joint project, developed by the INAF-Astrophysical Observatory of Catania, University of Roma La Sapienza and Enea. The project has the goal of providing the scientific community with a web-based, user-friendly interface which allows running parallel codes on a set of high-performance computing (HPC) resources, without any need for specific knowledge about parallel programming and Operating System commands. Astrocomp also provides computing time on a set of parallel computing systems, available to authorized users. At present, the portal makes a few codes available, among which: FLY, a cosmological code for studying three-dimensional collisionless self-gravitating systems with periodic boundary conditions; ATD, a parallel tree-code for the simulation of the dynamics of boundary-free collisional and collisionless self-gravitating systems; and MARA, a code for stellar light curves analysis. Other codes are going to be added to the portal.

  1. Proc. Fourth IDEA Workshop, Magnetic Island, 17-20 May 1997, and Technical Note DHPC-006. Trends in High Performance Computing

    E-Print Network [OSTI]

    Hawick, Ken

    Trends in High Performance Computing. K.A. Hawick, Department of Computer Science, University of Adelaide, South Australia ... What used to be referred to as "Supercomputing" became "High Performance Computing" ... A possible new acronym for the collective field is "Distributed, High Performance, Computing" ...

  2. Evaluating Performance, Power, and Cooling in High Performance Computing (HPC) Data Centers

    SciTech Connect (OSTI)

    Evans, Jeffrey; Gupta, Sandeep; Karavanic, Karen; Marquez, Andres; Varsamopoulos, Georgios

    2012-01-24T23:59:59.000Z

    This chapter explores current research focused on developing our understanding of the interrelationships involved with HPC performance and energy management. The first section explores data center instrumentation, measurement, and performance analysis techniques, followed by a section focusing on work in data center thermal management and resource allocation. This is followed by an exploration of emerging techniques to identify application behavioral attributes that can provide clues and advice to HPC resource and energy management systems for the purpose of balancing HPC performance and energy efficiency.

  3. Measuring and tuning energy efficiency on large scale high performance computing platforms.

    SciTech Connect (OSTI)

    Laros, James H., III

    2011-08-01T23:59:59.000Z

    Recognition of the importance of power in the field of High Performance Computing, whether it be as an obstacle, expense or design consideration, has never been greater and more pervasive. While research has been conducted on many related aspects, there is a stark absence of work focused on large scale High Performance Computing. Part of the reason is the lack of measurement capability currently available on small or large platforms. Typically, research is conducted using coarse methods of measurement such as inserting a power meter between the power source and the platform, or fine grained measurements using custom instrumented boards (with obvious limitations in scale). To collect the measurements necessary to analyze real scientific computing applications at large scale, an in-situ measurement capability must exist on a large scale capability class platform. In response to this challenge, we exploit the unique power measurement capabilities of the Cray XT architecture to gain an understanding of power use and the effects of tuning. We apply these capabilities at the operating system level by deterministically halting cores when idle. At the application level, we gain an understanding of the power requirements of a range of important DOE/NNSA production scientific computing applications running at large scale (thousands of nodes), while simultaneously collecting current and voltage measurements on the hosting nodes. We examine the effects of both CPU and network bandwidth tuning and demonstrate energy savings opportunities of up to 39% with little or no impact on run-time performance. Capturing scale effects in our experimental results was key. Our results provide strong evidence that next generation large-scale platforms should not only approach CPU frequency scaling differently, but could also benefit from the capability to tune other platform components, such as the network, to achieve energy efficient performance.

  4. RAPID for high-performance computing systems: architecture and performance evaluation

    E-Print Network [OSTI]

    Louri, Ahmed

    Karanth Kodi and Ahmed Louri. The limited bandwidth and the increase in power dissipation at longer ... power consumption, and enhances scalability. We also present two cost-effective design alternatives ... RAPID architecture and compare it to several electrical HPCS interconnects. Based on the performance ...

  5. DOE High Performance Computing Operational Review (HPCOR): Enabling Data-Driven Scientific Discovery at HPC Facilities

    SciTech Connect (OSTI)

    Gerber, Richard; Allcock, William; Beggio, Chris; Campbell, Stuart; Cherry, Andrew; Cholia, Shreyas; Dart, Eli; England, Clay; Fahey, Tim; Foertter, Fernanda; Goldstone, Robin; Hick, Jason; Karelitz, David; Kelly, Kaki; Monroe, Laura; Prabhat; Skinner, David; White, Julia

    2014-10-17T23:59:59.000Z

    U.S. Department of Energy (DOE) High Performance Computing (HPC) facilities are on the verge of a paradigm shift in the way they deliver systems and services to science and engineering teams. Research projects are producing a wide variety of data at unprecedented scale and level of complexity, with community-specific services that are part of the data collection and analysis workflow. On June 18-19, 2014, representatives from six DOE HPC centers met in Oakland, CA at the DOE High Performance Computing Operational Review (HPCOR) to discuss how they can best provide facilities and services to enable large-scale data-driven scientific discovery at the DOE national laboratories. The report contains findings from that review.

  6. GridPACK Toolkit for Developing Power Grid Simulations on High Performance Computing Platforms

    SciTech Connect (OSTI)

    Palmer, Bruce J.; Perkins, William A.; Glass, Kevin A.; Chen, Yousu; Jin, Shuangshuang; Callahan, Charles D.

    2013-11-30T23:59:59.000Z

    This paper describes the GridPACK™ framework, which is designed to help power grid engineers develop modeling software capable of running on today's high performance computers. The framework contains modules for setting up distributed power grid networks, assigning buses and branches with arbitrary behaviors to the network, creating distributed matrices and vectors, using parallel linear and non-linear solvers to solve algebraic equations, and mapping functionality to create matrices and vectors based on properties of the network. In addition, the framework contains additional functionality to support IO and to manage errors.

  7. Investigating Operating System Noise in Extreme-Scale High-Performance Computing Systems using Simulation

    SciTech Connect (OSTI)

    Engelmann, Christian [ORNL]

    2013-01-01T23:59:59.000Z

    Hardware/software co-design for future-generation high-performance computing (HPC) systems aims at closing the gap between the peak capabilities of the hardware and the performance realized by applications (application-architecture performance gap). Performance profiling of architectures and applications is a crucial part of this iterative process. The work in this paper focuses on operating system (OS) noise as an additional factor to be considered for co-design. It represents the first step in including OS noise in HPC hardware/software co-design by adding a noise injection feature to an existing simulation-based co-design toolkit. It reuses an existing abstraction for OS noise with frequency (periodic recurrence) and period (duration of each occurrence) to enhance the processor model of the Extreme-scale Simulator (xSim) with synchronized and random OS noise simulation. The results demonstrate this capability by evaluating the impact of OS noise on MPI_Bcast() and MPI_Reduce() in a simulated future-generation HPC system with 2,097,152 compute nodes.
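    The noise abstraction described above lends itself to a back-of-the-envelope model: noise that recurs with a given frequency and occupies a given duration per occurrence inflates a block of pure compute time by the corresponding overhead fraction. The sketch below is a generic illustration of that model in C (not the xSim implementation or its API); the example numbers are invented.

      #include <stdio.h>

      /* Expected wall time for a block of compute when periodic noise steals
         freq_hz * duration_s of every second (valid while that product < 1). */
      static double noisy_runtime(double compute_s, double freq_hz, double duration_s)
      {
          double overhead = freq_hz * duration_s;   /* fraction of time lost to noise */
          return compute_s / (1.0 - overhead);
      }

      int main(void)
      {
          /* e.g. 10 noise events per second, 50 microseconds each: 0.05% overhead */
          printf("1 s of compute takes about %.6f s\n", noisy_runtime(1.0, 10.0, 50e-6));
          return 0;
      }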

  8. High performance systems

    SciTech Connect (OSTI)

    Vigil, M.B. [comp.]

    1995-03-01T23:59:59.000Z

    This document provides a written compilation of the presentations and viewgraphs from the 1994 Conference on High Speed Computing given at the High Speed Computing Conference, "High Performance Systems," held at Gleneden Beach, Oregon, on April 18 through 21, 1994.

  9. High Performance Computing in the U.S. in An Analysis on the Basis of the TOP500 List

    E-Print Network [OSTI]

    Dongarra, Jack

    High Performance Computing in the U.S. in 1995: An Analysis on the Basis of the TOP500 List. Jack J. Dongarra, Computer Science Department, University of Tennessee, Knoxville, TN 37996-1301, and Mathematical Science Section, Oak Ridge National Laboratory, Oak Ridge, TN 37831-6367, dongarra@cs.utk.edu, and Horst D. ...

  10. Implementing the Data Center Energy Productivity Metric in a High Performance Computing Data Center

    SciTech Connect (OSTI)

    Sego, Landon H.; Marquez, Andres; Rawson, Andrew; Cader, Tahir; Fox, Kevin M.; Gustafson, William I.; Mundy, Christopher J.

    2013-06-30T23:59:59.000Z

    As data centers proliferate in size and number, the improvement of their energy efficiency and productivity has become an economic and environmental imperative. Making these improvements requires metrics that are robust, interpretable, and practical. We discuss the properties of a number of the proposed metrics of energy efficiency and productivity. In particular, we focus on the Data Center Energy Productivity (DCeP) metric, which is the ratio of useful work produced by the data center to the energy consumed performing that work. We describe our approach for using DCeP as the principal outcome of a designed experiment using a highly instrumented, high-performance computing data center. We found that DCeP was successful in clearly distinguishing different operational states in the data center, thereby validating its utility as a metric for identifying configurations of hardware and software that would improve energy productivity. We also discuss some of the challenges and benefits associated with implementing the DCeP metric, and we examine the efficacy of the metric in making comparisons within a data center and between data centers.
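    As defined above, DCeP is a ratio: useful work produced by the data center divided by the energy consumed producing it. A minimal sketch of the calculation follows (the work units and the numbers are placeholders, not values from the study).

      #include <stdio.h>

      /* Data Center Energy Productivity: useful work per unit of energy consumed.
         "Useful work" is whatever task-weighted measure the site defines. */
      static double dcep(double useful_work_units, double energy_kwh)
      {
          return useful_work_units / energy_kwh;
      }

      int main(void)
      {
          /* placeholder figures: 1200 work units produced while drawing 850 kWh */
          printf("DCeP = %.2f work units per kWh\n", dcep(1200.0, 850.0));
          return 0;
      }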

  11. High-Performance Computing for Real-Time Grid Analysis and Operation

    SciTech Connect (OSTI)

    Huang, Zhenyu; Chen, Yousu; Chavarría-Miranda, Daniel

    2013-10-31T23:59:59.000Z

    Power grids worldwide are undergoing an unprecedented transition as a result of grid evolution meeting information revolution. The grid evolution is largely driven by the desire for green energy. Emerging grid technologies such as renewable generation, smart loads, plug-in hybrid vehicles, and distributed generation provide opportunities to generate energy from green sources and to manage energy use for better system efficiency. With utility companies actively deploying these technologies, a high level of penetration of these new technologies is expected in the next 5-10 years, bringing in a level of intermittency, uncertainty, and complexity that the grid has not seen and was not designed for. On the other hand, the information infrastructure in the power grid is being revolutionized with large-scale deployment of sensors and meters in both the transmission and distribution networks. The future grid will have two-way flows of both electrons and information. The challenge is how to take advantage of the information revolution: pull the large amount of data in, process it in real time, and put information out to manage grid evolution. Without addressing this challenge, the opportunities in grid evolution will remain unfulfilled. This transition poses grand challenges in grid modeling, simulation, and information presentation. The computational complexity of underlying power grid modeling and simulation will significantly increase in the next decade due to an increased model size and a decreased time window allowed to compute model solutions. High-performance computing is essential to enable this transition. The essential technical barrier is to vastly increase the computational speed so that operation response time can be reduced from minutes to seconds and sub-seconds. The speed at which key functions such as state estimation and contingency analysis are conducted (typically every 3-5 minutes) needs to be dramatically increased so that the analysis of contingencies is both comprehensive and real time. An even bigger challenge is how to incorporate dynamic information into real-time grid operation. Today’s online grid operation is based on a static grid model and can only provide a static snapshot of current system operation status, while dynamic analysis is conducted offline because of low computational efficiency. The offline analysis uses a worst-case scenario to determine transmission limits, resulting in under-utilization of grid assets. This conservative approach does not necessarily lead to reliability. Many times, actual power grid scenarios that have not been studied will push the grid over the edge, resulting in outages and blackouts. This chapter addresses the HPC needs in power grid analysis and operations. Example applications such as state estimation and contingency analysis are given to demonstrate the value of HPC in power grid applications. Future research directions are suggested for high performance computing applications in power grids to improve their transparency, efficiency, and reliability.

  12. SEPTEMBER 2011 VOLUME 4 NUMBER 3 IJSTHZ (ISSN 1939-1404) SPECIAL ISSUE ON HIGH PERFORMANCE COMPUTING IN EARTH OBSERVATION AND REMOTE SENSING

    E-Print Network [OSTI]

    Plaza, Antonio J.

    (ISSN 1939-1404) Special Issue on High Performance Computing in Earth Observation and Remote Sensing. Foreword to the Special Issue on High Performance Computing in Earth Observation and Remote Sensing, C. A. Lee, S. D. Gasster, A. Plaza, C.-I Chang, and B. Huang, p. 508. High Performance Computing ...

  13. Turner Construction is looking for interns at our Massachusetts Green High Performance Computing Center (MGHPCC) job site in Holyoke, MA. The ideal candidates would have the following qualifications

    E-Print Network [OSTI]

    Spence, Harlan Ernest

    Turner Construction is looking for interns at our Massachusetts Green High Performance Computing Center (MGHPCC) job site ... summer positions: May-June, July-August. About Massachusetts Green High Performance Computing Center ... Engineer attdriscoll@tcco.com. Massachusetts Green High Performance Computing Center Intern Positions.

  14. A High Performance Computing Network and System Simulator for the Power Grid: NGNS^2

    SciTech Connect (OSTI)

    Villa, Oreste; Tumeo, Antonino; Ciraci, Selim; Daily, Jeffrey A.; Fuller, Jason C.

    2012-11-11T23:59:59.000Z

    Designing and planning next generation power grid systems composed of large power distribution networks, monitoring and control networks, autonomous generators and consumers of power requires advanced simulation infrastructures. The objective is to predict and analyze in time the behavior of networks of systems for unexpected events such as loss of connectivity, malicious attacks and power loss scenarios. This ultimately allows one to answer questions such as: “What could happen to the power grid if ...”. We want to be able to answer as many questions as possible in the shortest possible time for the largest possible systems. In this paper we present a new High Performance Computing (HPC) oriented simulation infrastructure named Next Generation Network and System Simulator (NGNS2). NGNS2 allows for the distribution of a single simulation among multiple computing elements by using MPI and OpenMP threads. NGNS2 provides extensive configuration, fault tolerance and load balancing capabilities needed to simulate large and dynamic systems for long periods of time. We show the preliminary results of the simulator running approximately two million simulated entities both on a 64-node commodity Infiniband cluster and a 48-core SMP workstation.
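    The MPI-plus-OpenMP decomposition mentioned in the abstract can be pictured with a minimal hybrid skeleton: each MPI rank owns a slice of the simulated entities and loops over its slice with OpenMP threads. This is a generic illustration only, not NGNS2 code; the entity count and the per-entity work are placeholders.

      #include <stdio.h>
      #include <mpi.h>

      /* Minimal hybrid MPI + OpenMP skeleton: ranks partition the entities,
         threads within a rank share the per-entity loop. Build with an MPI
         compiler wrapper and OpenMP enabled (e.g. mpicc -fopenmp). */
      int main(int argc, char **argv)
      {
          int provided, rank, size;
          MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
          MPI_Comm_rank(MPI_COMM_WORLD, &rank);
          MPI_Comm_size(MPI_COMM_WORLD, &size);

          const long total = 2000000;                 /* ~two million entities */
          long begin = rank * total / size;
          long end   = (rank + 1) * total / size;

          double local = 0.0;
          #pragma omp parallel for reduction(+:local)
          for (long i = begin; i < end; i++)
              local += 1.0;                           /* stand-in for per-entity work */

          double global = 0.0;
          MPI_Reduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
          if (rank == 0)
              printf("processed %.0f entities on %d ranks\n", global, size);
          MPI_Finalize();
          return 0;
      }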

  15. Development of high performance scientific components for interoperability of computing packages

    SciTech Connect (OSTI)

    Gulabani, Teena Pratap

    2008-12-01T23:59:59.000Z

    Three major high performance quantum chemistry computational packages, NWChem, GAMESS and MPQC, have been developed by different research efforts following different design patterns. The goal is to achieve interoperability among these packages by overcoming the challenges caused by the different communication patterns and software design of each of these packages. Developing a chemistry algorithm is hard and time consuming; integration of large quantum chemistry packages will allow resource sharing and thus avoid reinvention of the wheel. Creating connections between these incompatible packages is the major motivation of the proposed work. This interoperability is achieved by bringing the benefits of Component Based Software Engineering through a plug-and-play component framework called Common Component Architecture (CCA). In this thesis, I present a strategy and process used for interfacing two widely used and important computational chemistry methodologies: Quantum Mechanics and Molecular Mechanics. To show the feasibility of the proposed approach, the Tuning and Analysis Utility (TAU) has been coupled with the NWChem code and its CCA components. Results show that the overhead is negligible when compared to the ease and potential of organizing and coping with large-scale software applications.

  16. National cyber defense high performance computing and analysis : concepts, planning and roadmap.

    SciTech Connect (OSTI)

    Hamlet, Jason R.; Keliiaa, Curtis M.

    2010-09-01T23:59:59.000Z

    There is a national cyber dilemma that threatens the very fabric of government, commercial and private use operations worldwide. Much is written about 'what' the problem is, and though the basis for this paper is an assessment of the problem space, we target the 'how' solution space of the wide-area national information infrastructure through the advancement of science, technology, evaluation and analysis with actionable results intended to produce a more secure national information infrastructure and a comprehensive national cyber defense capability. This cybersecurity High Performance Computing (HPC) analysis concepts, planning and roadmap activity was conducted as an assessment of cybersecurity analysis as a fertile area of research and investment for high value cybersecurity wide-area solutions. This report and a related SAND2010-4765 Assessment of Current Cybersecurity Practices in the Public Domain: Cyber Indications and Warnings Domain report are intended to provoke discussion throughout a broad audience about developing a cohesive HPC centric solution to wide-area cybersecurity problems.

  17. Boosting Beamline Performance | Argonne Leadership Computing...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    ... breakthroughs in computational methods: Experimental results are greatly improved with the application of Swift and high-performance computing ...

  18. High-Performance Computer Modeling of the Cosmos-Iridium Collision

    SciTech Connect (OSTI)

    Olivier, S; Cook, K; Fasenfest, B; Jefferson, D; Jiang, M; Leek, J; Levatin, J; Nikolaev, S; Pertica, A; Phillion, D; Springer, K; De Vries, W

    2009-08-28T23:59:59.000Z

    This paper describes the application of a new, integrated modeling and simulation framework, encompassing the space situational awareness (SSA) enterprise, to the recent Cosmos-Iridium collision. This framework is based on a flexible, scalable architecture to enable efficient simulation of the current SSA enterprise, and to accommodate future advancements in SSA systems. In particular, the code is designed to take advantage of massively parallel, high-performance computer systems available, for example, at Lawrence Livermore National Laboratory. We will describe the application of this framework to the recent collision of the Cosmos and Iridium satellites, including (1) detailed hydrodynamic modeling of the satellite collision and resulting debris generation, (2) orbital propagation of the simulated debris and analysis of the increased risk to other satellites, (3) calculation of the radar and optical signatures of the simulated debris and modeling of debris detection with space surveillance radar and optical systems, (4) determination of simulated debris orbits from modeled space surveillance observations and analysis of the resulting orbital accuracy, and (5) comparison of these modeling and simulation results with Space Surveillance Network observations. We will also discuss the use of this integrated modeling and simulation framework to analyze the risks and consequences of future satellite collisions and to assess strategies for mitigating or avoiding future incidents, including the addition of new sensor systems, used in conjunction with the Space Surveillance Network, for improving space situational awareness.

  19. Alliance for Computational Science Collaboration: HBCU Partnership at Alabama A&M University Continuing High Performance Computing Research and Education at AAMU

    SciTech Connect (OSTI)

    Qian, Xiaoqing; Deng, Z. T.

    2009-11-10T23:59:59.000Z

    This is the final report for the Department of Energy (DOE) project DE-FG02-06ER25746, entitled "Continuing High Performance Computing Research and Education at AAMU". This three-year project was started on August 15, 2006, and it ended on August 14, 2009. The objective of this project was to enhance high performance computing research and education capabilities at Alabama A&M University (AAMU), and to train African-American and other minority students and scientists in the computational science field for eventual employment with DOE. AAMU has successfully completed all the proposed research and educational tasks. Through the support of DOE, AAMU was able to provide opportunities to minority students through summer internships and the DOE computational science scholarship program. In the past three years, AAMU (1) supported three graduate research assistants in image processing for the hypersonic shockwave control experiment and in computational science related areas; (2) recruited and provided full financial support for six AAMU undergraduate summer research interns to participate in the Research Alliance in Math and Science (RAMS) program at Oak Ridge National Lab (ORNL); (3) awarded 30 highly competitive DOE High Performance Computing Scholarships ($1500 each) to qualified top AAMU undergraduate students in science and engineering majors; (4) improved the high performance computing laboratory at AAMU with the addition of three high performance Linux workstations; (5) conducted image analysis for the electromagnetic shockwave control experiment and computation of shockwave interactions to verify the design and operation of the AAMU supersonic wind tunnel. The high performance computing research and education activities at AAMU created great impact for minority students. As praised by the Accreditation Board for Engineering and Technology (ABET) in 2009, "The work on high performance computing that is funded by the Department of Energy provides scholarships to undergraduate students as computational science scholars. This is a wonderful opportunity to recruit under-represented students." Three ASEE papers were published in the 2007, 2008 and 2009 proceedings of the ASEE Annual Conferences, respectively. Presentations of these papers were also made at the ASEE Annual Conferences. It is critical to continue these research and education activities.

  20. VNET/P: Bridging the Cloud and High Performance Computing Through Fast Overlay Networking

    E-Print Network [OSTI]

    Bustamante, Fabián E.

    ... and Technology of China, ytang@uestc.edu.cn. ABSTRACT: It is now possible to allow VMs hosting HPC applications ... to the native hardware, which is not the case for current user-level tools, including our own existing ... VNET/P achieves native performance on 1 Gbps Ethernet networks and very high performance on 10 Gbps Ethernet ...

  1. Global Scientific Information and Computing Center, Tokyo Institute of Technology Large-Scale GPU-Equipped High-Performance Compute Nodes

    E-Print Network [OSTI]

    Furui, Sadaoki

    Large-Scale GPU-Equipped High-Performance Compute Nodes; High-Speed Network Interconnect; High-Speed and Highly Reliable Storage Systems; Low Power Consumption and Green Operation; System and Application Software; Hardware and Software.

  2. In Proceedings of the 17th IEEE International Symposium on High Performance Computer Architecture (HPCA), 2011

    E-Print Network [OSTI]

    Memik, Gokhan

    DRAM modules can rise to over 95°C. Another important property of DRAM temperature is the large ... these methods exacerbate the problems of high power consumption and operating temperatures in DRAM systems. These high temperatures have adverse effects on the performance and reliability of the DRAM. When ...

  3. Improvement of Power-Performance Efficiency for High-End Computing Rong Ge, Xizhou Feng, Kirk W. Cameron

    E-Print Network [OSTI]

    Freeh, Vincent

    Earth Simulator requires 18 megawatts of power. Petaflop systems may require 100 megawatts of power [2], nearly the output of a small power plant (300 megawatts). At $100 per megawatt ($.10 per kilowatt), peak ... Improvement of Power-Performance Efficiency for High-End Computing. Rong Ge, Xizhou Feng, Kirk W. Cameron.
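    Reading the quoted rate as $0.10 per kilowatt-hour (equivalently $100 per megawatt-hour), the annual electricity bill implied by those power draws is straightforward to estimate. The sketch below is only back-of-the-envelope arithmetic under that assumption, not a figure from the paper.

      #include <stdio.h>

      /* Annual energy cost for a machine drawing a constant number of megawatts,
         assuming $100 per megawatt-hour as discussed above. */
      static double annual_cost_usd(double megawatts, double usd_per_mwh)
      {
          return megawatts * 24.0 * 365.0 * usd_per_mwh;
      }

      int main(void)
      {
          printf("18 MW  -> about $%.1f million per year\n", annual_cost_usd(18.0, 100.0) / 1e6);
          printf("100 MW -> about $%.1f million per year\n", annual_cost_usd(100.0, 100.0) / 1e6);
          return 0;
      }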

  4. MIC-GPU: High-Performance Computing

    E-Print Network [OSTI]

    Mueller, Klaus

    Hardware (GPUs). References: Klaus Mueller, Wei Xu, Ziyi Zheng, Fang Xu, Computer Science ... Siemens USA Research Center ... EUROGRAPHICS Conference on Graphics Hardware, pp. 41-50, 2003. C. Rezk-Salama, K. Engel, M. Bauer, G. Greiner, and T. Ertl.

  5. High performance computing and communications Grand Challenges program: Computational structural biology. Final report, August 15, 1992--January 14, 1997

    SciTech Connect (OSTI)

    Solomon, J.E.

    1997-10-02T23:59:59.000Z

    The Grand Challenge project consists of two elements: (1) a hierarchical methodology for 3D protein structure prediction; and (2) development of a parallel computing environment, the Protein Folding Workbench, for carrying out a variety of protein structure prediction/modeling computations. During the first three years of this project the author focused on the use of selected proteins from the Brookhaven Protein Data Base (PDB) of known structures to provide validation of the prediction algorithms and their software implementation, both serial and parallel. Two proteins in particular have been selected to provide the project with direct interaction with experimental molecular biology. A variety of site-specific mutagenesis experiments are performed on these two proteins to explore the many-to-one mapping characteristics of sequence to structure.

  6. A High-Performance Hybrid Computing Approach to Massive Contingency Analysis in the Power Grid

    SciTech Connect (OSTI)

    Gorton, Ian; Huang, Zhenyu; Chen, Yousu; Kalahar, Benson K.; Jin, Shuangshuang; Chavarría-Miranda, Daniel; Baxter, Douglas J.; Feo, John T.

    2009-12-01T23:59:59.000Z

    Operating the electrical power grid to prevent power black-outs is a complex task. An important aspect of this is contingency analysis, which involves understanding and mitigating potential failures in power grid elements such as transmission lines. When taking into account the potential for multiple simultaneous failures (known as the N-x contingency problem), contingency analysis becomes a massive computational task. In this paper we describe a novel hybrid computational approach to contingency analysis. This approach exploits the unique graph processing performance of the Cray XMT in conjunction with a conventional massively parallel compute cluster to identify likely simultaneous failures that could cause widespread cascading power failures that have massive economic and social impact on society. The approach has the potential to provide the first practical and scalable solution to the N-x contingency problem. When deployed in power grid operations, it will increase the grid operator's ability to deal effectively with outages and failures of power grid components while preserving stable and safe operation of the grid. The paper describes the architecture of our solution and presents preliminary performance results that validate the efficacy of our approach.
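    The combinatorial growth behind the N-x problem is easy to see: with n grid elements, the number of distinct x-element outage combinations is C(n, x), which explodes quickly as x grows. The sketch below simply evaluates that count; the element count is an invented illustration, not a figure from the paper.

      #include <stdio.h>

      /* Number of x-way contingency cases among n grid elements: C(n, x),
         computed iteratively to avoid overflowing factorials. */
      static double choose(int n, int x)
      {
          double c = 1.0;
          for (int i = 1; i <= x; i++)
              c = c * (n - x + i) / i;
          return c;
      }

      int main(void)
      {
          int n = 10000;  /* illustrative number of lines and transformers */
          for (int x = 1; x <= 3; x++)
              printf("N-%d contingency cases: %.3e\n", x, choose(n, x));
          return 0;
      }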

  7. SciTech Connect: "high performance computing"

    Office of Scientific and Technical Information (OSTI)


  8. High performance bioinformatics and computational biology on general-purpose graphics processing units 

    E-Print Network [OSTI]

    Ling, Cheng

    2012-06-25T23:59:59.000Z

    Bioinformatics and Computational Biology (BCB) is a relatively new multidisciplinary field which brings together many aspects of the fields of biology, computer science, statistics, and engineering. Bioinformatics ...

  9. Money for Research, Not for Energy Bills: Finding Energy and Cost Savings in High Performance Computer Facility Designs

    SciTech Connect (OSTI)

    Drewmark Communications; Sartor, Dale; Wilson, Mark

    2010-07-01T23:59:59.000Z

    High-performance computing facilities in the United States consume an enormous amount of electricity, cutting into research budgets and challenging public- and private-sector efforts to reduce energy consumption and meet environmental goals. However, these facilities can greatly reduce their energy demand through energy-efficient design of the facility itself. Using a case study of a facility under design, this article discusses strategies and technologies that can be used to help achieve energy reductions.

  10. High-Performance Computing at Los Alamos announces milestone for key/value

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)


  11. High Performance Fortran: An overview

    SciTech Connect (OSTI)

    Zosel, M.E.

    1992-12-23T23:59:59.000Z

    The purpose of this paper is to give an overview of the work of the High Performance Fortran Forum (HPFF). This group of industry, academic, and user representatives has been meeting to define a set of extensions for Fortran dedicated to the special problems posed by very high performance computers, especially the new generation of parallel computers. The paper describes the HPFF effort and its goals and gives a brief description of the functionality of High Performance Fortran (HPF).

  12. Case Study: Evaluating Liquid versus Air Cooling in the Maui High Performance Computing Center

    Broader source: Energy.gov [DOE]

    Study evaluates the energy efficiency of a new, liquid-cooled computing system applied in a retrofit project compared to the previously used air-cooled system.

  13. A Multi-core High Performance Computing Framework for Distribution Power Flow

    E-Print Network [OSTI]

    Franchetti, Franz

    ... power flow is a computation model and method specified for distribution systems, which often have multi ... by modeling the renewable energy resources as random variables or stochastic processes [6] [7] [8] [9]. Among ... parallelization in hardware/software models. This means extracting the computation power from the hardware ...

  14. Java Programming for High Performance Numerical Computing J. E. Moreira S. P. Midkiff M. Gupta P. V. Artigas M. Snir R. D. Lawrence

    E-Print Network [OSTI]

    Goldstein, Seth Copen

    Java Programming for High Performance Numerical Computing. J. E. Moreira, S. P. Midkiff, M. Gupta, P. V. Artigas, M. Snir, R. D. Lawrence ... related to Java's applicability to solving large computational problems in science and engineering. ... are an essential tool in many areas of science and engineering. Computations with complex numbers need ...

  15. Decoupling algorithms from the organization of computation for high performance image processing

    E-Print Network [OSTI]

    Ragan-Kelley, Jonathan Millard

    2014-01-01T23:59:59.000Z

    Future graphics and imaging applications, from self-driving cars to 4D light field cameras to pervasive sensing, demand orders of magnitude more computation than we currently have. This thesis argues that the efficiency ...

  16. Application-Specific Memory Interleaving Enables High Performance in FPGA-based Grid Computations1

    E-Print Network [OSTI]

    Herbordt, Martin

    around an off-grid position that are needed for 3D interpolation of a value at that position. We present applications, where forces computed at grid points must be applied to off-grid particles. Red-black relaxation

  17. Designing High Performance, Reliable, and Energy-Efficient Networked Computing Systems for the Future

    E-Print Network [OSTI]

    Xu, Cheng-Zhong

    ... of the Center for Networked Computing Systems (CNC), is dedicated to the investigation, establishment ... end-to-end QoS consists of network and server ... www.wayne.edu ... time and adjust the amount ...

  18. Coordinated Fault-Tolerance for High-Performance Computing Final Project Report

    SciTech Connect (OSTI)

    Panda, Dhabaleswar Kumar [The Ohio State University]; Beckman, Pete

    2011-07-01T23:59:59.000Z

    With the Coordinated Infrastructure for Fault Tolerance Systems (CIFTS, as the original project came to be called) project, our aim has been to understand and tackle the following broad research questions, the answers to which will help the HEC community analyze and shape the direction of research in the field of fault tolerance and resiliency on future high-end leadership systems. (1) Will availability of global fault information, obtained by fault information exchange between the different HEC software on a system, allow individual system software to better detect, diagnose, and adaptively respond to faults? If fault-awareness is raised throughout the system through fault information exchange, is it possible to get all system software working together to provide a more comprehensive end-to-end fault management on the system? (2) What are the missing fault-tolerance features that widely used HEC system software lacks today that would inhibit such software from taking advantage of systemwide global fault information? (3) What are the practical limitations of a systemwide approach for end-to-end fault management based on fault awareness and coordination? (4) What mechanisms, tools, and technologies are needed to bring about fault awareness and coordination of responses on a leadership-class system? (5) What standards, outreach, and community interaction are needed for adoption of the concept of fault awareness and coordination for fault management on future systems? Keeping our overall objectives in mind, the CIFTS team has taken a parallel fourfold approach. (a) Our central goal was to design and implement a light-weight, scalable infrastructure with a simple, standardized interface to allow communication of fault-related information through the system and facilitate coordinated responses. This work led to the development of the Fault Tolerance Backplane (FTB) publish-subscribe API specification, together with a reference implementation and several experimental implementations on top of existing publish-subscribe tools. (b) We enhanced the intrinsic fault tolerance capabilities of representative implementations of a variety of key HPC software subsystems and integrated them with the FTB. Targeted software subsystems included MPI communication libraries, checkpoint/restart libraries, resource managers and job schedulers, and system monitoring tools. (c) Leveraging the aforementioned infrastructure, as well as developing and utilizing additional tools, we have examined issues associated with expanded, end-to-end fault response from both system and application viewpoints. From the standpoint of system operations, we have investigated log and root cause analysis, anomaly detection and fault prediction, and generalized notification mechanisms. Our applications work has included libraries for fault-tolerant linear algebra, application frameworks for coupled multiphysics applications, and external frameworks to support the monitoring and response for general applications. (d) Our final goal was to engage the high-end computing community to increase awareness of tools and issues around coordinated end-to-end fault management.

  19. Preprint of the paper "High performance computing for the analysis and postprocessing of earthing

    E-Print Network [OSTI]

    Colominas, Ignasi

    ... in certain places of the substation site. Its main objective is the transport and dissipation of electrical ... been systematically reported, such as the large computational costs required in the analysis of real ... the construction of the substation produces a stratified soil, or as a consequence of a chemical treatment ...

  20. Complex matrix multiplication operations with data pre-conditioning in a high performance computing architecture

    SciTech Connect (OSTI)

    Eichenberger, Alexandre E; Gschwind, Michael K; Gunnels, John A

    2014-02-11T23:59:59.000Z

    Mechanisms for performing a complex matrix multiplication operation are provided. A vector load operation is performed to load a first vector operand of the complex matrix multiplication operation to a first target vector register. The first vector operand comprises a real and imaginary part of a first complex vector value. A complex load and splat operation is performed to load a second complex vector value of a second vector operand and replicate the second complex vector value within a second target vector register. The second complex vector value has a real and imaginary part. A cross multiply add operation is performed on elements of the first target vector register and elements of the second target vector register to generate a partial product of the complex matrix multiplication operation. The partial product is accumulated with other partial products and a resulting accumulated partial product is stored in a result vector register.
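    The "cross multiply add" in this abstract corresponds to the usual expansion of a complex product, (a + bi)(c + di) = (ac - bd) + (ad + bc)i, accumulated into a running partial product. The scalar C sketch below only illustrates that accumulation step (names are invented); the patent itself describes the vectorized register-level form.

      #include <stdio.h>

      /* One accumulation step of a complex multiply-add, written out as the
         real/imaginary cross terms: acc += (a_re + i*a_im) * (b_re + i*b_im). */
      static void cmadd(double *acc_re, double *acc_im,
                        double a_re, double a_im, double b_re, double b_im)
      {
          *acc_re += a_re * b_re - a_im * b_im;  /* real part: ac - bd */
          *acc_im += a_re * b_im + a_im * b_re;  /* imaginary part: ad + bc */
      }

      int main(void)
      {
          double re = 0.0, im = 0.0;
          cmadd(&re, &im, 1.0, 2.0, 3.0, 4.0);   /* (1 + 2i)(3 + 4i) = -5 + 10i */
          printf("%g + %gi\n", re, im);
          return 0;
      }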

  1. High-Performance Computation of Distributed-Memory Parallel 3D Voronoi and Delaunay Tessellation

    SciTech Connect (OSTI)

    Peterka, Tom; Morozov, Dmitriy; Phillips, Carolyn

    2014-11-14T23:59:59.000Z

    Computing a Voronoi or Delaunay tessellation from a set of points is a core part of the analysis of many simulated and measured datasets: N-body simulations, molecular dynamics codes, and LIDAR point clouds are just a few examples. Such computational geometry methods are common in data analysis and visualization; but as the scale of simulations and observations surpasses billions of particles, the existing serial and shared-memory algorithms no longer suffice. A distributed-memory scalable parallel algorithm is the only feasible approach. The primary contribution of this paper is a new parallel Delaunay and Voronoi tessellation algorithm that automatically determines which neighbor points need to be exchanged among the subdomains of a spatial decomposition. Other contributions include periodic and wall boundary conditions, comparison of our method using two popular serial libraries, and application to numerous science datasets.

  2. The use of high-performance computing to solve participating media radiative heat transfer problems-results of an NSF workshop

    SciTech Connect (OSTI)

    Gritzo, L.A.; Skocypec, R.D. [Sandia National Labs., Albuquerque, NM (United States); Tong, T.W. [Arizona State Univ., Tempe, AZ (United States). Dept. of Mechanical and Aerospace Engineering

    1995-01-11T23:59:59.000Z

    Radiation in participating media is an important transport mechanism in many physical systems. The simulation of complex radiative transfer has not effectively exploited high-performance computing capabilities. In response to this need, a workshop attended by members active in the high-performance computing community, members active in the radiative transfer community, and members from closely related fields was held to identify how high-performance computing can be used effectively to solve the transport equation and advance the state-of-the-art in simulating radiative heat transfer. This workshop was held on March 29-30, 1994 in Albuquerque, New Mexico and was conducted by Sandia National Laboratories. The objectives of this workshop were to provide a vehicle to stimulate interest and new research directions within the two communities to exploit the advantages of high-performance computing for solving complex radiative heat transfer problems that are otherwise intractable.

  3. Climate Modeling using High-Performance Computing The Center for Applied Scientific Computing (CASC) and the LLNL Climate and Carbon

    E-Print Network [OSTI]

    ... and NCAR in the development of a comprehensive earth systems model. This model incorporates the most ... performance climate models. Through the addition of relevant physical processes, we are developing an earth systems modeling capability as well. Our collaborators in climate research include the National Center ...

  4. HPGMG 1.0: A Benchmark for Ranking High Performance Computing Systems

    SciTech Connect (OSTI)

    Adams, Mark; Brown, Jed; Shalf, John; Straalen, Brian Van; Strohmaier, Erich; Williams, Sam

    2014-05-05T23:59:59.000Z

    This document provides an overview of the benchmark, HPGMG, for ranking large scale general purpose computers for use on the Top500 list [8]. We provide a rationale for the need for a replacement for the current metric HPL, some background of the Top500 list and the challenges of developing such a metric; we discuss our design philosophy and methodology, and an overview of the specification of the benchmark. The primary documentation with maintained details on the specification can be found at hpgmg.org and the Wiki and benchmark code itself can be found in the repository https://bitbucket.org/hpgmg/hpgmg.

  5. Reducing Electricity Cost Through Virtual Machine Placement in High Performance Computing Clouds

    E-Print Network [OSTI]

    [Operating Systems]: Process Management. General Terms: Design, Performance. Keywords: Multi-data-center ... ABSTRACT: In this paper, we first study the impact of load placement policies on cooling and maximum data center temperatures in cloud service providers that operate multiple geographically distributed data ...

  6. High performance computing aspects of a dimension independent semi-Lagrangian discontinuous Galerkin code

    E-Print Network [OSTI]

    Einkemmer, Lukas

    2015-01-01T23:59:59.000Z

    The recently developed semi-Lagrangian discontinuous Galerkin approach is used to discretize hyperbolic partial differential equations (usually first order equations). Since these methods are conservative, local in space, and able to limit numerical diffusion, they are considered a promising alternative to more traditional semi-Lagrangian schemes (which are usually based on polynomial or spline interpolation). In this paper, we consider a parallel implementation of a semi-Lagrangian discontinuous Galerkin method for distributed memory systems (so-called clusters). Both strong and weak scaling studies are performed on the Vienna Scientific Cluster 2 (VSC-2). In the case of weak scaling, up to 8192 cores, we observe a parallel efficiency above 0.89 for both two and four dimensional problems. Strong scaling results show good scalability to at least 1024 cores (we consider problems that can be run on a single processor in reasonable time). In addition, we study the scaling of a two dimensional Vlasov--Poisson sol...
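    The efficiency figures quoted above follow the standard scaling definitions: for weak scaling the work per core is fixed and efficiency compares single-core time to p-core time directly, while for strong scaling the total problem is fixed and the p-core time is additionally multiplied by p. A small sketch of both formulas (the timings in the example are invented, not measurements from the paper):

      #include <stdio.h>

      /* Standard parallel scaling metrics.
         Strong scaling (fixed total problem):  E = t1 / (p * tp)
         Weak scaling (fixed work per core):    E = t1 / tp        */
      static double strong_efficiency(double t1, double tp, int p) { return t1 / (p * tp); }
      static double weak_efficiency(double t1, double tp)          { return t1 / tp; }

      int main(void)
      {
          printf("strong: %.2f\n", strong_efficiency(100.0, 0.122, 1024)); /* ~0.80 */
          printf("weak:   %.2f\n", weak_efficiency(10.0, 11.2));           /* ~0.89 */
          return 0;
      }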

  7. A Low Latency Optical Switch for High Performance Computing with Minimized Processor Energy Load

    E-Print Network [OSTI]

    Liu, Shiyun; Cheng, Qixiang; Madarbux, Muhammad Ridwan; Wonfor, Adrian; Penty, Richard V.; White, Ian H.; Watts, Philip M.

    2015-03-01T23:59:59.000Z

    ... CMP power density and thermal management issues are seriously limiting processor performance [2]. High performance server chips require >1 Tb/s of off-chip bandwidth including Ethernet, PCI, main memory and coherence links which ... with the 120W total processor power envelope. By comparison, the processor chip power dissipation of our architecture (at 30% load) is 0.5 mW/(Gb/s), consuming only 0.23W for the same coherence bandwidth. Such comparisons are difficul...

  8. Harnessing the Department of Energy’s High-Performance Computing Expertise to Strengthen the U.S. Chemical Enterprise

    SciTech Connect (OSTI)

    Dixon, David A.; Dupuis, Michel; Garrett, Bruce C.; Neaton, Jeffrey B.; Plata, Charity; Tarr, Matthew A.; Tomb, Jean-Francois; Golab, Joseph T.

    2012-01-17T23:59:59.000Z

    High-performance computing (HPC) is one area where the DOE has developed extensive expertise and capability. However, this expertise currently is not properly shared with or used by the private sector to speed product development, enable industry to move rapidly into new areas, and improve product quality. Such use would lead to substantial competitive advantages in global markets and yield important economic returns for the United States. To stimulate the dissemination of DOE's HPC expertise, the Council for Chemical Research (CCR) and the DOE jointly held a workshop on this topic. Four important energy topic areas were chosen as the focus of the meeting: Biomass/Bioenergy, Catalytic Materials, Energy Storage, and Photovoltaics. Academic, industrial, and government experts in these topic areas participated in the workshop to identify industry needs, evaluate the current state of expertise, offer proposed actions and strategies, and forecast the expected benefits of implementing those strategies.

  9. Published in proceedings of the 1996 EUROSIM Int'l Conf., June 10--12, 1996, Delft, The Netherlands, pp. 421--428. Automatic Code Generation for High Performance Computing in

    E-Print Network [OSTI]

    van Engelen, Robert A.

    ... The Netherlands, pp. 421-428. Automatic Code Generation for High Performance Computing in Environmental Modeling. Robert van Engelen, Lex Wolters, and Gerard Cats. High Performance Computing Division, Dept. ... cats@knmi.nl. In this paper we will discuss automatic code generation for high performance computer ...

  10. amorphous-metals high-performance corrosion-resistant: Topics...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    ... Homer, Eric R. 26 Breaking the Barriers: High Performance Security for High Performance Computing Computer Technologies and Information Sciences Websites Summary: Breaking...

  11. Coordinated resource management for guaranteed high performance and efficient utilization in Lambda-Grids

    E-Print Network [OSTI]

    Taesombut, Nut

    2007-01-01T23:59:59.000Z

    Journal of High Performance Computing Applications, August ... Conference on High Performance Computing and Communication ... Conference on High Performance Computing and Networking (SC' ...

  12. High Performance Computing

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    ... for synchronization. May 21, 2013 - Researchers developed a surprisingly simple mathematical model that accurately predicts synchronization as a function of the parameters and ...

  13. High Performance Computing

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)


  14. High-performance computing centres use a great deal of electricity. In order to run its new com-

    E-Print Network [OSTI]

    ... that with one pumping operation, cold water is supplied to two circuits to cool two types of systems. The cold ... natural resource of Lake Lugano to cool its supercomputers and the new building. A high ... as a small town. About a third of this electricity is used for cooling. If supercomputers are not constantly ...

  15. Improvement of Power-Performance Efficiency for High-End Computing Rong Ge, Xizhou Feng, Kirk W. Cameron

    E-Print Network [OSTI]

    Ge, Rong

    ... of thousands of power hungry components will lead to intolerable operating costs and failure rates. ... to quantify and compare the power-performance efficiency for parallel Fourier transform and matrix transpose ... numbers of power-hungry commercial components (e.g. Itanium) in clusters of SMPs to achieve high ...

  16. This project is funded by an MIT Martin Family Fellowship and a MITEI Seed Fund Grant Leveraging High Performance Computation for Statistical Wind Power Prediction

    E-Print Network [OSTI]

    Leveraging High Performance Computation for Statistical Wind Power Prediction. Cy Chan*, James Stalker**, Alan ... for wind power forecasting is becoming imperative as wind energy becomes a larger contributor to the energy ... learning techniques for improving wind power prediction, with the goal of finding better ways to deliver ...

  17. Setting up the models on NYU's High Performance Computing System I recommend that you start with the exact same structure for running the

    E-Print Network [OSTI]

    Gerber, Edwin

    Setting up the models on NYU's High Performance Computing System. I recommend that you start ... to understand how it all works. This is set up for work on bowery, as this is the main machine for parallel ... output or data. This file system is backed up, but it is small; your quota is on the order ...

  18. High Performance Network Monitoring

    SciTech Connect (OSTI)

    Martinez, Jesse E [Los Alamos National Laboratory]

    2012-08-10T23:59:59.000Z

    Network monitoring requires a substantial use of data and error analysis to overcome issues with clusters. Zenoss and Splunk help to monitor system log messages that report issues about the clusters to monitoring services. The Infiniband infrastructure on a number of clusters was upgraded to ibmon2, which requires different filters to report errors to system administrators. The focus for this summer is to: (1) implement ibmon2 filters on monitoring boxes to report system errors to system administrators using Zenoss and Splunk; (2) modify and improve scripts for monitoring and administrative usage; (3) learn more about networks, including services and maintenance for high performance computing systems; and (4) gain life experience working with professionals in real world situations. Filters were created to account for clusters running ibmon2 v1.0.0-1. Ten filters are currently implemented for ibmon2 using Python. The filters look for thresholds of port counters; over certain counts, they report errors to on-call system administrators and modify the grid to show the local host with the issue.

  19. High Performance Sustainable Building

    Broader source: Directives, Delegations, and Requirements [Office of Management (MA)]

    2011-11-09T23:59:59.000Z

    This Guide provides approaches for implementing the High Performance Sustainable Building (HPSB) requirements of DOE Order 413.3B, Program and Project Management for the Acquisition of Capital Assets. Cancels DOE G 413.3-6.

  20. Deployment of a Suite of High-Performance Computational Tools for Multi-scale Multi-physics Simulation of Generation IV Reactors

    SciTech Connect (OSTI)

    Podowski, Michael Z

    2013-01-03T23:59:59.000Z

    The overall objective of this project has been to deploy advanced simulation capabilities for next generation reactor systems utilizing newly available, high-performance computing facilities. The approach includes the following major components: The development of new simulation capabilities using state-of-the-art computer codes of different scales: molecular dynamics (MD) level, DNS (FronTier and PHASTA) and CFD (NPHASE-CMFD); The development of advanced numerical solvers for large-size computational problems; The deployment of a multiple-code computational platform for multiscale simulations of gas/liquid two-phase flow during reactor transients and accidents; and Application of the new computational methodology to study the progression of loss-of-flow accident in sodium fast reactor (SFR).

  1. Thermoelectrics Partnership: High Performance Thermoelectric...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    High Performance Thermoelectric Waste Heat Recovery System Based on Zintl Phase Materials with Embedded Nanoparticles Thermoelectrics Partnership: High Performance Thermoelectric...

  2. advanced high performance: Topics by E-print Network

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Army High Performance Computing Research Center. Materials Science Websites.

  3. High performance polymeric foams

    SciTech Connect (OSTI)

    Gargiulo, M.; Sorrentino, L. [Institute for Composite and Biomedical Materials (IMCB)-CNR, P.le Tecchio 80, 80125 Naples (Italy); Iannace, S. [Institute for Composite and Biomedical Materials (IMCB)-CNR, P.le Tecchio 80, 80125 Naples (Italy) and Technological District on Polymeric and Composite Materials Engineering and Structures (IMAST), P.le E.Fermi 1, location Porto del Granatello, 80055 Portici (Naples)

    2008-08-28T23:59:59.000Z

    The aim of this work was to investigate the foamability of high-performance polymers (polyethersulfone, polyphenylsulfone, polyetherimide and polyethylene naphthalate). Two different methods have been used to prepare the foam samples: high temperature expansion and a two-stage batch process. The effects of processing parameters (saturation time and pressure, foaming temperature) on the densities and microcellular structures of these foams were analyzed by using scanning electron microscopy.

  4. High Performance Sustainable Building

    Broader source: Directives, Delegations, and Requirements [Office of Management (MA)]

    2008-06-20T23:59:59.000Z

    The guide supports DOE O 413.3A and provides useful information on the incorporation of high performance sustainable building principles into building-related General Plant Projects and Institutional General Plant Projects at DOE sites. Canceled by DOE G 413.3-6A. Does not cancel other directives.

  5. Towards Ultra-High Resolution Models of Climate and Weather To appear in the International Journal of High Performance Computing Applications, 2008.

    E-Print Network [OSTI]

    Oliker, Leonid

    Predicted impacts of anthropogenic climate change are highly dependent on cloud-radiation interactions; climate change is among the most pressing problems facing scientists today, with economic ramifications in the trillions of dollars. Effectively performing... Keywords: climate model, atmospheric general circulation model, finite volume model, global warming.

  6. A Performance and Cost Analysis of the Amazon Elastic Compute Cloud (EC2) Cluster Compute Instance

    E-Print Network [OSTI]

    Bjørnstad, Ottar Nordal

    With the availability of Elastic Compute Cloud (EC2) Cluster Compute Instances specifically designed for high-performance workloads, and with compute power available on demand, the question arises whether cloud computing using Amazon EC2 HPC instances can compete on performance and cost.
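
    A back-of-the-envelope comparison of the kind such a study performs can be sketched in a few lines; all prices, lifetimes, and utilization figures below are hypothetical placeholders, not values from the paper:

        # Rough cost comparison: on-demand cloud nodes vs. an owned cluster.
        # Every number here is an illustrative assumption.

        def cloud_cost(node_hours, price_per_node_hour):
            return node_hours * price_per_node_hour

        def owned_cost(capital, lifetime_years, annual_ops, node_hours, capacity_hours_per_year):
            hourly_rate = (capital / lifetime_years + annual_ops) / capacity_hours_per_year
            return node_hours * hourly_rate

        if __name__ == "__main__":
            job = 32 * 24 * 30                        # 32 nodes for one month, in node-hours
            print("cloud:", cloud_cost(job, 1.60))    # $1.60/node-hour (hypothetical)
            print("owned:", owned_cost(capital=500_000, lifetime_years=4,
                                       annual_ops=60_000, node_hours=job,
                                       capacity_hours_per_year=64 * 24 * 365 * 0.8))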

  7. WARPP -A Toolkit for Simulating High-Performance Parallel Scientific Codes

    E-Print Network [OSTI]

    Jarvis, Stephen

    The toolkit addresses challenges facing the High Performance Computing (HPC) community, including increasing levels of concurrency (threads, ...). Keywords: application performance modelling, simulation, high performance computing.

  8. High Performance New Construction

    E-Print Network [OSTI]

    Flores, M.

    2013-01-01T23:59:59.000Z

    CATEE 2013: Clean Air Through Energy Efficiency Conference, San Antonio, Texas, Dec. 16-18. Agenda: concept of project (challenging metrics); HPDB/IPD (accountability and outcomes focus); energy modeling = total cost of ownership; GMAX = predictable costs. High Performance Design-Build: age myth; cost myth; complexity myth; time advantage; early cost confirmation; lower cost for design changes. (ESL-KT-13-12-40)

  9. High Performance Window Retrofit

    SciTech Connect (OSTI)

    Shrestha, Som S [ORNL; Hun, Diana E [ORNL; Desjarlais, Andre Omer [ORNL

    2013-12-01T23:59:59.000Z

    The US Department of Energy (DOE) Office of Energy Efficiency and Renewable Energy (EERE) and Traco partnered to develop high-performance windows for commercial buildings that are cost-effective. The main performance requirement for these windows was that they needed to have an R-value of at least 5 ft2·°F·h/Btu. This project seeks to quantify the potential energy savings from installing these windows in commercial buildings that are at least 20 years old. To this end, we are conducting evaluations at a two-story test facility that is representative of a commercial building from the 1980s, and are gathering measurements on the performance of its windows before and after double-pane, clear-glazed units are upgraded with R5 windows. Additionally, we will use these data to calibrate EnergyPlus models that will allow us to extrapolate results to other climates. Findings from this project will provide empirical data on the benefits from high-performance windows, which will help promote their adoption in new and existing commercial buildings. This report describes the experimental setup, and includes some of the field and simulation results.
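
    The effect of moving to R-5 glazing on conduction losses follows from the simple steady-state relation Q = U·A·ΔT with U = 1/R. A minimal sketch, where the R-5 target comes from the abstract but the R-2 baseline, glazing area, and temperature difference are hypothetical illustration values:

        # Steady-state window conduction loss: Q = U * A * dT, with U = 1/R.
        # R-2 baseline, area, and delta-T are illustrative assumptions.

        def conduction_loss_btu_per_hr(r_value, area_ft2, delta_t_f):
            u_value = 1.0 / r_value                  # Btu / (hr * ft^2 * F)
            return u_value * area_ft2 * delta_t_f

        if __name__ == "__main__":
            area, dT = 1000.0, 40.0                  # ft^2 of glazing, deg F indoor-outdoor
            old = conduction_loss_btu_per_hr(2.0, area, dT)
            new = conduction_loss_btu_per_hr(5.0, area, dT)
            print(f"R-2 baseline: {old:,.0f} Btu/hr")
            print(f"R-5 retrofit: {new:,.0f} Btu/hr ({1 - new/old:.0%} lower conduction loss)")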

  10. A Comparison of Library Tracking Methods in High Performance

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    A Comparison of Library Tracking Methods in High Performance Computing. Computer System Cluster and Networking Summer Institute 2013 Poster Seminar. William Rosenberger (New Mexico Tech), Dennis...

  11. High Performance Buildings Database

    DOE Data Explorer [Office of Scientific and Technical Information (OSTI)]

    The High Performance Buildings Database is a shared resource for the building industry, a unique central repository of in-depth information and data on high-performance, green building projects across the United States and abroad. The database includes information on the energy use, environmental performance, design process, finances, and other aspects of each project. Members of the design and construction teams are listed, as are sources for additional information. In total, up to twelve screens of detailed information are provided for each project profile. Projects range in size from small single-family homes or tenant fit-outs within buildings to large commercial and institutional buildings and even entire campuses. The database is a data repository as well. A series of Web-based data-entry templates allows anyone to enter information about a building project into the database. Once a project has been submitted, each of the partner organizations can review the entry and choose whether or not to publish that particular project on its own Web site.

  12. Final report for "High performance computing for advanced national electric power grid modeling and integration of solar generation resources", LDRD Project No. 149016.

    SciTech Connect (OSTI)

    Reno, Matthew J.; Riehm, Andrew Charles; Hoekstra, Robert John; Munoz-Ramirez, Karina; Stamp, Jason Edwin; Phillips, Laurence R.; Adams, Brian M.; Russo, Thomas V.; Oldfield, Ron A.; McLendon, William Clarence, III; Nelson, Jeffrey Scott; Hansen, Clifford W.; Richardson, Bryan T.; Stein, Joshua S.; Schoenwald, David Alan; Wolfenbarger, Paul R.

    2011-02-01T23:59:59.000Z

    Design and operation of the electric power grid (EPG) relies heavily on computational models. High-fidelity, full-order models are used to study transient phenomena on only a small part of the network. Reduced-order dynamic and power flow models are used when analyses involving thousands of nodes are required, due to the computational demands of simulating large numbers of nodes. The level of complexity of the future EPG will dramatically increase due to large-scale deployment of variable renewable generation, active load and distributed generation resources, adaptive protection and control systems, and price-responsive demand. High-fidelity modeling of this future grid will require significant advances in coupled, multi-scale tools and their use on high performance computing (HPC) platforms. This LDRD report demonstrates SNL's capability to apply HPC resources to these 3 tasks: (1) High-fidelity, large-scale modeling of power system dynamics; (2) Statistical assessment of grid security via Monte-Carlo simulations of cyber attacks; and (3) Development of models to predict variability of solar resources at locations where few or no ground-based measurements are available.
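
    The second task, statistical assessment via Monte-Carlo simulation, can be illustrated with a generic sketch: sample random attack scenarios, apply a placeholder impact model, and estimate the probability that load served drops below a threshold. The component list, compromise probabilities, impact model, and threshold below are invented for illustration and do not come from the LDRD report:

        # Generic Monte-Carlo sketch of a statistical grid-security assessment.
        # All probabilities and the toy impact model are hypothetical.
        import random

        COMPONENTS = ["sub_A", "sub_B", "sub_C", "tie_line_1", "tie_line_2"]
        LOAD_SERVED = {"sub_A": 0.30, "sub_B": 0.25, "sub_C": 0.25,
                       "tie_line_1": 0.10, "tie_line_2": 0.10}   # fraction of system load

        def one_scenario(p_compromise=0.15):
            lost = sum(LOAD_SERVED[c] for c in COMPONENTS if random.random() < p_compromise)
            return 1.0 - lost                         # fraction of load still served

        def estimate_risk(trials=100_000, threshold=0.8):
            bad = sum(1 for _ in range(trials) if one_scenario() < threshold)
            return bad / trials

        if __name__ == "__main__":
            random.seed(1)
            print(f"P(load served < 80%) ~= {estimate_risk():.3f}")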

  13. High Performance www.rrze.uni-erlangen.de

    E-Print Network [OSTI]

    Fiebig, Peter

    High Performance Computing at RRZE 2008 (HPC@RRZE), www.rrze.uni-erlangen.de. G. Hager, T. Zeiser and G. Wellein: Concepts of High Performance Computing. In: Fehske et al., Lect. Notes Phys. 739, 681. Optimization Techniques for the Hitachi SR8000 architecture. In: A. Bode (Ed.): High Performance Computing

  14. ENGINES OF DISCOVERY: THE 21ST CENTURY REVOLUTION THE LONG RANGE PLAN FOR HIGH PERFORMANCE COMPUTING IN CANADA

    E-Print Network [OSTI]

    Schaeffer, Jonathan

    Engines of Discovery: The 21st Century Revolution. Benefits of the Long Range Plan for High Performance Computing in Canada include: development of a national network of technical support available to all users; Autonomy: local dissemination of techniques and results across Canada; Transparency: appropriate and well-managed injection

  15. Java programming for high-performance

    E-Print Network [OSTI]

    Goldstein, Seth Copen

    Java programming for high-performance numerical computing, by J. E. Moreira, S. P. Midkiff, M. Gupta. The Java(TM) language has taken off as a serious general-purpose programming language, and industry and academia alike have turned to applying the language to solving large computational problems in science and engineering. Unless these issues

  16. System Software and Tools for High Performance Computing Environments: A report on the findings of the Pasadena Workshop, April 14--16, 1992

    SciTech Connect (OSTI)

    Sterling, T. [Universities Space Research Association, Washington, DC (United States); Messina, P. [Jet Propulsion Lab., Pasadena, CA (United States); Chen, M. [Yale Univ., New Haven, CT (United States)] [and others

    1993-04-01T23:59:59.000Z

    The Pasadena Workshop on System Software and Tools for High Performance Computing Environments was held at the Jet Propulsion Laboratory from April 14 through April 16, 1992. The workshop was sponsored by a number of Federal agencies committed to the advancement of high performance computing (HPC) both as a means to advance their respective missions and as a national resource to enhance American productivity and competitiveness. Over a hundred experts in related fields from industry, academia, and government were invited to participate in this effort to assess the current status of software technology in support of HPC systems. The overall objectives of the workshop were to understand the requirements and current limitations of HPC software technology and to contribute to a basis for establishing new directions in research and development for software technology in HPC environments. This report includes reports written by the participants of the workshop's seven working groups. Materials presented at the workshop are reproduced in appendices. Additional chapters summarize the findings and analyze their implications for future directions in HPC software technology development.

  17. Large Scale Production Computing and Storage Requirements for High Energy Physics: Target 2017

    E-Print Network [OSTI]

    Gerber, Richard

    2014-01-01T23:59:59.000Z

    in the use of High Performance Computing (HPC) and in fact NERSC is the primary high-performance computing facility for ... three major High Performance Computing Centers: NERSC and

  18. alternative high-performance material-based: Topics by E-print...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Breaking the Barriers: High Performance Security for High Performance Computing. Computer Technologies and Information Sciences Websites.

  19. amorphous metals high-performance: Topics by E-print Network

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Breaking the Barriers: High Performance Security for High Performance Computing. Computer Technologies and Information Sciences Websites.

  20. actuators alternative high-performance: Topics by E-print Network

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Breaking the Barriers: High Performance Security for High Performance Computing. Computer Technologies and Information Sciences Websites.

  1. Performance Engineering for Cloud Computing John Murphy

    E-Print Network [OSTI]

    Murphy, John

    Performance Engineering for Cloud Computing. John Murphy, Lero (The Irish Software Engineering Research Centre), Murphy@ucd.ie. Abstract: Cloud computing potentially solves some of the major challenges in the engineering of large systems and their efficient operation. This paper argues that cloud computing is an area where performance engineering must

  2. Using High Performance Computing to Examine the Processes of Neurogenesis Underlying Pattern Separation and Completion of Episodic Information.

    SciTech Connect (OSTI)

    Aimone, James Bradley; Bernard, Michael Lewis; Vineyard, Craig Michael; Verzi, Stephen Joseph

    2014-10-01T23:59:59.000Z

    Adult neurogenesis in the hippocampus region of the brain is a neurobiological process that is believed to contribute to the brain's advanced abilities in complex pattern recognition and cognition. Here, we describe how realistic scale simulations of the neurogenesis process can offer both a unique perspective on the biological relevance of this process and confer computational insights that are suggestive of novel machine learning techniques. First, supercomputer based scaling studies of the neurogenesis process demonstrate how a small fraction of adult-born neurons have a uniquely larger impact in biologically realistic scaled networks. Second, we describe a novel technical approach by which the information content of ensembles of neurons can be estimated. Finally, we illustrate several examples of broader algorithmic impact of neurogenesis, including both extending existing machine learning approaches and novel approaches for intelligent sensing.
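
    The abstract mentions a novel technique for estimating the information content of ensembles of neurons. As a loose illustration only (a textbook Shannon-entropy estimate over binary activity patterns, not the authors' approach, and with random data standing in for simulated activity):

        # Loose illustration: information content of an ensemble estimated as the
        # Shannon entropy of its binary activity patterns. Not the report's method.
        import math
        import random
        from collections import Counter

        def ensemble_entropy_bits(patterns):
            counts = Counter(patterns)
            n = len(patterns)
            return -sum((c / n) * math.log2(c / n) for c in counts.values())

        if __name__ == "__main__":
            random.seed(0)
            n_neurons, n_samples = 8, 5000
            data = [tuple(int(random.random() < 0.2) for _ in range(n_neurons))
                    for _ in range(n_samples)]          # stand-in ensemble activity
            print(f"~{ensemble_entropy_bits(data):.2f} bits per ensemble pattern "
                  f"(upper bound {n_neurons} bits)")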

  3. Economic Model For a Return on Investment Analysis of United States Government High Performance Computing (HPC) Research and Development (R & D) Investment

    SciTech Connect (OSTI)

    Joseph, Earl C.; Conway, Steve; Dekate, Chirag

    2013-09-30T23:59:59.000Z

    This study investigated how high-performance computing (HPC) investments can improve economic success and increase scientific innovation. This research focused on the common good and provided uses for DOE, other government agencies, industry, and academia. The study created two unique economic models and an innovation index: (1) a macroeconomic model that depicts the way HPC investments result in economic advancements in the form of ROI in revenue (GDP), profits (and cost savings), and jobs; (2) a macroeconomic model that depicts the way HPC investments result in basic and applied innovations, looking at variations by sector, industry, country, and organization size; and (3) a new innovation index that provides a means of measuring and comparing innovation levels. Key findings of the pilot study include: IDC collected the required data across a broad set of organizations, with enough detail to create these models and the innovation index. The research also developed an expansive list of HPC success stories.
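
    The shape of such an ROI model can be sketched as simple multipliers applied to an investment, plus a weighted innovation count. The multipliers and weights below are invented placeholders, not figures from the IDC study:

        # Toy ROI-of-HPC calculation: returns per dollar invested plus a simple
        # weighted innovation index. All multipliers/weights are hypothetical.

        MULTIPLIERS = {"revenue_per_dollar": 100.0,    # hypothetical
                       "profit_per_dollar": 20.0,      # hypothetical
                       "jobs_per_million": 4.0}        # hypothetical

        def roi_summary(hpc_investment_dollars):
            return {
                "revenue": hpc_investment_dollars * MULTIPLIERS["revenue_per_dollar"],
                "profit": hpc_investment_dollars * MULTIPLIERS["profit_per_dollar"],
                "jobs": hpc_investment_dollars / 1e6 * MULTIPLIERS["jobs_per_million"],
            }

        def innovation_index(basic, applied, w_basic=0.4, w_applied=0.6):
            """Weighted count of basic vs. applied innovations (arbitrary weights)."""
            return w_basic * basic + w_applied * applied

        if __name__ == "__main__":
            print(roi_summary(10e6))
            print("innovation index:", innovation_index(basic=3, applied=12))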

  4. The High Performance Storage System

    SciTech Connect (OSTI)

    Coyne, R.A.; Hulen, H. [IBM Federal Systems Co., Houston, TX (United States); Watson, R. [Lawrence Livermore National Lab., CA (United States)

    1993-09-01T23:59:59.000Z

    The National Storage Laboratory (NSL) was organized to develop, demonstrate and commercialize technology for the storage systems that will be the future repositories for our national information assets. Within the NSL, four Department of Energy laboratories and IBM Federal Systems Company have pooled their resources to develop an entirely new High Performance Storage System (HPSS). The HPSS project concentrates on scalable parallel storage systems for highly parallel computers as well as traditional supercomputers and workstation clusters. Concentrating on meeting the high end of storage system and data management requirements, HPSS is designed using network-connected storage devices to transfer data at rates of 100 million bytes per second and beyond. The resulting products will be portable to many vendors' platforms. The three year project is targeted to be complete in 1995. This paper provides an overview of the requirements, design issues, and architecture of HPSS, as well as a description of the distributed, multi-organization industry and national laboratory HPSS project.

  5. Creating high performance enterprises

    E-Print Network [OSTI]

    Stanke, Alexis K. (Alexis Kristen), 1977-

    2006-01-01T23:59:59.000Z

    How do enterprises successfully conceive, design, deliver, and operate large-scale, engineered systems? These large-scale projects often involve high complexity, significant technical challenges, a large number of diverse ...

  6. High Performance Window Attachments

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site


  7. Commissioning for High Performance 

    E-Print Network [OSTI]

    Meline, K.

    2013-01-01T23:59:59.000Z

    Energy Systems Laboratory. Types of commissioning and their requirements:

        Type of Cx            Previously Cx'd?   Performance Monitoring Req'd?   Functional Testing Req'd?
        Re-Cx                 Yes                No                              Yes
        Retro-Cx              No                 No                              Yes
        CC®                   Yes/No             Yes                             No
        Monitoring-Based Cx   Yes/No             Yes                             No

    New Building Commissioning and Process Commissioning. Benefits of Cx: owners can achieve savings in operations of $4.00 over the first 5 years of occupancy as a direct result of every $1.00 invested in commissioning (data from the Whole Building Design Guide, a program of the National...).

  8. Life-Cycle Energy Demand of Computational Logic: From High-Performance 32nm CPU to Ultra-Low-Power 130nm MCU

    E-Print Network [OSTI]

    Bol, David; Boyd, Sarah; Dornfeld, David

    2011-01-01T23:59:59.000Z

    Boyd et al.: "Life-cycle energy demand and global warming..." Life-Cycle Energy Demand of Computational Logic: From High-Performance 32nm CPU to Ultra-Low-Power 130nm MCU. The study sets out to assess the life-cycle energy demand of computational logic products...

  10. Comments on: High Performance Computing

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)


  11. Introduction to High Performance Computing

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)


  12. Sandia Energy - High Performance Computing

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)


  13. Thrusts in High Performance Computing

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)


  14. Using High Performance Libraries

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)


  15. High Performance Sustainable Buildings

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)


  16. Dynamic Protocol Tuning Algorithms for High Performance Data...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Dynamic Protocol Tuning Algorithms for High Performance Data Transfers. Event Sponsor: Mathematics and Computing Science Seminar. Start Date: Apr 3, 2015 - 2:00pm. Building/Room: ...

  17. Accelerating Predictive Simulation of IC Engines with High Performance...

    Broader source: Energy.gov (indexed) [DOE]

    Accelerating predictive simulation of IC engines with high performance computing (ACE017). This presentation does not contain any proprietary, confidential, or otherwise restricted information. K. Dean Edwards, C....

  18. Frontiers in Planetary and Stellar Magnetism through High-Performance...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Frontiers in Planetary and Stellar Magnetism through High-Performance Computing. PI Name: Jonathan Aurnou. PI Email: aurnou@ucla.edu. Institution: University... (Hwang, project co-PI.)

  19. DOE ASSESSMENT SEAB Recommendations Related to High Performance...

    Office of Environmental Management (EM)

    DOE Assessment: SEAB Recommendations Related to High Performance Computing. 1. Introduction. The Department of Energy (DOE) is planning to develop and deliver capable exascale...

  20. Computing

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Computing: Providing world-class high performance computing capability that enables unsurpassed solutions to complex problems of...

  1. Computationally Efficient Modeling of High-Efficiency Clean Combustion...

    Broader source: Energy.gov (indexed) [DOE]

    Volvo: multi-zone cycle simulation, OpenFOAM model development. Bosch: high performance computing of HCCI/SI transition. Delphi: direct injection. GE Research: new...

  2. High Performance Networks for High Impact Science

    SciTech Connect (OSTI)

    Scott, Mary A.; Bair, Raymond A.

    2003-02-13T23:59:59.000Z

    This workshop was the first major activity in developing a strategic plan for high-performance networking in the Office of Science. Held August 13 through 15, 2002, it brought together a selection of end users, especially representing the emerging, high-visibility initiatives, and network visionaries to identify opportunities and begin defining the path forward.

  3. High Performance Incentive Program (Kansas)

    Broader source: Energy.gov [DOE]

    The High Performance Incentive Program provides tax incentives to eligible employers that pay above-average wages and have a strong commitment to skills development for their workers. A substantial...

  4. Molecular Dynamics Simulations on High-Performance Reconfigurable

    E-Print Network [OSTI]

    Herbordt, Martin

    Molecular Dynamics Simulations on High-Performance Reconfigurable Computing Systems. Matt Chiu. 2010. ACM Trans., http://doi.acm.org/10.1145/1862648.1862653. 1. Introduction: Molecular dynamics simulation (MD) is a

  5. High Performance Information Filtering on Many-core Processors

    E-Print Network [OSTI]

    Tripathy, Aalap

    2013-12-06T23:59:59.000Z

    GPUs have evolved to general-purpose usage as application coprocessors and are now widely used in high performance computing due to exceptional floating-point performance, memory bandwidth and power efficiency. GPU manufacturers have released APIs and programming models for application...

  6. Computational study of power conversion and luminous efficiency performance for

    E-Print Network [OSTI]

    Demir, Hilmi Volkan

    Computational study of power conversion and luminous efficiency performance for semiconductor lighting. The study reports power conversion and luminous efficiency (LE) performance levels of high photometric quality white LEDs integrated with quantum dots (QDs), achieving an averaged color rendering index of 90 (with R9 at least 70) and a luminous efficacy...

  7. Construction of Blaze at the University of Illinois at Chicago: A Shared, High-Performance, Visual Computer for Next-Generation Cyberinfrastructure-Accelerated Scientific, Engineering, Medical and Public Policy Research

    SciTech Connect (OSTI)

    Brown, Maxine D. [Acting Director, EVL; Leigh, Jason [PI

    2014-02-17T23:59:59.000Z

    The Blaze high-performance visual computing system serves the high-performance computing research and education needs of the University of Illinois at Chicago (UIC). Blaze consists of a state-of-the-art, networked computer cluster and an ultra-high-resolution visualization system called CAVE2(TM) that is currently not available anywhere else in Illinois. This system is connected via a high-speed 100-Gigabit network to the State of Illinois' I-WIRE optical network, as well as to national and international high speed networks, such as the Internet2 and the Global Lambda Integrated Facility. This enables Blaze to serve as an on-ramp to national cyberinfrastructure, such as the National Science Foundation's Blue Waters petascale computer at the National Center for Supercomputing Applications at the University of Illinois at Urbana-Champaign and the Department of Energy's Argonne Leadership Computing Facility (ALCF) at Argonne National Laboratory. DOE award # DE-SC005067, leveraged with NSF award #CNS-0959053 for "Development of the Next-Generation CAVE Virtual Environment (NG-CAVE)," enabled us to create a first-of-its-kind high-performance visual computing system. The UIC Electronic Visualization Laboratory (EVL) worked with two U.S. companies to advance their commercial products and maintain U.S. leadership in the global information technology economy. New applications are being enabled with the CAVE2/Blaze visual computing system, advancing scientific research and education in the U.S. and globally and helping train the next-generation workforce.

  8. Elucidating geochemical response of shallow heterogeneous aquifers to CO2 leakage using high-performance computing: Implications for monitoring of CO2 sequestration

    SciTech Connect (OSTI)

    Navarre-Sitchler, Alexis K.; Maxwell, Reed M.; Siirila, Erica R.; Hammond, Glenn E.; Lichtner, Peter C.

    2013-03-01T23:59:59.000Z

    Predicting and quantifying impacts of potential carbon dioxide (CO2) leakage into shallow aquifers that overlie geologic CO2 storage formations is an important part of developing reliable carbon storage techniques. Leakage of CO2 through fractures, faults or faulty wellbores can reduce groundwater pH, inducing geochemical reactions that release solutes into the groundwater and pose a risk of degrading groundwater quality. In order to help quantify this risk, predictions of metal concentrations are needed during geologic storage of CO2. Here, we present regional-scale reactive transport simulations, at relatively fine-scale, of CO2 leakage into shallow aquifers run on the PFLOTRAN platform using high-performance computing. Multiple realizations of heterogeneous permeability distributions were generated using standard geostatistical methods. Increased statistical anisotropy of the permeability field resulted in more lateral and vertical spreading of the plume of impacted water, leading to increased Pb2+ (lead) concentrations and lower pH at a well down gradient of the CO2 leak. Pb2+ concentrations were higher in simulations where calcite was the source of Pb2+ compared to galena. The low solubility of galena effectively buffered the Pb2+ concentrations as galena reached saturation under reducing conditions along the flow path. In all cases, Pb2+ concentrations remained below the maximum contaminant level set by the EPA. Results from this study, compared to natural variability observed in aquifers, suggest that bicarbonate (HCO3) concentrations may be a better geochemical indicator of a CO2 leak under the conditions simulated here.

  9. Design and Implementation of High Performance Algorithms for the (n,k)-Universal Set Problem

    E-Print Network [OSTI]

    Luo, Ping

    2010-01-14T23:59:59.000Z

    [Thesis outline: A. Introduction; B. High Performance Computing; C. Contributions; D. Outline of the Thesis; II. Theoretical Background.] High performance computing (HPC) refers to solving advanced computation problems quickly and reliably using the power of parallel machines. A parallel machine can either be a shared memory supercomputer or a distributed memory computer...
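
    The shared-memory versus distributed-memory distinction drawn above can be illustrated with a tiny parallel example. Python's multiprocessing workers each get their own address space (closer in spirit to the distributed-memory model), and partial results are combined by explicit communication rather than shared variables; this is purely illustrative, not code from the thesis:

        # Minimal parallel sum-of-squares split across worker processes.
        from multiprocessing import Pool

        def partial_sum(bounds):
            lo, hi = bounds
            return sum(i * i for i in range(lo, hi))

        if __name__ == "__main__":
            n, workers = 10_000_000, 4
            step = n // workers
            chunks = [(k * step, (k + 1) * step if k < workers - 1 else n)
                      for k in range(workers)]
            with Pool(workers) as pool:
                total = sum(pool.map(partial_sum, chunks))   # explicit combine step
            print("sum of squares below", n, "=", total)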

  10. INL High Performance Building Strategy

    SciTech Connect (OSTI)

    Jennifer D. Morton

    2010-02-01T23:59:59.000Z

    High performance buildings, also known as sustainable buildings and green buildings, are resource efficient structures that minimize the impact on the environment by using less energy and water, reduce solid waste and pollutants, and limit the depletion of natural resources while also providing a thermally and visually comfortable working environment that increases productivity for building occupants. As Idaho National Laboratory (INL) becomes the nation’s premier nuclear energy research laboratory, the physical infrastructure will be established to help accomplish this mission. This infrastructure, particularly the buildings, should incorporate high performance sustainable design features in order to be environmentally responsible and reflect an image of progressiveness and innovation to the public and prospective employees. Additionally, INL is a large consumer of energy that contributes to both carbon emissions and resource inefficiency. In the current climate of rising energy prices and political pressure for carbon reduction, this guide will help new construction project teams to design facilities that are sustainable and reduce energy costs, thereby reducing carbon emissions. With these concerns in mind, the recommendations described in the INL High Performance Building Strategy (previously called the INL Green Building Strategy) are intended to form the INL foundation for high performance building standards. This revised strategy incorporates the latest federal and DOE orders (Executive Order [EO] 13514, “Federal Leadership in Environmental, Energy, and Economic Performance” [2009], EO 13423, “Strengthening Federal Environmental, Energy, and Transportation Management” [2007], and DOE Order 430.2B, “Departmental Energy, Renewable Energy, and Transportation Management” [2008]), the latest guidelines, trends, and observations in high performance building construction, and the latest changes to the Leadership in Energy and Environmental Design (LEED®) Green Building Rating System (LEED 2009). The document employs a two-level approach for high performance building at INL. The first level identifies the requirements of the Guiding Principles for Sustainable New Construction and Major Renovations, and the second level recommends which credits should be met when LEED Gold certification is required.

  11. anion exchange high-performance: Topics by E-print Network

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Abstract: Current communications tools and libraries for high performance computing are designed... (2) high performance collaborative...

  12. High Energy Physics from High Performance Computing

    E-Print Network [OSTI]

    T. Blum

    2009-08-06T23:59:59.000Z

    We discuss Quantum Chromodynamics calculations using the lattice regulator. The theory of the strong force is a cornerstone of the Standard Model of particle physics. We present USQCD collaboration results obtained on Argonne National Lab's Intrepid supercomputer that deepen our understanding of these fundamental theories of Nature and provide critical support to frontier particle physics experiments and phenomenology.

  13. High Performance Fortran Language Specification

    E-Print Network [OSTI]

    Shewchuk, Jonathan

    Version 1.1. The High Performance Fortran Forum (HPFF), with participation from over 40 organizations... While HPF will become widely available, HPFF is not sanctioned or supported by any official standards organization. The HPFF held a second series of meetings from April 1994 to October 1994 to consider requests

  14. High Performance Outdoor Lighting Accelerator

    Broader source: Energy.gov [DOE]

    Hosted by the U.S. Department of Energy (DOE)’s Weatherization and Intergovernmental Programs Office (WIPO), this webinar covered the expansion of the Better Buildings platform to include the newest initiative for the public sector: the High Performance Outdoor Lighting Accelerator (HPOLA).

  15. High-Performance Nanostructured Coating

    Broader source: Energy.gov [DOE]

    The High-Performance Nanostructured Coating fact sheet details a SunShot project led by a University of California, San Diego research team working to develop a new high-temperature spectrally selective coating (SSC) for receiver surfaces. These receiver surfaces, used in concentrating solar power systems, rely on high-temperature SSCs to effectively absorb solar energy without emitting much blackbody radiation. The optical properties of the SSC directly determine the efficiency and maximum attainable temperature of solar receivers, which in turn influence the power-conversion efficiency and overall system cost.

  16. Performance analysis of memory hierarchies in high performance systems

    SciTech Connect (OSTI)

    Yogesh, A.

    1993-07-01T23:59:59.000Z

    This thesis studies memory bandwidth as a performance predictor of programs. The focus of this work is on computationally intensive programs. These programs are the most likely to access large amounts of data, stressing the memory system. Computationally intensive programs are also likely to use highly optimizing compilers to produce the fastest executables possible. Methods to reduce the amount of data traffic by increasing the average number of references to each item while it resides in the cache are explored. Increasing the average number of references to each cache item reduces the number of memory requests. Chapter 2 describes the DLX architecture. This is the architecture on which all the experiments were performed. Chapter 3 studies memory moves as a performance predictor for a group of application programs. Chapter 4 introduces a model to study the performance of programs in the presence of memory hierarchies. Chapter 5 explores some compiler optimizations that can help increase the references to each item while it resides in the cache.
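
    Raising the average number of references to each item while it resides in cache is classically done with loop blocking (tiling). A small sketch of the access pattern only, under the assumption that a blocked matrix multiply is an acceptable illustration of the idea (the thesis's own DLX experiments are not reproduced here):

        # Blocked (tiled) matrix multiply: each block of A is reused across a whole
        # block of columns of B and C, cutting memory traffic per arithmetic operation.

        def blocked_matmul(A, B, n, block=32):
            C = [[0.0] * n for _ in range(n)]
            for ii in range(0, n, block):
                for kk in range(0, n, block):
                    for jj in range(0, n, block):
                        for i in range(ii, min(ii + block, n)):
                            for k in range(kk, min(kk + block, n)):
                                a = A[i][k]                     # reused across the j-block
                                for j in range(jj, min(jj + block, n)):
                                    C[i][j] += a * B[k][j]
            return C

        if __name__ == "__main__":
            n = 64
            A = [[float(i + j) for j in range(n)] for i in range(n)]
            B = [[float(i - j) for j in range(n)] for i in range(n)]
            print("C[0][0] =", blocked_matmul(A, B, n)[0][0])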

  17. High Efficiency, High Performance Clothes Dryer

    SciTech Connect (OSTI)

    Peter Pescatore; Phil Carbone

    2005-03-31T23:59:59.000Z

    This program covered the development of two separate products; an electric heat pump clothes dryer and a modulating gas dryer. These development efforts were independent of one another and are presented in this report in two separate volumes. Volume 1 details the Heat Pump Dryer Development while Volume 2 details the Modulating Gas Dryer Development. In both product development efforts, the intent was to develop high efficiency, high performance designs that would be attractive to US consumers. Working with Whirlpool Corporation as our commercial partner, TIAX applied this approach of satisfying consumer needs throughout the Product Development Process for both dryer designs. Heat pump clothes dryers have been in existence for years, especially in Europe, but have not been able to penetrate the market. This has been especially true in the US market where no volume production heat pump dryers are available. The issue has typically been around two key areas: cost and performance. Cost is a given in that a heat pump clothes dryer has numerous additional components associated with it. While heat pump dryers have been able to achieve significant energy savings compared to standard electric resistance dryers (over 50% in some cases), designs to date have been hampered by excessively long dry times, a major market driver in the US. The development work done on the heat pump dryer over the course of this program led to a demonstration dryer that delivered the following performance characteristics: (1) 40-50% energy savings on large loads with 35 F lower fabric temperatures and similar dry times; (2) 10-30 F reduction in fabric temperature for delicate loads with up to 50% energy savings and 30-40% time savings; (3) Improved fabric temperature uniformity; and (4) Robust performance across a range of vent restrictions. For the gas dryer development, the concept developed was one of modulating the gas flow to the dryer throughout the dry cycle. Through heat modulation in a gas dryer, significant time and energy savings, combined with dramatically reduced fabric temperatures, was achieved in a cost-effective manner. The key design factor lay in developing a system that matches the heat input to the dryer with the fabrics ability to absorb it. The development work done on the modulating gas dryer over the course of this program led to a demonstration dryer that delivered the following performance characteristics: (1) Up to 25% reduction in energy consumption for small and medium loads; (2) Up to 35% time savings for large loads with 10-15% energy reduction and no adverse effect on cloth temperatures; (3) Reduced fabric temperatures, dry times and 18% energy reduction for delicate loads; and, (4) Robust performance across a range of vent restrictions.

  18. Liquid Cooling v. Air Cooling Evaluation in the Maui High-Performance...

    Office of Environmental Management (EM)

    Liquid Cooling v. Air Cooling Evaluation in the Maui High-Performance Computing Center. Study...

  19. Building America Webinar: High Performance Enclosure Strategies...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Building America Webinar: High Performance Enclosure Strategies: Part II, New Construction - August 13, 2014 - Next Gen Advanced Framing for High Performance Homes Integrated...

  20. Building America Webinar: High Performance Space Conditioning...

    Energy Savers [EERE]

    Building America Webinar: High Performance Space Conditioning Systems, Part II - Compact Buried Ducts...

  1. Building America Webinar: High Performance Enclosure Strategies...

    Broader source: Energy.gov (indexed) [DOE]

    Building America Webinar: High Performance Enclosure Strategies: Part II, New Construction. The webinar is the...

  2. High Performance Fortran: Implementor and Users Workshop Alok Choudhary * Charles Koelbel Mary Zosel

    E-Print Network [OSTI]

    of the High Performance Fortran Forum (HPFF). A specific goal of HPFF was to have... (* Supported by ARPA.)

  3. Computational Tools to Assess Turbine Biological Performance

    SciTech Connect (OSTI)

    Richmond, Marshall C.; Serkowski, John A.; Rakowski, Cynthia L.; Strickler, Brad; Weisbeck, Molly; Dotson, Curtis L.

    2014-07-24T23:59:59.000Z

    Public Utility District No. 2 of Grant County (GCPUD) operates the Priest Rapids Dam (PRD), a hydroelectric facility on the Columbia River in Washington State. The dam contains 10 Kaplan-type turbine units that are now more than 50 years old. Plans are underway to refit these aging turbines with new runners. The Columbia River at PRD is a migratory pathway for several species of juvenile and adult salmonids, so passage of fish through the dam is a major consideration when upgrading the turbines. In this paper, a method for turbine biological performance assessment (BioPA) is demonstrated. Using this method, a suite of biological performance indicators is computed based on simulated data from a CFD model of a proposed turbine design. Each performance indicator is a measure of the probability of exposure to a certain dose of an injury mechanism. Using known relationships between the dose of an injury mechanism and frequency of injury (dose–response) from laboratory or field studies, the likelihood of fish injury for a turbine design can be computed from the performance indicator. By comparing the values of the indicators from proposed designs, the engineer can identify the more-promising alternatives. We present an application of the BioPA method for baseline risk assessment calculations for the existing Kaplan turbines at PRD that will be used as the minimum biological performance that a proposed new design must achieve.
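
    The core of the BioPA approach described above is converting simulated exposure doses into performance indicators and injury likelihoods via dose-response relationships. A schematic sketch only: the thresholds, the Gaussian stand-in for CFD samples, and the logistic dose-response curve are invented placeholders, not the actual BioPA correlations:

        # Schematic dose/exposure-to-injury sketch in the spirit of the BioPA method.
        import math
        import random

        def performance_indicator(doses, threshold):
            """Fraction of sampled exposures exceeding the threshold dose."""
            return sum(d > threshold for d in doses) / len(doses)

        def injury_probability(dose, d50, slope):
            """Assumed logistic dose-response: 50% injury at d50, steepness 'slope'."""
            return 1.0 / (1.0 + math.exp(-slope * (dose - d50)))

        if __name__ == "__main__":
            random.seed(2)
            doses = [random.gauss(0.6, 0.25) for _ in range(10_000)]   # stand-in CFD samples
            print("indicator (dose > 1.0):", performance_indicator(doses, 1.0))
            print("mean injury probability:",
                  sum(injury_probability(d, d50=1.2, slope=6.0) for d in doses) / len(doses))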

  4. Circuits for high-performance low-power VLSI logic

    E-Print Network [OSTI]

    Ma, Albert

    2006-01-01T23:59:59.000Z

    The demands of future computing, as well as the challenges of nanometer-era VLSI design, require new digital logic techniques and styles that are simultaneously high performance, energy efficient, and robust to noise and ...

  5. Accelerating Predictive Simulation of IC Engines with High Performance...

    Broader source: Energy.gov (indexed) [DOE]

    IC engines with high performance computing (ACE017) K. Dean Edwards (PI), C. Stuart Daw, Wael R. Elwasif, Charles E. A. Finney, Sreekanth Pannala, Miroslav K. Stoyanov, Robert M....

  6. FUTURE POWER GRID INITIATIVE Real-time High-Performance

    E-Print Network [OSTI]

    FUTURE POWER GRID INITIATIVE: Real-time High-Performance Computing Infrastructure for Next-Generation Power Grid Analysis. Objective: We are developing infrastructure, software, and formal models for real-time analysis in the Electricity Infrastructure Operations Center (EIOC), the Pacific Northwest National Laboratory's (PNNL) national electric

  7. Computing High Accuracy Power Spectra with Pico

    E-Print Network [OSTI]

    William A. Fendt; Benjamin D. Wandelt

    2007-12-02T23:59:59.000Z

    This paper presents the second release of Pico (Parameters for the Impatient COsmologist). Pico is a general purpose machine learning code which we have applied to computing the CMB power spectra and the WMAP likelihood. For this release, we have made improvements to the algorithm as well as the data sets used to train Pico, leading to a significant improvement in accuracy. For the 9 parameter nonflat case presented here Pico can on average compute the TT, TE and EE spectra to better than 1% of cosmic standard deviation for nearly all $\ell$ values over a large region of parameter space. Performing a cosmological parameter analysis of current CMB and large scale structure data, we show that these power spectra give very accurate 1 and 2 dimensional parameter posteriors. We have extended Pico to allow computation of the tensor power spectrum and the matter transfer function. Pico runs about 1500 times faster than CAMB at the default accuracy and about 250,000 times faster at high accuracy. Training Pico can be done using massively parallel computing resources, including distributed computing projects such as Cosmology@Home. On the homepage for Pico, located at http://cosmos.astro.uiuc.edu/pico, we provide new sets of regression coefficients and make the training code available for public use.
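
    The emulator idea behind Pico (train an inexpensive regression on outputs of an expensive calculation, then evaluate the regression instead) can be illustrated roughly as follows. The "expensive_spectrum" is a toy function standing in for a Boltzmann code, and plain polynomial regression stands in for Pico's actual, more sophisticated scheme:

        # Rough emulator sketch: fit polynomials in one parameter to a toy spectrum.
        import numpy as np

        def expensive_spectrum(param, ells):
            # toy stand-in for an expensive power-spectrum calculation
            return (1000.0 / (1.0 + (ells / (200.0 * param)) ** 2)) * (1 + 0.1 * np.sin(3 * param))

        ells = np.arange(2, 1000)
        train_params = np.linspace(0.8, 1.2, 25)
        train_out = np.array([expensive_spectrum(p, ells) for p in train_params])

        # fit an independent cubic polynomial in the parameter for every multipole
        coeffs = np.polynomial.polynomial.polyfit(train_params, train_out, deg=3)

        def emulate(param):
            return np.polynomial.polynomial.polyval(param, coeffs)   # fast surrogate

        p_test = 1.05
        err = np.max(np.abs(emulate(p_test) - expensive_spectrum(p_test, ells))
                     / expensive_spectrum(p_test, ells))
        print(f"max relative emulation error at p={p_test}: {err:.2e}")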

  8. Appears in the 21st International ACM Symposium on High-Performance Parallel and Distributed Computing (HPDC'12) Locality-Aware Dynamic VM Reconfiguration on

    E-Print Network [OSTI]

    virtualization, has been expanding its services to distributed data-intensive platforms such as Map-communication Net- works]: Distributed Systems Permission to make digital or hard copies of all or part of this work computing has been expanding its services to data-intensive computing on distributed platforms such as Map

  9. Management issues for high performance storage systems

    SciTech Connect (OSTI)

    Louis, S. [Lawrence Livermore National Lab., CA (United States); Burris, R. [Oak Ridge National Lab., TN (United States)

    1995-03-01T23:59:59.000Z

    Managing distributed high-performance storage systems is complex and, although sharing common ground with traditional network and systems management, presents unique storage-related issues. Integration technologies and frameworks exist to help manage distributed network and system environments. Industry-driven consortia provide open forums where vendors and users cooperate to leverage solutions. But these new approaches to open management fall short of addressing the needs of scalable, distributed storage. We discuss the motivation and requirements for storage system management (SSM) capabilities and describe how SSM manages distributed servers and storage resource objects in the High Performance Storage System (HPSS), a new storage facility for data-intensive applications and large-scale computing. Modern storage systems, such as HPSS, require many SSM capabilities, including server and resource configuration control, performance monitoring, quality of service, flexible policies, file migration, file repacking, accounting, and quotas. We present results of initial HPSS SSM development including design decisions and implementation trade-offs. We conclude with plans for follow-on work and provide storage-related recommendations for vendors and standards groups seeking enterprise-wide management solutions.

  10. Algoritmos y Programación: High Performance Fortran

    E-Print Network [OSTI]

    Giménez, Domingo

    Promoted by the High Performance Fortran Forum (HPFF): http://www.hpfpc.org/index-E.html. Characteristics of

  11. High Performance An introduction talk

    E-Print Network [OSTI]

    Fang, Shiaofen

    ...a world leader. Being used in many different areas: medicine to consumer products, energy to aerospace, catalysts and batteries; life science; better biofuels; sequence to structure to function; ITER, ILC; data -> knowledge. The listing of the 500 most powerful computers in the world; yardstick: Rmax

  12. Organizational Analysis in Computer Science

    E-Print Network [OSTI]

    Kling, Rob

    1993-01-01T23:59:59.000Z

    trying to develop high performance computing applications. For example, the High Performance Computing Act will provide ... helping to develop high performance computing applications

  13. High-Performance Nanostructured Coating

    Broader source: Energy.gov (indexed) [DOE]

    high spectral absorptivity in the solar spectrum and low spectral emissivity in the infrared spectrum, as well as excellent durability at elevated temperatures in open air....

  14. Life-Cycle Energy Demand of Computational Logic: From High-Performance 32nm CPU to Ultra-Low-Power 130nm MCU

    E-Print Network [OSTI]

    Bol, David; Boyd, Sarah; Dornfeld, David

    2011-01-01T23:59:59.000Z

    Performance 32 nm CPU to Ultra-Low-Power 130 nm MCU. David Bol ... boxes and smart phones to ultra-low-power 130 nm MCUs for ... the energy demand for ultra-low-power MCUs is completely

  16. Teuchos C++ memory management classes, idioms, and related topics, the complete reference : a comprehensive strategy for safe and efficient memory management in C++ for high performance computing.

    SciTech Connect (OSTI)

    Bartlett, Roscoe Ainsworth

    2010-05-01T23:59:59.000Z

    The ubiquitous use of raw pointers in higher-level code is the primary cause of all memory usage problems and memory leaks in C++ programs. This paper describes what might be considered a radical approach to the problem, which is to encapsulate the use of all raw pointers and all raw calls to new and delete in higher-level C++ code. Instead, a set of cooperating template classes developed in the Trilinos package Teuchos are used to encapsulate every use of raw C++ pointers in every use case where it appears in high-level code. Included in the set of memory management classes is the typical reference-counted smart pointer class similar to boost::shared_ptr (and therefore C++0x std::shared_ptr). However, what is missing in boost and the new standard library are non-reference-counted classes for the remaining use cases where raw C++ pointers would need to be used. These classes have a debug build mode where nearly all programmer errors are caught and gracefully reported at runtime. The default optimized build mode strips all runtime checks and allows the code to perform as efficiently as raw C++ pointers with reasonable usage. Also included is a novel approach for dealing with the circular references problem that imparts little extra overhead and is almost completely invisible to most of the code (unlike the boost and therefore C++0x approach). Rather than being a radical approach, encapsulating all raw C++ pointers is simply the logical progression of a trend in the C++ development and standards community that started with std::auto_ptr and is continued (but not finished) with std::shared_ptr in C++0x. Using the Teuchos reference-counted memory management classes allows one to remove unnecessary constraints in the use of objects by removing arbitrary lifetime ordering constraints, which are a type of unnecessary coupling [23]. The code one writes with these classes will be more likely to be correct on first writing, will be less likely to contain silent (but deadly) memory usage errors, and will be much more robust to later refactoring and maintenance. The level of debug-mode runtime checking provided by the Teuchos memory management classes is stronger in many respects than what is provided by memory checking tools like Valgrind and Purify, while being much less expensive. However, tools like Valgrind and Purify perform a number of types of checks (like usage of uninitialized memory) that make these tools very valuable and therefore complement the Teuchos memory management debug-mode runtime checking. The Teuchos memory management classes and idioms largely address the technical issues in resolving the fragile built-in C++ memory management model (with the exception of circular references, which has no easy solution but can be managed as discussed). All that remains is to teach these classes and idioms and expand their usage in C++ codes. The long-term viability of C++ as a usable and productive language depends on it. Otherwise, if C++ is no safer than C, then is the greater complexity of C++ worth what one gets as extra features? Given that C is smaller and easier to learn than C++, and since most programmers don't know object-orientation (or templates or X, Y, and Z features of C++) all that well anyway, then what really are most programmers getting extra out of C++ that would outweigh the extra complexity of C++ over C?
    C++ zealots will argue this point, but the reality is that C++ popularity has peaked and is becoming less popular, while the popularity of C has remained fairly stable over the last decade. Idioms like those advocated in this paper can help to avert this trend, but it will require wide community buy-in and a change in the way C++ is taught in order to have the greatest impact. To make these programs more secure, compiler vendors or static analysis tools (e.g. klocwork) could implement a preprocessor-like language similar to OpenMP that would allow the programmer to declare (in comments) that certain blocks of code should be 'pointer-free' or allow smaller blocks to be 'pointers allowed'. This would signific
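
    The circular-reference problem discussed above is language-independent: two reference-counted objects that point at each other keep each other alive, and a weak reference in one direction breaks the cycle. As a cross-language illustration only (Python's built-in refcounting and weakref module, not the Teuchos strong/weak RCP machinery, which differs in detail):

        # Cycle vs. weak back-reference: a minimal Python analogy of the idea.
        import weakref

        class Node:
            def __init__(self, name):
                self.name = name
                self.partner = None          # strong reference
                self.partner_weak = None     # weak reference

        # strong cycle: neither refcount can reach zero without cycle collection
        a, b = Node("a"), Node("b")
        a.partner, b.partner = b, a

        # acyclic alternative: parent holds the child strongly, child refers back weakly
        parent, child = Node("parent"), Node("child")
        parent.partner = child
        child.partner_weak = weakref.ref(parent)

        back = child.partner_weak()          # dereference the weak link
        print(back.name if back is not None else "parent already collected")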

  17. High Performance Networks for High Impact Science

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

  18. Large Scale Computing and Storage Requirements for Nuclear Physics Research

    E-Print Network [OSTI]

    Gerber, Richard A.

    2012-01-01T23:59:59.000Z

    proceedings of High Performance Computing – 2011 (HPC-2011)is manager of High-Performance Computing group in the ITDensity Physics high-performance computing High Performance

  19. National Energy Research Scientific Computing Center 2007 Annual Report

    E-Print Network [OSTI]

    Hules, John A.

    2008-01-01T23:59:59.000Z

    and Directions in High Performance Computing for the Officein the evolution of high performance computing and networks.Hectopascals High performance computing High Performance

  20. High Performance Plastic DSSC | ANSER Center | Argonne-Northwestern...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

  1. High performance fibers. Final report

    SciTech Connect (OSTI)

    Economy, J.

    1994-01-01T23:59:59.000Z

    A two-and-a-half-year ONR/ARPA funded program to develop a low-cost process for manufacture of a high-strength/high-modulus (σ/E) boron nitride (BN) fiber was initiated on 7/1/90 and ended on 12/31/92. The preparation of high σ/E BN fibers had been demonstrated in the late 1960's by the PI using batch nitriding of B2O3 fiber with NH3 followed by stress graphitization at approx. 2000°C. Such fibers displayed values comparable to PAN-based carbon fibers, but the mechanicals were variable, most likely because of redeposition of volatiles at 2000°C. In addition, the cost of the fibers was very high due to the many hours of nitriding necessary to convert the B2O3 fibers. The use of batch nitriding negated two possible cost advantages of this concept, namely the ease of drawing very fine, multi-filament yarn of B2O3 and, more importantly, the very low cost of the starting materials.

  2. Building America Webinar: High Performance Enclosure Strategies...

    Energy Savers [EERE]

    Building America Webinar: High Performance Enclosure Strategies: Part II, New Construction - August 13, 2014 - Cladding Attachment Over Thick Exterior Rigid Insulation.

  3. High Performance Green Schools Planning Grants

    Broader source: Energy.gov [DOE]

    The Governor's Green Government Council of Pennsylvania provides an incentive for new schools to be built according to green building standards. High Performance Green Schools Planning Grants are...

  4. Building America Webinar: High Performance Space Conditioning...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Building America Webinar: High Performance Space Conditioning Systems, Part II - Design Options for Locating Ducts within Conditioned Space.

  5. High performance carbon nanocomposites for ultracapacitors

    DOE Patents [OSTI]

    Lu, Wen

    2012-10-02T23:59:59.000Z

    The present invention relates to composite electrodes for electrochemical devices, particularly to carbon nanotube composite electrodes for high performance electrochemical devices, such as ultracapacitors.

  6. TAP Webinar: High Performance Outdoor Lighting Accelerator

    Broader source: Energy.gov [DOE]

    Hosted by the Technical Assistance Program (TAP), this webinar will cover the recently announced expansion of the Better Buildings platform —the High Performance Outdoor Lighting Accelerator (HPOLA).

  7. This paper is adapted from a chapter in: L. Grandinetti (ed.), "Grid Computing and New Frontiers of High Performance Processing." Elsevier, 2005.

    E-Print Network [OSTI]

    , magnetic fusion energy sciences, chemical sciences, and bioinformatics. Except for nuclear physics – and is the principal federal funding agency of – the Nation's research programs in high-energy physics, nuclear physics, and fusion energy sciences. [It also] manages fundamental research programs in basic energy sciences

  8. Strategy Guideline: High Performance Residential Lighting

    SciTech Connect (OSTI)

    Holton, J.

    2012-02-01T23:59:59.000Z

    The Strategy Guideline: High Performance Residential Lighting has been developed to provide a tool for the understanding and application of high performance lighting in the home. The high performance lighting strategies featured in this guide are drawn from recent advances in commercial lighting for application to typical spaces found in residential buildings. This guide offers strategies to greatly reduce lighting energy use through the application of high quality fluorescent and light emitting diode (LED) technologies. It is important to note that these strategies not only save energy in the home but also serve to satisfy the homeowner's expectations for high quality lighting.

  9. GlyQ-IQ: Glycomics Quintavariate-Informed Quantification with High-Performance Computing and GlycoGrid 4D Visualization

    SciTech Connect (OSTI)

    Kronewitter, Scott R.; Slysz, Gordon W.; Marginean, Ioan; Hagler, Clay D.; Lamarche, Brian L.; Zhao, Rui; Harris, Myanna Y.; Monroe, Matthew E.; Polyukh, Christina A.; Crowell, Kevin L.; Fillmore, Thomas L.; Carlson, Timothy S.; Camp, David G.; Moore, Ronald J.; Payne, Samuel H.; Anderson, Gordon A.; Smith, Richard D.

    2014-05-31T23:59:59.000Z

    Dense LC-MS datasets have convoluted extracted ion chromatograms with multiple chromatographic peaks that cloud the differentiation between intact compounds with their overlapping isotopic distributions, peaks due to insource ion fragmentation, and noise. Making this differentiation is critical in glycomics datasets because chromatographic peaks correspond to different intact glycan structural isomers. The GlyQ-IQ software is targeted chromatography centric software designed for chromatogram and mass spectra data processing and subsequent glycan composition annotation. The targeted analysis approach offers several key advantages to LC-MS data processing and annotation over traditional algorithms. A priori information about the individual target’s elemental composition allows for exact isotope profile modeling for improved feature detection and increased sensitivity by focusing chromatogram generation and peak fitting on the isotopic species in the distribution having the highest intensity and data quality. Glycan target annotation is corroborated by glycan family relationships and in source fragmentation detection. The GlyQ-IQ software is developed in this work (Part 1) and was used to profile N-glycan compositions from human serum LC-MS Datasets. The companion manuscript GlyQ-IQ Part 2 discusses developments in human serum N-glycan sample preparation, glycan isomer separation, and glycan electrospray ionization. A case study is presented to demonstrate how GlyQ-IQ identifies and removes confounding chromatographic peaks from high mannose glycan isomers from human blood serum. In addition, GlyQ-IQ was used to generate a broad N-glycan profile from a high resolution (100K/60K) nESI-LS-MS/MS dataset including CID and HCD fragmentation acquired on a Velos Pro Mass spectrometer. 101 glycan compositions and 353 isomer peaks were detected from a single sample. 99% of the GlyQ-IQ glycan-feature assignments passed manual validation and are backed with high resolution mass spectra and mass accuracies less than 7 ppm.

  10. Network-Theoretic Classification of Parallel Computation Patterns

    E-Print Network [OSTI]

    Whalen, Sean; Engle, Sophie; Peisert, Sean; Bishop, Matt

    2012-01-01T23:59:59.000Z

    computation in a high performance computing environment canThe ?eld of High performance computing (HPC) is undergoingmembers of the high performance computing security project

  11. HIGH-PERFORMANCE COATING MATERIALS

    SciTech Connect (OSTI)

    SUGAMA,T.

    2007-01-01T23:59:59.000Z

    Corrosion, erosion, oxidation, and fouling by scale deposits impose critical issues in selecting the metal components used at geothermal power plants operating at brine temperatures up to 300 C. Replacing these components is very costly and time consuming. Currently, components made of titanium alloy and stainless steel commonly are employed for dealing with these problems. However, another major consideration in using these metals is not only that they are considerably more expensive than carbon steel, but also the susceptibility of corrosion-preventing passive oxide layers that develop on their outermost surface sites to reactions with brine-induced scales, such as silicate, silica, and calcite. Such reactions lead to the formation of strong interfacial bonds between the scales and oxide layers, causing the accumulation of multiple layers of scales, and the impairment of the plant component's function and efficacy; furthermore, a substantial amount of time is entailed in removing them. This cleaning operation essential for reusing the components is one of the factors causing the increase in the plant's maintenance costs. If inexpensive carbon steel components could be coated and lined with cost-effective high-hydrothermal temperature stable, anti-corrosion, -oxidation, and -fouling materials, this would improve the power plant's economic factors by engendering a considerable reduction in capital investment, and a decrease in the costs of operations and maintenance through optimized maintenance schedules.

  12. Data Mining Middleware for Wide Area High Performance Networks

    E-Print Network [OSTI]

    Grossman, Robert

    Data Mining Middleware for Wide Area High Performance Networks. Robert L. Grossman, Yunhong Gu, David Hanley, and Michal Sabala, National Center for Data Mining, University of Illinois at Chicago, USA. … astronomical data from the Sloan Digital Sky Survey (SDSS) and the other involves computing histograms from

  13. Performing an allreduce operation on a plurality of compute nodes of a parallel computer

    DOE Patents [OSTI]

    Faraj, Ahmad (Rochester, MN)

    2012-04-17T23:59:59.000Z

    Methods, apparatus, and products are disclosed for performing an allreduce operation on a plurality of compute nodes of a parallel computer. Each compute node includes at least two processing cores. Each processing core has contribution data for the allreduce operation. Performing an allreduce operation on a plurality of compute nodes of a parallel computer includes: establishing one or more logical rings among the compute nodes, each logical ring including at least one processing core from each compute node; performing, for each logical ring, a global allreduce operation using the contribution data for the processing cores included in that logical ring, yielding a global allreduce result for each processing core included in that logical ring; and performing, for each compute node, a local allreduce operation using the global allreduce results for each processing core on that compute node.
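    As a rough illustration of the two-phase scheme described in this abstract, the following self-contained C++ sketch simulates the data movement serially: one logical ring per core index (one core drawn from each node), a global reduction within each ring, then a local reduction across the cores of each node. The node and core counts, and the use of summation as the reduction, are arbitrary choices for the sketch, not details taken from the patent.

```cpp
// Illustrative serial simulation of the two-phase allreduce described above:
// one logical ring per core index (one core from each node), a "global"
// reduction within each ring, then a "local" reduction across the cores of
// each node.  Node/core counts and the sum operation are arbitrary choices.
#include <iostream>
#include <vector>

int main() {
    const int numNodes = 4;
    const int coresPerNode = 2;

    // contribution[node][core]: each processing core's contribution data.
    std::vector<std::vector<double>> contribution(numNodes,
        std::vector<double>(coresPerNode));
    for (int n = 0; n < numNodes; ++n)
        for (int c = 0; c < coresPerNode; ++c)
            contribution[n][c] = n * 10.0 + c;

    // Phase 1: global allreduce within each logical ring (ring c holds
    // core c of every node), yielding one partial result per ring.
    std::vector<double> ringResult(coresPerNode, 0.0);
    for (int c = 0; c < coresPerNode; ++c)
        for (int n = 0; n < numNodes; ++n)
            ringResult[c] += contribution[n][c];

    // Phase 2: local allreduce on each node combines the per-ring results;
    // every node (and hence every core) ends up with the full sum.
    double total = 0.0;
    for (int c = 0; c < coresPerNode; ++c)
        total += ringResult[c];

    std::cout << "allreduce result on every core: " << total << "\n";
    return 0;
}
```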

  14. Advanced Scientific Computing Research Network Requirements

    E-Print Network [OSTI]

    Dart, Eli

    2014-01-01T23:59:59.000Z

    that have a high-performance computing (HPC) component (with an emphasis on high performance computing facilities.develop and deploy high- performance computing hardware and

  15. Multiclass Classification of Distributed Memory Parallel Computations

    E-Print Network [OSTI]

    Whalen, Sean; Peisert, Sean; Bishop, Matt

    2012-01-01T23:59:59.000Z

    95616 b Abstract High Performance Computing (HPC) is a ?eldorganizing maps, High performance computing, Communicationpowerful known High Performance Computing (HPC) systems in

  16. The Magellan Final Report on Cloud Computing

    E-Print Network [OSTI]

    Coghlan, Susan

    2013-01-01T23:59:59.000Z

    framework for high-performance computing systems. In IEEEAnalysis of High Performance Computing Applications on theon UnConventional high performance computing workshop plus

  17. ADVANCED HIGH PERFORMANCE SOLID WALL BLANKET CONCEPTS

    SciTech Connect (OSTI)

    WONG, CPC; MALANG, S; NISHIO, S; RAFFRAY, R; SAGARA, S

    2002-04-01T23:59:59.000Z

    First wall and blanket (FW/blanket) design is a crucial element in the performance and acceptance of a fusion power plant. High temperature structural and breeding materials are needed for high thermal performance. A suitable combination of structural design with the selected materials is necessary for D-T fuel sufficiency. Whenever possible, low afterheat, low chemical reactivity and low activation materials are desired to achieve passive safety and minimize the amount of high-level waste. Of course the selected fusion FW/blanket design will have to match the operational scenarios of high performance plasma. The key characteristics of eight advanced high performance FW/blanket concepts are presented in this paper. Design configurations, performance characteristics, unique advantages and issues are summarized. All reviewed designs can satisfy most of the necessary design goals. For further development, in concert with the advancement in plasma control and scrape off layer physics, additional emphasis will be needed in the areas of first wall coating material selection, design of plasma stabilization coils, consideration of reactor startup and transient events. To validate the projected performance of the advanced FW/blanket concepts the critical element is the need for 14 MeV neutron irradiation facilities for the generation of necessary engineering design data and the prediction of FW/blanket components lifetime and availability.

  18. Collaboration to advance high-performance computing

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

  19. DOE High Performance Computing Operational Review

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

  20. Presentation: High Performance Computing Applications | Department of

    Energy Savers [EERE]

  1. Introduction to High Performance Computing Using GPUs

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

  2. Presentation: High Performance Computing Applications | Department of

    Broader source: Energy.gov (indexed) [DOE]

  3. Network-Theoretic Classification of Parallel Computation Patterns

    E-Print Network [OSTI]

    Whalen, Sean; Peisert, Sean; Bishop, Matt

    2011-01-01T23:59:59.000Z

    computation in a high performance computing en- vironmentAs the ?eld of high performance computing (HPC) plans for

  4. Dinosaurs can fly -- High performance refining

    SciTech Connect (OSTI)

    Treat, J.E. [Booz-Allen and Hamilton, Inc., San Francisco, CA (United States)

    1995-09-01T23:59:59.000Z

    High performance refining requires that one develop a winning strategy based on a clear understanding of one's position in one's company's value chain; one's competitive position in the products markets one serves; and the most likely drivers and direction of future market forces. The author discussed all three points, then described measuring performance of the company. To become a true high performance refiner often involves redesigning the organization as well as the business processes. The author discusses such redesigning. The paper summarizes ten rules to follow to achieve high performance: listen to the market; optimize; organize around asset or area teams; trust the operators; stay flexible; source strategically; all maintenance is not equal; energy is not free; build project discipline; and measure and reward performance. The paper then discusses the constraints to the implementation of change.

  5. Dish Stirling High Performance Thermal Storage

    Broader source: Energy.gov (indexed) [DOE]

    Dish Stirling High Performance Thermal Storage, Sandia National Laboratories (SNL), Charles E. Andraka, FY13 Q2: 2-D PCM model extended to include realistic heat pipe boundary...

  6. New set of metrics for the computational performance

    E-Print Network [OSTI]

    New set of metrics for the computational performance of IS-ENES Earth System Models (TR/CMGC/14/73). … performance of Earth System Models is developed and used for an initial performance analysis of the EC models.

  7. Computationally Efficient Modeling of High-Efficiency Clean Combustion...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

  8. Kathy Yelick Co-authors NRC Report on Computer Performance -...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    the lab's NERSC Division, was a panelist in a March 22 discussion of "The Future of Computer Performance: Game Over or Next Level?" a new report by the National Research Council....

  9. Mira Performance Boot Camp 2015 | Argonne Leadership Computing...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Mira Performance Boot Camp 2015 Event Sponsor: Argonne Leadership Computing Facility Start Date: May 19 2015 - 8:30am to May 21 2015 - 5:00pm Building/Room: TCS Building 240 | Room...

  10. Performance of various computers using standard linear equations

    SciTech Connect (OSTI)

    Dongarra, J. (Univ. of Tennessee, TN (US))

    1989-01-01T23:59:59.000Z

    This report compares the performance of different computer systems in solving dense systems of linear equations. The comparison involves approximately one hundred computers, ranging from a CRAY Y-MP to scientific workstations such as the Apollo and Sun to IBM PCs.
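    For readers unfamiliar with this style of benchmark, the sketch below times a toy dense solve (Gaussian elimination with partial pivoting) and converts the elapsed time to MFLOP/s using the conventional 2n^3/3 + 2n^2 operation count; it is only an illustration of the kind of measurement such reports tabulate, not the actual LINPACK benchmark code, and the problem size is an arbitrary choice.

```cpp
// Toy LINPACK-style benchmark: time Gaussian elimination with partial
// pivoting on an n-by-n dense system and report MFLOP/s using the
// conventional 2n^3/3 + 2n^2 operation count.  The problem size is arbitrary.
#include <chrono>
#include <cmath>
#include <cstdlib>
#include <iostream>
#include <utility>
#include <vector>

int main() {
    const int n = 500;
    std::vector<double> A(n * n), b(n);
    std::srand(12345);
    for (double& x : A) x = std::rand() / (double)RAND_MAX;
    for (double& x : b) x = std::rand() / (double)RAND_MAX;

    auto t0 = std::chrono::steady_clock::now();

    // Forward elimination with partial pivoting.
    for (int k = 0; k < n; ++k) {
        int piv = k;
        for (int i = k + 1; i < n; ++i)
            if (std::fabs(A[i * n + k]) > std::fabs(A[piv * n + k])) piv = i;
        for (int j = 0; j < n; ++j) std::swap(A[k * n + j], A[piv * n + j]);
        std::swap(b[k], b[piv]);
        for (int i = k + 1; i < n; ++i) {
            double m = A[i * n + k] / A[k * n + k];
            for (int j = k; j < n; ++j) A[i * n + j] -= m * A[k * n + j];
            b[i] -= m * b[k];
        }
    }
    // Back substitution.
    std::vector<double> x(n);
    for (int i = n - 1; i >= 0; --i) {
        double s = b[i];
        for (int j = i + 1; j < n; ++j) s -= A[i * n + j] * x[j];
        x[i] = s / A[i * n + i];
    }

    double secs = std::chrono::duration<double>(
        std::chrono::steady_clock::now() - t0).count();
    double flops = 2.0 * n * n * n / 3.0 + 2.0 * n * n;
    std::cout << "n = " << n << ", time = " << secs << " s, "
              << flops / secs / 1e6 << " MFLOP/s\n";
    std::cout << "x[0] = " << x[0] << " (sanity check)\n";
    return 0;
}
```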

  11. Natural Refrigerant High-Performance Heat Pump for Commercial...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Natural Refrigerant High-Performance Heat Pump for Commercial Applications. Lead Performer: S-RAM -...

  12. Routing performance analysis and optimization within a massively parallel computer

    DOE Patents [OSTI]

    Archer, Charles Jens; Peters, Amanda; Pinnow, Kurt Walter; Swartz, Brent Allen

    2013-04-16T23:59:59.000Z

    An apparatus, program product and method optimize the operation of a massively parallel computer system by, in part, receiving actual performance data concerning an application executed by the plurality of interconnected nodes, and analyzing the actual performance data to identify an actual performance pattern. A desired performance pattern may be determined for the application, and an algorithm may be selected from among a plurality of algorithms stored within a memory, the algorithm being configured to achieve the desired performance pattern based on the actual performance data.
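    A hedged sketch of the selection step described above: given a performance pattern identified from the actual performance data, an algorithm expected to achieve the desired pattern is looked up from a set stored in memory. The pattern names and algorithm names below are invented placeholders, not terms from the patent.

```cpp
// Hedged sketch of the selection step described above: given an observed
// communication pattern extracted from performance data, pick a routing
// algorithm expected to produce the desired pattern.  Pattern names and
// algorithm names are invented placeholders.
#include <iostream>
#include <map>
#include <string>

int main() {
    // Plurality of algorithms stored in memory, keyed by the performance
    // pattern they are designed to achieve.
    std::map<std::string, std::string> algorithmFor = {
        {"nearest-neighbor", "dimension-ordered routing"},
        {"all-to-all",       "adaptive routing"},
        {"hot-spot",         "randomized (Valiant) routing"},
    };

    // Pattern identified from the actual performance data (placeholder).
    std::string observedPattern = "all-to-all";

    auto it = algorithmFor.find(observedPattern);
    std::cout << "observed pattern: " << observedPattern << "\n"
              << "selected algorithm: "
              << (it != algorithmFor.end() ? it->second : "default routing")
              << "\n";
    return 0;
}
```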

  13. National facility for advanced computational science: A sustainable path to scientific discovery

    E-Print Network [OSTI]

    2004-01-01T23:59:59.000Z

    Applications,” High Performance Computing for ComputationalSystem Effectiveness in High Performance Computing Systems,”Tammy Welcome, “High Performance Computing Facilities for

  14. Optimization of a Lattice Boltzmann Computation on State-of-the-Art Multicore Platforms

    E-Print Network [OSTI]

    Williams, Samuel

    2009-01-01T23:59:59.000Z

    In Proc. SC2005: High performance computing, networking, andMeeting on High Performance Computing for ComputationalIn Proc. SC2005: High performance computing, networking, and

  15. Fast algorithms and solvers in computational electromagnetics and micromagnetics on GPUs

    E-Print Network [OSTI]

    Li, Shaojing

    2012-01-01T23:59:59.000Z

    Architecture and High Performance Computing Workshops (SBAC-architecture high performance computing for computationalConference on High Performance Computing Networking, Storage

  16. RIKEN HPCI Program for Computational Life Sciences

    E-Print Network [OSTI]

    Fukai, Tomoki

    of computational resources offered by the High Performance Computing Infrastructure, with the K computer long-term support. High Performance Computing Development Education and Outreach Strategic Programs

  17. McMPI – a managed-code message passing interface library for high performance communication in C# 

    E-Print Network [OSTI]

    Holmes, Daniel John

    2012-11-28T23:59:59.000Z

    This work endeavours to achieve technology transfer between established best-practice in academic high-performance computing and current techniques in commercial high-productivity computing. It shows that a credible ...

  18. 9/10/2002 Internet/Grid Computing -Fall 2002 1 What is Performance for Internet/Grid Computation?

    E-Print Network [OSTI]

    Browne, James C.

    What is Performance for Internet/Grid Computation? Relative Speed/Cost of Computation … Speed up for distributed parallel execution: 1. Parallelizability …

  19. High-Precision Computation and Mathematical Physics

    SciTech Connect (OSTI)

    Bailey, David H.; Borwein, Jonathan M.

    2008-11-03T23:59:59.000Z

    At the present time, IEEE 64-bit floating-point arithmetic is sufficiently accurate for most scientific applications. However, for a rapidly growing body of important scientific computing applications, a higher level of numeric precision is required. Such calculations are facilitated by high-precision software packages that include high-level language translation modules to minimize the conversion effort. This paper presents a survey of recent applications of these techniques and provides some analysis of their numerical requirements. These applications include supernova simulations, climate modeling, planetary orbit calculations, Coulomb n-body atomic systems, scattering amplitudes of quarks, gluons and bosons, nonlinear oscillator theory, Ising theory, quantum field theory and experimental mathematics. We conclude that high-precision arithmetic facilities are now an indispensable component of a modern large-scale scientific computing environment.
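    One simple building block behind such high-precision packages is the error-free "two-sum" transformation, sketched below in C++: the rounding error of a 64-bit addition is recovered exactly and carried along as a correction term, which is the core idea of double-double arithmetic. This is a generic illustration, not code from the packages surveyed in the paper.

```cpp
// Sketch of the error-free "two-sum" building block used by double-double
// and other high-precision arithmetic packages: the rounding error of a
// 64-bit addition is recovered exactly and carried along as a correction.
#include <cstdio>

// Knuth's TwoSum: s + err equals a + b exactly.
void twoSum(double a, double b, double& s, double& err) {
    s = a + b;
    double bb = s - a;
    err = (a - (s - bb)) + (b - bb);
}

int main() {
    double big = 1.0e16, small = 1.0;  // small is lost in plain double addition
    double s, err;
    twoSum(big, small, s, err);
    std::printf("naive sum      : %.1f\n", big + small);
    std::printf("two-sum result : %.1f + error term %.1f\n", s, err);
    // Keeping (s, err) as an unevaluated pair roughly doubles the working
    // precision, which is the idea behind double-double arithmetic.
    return 0;
}
```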

  20. A Survey of High Performance Schools 

    E-Print Network [OSTI]

    Im, P.; Haberl, J. S.

    2006-01-01T23:59:59.000Z

    • Photovoltaic (PV) systems • Ground source heat pumps • High AFUE (e.g., over 90%) boilers From the EERE database, however, it is difficult to differentiate the energy efficient strategies according to climate area. Different strategies for difference... areas. In some schools, photovoltaic (PV) systems have been installed. For example, a 1-2 kW PV system was installed at Tucson Unified School District, Arizona (i.e., hot and dry climates). More detailed design guides for high performance schools...

  1. Strategy Guideline: Partnering for High Performance Homes

    SciTech Connect (OSTI)

    Prahl, D.

    2013-01-01T23:59:59.000Z

    High performance houses require a high degree of coordination and have significant interdependencies between various systems in order to perform properly, meet customer expectations, and minimize risks for the builder. Responsibility for the key performance attributes is shared across the project team and can be well coordinated through advanced partnering strategies. For high performance homes, traditional partnerships need to be matured to the next level and be expanded to all members of the project team including trades, suppliers, manufacturers, HERS raters, designers, architects, and building officials as appropriate. In an environment where the builder is the only source of communication between trades and consultants and where relationships are, in general, adversarial as opposed to cooperative, the chances of any one building system to fail are greater. Furthermore, it is much harder for the builder to identify and capitalize on synergistic opportunities. Partnering can help bridge the cross-functional aspects of the systems approach and achieve performance-based criteria. Critical success factors for partnering include support from top management, mutual trust, effective and open communication, effective coordination around common goals, team building, appropriate use of an outside facilitator, a partnering charter progress toward common goals, an effective problem-solving process, long-term commitment, continuous improvement, and a positive experience for all involved.

  2. Project materials [Commercial High Performance Buildings Project

    SciTech Connect (OSTI)

    None

    2001-01-01T23:59:59.000Z

    The Consortium for High Performance Buildings (ChiPB) is an outgrowth of DOE'S Commercial Whole Buildings Roadmapping initiatives. It is a team-driven public/private partnership that seeks to enable and demonstrate the benefit of buildings that are designed, built and operated to be energy efficient, environmentally sustainable, superior quality, and cost effective.

  3. NET-ZERO ENERGY HIGH PERFORMANCE

    E-Print Network [OSTI]

    Farritor, Shane

    , University of Nebraska­Lincoln · Denise Kuehn, Manager, Demand Side and Sustainable Management, Omaha Public was that the largest potential for enhancing energy supplies in this country is making buildings more efficient. "-- Harvey Perlman, UNL Chancellor #12;Net-Zero Energy, High-Performance Green Buildings | 1 INTRODUCTION

  4. PMap : unlocking the performance genes of HPC applications

    E-Print Network [OSTI]

    He, Jiahua

    2011-01-01T23:59:59.000Z

    in High Performance Computing . . . . . . . . 1.2.1 ParallelConfer- ence for High Performance Computing, Networking,on Patterns in High Performance Computing, May 2005. [47] M.

  5. I/O Performance of Virtualized Cloud Environments

    E-Print Network [OSTI]

    Ghoshal, Devarshi

    2013-01-01T23:59:59.000Z

    Technologies in High Performance Computing. In 2nd IEEEusing virtual high-performance computing: a case study usingAnalysis of High Performance Computing Applications on the

  6. GPGPUs: How to Combine High Computational Power with High Reliability

    E-Print Network [OSTI]

    Gurumurthi, Sudhanva

    recent results derived from radiation experiments about the reliability of GPGPUs. Third, it describes of applications running on GPGPUs. Keywords--GPGPUs, reliability, HPC, fault injection, radiation experiments I their appearance on the market. Their very high computational power combined with low cost, reduced power

  7. High-Performance Energy Applications and Systems

    SciTech Connect (OSTI)

    Miller, Barton

    2014-05-19T23:59:59.000Z

    The Paradyn project has a history of developing algorithms, techniques, and software that push the cutting edge of tool technology for high-end computing systems. Under this funding, we are working on a three-year agenda to make substantial new advances in support of new and emerging Petascale systems. The overall goal for this work is to address the steady increase in complexity of these petascale systems. Our work covers two key areas: (1) The analysis, instrumentation and control of binary programs. Work in this area falls under the general framework of the Dyninst API tool kits. (2) Infrastructure for building tools and applications at extreme scale. Work in this area falls under the general framework of the MRNet scalability framework. Note that work done under this funding is closely related to work done under a contemporaneous grant, “Foundational Tools for Petascale Computing”, SC0003922/FG02-10ER25940, UW PRJ27NU.

  8. High Performance Commercial Fenestration Framing Systems

    SciTech Connect (OSTI)

    Mike Manteghi; Sneh Kumar; Joshua Early; Bhaskar Adusumalli

    2010-01-31T23:59:59.000Z

    A major objective of the U.S. Department of Energy is to have a zero energy commercial building by the year 2025. Windows have a major influence on the energy performance of the building envelope as they control over 55% of building energy load, and represent one important area where technologies can be developed to save energy. Aluminum framing systems are used in over 80% of commercial fenestration products (i.e. windows, curtain walls, store fronts, etc.). Aluminum framing systems are often required in commercial buildings because of their inherent good structural properties and long service life, which is required from commercial and architectural frames. At the same time, they are lightweight and durable, requiring very little maintenance, and offer design flexibility. An additional benefit of aluminum framing systems is their relatively low cost and easy manufacturability. Aluminum, being an easily recyclable material, also offers sustainable features. However, from energy efficiency point of view, aluminum frames have lower thermal performance due to the very high thermal conductivity of aluminum. Fenestration systems constructed of aluminum alloys therefore have lower performance in terms of being effective barrier to energy transfer (heat loss or gain). Despite the lower energy performance, aluminum is the choice material for commercial framing systems and dominates the commercial/architectural fenestration market because of the reasons mentioned above. In addition, there is no other cost effective and energy efficient replacement material available to take place of aluminum in the commercial/architectural market. Hence it is imperative to improve the performance of aluminum framing system to improve the energy performance of commercial fenestration system and in turn reduce the energy consumption of commercial building and achieve zero energy building by 2025. The objective of this project was to develop high performance, energy efficient commercial fenestration framing systems, by investigating new technologies that would improve the thermal performance of aluminum frames, while maintaining their structural and life-cycle performance. The project targeted an improvement of over 30% (whole window performance) over conventional commercial framing technology by improving the performance of commercial framing systems.

  9. High Level Computational Chemistry Approaches to the Prediction...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    High Level Computational Chemistry Approaches to the Prediction of Energetic Properties of Chemical Hydrogen Storage Systems.

  10. TOWARD END-TO-END MODELING FOR NUCLEAR EXPLOSION MONITORING: SIMULATION OF UNDERGROUND NUCLEAR EXPLOSIONS AND EARTHQUAKES USING HYDRODYNAMIC AND ANELASTIC SIMULATIONS, HIGH-PERFORMANCE COMPUTING AND THREE-DIMENSIONAL EARTH MODELS

    SciTech Connect (OSTI)

    Rodgers, A; Vorobiev, O; Petersson, A; Sjogreen, B

    2009-07-06T23:59:59.000Z

    This paper describes new research being performed to improve understanding of seismic waves generated by underground nuclear explosions (UNE) by using full waveform simulation, high-performance computing and three-dimensional (3D) earth models. The goal of this effort is to develop an end-to-end modeling capability to cover the range of wave propagation required for nuclear explosion monitoring (NEM) from the buried nuclear device to the seismic sensor. The goal of this work is to improve understanding of the physical basis and prediction capabilities of seismic observables for NEM including source and path-propagation effects. We are pursuing research along three main thrusts. Firstly, we are modeling the non-linear hydrodynamic response of geologic materials to underground explosions in order to better understand how source emplacement conditions impact the seismic waves that emerge from the source region and are ultimately observed hundreds or thousands of kilometers away. Empirical evidence shows that the amplitudes and frequency content of seismic waves at all distances are strongly impacted by the physical properties of the source region (e.g. density, strength, porosity). To model the near-source shock-wave motions of an UNE, we use GEODYN, an Eulerian Godunov (finite volume) code incorporating thermodynamically consistent non-linear constitutive relations, including cavity formation, yielding, porous compaction, tensile failure, bulking and damage. In order to propagate motions to seismic distances we are developing a one-way coupling method to pass motions to WPP (a Cartesian anelastic finite difference code). Preliminary investigations of UNE's in canonical materials (granite, tuff and alluvium) confirm that emplacement conditions have a strong effect on seismic amplitudes and the generation of shear waves. Specifically, we find that motions from an explosion in high-strength, low-porosity granite have high compressional wave amplitudes and weak shear waves, while an explosion in low strength, high-porosity alluvium results in much weaker compressional waves and low-frequency compressional and shear waves of nearly equal amplitude. Further work will attempt to model available near-field seismic data from explosions conducted at NTS, where we have accurate characterization of the sub-surface from the wealth of geological and geophysical data from the former nuclear test program. Secondly, we are modeling seismic wave propagation with free-surface topography in WPP. We have model the October 9, 2006 and May 25, 2009 North Korean nuclear tests to investigate the impact of rugged topography on seismic waves. Preliminary results indicate that the topographic relief causes complexity in the direct P-waves that leads to azimuthally dependent behavior and the topographic gradient to the northeast, east and southeast of the presumed test locations generate stronger shear-waves, although each test gives a different pattern. Thirdly, we are modeling intermediate period motions (10-50 seconds) from earthquakes and explosions at regional distances. For these simulations we run SPECFEM3D{_}GLOBE (a spherical geometry spectral element code). We modeled broadband waveforms from well-characterized and well-observed events in the Middle East and central Asia, as well as the North Korean nuclear tests. 
For the recent North Korean test we found that the one-dimensional iasp91 model predicts the observed waveforms quite well in the band 20-50 seconds, while waveform fits for available 3D earth models are generally poor, with some exceptions. Interestingly 3D models can predict energy on the transverse component for an isotropic source presumably due to surface wave mode conversion and/or multipathing.

  11. Heterogeneous High Throughput Scientific Computing with APM X-Gene and Intel Xeon Phi

    E-Print Network [OSTI]

    David Abdurachmanov; Brian Bockelman; Peter Elmer; Giulio Eulisse; Robert Knight; Shahzad Muzaffar

    2014-10-10T23:59:59.000Z

    Electrical power requirements will be a constraint on the future growth of Distributed High Throughput Computing (DHTC) as used by High Energy Physics. Performance-per-watt is a critical metric for the evaluation of computer architectures for cost-efficient computing. Additionally, future performance growth will come from heterogeneous, many-core, and high computing density platforms with specialized processors. In this paper, we examine the Intel Xeon Phi Many Integrated Cores (MIC) co-processor and Applied Micro X-Gene ARMv8 64-bit low-power server system-on-a-chip (SoC) solutions for scientific computing applications. We report our experience on software porting, performance and energy efficiency and evaluate the potential for use of such technologies in the context of distributed computing systems such as the Worldwide LHC Computing Grid (WLCG).

  12. Computing at JLab

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    JLab computing areas: Accelerator Controls, CAD, CDEV, CODA, Computer Center, High Performance Computing, Scientific Computing.

  13. A high-performance workflow system for subsurface simulation

    SciTech Connect (OSTI)

    Freedman, Vicky L.; Chen, Xingyuan; Finsterle, Stefan A.; Freshley, Mark D.; Gorton, Ian; Gosink, Luke J.; Keating, Elizabeth; Lansing, Carina; Moeglein, William AM; Murray, Christopher J.; Pau, George Shu Heng; Porter, Ellen A.; Purohit, Sumit; Rockhold, Mark L.; Schuchardt, Karen L.; Sivaramakrishnan, Chandrika; Vesselinov, Velimir V.; Waichler, Scott R.

    2014-02-14T23:59:59.000Z

    Subsurface modeling applications typically neglect uncertainty in the conceptual models, past or future scenarios, and attribute most or all uncertainty to errors in model parameters. In this contribution, uncertainty in technetium-99 transport in a heterogeneous, deep vadose zone is explored with respect to the conceptual model using a next generation user environment called Akuna. Akuna provides a range of tools to manage environmental modeling projects, from managing simulation data to visualizing results from high-performance computational simulators. Core toolsets accessible through the user interface include model setup, grid generation, parameter estimation, and uncertainty quantification. The BC Cribs site at Hanford in southeastern Washington State is used to demonstrate Akuna capabilities. At the BC Cribs site, conceptualization of the system is highly uncertain because only sparse information is available for the geologic conceptual model, the physical and chemical properties of the sediments, and the history of waste disposal operations. Using the Akuna toolset to perform an analysis of conservative solute transport, significant prediction uncertainty in simulated concentrations is demonstrated by conceptual model variation. This demonstrates that conceptual model uncertainty is an important consideration in sparse data environments such as BC Cribs. It is also demonstrated that Akuna and the underlying toolset provides an integrated modeling environment that streamlines model setup, parameter optimization, and uncertainty analyses for high-performance computing applications.

  14. ARIES: Building America, High Performance Factory Built Housing...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    ARIES: Building America, High Performance Factory Built Housing - 2015 Peer Review. Presenter:...

  15. New rocket propellant and motor design offer high-performance...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    New rocket propellant and motor design offer high performance and safety. Scientists recently flight tested...

  16. Boosting Small Engines to High Performance - Boosting Systems...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Boosting Small Engines to High Performance - Boosting Systems and Combustion Development Methodology.

  17. Building America Webinar: High-Performance Enclosure Strategies...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Building America Webinar: High-Performance Enclosure Strategies, Part I: Unvented Roof Systems and Innovative Advanced Framing Strategies.

  18. USABC Development of Advanced High-Performance Batteries for...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    USABC Development of Advanced High-Performance Batteries for EV Applications, 2012 DOE Hydrogen and Fuel Cells...

  19. Possible Origin of Improved High Temperature Performance of Hydrotherm...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Possible Origin of Improved High Temperature Performance of Hydrothermally Aged CuBeta Zeolite Catalysts.

  20. Federal Leadership in High Performance and Sustainable Buildings...

    Broader source: Energy.gov (indexed) [DOE]

    Federal leadership in the design, construction, and operation of High-Performance and Sustainable Buildings.

  1. Materials and Modules for Low Cost, High Performance Fuel Cell...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Materials and Modules for Low Cost, High Performance Fuel Cell Humidifiers. Presented at the Department of Energy Fuel...

  2. Flexible Pillared Graphene-Paper Electrodes for High-Performance...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Flexible Pillared Graphene-Paper Electrodes for High-Performance Electrochemical Supercapacitors.

  3. Overcoming Processing Cost Barriers of High-Performance Lithium...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Overcoming Processing Cost Barriers of High-Performance Lithium-Ion Battery Electrodes, 2012 DOE Hydrogen...

  4. Building America Webinar: High-Performance Enclosure Strategies...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Building America Webinar: High-Performance Enclosure Strategies, Part I: Unvented Roof Systems and Innovative Advanced Framing Strategies.

  5. NSF/DOE Thermoelectric Partnership: High-Performance Thermoelectric...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    NSF/DOE Thermoelectric Partnership: High-Performance Thermoelectric Devices Based on Abundant Silicide Materials for Vehicle Waste Heat Recovery.

  6. Innovative High-Performance Deposition Technology for Low-Cost...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Innovative High-Performance Deposition Technology for Low-Cost Manufacturing of OLED Lighting.

  7. Rebuilding It Better: Greensburg, Kansas, High Performance Buildings...

    Office of Environmental Management (EM)

    Rebuilding It Better: Greensburg, Kansas, High Performance Buildings Meeting Energy Savings Goals (Brochure).

  8. Webinar: ENERGY STAR Hot Water Systems for High Performance Homes...

    Energy Savers [EERE]

    Webinar: ENERGY STAR Hot Water Systems for High Performance Homes. This presentation is from the Building America...

  9. Measured Performance of Energy-Efficient Computer Systems 

    E-Print Network [OSTI]

    Floyd, D. B.; Parker, D. S.

    1996-01-01T23:59:59.000Z

    The intent of this study is to explore the potential performance of both Energy Star computers/printers and add-on control devices individually, and their expected savings if collectively applied in a typical office building in a hot and humid...

  10. DOE High Performance Concentrator PV Project

    SciTech Connect (OSTI)

    McConnell, R.; Symko-Davies, M.

    2005-08-01T23:59:59.000Z

    Much in demand are next-generation photovoltaic (PV) technologies that can be used economically to make a large-scale impact on world electricity production. The U.S. Department of Energy (DOE) initiated the High-Performance Photovoltaic (HiPerf PV) Project to substantially increase the viability of PV for cost-competitive applications so that PV can contribute significantly to both our energy supply and environment. To accomplish such results, the National Center for Photovoltaics (NCPV) directs in-house and subcontracted research in high-performance polycrystalline thin-film and multijunction concentrator devices with the goal of enabling progress of high-efficiency technologies toward commercial-prototype products. We will describe the details of the subcontractor and in-house progress in exploring and accelerating pathways of III-V multijunction concentrator solar cells and systems toward their long-term goals. By 2020, we anticipate that this project will have demonstrated 33% system efficiency and a system price of $1.00/Wp for concentrator PV systems using III-V multijunction solar cells with efficiencies over 41%.

  11. High performance robotic traverse of desert terrain.

    SciTech Connect (OSTI)

    Whittaker, William (Carnegie Mellon University, Pittsburgh, PA)

    2004-09-01T23:59:59.000Z

    This report presents tentative innovations to enable unmanned vehicle guidance for a class of off-road traverse at sustained speeds greater than 30 miles per hour. Analyses and field trials suggest that even greater navigation speeds might be achieved. The performance calls for innovation in mapping, perception, planning and inertial-referenced stabilization of components, hosted aboard capable locomotion. The innovations are motivated by the challenge of autonomous ground vehicle traverse of 250 miles of desert terrain in less than 10 hours, averaging 30 miles per hour. GPS coverage is assumed to be available with localized blackouts. Terrain and vegetation are assumed to be akin to that of the Mojave Desert. This terrain is interlaced with networks of unimproved roads and trails, which are a key to achieving the high performance mapping, planning and navigation that is presented here.

  12. High Performance Piezoelectric Actuated Gimbal (HIERAX)

    SciTech Connect (OSTI)

    Charles Tschaggeny; Warren Jones; Eberhard Bamberg

    2007-04-01T23:59:59.000Z

    This paper presents a 3-axis gimbal whose three rotational axes are actuated by a novel drive system: linear piezoelectric motors whose linear output is converted to rotation by using drive disks. Advantages of this technology are: fast response, high accelerations, dither-free actuation and backlash-free positioning. The gimbal was developed to house a laser range finder for the purpose of tracking and guiding unmanned aerial vehicles during landing maneuvers. The tilt axis was built and the test results indicate excellent performance that meets design specifications.

  13. high-performance | netl.doe.gov

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

  14. High Performance Sustainable Building Design RM

    Office of Environmental Management (EM)

  15. Ultra-high resolution computed tomography imaging

    DOE Patents [OSTI]

    Paulus, Michael J. (Knoxville, TN); Sari-Sarraf, Hamed (Knoxville, TN); Tobin, Jr., Kenneth William (Harriman, TN); Gleason, Shaun S. (Knoxville, TN); Thomas, Jr., Clarence E. (Knoxville, TN)

    2002-01-01T23:59:59.000Z

    A method for ultra-high resolution computed tomography imaging, comprising the steps of: focusing a high energy particle beam, for example x-rays or gamma-rays, onto a target object; acquiring a 2-dimensional projection data set representative of the target object; generating a corrected projection data set by applying a deconvolution algorithm, having an experimentally determined transfer function, to the 2-dimensional data set; storing the corrected projection data set; incrementally rotating the target object through an angle of approximately 180°, and after each incremental rotation, repeating the radiating, acquiring, generating and storing steps; and, after the rotating step, applying a cone-beam algorithm, for example a modified tomographic reconstruction algorithm, to the corrected projection data sets to generate a 3-dimensional image. The size of the spot focus of the beam is reduced to not greater than approximately 1 micron, and even to not greater than approximately 0.5 microns.

  16. Large Scale Computing and Storage Requirements for High Energy Physics

    SciTech Connect (OSTI)

    Gerber, Richard A.; Wasserman, Harvey

    2010-11-24T23:59:59.000Z

    The National Energy Research Scientific Computing Center (NERSC) is the leading scientific computing facility for the Department of Energy's Office of Science, providing high-performance computing (HPC) resources to more than 3,000 researchers working on about 400 projects. NERSC provides large-scale computing resources and, crucially, the support and expertise needed for scientists to make effective use of them. In November 2009, NERSC, DOE's Office of Advanced Scientific Computing Research (ASCR), and DOE's Office of High Energy Physics (HEP) held a workshop to characterize the HPC resources needed at NERSC to support HEP research through the next three to five years. The effort is part of NERSC's legacy of anticipating users needs and deploying resources to meet those demands. The workshop revealed several key points, in addition to achieving its goal of collecting and characterizing computing requirements. The chief findings: (1) Science teams need access to a significant increase in computational resources to meet their research goals; (2) Research teams need to be able to read, write, transfer, store online, archive, analyze, and share huge volumes of data; (3) Science teams need guidance and support to implement their codes on future architectures; and (4) Projects need predictable, rapid turnaround of their computational jobs to meet mission-critical time constraints. This report expands upon these key points and includes others. It also presents a number of case studies as representative of the research conducted within HEP. Workshop participants were asked to codify their requirements in this case study format, summarizing their science goals, methods of solution, current and three-to-five year computing requirements, and software and support needs. Participants were also asked to describe their strategy for computing in the highly parallel, multi-core environment that is expected to dominate HPC architectures over the next few years. The report includes a section that describes efforts already underway or planned at NERSC that address requirements collected at the workshop. NERSC has many initiatives in progress that address key workshop findings and are aligned with NERSC's strategic plans.

  17. High-performance laboratories and cleanrooms

    SciTech Connect (OSTI)

    Tschudi, William; Sartor, Dale; Mills, Evan; Xu, Tengfang

    2002-07-01T23:59:59.000Z

    The California Energy Commission sponsored this roadmap to guide energy efficiency research and deployment for high performance cleanrooms and laboratories. Industries and institutions utilizing these building types (termed high-tech buildings) have played an important part in the vitality of the California economy. This roadmap's key objective is to present a multi-year agenda to prioritize and coordinate research efforts. It also addresses delivery mechanisms to get the research products into the market. Because of the importance to the California economy, it is appropriate and important for California to take the lead in assessing the energy efficiency research needs, opportunities, and priorities for this market. In addition to the importance to California's economy, energy demand for this market segment is large and growing (estimated at 9400 GWh for 1996, Mills et al. 1996). With their 24-hour continuous operation, high tech facilities are a major contributor to the peak electrical demand. Laboratories and cleanrooms constitute the high tech building market, and although each building type has its unique features, they are similar in that they are extremely energy intensive, involve special environmental considerations, have very high ventilation requirements, and are subject to regulations--primarily safety driven--that tend to have adverse energy implications. High-tech buildings have largely been overlooked in past energy efficiency research. Many industries and institutions utilize laboratories and cleanrooms. As illustrated, there are many industries operating cleanrooms in California. These include semiconductor manufacturing, semiconductor suppliers, pharmaceutical, biotechnology, disk drive manufacturing, flat panel displays, automotive, aerospace, food, hospitals, medical devices, universities, and federal research facilities.

  18. Titanium: A High-Performance Java Dialect Kathy Yelick, Luigi Semenzato, Geoff Pike, Carleton Miyamoto,

    E-Print Network [OSTI]

    Aiken, Alex

    Titanium: A High-Performance Java Dialect Kathy Yelick, Luigi Semenzato, Geoff Pike, Carleton Laboratory Abstract Titanium is a language and system for high-performance parallel scientific computing. Titanium uses Java as its base, thereby leveraging the advantages of that language and allowing us to focus

  19. Titanium: A High-Performance Java Dialect* Kathy Yelick, Luigi Semenzato, Geoff Pike, Carleton Miyamoto,

    E-Print Network [OSTI]

    Krishnamurthy, Arvind

    Titanium: A High-Performance Java Dialect* Kathy Yelick, Luigi Semenzato, Geoff Pike ... National Laboratory. Abstract: Titanium is a language and system for high-performance parallel scientific computing. Titanium uses Java as its base, thereby leveraging the advantages of that language and allowing

  20. Titanium: A High-Performance Java Dialect* Kathy Yelick, Luigi Semenzato, Geoff Pike, Carleton Miyamoto,

    E-Print Network [OSTI]

    Titanium: A High-Performance Java Dialect* Kathy Yelick, Luigi Semenzato, Geoff Pike, Carleton Laboratory Abstract Titanium is a language and system for high-performance parallel scientific computing. Titanium uses Java as its base, thereby leveraging the advantages of that language and allowing us to focus

  1. 2003 Workshop on High Performance Switching and Routing, M. Atiquzzaman, Univ. of Oklahoma, June 2003.

    E-Print Network [OSTI]

    Atiquzzaman, Mohammed

    2003 Workshop on High Performance Switching and Routing, M. Atiquzzaman, Univ. of Oklahoma, June 2003. (... Fu) School of Computer Science, University of Oklahoma, Norman, OK 73019-6151. Email: atiq@ou.edu

  2. Sandia Energy - High-Resolution Computational Algorithms for...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    High-Resolution Computational Algorithms for Simulating Offshore Wind Farms

  3. Computational Fluid Dynamics Framework for Turbine Biological Performance Assessment

    SciTech Connect (OSTI)

    Richmond, Marshall C.; Serkowski, John A.; Carlson, Thomas J.; Ebner, Laurie L.; Sick, Mirjam; Cada, G. F.

    2011-05-04T23:59:59.000Z

    In this paper, a method for turbine biological performance assessment is introduced to bridge the gap between field and laboratory studies on fish injury and turbine design. Using this method, a suite of biological performance indicators is computed based on simulated data from a computational fluid dynamics (CFD) model of a proposed turbine design. Each performance indicator is a measure of the probability of exposure to a certain dose of an injury mechanism. If the relationship between the dose of an injury mechanism and frequency of injury (dose-response) is known from laboratory or field studies, the likelihood of fish injury for a turbine design can be computed from the performance indicator. By comparing the values of the indicators from various turbine designs, the engineer can identify the more-promising designs. Discussion here is focused on Kaplan-type turbines, although the method could be extended to other designs. Following the description of the general methodology, we will present sample risk assessment calculations based on CFD data from a model of the John Day Dam on the Columbia River in the USA.
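
    The dose/dose-response combination described above reduces to a small calculation once the exposure probabilities are extracted from the CFD results. The following minimal sketch (all probabilities and response values are invented for illustration; this is not the authors' code) shows how one injury-mechanism indicator would be folded with a dose-response curve:

    ```python
    # Hypothetical sketch: combine an exposure distribution over dose bins
    # (e.g., from CFD particle tracking) with an assumed dose-response curve
    # to estimate the likelihood of fish injury for one injury mechanism.
    import numpy as np

    # Fraction of simulated fish trajectories exposed to each dose bin of one
    # injury mechanism (e.g., shear strain rate) -- the "performance indicator"
    exposure_prob = np.array([0.70, 0.15, 0.08, 0.05, 0.02])      # sums to 1.0

    # Assumed dose-response curve: probability of injury given exposure to each bin
    # (in practice taken from laboratory or field studies)
    injury_given_dose = np.array([0.00, 0.01, 0.05, 0.15, 0.40])

    # Overall injury likelihood for this turbine design and this mechanism
    injury_probability = float(np.sum(exposure_prob * injury_given_dose))
    print(f"Estimated injury probability: {injury_probability:.4f}")
    ```

    Comparing this number across candidate turbine designs is what lets the engineer identify the more-promising ones, as the abstract describes.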

  4. Mathematical, Information and Computational Sciences

    E-Print Network [OSTI]

    Mathematical, Information and Computational Sciences. High Performance Computing, Collaboration and Networks -- Critical for DOE Science

  5. High-performance, high-volume fly ash concrete

    SciTech Connect (OSTI)

    NONE

    2008-01-15T23:59:59.000Z

    This booklet offers the construction professional an in-depth description of the use of high-volume fly ash in concrete. Emphasis is placed on the need for increased utilization of coal-fired power plant byproducts in lieu of Portland cement materials to eliminate increased CO2 emissions during the production of cement. Also addressed is the dramatic increase in concrete performance with the use of 50+ percent fly ash volume. The booklet contains numerous color and black and white photos, charts of test results, mixtures and comparisons, and several HVFA case studies.

  6. High performance internal reforming unit for high temperature fuel cells

    DOE Patents [OSTI]

    Ma, Zhiwen (Sandy Hook, CT); Venkataraman, Ramakrishnan (New Milford, CT); Novacco, Lawrence J. (Brookfield, CT)

    2008-10-07T23:59:59.000Z

    A fuel reformer having an enclosure with first and second opposing surfaces, a sidewall connecting the first and second opposing surfaces, and an inlet port and an outlet port in the sidewall. A plate assembly supporting a catalyst and baffles are also disposed in the enclosure. A main baffle extends into the enclosure from a point of the sidewall between the inlet and outlet ports. The main baffle cooperates with the enclosure and the plate assembly to establish a path for the flow of fuel gas through the reformer from the inlet port to the outlet port. At least a first directing baffle extends in the enclosure from one of the sidewall and the main baffle and cooperates with the plate assembly and the enclosure to alter the gas flow path. A graded catalyst loading pattern has been defined to optimize thermal management of internal reforming high temperature fuel cells and thereby achieve high cell performance.

  7. Building America Webinar: High Performance Space Conditioning...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    EST The webinar will continue our series on strategies to improve the performance of HVAC systems for low load homes and home performance retrofits. Presenters and specific...

  8. Towards High Performance Processing In Modern Java Based Control Systems

    E-Print Network [OSTI]

    Misiowiec, M; Buttner, M

    2011-01-01T23:59:59.000Z

    CERN controls software is often developed on Java foundation. Some systems carry out a combination of data, network and processor intensive tasks within strict time limits. Hence, there is a demand for high performing, quasi real time solutions. Extensive prototyping of the new CERN monitoring and alarm software required us to address such expectations. The system must handle dozens of thousands of data samples every second, along its three tiers, applying complex computations throughout. To accomplish the goal, a deep understanding of multithreading, memory management and interprocess communication was required. There are unexpected traps hidden behind an excessive use of 64 bit memory or severe impact on the processing flow of modern garbage collectors. Tuning JVM configuration significantly affects the execution of the code. Even more important is the amount of threads and the data structures used between them. Accurately dividing work into independent tasks might boost system performance. Thorough profili...

  9. Science-driven system architecture: A new process for leadership class computing

    E-Print Network [OSTI]

    2004-01-01T23:59:59.000Z

    Hitachi SR8000-F1, High Performance Computing in Science and ... investment in high performance computing, initiate a new ... offerings, the high performance computing community has ...

  10. Performing an allreduce operation on a plurality of compute nodes of a parallel computer

    DOE Patents [OSTI]

    Faraj, Ahmad

    2013-02-12T23:59:59.000Z

    Methods, apparatus, and products are disclosed for performing an allreduce operation on a plurality of compute nodes of a parallel computer, each node including at least two processing cores, that include: performing, for each node, a local reduction operation using allreduce contribution data for the cores of that node, yielding, for each node, a local reduction result for one or more representative cores for that node; establishing one or more logical rings among the nodes, each logical ring including only one of the representative cores from each node; performing, for each logical ring, a global allreduce operation using the local reduction result for the representative cores included in that logical ring, yielding a global allreduce result for each representative core included in that logical ring; and performing, for each node, a local broadcast operation using the global allreduce results for each representative core on that node.
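
    The claim language maps onto a familiar hierarchical allreduce pattern: reduce within each node, allreduce across one representative per node, then broadcast back locally. The sketch below is not the patented implementation; it is a generic mpi4py illustration of that three-phase structure, using a shared-memory split as a stand-in for "the cores of that node":

    ```python
    # Generic three-phase allreduce sketch (not the patented method):
    #   1) local reduction among ranks on the same node,
    #   2) allreduce among one representative rank per node,
    #   3) local broadcast of the global result back to the node's ranks.
    from mpi4py import MPI

    world = MPI.COMM_WORLD
    contribution = world.Get_rank() + 1.0          # each rank's allreduce contribution

    # Group ranks that share a node (approximates "cores of that node")
    node_comm = world.Split_type(MPI.COMM_TYPE_SHARED, key=world.Get_rank())

    # Phase 1: local reduction; rank 0 of each node acts as the representative core
    local_sum = node_comm.reduce(contribution, op=MPI.SUM, root=0)

    # Phase 2: allreduce among representatives only
    leader_comm = world.Split(color=0 if node_comm.Get_rank() == 0 else 1,
                              key=world.Get_rank())
    if node_comm.Get_rank() == 0:
        global_sum = leader_comm.allreduce(local_sum, op=MPI.SUM)
    else:
        global_sum = None

    # Phase 3: local broadcast so every rank ends up with the global result
    global_sum = node_comm.bcast(global_sum, root=0)
    print(f"rank {world.Get_rank()}: global sum = {global_sum}")
    ```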

  11. High-performance commercial building systems

    E-Print Network [OSTI]

    Selkowitz, Stephen

    2003-01-01T23:59:59.000Z

    IAQ, with possible improvements in attendance, health and learning in schools and performance and productivity

  12. High Performance MPI on IBM 12x InfiniBand Architecture Abhinav Vishnu Brad Benton

    E-Print Network [OSTI]

    Panda, Dhabaleswar K.

    High Performance MPI on IBM 12x InfiniBand Architecture. Abhinav Vishnu, Brad Benton, Dhabaleswar K. Panda. Network Based Computing Lab, Department of Computer Science and Engineering, The Ohio State University, {vishnu, panda}@cse.ohio-state.edu; IBM Austin, 11501 Burnet Road, Austin, TX 78758, {brad.benton}@us.ibm

  13. Project Profile: Development and Performance Evaluation of High...

    Energy Savers [EERE]

    Project Profile: Development and Performance Evaluation of High Temperature Concrete for Thermal Energy Storage for Solar Power Generation

  14. High-performance commercial building systems

    SciTech Connect (OSTI)

    Selkowitz, Stephen

    2003-10-01T23:59:59.000Z

    This report summarizes key technical accomplishments resulting from the three year PIER-funded R&D program, ''High Performance Commercial Building Systems'' (HPCBS). The program targets the commercial building sector in California, an end-use sector that accounts for about one-third of all California electricity consumption and an even larger fraction of peak demand, at a cost of over $10B/year. Commercial buildings also have a major impact on occupant health, comfort and productivity. Building design and operations practices that influence energy use are deeply engrained in a fragmented, risk-averse industry that is slow to change. Although California's aggressive standards efforts have resulted in new buildings designed to use less energy than those constructed 20 years ago, the actual savings realized are still well below technical and economic potentials. The broad goal of this program is to develop and deploy a set of energy-saving technologies, strategies, and techniques, and improve processes for designing, commissioning, and operating commercial buildings, while improving health, comfort, and performance of occupants, all in a manner consistent with sound economic investment practices. Results are to be broadly applicable to the commercial sector for different building sizes and types, e.g. offices and schools, for different classes of ownership, both public and private, and for owner-occupied as well as speculative buildings. The program aims to facilitate significant electricity use savings in the California commercial sector by 2015, while assuring that these savings are affordable and promote high quality indoor environments. The five linked technical program elements contain 14 projects with 41 distinct R&D tasks. Collectively they form a comprehensive Research, Development, and Demonstration (RD&D) program with the potential to capture large savings in the commercial building sector, providing significant economic benefits to building owners and health and performance benefits to occupants. At the same time this program can strengthen the growing energy efficiency industry in California by providing new jobs and growth opportunities for companies providing the technology, systems, software, design, and building services to the commercial sector. The broad objectives across all five program elements were: (1) To develop and deploy an integrated set of tools and techniques to support the design and operation of energy-efficient commercial buildings; (2) To develop open software specifications for a building data model that will support the interoperability of these tools throughout the building life-cycle; (3) To create new technology options (hardware and controls) for substantially reducing controllable lighting, envelope, and cooling loads in buildings; (4) To create and implement a new generation of diagnostic techniques so that commissioning and efficient building operations can be accomplished reliably and cost effectively and provide sustained energy savings; (5) To enhance the health, comfort and performance of building occupants. (6) To provide the information technology infrastructure for owners to minimize their energy costs and manage their energy information in a manner that creates added value for their buildings as the commercial sector transitions to an era of deregulated utility markets, distributed generation, and changing business practices. Our ultimate goal is for our R&D effort to have measurable market impact. 
This requires that the research tasks be carried out with a variety of connections to key market actors or trends so that they are recognized as relevant and useful and can be adopted by expected users. While some of this activity is directly integrated into our research tasks, the handoff from ''market-connected R&D'' to ''field deployment'' is still an art as well as a science and in many areas requires resources and a timeframe well beyond the scope of this PIER research program. The TAGs, PAC and other industry partners have assisted directly in this effort

  15. The University of New Mexico ELECTRICAL & COMPUTER

    E-Print Network [OSTI]

    New Mexico, University of

    ... power microwaves, high performance computing, and signal processing. During the year 2000 we received ... industry. Also, 3 new research laboratories were established: the High Performance Computing Laboratory ... Computational EM Laboratory ... PURSUE Research Program ... Pulsed ...

  16. Performing an allreduce operation on a plurality of compute nodes of a parallel computer

    DOE Patents [OSTI]

    Faraj, Ahmad

    2013-07-09T23:59:59.000Z

    Methods, apparatus, and products are disclosed for performing an allreduce operation on a plurality of compute nodes of a parallel computer, each node including at least two processing cores, that include: establishing, for each node, a plurality of logical rings, each ring including a different set of at least one core on that node, each ring including the cores on at least two of the nodes; iteratively for each node: assigning each core of that node to one of the rings established for that node to which the core has not previously been assigned, and performing, for each ring for that node, a global allreduce operation using contribution data for the cores assigned to that ring or any global allreduce results from previous global allreduce operations, yielding current global allreduce results for each core; and performing, for each node, a local allreduce operation using the global allreduce results.

  17. A Review of High Occupancy Vehicle (HOV) Lane Performance and...

    Open Energy Info (EERE)

    A Review of High Occupancy Vehicle (HOV) Lane Performance and Policy Options in the United States: Final Report

  18. Enhanced High Temperature Performance of NOx Storage/Reduction...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Meeting ace026peden2012o.pdf More Documents & Publications Enhanced High Temperature Performance of NOx StorageReduction (NSR) Materials Enhanced High and Low...

  19. A Survey of High-Quality Computational Libraries and their Impactin Science and Engineering Applications

    SciTech Connect (OSTI)

    Drummond, L.A.; Hernandez, V.; Marques, O.; Roman, J.E.; Vidal, V.

    2004-09-20T23:59:59.000Z

    Recently, a number of important scientific and engineering problems have been successfully studied and solved by means of computational modeling and simulation. Many of these computational models and simulations benefited from the use of available software tools and libraries to achieve high performance and portability. In this article, we present a reference matrix of the performance of robust, reliable and widely used tools mapped to scientific and engineering applications that use them. We aim at regularly maintaining and disseminating this matrix to the computational science community. This matrix will contain information on state-of-the-art computational tools, their applications and their use.

  20. Amy W. Apon, Ph.D. Professor and Chair, Computer Science Division

    E-Print Network [OSTI]

    Duchowski, Andrew T.

    ... performance computing, impact of high performance computing on research competitiveness, sustainable funding ... Division of Computer Science, Clemson University; 2008-2011 Director, Arkansas High Performance Computing Center; 2004-2008 Director of High Performance Computing, University of Arkansas; 2007-2011 Professor ...

  1. Symmetric Active/Active High Availability for High-Performance...

    E-Print Network [OSTI]

    Engelmann, Christian

    Symmetric Active/Active High Availability for High-Performance ... HPC system reliability and availability is decreasing rapidly ... bit errors in ECC memory ... High availability as well as high performance is ...

  2. High-Performance Secure Database Access Technologies for HEP Grids

    SciTech Connect (OSTI)

    Matthew Vranicar; John Weicher

    2006-04-17T23:59:59.000Z

    The Large Hadron Collider (LHC) at the CERN Laboratory will become the largest scientific instrument in the world when it starts operations in 2007. Large Scale Analysis Computer Systems (computational grids) are required to extract rare signals of new physics from petabytes of LHC detector data. In addition to file-based event data, LHC data processing applications require access to large amounts of data in relational databases: detector conditions, calibrations, etc. U.S. high energy physicists demand efficient performance of grid computing applications in LHC physics research where world-wide remote participation is vital to their success. To empower physicists with data-intensive analysis capabilities a whole hyperinfrastructure of distributed databases cross-cuts a multi-tier hierarchy of computational grids. The crosscutting allows separation of concerns across both the global environment of a federation of computational grids and the local environment of a physicist’s computer used for analysis. Very few efforts are on-going in the area of database and grid integration research. Most of these are outside of the U.S. and rely on traditional approaches to secure database access via an extraneous security layer separate from the database system core, preventing efficient data transfers. Our findings are shared by the Database Access and Integration Services Working Group of the Global Grid Forum, who states that "Research and development activities relating to the Grid have generally focused on applications where data is stored in files. However, in many scientific and commercial domains, database management systems have a central role in data storage, access, organization, authorization, etc, for numerous applications.” There is a clear opportunity for a technological breakthrough, requiring innovative steps to provide high-performance secure database access technologies for grid computing. We believe that an innovative database architecture where the secure authorization is pushed into the database engine will eliminate inefficient data transfer bottlenecks. Furthermore, traditionally separated database and security layers provide an extra vulnerability, leaving a weak clear-text password authorization as the only protection on the database core systems. Due to the legacy limitations of the systems’ security models, the allowed passwords often can not even comply with the DOE password guideline requirements. We see an opportunity for the tight integration of the secure authorization layer with the database server engine resulting in both improved performance and improved security. Phase I has focused on the development of a proof-of-concept prototype using Argonne National Laboratory’s (ANL) Argonne Tandem-Linac Accelerator System (ATLAS) project as a test scenario. By developing a grid-security enabled version of the ATLAS project’s current relation database solution, MySQL, PIOCON Technologies aims to offer a more efficient solution to secure database access.

  3. Open Problems in Network-aware Data Management in Exa-scale Computing and Terabit Networking Era

    E-Print Network [OSTI]

    Balman, Mehmet

    2014-01-01T23:59:59.000Z

    1st Workshop on High-Performance Computing meets Databases ... (Provisioning, High Performance Computing, Networking, Storage ... Conference for High Performance Computing, Networking, ...

  4. AN EMPIRICAL INVESTIGATION INTO THE MODERATING RELATIONSHIP OF COMPUTER SELF-EFFICACY ON PERFORMANCE IN A COMPUTER-SUPPORTED TASK

    E-Print Network [OSTI]

    Aguirre Urreta, Miguel Ignacio

    2008-01-01T23:59:59.000Z

    Computer Self-Efficacy has been shown to be a critical construct in a number of research areas within the Information Systems literature, most notably training, technology adoption, and performance in computer-related tasks. Attention has been...

  5. High-reliability computing for the smarter planet

    SciTech Connect (OSTI)

    Quinn, Heather M [Los Alamos National Laboratory; Graham, Paul [Los Alamos National Laboratory; Manuzzato, Andrea [UNIV OF PADOVA; Dehon, Andre [UNIV OF PENN; Carter, Nicholas [INTEL CORPORATION

    2010-01-01T23:59:59.000Z

    The geometric rate of improvement of transistor size and integrated circuit performance, known as Moore's Law, has been an engine of growth for our economy, enabling new products and services, creating new value and wealth, increasing safety, and removing menial tasks from our daily lives. Affordable, highly integrated components have enabled both life-saving technologies and rich entertainment applications. Anti-lock brakes, insulin monitors, and GPS-enabled emergency response systems save lives. Cell phones, internet appliances, virtual worlds, realistic video games, and mp3 players enrich our lives and connect us together. Over the past 40 years of silicon scaling, the increasing capabilities of inexpensive computation have transformed our society through automation and ubiquitous communications. In this paper, we will present the concept of the smarter planet, how reliability failures affect current systems, and methods that can be used to increase the reliable adoption of new automation in the future. We will illustrate these issues using a number of different electronic devices in a couple of different scenarios. Recently IBM has been presenting the idea of a 'smarter planet.' In smarter planet documents, IBM discusses increased computer automation of roadways, banking, healthcare, and infrastructure, as automation could create more efficient systems. A necessary component of the smarter planet concept is to ensure that these new systems have very high reliability. Even extremely rare reliability problems can easily escalate to problematic scenarios when implemented at very large scales. For life-critical systems, such as automobiles, infrastructure, medical implantables, and avionic systems, unmitigated failures could be dangerous. As more automation moves into these types of critical systems, reliability failures will need to be managed. As computer automation continues to increase in our society, greater radiation reliability becomes necessary. Already critical infrastructure is failing too frequently. In this paper, we will introduce the Cross-Layer Reliability concept for designing more reliable computer systems.

  6. High-Resolution Computed Tomography Study of the Cranium of

    E-Print Network [OSTI]

    Allman, John M.

    High-Resolution Computed Tomography Study of the Cranium of a Fossil Anthropoid Primate ... of these characteristics may have important implications for brain evolution. Here computed tomography is used to examine ... in the evolutionary development of anthropoids did these characteristics evolve? We recently used X-ray computed ...

  7. High performance phenolic piping for oilfield applications

    SciTech Connect (OSTI)

    Folkers, J.L. [Ameron International, Burkburnett, TX (United States); Friedrich, R.S.; Fortune, M. [Ameron International, South Gate, CA (United States)

    1997-08-01T23:59:59.000Z

    The performance advantages of phenolic resins have been enticing for composites manufacturers and users for many years. The use of these materials has been limited, however, by the process, handling and assembly difficulties they present. This paper introduces an innovative modification which has allowed the development of a filament wound piping system for oilfield applications which previously had been beyond the performance envelope of fiberglass pipe. Improvements in temperature resistance and response to steam exposure, as compared to conventional epoxy products, are of particular benefit. Fabrication innovations are also included which can be used where impact resistance or fire performance is needed.

  8. PERFORMANCE-RELATED SPECIAL PROVISION FOR HIGH PERFORMANCE CONCRETE MIX DESIGNS FOR CONCRETE SUPERSTRUCTURE (Tollway)

    E-Print Network [OSTI]

    PERFORMANCE-RELATED SPECIAL PROVISION FOR HIGH PERFORMANCE CONCRETE MIX DESIGNS FOR CONCRETE ... of designing and furnishing high performance portland cement concrete for special applications to the decks ... the Illinois Tollway with a methodology to assure high quality concrete with reduced shrinkage potential, while ...

  9. High Performance Electrolyzers for Hybrid Thermochemical Cycles

    SciTech Connect (OSTI)

    Dr. John W. Weidner

    2009-05-10T23:59:59.000Z

    Extensive electrolyzer testing was performed at the University of South Carolina (USC). Emphasis was given to understanding water transport under various operating (i.e., temperature, membrane pressure differential and current density) and design (i.e., membrane thickness) conditions when it became apparent that water transport plays a deciding role in cell voltage. A mathematical model was developed to further understand the mechanisms of water and SO2 transport, and to predict the effect of operating and design parameters on electrolyzer performance.

  10. Designing Accelerator-Based Distributed Systems for High Performance M. Mustafa Rafique, Ali R. Butt

    E-Print Network [OSTI]

    Butt, Ali R.

    ... yielding highly power-efficient and cost-efficient designs, with performance exceeding 100 Gflops [1 ... Designing Accelerator-Based Distributed Systems for High Performance. M. Mustafa Rafique, Ali R. ... general-purpose cores (e.g. x86, PowerPC) and computational accelerators (e.g. SIMD processors and GPUs ...

  11. Theories and Techniques for Efficient High-End Computing Dissertation submitted to the Faculty of the

    E-Print Network [OSTI]

    Ge, Rong

    ... megawatts of electric power but deliver only 10-15% of peak system performance for applications ... in power scalable systems. This model quantifies the impact of processor frequency and power on instruction ... High End Computing, Performance Modeling and Analysis, Power-Performance Efficiency, Power ...

  12. This document is an author-formatted work. The definitive version for citation appears as: R. F. DeMara and P. J. Wilder, "A Taxonomy of High Performance Computer Architectures for Uniform Treatment of Multiproces-

    E-Print Network [OSTI]

    DeMara, Ronald F.

    ... characteristics. 1 Dept. of Electrical and Computer Engr., Univ. of Central Florida, Orlando FL 32816-2450. E ... memory transactions per cycle. However, they require physical memory capacities ranging from NM up ... of computer architectures presented in this paper that was developed at the University of Central Florida ...

  13. System Identification and Modelling of a High Performance Hydraulic Actuator

    E-Print Network [OSTI]

    Hayward, Vincent

    System Identification and Modelling of a High Performance Hydraulic Actuator. Benoit Boulet, Laeeque ... with the experimental identification and modelling of the nonlinear dynamics of a high performance hydraulic actuator. The actuator properties and performance are also discussed. 1 Introduction: Hydraulic actuation used to be ...

  14. Corporate Information & Computing Services

    E-Print Network [OSTI]

    Martin, Stephen John

    Corporate Information & Computing Services, High Performance Computing Report, March 2008, Author ... The University of Sheffield's High Performance Computing (HPC) facility is provided by CiCS. It consists of ... both Graduate Students and Staff.

  15. High Performance Valve Materials | Department of Energy

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site


  16. Presented by High-Performance Visualization of

    E-Print Network [OSTI]

    Bhaduri_HPC_GIS_Viz_SC10. Data courtesy of Center for Space Research, UT-Austin. High-resolution 3D view example: beyond desktop capabilities. Shuttle Radar Topography Mission (SRTM) dataset, 90 m cell size.

  17. High energy neutron Computed Tomography developed

    E-Print Network [OSTI]

    be observed behind high-density materials, such as depleted uranium or tungsten. Comparison of the high (bottom half) and foam (center teeth) phantom could be viewed through 76 mm of depleted uranium. Some ~ 3

  18. High Performance Green LEDs by Homoepitaxial

    SciTech Connect (OSTI)

    Wetzel, Christian; Schubert, E Fred

    2009-11-22T23:59:59.000Z

    This work's objective was the development of processes to double or triple the light output power from green and deep green (525 - 555 nm) AlGaInN light emitting diode (LED) dies within 3 years in reference to the Lumileds Luxeon II. The project paid particular effort to all aspects of the internal generation efficiency of light. LEDs in this spectral region show the highest potential for significant performance boosts and enable the realization of phosphor-free white LEDs comprised by red-green-blue LED modules. Such modules will perform at and outperform the efficacy target projections for white-light LED systems in the Department of Energy's accelerated roadmap of the SSL initiative.

  19. High performance, close-spaced thermionic converters

    SciTech Connect (OSTI)

    Dick, R.S.; Britt, E.J.; Fitzpatrick, G.O.; McVey, J.B.

    1983-08-01T23:59:59.000Z

    Near ideal performance in a Thermionic Energy Converter (TEC) can be obtained using extremely small (< 10 microns) interelectrode spacings. Previous efforts to build such converters have encountered engineering problems. A new type of converter, called SAVTEC (for Self-Adjusting, Versatile Thermionic Energy Converter), has been developed at Rasor Associates, Inc., as a practical way to achieve small spacings. It has been demonstrated to deliver improved performance over conventional, ignited-mode converters. A series of individual SAVTECs have been built and tested. Two general configurations were built: in the first, a single emitter support lead (0.25 mm wire) passes through a hole in the center of the collector, with the emitter being welded to it. In the second, three smaller wires replace the center wire and are welded to the emitter perimeter. These converters have shown reliable, temperature controlled spacings of the emitter and collector. Reproducible spacings of 10 microns (0.4 mils) were achieved on several converters. This paper presents details of SAVTEC converter construction and performance, including volt-ampere curves.

  20. High intensity performance of the Brookhaven AGS

    SciTech Connect (OSTI)

    Brennan, J.M.; Roser, T.

    1996-07-01T23:59:59.000Z

    Experience and results from recent high intensity proton running periods of the Brookhaven AGS, during which a record intensity for a proton synchrotron of 6.3 x 10^13 protons/pulse was reached, are presented. This high beam intensity allowed for the simultaneous operation of three high precision rare kaon decay experiments. The record beam intensities were achieved after the 1.5 GeV Booster was commissioned and a transition jump system, a powerful transverse damper, and an rf upgrade in the AGS were completed. Recently, even higher intensity proton synchrotrons have been studied for neutron spallation sources or as a proton driver for a muon collider. Implications of the AGS experience for these proposals, as well as possible future upgrades for the AGS, are discussed.

  1. High Performance Window Attachments | Department of Energy

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site


  2. High Performance and Sustainable Buildings Guidance

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site


  3. Performing a global barrier operation in a parallel computer

    DOE Patents [OSTI]

    Archer, Charles J; Blocksome, Michael A; Ratterman, Joseph D; Smith, Brian E

    2014-12-09T23:59:59.000Z

    Executing computing tasks on a parallel computer that includes compute nodes coupled for data communications, where each compute node executes tasks, with one task on each compute node designated as a master task, including: for each task on each compute node until all master tasks have joined a global barrier: determining whether the task is a master task; if the task is not a master task, joining a single local barrier; if the task is a master task, joining the global barrier and the single local barrier only after all other tasks on the compute node have joined the single local barrier.
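
    A minimal single-process sketch of the barrier structure this abstract describes is shown below, using Python threads to stand in for tasks: non-master tasks check in and wait on a node-local barrier, while the master joins the global barrier only after every other local task has checked in. This is an illustration of the idea, not the patented implementation; all names and sizes are invented.

    ```python
    # Sketch of a two-level barrier: threads stand in for tasks, one master per "node".
    # Non-masters check in (semaphore) and wait on the local barrier; the master waits
    # for all local check-ins, joins the global barrier, then joins the local barrier,
    # releasing everyone only after all masters have reached the global barrier.
    import threading

    TASKS_PER_NODE = 4
    NUM_NODES = 2

    global_barrier = threading.Barrier(NUM_NODES)                 # masters only

    def make_node(node_id):
        local_barrier = threading.Barrier(TASKS_PER_NODE)         # all tasks on the node
        checked_in = threading.Semaphore(0)

        def task(task_id, is_master):
            if not is_master:
                checked_in.release()          # tell the master this task has arrived
                local_barrier.wait()          # join the single local barrier
            else:
                for _ in range(TASKS_PER_NODE - 1):
                    checked_in.acquire()      # wait until all other local tasks arrived
                global_barrier.wait()         # join the global barrier (masters only)
                local_barrier.wait()          # finally join the local barrier
            print(f"node {node_id} task {task_id} passed the barrier")

        return [threading.Thread(target=task, args=(t, t == 0)) for t in range(TASKS_PER_NODE)]

    threads = [th for n in range(NUM_NODES) for th in make_node(n)]
    for th in threads:
        th.start()
    for th in threads:
        th.join()
    ```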

  4. High-performance commercial building facades

    SciTech Connect (OSTI)

    Lee, Eleanor; Selkowitz, Stephen; Bazjanac, Vladimir; Inkarojrit, Vorapat; Kohler, Christian

    2002-06-01T23:59:59.000Z

    This study focuses on advanced building facades that use daylighting, sun control, ventilation systems, and dynamic systems. A quick perusal of the leading architectural magazines, or a discussion in most architectural firms today will eventually lead to mention of some of the innovative new buildings that are being constructed with all-glass facades. Most of these buildings are appearing in Europe, although interestingly U.S. A/E firms often have a leading role in their design. This ''emerging technology'' of heavily glazed facades is often associated with buildings whose design goals include energy efficiency, sustainability, and a ''green'' image. While there are a number of new books on the subject with impressive photos and drawings, there is little critical examination of the actual performance of such buildings, and a generally poor understanding as to whether they achieve their performance goals, or even what those goals might be. Even if the building ''works'' it is often dangerous to take a design solution from one climate and location and transport it to a new one without a good causal understanding of how the systems work. In addition, there is a wide range of existing and emerging glazing and fenestration technologies in use in these buildings, many of which break new ground with respect to innovative structural use of glass. It is unclear as to how well many of these designs would work as currently formulated in California locations dominated by intense sunlight and seismic events. Finally, the costs of these systems are higher than normal facades, but claims of energy and productivity savings are used to justify some of them. Once again these claims, while plausible, are largely unsupported. There have been major advances in glazing and facade technology over the past 30 years and we expect to see continued innovation and product development. It is critical in this process to be able to understand which performance goals are being met by current technology and design solutions, and which ones need further development and refinement. The primary goal of this study is to clarify the state-of-the-art of the performance of advanced building facades so that California building owners and designers can make informed decisions as to the value of these building concepts in meeting design goals for energy efficiency, ventilation, productivity and sustainability.

  5. Guiding Principles for Federal Leadership in High-Performance...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Guiding Principles for Federal Leadership in High-Performance and Sustainable Buildings: The Federal Energy Management Program (FEMP) provides guidance to...

  6. NNSA Presents SRS with Award for Achieving High Performance Sustainabl...

    National Nuclear Security Administration (NNSA)

    NNSA Presents SRS with Award for Achieving High Performance Sustainable Building Status | National Nuclear Security Administration

  7. Energy Design Guidelines for High Performance Schools: Hot and...

    Energy Savers [EERE]

    Energy Design Guidelines for High Performance Schools: Hot and Humid Climates. School districts around the country are finding that smart energy choices can help them...

  8. SEE Action Series: High Performance Leasing Strategies for State...

    Broader source: Energy.gov (indexed) [DOE]

    Real Estate Broker (and Tenant) Education and Engagement Program; Solution 6 - Energy Efficiency Code Variance Process; High-Performance Leasing Barriers; What's needed - *...

  9. Stretchable and High-Performance Supercapacitors with Crumpled Graphene Papers

    E-Print Network [OSTI]

    Zang, Jianfeng

    Fabrication of unconventional energy storage devices with high stretchability and performance is challenging, but critical to practical operations of fully power-independent stretchable electronics. While supercapacitors ...

  10. Memorandum of American High-Performance Buildings Coalition DOE...

    Energy Savers [EERE]

    Leadership in High Performance and Sustainable Buildings Memorandum of Understanding Green Building Certification Systems Requirement for New Federal Buildings and Major...

  11. Dish/Stirling High-Performance Thermal Storage

    Broader source: Energy.gov (indexed) [DOE]

    ... studies. Goal: * Demonstrate the feasibility of significant thermal storage for dish Stirling systems to leverage their existing high performance to greater capacity; * Demonstrate...

  12. Enhanced High and Low Temperature Performance of NOx Reduction...

    Energy Savers [EERE]

    High and Low Temperature Performance of NOx Reduction Materials 2013 DOE Hydrogen and Fuel Cells Program and Vehicle Technologies Program Annual Merit Review and Peer...

  13. High-Performance Home Technologies: Solar Thermal & Photovoltaic...

    Broader source: Energy.gov (indexed) [DOE]

    in each of the volumes. High-Performance Home Technologies: Solar Thermal & Photovoltaic Systems More Documents & Publications Building America Whole-House Solutions for...

  14. LBNL: High Performance Active Perimeter Building Systems - 2015...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Eleanor Lee, LBNL View the Presentation LBNL: High Performance Active Perimeter Building Systems - 2015 Peer Review More Documents & Publications FLEXLAB LBNL: NYC Office...

  15. Rethinking the idealized morphology in high-performance organic...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Rethinking the idealized morphology in high-performance organic photovoltaics. December 9, 2011. Traditionally, organic photovoltaic (OPV) active layers are viewed...

  16. High-Performance Thermoelectric Devices Based on Abundant Silicide...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Development of high-performance thermoelectric devices for vehicle waste heat recovery will include fundamental research to use abundant promising low-cost thermoelectric...

  17. Enhanced High Temperature Performance of NOx Storage/Reduction...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Program Annual Merit Review and Peer Evaluation ace026peden2011o.pdf More Documents & Publications Enhanced High Temperature Performance of NOx StorageReduction (NSR) Materials...

  18. DOE Announces Webinars on High Performance Space Conditioning...

    Broader source: Energy.gov (indexed) [DOE]

    18: Live Webinar on High Performance Space Conditioning Systems, Part II Webinar Sponsor: Building Technologies Office The Energy Department will present a live webinar titled...

  19. High Performance Thermal Interface Technology Overview

    E-Print Network [OSTI]

    R. Linderman; T. Brunschwiler; B. Smith; B. Michel

    2008-01-07T23:59:59.000Z

    An overview on recent developments in thermal interfaces is given with a focus on a novel thermal interface technology that allows the formation of 2-3 times thinner bondlines with strongly improved thermal properties at lower assembly pressures. This is achieved using nested hierarchical surface channels to control the particle stacking with highly particle-filled materials. Reliability testing with thermal cycling has also demonstrated a decrease in thermal resistance after extended times with longer overall lifetime compared to a flat interface.

  20. Performance Refactoring of Instrumentation, Measurement, and Analysis Technologies for Petascale Computing: the PRIMA Project

    SciTech Connect (OSTI)

    Malony, Allen D. [Department of Computer and Information Science, University of Oregon] [Department of Computer and Information Science, University of Oregon; Wolf, Felix G. [Juelich Supercomputing Centre, Forschungszentrum Juelich] [Juelich Supercomputing Centre, Forschungszentrum Juelich

    2014-01-31T23:59:59.000Z

    The growing number of cores provided by today’s high-end computing systems present substantial challenges to application developers in their pursuit of parallel efficiency. To find the most effective optimization strategy, application developers need insight into the runtime behavior of their code. The University of Oregon (UO) and the Juelich Supercomputing Centre of Forschungszentrum Juelich (FZJ) develop the performance analysis tools TAU and Scalasca, respectively, which allow high-performance computing (HPC) users to collect and analyze relevant performance data – even at very large scales. TAU and Scalasca are considered among the most advanced parallel performance systems available, and are used extensively across HPC centers in the U.S., Germany, and around the world. The TAU and Scalasca groups share a heritage of parallel performance tool research and partnership throughout the past fifteen years. Indeed, the close interactions of the two groups resulted in a cross-fertilization of tool ideas and technologies that pushed TAU and Scalasca to what they are today. It also produced two performance systems with an increasing degree of functional overlap. While each tool has its specific analysis focus, the tools were implementing measurement infrastructures that were substantially similar. Because each tool provides complementary performance analysis, sharing of measurement results is valuable to provide the user with more facets to understand performance behavior. However, each measurement system was producing performance data in different formats, requiring data interoperability tools to be created. A common measurement and instrumentation system was needed to more closely integrate TAU and Scalasca and to avoid the duplication of development and maintenance effort. The PRIMA (Performance Refactoring of Instrumentation, Measurement, and Analysis) project was proposed over three years ago as a joint international effort between UO and FZJ to accomplish these objectives: (1) refactor TAU and Scalasca performance system components for core code sharing and (2) integrate TAU and Scalasca functionality through data interfaces, formats, and utilities. As presented in this report, the project has completed these goals. In addition to shared technical advances, the groups have worked to engage with users through application performance engineering and tools training. In this regard, the project benefits from the close interactions the teams have with national laboratories in the United States and Germany. We have also sought to enhance our interactions through joint tutorials and outreach. UO has become a member of the Virtual Institute of High-Productivity Supercomputing (VI-HPS) established by the Helmholtz Association of German Research Centres as a center of excellence, focusing on HPC tools for diagnosing programming errors and optimizing performance. UO and FZJ have conducted several VI-HPS training activities together within the past three years.

  1. Pete Beckman on Exascale Computing | Argonne National Laboratory

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Computer science; petascale & exascale computing; supercomputing & high-performance computing; systems architecture & design. Pete Beckman, codirector of the...

  2. Performance issues in cloud computing for cyber-physical applications Michael Olson, K. Mani Chandy

    E-Print Network [OSTI]

    Chandy, K. Mani

    Performance issues in cloud computing for cyber-physical applications. Michael Olson, K. Mani Chandy ... @cs.caltech.edu, mani@cs.caltech.edu. I. INTRODUCTION: A focus of the paper is on the performance of cloud computing ... architectures based on Cloud computing systems for such CPS applications. We use earthquake detection ...

  3. Computational Methods for High-Dimensional Rotations

    E-Print Network [OSTI]

    Buja, Andreas

    To be useful, virtual rotations need to be under interactive user control, and they need to be animated. We ... scatters in virtual 3-D space. Although not obvious, three-dimensional data rotations can be extended ... is due to the power of human 3-D perception and the natural controls they afford. To perform 3-D ...
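
    The fragment above concerns animating rotations of data ("scatters in virtual 3-D space"). One standard building block for such animations is the plane (Givens) rotation; the numpy sketch below, with invented data and angle, rotates a point cloud in one coordinate plane and projects it for display:

    ```python
    # Illustrative plane (Givens) rotation of a d-dimensional point cloud,
    # the basic step used when animating data rotations / projections.
    import numpy as np

    def plane_rotation(d, i, j, theta):
        """Return the d x d rotation matrix acting in the (i, j) coordinate plane."""
        r = np.eye(d)
        c, s = np.cos(theta), np.sin(theta)
        r[i, i], r[j, j] = c, c
        r[i, j], r[j, i] = -s, s
        return r

    rng = np.random.default_rng(0)
    data = rng.normal(size=(1000, 5))            # 1000 points in 5-D (made up)

    # Small incremental rotation in the (0, 3) plane, as one animation frame would apply
    frame = data @ plane_rotation(5, 0, 3, np.deg2rad(2.0)).T

    # Project onto the first two coordinates for display
    projection_2d = frame[:, :2]
    print(projection_2d.shape)
    ```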

  4. Highly Integrated, High Performance, Imaging Detector Systems Using

    E-Print Network [OSTI]

    Fossum, Eric R.

    ... performance - Susceptible to bulk radiation damage, so radiation "soft" · CCDs are large capacitance devices so ... 3 volt) operation - Fast, digital readout - More radiation hard than CCDs · Strong commercial, biomedical ... Miniaturized imaging instruments: space telescopes, spacecraft star trackers, optical navigation, optical comm

  5. High performance magnet power supply optimization

    SciTech Connect (OSTI)

    Jackson, L.T.

    1988-01-01T23:59:59.000Z

    The power supply system for the joint LBL--SLAC proposed accelerator PEP provides the opportunity to take a fresh look at the current techniques employed for controlling large amounts of dc power and the possibility of using a new one. A basic requirement of +- 100 ppM regulation is placed on the guide field of the bending magnets and quadrupoles placed around the 2200 meter circumference of the accelerator. The optimization questions to be answered by this paper are threefold: Can a firing circuit be designed to reduce the combined effects of the harmonics and line voltage unbalance to less than 100 ppM in the magnet field? Given the ambiguity of the previous statement, is the addition of a transistor bank to a nominal SCR-controlled system the way to go, or should one opt for an SCR chopper system running at 1 kHz where multiple supplies are fed from one large dc bus? And how do the three possible systems compare in a cost--performance evaluation?

  6. Department of Computer & Information Sciences University of Delaware 302-831-2712 www.cis.udel.edu Endless career opportunities

    E-Print Network [OSTI]

    Firestone, Jeremy

    · Networking · High performance computing systems · Multimedia · Bioinformatics · Software engineering

  7. On Computation of Performance Bounds of Optimal Index Assignment

    E-Print Network [OSTI]

    2011-08-07T23:59:59.000Z

    Dec 1, 2010 ... X. Wu is with the Department of Electrical and Computer Engineering ... H. D. Mittelmann is with the School of Mathematical and Statistical Sciences, Arizona State ... usually computed with heuristic methods, and the results are ...

  8. Fingerprinting Communication and Computation on HPC Machines

    E-Print Network [OSTI]

    Peisert, Sean

    2010-01-01T23:59:59.000Z

    Journal of High Performance Computing Applications, 5(3):63 ... running on high-performance computing systems? Names of ... job runs on the high-performance computing (HPC) system, and ...

  9. On Computation of Performance Bounds of Optimal Index Assignment

    E-Print Network [OSTI]

    2010-01-20T23:59:59.000Z

    Department of Electrical and Computer Engineering ... School of Mathematical and Statistical Sciences .... methods, and the results are typically overestimates.

  10. Computationally Efficient Modeling of High-Efficiency Clean Combustion...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Meeting, June 7-11, 2010 -- Washington D.C. ace012aceves2010o.pdf More Documents & Publications Computationally Efficient Modeling of High-Efficiency Clean Combustion Engines...

  11. High Performance Dense Linear System Solver with Soft Error Resilience

    E-Print Network [OSTI]

    Dongarra, Jack

    High Performance Dense Linear System Solver with Soft Error Resilience. Peng Du, Piotr Luszczek ... systems, and in some scientific applications C/R is not applicable for soft error at all due to error ... high performance dense linear system solver with soft error resilience. By adopting a mathematical ...

  12. High Performance Control I. M. Y. Mareels2

    E-Print Network [OSTI]

    Moore, John Barratt

    ... in conjunction with linear quadratic optimal control with frequency shaping to achieve robustness. The engineering objective of high performance control using the tools of optimal control theory, robust control ... it has. Many systems today are control system limited and the quest is for high performance in the real ...
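
    For readers unfamiliar with the linear quadratic machinery referred to here, the following is a minimal sketch of a plain LQR design (the frequency-shaping extension is omitted); the plant matrices and weights are invented, and scipy is assumed to be available:

    ```python
    # Minimal LQR sketch: solve the continuous-time algebraic Riccati equation
    # for an invented second-order plant and form the state-feedback gain.
    # (Frequency-shaped designs augment the plant with weighting filters first.)
    import numpy as np
    from scipy.linalg import solve_continuous_are

    A = np.array([[0.0, 1.0],
                  [-2.0, -0.5]])      # invented plant dynamics
    B = np.array([[0.0],
                  [1.0]])
    Q = np.diag([10.0, 1.0])          # state weighting (design choice)
    R = np.array([[0.1]])             # control weighting

    P = solve_continuous_are(A, B, Q, R)          # Riccati solution
    K = np.linalg.solve(R, B.T @ P)               # optimal gain: u = -K x

    closed_loop_poles = np.linalg.eigvals(A - B @ K)
    print("LQR gain:", K)
    print("closed-loop poles:", closed_loop_poles)
    ```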

  13. "Exploring damage management of high performance metallic alloys in critical

    E-Print Network [OSTI]

    Acton, Scott

    Fatigue: Localized corrosion degrades fatigue performance of high strength aluminum alloys. The expense and ... Aluminum Alloys: Exposure to a moist environment degrades the fatigue resistance of all aluminum alloys ... "Exploring damage management of high performance metallic alloys in critical systems to develop new ...

  14. High energy neutron Computed Tomography developed

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)


  15. High Performance Walls in Hot-Dry Climates

    SciTech Connect (OSTI)

    Hoeschele, M.; Springer, D.; Dakin, B.; German, A.

    2015-01-01T23:59:59.000Z

    High performance walls represent a high priority measure for moving the next generation of new homes to the Zero Net Energy performance level. The primary goal in improving wall thermal performance revolves around increasing the wall framing from 2x4 to 2x6, adding more cavity and exterior rigid insulation, achieving insulation installation criteria meeting ENERGY STAR's thermal bypass checklist, and reducing the amount of wood penetrating the wall cavity.

  16. High performance BLAS formulation of the multipole-to-local operator in the Fast Multipole Method

    E-Print Network [OSTI]

    Paris-Sud XI, Université de

    High performance BLAS formulation of the multipole-to-local operator in the Fast Multipole Method (submitted). Olivier Coulaud, Pierre Fortin, Jean Roman. December 20, 2006. Abstract: The multipole-to-local (M2L) operator is the most time-consuming part of the far field computation in the Fast Multipole Method
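
    The essence of a BLAS formulation of M2L is to batch many translations that share the same transfer matrix into a single level-3 BLAS call (one GEMM) instead of many matrix-vector products. The numpy sketch below illustrates that reformulation on invented data; it is not the authors' scheme:

    ```python
    # Batching M2L translations: instead of applying the same P x P translation
    # matrix to each of N multipole-expansion vectors (N GEMV calls), stack the
    # vectors as columns and use one GEMM, which is what a BLAS formulation exploits.
    import numpy as np

    P = 64          # number of expansion coefficients (illustrative)
    N = 512         # number of source cells sharing one transfer vector (illustrative)

    rng = np.random.default_rng(1)
    m2l_matrix = rng.normal(size=(P, P))          # one M2L operator (same transfer vector)
    multipoles = rng.normal(size=(P, N))          # multipole expansions, one per column

    # Naive formulation: one matrix-vector product per cell
    locals_gemv = np.column_stack([m2l_matrix @ multipoles[:, k] for k in range(N)])

    # BLAS-friendly formulation: a single matrix-matrix product
    locals_gemm = m2l_matrix @ multipoles

    assert np.allclose(locals_gemv, locals_gemm)
    print(locals_gemm.shape)
    ```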

  17. PowerPack: Energy Profiling and Analysis of High-Performance Systems and Applications

    E-Print Network [OSTI]

    PowerPack: Energy Profiling and Analysis of High-Performance Systems and Applications Rong Ge of power and energy on the computer systems community, few studies provide insight to where and how power of these systems. These analyses include the impacts of chip multiprocessing on power and energy efficiency

  18. 2003 Workshop on High Performance Switching and Routing, M. Atiquzzaman, Univ. of Oklahoma, June 2003.

    E-Print Network [OSTI]

    Atiquzzaman, Mohammed

    2003 Workshop on High Performance Switching and Routing, June 2003. Mohammed Atiquzzaman, School of Computer Science, University of Oklahoma, Norman, OK 73019-6151. Email: atiq@ou.edu. Queue Management: Passive -- no preventive packet drop until buffer reaches ...

  19. H5hut: A High-Performance I/O Library for Particle-based Simulations

    SciTech Connect (OSTI)

    Howison, Mark; Adelmann, Andreas; Bethel, E. Wes; Gsell, Achim; Oswald, Benedikt; Prabhat,

    2010-09-24T23:59:59.000Z

    Particle-based simulations running on large high-performance computing systems over many time steps can generate an enormous amount of particle- and field-based data for post-processing and analysis. Achieving high-performance I/O for this data, effectively managing it on disk, and interfacing it with analysis and visualization tools can be challenging, especially for domain scientists who do not have I/O and data management expertise. We present the H5hut library, an implementation of several data models for particle-based simulations that encapsulates the complexity of HDF5 and is simple to use, yet does not compromise performance.
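
    The H5hut API itself is not reproduced here; the sketch below uses plain h5py to show the kind of per-time-step particle layout such a library manages (one group per step, one dataset per particle attribute), with all names and sizes invented:

    ```python
    # Hedged illustration (h5py, not the H5hut API): write particle attributes
    # for a few time steps, one group per step and one dataset per attribute.
    import h5py
    import numpy as np

    rng = np.random.default_rng(42)
    n_particles = 10_000

    with h5py.File("particles.h5", "w") as f:
        for step in range(3):
            grp = f.create_group(f"Step#{step}")          # invented naming convention
            for name in ("x", "y", "z", "px", "py", "pz"):
                grp.create_dataset(name,
                                   data=rng.normal(size=n_particles),
                                   chunks=True,            # let HDF5 pick a chunk shape
                                   compression="gzip")

    # Read back one attribute of one step for analysis
    with h5py.File("particles.h5", "r") as f:
        x0 = f["Step#0/x"][...]
        print(x0.mean(), x0.std())
    ```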

  20. High-Performance I/O: HDF5 for Lattice QCD

    SciTech Connect (OSTI)

    Kurth, Thorsten; Pochinsky, Andrew; Sarje, Abhinav; Syritsyn, Sergey; Walker-Loud, Andre

    2015-01-01T23:59:59.000Z

    Practitioners of lattice QCD/QFT have been some of the primary pioneer users of the state-of-the-art high-performance-computing systems, and contribute towards the stress tests of such new machines as soon as they become available. As with all aspects of high-performance-computing, I/O is becoming an increasingly specialized component of these systems. In order to take advantage of the latest available high-performance I/O infrastructure, to ensure reliability and backwards compatibility of data files, and to help unify the data structures used in lattice codes, we have incorporated parallel HDF5 I/O into the SciDAC supported USQCD software stack. Here we present the design and implementation of this I/O framework. Our HDF5 implementation outperforms optimized QIO at the 10-20% level and leaves room for further improvement by utilizing appropriate dataset chunking.

  1. From International Computer Performance and Dependability Symposium, Erlangen, Germany, April 1995, pp.285 294 MODELING RECYCLE: A CASE STUDY IN THE INDUSTRIAL USE OF

    E-Print Network [OSTI]

    Illinois at Urbana-Champaign, University of

    Center for Reliable and High-Performance Computing, Coordinated Science Laboratory, University of Illinois at Urbana-Champaign. From International Computer Performance and Dependability Symposium, Erlangen, Germany, April 1995, pp. 285-294. MODELING RECYCLE: A CASE STUDY IN THE INDUSTRIAL USE OF MEASUREMENT AND MODELING. Luai M...

  2. Achieving "Green" Concrete Through The Use Of High Performance FiberThe Use Of High Performance Fiber

    E-Print Network [OSTI]

    Chao, Shih-Ho

    Achieving "Green" Concrete Through The Use Of High Performance Fiber Reinforced Concrete. Shih-Ho Chao, Ph.D., Assistant Professor, Department of Civil Engineering, 2008. What is Durable Concrete?

  3. High energy neutron Computed Tomography developed

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

  4. In Proceedings of the 7th International Symposium on High-Performance Computer Architecture, Jan. 22-24, 2001, Monterrey, Mexico, pp. 255-266. (Best Student Paper Award)

    E-Print Network [OSTI]

    Lee, Thomas H.

    A Delay Model and Speculative Architecture for Pipelined Routers (Best Student Paper Award). The proposed speculative router performs as well as a wormhole router while improving throughput by up to 40%. Introduction: Interconnection networks are used in network switches and Internet routers.

  5. Low latency, high bandwidth data communications between compute nodes in a parallel computer

    DOE Patents [OSTI]

    Archer, Charles J. (Rochester, MN); Blocksome, Michael A. (Rochester, MN); Ratterman, Joseph D. (Rochester, MN); Smith, Brian E. (Rochester, MN)

    2010-11-02T23:59:59.000Z

    Methods, parallel computers, and computer program products are disclosed for low latency, high bandwidth data communications between compute nodes in a parallel computer. Embodiments include receiving, by an origin direct memory access (`DMA`) engine of an origin compute node, data for transfer to a target compute node; sending, by the origin DMA engine of the origin compute node to a target DMA engine on the target compute node, a request to send (`RTS`) message; transferring, by the origin DMA engine, a predetermined portion of the data to the target compute node using a memory FIFO operation; determining, by the origin DMA engine, whether an acknowledgement of the RTS message has been received from the target DMA engine; if an acknowledgement of the RTS message has not been received, transferring, by the origin DMA engine, another predetermined portion of the data to the target compute node using a memory FIFO operation; and if the acknowledgement of the RTS message has been received by the origin DMA engine, transferring, by the origin DMA engine, any remaining portion of the data to the target compute node using a direct put operation.
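
    As a hedged illustration of the decision logic in the abstract, and not the patented DMA implementation, the sketch below keeps sending fixed-size "memory FIFO" portions until a simulated RTS acknowledgement arrives, then hands the remainder to a single "direct put"; the function and parameter names are hypothetical.

        def transfer(data, chunk, ack_after_chunks):
            """Send fixed-size 'memory FIFO' portions until the RTS acknowledgement
            arrives (modeled as arriving after ack_after_chunks chunks), then hand
            the remaining portion to a single 'direct put'."""
            fifo_portions, offset, sent = [], 0, 0
            ack_received = False
            while offset < len(data) and not ack_received:
                fifo_portions.append(data[offset:offset + chunk])   # memory FIFO operation
                offset += chunk
                sent += 1
                ack_received = sent >= ack_after_chunks             # poll for the RTS ACK
            direct_put = data[offset:]                              # remaining portion
            return fifo_portions, direct_put

        fifo, remainder = transfer(bytes(range(100)), chunk=16, ack_after_chunks=3)
        print(len(fifo), len(remainder))       # -> 3 52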

  6. CRITICAL ISSUES IN HIGH END COMPUTING - FINAL REPORT

    SciTech Connect (OSTI)

    Corones, James [Krell Institute]

    2013-09-23T23:59:59.000Z

    High-End computing (HEC) has been a driver for advances in science and engineering for the past four decades. Increasingly, HEC has become a significant element in the national security, economic vitality, and competitiveness of the United States. Advances in HEC provide results that cut across traditional disciplinary and organizational boundaries. This program provides opportunities to share information about HEC systems and computational techniques across multiple disciplines and organizations through conferences and exhibitions of HEC advances held in Washington, DC, so that mission agency staff, scientists, and industry can come together with White House, Congressional, and Legislative staff in an environment conducive to the sharing of technical information, accomplishments, goals, and plans. A common thread across this series of conferences is the understanding of computational science and applied mathematics techniques across a diverse set of application areas of interest to the Nation. The specific objectives of this program are: Program Objective 1. To provide opportunities to share information about advances in high-end computing systems and computational techniques between mission critical agencies, agency laboratories, academics, and industry. Program Objective 2. To gather pertinent data and address specific topics of wide interest to mission critical agencies. Program Objective 3. To promote a continuing discussion of critical issues in high-end computing. Program Objective 4. To provide a venue where a multidisciplinary scientific audience can discuss the difficulties of applying computational science techniques to specific problems and can specify future research that, if successful, will eliminate these problems.

  7. Energy Performance and Comfort Level in High Rise and Highly Glazed Office Buildings

    E-Print Network [OSTI]

    Bayraktar, M.; Perino, M.; Yilmaz, A. Z.

    2010-01-01T23:59:59.000Z

    Thermal and visual comfort in buildings play a significant role in occupants' performance; on the other hand, achieving energy savings and high comfort levels can be quite a difficult task, especially in high rise buildings with highly glazed...

  8. Halide and Oxy-halide Eutectic Systems for High Performance High...

    Broader source: Energy.gov (indexed) [DOE]

    Q2: Halide and Oxy-Halide Eutectic Systems for High Performance High Temperature Heat Transfer Fluids; Corrosion in Very High-Temperature Molten Salt for Next Generation CSP Systems...

  10. Commercial remodeling : using computer graphic imagery to evaluate building energy performance during conceptual redesign

    E-Print Network [OSTI]

    Williams, Kyle D

    1985-01-01T23:59:59.000Z

    This research is an investigation of the relationship between commercial remodeling and building thermal performance. A computer graphic semiotic is developed to display building thermal performance based on this relationship. ...

  11. Computing Resources | Argonne Leadership Computing Facility

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    The Argonne Leadership Computing Facility is dedicated to large-scale computation and builds on Argonne's strengths in high-performance computing software, advanced hardware architectures, and applications expertise.

  12. Keeneland: Computational Science Using Heterogeneous GPU Computing

    E-Print Network [OSTI]

    Dongarra, Jack

    Contemporary High Performance Computing: From Petascale toward Exascale, 7.1 Overview. NSF 08-573: High Performance Computing System. The Keeneland project is led by the Georgia Institute of Technology (Georgia Tech), with the National Institute for Computational Sciences and Oak Ridge National Laboratory.

  13. A specialized Masters program in applying computation and the principles of dynamical systems

    E-Print Network [OSTI]

    Zürich, Universität

    ...students are: High Performance Computing; Applied Math and Computational Methods; Simulation and Modeling.

  14. Ductility enhancement of high performance cementitious composites and structures

    E-Print Network [OSTI]

    Chuang, Eugene (Eugene Yu), 1975-

    2002-01-01T23:59:59.000Z

    High performance cementitious composites (HP2C) are a new generation of fiber reinforced cementitious composites (FRCC) with substantial improvements in mechanical behavior. The most important development in these HP2C ...

  15. Anne Arundel County- High Performance Dwelling Property Tax Credit

    Broader source: Energy.gov [DOE]

    The state of Maryland permits local governments (Md Code: Property Tax § 9-242) to offer property tax credits for high performance buildings if they choose to do so. In October 2010 Anne Arundel...

  16. High Performance Building Standards in New State Construction

    Broader source: Energy.gov [DOE]

    In January 2008, New Jersey enacted legislation mandating the use of high performance green building standards in new state construction. The standard requires that new buildings larger than 15...

  17. Energy Star Helps Manufacturers To Achieve High Energy Performance 

    E-Print Network [OSTI]

    Dutrow, E.; Hicks, T.

    2001-01-01T23:59:59.000Z

    From personal electronic devices to homes and office buildings, ENERGY STAR® is a recognized symbol of high-quality energy performance that enables consumers, home buyers, and businesses to make informed energy decisions. Now, the U...

  18. High-Performance Organic Light-Emitting Diodes Using ITO

    E-Print Network [OSTI]

    Ho, Seng-Tiong

    High-Performance Organic Light-Emitting Diodes Using ITO Anodes Grown on Plastic by Room... Mark E. Madsen, Antonio DiVenere, and Seng-Tiong Ho. Organic light-emitting diodes (OLEDs) fabricated...

  19. Design of wind turbines with Ultra-High Performance Concrete

    E-Print Network [OSTI]

    Jammes, François-Xavier

    2009-01-01T23:59:59.000Z

    Ultra-High Performance Concrete (UHPC) has proven an asset for bridge design as it significantly reduces costs. However, UHPC has not been applied yet to wind turbine technology. Design codes do not propose any recommendations ...

  20. High Performance Leasing Strategies for State and Local Governments

    Broader source: Energy.gov [DOE]

    Presentation for the SEE Action Series: High Performance Leasing Strategies for State and Local Governments webinar, presented on January 26, 2013 as part of the U.S. Department of Energy's Technical Assistance Program (TAP).