Note: This page contains sample records for the topic "high performance computer" from the National Library of EnergyBeta (NLEBeta).
While these samples are representative of the content of NLEBeta,
they are not comprehensive nor are they the most current set.
We encourage you to perform a real-time search of NLEBeta
to obtain the most current and comprehensive results.


1

High Performance Computing  

Science Conference Proceedings (OSTI)

High Performance Computing. Summary: High Performance Computing (HPC) enables work on challenging problems that ...

2012-03-05T23:59:59.000Z

2

High Performance Computing  

NLE Websites -- All DOE Office Websites (Extended Search)

Information Science, Computing, Applied Math » High Performance Computing. Providing world-class high performance computing capability that enables...

3

High Performance Computing in  

E-Print Network (OSTI)

High Performance Computing in Bioinformatics. Thomas Ludwig (t.ludwig@computer.org), Ruprecht ... PART I: High Performance Computing (Thomas Ludwig). PART II: HPC Computing in Bioinformatics (Alexandros Stamatakis). GCB'04 ... PART I: High Performance Computing, Introduction ...

Stamatakis, Alexandros

4

High Performance Computing  

NLE Websites -- All DOE Office Websites (Extended Search)

Information Science, Computing, Applied Math » High Performance Computing. Providing world-class high performance computing capability that enables unsurpassed solutions to complex problems of strategic national interest. Gary Grider, High Performance Computing Division Leader; Randal Rheinheimer, High Performance Computing Deputy Division Leader. Contact Us: Carol Hogsett, Student/Internship Opportunities; Email Division Office. Managing world-class supercomputing centers. The Powerwall is used by LANL scientists to view objects and processes in 3D. High Performance Computing video (13:01), Gary Grider, HPC Division Leader. The High Performance Computing (HPC) Division supports the Laboratory mission by managing world-class Supercomputing Centers.

5

.NET High Performance Computing.  

E-Print Network (OSTI)

Graphics Processing Units (GPUs) have been extensively applied in the High Performance Computing (HPC) community. HPC applications require additional special programming environments to improve ...

Ou, Hsuan-Hsiu

2012-01-01T23:59:59.000Z

6

Computational biology and high performance computing  

E-Print Network (OSTI)

Computational Biology and High Performance Computing. Presenters: Manfred Zorn, Teresa ... 99-Portland ... High performance computing has become one of the ...

Shoichet, Brian

2011-01-01T23:59:59.000Z

7

High Performance Computing  

NLE Websites -- All DOE Office Websites (Extended Search)

High Performance Computing. Managing world-class supercomputing centers. The Powerwall is used by LANL scientists to view objects and processes in 3D. Video (13:01), Gary...

8

High Performance Computing School COMSC  

E-Print Network (OSTI)

High Performance Computing School, COMSC. This module aims to provide the students with fundamental knowledge and understanding of techniques associated with High Performance Computing and its practical ... skills in analysing and evaluating High Performance Computing, and will be structured around ...

Martin, Ralph R.

9

High Performance Computing: Modeling & Simulation  

NLE Websites -- All DOE Office Websites (Extended Search)

High Performance Computing: Modeling & Simulation. Express Licensing: Adaptive Real-Time Methodology for Optimizing Energy-Efficient...

10

Computational biology and high performance computing  

E-Print Network (OSTI)

... Paper in Computational Biology ... The First Step Beyond the ... M. Glaeser, Mol. & Cell Biology, UCB and Life Sciences ... LBNL-44460, Computational Biology and High Performance ...

Shoichet, Brian

2011-01-01T23:59:59.000Z

11

Introduction to High Performance Computing  

NLE Websites -- All DOE Office Websites (Extended Search)

Introduction to High Performance Computing. June 10, 2013. Downloads: Gerber-HPC-2.pdf...

12

High Performance Computing contributions to  

E-Print Network (OSTI)

High Performance Computing contributions to DoD Mission Success, 2002. Approved for public release ... C nanotube in a field emitter configuration ... SECTION 1: Introduction; Overview of the High Performance Computing Modernization Program ...

13

High Performance Computing and Visualization Group ...  

Science Conference Proceedings (OSTI)

High Performance Computing and Visualization Group. Welcome. ...

2011-09-27T23:59:59.000Z

14

Thrusts in High Performance Computing  

NLE Websites -- All DOE Office Websites (Extended Search)

Thrusts in High Performance Computing: Science at Scale (petaflops to exaflops); Science through Volume (thousands to millions of simulations); Science in Data (petabytes to...

15

High Performance Computing @ Fermilab  

NLE Websites -- All DOE Office Websites (Extended Search)

... simulations, research & development of the physics analysis software. Computing: data handling & storage, networking, analysis software. The LHC CMS Experiment, one of many experiments...

16

High Performance Computing Meets Experimental Mathematics  

E-Print Network (OSTI)

High Performance Computing Meets Experimental Mathematics. David H. Bailey, Lawrence Berkeley ... large, high-performance computer systems. What's more, in these new applications the computer ... computation, implemented on high performance computer (HPC) systems. We present these results, in part ...

Bailey, David H.

17

Army High Performance Computing Research Center  

E-Print Network (OSTI)

Army High Performance Computing Research Center. Applying advanced computational science research ... challenges. http://me.stanford.edu/research/centers/ahpcrc

Prinz, Friedrich B.

18

Computational biology and high performance computing  

E-Print Network (OSTI)

... Acknowledgements for Community White Paper in Computational ... Computational Biology white paper ... Is there strong objection ... portions of community white paper on high end computing ...

Shoichet, Brian

2011-01-01T23:59:59.000Z

19

Purchase of High Performance Computing (HPC) Central Compute Resources  

E-Print Network (OSTI)

Purchase of High Performance Computing (HPC) Central Compute Resources by Northwestern Researchers. Summarizes the High Performance Computing (HPC) compute resources that faculty engaged in research may purchase ... of code on the Quest high performance computing system. The installation cycles for new ...

Shull, Kenneth R.

20

Computational Biology and High Performance Computing 2000  

SciTech Connect

The pace of extraordinary advances in molecular biology has accelerated in the past decade, due in large part to discoveries coming from genome projects on human and model organisms. The advances in the genome project so far, happening well ahead of schedule and under budget, have exceeded the dreams of its protagonists, let alone formal expectations. Biologists expect the next phase of the genome project to be even more startling in terms of dramatic breakthroughs in our understanding of human biology, the biology of health and of disease. Only today can biologists begin to envision the experimental, computational, and theoretical steps necessary to exploit genome sequence information for its medical impact, its contribution to biotechnology and economic competitiveness, and its ultimate contribution to environmental quality. High performance computing has become one of the critical enabling technologies that will help translate this vision of future advances in biology into reality. Biologists are increasingly becoming aware of the potential of high performance computing. The goal of this tutorial is to introduce the exciting new developments in computational biology and genomics to the high performance computing community.

Simon, Horst D.; Zorn, Manfred D.; Spengler, Sylvia J.; Shoichet, Brian K.; Stewart, Craig; Dubchak, Inna L.; Arkin, Adam P.

2000-10-19T23:59:59.000Z



21

Collaboration to advance high-performance computing  

NLE Websites -- All DOE Office Websites (Extended Search)

Collaboration to advance high-performance computing. LANL and EMC will enhance, design, build, test, and deploy new cutting-edge...

22

High performance computing Igal G. Rasin  

E-Print Network (OSTI)

High Performance Computing. Igal G. Rasin, Department of Chemical Engineering, Technion, Israel ... with different parallelization techniques and tools used in high performance computing (HPC). The tutorial ...

Adler, Joan

23

HIGH PERFORMANCE COMPUTING TODAY Jack Dongarra  

E-Print Network (OSTI)

High Performance Computing Today. Jack Dongarra, Computer Science Department, University ... detailed and well-founded analysis of the state of high performance computing. This paper summarizes some of ... systems available for performing grid based computing. Keywords: high performance computing, parallel ...

Dongarra, Jack

24

Tutorial: High Performance Computing Igal G. Rasin  

E-Print Network (OSTI)

Tutorial: High Performance Computing. Igal G. Rasin, Department of Chemical Engineering, Israel ... 27 Nisan 5769 (21.04.2009) ... Motivation: What is High Performance Computing? What ... for serial computing? ...

Adler, Joan

25

Bringing Energy Efficiency to High Performance Computing  

NLE Websites -- All DOE Office Websites (Extended Search)

Bringing Energy Efficiency to High Performance Computing: Oak Ridge National Laboratory's Jaguar Supercomputer. William Tschudi, September 2013. The ability of high performance...

26

NREL: Computational Science - High-Performance Computing Capabilities  

NLE Websites -- All DOE Office Websites (Extended Search)

High-Performance Computing Capabilities The Computational Science Center carries out research using computers as the primary tool of investigation. The Center focuses on supporting...

27

Providing Access to High Performance Computing Technologies  

E-Print Network (OSTI)

Providing Access to High Performance Computing Technologies. Jack Dongarra, Shirley Browne ... to high performance computing technologies. One effort, the National HPCC Software Exchange, is providing ... scientists involved with High Performance Computing and Communications (HPCC) [1]. The NHSE facilitates ...

Dongarra, Jack

28

College of Engineering High Performance Computing Cluster  

E-Print Network (OSTI)

College of Engineering High Performance Computing Cluster, Policy and Procedures COE-HPC-01 ... registered as requiring high performance computing; the course identification/registration process ... the College High Performance Computing system will need to register for system access by visiting http

Demirel, Melik C.

29

AGILA: The Ateneo High Performance Computing System  

E-Print Network (OSTI)

A Beowulf cluster is a low-cost parallel high performance computing system that uses commodity hardware components, such as personal computers and standard Ethernet adapters and switches, and runs on freely available software such as Linux and LAM-MPI. In this paper the development of the AGILA HPCS, which stands for the Ateneo Gigaflops-Range Performance, Linux OS, and Athlon Processors High Performance Computing System, is discussed, including its hardware and software configurations and performance evaluation. Keywords: high-performance computing, commodity cluster computing, parallel computing, Beowulf-class cluster.

Rafael Saldaña; Felix P. Muga II; Jerrold J. Garcia; William Emmanuel S. Yu

2000-01-01T23:59:59.000Z
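The AGILA record above is named for gigaflops-range performance. A back-of-the-envelope way to estimate a commodity cluster's theoretical peak is nodes * cores per node * clock (GHz) * flops per cycle. A minimal sketch; the node count, clock speed, and flops-per-cycle figures are illustrative assumptions, not AGILA's actual configuration:

```python
def peak_gflops(nodes: int, cores_per_node: int, ghz: float, flops_per_cycle: int) -> float:
    """Theoretical peak in gigaflops: nodes * cores * clock (GHz) * flops/cycle."""
    return nodes * cores_per_node * ghz * flops_per_cycle

# Hypothetical example: 16 single-core Athlon nodes at 1.0 GHz, 1 flop per cycle.
print(peak_gflops(16, 1, 1.0, 1))  # 16.0, i.e. gigaflops-range
```

Real sustained performance (e.g. on LINPACK) is well below this peak; the formula only bounds it from above.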

30

Integrating High Performance Computing and Virtual Environments  

E-Print Network (OSTI)

High performance computing has become accepted as a tool that can be used to solve many large scale computational problems. Because of the complexity of the problems associated with high performance computing, visualization of the output of high performance computing applications has always been an important factor in providing a complete problem solving environment for the high performance computing user. As visualization technology advances, it is important to consider what impacts those advances will have on the integration of high performance computing and visualization. Virtual environments are the most recent, and arguably the most powerful, visualization environments in use today. In this paper we analyze the current state of the research of integrating visualization, and in particular virtual environments, with high performance computing. We also present a framework for implementing such an environment and report on the status of its implementation at the Australian National Un...

Brian Corrie; David Sitsky; Paul Mackerras

1997-01-01T23:59:59.000Z

31

Math & Computational Sciences Division: High Performance Computing and Visualization

E-Print Network (OSTI)

Math & Computational Sciences Division: High Performance Computing and Visualization. Research and Development in Visual Analysis: Judith Devaney, Terrence Griffin, John ...

Perkins, Richard A.

32

AGILA: The Ateneo High Performance Computing System  

E-Print Network (OSTI)

A Beowulf cluster is a low-cost parallel high performance computing system that uses commodity hardware components, such as personal computers and standard Ethernet adapters and switches, and runs on freely available software such as Linux and LAM-MPI. In this paper the development of the AGILA HPCS, which stands for the Ateneo Gigaflops-Range Performance, Linux OS, and Athlon Processors High Performance Computing System, is discussed, including its hardware and software configurations and performance evaluation. Keywords: high-performance computing, commodity cluster computing, parallel computing, Beowulf-class cluster. INTRODUCTION: In the Philippines today, computing power in the range of gigaflops is not generally available for use in research and development. Conventional supercomputers or high performance computing systems are very expensive and are beyond the budgets of most university research groups, especially in developing countries such as the Philippines. A lower cost option...

Rafael P. Saldaña; Felix P. Muga II; Jerrold J. Garcia; William Emmanuel S. Yu

2000-01-01T23:59:59.000Z

33

High-Performance Computing/Numerical ... The International Journal of High Performance Computing Applications

E-Print Network (OSTI)

The International Journal of High Performance Computing ... and barriers in the development of high-performance computing (HPC) algorithms and software. The activity has ... computing, numerical analysis, roadmap, applications and algorithms, software ... The High-performance ...

Higham, Nicholas J.

34

for the Support of High Performance Computing  

E-Print Network (OSTI)

Architecture for the Support of High Performance Computing was sponsored by the National Science Foundation to identify critical research topics in computer architecture as they relate to high performance computing. Following a wide-ranging discussion of the computational characteristics and requirements of the grand challenge applications, the workshop identified four major computer architecture grand challenges as crucial to advancing the state of the art of high performance computation in the coming decade. These are: (1) idealized parallel computer models; (2) usable peta-ops (10^15 ops) performance; (3) computers in an era of HDTV, gigabyte networks, and visualization; and (4) infrastructure for prototyping architectures. This report overviews some of the demands of the grand challenge applications and presents the above four grand challenges for computer architecture. ... A. Origin of the Workshop

Howard Jay Siegel; Seth Abraham; William L. Bain; Kenneth E. Batcher; Thomas L. Casavant; Doug DeGroot; Jack B. Dennis; David C. Douglas; Tse-yun Feng; James R. Goodman; Alan Huang; Harry F. Jordan; J. Robert Jump; Yale N. Patt; Alan Jay Smith; James E. Smith; Lawrence Snyder; Harold S. Stone; Russ Tuck; Benjamin W. Wah

1992-01-01T23:59:59.000Z

35

Mercury | RPC for High-Performance Computing  

NLE Websites -- All DOE Office Websites (Extended Search)

Mercury: RPC for High-Performance Computing. Overview, collaborators, downloads, documentation, getting started, Doxygen, publications, support, mailing lists, bug reports...

36

Middleware in Modern High Performance Computing System Architectures  

E-Print Network (OSTI)

Middleware in Modern High Performance Computing System Architectures. Christian Engelmann, Hong Ong ... A trend in modern high performance computing (HPC) system architectures employs "lean" compute nodes ... continue to reside on compute nodes. Key words: high performance computing, middleware, lean compute node.

Engelmann, Christian

37

High performance computing: Clusters, constellations, MPPs, and future directions  

E-Print Network (OSTI)

... and Jim Gray, High Performance Computing: Crays, Clusters, ... The Marketplace of High-Performance Computing, Parallel ... High Performance Computing: Clusters, Constellations, MPPs, ...

Dongarra, Jack; Sterling, Thomas; Simon, Horst; Strohmaier, Erich

2003-01-01T23:59:59.000Z

38

Introduction to High Performance Computing Using GPUs  

NLE Websites -- All DOE Office Websites (Extended Search)

Introduction to High Performance Computing Using GPUs. July 11, 2013. NERSC, NVIDIA, and The Portland Group will present a one-day workshop, "Introduction to High...

39

Co-design for high performance computing.  

Science Conference Proceedings (OSTI)

Co-design has been identified as a key strategy for achieving Exascale computing in this decade. This paper describes the need for co-design in High Performance Computing, related research in embedded computing, and the development of hardware/software co-simulation methods.

Dosanjh, Sudip Singh; Hemmert, Karl Scott; Rodrigues, Arun F.

2010-07-01T23:59:59.000Z

40

Co-design for High Performance Computing  

Science Conference Proceedings (OSTI)

Co-design has been identified as a key strategy for achieving Exascale computing in this decade. This paper describes the need for co-design in High Performance Computing, related research in embedded computing, and the development of hardware/software co-simulation methods.

Arun Rodrigues; Sudip Dosanjh; Scott Hemmert

2010-01-01T23:59:59.000Z



41

The Role of Co-design in High Performance Computing  

NLE Websites -- All DOE Office Websites (Extended Search)

Role of Co-design in High Performance Computing. Richard F. Barrett, Shekhar Borkar, Sudip S. Dosanjh, Simon D. Hammond, Michael A. Heroux, X. Sharon Hu, Justin...

42

High Performance Computing Data Center (Fact Sheet)  

SciTech Connect

This two-page fact sheet describes the new High Performance Computing Data Center being built in the ESIF and highlights some of the center's capabilities and unique features.

Not Available

2012-08-01T23:59:59.000Z

43

High performance computing meets experimental mathematics  

Science Conference Proceedings (OSTI)

In this paper we describe some novel applications of high performance computing in a discipline now known as "experimental mathematics." The paper reviews some recent published work, and then presents some new results that have not yet appeared in the ...

David H. Bailey; David Broadhurst; Yozo Hida; Xiaoye S. Li; Brandon Thompson

2002-11-01T23:59:59.000Z

44

High Performance Computing in Accelerator Science: Past Successes. Future Challenges  

E-Print Network (OSTI)

High Performance Computing in Accelerator Science: Past ... AC02-05CH11231 ... High Performance Computing in Accelerator ...

Ryne, R.

2013-01-01T23:59:59.000Z

45

Debugging a high performance computing program  

SciTech Connect

Methods, apparatus, and computer program products are disclosed for debugging a high performance computing program by gathering lists of addresses of calling instructions for a plurality of threads of execution of the program, assigning the threads to groups in dependence upon the addresses, and displaying the groups to identify defective threads.

Gooding, Thomas M.

2013-08-20T23:59:59.000Z
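The patent abstract above describes grouping threads by the addresses of their calling instructions so that outlier groups stand out as candidate defective threads. A minimal sketch of that grouping step only; the thread IDs and addresses are invented for illustration, and this is not the patented implementation:

```python
from collections import defaultdict

def group_threads(call_addresses: dict) -> dict:
    """Map each distinct tuple of call-site addresses to the thread IDs sharing it."""
    groups = defaultdict(list)
    for tid, addrs in call_addresses.items():
        groups[tuple(addrs)].append(tid)
    return dict(groups)

# Hypothetical snapshot: most threads wait at the same barrier; one is elsewhere.
threads = {
    0: [0x4005a0, 0x4007c8],
    1: [0x4005a0, 0x4007c8],
    2: [0x4005a0, 0x4007c8],
    3: [0x4009f4],  # lone thread stuck at a different call site: suspicious
}
for addrs, tids in group_threads(threads).items():
    print([hex(a) for a in addrs], "-> threads", tids)
```

With thousands of threads, the display step collapses to a handful of groups, and a group containing one or two threads is the natural place to start debugging.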

46

Fermilab | Science at Fermilab | Computing | High-performance...  

NLE Websites -- All DOE Office Websites (Extended Search)

Lattice QCD Farm at the Grid Computing Center at Fermilab. High-performance Computing: a workstation computer can perform billions of multiplication and addition...

47

High Performance Computing Data Center Metering Protocol | Department...  

Energy.gov (U.S. Department of Energy (DOE)) Indexed Site

High Performance Computing Data Center Metering Protocol. The guide details the methods for measurement in High Performance...

48

Trends in High-Performance Computer Architecture  

E-Print Network (OSTI)

Trends in High-Performance Computer Architecture. David J. Lilja, Department of Electrical ... Historical Trends and Perspective: pre-WW II, mechanical calculating machines; WW II to the 50's, technology ... of Minnesota, April 1996 ... Performance Metrics: system throughput, work per unit time; rate, used by system ...

Minnesota, University of

49

Lessons learned when building a greenfield high performance computing ecosystem  

Science Conference Proceedings (OSTI)

Faced with a fragmented research computing environment and growing needs for high performance computing resources, Michigan State University established the High Performance Computing Center in 2005 to serve as a central high performance computing resource ...

Andrew R. Keen; William F. Punch; Greg Mason

2012-12-01T23:59:59.000Z

50

High Performance Computing Data Center Metering Protocol  

Energy.gov (U.S. Department of Energy (DOE)) Indexed Site

High Performance Computing Data Center Metering Protocol. Prepared for: U.S. Department of Energy, Office of Energy Efficiency and Renewable Energy, Federal Energy Management Program. Prepared by: Thomas Wenning and Michael MacDonald, Oak Ridge National Laboratory, September 2010. Introduction: Data centers in general are continually using more compact and energy intensive central processing units, but the total number and size of data centers continues to increase to meet progressive computing requirements. In addition, efforts are underway to consolidate smaller data centers across the country. This consolidation is resulting in a growth of high-performance computing facilities (i.e., supercomputers) which consume large amounts of energy to support the numerically intensive

51

High Performance Computing at the Oak Ridge Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

High Performance Computing at the Oak Ridge Leadership Computing Facility. Outline: our mission; computer systems (present, past, future); challenges along the way; resources for users. Our mission: ORNL is the U.S. Department of Energy's largest science and energy laboratory. World's most powerful computing facility; nation's largest concentration of open source materials research; $1.3B budget; 4,250 employees; 3,900 research guests annually; $350 million invested in modernization; nation's most diverse energy portfolio; the $1.4B Spallation Neutron Source in operation; managing the billion-dollar U.S. ITER project. Computing Complex @ ORNL: world's most powerful computer for open science

52

Empirical Performance Analysis of High Performance Computing Benchmarks Across Variations in Cloud Computing.  

E-Print Network (OSTI)

High Performance Computing (HPC) applications are data-intensive scientific software requiring significant CPU and data storage capabilities. Researchers have examined the performance of Amazon Elastic Compute ...

Mani, Sindhu

2012-01-01T23:59:59.000Z

53

NERSC 2011: High Performance Computing Facility Operational Assessment for the National Energy Research Scientific Computing Center  

E-Print Network (OSTI)

NERSC 2011 High Performance Computing Facility Operational ... by providing high-performance computing, information, data, ... deep knowledge of high performance computing to overcome ...

Antypas, Katie

2013-01-01T23:59:59.000Z

54

Managing Stakeholder Requirements in High Performance Computing Procurement  

E-Print Network (OSTI)

Managing Stakeholder Requirements in High Performance Computing Procurement. John Rooksby, Mark ..., Department of Management, Lancaster University. High Performance Computing (HPC) facilities are provided ... strategy can rigorously meet the demands of the potential users. Introduction: High Performance Computing ...

Sommerville, Ian

55

SYSTEMS ENGINEERING FOR HIGH PERFORMANCE COMPUTING SOFTWARE: THE HDDA DAGH  

E-Print Network (OSTI)

Systems Engineering for High Performance Computing Software: The HDDA/DAGH Infrastructure ... systems implementing high performance computing applications. The example which drives the creation ... in the context of high performance computing software. Application of these principles will be seen ...

Parashar, Manish

56

Energy Efficiency Opportunities in Federal High Performance Computing...  

Energy.gov (U.S. Department of Energy (DOE)) Indexed Site

Energy Efficiency Opportunities in Federal High Performance Computing Data Centers. Case study...

57

DOE Science Showcase - High-Performance Computing | OSTI, US...  

Office of Scientific and Technical Information (OSTI)

DOE Science Showcase: High-Performance Computing. Supercomputers, or massively parallel high-performance computers (HPCs), are machines that employ very large numbers of processors in...

58

High Performance computing Data Center (Fact Sheet), NREL (National...  

NLE Websites -- All DOE Office Websites (Extended Search)

... via efficient evaporative cooling towers serving the HPC data center. High Performance Computing Data Center: the new high performance computing (HPC) data center in NREL's...

59

Monitoring SLAC High Performance UNIX Computing Systems  

SciTech Connect

Knowledge of the effectiveness and efficiency of computers is important when working with high performance systems. The monitoring of such systems is advantageous in order to foresee possible misfortunes or system failures. Ganglia is a software system designed for high performance computing systems to retrieve specific monitoring information. An alternative storage facility for Ganglia's collected data is needed since its default storage system, the round-robin database (RRD), struggles with data integrity. The creation of a script-driven MySQL database solves this dilemma. This paper describes the process taken in the creation and implementation of the MySQL database for use by Ganglia. Comparisons between data storage by both databases are made using gnuplot and Ganglia's real-time graphical user interface.

Lettsome, Annette K. (Bethune-Cookman College; SLAC)

2005-12-15T23:59:59.000Z
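The abstract above replaces Ganglia's fixed-resolution round-robin database with a script-driven SQL store, so raw samples are retained rather than averaged away. A hedged sketch of that idea, using Python's built-in sqlite3 as a stand-in for MySQL; the table and column names are invented, not taken from the paper:

```python
import sqlite3
import time

# In-memory database standing in for the paper's MySQL instance.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE metrics (
    host TEXT, metric TEXT, value REAL, ts INTEGER)""")

def record(host, metric, value, ts=None):
    """Append one Ganglia-style metric sample; nothing is overwritten or downsampled."""
    conn.execute("INSERT INTO metrics VALUES (?, ?, ?, ?)",
                 (host, metric, value, ts if ts is not None else int(time.time())))

record("node01", "load_one", 0.42, ts=1)
record("node01", "load_one", 0.58, ts=2)
rows = conn.execute(
    "SELECT value FROM metrics WHERE host='node01' ORDER BY ts").fetchall()
print(rows)  # every raw sample survives, unlike an RRD's consolidated averages
```

The design choice being illustrated: an append-only SQL table trades bounded storage (the RRD's strength) for full-fidelity history, which is what the gnuplot comparisons in the paper rely on.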

60

Energy Efficiency Opportunities in Federal High Performance Computing Data Centers  

Energy.gov (U.S. Department of Energy (DOE))

Case study describes an outline of energy efficiency opportunities in federal high performance computing data centers.



61

Evaluation of high-performance computing software  

Science Conference Proceedings (OSTI)

The absence of unbiased and up-to-date comparative evaluations of high-performance computing software complicates a user's search for the appropriate software package. The National HPCC Software Exchange (NHSE) is attacking this problem using an approach that includes independent evaluations of software, incorporation of author and user feedback into the evaluations, and Web access to the evaluations. We are applying this approach to the Parallel Tools Library (PTLIB), a new software repository for parallel systems software and tools, and HPC-Netlib, a high performance branch of the Netlib mathematical software repository. Updating the evaluations with feedback and making them available via the Web helps ensure accuracy and timeliness, and using independent reviewers produces unbiased comparative evaluations difficult to find elsewhere.

Browne, S.; Dongarra, J. [Univ. of Tennessee, Knoxville, TN (United States); Rowan, T. [Oak Ridge National Lab., TN (United States)

1996-12-31T23:59:59.000Z

62

Computer experts meet at CERN to discuss future of european high performance computing and networking  

E-Print Network (OSTI)

Computer experts meet at CERN to discuss future of European high performance computing and networking

CERN Press Office. Geneva

1992-01-01T23:59:59.000Z

63

Homepage: High-Performance Computing Systems, HPC-3: High-Performance...  

NLE Websites -- All DOE Office Websites (Extended Search)

Phone: 667-5243; Fax: 667-7665; MS T080. Computing solutions that work for you. High-Performance Computing Systems: The High-Performance Computing Systems Group provides production...

64

C++ programming techniques for High Performance Computing on systems with  

E-Print Network (OSTI)

C++ programming techniques for High Performance Computing on systems with non-uniform memory access ... (including NUMA) without sacrificing performance. ccNUMA: In High Performance Computing (HPC), shared-memory ...

Sanderson, Yasmine

65

CENTER FOR HIGH PERFORMANCE COMPUTING Overview of CHPC  

E-Print Network (OSTI)

Center for High Performance Computing: Overview of CHPC. Julia Harrison, Associate Director, Center for High Performance Computing, julia.harrison@utah.edu, Spring 2009. http://www.chpc.utah.edu/docs/services.html ... http://www.chpc.utah.edu ... Arches ...

Alvarado, Alejandro Sánchez

66

High Performance Computing with a Conservative Spectral Boltzmann Solver  

E-Print Network (OSTI)

High Performance Computing with a Conservative Spectral Boltzmann Solver. Jeffrey R. Haack and Irene ... the structure of the collisional formulation for high performance computing environments. The locality in space ... on high performance computing resources. We also use the improved computational power of this method ...

67

Homepage: Computing Operations & Support, HPC-2: High-Performance...  

NLE Websites -- All DOE Office Websites (Extended Search)

2 Home SERVICES PRODUCTS Data Storage ES&H Management and Support High Performance Computing Operations Procurement Computer Support CONTACTS Group Leader (Acting) Cindy Martin...

68

Geocomputation's future at the extremes: high performance computing and nanoclients  

E-Print Network (OSTI)

Geocomputation's future at the extremes: high performance computing and nanoclients. K.C. Clarke. Keywords: High performance computing; Tractability; Geocomputation. E-mail address: kclarke@geog.ucsb.edu (K

Clarke, Keith

69

Software Reuse in High Performance Computing Shirley Browne  

E-Print Network (OSTI)

Software Reuse in High Performance Computing Shirley Browne University of Tennessee 107 Ayres Hall high performance computing architectures in the form of distributed memory multiprocessors have become of programming applications to run on these machines. Economical use of high performance computing and subsequent

Dongarra, Jack

70

Advanced Environments and Tools for High Performance Computing  

E-Print Network (OSTI)

Advanced Environments and Tools for High Performance Computing Problem-Solving Environments Environments and Tools for High Performance Computing. The conference was chaired by Professor D. W. Walker and managing distributed high performance computing resources is important for a PSE to meet the requirements

Walker, David W.

71

A Study of Software Development for High Performance Computing  

E-Print Network (OSTI)

A Study of Software Development for High Performance Computing Manish Parashar, Salim Hariri Parallel Distributed Systems, 1994 Abstract Software development in a High Performance Computing (HPC. The objective of this paper is to study the software development process in a high performance computing

Parashar, Manish

72

Universal High Performance Computing ---We Have Just Begun  

E-Print Network (OSTI)

Universal High Performance Computing --- We Have Just Begun Jerome A. Feldman April, 1994, and deployment. At present, high performance computing is entirely different. Although there have been some commercial factor. A prerequisite for Universal High Performance Computing (UHPC) is convergence

California at Berkeley, University of

73

Evaluating Parameter Sweep Workflows in High Performance Computing*  

E-Print Network (OSTI)

Evaluating Parameter Sweep Workflows in High Performance Computing* Fernando Chirigati1,# , Vítor a large amount of tasks that are submitted to High Performance Computing (HPC) environments. Different, Parameter Sweep, High Performance Computing (HPC) 1. INTRODUCTION1 # Many scientific experiments are based

Paris-Sud XI, Université de
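A parameter sweep in the sense this abstract uses is the Cartesian product of parameter values, with each combination submitted as one independent HPC task. A minimal sketch of generating such a task list (the parameter names and values below are hypothetical, not from the paper):

```python
from itertools import product

# Hypothetical sweep parameters: each combination becomes one independent job.
params = {
    "viscosity": [0.001, 0.01, 0.1],
    "mesh_size": [64, 128],
    "timestep":  [1e-3, 1e-4],
}

# Cartesian product of all value lists, rebuilt as per-task dictionaries.
tasks = [dict(zip(params, combo)) for combo in product(*params.values())]
print(len(tasks))  # 3 * 2 * 2 = 12 independent tasks
```

Because the tasks are independent, a workflow engine can dispatch them to an HPC scheduler in any order, which is what makes sweeps attractive for evaluation studies like this one.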

74

Software Reuse in High Performance Computing Shirley Browne  

E-Print Network (OSTI)

Software Reuse in High Performance Computing Shirley Browne University of Tennessee 107 Ayres Hall high performance computing architectures in the form of distributed memory multiprocessors have and cost of programming applications to run on these machines. Economical use of high performance computing

Dongarra, Jack

75

High Performance Computing at Liberal Arts Colleges - Workshop 3  

E-Print Network (OSTI)

High Performance Computing at Liberal Arts Colleges - Workshop 3. October 27, 2009. Experiences; Acknowledgements: Thanks to.

Barr, Valerie

76

Studying Code Development for High Performance Computing: The HPCS Program  

E-Print Network (OSTI)

Studying Code Development for High Performance Computing: The HPCS Program Jeff Carver1 , Sima at measuring the development time for programs written for high performance computers (HPC). Our goal. Introduction The development of High-Performance Computing (HPC) programs (codes) is crucial to progress

Basili, Victor R.

77

Towards green computing using diskless high performance clusters  

Science Conference Proceedings (OSTI)

In recent years, significant research has been conducted to boost the performance and increase the reliability of high performance computing (HPC) clusters. As the number of compute nodes in modern HPC clusters continues to grow, it is critical to design ... Keywords: Linux, cluster computing and architecture, green computing, performance evaluation

K. Salah; R. Al-Shaikh; M. Sindi

2011-10-01T23:59:59.000Z

78

LinBox and future high performance computer algebra  

Science Conference Proceedings (OSTI)

Computer chip design is entering an era in which further increases in computational power will come by increased on-chip parallelism through multi-core architectures rather than by increasing clock speed. If high performance computer algebra tools are ... Keywords: high performance, multi-core, parallel computation

Bruce W. Char; B. David Saunders; Bryan Youse

2007-07-01T23:59:59.000Z

79

A lightweight, high performance communication protocol for grid computing  

Science Conference Proceedings (OSTI)

This paper describes a lightweight, high-performance communication protocol for the high-bandwidth, high-delay networks typical of computational Grids. One unique feature of this protocol is that it incorporates an extremely accurate classification mechanism ... Keywords: Bayesian analysis, Classification mechanisms, Grid computing, High-performance communication protocols, High-performance networking

Phillip M. Dickens

2010-03-01T23:59:59.000Z

80

Compiler-based Memory Optimizations for High Performance Computing Systems.  

E-Print Network (OSTI)

??Parallelism has always been the primary method to achieve higher performance. To advance the computational capabilities of state-of-the-art high performance computing systems, we continue to (more)

Kultursay, Emre

2013-01-01T23:59:59.000Z

81

A DYNAMICALLY CONFIGURABLE ENVIRONMENT FOR HIGH Performance Computing  

E-Print Network (OSTI)

Current tools available for high performance computing require that all the computing nodes used in a parallel execution be known in advance: the execution environment must know where the different "chunks" of programs will be executed, and each computer involved in the execution must be properly configured. In this paper, we describe how the ) environment may be used to dynamically locate available computers to perform such computations and how these computers are dynamically configured.

Nabil Abdennadher; Gilbert Babin; Peter Kropf; Pierre Kuonen

2000-01-01T23:59:59.000Z

82

Development of a Beowulf-Class High Performance Computing System for Computational Science Applications  

E-Print Network (OSTI)

Using Beowulf cluster computing technology, the Ateneo High Performance Computing Group has developed a high performance computing system consisting of eight compute nodes. Called the AGILA HPCS, this Beowulf cluster computer is designed for computational science applications. In this paper, we present the motivation for the AGILA HPCS and some results on its performance evaluation.

Rafael Saldaña; Jerrold Garcia; Felix Muga II; William Yu

2001-01-01T23:59:59.000Z

83

High Performance Computing (HPC) Central Storage Resources for Research Support  

E-Print Network (OSTI)

High Performance Computing (HPC) Central Storage Resources for Research Support. Effective for FY2011. Revised: March 7, 2011. Information Technology. Purpose: This memo summarizes High Performance ... They also describe new applications and technologies related to research in high performance computing.

Shull, Kenneth R.

84

Applying High Performance Computing to Analyzing by Probabilistic Model Checking  

E-Print Network (OSTI)

Applying High Performance Computing to Analyzing by Probabilistic Model Checking Mobile Cellular ... on the use of high performance computing in order to analyze with the probabilistic model checker PRISM. 1. Introduction: We report in this paper on the use of high performance

Schneider, Carsten

85

The Center for Computational Sciences DOE High Performance Computing Research Center at Oak Ridge National Laboratory  

E-Print Network (OSTI)

The Center for Computational Sciences: DOE High Performance Computing Research Center at Oak Ridge National Laboratory. Outline: CCS.

86

Effect of memory access and caching on high performance computing.  

E-Print Network (OSTI)

??High-performance computing is often limited by memory access. As speeds increase, processors are often waiting on data transfers to and from memory. Classic memory controllers (more)

Groening, James

2012-01-01T23:59:59.000Z

87

Use of high performance computing in neutronics analysis activities  

NLE Websites -- All DOE Office Websites (Extended Search)

Use of high performance computing in neutronics analysis activities. M.A. Smith, Argonne National Laboratory, 9700 South Cass Avenue, Argonne, Illinois 60439, USA. Abstract: Reactor design is...

88

High Performance Computing Systems Integration, HPC-5: HPC: LANL...  

NLE Websites -- All DOE Office Websites (Extended Search)

Fax: 664-0172. MS B272. Latest in cluster technologies: new technology in High Performance Computing and Simulation. HPC-5 provides advanced research, development, testing, and...

89

Application of High Performance Computing to the DOE Joint Genomic...  

NLE Websites -- All DOE Office Websites (Extended Search)

Application of High Performance Computing to the DOE Joint Genomic Institute's Data Challenges January 25-26, 2010 DOE Joint Genome Institute, Walnut Creek, CA USA -by invitation...

90

In the OSTI Collections: High-Performance Computing | OSTI, US...  

Office of Scientific and Technical Information (OSTI)

Sandia National Laboratories in the report "Toward a New Metric for Ranking High Performance Computing Systems" SciTech Connect, which describes a new benchmark that represents...

91

Dependable high performance computing on a parallel sysplex cluster  

E-Print Network (OSTI)

Abstract: In this paper we address the issue of dependable distributed high performance computing in the field of Symbolic Computation. We describe the extension of a middleware infrastructure designed for high performance computing with efficient checkpointing mechanisms. As the target platform, an IBM Parallel Sysplex Cluster is used. We consider the satisfiability checking problem for boolean formulae as an example application from the realm of Symbolic Computation. Time measurements for an implementation of this application on top of the described system environment are given.

Wolfgang Blochinger

2000-01-01T23:59:59.000Z

92

Toward a new metric for ranking high performance computing systems.  

SciTech Connect

The High Performance Linpack (HPL), or Top 500, benchmark [1] is the most widely recognized and discussed metric for ranking high performance computing systems. However, HPL is increasingly unreliable as a true measure of system performance for a growing collection of important science and engineering applications. In this paper we describe a new high performance conjugate gradient (HPCG) benchmark. HPCG is composed of computations and data access patterns more commonly found in applications. Using HPCG we strive for a better correlation to real scientific application performance and expect to drive computer system design and implementation in directions that will better impact performance improvement.

Heroux, Michael Allen; Dongarra, Jack. [University of Tennessee, Knoxville, TN

2013-06-01T23:59:59.000Z
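The core computation in the HPCG benchmark described above is a preconditioned conjugate gradient iteration on a sparse linear system. The real benchmark is far more elaborate (multigrid preconditioning, halo exchanges, a fixed sparse problem); the sketch below is only a minimal unpreconditioned CG, to illustrate the sparse matrix-vector and vector-update access patterns the abstract refers to:

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
    """Minimal unpreconditioned CG for a symmetric positive-definite A."""
    x = np.zeros_like(b)
    r = b - A @ x          # initial residual
    p = r.copy()           # initial search direction
    rs_old = r @ r
    for _ in range(max_iter):
        Ap = A @ p         # the matrix-vector product that dominates HPCG's runtime
        alpha = rs_old / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs_old) * p
        rs_old = rs_new
    return x

# Example: 1-D Poisson matrix (tridiagonal, symmetric positive-definite).
n = 50
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
x = conjugate_gradient(A, b)
print(np.allclose(A @ x, b))  # True
```

Unlike HPL's dense factorization, each CG iteration streams through irregular, memory-bound data, which is why the authors argue it correlates better with real application performance.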

93

MSIM 795/895: HIGH PERFORMANCE COMPUTING AND SIMULATION  

E-Print Network (OSTI)

MSIM 795/895: HIGH PERFORMANCE COMPUTING AND SIMULATION. http://eng.odu.edu/msve. COURSE DESCRIPTION: Introduction to modern high performance computing platforms including top their research area. Project presentations are required. COURSE TOPICS: 1. Overview of high-performance computing

94

Principles of energy efficiency in high performance computing  

Science Conference Proceedings (OSTI)

High Performance Computing (HPC) is a key technology for modern researchers enabling scientific advances through simulation where experiments are either technically impossible or financially not feasible to conduct and theory is not applicable. However, ... Keywords: HPC, PUE, energy efficiency, high performance computing, power usage effectiveness

Axel Auweter; Arndt Bode; Matthias Brehm; Herbert Huber; Dieter Kranzlmüller

2011-08-01T23:59:59.000Z

95

Molecular Dynamics Simulations on High-Performance Reconfigurable Computing Systems  

Science Conference Proceedings (OSTI)

The acceleration of molecular dynamics (MD) simulations using high-performance reconfigurable computing (HPRC) has been much studied. Given the intense competition from multicore and GPUs, there is now a question whether MD on HPRC can be competitive. ... Keywords: FPGA-based coprocessors, application acceleration, bioinformatics, biological sequence alignment, high performance reconfigurable computing

Matt Chiu; Martin C. Herbordt

2010-11-01T23:59:59.000Z

96

High-performance Computing in China: Research and Applications  

Science Conference Proceedings (OSTI)

In this report we review the history of high-performance computing (HPC) system development and applications in China and describe the current status of major government programs, HPC centers and facilities, major research institutions, important HPC ... Keywords: China, High performance computing, research and applications

Ninghui Sun; David Kahaner; Debbie Chen

2010-11-01T23:59:59.000Z

97

High Performance Computational Biology: A Distributed computing Perspective (2010 JGI/ANL HPC Workshop)  

SciTech Connect

David Konerding from Google, Inc. gives a presentation on "High Performance Computational Biology: A Distributed Computing Perspective" at the JGI/Argonne HPC Workshop on January 26, 2010.

Konerding, David [Google, Inc

2010-01-26T23:59:59.000Z

98

On the user-scheduler relationship in high-performance computing  

E-Print Network (OSTI)

1.1. High-Performance Computing. 1.2. Problem ... Journal of High Performance Computing Applications, 19(4). IEEE Conference on High Performance Computing, Networking,

Lee, Cynthia Bailey

2009-01-01T23:59:59.000Z

99

Energy-aware high performance computing: a taxonomy study, in  

E-Print Network (OSTI)

Abstract: To reduce energy consumption and build a sustainable computing infrastructure has now become a major goal of the high-performance community. A number of research projects have been carried out in the field of energy-aware high performance computing. This paper is devoted to categorizing energy-aware computing methods for high-end computing infrastructures, such as servers, clusters, data centers, and Grids/Clouds. Based on a taxonomy of methods and system scales, this paper reviews the current status of energy-aware HPC research and summarizes open questions and research directions of software architecture for future energy-aware HPC studies.

Chang Cai; Lizhe Wang; Samee U. Khan; Jie Tao

2011-01-01T23:59:59.000Z

100

Commodity High Performance Computing at Commodity Prices  

E-Print Network (OSTI)

The entry price of supercomputing has traditionally been very high. As processing elements, operating systems, and switch technology become cheap commodity parts, building a powerful supercomputer at a fraction of the price of a proprietary system becomes realistic.

Simon J. Cox; Denis A. Nicole; Kenji Takeda

1998-01-01T23:59:59.000Z

101

Simulation and High-Performance Computing | Department of Energy  

Energy.gov (U.S. Department of Energy (DOE)) Indexed Site

Simulation and High-Performance Computing. October 29, 2010 - 12:22pm. Former Under Secretary Koonin, Director of NYU's Center for Urban Science & Progress and Former Under Secretary for Science. What are the key facts? China's Tianhe-1A machine is now the world's most powerful computer, 40% faster than the fastest American machine, located at Oak Ridge National Laboratory. Of the top 500 supercomputers in the world, more than half are in the U.S., and 90% were built by U.S. hardware vendors. We are developing the next generation of supercomputers over the next decade, which will be capable of exaflop-class performance (a factor of 1000 more powerful than today's most powerful computers).

102

Little's law and high performance computing  

E-Print Network (OSTI)

This note discusses Little's law and relates the form cited in queuing theory with a form often cited in the field of high performance computing. A rigorous mathematical proof of Little's law is included.

David H. Bailey

1997-01-01T23:59:59.000Z
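The two forms Bailey relates can be checked numerically: the queueing-theory form L = λW (average number in the system equals arrival rate times average time in the system) and the HPC form, where required concurrency equals bandwidth times latency. A minimal sketch with illustrative numbers (not from the note itself):

```python
# Little's law: L = lambda * W.  In HPC terms, the average amount of work
# "in flight" equals sustained bandwidth multiplied by latency.

def littles_law_concurrency(bandwidth_ops_per_s, latency_s):
    """Average number of operations (or bytes) in flight needed to sustain
    the given bandwidth over a path with the given latency."""
    return bandwidth_ops_per_s * latency_s

# Example: sustaining 10 GB/s over a link with 100 microseconds of latency
# requires an average of 10e9 B/s * 100e-6 s = 1 MB of data in flight.
in_flight = littles_law_concurrency(10e9, 100e-6)
print(in_flight)  # 1000000.0
```

The same identity is why deep memory and network pipelines need many outstanding requests: as latency grows at fixed bandwidth, the required concurrency grows proportionally.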

103

Los Alamos Lab: High-Performance Computing: Cielo supercomputer  

NLE Websites -- All DOE Office Websites (Extended Search)

ADTSC: High Performance Computing, HPC Home About Us Cielo CCC5 Call CCC Template (.doc) ASC Program Links ASC Headquarters ASC LANL ASC LLNL ASC Sandia Cielo Project Contacts...

104

Los Alamos Lab: High-Performance Computing: Roadrunner  

NLE Websites -- All DOE Office Websites (Extended Search)

LANL has always been an early adopter of transformational high performance computing (HPC) technology. For example, in the 1970s when HPC was scalar; LANL...

105

Humanities and High Performance Computers Connect at NERSC -...  

NLE Websites -- All DOE Office Websites (Extended Search)

or translated into English, which may have been influenced by the classics. "High performance computing really allows us to ask questions on a scale that we haven't been able to...

106

Los Alamos Lab: High-Performance Computing: Roadrunner  

NLE Websites -- All DOE Office Websites (Extended Search)

High-Performance Computing, HPC. Home HPC-1 HPC-2 HPC-3 HPC-5 Contacts. HPC Division Leader Gary Grider (acting), Deputy Division Leader Randal Rheinheimer (acting), Chief of Staff Angelina Trujillo. HPC Division Office, MS B260, Los Alamos, NM 87545, (505) 667-6164. LUNA Joins LANL Supercomputers: The Appro Xtreme-X(tm) Supercomputer was selected as part of the second Tri-Lab Linux Capacity Cluster (TLCC2) program to add 6 petaFlop/s to the computer power of the Department of Energy National Nuclear Security Administration (NNSA). The Los Alamos computer is named Luna (moon) in keeping with its predecessor Cielo (sky), honoring New Mexico's Spanish heritage. These supercomputers are in use by the three national laboratories in NNSA's Advanced Simulation and Computing (ASC) program: Lawrence Livermore (LLNL), Los Alamos (LANL) and Sandia (SNL).

107

NWChem Delivering High-Performance Computational Chemistry to Science  

NLE Websites -- All DOE Office Websites (Extended Search)

NWChem: Delivering High-Performance Computational Chemistry to Science. Scientific innovation through integration. www.nwchem-sw.org www.emsl.pnl.gov. NWChem - High-Performance Computational Chemistry; EMSL - Environmental Molecular Sciences Laboratory. NWChem software: » Biomolecules, nanostructures, and solid state » From quantum to classical, and all combinations » Gaussian functions or plane-waves » Scaling from one to thousands of processors » Properties and relativity » Open source. NWChem Introduction: NWChem is cutting-edge software that offers an extensive array of highly scalable, parallel computational chemistry methods needed to address a wide range of large, challenging scientific questions. As one of the U.S. Department of Energy's premier computational chemistry tools, NWChem is

108

Software Tools for High Performance Computing: Survey and Recommendations  

E-Print Network (OSTI)

Applications programming for High Performance Computing is notoriously difficult. Although Parallel Programming is intrinsically complex, the principal reason why High Performance Computing is difficult is the lack of effective software tools. We believe that the lack of tools in turn is largely due to market forces rather than our inability to design and build such tools. Unfortunately, the poor availability and utilization of parallel tools hurts the entire supercomputing industry and the US High Performance Computing initiative, which is focused on applications. A disproportionate amount of resources are being spent on faster hardware and architectures, while tools are being neglected. This paper introduces a taxonomy of tools, analyzes the major factors that contribute to this situation, and suggests ways that the imbalance could be redressed and the likely evolution of tools. (Received November 1994; revised October 1995.) Many attendees at the May 1993 Workshop on Parallel Compu...

Bill Appelbe; Donna Bergmark (eds.)

1996-01-01T23:59:59.000Z

109

Argonne TTRDC - Feature - High-Performance Computing Enables Huge Leap  

NLE Websites -- All DOE Office Websites (Extended Search)

High-Performance Computing Enables Huge Leap Forward in Engine Development. Engine Modeling and Simulation Team: From left, Argonne researchers Raymond Bair, Doug Longman, Qingluan Xue, Marta Garcia, Shashi Aithal (seated) and Sibendu Som are part of a multidisciplinary team working to advance diesel and spark engine modeling and simulation tools into the high-performance computing realm. When we turn the key in our car's ignition, we usually don't think about the combustion process that takes place inside the engine that enables the car to go. We just know that it works. But, what actually takes place inside the engine? How do fuel injectors, turbulent mixing and combustion chemistry impact the fuel efficiency of a vehicle? And how do engine manufacturers improve these hidden-from-view

110

A review of High Performance Computing foundations for scientists  

E-Print Network (OSTI)

The increase of existing computational capabilities has made simulation emerge as a third discipline of Science, lying midway between experimental and purely theoretical branches [1, 2]. Simulation enables the evaluation of quantities which otherwise would not be accessible, helps to improve experiments and provides new insights on systems which are analysed [3-6]. Knowing the fundamentals of computation can be very useful for scientists, for it can help them to improve the performance of their theoretical models and simulations. This review includes some technical essentials that can be useful to this end, and it is devised as a complement for researchers whose education is focused on scientific issues and not on technological respects. In this document we attempt to discuss the fundamentals of High Performance Computing (HPC) [7] in a way which is easy to understand without much previous background. We sketch the way standard computers and supercomputers work, as well as discuss distributed computing and di...

García-Risueño, Pablo; Ibáñez, Pablo E.

2012-01-01T23:59:59.000Z

111

June 8, 2007 Advanced Fault Tolerance Solutions for High Performance Computing  

E-Print Network (OSTI)

June 8, 2007. Advanced Fault Tolerance Solutions for High Performance Computing. Workshop on Trends, Technologies and Collaborative Opportunities in High Performance Computing. Christian Engelmann, Oak Ridge National Laboratory, Oak Ridge.

Engelmann, Christian

112

Toward Codesign in High Performance Computing Systems - 06386705...  

NLE Websites -- All DOE Office Websites (Extended Search)

s for this work. 7. REFERENCES. [1] J. Ang et al. High Performance Computing: From Grids and Clouds to Exascale, chapter Exascale Compu...

113

High-Performance Computing and Visualization of Unsteady Turbulent Flows  

Science Conference Proceedings (OSTI)

The history of high-performance computing in turbulent flows is reviewed and their recent topics in industrial use are addressed. Special attention is paid to the validity of the method in flow visualization, and three-dimensional unsteady simulation ... Keywords: CAE, DNS, HPC, LES, turbulence

T. Kobayashi; M. Tsubokura; N. Oshima

2008-01-01T23:59:59.000Z

114

Challenges for high-performance networking for exascale computing.  

Science Conference Proceedings (OSTI)

Achieving the next three orders of magnitude performance increase to move from petascale to exascale computing will require a significant advancements in several fundamental areas. Recent studies have outlined many of the challenges in hardware and software that will be needed. In this paper, we examine these challenges with respect to high-performance networking. We describe the repercussions of anticipated changes to computing and networking hardware and discuss the impact that alternative parallel programming models will have on the network software stack. We also present some ideas on possible approaches that address some of these challenges.

Barrett, Brian W.; Hemmert, K. Scott; Underwood, Keith Douglas (Intel Corporation, Hillsboro, OR); Brightwell, Ronald Brian

2010-05-01T23:59:59.000Z

115

May 28, 2007 Middleware in Modern High Performance Computing System Architectures 1/20 Middleware in Modern High Performance  

E-Print Network (OSTI)

May 28, 2007. Middleware in Modern High Performance Computing System Architectures. Christian Engelmann, Hong Ong, Stephen L. Talk Outline

Engelmann, Christian

116

Application of High Performance Computing for Automotive Design and Manufacturing  

DOE Green Energy (OSTI)

This project developed new computer simulation tools which can be used in DOE internal combustion engine and weapons simulation programs currently being developed. Entirely new massively parallel computer modeling codes for chemically reactive and incompressible fluid mechanics with interactive physics sub-models were developed. Chemically reactive and aerodynamic flows are central parts of many DOE systems. Advanced computer modeling codes with new chemistry and physics capabilities can be used on massively parallel computers to handle more complex problems associated with chemically reactive propulsion systems, energy efficiency, enhanced performance and durability, multi-fuel capability, and reduced pollutant emissions. The work for this project is also relevant to the design, development, and application of advanced user-friendly computer codes for new high-performance computing platforms for manufacturing, which will also impact and interact with the U.S.'s advanced communications program. Finite element method (FEM) formulations were developed that are directly usable in simulating rapid deformation resulting from collision, impact, projectiles, etc. This simulation capability is applicable to both DOE (e.g., surety and penetration) and DoD (e.g., armor) applications. Models of plate and shell composite structures were developed for simulation of glass continuous strand mat and braided composites in a thermoset polymer matrix. The developed numerical tools are based upon the fundamental mechanisms responsible for damage evolution in continuous-fiber organic-matrix composites. This class of materials is especially relevant because of its high strength-to-mass ratio, anisotropic behavior, and general application in most transportation and weapon delivery systems. The high-performance computational tools developed are generally applicable to a broad spectrum of materials with similar fiber structures.

Zacharia, T.

1999-04-01T23:59:59.000Z

117

Editorial for Advanced Theory and Practice for High Performance Computing and Communications Geoffrey Fox  

E-Print Network (OSTI)

Editorial for Advanced Theory and Practice for High Performance Computing and Communications. I would like to thank Omer Rana. International Conference on High Performance Computing and Communications (HPCC-09). http

118

June 4, 2007 Advanced Fault Tolerance Solutions for High Performance Computing  

E-Print Network (OSTI)

June 4, 2007. Advanced Fault Tolerance Solutions for High Performance Computing. Workshop on Trends, Technologies and Collaborative Opportunities in High Performance Computing. Christian Engelmann, Oak Ridge National Laboratory, Oak Ridge.

Engelmann, Christian

119

A directory service for configuring high-performance distributed computations  

Science Conference Proceedings (OSTI)

High-performance execution in distributed computing environments often requires careful selection and configuration not only of computers, networks, and other resources but also of the protocols and algorithms used by applications. Selection and configuration in turn require access to accurate, up-to-date information on the structure and state of available resources. Unfortunately, no standard mechanism exists for organizing or accessing such information. Consequently, different tools and applications adopt ad hoc mechanisms, or they compromise their portability and performance by using default configurations. We propose a Metacomputing Directory Service that provides efficient and scalable access to diverse, dynamic, and distributed information about resource structure and state. We define an extensible data model to represent required information and present a scalable, high-performance, distributed implementation. The data representation and application programming interface are adopted from the Lightweight Directory Access Protocol; the data model and implementation are new. We use the Globus distributed computing toolkit to illustrate how this directory service enables the development of more flexible and efficient distributed computing services and applications.

Fitzgerald, S.; Kesselman, C. [Univ. of Southern California, Marina del Rey, CA (United States). Information Sciences Institute; Foster, I. [Argonne National Lab., IL (United States)] [and others

1997-08-01T23:59:59.000Z

120

GOCE DATA ANALYSIS: REALIZATION OF THE INVARIANTS APPROACH IN A HIGH PERFORMANCE COMPUTING ENVIRONMENT  

E-Print Network (OSTI)

GOCE DATA ANALYSIS: REALIZATION OF THE INVARIANTS APPROACH IN A HIGH PERFORMANCE COMPUTING) implementation of the algorithms on high performance computing platforms. 2. INVARIANTS REPRESENTATION

Stuttgart, Universität

121

Energy based performance tuning for large scale high performance computing systems  

Science Conference Proceedings (OSTI)

Recognition of the importance of power in the field of High Performance Computing, whether it be as an obstacle, expense or design consideration, has never been greater or more pervasive. In response to this challenge, we exploit the unique power measurement ... Keywords: energy efficiency, frequency scaling, high performance computing (HPC), power

James H. Laros, III; Kevin T. Pedretti; Suzanne M. Kelly; Wei Shu; Courtenay T. Vaughan

2012-03-01T23:59:59.000Z

122

High Performance Computing for Wavelet and Wavelet Packet Image Coding  

E-Print Network (OSTI)

The use of high performance computers for wavelet and wavelet packet based image coding is discussed. After a short description of wavelet and wavelet packet methods, the existing literature concerning vector, parallel and VLSI wavelet transforms is reviewed. In the following, an algorithm for wavelet packet best basis selection on moderate parallel MIMD architectures is introduced and an implementation on a workstation cluster is presented.

Andreas Uhl

1994-01-01T23:59:59.000Z
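As a toy illustration of the transform underlying such coders, the sketch below performs one level of the Haar wavelet decomposition (averages and details) and its inverse. It is a minimal serial example, not the parallel MIMD best-basis algorithm of the paper.

```python
import math

def haar_step(signal):
    """One level of the orthonormal Haar transform: pairwise averages (a)
    and details (d), each scaled by 1/sqrt(2). Length must be even."""
    a = [(signal[2*i] + signal[2*i+1]) / math.sqrt(2) for i in range(len(signal)//2)]
    d = [(signal[2*i] - signal[2*i+1]) / math.sqrt(2) for i in range(len(signal)//2)]
    return a, d

def haar_inverse(a, d):
    """Invert one Haar level, recovering the original samples."""
    out = []
    for ai, di in zip(a, d):
        out.append((ai + di) / math.sqrt(2))
        out.append((ai - di) / math.sqrt(2))
    return out
```

A wavelet packet coder applies `haar_step` recursively to both the averages and the details, then selects the "best basis" by minimizing a cost function over the resulting tree.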

123

High Performance Computing: Needs and Application for Thailand  

E-Print Network (OSTI)

This paper presents an overview of High Performance Computing, its required components, and its applications. The current state of HPC research and facilities in Thailand is also reviewed, along with HPC-related research conducted at Kasetsart University. In summary, HPC is a technology that has an impact on Thailand's competitiveness. Yet many more qualified personnel, and broader recognition of the field, are desperately needed.

Yuen Poovarawan; Putchong Uthayopas

1997-01-01T23:59:59.000Z

124

The role of interpreters in high performance computing  

SciTech Connect

Compiled code is fast; interpreted code is slow. There is not much we can do about it, and it is the reason why the use of interpreters in high performance computing is usually restricted to job submission. We show where interpreters make sense even in the context of analysis code, and what aspects have to be taken into account to make this combination a success.

Naumann, Axel; /CERN; Canal, Philippe; /Fermilab

2008-01-01T23:59:59.000Z

125

COMPUTATIONAL STEERING: TOWARDS ADVANCED INTERACTIVE HIGH PERFORMANCE COMPUTING IN ENGINEERING SCIENCES  

E-Print Network (OSTI)

Keywords: computational steering, high-performance computing, interactive simulation, virtual reality, CFD. Computational Science and Engineering faces a continuous increase in the speed of computers and the availability of very fast networks. Yet it seems that some opportunities offered by these ongoing developments are used only to a fraction for numerical simulation. Moreover, despite new possibilities from computer visualization, virtual or augmented reality and collaboration models, most available engineering software still follows the classical way of a strict separation of preprocessing, computing and postprocessing. This paper will first identify some of the major obstructions to interactive computation for complex simulation tasks in engineering sciences. These are especially found in traditional software structures, in the definition of geometric models and boundary conditions, and in the often still very tedious work of generating computational meshes. It then presents a generic approach for collaborative computational steering, where pre- and postprocessing is integrated with high

Ernst Rank; André Borrmann; Alexander Düster; Christoph van Treeck; Petra Wenisch

2008-01-01T23:59:59.000Z

126

SBAC-PAD'2000 12th Symposium on Computer Architecture and High Performance Computing - São Pedro - SP 83

E-Print Network (OSTI)

SBAC-PAD'2000 12th Symposium on Computer Architecture and High Performance Computing - São Pedro - SP ... -hard optimization problems

Cruz, Frederico

127

High Performance Computing Update, June 2009 1. A meeting was held with users and potential users of high performance computing systems in April and this  

E-Print Network (OSTI)

High Performance Computing Update, June 2009 1. A meeting was held with users and potential users of high performance computing systems in April and this considered a proposal from the Director and application "advice" and a core system to host and manage high performance computing nodes (or clusters

Sussex, University of

128

MB++: An Integrated Architecture for Pervasive Computing and High-Performance Computing  

E-Print Network (OSTI)

utilization of high-performance computing resources. A comprehensive solution requires not only facilities for man- aging data transport, but also support for managing and in- stantiating computation automatically of applications that MB++ is de- signed to support: A metropolitan-area emergency response infrastructure may have

Ramachandran, Umakishore

129

The failure of TCP in high-performance computational grids  

Science Conference Proceedings (OSTI)

Distributed computational grids depend on TCP to ensure reliable end-to-end communication between nodes across the wide-area network (WAN). Unfortunately, TCP performance can be abysmal even when buffers on the end hosts are manually optimized. Recent ... Keywords: network traffic characterization, self-similarity,TCP, computational grid, distributed computing

W. Feng; P. Tinnakornsrisuphap

2000-11-01T23:59:59.000Z

130

Use of high performance computing resources for underwater acoustic modeling.  

Science Conference Proceedings (OSTI)

The majority of standard underwater propagation models provide a two-dimensional (range and depth) acoustic field for a single-frequency point source. Computational resource demand increases considerably when the three-dimensional acoustic field of a broad-band, spatially extended source is of interest. An upgrade of the standard parabolic equation model RAM for use in a high-performance computing (HPC) environment is discussed. A benchmarked upgraded version of RAM is used in the Louisiana Optical Network Initiative HPC environment to model the three-dimensional acoustic field of a seismic airgun array. Four-dimensional visualization (time and space) of the generated data volume is also addressed. [Research supported by the Louisiana Optical Network Initiative]

Anca M. Niculescu; Natalia A. Sidorovskaia; Peter Achi; Arslan M. Tashmukhambetov; George E. Ioup; Juliette W. Ioup

2009-01-01T23:59:59.000Z

131

Energy Efficiency Opportunities in Federal High Performance Computing Data Centers  

Energy.gov (U.S. Department of Energy (DOE)) Indexed Site

Energy Efficiency Opportunities in Federal High Performance Computing Data Centers. Prepared for the U.S. Department of Energy Federal Energy Management Program by Lawrence Berkeley National Laboratory. Rod Mahdavi, P.E., LEED A.P. September 2013. Contact: Rod Mahdavi, Lawrence Berkeley National Laboratory, (510) 495-2259, rmahdavi@lbl.gov. For more information on FEMP: Will Lintner, P.E., Federal Energy Management Program, U.S. Department of Energy, (202) 586-3120, william.lintner@ee.doe.gov

132

Participation by Columbia Researchers in Shared Central High Performance Computing (HPC) Resources  

E-Print Network (OSTI)

to create a shared central high performance computing (HPC) cluster. Participation by Columbia Researchers in Shared Central High Performance Computing (HPC) Resources: Shared Research Computing Policy Advisory Committee (SRCPAC) Chair, Professor

Champagne, Frances A.

133

High Performance Computing Systems and Applications edited by Nikitas J. Dimopoulos; Dept. of Electrical and Computer Engineering, University of  

E-Print Network (OSTI)

High Performance Computing Systems and Applications edited by Nikitas J. Dimopoulos; Dept AND COMPUTER SCIENCE 657 November 2001 Hardbound 544 pp. ISBN 0-7923-7617-X High Performance Computing Systems on High Performance Computing Systems and Applications held in Victoria, Canada, in June 2000. This book

Baranoski, Gladimir V. G.

134

The future of high performance computers in science and engineering  

Science Conference Proceedings (OSTI)

A vast array of new, highly parallel machines is opening up opportunities for new applications and new ways of computing.

C. Gordon Bell

1989-09-01T23:59:59.000Z

135

High-performance Computing Applied to Semantic Databases  

SciTech Connect

To-date, the application of high-performance computing resources to Semantic Web data has largely focused on commodity hardware and distributed memory platforms. In this paper we make the case that more specialized hardware can offer superior scaling and close to an order of magnitude improvement in performance. In particular we examine the Cray XMT. Its key characteristics, a large, global shared-memory, and processors with a memory-latency tolerant design, offer an environment conducive to programming for the Semantic Web and have engendered results that far surpass current state of the art. We examine three fundamental pieces requisite for a fully functioning semantic database: dictionary encoding, RDFS inference, and query processing. We show scaling up to 512 processors (the largest configuration we had available), and the ability to process 20 billion triples completely in-memory.

Goodman, Eric L.; Jimenez, Edward; Mizell, David W.; al-Saffar, Sinan; Adolf, Robert D.; Haglin, David J.

2011-06-02T23:59:59.000Z
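The first of the three pieces named above, dictionary encoding, can be sketched in a few lines: each distinct RDF term is mapped to a compact integer so that triples can be stored and joined as integer tuples. This is a generic illustration, not the Cray XMT implementation the paper describes.

```python
class Dictionary:
    """Dictionary encoding: map each RDF term (a string) to a compact
    integer id, and keep the reverse table for decoding query results."""
    def __init__(self):
        self.to_id = {}     # term -> id
        self.to_term = []   # id -> term

    def encode(self, term):
        if term not in self.to_id:
            self.to_id[term] = len(self.to_term)
            self.to_term.append(term)
        return self.to_id[term]

    def decode(self, tid):
        return self.to_term[tid]

# Encode one triple; terms here are illustrative CURIEs.
d = Dictionary()
triple = ("ex:alice", "foaf:knows", "ex:bob")
encoded = tuple(d.encode(t) for t in triple)
```

On billions of triples, replacing variable-length strings by fixed-width integers is what makes in-memory inference and query processing feasible; building the table in parallel is the hard part the paper addresses.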

136

High-performance computing applied to semantic databases.  

Science Conference Proceedings (OSTI)

To-date, the application of high-performance computing resources to Semantic Web data has largely focused on commodity hardware and distributed memory platforms. In this paper we make the case that more specialized hardware can offer superior scaling and close to an order of magnitude improvement in performance. In particular we examine the Cray XMT. Its key characteristics, a large, global shared-memory, and processors with a memory-latency tolerant design, offer an environment conducive to programming for the Semantic Web and have engendered results that far surpass current state of the art. We examine three fundamental pieces requisite for a fully functioning semantic database: dictionary encoding, RDFS inference, and query processing. We show scaling up to 512 processors (the largest configuration we had available), and the ability to process 20 billion triples completely in-memory.

al-Saffar, Sinan (Pacific Northwest National Laboratory, Richland, WA); Jimenez, Edward Steven, Jr.; Adolf, Robert (Pacific Northwest National Laboratory, Richland, WA); Haglin, David (Pacific Northwest National Laboratory, Richland, WA); Goodman, Eric L.; Mizell, David (Cray, Inc., Seattle, WA)

2010-12-01T23:59:59.000Z

137

Algorithmic Based Fault Tolerance Applied to High Performance Computing  

E-Print Network (OSTI)

We present a new approach to fault tolerance for High Performance Computing systems. Our approach is based on a careful adaptation of the Algorithmic Based Fault Tolerance technique (Huang and Abraham, 1984) to the needs of parallel distributed computation. We obtain a strongly scalable mechanism for fault tolerance. We can also detect and correct errors (bit flips) on the fly during a computation. To assess the viability of our approach, we have developed a fault-tolerant matrix-matrix multiplication subroutine, and we propose some models to predict its running time. Our parallel fault-tolerant matrix-matrix multiplication scores 1.4 TFLOPS on 484 processors (cluster jacquard.nersc.gov) and returns a correct result while one process failure has happened. This represents 65% of the machine peak efficiency and less than 12% overhead with respect to the fastest failure-free implementation. We predict (and have observed) that, as we increase the processor count, the overhead of the fault tolerance drops significantly.

Bosilca, George; Dongarra, Jack; Langou, Julien

2008-01-01T23:59:59.000Z
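The checksum idea behind Huang and Abraham's technique can be sketched for a serial matrix product: augment A with a column-checksum row and B with a row-checksum column, so the product carries checksums that locate and repair a single corrupted entry. This is a minimal illustration, not the authors' parallel TFLOPS-scale implementation.

```python
def matmul(A, B):
    """Plain triple-loop matrix product."""
    n, k, m = len(A), len(B), len(B[0])
    return [[sum(A[i][p] * B[p][j] for p in range(k)) for j in range(m)]
            for i in range(n)]

def with_col_checksum(A):
    """Append a row holding the sum of each column of A."""
    return A + [[sum(col) for col in zip(*A)]]

def with_row_checksum(B):
    """Append to each row of B the sum of that row."""
    return [row + [sum(row)] for row in B]

def detect_and_correct(C):
    """In the checksummed product C, a single corrupted entry makes exactly
    one row checksum and one column checksum disagree; their intersection
    locates the error, and the row checksum repairs it."""
    n, m = len(C) - 1, len(C[0]) - 1
    bad_rows = [i for i in range(n) if abs(sum(C[i][:m]) - C[i][m]) > 1e-9]
    bad_cols = [j for j in range(m)
                if abs(sum(C[i][j] for i in range(n)) - C[n][j]) > 1e-9]
    if bad_rows and bad_cols:
        i, j = bad_rows[0], bad_cols[0]
        C[i][j] = C[i][m] - sum(C[i][k] for k in range(m) if k != j)
    return [row[:m] for row in C[:n]]   # strip the checksums

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
C = matmul(with_col_checksum(A), with_row_checksum(B))
C[0][0] += 5                      # inject a single bit-flip-style error
result = detect_and_correct(C)    # recovers the true product
```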

138

High Performance Computing (HPC) Survey 1. Choose the category that best describes you  

E-Print Network (OSTI)

High Performance Computing (HPC) Survey 1. Choose the category that best describes you Response on your (home or work) computer to access the High Performance Computing Facilities (HPC) - (tick all High Performance Computing (HPC) Facilities? Response Percent Response Count Daily 26.7% 23 Weekly 27

Martin, Stephen John

139

Event Services for High Performance Computing Greg Eisenhauer Fabian E. Bustamante Karsten Schwan  

E-Print Network (OSTI)

Event Services for High Performance Computing. Greg Eisenhauer, Fabián E. Bustamante, Karsten Schwan (fabianb, schwang@cc.gatech.edu). Abstract: The Internet and the Grid are changing the face of high performance computing. Rather than tightly ... computing has been a strong focus of research in high performance computing. This has resulted

Kuzmanovic, Aleksandar

140

High Performance Computing in the Life/Medical SciencesVirginiaBioinformaticsInstitute  

E-Print Network (OSTI)

High Performance Computing in the Life/Medical SciencesVirginiaBioinformaticsInstitute 2 week in high performance computing and data intensive computing 4. basic knowledge of relational databases (i Program Dates: July 17 - 25 Application deadline is March 30, 2012 High Performance Computing in the Life

Virginia Tech



141

The Gateway Computational Web Portal: Developing Web Services for High Performance Computing  

E-Print Network (OSTI)

We describe the Gateway computational web portal, which follows a traditional three-tiered approach to portal design. Gateway provides a simplified, ubiquitously available user interface to high performance computing and related resources. This approach, while successful for straightforward applications, has limitations that make it difficult to support loosely federated, interoperable web portal systems. We examine the emerging standards in the so-called web services approach to business-to-business electronic commerce for possible solutions to these shortcomings and outline topics of research in the emerging area of computational grid web services.

Marlon Pierce; Choonhan Youn; Geoffrey Fox

2002-01-01T23:59:59.000Z

142

Graph 500 Performance on a Distributed-Memory Cluster REU Site: Interdisciplinary Program in High Performance Computing  

E-Print Network (OSTI)

traditional performance benchmarks for high-performance computers measure the speed of arithmetic operations ... benchmark is intended to rank high-performance computers based on speed of memory retrieval ... cluster tara in the UMBC High Performance Computing Facility (www.umbc.edu/hpcf). The cluster tara has 82

Gobbert, Matthias K.
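The kernel that Graph 500 times is breadth-first search, with performance reported in traversed edges per second (TEPS) rather than FLOPS. A minimal serial sketch of that kernel follows; the benchmark itself uses a specified Kronecker graph generator and a validation step, both omitted here.

```python
from collections import deque

def bfs_teps_kernel(adj, root):
    """BFS from `root` over adjacency lists; returns the parent map and
    the number of edges traversed (the quantity Graph 500 divides by
    runtime to report TEPS)."""
    parent = {root: root}
    edges = 0
    q = deque([root])
    while q:
        u = q.popleft()
        for v in adj[u]:
            edges += 1                 # every examined edge counts
            if v not in parent:
                parent[v] = u
                q.append(v)
    return parent, edges

# Tiny undirected example graph (each edge listed in both directions).
adj = {0: [1, 2], 1: [0, 3], 2: [0], 3: [1]}
parent, edges = bfs_teps_kernel(adj, 0)
```

Because the kernel is dominated by irregular pointer chasing rather than arithmetic, it stresses memory and network latency, which is precisely why it complements LINPACK-style rankings.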

143

Time Division Multiplexing of Network Access by Security Groups in High Performance Computing Environments.  

E-Print Network (OSTI)

It is commonly known that High Performance Computing (HPC) systems are most frequently used by multiple users for batch job, parallel computations. Less well known, ...

Ferguson, Joshua

2013-01-01T23:59:59.000Z

144

Investigating the Mobility of Light Autonomous Tracked Vehicles Using a High Performance Computing  

E-Print Network (OSTI)

Investigating the Mobility of Light Autonomous Tracked Vehicles Using a High Performance Computing limiting the scope and impact of high performance computing (HPC). This scenario is rapidly changing due

145

Welcome to the EK131 Module entitled High Performance Computing: Bringing Ideas to Life. Basic Information  

E-Print Network (OSTI)

Welcome to the EK131 Module entitled High Performance Computing: Bringing Ideas to Life. Basic and for some selected activities. Most of the time, we will meet in the High Performance Computing Lab (PHO207

Goldberg, Bennett

146

THE FAILURE OF TCP IN HIGH-PERFORMANCE COMPUTATIONAL GRIDS  

Science Conference Proceedings (OSTI)

Distributed computational grids depend on TCP to ensure reliable end-to-end communication between nodes across the wide-area network (WAN). Unfortunately, TCP performance can be abysmal even when buffers on the end hosts are manually optimized. Recent studies blame the self-similar nature of aggregate network traffic for TCP's poor performance because such traffic is not readily amenable to statistical multiplexing in the Internet, and hence computational grids. In this paper we identify a source of self-similarity previously ignored, a source that is readily controllable: TCP itself. Via an experimental study, we examine the effects of the TCP stack on network traffic using different implementations of TCP. We show that even when aggregate application traffic ought to smooth out as more applications' traffic is multiplexed, TCP induces burstiness into the aggregate traffic load, thus adversely impacting network performance. Furthermore, our results indicate that TCP performance will worsen as WAN speeds continue to increase.

W. FENG; ET AL

2000-08-01T23:59:59.000Z
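The manual end-host tuning the abstract refers to usually means enlarging TCP's send and receive buffers toward the bandwidth-delay product (BDP) of the path. A sketch with assumed link numbers follows; note the kernel may clamp or adjust the requested size, so the effective value must be read back.

```python
import socket

# Assumed path characteristics (illustrative, not from the paper):
bandwidth_bps = 1_000_000_000            # 1 Gb/s WAN link
rtt_s = 0.05                             # 50 ms round-trip time
bdp_bytes = int(bandwidth_bps / 8 * rtt_s)   # bytes "in flight" at full rate

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# Request buffers sized to the BDP so the congestion window is not
# artificially capped by the socket buffer.
s.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, bdp_bytes)
s.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, bdp_bytes)
# The OS may round, double (Linux), or clamp the request:
effective = s.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)
s.close()
```

The paper's point is that even with this tuning done correctly, TCP-induced burstiness can still degrade aggregate throughput on grid WANs.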

147

NREL: News - NREL Selects Partners for New High Performance Computer...  

NLE Websites -- All DOE Office Websites (Extended Search)

cooling to accelerate innovation in more efficient use of energy critical for achieving exascale performance by end of the decade," Stephen Wheat, general manager of High...

148

Measuring Productivity on High Performance Computers Marvin Zelkowitz1,2  

E-Print Network (OSTI)

Measuring Productivity on High Performance Computers Marvin Zelkowitz1,2 Victor Basili1,2 Sima, lorin, hollings, nakamura}@cs.umd.edu Abstract In the high performance computing domain, the speed of concern to high performance computing developers. In this paper we will discuss the problems of defining

Basili, Victor R.

149

UNIVERSITY OF SOUTHERN CALIFORNIA CSCI 653 (High Performance Computing and Simulations) : Fall 2013  

E-Print Network (OSTI)

UNIVERSITY OF SOUTHERN CALIFORNIA CSCI 653 (High Performance Computing and Simulations) : Fall 2013 Performance Computing and Simulations). My PhD work is in the area of resiliency for future Exascale High. 2 DESCRIPTION OF AN HPCS APPLICATION Simulation of Large Scale High Performance Computing System

Southern California, University of

150

1st International Workshop on High Performance Computing, Networking and Analytics for the Power Grid  

E-Print Network (OSTI)

1st International Workshop on High Performance Computing, Networking and Analytics for the Power Grid: "... Transient Stability"; "Developing a Dynamic Model of Cascading Failure for High Performance Computing" (University of Vermont).

151

PRESENT AND FUTURE OF HIGH PERFORMANCE COMPUTING Trends, Challenges, and Opportunities  

E-Print Network (OSTI)

PRESENT AND FUTURE OF HIGH PERFORMANCE COMPUTING Trends, Challenges, and Opportunities November 17 laboratories of the HPC facilities and resources. First, the EPFL high performance computing facilities of Modeling and Simulation through High Performance Computing. Leading research activities of various groups

Ceragioli, Francesca

152

10 January 2009 PHOTOGRAMMETRIC ENGINEERING & REMOTE SENSING High Performance Computing in Remote Sensing  

E-Print Network (OSTI)

10 January 2009, PHOTOGRAMMETRIC ENGINEERING & REMOTE SENSING. Book Review: High Performance Computing in Remote Sensing introduces the most recent advances in the incorporation of the high-performance computing (HPC) paradigm in remote sensing missions. Eighteen well

Plaza, Antonio J.

153

MAUI HIGH PERFORMANCE COMPUTING CENTER 550 Lipoa Parkway, Kihei-Maui, HI 96753  

E-Print Network (OSTI)

MAUI HIGH PERFORMANCE COMPUTING CENTER i 550 Lipoa Parkway, Kihei-Maui, HI 96753 (808) 879-5077 · Fax: (808) 879-5018 E-mail: info@mhpcc.hpc.mil URL: www.mhpcc.hpc.mil MAUI HIGH PERFORMANCE COMPUTING This is the fourteenth annual edition of Maui High Performance Computing Center's (MHPCC) Application Briefs which

Olsen, Stephen L.

154

High performance computing and the simplex method Julian Hall, Qi Huangfu and Edmund Smith  

E-Print Network (OSTI)

High performance computing and the simplex method. Julian Hall, Qi Huangfu and Edmund Smith, School of Mathematics, University of Edinburgh, 12th April 2011. The... ... but methods for all three depend on it! Overview: LP

Hall, Julian

155

Judging the Impact of Conference and Journal Publications in High Performance Computing  

E-Print Network (OSTI)

Judging the Impact of Conference and Journal Publications in High Performance Computing dimensions that count most, conferences are superior. This is particularly true in high performance computing and are never published in journals. The area of high performance computing is broad, and we divide venues

Zhou, Yuanyuan

156

The Use of Java in High Performance Computing: A Data Mining Example  

E-Print Network (OSTI)

The Use of Java in High Performance Computing: A Data Mining Example. David Walker and Omer Rana ... in high performance computing is discussed with particular reference to the efforts of the Java Grande ... Keywords: Java, Parallel Computing, Neural Networks, Distributed Objects. 1 Introduction. High performance

Walker, David W.

157

A Pilot Study to Evaluate Development Effort for High Performance Computing  

E-Print Network (OSTI)

1 A Pilot Study to Evaluate Development Effort for High Performance Computing Victor Basili1 the development time for programs written for high performance computers (HPC). To attack this relatively novel students in a graduate level High Performance Computing class at the University of Maryland. We collected

Basili, Victor R.

158

Third International Workshop on Software Engineering for High Performance Computing (HPC) Applications  

E-Print Network (OSTI)

Third International Workshop on Software Engineering for High Performance Computing (HPC, and financial modeling. The TOP500 website (http://www.top500.org) lists the top 500 high performance computing to define new ways of measuring high performance computing systems that take into account not only the low

Carver, Jeffrey C.

159

Modeling and Simulation Environment for Photonic Interconnection Networks in High Performance Computing  

E-Print Network (OSTI)

at the scale of high performance computer clusters and warehouse scale data centers, system level simulations and results for rack scale photonic interconnection networks for high performance computing. Keywords: optical to the newsworthy power consumption [3], latency [4] and bandwidth challenges [5] of high performance computing (HPC

Bergman, Keren

160

Performance Computing with  

E-Print Network (OSTI)

High Performance Computing with Iceberg. Mike Griffiths, Bob Booth. November 2005. AP-Unix4 © University of Sheffield.

Martin, Stephen John



161

Special Section Guest Editorial: High-Performance Computing in Applied Remote Sensing  

E-Print Network (OSTI)

Special Section Guest Editorial: High-Performance Computing in Applied Remote Sensing Bormin Huanga-performance computing in applied remote sensing presents the state-of-the-art research in incorporating high-performance computing (HPC) facilities and algorithms for effective and efficient remote sensing applications

Plaza, Antonio J.

162

Complex version of high performance computing LINPACK benchmark (HPL)  

Science Conference Proceedings (OSTI)

This paper describes our effort to enhance the performance of the AORSA fusion energy simulation program through the use of the high-performance LINPACK (HPL) benchmark, commonly used in ranking the top 500 supercomputers. The algorithm used by HPL, enhanced ... Keywords: HPL, parallel dense solver

R. F. Barrett; T. H. F. Chan; E. F. D'Azevedo; E. F. Jaeger; K. Wong; R. Y. Wong

2010-04-01T23:59:59.000Z
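The computation HPL times is the solution of a dense linear system by LU factorization with partial pivoting; a complex-valued variant, as AORSA requires, can be sketched in pure Python. This is illustrative only; HPL itself is a blocked, distributed-memory implementation.

```python
def lu_solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting,
    working over Python complex numbers. A is a list of rows."""
    n = len(A)
    A = [row[:] for row in A]   # work on copies
    b = b[:]
    for k in range(n):
        # Partial pivoting: bring the largest-magnitude entry to the diagonal.
        p = max(range(k, n), key=lambda i: abs(A[i][k]))
        A[k], A[p] = A[p], A[k]
        b[k], b[p] = b[p], b[k]
        # Eliminate below the pivot.
        for i in range(k + 1, n):
            m = A[i][k] / A[k][k]
            for j in range(k, n):
                A[i][j] -= m * A[k][j]
            b[i] -= m * b[k]
    # Back substitution on the upper-triangular system.
    x = [0j] * n
    for i in reversed(range(n)):
        x[i] = (b[i] - sum(A[i][j] * x[j] for j in range(i + 1, n))) / A[i][i]
    return x
```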

163

Materials by computational design -- High performance thermoelectric materials  

DOE Green Energy (OSTI)

The objective of the project was to use advanced computing techniques to guide the development of new material systems that significantly improve the performance of thermoelectric devices for solid-state refrigeration. Lockheed Martin Energy Systems, Inc. (LMES) was to develop computational approaches to refine the theory of the thermoelectric effect, establish physical limits, and motivate new materials development. Prior to the project, no major activity in thermoelectric research was visible, as an observed limit in experimental data was commonly accepted as a practical limit by the majority of informed opinion in the physics and thermoelectric communities. Through the efforts of the project, new compounds have been isolated, indicating that there is a physical reason to search through the remaining uncharacterized compounds from a top-down theoretical approach.

Sales, B. [Lockheed Martin Energy Systems, Inc., Oak Ridge, TN (United States); Lyon, H. [Marlow Industries, Inc., Dallas, TX (United States)

1997-04-15T23:59:59.000Z

164

High Performance Computing Facility Operational Assessment, FY 2010 Oak Ridge Leadership Computing Facility  

SciTech Connect

Oak Ridge National Laboratory's (ORNL's) Cray XT5 supercomputer, Jaguar, kicked off the era of petascale scientific computing in 2008 with applications that sustained more than a thousand trillion floating point calculations per second - or 1 petaflop. Jaguar continues to grow even more powerful as it helps researchers broaden the boundaries of knowledge in virtually every domain of computational science, including weather and climate, nuclear energy, geosciences, combustion, bioenergy, fusion, and materials science. Their insights promise to broaden our knowledge in areas that are vitally important to the Department of Energy (DOE) and the nation as a whole, particularly energy assurance and climate change. The science of the 21st century, however, will demand further revolutions in computing, supercomputers capable of a million trillion calculations a second - 1 exaflop - and beyond. These systems will allow investigators to continue attacking global challenges through modeling and simulation and to unravel longstanding scientific questions. Creating such systems will also require new approaches to daunting challenges. High-performance systems of the future will need to be codesigned for scientific and engineering applications with best-in-class communications networks and data-management infrastructures and teams of skilled researchers able to take full advantage of these new resources. The Oak Ridge Leadership Computing Facility (OLCF) provides the nation's most powerful open resource for capability computing, with a sustainable path that will maintain and extend national leadership for DOE's Office of Science (SC). The OLCF has engaged a world-class team to support petascale science and to take a dramatic step forward, fielding new capabilities for high-end science. This report highlights the successful delivery and operation of a petascale system and shows how the OLCF fosters application development teams, developing cutting-edge tools and resources for next-generation systems.

Bland, Arthur S Buddy [ORNL; Hack, James J [ORNL; Baker, Ann E [ORNL; Barker, Ashley D [ORNL; Boudwin, Kathlyn J. [ORNL; Kendall, Ricky A [ORNL; Messer, Bronson [ORNL; Rogers, James H [ORNL; Shipman, Galen M [ORNL; White, Julia C [ORNL

2010-08-01T23:59:59.000Z

165

High Performance Computing Facility Operational Assessment, FY 2010 Oak Ridge Leadership Computing Facility  

SciTech Connect

Oak Ridge National Laboratory's (ORNL's) Cray XT5 supercomputer, Jaguar, kicked off the era of petascale scientific computing in 2008 with applications that sustained more than a thousand trillion floating point calculations per second - or 1 petaflop. Jaguar continues to grow even more powerful as it helps researchers broaden the boundaries of knowledge in virtually every domain of computational science, including weather and climate, nuclear energy, geosciences, combustion, bioenergy, fusion, and materials science. Their insights promise to broaden our knowledge in areas that are vitally important to the Department of Energy (DOE) and the nation as a whole, particularly energy assurance and climate change. The science of the 21st century, however, will demand further revolutions in computing, supercomputers capable of a million trillion calculations a second - 1 exaflop - and beyond. These systems will allow investigators to continue attacking global challenges through modeling and simulation and to unravel longstanding scientific questions. Creating such systems will also require new approaches to daunting challenges. High-performance systems of the future will need to be codesigned for scientific and engineering applications with best-in-class communications networks and data-management infrastructures and teams of skilled researchers able to take full advantage of these new resources. The Oak Ridge Leadership Computing Facility (OLCF) provides the nation's most powerful open resource for capability computing, with a sustainable path that will maintain and extend national leadership for DOE's Office of Science (SC). The OLCF has engaged a world-class team to support petascale science and to take a dramatic step forward, fielding new capabilities for high-end science. This report highlights the successful delivery and operation of a petascale system and shows how the OLCF fosters application development teams, developing cutting-edge tools and resources for next-generation systems.

Bland, Arthur S Buddy [ORNL; Hack, James J [ORNL; Baker, Ann E [ORNL; Barker, Ashley D [ORNL; Boudwin, Kathlyn J. [ORNL; Kendall, Ricky A [ORNL; Messer, Bronson [ORNL; Rogers, James H [ORNL; Shipman, Galen M [ORNL; White, Julia C [ORNL

2010-08-01T23:59:59.000Z

166

High-Performance Embedded Computing: Architectures, Applications, and Methodologies  

Science Conference Proceedings (OSTI)

Over the past several years, embedded systems have emerged as an integral though unseen part of many consumer, industrial, and military devices. The explosive growth of these systems has resulted in embedded computing becoming an increasingly important ... Keywords: Computer Architecture

Wayne Wolf

2006-09-01T23:59:59.000Z

167

High Performance Computing Symposium 1996 Evaluating the Performance of Parallel Programs in a Pseudo-Parallel MPI  

E-Print Network (OSTI)

High Performance Computing Symposium 1996 Evaluating the Performance of Parallel Programs 28-Feb-1981 Born in Halifax, Nova Scotia (Canada) Obtained BSc at Dalhousie University. Currently..." In High Performance Computing Symposium '95. Demaine, Erik D. 1994. "Heterogeneous Organization

Demaine, Erik

168

A Comparative Study of Stochastic Unit Commitment and Security-Constrained Unit Commitment Using High Performance Computing  

E-Print Network (OSTI)

High Performance Computing Anthony Papavasiliou and Shmuel S. Oren Abstract-- The large decomposition. The proposed algorithms are implemented in a high performance computing environment

Oren, Shmuel S.

169

Theory and practice of dynamic voltage/frequency scaling in the high performance computing environment.  

E-Print Network (OSTI)

?? This dissertation provides a comprehensive overview of the theory and practice of Dynamic Voltage/Frequency Scaling (DVFS) in the High Performance Computing (HPC) environment. We (more)
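The DVFS technique this dissertation studies exploits the idealized CMOS relation P ≈ C·V²·f: since achievable voltage scales roughly with frequency, dynamic power falls roughly with the cube of frequency while run time for a fixed amount of work grows only linearly, so energy per task can drop. A minimal back-of-the-envelope sketch of that model (all constants are hypothetical illustration values, not from the dissertation):

```python
# Idealized CMOS dynamic-power model: P = C * V^2 * f.
# Assume achievable voltage scales linearly with frequency (V ~ f),
# so P ~ f^3 and energy for a fixed-work task E = P * t ~ f^2.

def dvfs_energy(work_cycles, freq_hz, c=1e-9, volts_per_hz=1e-9):
    """Energy (J) to execute work_cycles at freq_hz under V = volts_per_hz * f."""
    volts = volts_per_hz * freq_hz
    power = c * volts**2 * freq_hz   # watts, P = C * V^2 * f
    runtime = work_cycles / freq_hz  # seconds for the fixed amount of work
    return power * runtime

e_full = dvfs_energy(1e9, 2e9)  # 1e9 cycles of work at 2 GHz
e_half = dvfs_energy(1e9, 1e9)  # same work at 1 GHz
# Halving f also halves V: runtime doubles but energy drops 4x.
print(e_half / e_full)  # -> 0.25
```

In practice static leakage power and fixed platform overheads blunt this cubic effect, which is exactly why empirical HPC studies like this one are needed.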

Rountree, Barry Louis

2010-01-01T23:59:59.000Z

170

Measuring and tuning energy efficiency on large scale high performance computing platforms.  

E-Print Network (OSTI)

??Recognition of the importance of power in the field of High Performance Computing, whether it be as an obstacle, expense or design consideration, has never (more)

Laros, James Howard III

2012-01-01T23:59:59.000Z

171

Theory and Practice of Dynamic Voltage/Frequency Scaling in the High Performance Computing Environment .  

E-Print Network (OSTI)

??This dissertation provides a comprehensive overview of the theory and practice of Dynamic Voltage/Frequency Scaling (DVFS) in the High Performance Computing (HPC) environment. We summarize (more)

Rountree, Barry

2009-01-01T23:59:59.000Z

172

High Performance Computing Based Methods for Simulation and Optimisation of Flow Problems.  

E-Print Network (OSTI)

??The thesis is concerned with the study of methods in high-performance computing for simulation and optimisation of flow problems that occur in the framework of (more)

Bockelmann, Hendryk

2010-01-01T23:59:59.000Z

173

Dynamic power management: from portable devices to high performance computing.  

E-Print Network (OSTI)

??Electronic applications are nowadays converging under the umbrella of the cloud computing vision. The future ecosystem of information and communication technology is going to integrate (more)

Bartolini, Andrea <1981>

2011-01-01T23:59:59.000Z

174

High Performance Computing Innovation Center marks second anniversary  

NLE Websites -- All DOE Office Websites (Extended Search)

utilized HPC modeling and simulation to develop technologies that increase semi-truck fuel efficiency by at least 17 percent. The advanced computing technique used in this...

175

High performance computing network for cloud environment using simulators  

E-Print Network (OSTI)

Cloud computing is the next generation of computing. Adopting cloud computing is as simple as signing up for a new website: a GUI lets users directly control the hardware resources and their applications. The difficult part of cloud computing is deployment in a real environment. It is difficult to know the exact cost and resource requirements until the service is purchased, as it is to know whether existing applications from a traditional data center will run unchanged or must be redesigned for the cloud computing environment. Security, latency, and fault tolerance are further parameters that need careful attention before deployment, yet normally they can only be assessed afterward. By using simulation, these experiments can be performed before deploying to a real environment: simulation lets us understand the real cloud computing environment, and after successful results we can start deploying applications to it. By using the simulator it...

Singh, N Ajith

2012-01-01T23:59:59.000Z

176

HIGH PERFORMANCE INTEGRATION OF DATA PARALLEL FILE SYSTEMS AND COMPUTING  

E-Print Network (OSTI)


177

Introduction to High Performance Computers Richard Gerber NERSC User Services  

NLE Websites -- All DOE Office Websites (Extended Search)

Introductory slides asking "What is a computer?" and "What are the main parts of a computer?", comparing the "five major parts" lists given by various sources (CPU, RAM, motherboard, power supply, hard drive/storage, video card, monitor, keyboard/mouse, and other I/O peripherals) and noting that the answer depends on what you are interested in.

178

ISSN 0249-0803, ISRN INRIA/RT--0395--FR+ENG: Distributed and High Performance Computing  

E-Print Network (OSTI)

Rapport technique, ISSN 0249-0803, ISRN INRIA/RT--0395--FR+ENG. Distributed and High Performance Computing, Équipe-Projet Myriads. Rapport technique n° 0395, Octobre 2010, 15 pages. Dina Tîra, Pierre Riteau, Jérôme Gallard, Christine Morin, Yvon Jégou.

Paris-Sud XI, Université de

179

Energy-aware job scheduler for high-performance computing  

Science Conference Proceedings (OSTI)

In recent years energy-aware computing has become a major topic, not only in wireless and mobile devices but also in devices using wired technology. The ICT industry is consuming an increasing amount of energy and a large part of the consumption is generated ... Keywords: Energy-efficiency, HPC, Power consumption, Scheduling, Simulation, Testbed
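The core idea an energy-aware job scheduler evaluates can be illustrated with a toy greedy placement policy that assigns each job to the node expected to spend the least energy completing it. This is a hypothetical sketch, not the scheduler from the paper; the node names and power figures are invented:

```python
# Toy energy-aware placement: choose the node that completes a job with
# the least energy, modeled as (idle draw + per-core active draw) * time.
# All node specifications below are hypothetical illustration values.

nodes = {
    "n1": {"cores": 16, "idle_w": 80.0, "active_w_per_core": 12.0},
    "n2": {"cores": 8,  "idle_w": 40.0, "active_w_per_core": 20.0},
}

def job_energy(spec, cores, seconds):
    """Joules consumed by one node while running the job."""
    power_w = spec["idle_w"] + cores * spec["active_w_per_core"]
    return power_w * seconds

def place(job_cores, job_seconds):
    """Pick the feasible node (enough cores) with minimal job energy."""
    feasible = {name: spec for name, spec in nodes.items()
                if spec["cores"] >= job_cores}
    return min(feasible, key=lambda n: job_energy(nodes[n], job_cores, job_seconds))

print(place(4, 100))   # small job: low-idle "n2" wins (12000 J vs 12800 J)
print(place(12, 100))  # only "n1" has 12 cores available
```

A real scheduler must additionally weigh queue wait times, job priorities, and dynamic power measurement, which is what the testbed and simulation study in the paper addresses.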

Olli Mämmelä; Mikko Majanen; Robert Basmadjian; Hermann de Meer; André Giesler; Willi Homberg

2012-11-01T23:59:59.000Z

180

A system architecture supporting high-performance and cloud computing in an academic consortium environment  

Science Conference Proceedings (OSTI)

The University of Colorado (CU) and the National Center for Atmospheric Research (NCAR) have been deploying complementary and federated resources supporting computational science in the Western United States since 2004. This activity has expanded to ... Keywords: Data-centric computing, High-performance computing, Regional and federated supercomputing initiatives

Michael Oberg; Matthew Woitaszek; Theron Voran; Henry M. Tufo

2011-06-01T23:59:59.000Z



181

A case for high performance computing with virtual machines  

Science Conference Proceedings (OSTI)

Virtual machine (VM) technologies are experiencing a resurgence in both industry and research communities. VMs offer many desirable features such as security, ease of management, OS customization, performance isolation, check-pointing, and migration, ...

Wei Huang; Jiuxing Liu; Bulent Abali; Dhabaleswar K. Panda

2006-06-01T23:59:59.000Z

182

High Performance Computing Systems for Autonomous Spaceborne Missions  

Science Conference Proceedings (OSTI)

Future-generation space missions across the solar system to the planets, moons, asteroids, and comets may someday incorporate supercomputers both to expand the range of missions being conducted and to significantly reduce their cost. By performing science ...

Thomas Sterling; Daniel S. Katz; Larry Bergman

2001-08-01T23:59:59.000Z

183

Power Efficiency in High Performance Computing Shoaib Kamil  

E-Print Network (OSTI)

of 192 cores per cabinet. The power feed to each cabinet is 208 VAC 3-phase and is capable of handling 25 KW per rack. Each cabinet has a single 92 percent efficient power supply at the bottom of the rack system performance (ssp) metric. LBNL Tech Report 58868, 2005. [13] L. Oliker, A. Canning, J. Carter, J

184

High-Performance Interconnects and Computing Systems: Quantitative Studies A thesis submitted to the Department of  

E-Print Network (OSTI)

To measure and to predict the performance of parallel computer systems, parallel benchmarks are designed ... High-Performance Interconnects and Computing Systems: Quantitative Studies, by Ying Qian. A thesis ... characteristics, parallel programming paradigms used by the applications, and the machine system's architecture

Afsahi, Ahmad

185

High Performance Computing Facility Operational Assessment, FY 2011 Oak Ridge Leadership Computing Facility  

SciTech Connect

Oak Ridge National Laboratory's Leadership Computing Facility (OLCF) continues to deliver the most powerful resources in the U.S. for open science. At 2.33 petaflops peak performance, the Cray XT Jaguar delivered more than 1.5 billion core hours in calendar year (CY) 2010 to researchers around the world for computational simulations relevant to national and energy security; advancing the frontiers of knowledge in physical sciences and areas of biological, medical, environmental, and computer sciences; and providing world-class research facilities for the nation's science enterprise. Scientific achievements by OLCF users range from collaboration with university experimentalists to produce a working supercapacitor that uses atom-thick sheets of carbon materials to finely determining the resolution requirements for simulations of coal gasifiers and their components, thus laying the foundation for development of commercial-scale gasifiers. OLCF users are pushing the boundaries with software applications sustaining more than one petaflop of performance in the quest to illuminate the fundamental nature of electronic devices. Other teams of researchers are working to resolve predictive capabilities of climate models, to refine and validate genome sequencing, and to explore the most fundamental materials in nature - quarks and gluons - and their unique properties. Details of these scientific endeavors - not possible without access to leadership-class computing resources - are detailed in Section 4 of this report and in the INCITE in Review. Effective operations of the OLCF play a key role in the scientific missions and accomplishments of its users. This Operational Assessment Report (OAR) will delineate the policies, procedures, and innovations implemented by the OLCF to continue delivering a petaflop-scale resource for cutting-edge research. 
The 2010 operational assessment of the OLCF yielded recommendations that have been addressed (Reference Section 1) and where appropriate, changes in Center metrics were introduced. This report covers CY 2010 and CY 2011 Year to Date (YTD) that unless otherwise specified, denotes January 1, 2011 through June 30, 2011. User Support remains an important element of the OLCF operations, with the philosophy 'whatever it takes' to enable successful research. Impact of this center-wide activity is reflected by the user survey results that show users are 'very satisfied.' The OLCF continues to aggressively pursue outreach and training activities to promote awareness - and effective use - of U.S. leadership-class resources (Reference Section 2). The OLCF continues to meet and in many cases exceed DOE metrics for capability usage (35% target in CY 2010, delivered 39%; 40% target in CY 2011, 54% January 1, 2011 through June 30, 2011). The Schedule Availability (SA) and Overall Availability (OA) for Jaguar were exceeded in CY2010. Given the solution to the VRM problem the SA and OA for Jaguar in CY 2011 are expected to exceed the target metrics of 95% and 90%, respectively (Reference Section 3). Numerous and wide-ranging research accomplishments, scientific support, and technological innovations are more fully described in Sections 4 and 6 and reflect OLCF leadership in enabling high-impact science solutions and vision in creating an exascale-ready center. Financial Management (Section 5) and Risk Management (Section 7) are carried out using best practices approved of by DOE. The OLCF has a valid cyber security plan and Authority to Operate (Section 8). The proposed metrics for 2012 are reflected in Section 9.

Baker, Ann E [ORNL; Bland, Arthur S Buddy [ORNL; Hack, James J [ORNL; Barker, Ashley D [ORNL; Boudwin, Kathlyn J. [ORNL; Kendall, Ricky A [ORNL; Messer, Bronson [ORNL; Rogers, James H [ORNL; Shipman, Galen M [ORNL; Wells, Jack C [ORNL; White, Julia C [ORNL

2011-08-01T23:59:59.000Z

186

Reliable High Performance Peta- and Exa-Scale Computing  

SciTech Connect

As supercomputers become larger and more powerful, they are growing increasingly complex. This is reflected both in the exponentially increasing numbers of components in HPC systems (LLNL is currently installing the 1.6 million core Sequoia system) and in the wide variety of software and hardware components that a typical system includes. At this scale it becomes infeasible to make each component sufficiently reliable to prevent regular faults somewhere in the system or to account for all possible cross-component interactions. The resulting faults and instability cause HPC applications to crash, perform sub-optimally or even produce erroneous results. As supercomputers continue to approach Exascale performance and full system reliability becomes prohibitively expensive, we will require novel techniques to bridge the gap between the lower reliability provided by hardware systems and users' unchanging need for consistent performance and reliable results. Previous research on HPC system reliability has developed various techniques for tolerating and detecting various types of faults. However, these techniques have seen very limited real applicability because of our poor understanding of how real systems are affected by complex faults such as soft fault-induced bit flips or performance degradations. Prior work on such techniques has had very limited practical utility because it has generally focused on analyzing the behavior of entire software/hardware systems both during normal operation and in the face of faults. Because such behaviors are extremely complex, such studies have only produced coarse behavioral models of limited sets of software/hardware system stacks. Since this provides little insight into the many different system stacks and applications used in practice, this work has had little real-world impact. 
My project addresses this problem by developing a modular methodology to analyze the behavior of applications and systems during both normal and faulty operation. By synthesizing models of individual components into a whole-system behavior models my work is making it possible to automatically understand the behavior of arbitrary real-world systems to enable them to tolerate a wide range of system faults. My project is following a multi-pronged research strategy. Section II discusses my work on modeling the behavior of existing applications and systems. Section II.A discusses resilience in the face of soft faults and Section II.B looks at techniques to tolerate performance faults. Finally Section III presents an alternative approach that studies how a system should be designed from the ground up to make resilience natural and easy.

Bronevetsky, G

2012-04-02T23:59:59.000Z

187

Parallel application-level behavioral attributes for performance and energy management of high-performance computing systems  

Science Conference Proceedings (OSTI)

Run time variability of parallel applications continues to present significant challenges to their performance and energy efficiency in high-performance computing (HPC) systems. When run times are extended and unpredictable, application developers perceive ... Keywords: Performance and energy management, Performance measurement, Run time attributes

Jeffrey J. Evans; Charles E. Lucas

2013-03-01T23:59:59.000Z

188

Scientific Grand Challenges: Forefront Questions in Nuclear Science and the Role of High Performance Computing  

SciTech Connect

This report is an account of the deliberations and conclusions of the workshop on "Forefront Questions in Nuclear Science and the Role of High Performance Computing" held January 26-28, 2009, co-sponsored by the U.S. Department of Energy (DOE) Office of Nuclear Physics (ONP) and the DOE Office of Advanced Scientific Computing (ASCR). Representatives from the national and international nuclear physics communities, as well as from the high performance computing community, participated. The purpose of this workshop was to 1) identify forefront scientific challenges in nuclear physics and then determine which-if any-of these could be aided by high performance computing at the extreme scale; 2) establish how and why new high performance computing capabilities could address issues at the frontiers of nuclear science; 3) provide nuclear physicists the opportunity to influence the development of high performance computing; and 4) provide the nuclear physics community with plans for development of future high performance computing capability by DOE ASCR.

Khaleel, Mohammad A.

2009-10-01T23:59:59.000Z

189

A comprehensive approach to decipher biological computation to achieve next generation high-performance exascale computing.  

SciTech Connect

The human brain (volume = 1200 cm^3) consumes 20 W and is capable of performing >10^16 operations/s. Current supercomputer technology has reached 10^15 operations/s, yet it requires 1500 m^3 and 3 MW, giving the brain a 10^12 advantage in operations/s/W/cm^3. Thus, to reach exascale computation, two achievements are required: 1) improved understanding of computation in biological tissue, and 2) a paradigm shift towards neuromorphic computing, where hardware circuits mimic properties of neural tissue. To address 1), we will interrogate corticostriatal networks in mouse brain tissue slices, specifically with regard to their frequency filtering capabilities as a function of input stimulus. To address 2), we will instantiate biological computing characteristics such as multi-bit storage into hardware devices with future computational and memory applications. Resistive memory devices will be modeled, designed, and fabricated in the MESA facility in consultation with our internal and external collaborators.
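The roughly 10^12 advantage quoted in this abstract follows directly from the stated numbers; a quick arithmetic check:

```python
# Verify the abstract's claim: brain vs. supercomputer, in ops/s/W/cm^3.
brain_ops, brain_w, brain_cm3 = 1e16, 20.0, 1200.0
hpc_ops,   hpc_w,   hpc_cm3   = 1e15, 3e6, 1500.0 * 1e6  # 1500 m^3 -> cm^3

brain_density = brain_ops / (brain_w * brain_cm3)  # ~4.2e11
hpc_density   = hpc_ops / (hpc_w * hpc_cm3)        # ~0.22
advantage = brain_density / hpc_density
print(f"{advantage:.2e}")  # ~1.9e12, i.e. the quoted ~10^12 advantage
```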

James, Conrad D.; Schiess, Adrian B.; Howell, Jamie; Baca, Micheal J.; Partridge, L. Donald [University of New Mexico, Albuquerque, NM; Finnegan, Patrick Sean; Wolfley, Steven L.; Dagel, Daryl James; Spahn, Olga Blum; Harper, Jason C.; Pohl, Kenneth Roy; Mickel, Patrick R.; Lohn, Andrew; Marinella, Matthew

2013-10-01T23:59:59.000Z

190

NERSC 2011: High Performance Computing Facility Operational Assessment for the National Energy Research Scientific Computing Center  

E-Print Network (OSTI)

the Argonne and Oak Ridge Leadership Computing Facilities ... like Leadership Computing Facilities at Argonne and Oak

Antypas, Katie

2013-01-01T23:59:59.000Z

191

An evaluation of Java's I/O capabilities for high-performance computing.  

SciTech Connect

Java is quickly becoming the preferred language for writing distributed applications because of its inherent support for programming on distributed platforms. In particular, Java provides compile-time and run-time security, automatic garbage collection, inherent support for multithreading, support for persistent objects and object migration, and portability. Given these significant advantages of Java, there is a growing interest in using Java for high-performance computing applications. To be successful in the high-performance computing domain, however, Java must have the capability to efficiently handle the significant I/O requirements commonly found in high-performance computing applications. While there has been significant research in high-performance I/O using languages such as C, C++, and Fortran, there has been relatively little research into the I/O capabilities of Java. In this paper, we evaluate the I/O capabilities of Java for high-performance computing. We examine several approaches that attempt to provide high-performance I/O--many of which are not obvious at first glance--and investigate their performance in both parallel and multithreaded environments. We also provide suggestions for expanding the I/O capabilities of Java to better support the needs of high-performance computing applications.

Dickens, P. M.; Thakur, R.

2000-11-10T23:59:59.000Z

192

High Performance Computing for Sequence Analysis (2010 JGI/ANL HPC Workshop)  

SciTech Connect

Chris Oehmen of the Pacific Northwest National Laboratory gives a presentation on "High Performance Computing for Sequence Analysis" at the JGI/Argonne HPC Workshop on January 25, 2010.

Oehmen, Chris [PNNL

2010-01-25T23:59:59.000Z

193

High Performance Computing at TJNAF| U.S. DOE Office of Science...  

Office of Science (SC) Website


194

The demand for high performance computing research has been significantly increasing over the past few years. Various  

E-Print Network (OSTI)

The demand for high performance computing research has been significantly increasing over the past few years ... to promote the effective use of High Performance Computing in the research environment. In addition ... the facility has enabled cutting-edge computational materials research: "Having a high-performance computing

Akhmedov, Azer

195

High Performance Computing at TJNAF| U.S. DOE Office of Science (SC)  

Office of Science (SC) Website

High Performance Computing at TJNAF. Application/instrumentation: High Performance Computing. Developed at: Thomas Jefferson National Laboratory. Developed in: 1998 - 2010. Result of NP research: NP computational studies in LQCD

196

High-Performance Computing Acquisitions Based on the Factors that Matter  

Science Conference Proceedings (OSTI)

The US Department of Defense High Performance Computing Modernization Program has developed an evaluation process that combines qualitative usability factors with quantitative performance and price-per-performance factors to determine which HPC systems to acquire in its annual modernization process.
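An evaluation that combines qualitative and quantitative factors of this kind can be sketched as a weighted score. The weights, normalization, and factor names below are invented for illustration and are not the HPCMP's actual methodology:

```python
# Hypothetical weighted acquisition score mixing a qualitative usability
# rating (0-10) with quantitative price-per-performance (TFLOPS per $M).
# Weights and the 10 TFLOPS/$M normalization cap are illustrative only.

def acquisition_score(usability, perf_tflops, price_musd,
                      w_usability=0.4, w_price_perf=0.6):
    price_perf = perf_tflops / price_musd        # TFLOPS per million dollars
    # Cap price-performance at 10 so both factors share a 0-10 scale.
    return (w_usability * usability
            + w_price_perf * min(price_perf, 10.0))

a = acquisition_score(usability=8.0, perf_tflops=50.0, price_musd=10.0)  # 5 TF/$M
b = acquisition_score(usability=6.0, perf_tflops=90.0, price_musd=10.0)  # 9 TF/$M
print(a, b)  # system b wins on price-performance despite lower usability
```

The interesting design question such a process must answer is how to weight hard-to-quantify usability against benchmark numbers, which is precisely the trade-off the article describes.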

Larry P. Davis; Roy L. Campbell, Jr.; William A. Ward, Jr.; Cray J. Henry

2007-01-01T23:59:59.000Z

197

Process for selecting NEAMS applications for access to Idaho National Laboratory high performance computing resources  

Science Conference Proceedings (OSTI)

INL has agreed to provide participants in the Nuclear Energy Advanced Modeling and Simulation (NEAMS) program with access to its high performance computing (HPC) resources under sponsorship of the Enabling Computational Technologies (ECT) program element. This report documents the process used to select applications and the software stack in place at INL.

Michael Pernice

2010-09-01T23:59:59.000Z

198

Novel Kinetic 3D MHD Algorithm for High Performance Parallel Computing Systems  

E-Print Network (OSTI)

The impressive progress of the kinetic schemes in the solution of gas dynamics problems and the development of effective parallel algorithms for modern high performance parallel computing systems led to the development of advanced methods for the solution of the magnetohydrodynamics problem in the important area of plasma physics. The novel feature of the method is the formulation of the complex Boltzmann-like distribution function of kinetic method with the implementation of electromagnetic interaction terms. The numerical method is based on the explicit schemes. Due to logical simplicity and its efficiency, the algorithm is easily adapted to modern high performance parallel computer systems including hybrid computing systems with graphic processors.

B. Chetverushkin; N. D'Ascenzo; V. Saveliev

2013-05-03T23:59:59.000Z

199

An Approach to Integrate a Space-Time GIS Data Model with High Performance Computers  

Science Conference Proceedings (OSTI)

In this paper, we describe an approach to integrate a Space-Time GIS data model on a high performance computing platform. The Space-Time GIS data model was developed in a desktop computing environment. We use it to generate a GIS module that organizes a series of remote sensing data. We are in the process of porting the GIS module into an HPC environment, in which the GIS module handles large datasets directly via a parallel file system. Although this is an ongoing project, the authors hope this effort can inspire further discussion on the integration of GIS with high performance computing platforms.

Wang, Dali [ORNL; Zhao, Ziliang [University of Tennessee, Knoxville (UTK); Shaw, Shih-Lung [ORNL

2011-01-01T23:59:59.000Z

200

High Performance Computing: From Grids and Clouds to Exascale Volume 20 Advances in Parallel Computing  

Science Conference Proceedings (OSTI)

In the last decade, parallel computing technologies have transformed high-performance computing. Two trends have emerged: massively parallel computing leading to exascale on the one hand, and moderately parallel applications, which have opened up high-performance ...

I. Foster; W. Gentzsch; L. Grandinetti; G. R. Joubert

2011-09-01T23:59:59.000Z



201

HIGH-PERFORMANCE COMPUTING FOR THE STUDY OF EARTH AND ENVIRONMENTAL SCIENCE MATERIALS USING SYNCHROTRON X-RAY COMPUTED MICROTOMOGRAPHY.  

SciTech Connect

Synchrotron x-ray computed microtomography (CMT) is a non-destructive method for examination of rock, soil, and other types of samples studied in the earth and environmental sciences. The high x-ray intensities of the synchrotron source make possible the acquisition of tomographic volumes at a high rate that requires the application of high-performance computing techniques for data reconstruction to produce the three-dimensional volumes, for their visualization, and for data analysis. These problems are exacerbated by the need to share information between collaborators at widely separated locations over both local and wide-area networks. A summary of the CMT technique and examples of applications are given here together with a discussion of the applications of high-performance computing methods to improve the experimental techniques and analysis of the data.

FENG,H.; JONES,K.W.; MCGUIGAN,M.; SMITH,G.J.; SPILETIC,J.

2001-10-12T23:59:59.000Z

202

High Performance Computing Facility Operational Assessment, CY 2011 Oak Ridge Leadership Computing Facility  

Science Conference Proceedings (OSTI)

Oak Ridge National Laboratory's Leadership Computing Facility (OLCF) continues to deliver the most powerful resources in the U.S. for open science. At 2.33 petaflops peak performance, the Cray XT Jaguar delivered more than 1.4 billion core hours in calendar year (CY) 2011 to researchers around the world for computational simulations relevant to national and energy security; advancing the frontiers of knowledge in physical sciences and areas of biological, medical, environmental, and computer sciences; and providing world-class research facilities for the nation's science enterprise. Users reported more than 670 publications this year arising from their use of OLCF resources. Of these we report the 300 in this review that are consistent with guidance provided. Scientific achievements by OLCF users cut across all range scales from atomic to molecular to large-scale structures. At the atomic scale, researchers discovered that the anomalously long half-life of Carbon-14 can be explained by calculating, for the first time, the very complex three-body interactions between all the neutrons and protons in the nucleus. At the molecular scale, researchers combined experimental results from LBL's light source and simulations on Jaguar to discover how DNA replication continues past a damaged site so a mutation can be repaired later. Other researchers combined experimental results from ORNL's Spallation Neutron Source and simulations on Jaguar to reveal the molecular structure of ligno-cellulosic material used in bioethanol production. This year, Jaguar has been used to do billion-cell CFD calculations to develop shock wave compression turbo machinery as a means to meet DOE goals for reducing carbon sequestration costs. General Electric used Jaguar to calculate the unsteady flow through turbo machinery to learn what efficiencies the traditional steady flow assumption is hiding from designers. 
Even a 1% improvement in turbine design can save the nation billions of gallons of fuel.

Baker, Ann E [ORNL; Barker, Ashley D [ORNL; Bland, Arthur S Buddy [ORNL; Boudwin, Kathlyn J. [ORNL; Hack, James J [ORNL; Kendall, Ricky A [ORNL; Messer, Bronson [ORNL; Rogers, James H [ORNL; Shipman, Galen M [ORNL; Wells, Jack C [ORNL; White, Julia C [ORNL; Hudson, Douglas L [ORNL

2012-02-01T23:59:59.000Z

203

NUG 2013 User Day: Trends and Innovation in High Performance Computing  

NLE Websites -- All DOE Office Websites (Extended Search)

NUG 2013 User Day: Trends, Discovery, and Innovation in High Performance Computing. Wednesday, Feb. 13, Berkeley Lab Building 50 Auditorium. Live streaming: http://hosting.epresence.tv/LBL/1.aspx
8:45 - Welcome: Kathy Yelick, Berkeley Lab Associate Director for Computing Sciences
Trends: 9:00 - The Future of High Performance Scientific Computing, Kathy Yelick, Berkeley Lab Associate Director for Computing Sciences; 9:45 - NERSC Today and over the Next Ten Years, Sudip Dosanjh, NERSC Director; 10:30 - The 2013 NERSC Achievement Awards; 10:45 - Break
Discovery: 11:00 - Discovery of the Higgs Boson and the role of LBNL and World-Wide Computing, Ian Hinchliffe, Berkeley Lab; 11:30 - Discovery of the θ13 Weak Mixing Angle at Daya Bay using NERSC &

204

Feb. 11, 2008 Advanced Fault Tolerance Solutions for High Performance Computing 1/47 Advanced Fault Tolerance Solutions  

E-Print Network (OSTI)

Slide excerpt (Feb. 11, 2008): "Advanced Fault Tolerance Solutions for High Performance Computing", Christian Engelmann, Oak Ridge National Laboratory. Nation's largest energy laboratory; nation's largest ...

Engelmann, Christian

205

February 13, 2008 Virtualized Environments for the Harness High Performance Computing Workbench 1/17 Virtualized Environments for the Harness  

E-Print Network (OSTI)

Slide excerpt (February 13, 2008): "Virtualized Environments for the Harness High Performance Computing Workbench", Björn Könning and Christian Engelmann. Harness HPC Workbench ...

Engelmann, Christian

206

3rd Workshop on System-level Virtualization for High Performance Computing (HPCVirt) 2009, Nuremberg, Germany, March 30, 2009  

E-Print Network (OSTI)

Slide excerpt: 3rd Workshop on System-level Virtualization for High Performance Computing (HPCVirt) 2009, Nuremberg, Germany, March 30, 2009. Outline: background work ...

Engelmann, Christian

207

Modeling and analysis of transient vehicle underhood thermo- hydrodynamic events using computational fluid dynamics and high performance computing.  

DOE Green Energy (OSTI)

This work has explored the preliminary design of a Computational Fluid Dynamics (CFD) tool for the analysis of transient vehicle underhood thermo-hydrodynamic events using high performance computing platforms. The goal of this tool will be to extend the capabilities of an existing established CFD code, STAR-CD, allowing the car manufacturers to analyze the impact of transient operational events on the underhood thermal management by exploiting the computational efficiency of modern high performance computing systems. In particular, the project has focused on the CFD modeling of the radiator behavior during a specified transient. The 3-D radiator calculations were performed using STAR-CD, which can perform both steady-state and transient calculations, on the cluster computer available at ANL in the Nuclear Engineering Division. Specified transient boundary conditions, based on experimental data provided by Adapco and DaimlerChrysler were used. The possibility of using STAR-CD in a transient mode for the entire period of time analyzed has been compared with other strategies which involve the use of STAR-CD in a steady-state mode at specified time intervals, while transient heat transfer calculations would be performed for the rest of the time. The results of these calculations have been compared with the experimental data provided by Adapco/DaimlerChrysler and recommendations for future development of an optimal strategy for the CFD modeling of transient thermo-hydrodynamic events have been made. The results of this work open the way for the development of a CFD tool for the transient analysis of underhood thermo-hydrodynamic events, which will allow the integrated transient thermal analysis of the entire cooling system, including both the engine block and the radiator, on high performance computing systems.

Tentner, A.; Froehle, P.; Wang, C.; Nuclear Engineering Division

2004-01-01T23:59:59.000Z

208

Modeling and analysis of transient vehicle underhood thermo - hydrodynamic events using computational fluid dynamics and high performance computing.  

DOE Green Energy (OSTI)

This work has explored the preliminary design of a Computational Fluid Dynamics (CFD) tool for the analysis of transient vehicle underhood thermo-hydrodynamic events using high performance computing platforms. The goal of this tool will be to extend the capabilities of an existing established CFD code, STAR-CD, allowing the car manufacturers to analyze the impact of transient operational events on the underhood thermal management by exploiting the computational efficiency of modern high performance computing systems. In particular, the project has focused on the CFD modeling of the radiator behavior during a specified transient. The 3-D radiator calculations were performed using STAR-CD, which can perform both steady-state and transient calculations, on the cluster computer available at ANL in the Nuclear Engineering Division. Specified transient boundary conditions, based on experimental data provided by Adapco and DaimlerChrysler were used. The possibility of using STAR-CD in a transient mode for the entire period of time analyzed has been compared with other strategies which involve the use of STAR-CD in a steady-state mode at specified time intervals, while transient heat transfer calculations would be performed for the rest of the time. The results of these calculations have been compared with the experimental data provided by Adapco/DaimlerChrysler and recommendations for future development of an optimal strategy for the CFD modeling of transient thermo-hydrodynamic events have been made. The results of this work open the way for the development of a CFD tool for the transient analysis of underhood thermo-hydrodynamic events, which will allow the integrated transient thermal analysis of the entire cooling system, including both the engine block and the radiator, on high performance computing systems.

Froehle, P.; Tentner, A.; Wang, C.

2003-09-05T23:59:59.000Z

209

DOE Greenbook - Needs and Directions in High-Performance Computing for the Office of Science  

Science Conference Proceedings (OSTI)

The NERSC Users Group (NUG) encompasses all investigators utilizing the NERSC computational and storage resources of the Department of Energy Office of Science facility. At the February 2001 meeting held at the National Energy Research Scientific Computing (NERSC) facility, the NUG executive committee (NUGEX) began the process to assess the role of computational science and determine the computational needs in future Office of Science (OS) programs. The continuing rapid development of the computational science fields and computer technology (both hardware and software) suggest frequent periodic review of user requirements and the role that computational science should play in meeting OS program commitments. Over the last decade, NERSC (and many other supercomputer centers) have transitioned from a center based on vector supercomputers to one almost entirely dedicated to massively parallel platforms (MPPs). Users have had to learn and transform their application codes to make use of these parallel computers. NERSC computer time requests suggest that a vast majority of NERSC users have accomplished this transition and are ready for production parallel computing. Tools for debugging, mathematical toolsets, and robust communication software have enabled this transition. The large memory and CPU power of these parallel machines are allowing simulations at resolutions, timescales, and levels of realism in physics that were never before possible. Difficulties and performance issues in using MPP systems remain linked to the access of non-uniform memory: cache, local, and remote memory. This issue includes both the speed of access and the methods of access to the memory architecture. Optimized mathematical tools to perform standard functions on parallel machines are available. Users should be encouraged to make heavy use of those tools to enhance productivity and system performance. 
There are at least four underlying components to the computational resources used by OS researchers. (1) High-Performance Computing Technology; (2) Advanced Software Technology and Algorithms; (3) Energy Sciences Network; and (4) Basic Research and Human Resources. In addition to the availability from the vendor community, these components determine the implementation and direction of the development of the supercomputing resources for the OS community. In this document we will identify scientific and computational needs from across the five Office of Science organizations: High Energy and Nuclear Physics, Basic Energy Sciences, Fusion Energy Science, Biological and Environmental Research, and Advanced Scientific Computing Research. We will also delineate the current suite of NERSC computational and human resources. Finally, we will provide a set of recommendations that will guide the utilization of current and future computational resources at the DOE NERSC.

Rotman, D; Harding, P

2002-04-01T23:59:59.000Z

210

Geographic Information Systems Applications on an ATM-Based Distributed High Performance Computing System  

E-Print Network (OSTI)

. We present a distributed geographic information system (DGIS) built on a distributed high performance computing environment using a number of software infrastructural building blocks and computational resources interconnected by an ATM-based broadband network. Archiving, access and processing of scientific data are discussed in the context of geographic and environmental applications with special emphasis on the potential for local-area weather, agriculture, soil and land management products. Software technologies such as tiling and caching techniques can be used to optimise storage requirements and response time for applications requiring very large data sets such as multi-channel satellite data. Distributed High Performance Computing hardware technology underpins our proposed system. In particular, we discuss the capabilities of a distributed hardware environment incorporating: high bandwidth communications networks such as Telstra's Experimental Broadband Network (EBN); large capa...
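The tiling-and-caching idea mentioned above can be sketched in a few lines: large raster datasets are cut into fixed-size tiles and a small LRU cache keeps the hot tiles in memory, cutting response time for repeated access. This is our own minimal illustration, not code from the DGIS project; the class name, parameters, and the fake tile loader are all invented.

```python
# LRU tile cache for large raster/satellite datasets (illustrative sketch).
from collections import OrderedDict

class TileCache:
    def __init__(self, capacity, loader):
        self.capacity = capacity      # max tiles held in memory
        self.loader = loader          # expensive fetch (disk/network) stand-in
        self.cache = OrderedDict()    # insertion order == recency order
        self.misses = 0

    def get(self, tile_xy):
        if tile_xy in self.cache:
            self.cache.move_to_end(tile_xy)   # mark as most recently used
            return self.cache[tile_xy]
        self.misses += 1
        tile = self.loader(tile_xy)           # fetch the tile on a miss
        self.cache[tile_xy] = tile
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)    # evict least recently used
        return tile
```

In a real system the loader would read a tile of multi-channel satellite data from archive storage; the cache policy itself is independent of the data source.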

November Hawick

1997-01-01T23:59:59.000Z

211

Acts -- A collection of high performing software tools for scientific computing  

Science Conference Proceedings (OSTI)

During the past decades there has been a continuous growth in the number of physical and societal problems that have been successfully studied and solved by means of computational modeling and simulation. Further, many new discoveries depend on high performance computer simulations to satisfy their demands for large computational resources and short response time. The Advanced CompuTational Software (ACTS) Collection brings together a number of general-purpose computational tool development projects funded and supported by the U.S. Department of Energy (DOE). These tools make it easier for scientific code developers to write high performance applications for parallel computers. They tackle a number of computational issues that are common to a large number of scientific applications, mainly implementation of numerical algorithms, and support for code development, execution and optimization. The ACTS collection promotes code portability, reusability, reduction of duplicate efforts, and tool maturity. This paper presents a brief introduction to the functionality available in ACTS. It also highlights the tools that are in demand by climate and weather modelers.

Drummond, L.A.; Marques, O.A.

2002-11-01T23:59:59.000Z

212

Research Note: A high performance algorithm for static task scheduling in heterogeneous distributed computing systems  

Science Conference Proceedings (OSTI)

Effective task scheduling is essential for obtaining high performance in heterogeneous distributed computing systems (HeDCSs). However, finding an effective task schedule in HeDCSs requires the consideration of both the heterogeneity of processors and ... Keywords: Directed acyclic graph, Heterogeneous systems, Heuristics, Parallel processing, Task scheduling
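A minimal list-scheduling sketch illustrates the kind of heuristic this research note studies: rank each task by its average critical-path length to the exit ("upward rank"), then place tasks in rank order on whichever processor gives the earliest finish time, accounting for inter-processor communication. This is a generic HEFT-style scheduler, not the authors' algorithm; the task graph, cost tables, and function names are our own illustrative choices.

```python
# HEFT-style list scheduling for a heterogeneous system (illustrative).

def upward_rank(task, succs, avg_cost, comm, memo):
    """Average cost of `task` plus the heaviest path below it in the DAG."""
    if task in memo:
        return memo[task]
    r = avg_cost[task] + max(
        (comm.get((task, s), 0) + upward_rank(s, succs, avg_cost, comm, memo)
         for s in succs.get(task, [])), default=0)
    memo[task] = r
    return r

def schedule(tasks, succs, cost, comm):
    """cost[t][p] = execution time of task t on processor p (heterogeneous)."""
    procs = range(len(next(iter(cost.values()))))
    avg = {t: sum(c) / len(c) for t, c in cost.items()}
    memo = {}
    order = sorted(tasks, key=lambda t: -upward_rank(t, succs, avg, comm, memo))
    preds = {t: [u for u in tasks if t in succs.get(u, [])] for t in tasks}
    finish, placement = {}, {}
    proc_free = {p: 0.0 for p in procs}
    for t in order:                      # preds always scheduled before succs
        best = None
        for p in procs:
            # data from a predecessor on another processor pays a comm cost
            ready = max((finish[u] + (comm.get((u, t), 0)
                         if placement[u] != p else 0) for u in preds[t]),
                        default=0.0)
            start = max(ready, proc_free[p])
            if best is None or start + cost[t][p] < best[0]:
                best = (start + cost[t][p], p)
        finish[t], placement[t] = best
        proc_free[best[1]] = best[0]
    return placement, finish
```

The makespan is simply `max(finish.values())`; real heuristics differ mainly in how they rank tasks and break ties.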

Mohammad I. Daoud; Nawwaf Kharma

2008-04-01T23:59:59.000Z

213

UNBC HPC Policy last revised: 2/1/2007, 2:49:29 PM UNBC Enhanced High Performance Computing Center  

E-Print Network (OSTI)

UNBC HPC Policy (last revised 2/1/2007). ... contracts are sought by Principal Investigators. Membership Policies: The UNBC Enhanced High Performance Computing Center (called "UNBC HPC" hereafter) provides computing resources and services to members ...

Northern British Columbia, University of

214

A hardware and software computational platform for the HiPerDNO (high performance distribution network operation) project  

Science Conference Proceedings (OSTI)

The HiPerDNO project aims to develop new applications to enhance the operational capabilities of Distribution Network Operators (DNO). Their delivery requires an advanced computational strategy. This paper describes a High Performance Computing (HPC) ... Keywords: high performance computing applications, smart grid, systems design

Stefano Salvini; Piotr Lopatka; David Wallom

2011-11-01T23:59:59.000Z

215

Integrating High Performance Computing into the Undergraduate Curriculum: How PACI and the Education Center on Computational Science & Engineering Can Succeed  

E-Print Network (OSTI)

The Education Center on Computational Science and Engineering at San Diego State University assists the Partnership for Advanced Computational Infrastructure (PACI) in its goal of encouraging the integration of high performance computing technology (HPC) into the undergraduate curriculum. Because the means by which to best affect the undergraduate curriculum are still unclear, EOTNPACI, which provides part of the Center's funding, asked the LEAD evaluation team to evaluate SDSU's Education Center over its second year of operation (1998-99). This report summarizes what was learned during LEAD's evaluation regarding the obstacles to incorporating HPC-based instruction into the undergraduate curriculum and the strategies that have the highest probability of overcoming these obstacles. Among the strategies that hold the most promise are programs like the Education Center's Faculty Fellows program, which provides buyout time, technical and logistical support, and a community of curricular r...

Julie Foertsch; Baine B. Alex

1999-01-01T23:59:59.000Z

216

2011 DoD High Performance Computing Modernization Program Users Group Conference A Web-based High-Throughput Tool for Next-Generation Sequence Annotation  

E-Print Network (OSTI)

Excerpt (p. 320, 2011 DoD High Performance Computing Modernization Program Users Group Conference): a web-based high-throughput tool for next-generation sequence annotation, deployed on the Mana Linux cluster at the Maui High Performance Computing Center. The two components ...

217

NERSC 2011: High Performance Computing Facility Operational Assessment for the National Energy Research Scientific Computing Center  

E-Print Network (OSTI)

Excerpt (risk table): Inability to meet DOE IPv6 requirements - Med/High impact; mitigation: NERSC/DOE has a requirement for IPv6 and ... scale. Science Requirements Workshops: In 2009 NERSC and DOE ...

Antypas, Katie

2013-01-01T23:59:59.000Z

218

The University of Missouri Bioinformatics Consortium (UMBC) provides an integrated array of high performance computing and communications products  

E-Print Network (OSTI)

The University of Missouri Bioinformatics Consortium (UMBC) provides an integrated array of high performance computing and communications products and related services to their users, including

Glaser, Rainer

219

Opening Remarks from the Joint Genome Institute and Argonne Lab High Performance Computing Workshop (2010 JGI/ANL HPC Workshop)  

SciTech Connect

DOE JGI Director Eddy Rubin gives opening remarks at the JGI/Argonne High Performance Computing (HPC) Workshop on January 25, 2010.

Rubin, Eddy

2010-01-25T23:59:59.000Z

220

Bringing high performance computing to the biologists workbench: approaches, applications and challenges  

Science Conference Proceedings (OSTI)

Data-intensive and high-performance computing are poised to significantly impact the future of biological research, which is increasingly driven by the prevalence of high-throughput experimental methodologies for genome sequencing, transcriptomics, proteomics, and other areas. Large centers such as NIH's National Center for Biotechnology Information (NCBI), The Institute for Genomic Research (TIGR), and the DOE's Joint Genome Institute (JGI) Integrated Microbial Genome (IMG) have made extensive use of multiprocessor architectures to deal with some of the challenges of processing, storing and curating exponentially growing genomic and proteomic datasets, enabling end users to rapidly access a growing public data source, as well as utilize analysis tools transparently on high-performance computing resources. Applying this computational power to single-investigator analysis, however, often relies on users to provide their own computational resources, forcing them to endure the learning curve of porting, building, and running software on multiprocessor architectures. Solving the next generation of large-scale biology challenges using multiprocessor machines, from small clusters to emerging petascale machines, can most practically be realized if this learning curve can be minimized through a combination of workflow management, data management and resource allocation, as well as intuitive interfaces and compatibility with existing common data formats.

Oehmen, Christopher S.; Cannon, William R.

2008-09-01T23:59:59.000Z



221

Towards Real-Time High Performance Computing For Power Grid Analysis  

SciTech Connect

Real-time computing has traditionally been considered largely in the context of single-processor and embedded systems, and indeed, the terms real-time computing, embedded systems, and control systems are often mentioned in closely related contexts. However, real-time computing in the context of multinode systems, specifically high-performance, cluster-computing systems, remains relatively unexplored. Imposing real-time constraints on a parallel (cluster) computing environment introduces a variety of challenges with respect to the formal verification of the system's timing properties. In this paper, we give a motivating example to demonstrate the need for such a system--- an application to estimate the electromechanical states of the power grid--- and we introduce a formal method for performing verification of certain temporal properties within a system of parallel processes. We describe our work towards a full real-time implementation of the target application--- namely, our progress towards extracting a key mathematical kernel from the application, the formal process by which we analyze the intricate timing behavior of the processes on the cluster, as well as timing measurements taken on our test cluster to demonstrate use of these concepts.

Hui, Peter SY; Lee, Barry; Chikkagoudar, Satish

2012-11-16T23:59:59.000Z

222

Matrix multiplication operations with data pre-conditioning in a high performance computing architecture  

SciTech Connect

Mechanisms for performing matrix multiplication operations with data pre-conditioning in a high performance computing architecture are provided. A vector load operation is performed to load a first vector operand of the matrix multiplication operation to a first target vector register. A load and splat operation is performed to load an element of a second vector operand and replicating the element to each of a plurality of elements of a second target vector register. A multiply add operation is performed on elements of the first target vector register and elements of the second target vector register to generate a partial product of the matrix multiplication operation. The partial product of the matrix multiplication operation is accumulated with other partial products of the matrix multiplication operation.
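The load/splat/multiply-add pattern described above can be sketched in plain Python: a vector register is loaded from the first operand, one element of the second operand is "splatted" (replicated) across another register, and a multiply-add accumulates the partial product into the result. Pure-Python lists stand in for the vector ISA here; the function names and data layout are ours, not the patent's.

```python
# Matrix multiply via vector-load + load-and-splat + multiply-add (sketch).

def vload(column):
    return list(column)               # vector load: a column of the first operand

def splat(scalar, width):
    return [scalar] * width           # load-and-splat: replicate one element

def fma(acc, va, vs):
    # multiply-add: accumulate an elementwise product into the accumulator
    return [a + x * y for a, x, y in zip(acc, va, vs)]

def matmul(a_cols, b):
    """a_cols: columns of A; b: rows of B. Returns C = A*B as columns."""
    n = len(a_cols[0])
    c_cols = []
    for j in range(len(b[0])):        # each output column of C
        acc = [0.0] * n
        for k, col in enumerate(a_cols):
            va = vload(col)           # first operand: vector load
            vs = splat(b[k][j], n)    # second operand: load and splat
            acc = fma(acc, va, vs)    # accumulate the partial product
        c_cols.append(acc)
    return c_cols
```

Each `fma` call accumulates one rank-1 partial product, which is exactly the accumulation of partial products the abstract describes.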

Eichenberger, Alexandre E; Gschwind, Michael K; Gunnels, John A

2013-11-05T23:59:59.000Z

223

Using high performance computing and Monte Carlo simulation for pricing american options  

E-Print Network (OSTI)

High performance computing (HPC) is a very attractive and relatively new area of research, which gives promising results in many applications. In this paper HPC is used for pricing of American options. Although American options are very significant in computational finance, their valuation is very challenging, especially when Monte Carlo simulation techniques are used. To obtain the most accurate price for these types of options we use Quasi Monte Carlo simulation, which gives the best convergence. Furthermore, this algorithm is implemented on both GPU and CPU. Additionally, the CUDA architecture is used for harnessing the power and capability of the GPU to execute the algorithm in parallel, which is later compared with the serial implementation on the CPU. In conclusion, this paper gives the reasons for and the advantages of applying HPC in computational finance.
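The quasi-Monte Carlo core of such a pricer can be sketched with a Halton (van der Corput) low-discrepancy sequence. The snippet below prices a European call, the building block on which American-exercise methods (e.g., regression-based early-exercise schemes) are layered; it is our own illustration, not the paper's code, and the function names and parameters are invented.

```python
# Quasi-Monte Carlo pricing of a European call under geometric Brownian
# motion, using a base-2 Halton sequence in place of pseudorandom draws.
from math import exp, sqrt
from statistics import NormalDist

def halton(i, base=2):
    """i-th element (1-indexed) of the van der Corput/Halton sequence."""
    f, r = 1.0, 0.0
    while i > 0:
        f /= base
        r += f * (i % base)
        i //= base
    return r

def qmc_call_price(s0, k, r, sigma, t, n=4096):
    nd = NormalDist()
    drift = (r - 0.5 * sigma * sigma) * t
    vol = sigma * sqrt(t)
    payoff = 0.0
    for i in range(1, n + 1):
        z = nd.inv_cdf(halton(i))       # uniform (0,1) -> standard normal
        st = s0 * exp(drift + vol * z)  # terminal asset price
        payoff += max(st - k, 0.0)
    return exp(-r * t) * payoff / n     # discounted mean payoff
```

For s0=100, k=100, r=0.05, sigma=0.2, t=1 the Black-Scholes closed form gives about 10.45, and the low-discrepancy sequence converges to it far faster than plain pseudorandom sampling would.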

Cvetanoska, Verche

2012-01-01T23:59:59.000Z

224

Astrocomp: a web service for the use of high performance computers in Astrophysics  

E-Print Network (OSTI)

Astrocomp is a joint project developed by the INAF-Astrophysical Observatory of Catania, University of Roma La Sapienza, and ENEA. The project has the goal of providing the scientific community with a web-based, user-friendly interface that allows running parallel codes on a set of high-performance computing (HPC) resources, without any need for specific knowledge of parallel programming or operating system commands. Astrocomp also provides computing time on a set of parallel computing systems, available to the authorized user. At present, the portal makes a few codes available, among which: FLY, a cosmological code for studying three-dimensional collisionless self-gravitating systems with periodic boundary conditions; ATD, a parallel tree-code for the simulation of the dynamics of boundary-free collisional and collisionless self-gravitating systems; and MARA, a code for stellar light curve analysis. Other codes are going to be added to the portal.

U. Becciani; R. Capuzzo Dolcetta; A. Costa; P. Di Matteo; P. Miocchi; V. Rosato

2004-07-27T23:59:59.000Z

225

Evaluating Performance, Power, and Cooling in High Performance Computing (HPC) Data Centers  

SciTech Connect

This chapter explores current research focused on developing our understanding of the interrelationships involved with HPC performance and energy management. The first section explores data center instrumentation, measurement, and performance analysis techniques, followed by a section focusing on work in data center thermal management and resource allocation. This is followed by an exploration of emerging techniques to identify application behavioral attributes that can provide clues and advice to HPC resource and energy management systems for the purpose of balancing HPC performance and energy efficiency.

Evans, Jeffrey; Gupta, Sandeep; Karavanic, Karen; Marquez, Andres; Varsamopoulos, Georgios

2012-01-24T23:59:59.000Z

226

Implementing Molecular Dynamics on Hybrid High Performance Computers - Particle-Particle Particle-Mesh  

Science Conference Proceedings (OSTI)

The use of accelerators such as graphics processing units (GPUs) has become popular in scientific computing applications due to their low cost, impressive floating-point capabilities, high memory bandwidth, and low electrical power requirements. Hybrid high-performance computers, machines with nodes containing more than one type of floating-point processor (e.g. CPU and GPU), are now becoming more prevalent due to these advantages. In this paper, we present a continuation of previous work implementing algorithms for using accelerators into the LAMMPS molecular dynamics software for distributed memory parallel hybrid machines. In our previous work, we focused on acceleration for short-range models with an approach intended to harness the processing power of both the accelerator and (multi-core) CPUs. To augment the existing implementations, we present an efficient implementation of long-range electrostatic force calculation for molecular dynamics. Specifically, we present an implementation of the particle-particle particle-mesh method based on the work by Harvey and De Fabritiis. We present benchmark results on the Keeneland InfiniBand GPU cluster. We provide a performance comparison of the same kernels compiled with both CUDA and OpenCL. We discuss limitations to parallel efficiency and future directions for improving performance on hybrid or heterogeneous computers.
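The mesh half of the particle-particle particle-mesh (P3M) method begins by spreading point charges onto a grid before the mesh is solved with FFTs; the short-range particle-particle pairs are handled separately. A one-dimensional cloud-in-cell charge assignment, the usual first step, can be sketched as follows. This is our own minimal illustration, not LAMMPS code; unit grid spacing and a periodic box are assumed.

```python
# 1-D cloud-in-cell charge assignment for the mesh step of P3M (sketch).
# Each charge is split between its two nearest grid points with linear
# weights, so total charge is conserved exactly.

def assign_charges(positions, charges, n_grid):
    rho = [0.0] * n_grid
    for x, q in zip(positions, charges):
        i = int(x) % n_grid            # left grid point (periodic wrap)
        frac = x - int(x)              # fractional distance into the cell
        rho[i] += q * (1.0 - frac)     # weight toward the left point
        rho[(i + 1) % n_grid] += q * frac  # remainder to the right point
    return rho
```

In the full method this density grid is Fourier-transformed, multiplied by the Green's function, and transformed back to obtain long-range forces; higher-order assignment stencils reduce aliasing at the cost of touching more grid points per particle.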

Brown, W Michael [ORNL]; Kohlmeyer, Axel [Temple University]; Plimpton, Steven J [ORNL]; Tharrington, Arnold N [ORNL]

2012-01-01T23:59:59.000Z

227

Implementing Molecular Dynamics on Hybrid High Performance Computers - Three-Body Potentials  

SciTech Connect

The use of coprocessors or accelerators such as graphics processing units (GPUs) has become popular in scientific computing applications due to their low cost, impressive floating-point capabilities, high memory bandwidth, and low electrical power requirements. Hybrid high-performance computers, defined as machines with nodes containing more than one type of floating-point processor (e.g. CPU and GPU), are now becoming more prevalent due to these advantages. Although there has been extensive research into methods to efficiently use accelerators to improve the performance of molecular dynamics (MD) employing pairwise potential energy models, little is reported in the literature for models that include many-body effects. 3-body terms are required for many popular potentials such as MEAM, Tersoff, REBO, AIREBO, Stillinger-Weber, Bond-Order Potentials, and others. Because the per-atom simulation times are much higher for models incorporating 3-body terms, there is a clear need for efficient algorithms usable on hybrid high performance computers. Here, we report a shared-memory force-decomposition for 3-body potentials that avoids memory conflicts to allow for a deterministic code with substantial performance improvements on hybrid machines. We describe modifications necessary for use in distributed memory MD codes and show results for the simulation of water with Stillinger-Weber on the hybrid Titan supercomputer. We compare performance of the 3-body model to the SPC/E water model when using accelerators. Finally, we demonstrate that our approach can attain a speedup of 5.1 with acceleration on Titan for production simulations to study water droplet freezing on a surface.

Brown, W Michael [ORNL]; Yamada, Masako [GE Global Research]

2013-01-01T23:59:59.000Z

228

Measuring and tuning energy efficiency on large scale high performance computing platforms.  

Science Conference Proceedings (OSTI)

Recognition of the importance of power in the field of High Performance Computing, whether it be as an obstacle, expense or design consideration, has never been greater and more pervasive. While research has been conducted on many related aspects, there is a stark absence of work focused on large scale High Performance Computing. Part of the reason is the lack of measurement capability currently available on small or large platforms. Typically, research is conducted using coarse methods of measurement such as inserting a power meter between the power source and the platform, or fine grained measurements using custom instrumented boards (with obvious limitations in scale). To collect the measurements necessary to analyze real scientific computing applications at large scale, an in-situ measurement capability must exist on a large scale capability class platform. In response to this challenge, we exploit the unique power measurement capabilities of the Cray XT architecture to gain an understanding of power use and the effects of tuning. We apply these capabilities at the operating system level by deterministically halting cores when idle. At the application level, we gain an understanding of the power requirements of a range of important DOE/NNSA production scientific computing applications running at large scale (thousands of nodes), while simultaneously collecting current and voltage measurements on the hosting nodes. We examine the effects of both CPU and network bandwidth tuning and demonstrate energy savings opportunities of up to 39% with little or no impact on run-time performance. Capturing scale effects in our experimental results was key. Our results provide strong evidence that next generation large-scale platforms should not only approach CPU frequency scaling differently, but could also benefit from the capability to tune other platform components, such as the network, to achieve energy efficient performance.

Laros, James H., III

2011-08-01T23:59:59.000Z

229

Toward a Performance/Resilience Tool for Hardware/Software Co-Design of High-Performance Computing Systems  

SciTech Connect

xSim is a simulation-based performance investigation toolkit that permits running high-performance computing (HPC) applications in a controlled environment with millions of concurrent execution threads, while observing application performance in a simulated extreme-scale system for hardware/software co-design. The presented work details newly developed features for xSim that permit the injection of MPI process failures, the propagation/detection/notification of such failures within the simulation, and their handling using application-level checkpoint/restart. These new capabilities enable the observation of application behavior and performance under failure within a simulated future-generation HPC system using the most common fault handling technique.

Engelmann, Christian [ORNL]; Naughton, III, Thomas J [ORNL]

2013-01-01T23:59:59.000Z

230

Validation of Broadly Filtered Diagonalization Method for Extracting Frequencies and Modes from High-Performance Computations  

Science Conference Proceedings (OSTI)

Recent developments have shown that one can get around the difficulties of finding the eigenvalues and eigenmodes of the large systems studied with high performance computation by using broadly filtered diagonalization [G. R. Werner and J. R. Cary, J. Comput. Phys. 227, 5200 (2008)]. This method can be used in conjunction with any time-domain computation, in particular those that scale very well up to 10000s of processors and beyond. Here we present results that show that this method accurately obtains both modes and frequencies of electromagnetic cavities, even when frequencies are nearly degenerate. The application was to a well-characterized Kaon separator cavity, the A15. The computations are shown to have a precision of a few parts in 10^5. Because the computed frequency differed from the measured frequency by more than this amount, a careful validation study to determine all sources of difference was undertaken. Ultimately, more precise measurements of the cavity showed that the computations were correct, with remaining differences accounted for by uncertainties in cavity dimensions and atmospheric and thermal conditions. Thus, not only was the method validated, but it was shown to have the ability to predict differences in cavity dimensions from fabrication specifications.
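The core idea, extracting mode frequencies from a time-domain run by locating spectral peaks, can be illustrated in a few lines. The sketch below is a deliberately naive stand-in (a brute-force DFT magnitude scan over a synthetic two-mode signal), not the broadly filtered diagonalization algorithm itself; all names and numbers are our own.

```python
# Locate the dominant frequency in a band by scanning DFT magnitudes
# of a recorded time-domain signal (naive illustration of frequency
# extraction from a time-domain field history).
from cmath import exp as cexp
from math import cos, pi

def spectral_peak(signal, dt, f_lo, f_hi, n_scan=600):
    """Return the scan frequency in [f_lo, f_hi] with the largest DFT magnitude."""
    best_f, best_m = f_lo, 0.0
    for j in range(n_scan + 1):
        f = f_lo + (f_hi - f_lo) * j / n_scan
        amp = sum(s * cexp(-2j * pi * f * k * dt)
                  for k, s in enumerate(signal))
        if abs(amp) > best_m:
            best_f, best_m = f, abs(amp)
    return best_f

# Synthetic "cavity" history with two modes at 110 and 240 (arbitrary units).
dt = 0.001
sig = [cos(2 * pi * 110 * k * dt) + 0.5 * cos(2 * pi * 240 * k * dt)
       for k in range(1000)]
```

The real method filters the time history to isolate narrow bands and then diagonalizes a small matrix per band, which is what lets it resolve nearly degenerate frequencies that a plain peak scan would blur together.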

Austin, T.M.; Cary, J.R. (Colorado U.); Werner, G.R. (Colorado U.); Bellantoni, L. (Fermilab)

2009-06-01T23:59:59.000Z

231

Int. J. High Performance Computing and Networking, Vol. 4, Nos. 3/4, 2006  

E-Print Network (OSTI)

Excerpt: Chen, X. and Shen, J. (2006) 'Improved schemes for power-efficient broadcast in ad hoc networks', Int. J. High Performance Computing and Networking, Vol. 4, Nos. 3/4, pp. 198-206.

Shen, Jian - Department of Mathematics, Texas State University

232

High-Performance Computing Enables Huge Leap Forward in Engine Development  

NLE Websites -- All DOE Office Websites (Extended Search)

High-Performance Computing Enables Huge Leap Forward in Engine Development. When we turn the key in our car's ignition, we usually don't think about the combustion process that takes place inside the engine that enables the car to go. We just know that it works. From left, Argonne researchers Raymond Bair, Doug Longman, Qingluan Xue, Marta Garcia, Shashi Aithal (seated) and Sibendu Som are part of a multidisciplinary team working to advance diesel and spark engine modeling and simulation tools into the high-performance computing realm. (TransForum, News from Argonne's Transportation Technology R&D Center, www.transportation.anl.gov, reprint from Volume 13, Issue 1, Winter 2013.) ... facilities, Argonne is one of the few places in the world with the

233

CISE Research Instrumentation for Integration of Virtual Reality into High Performance Computing Environment  

E-Print Network (OSTI)

Contents (excerpt): ...s of Research Projects, 5; 3. Emerging VR Program at Syracuse University, 6; 3.1 General Framework, 6; 3.2 Program Components, 8; 3.2.1 Computational Science, 8; 3.2.2 VR Hardware Technologies, 10; 3.2.3 Neuroscience, 11; 3.2.4 Cognitive Science, 12; 3.3 Program Integration, 13; 4. Description of Research Projects, 14; 4.1 Parallel Databases and VR Interfaces for Large Scale Data Fusion, 14; 4.2 MOVIE System Based Operating Shell for High Performance VR, 15; 4.3 Virtual ...

Geoffrey C. Fox; Wojtek Furmanski

1992-01-01T23:59:59.000Z

234

Proc. Fourth IDEA Workshop, Magnetic Island, 17-20 May 1997, and Technical Note DHPC-006. Trends in High Performance Computing  

E-Print Network (OSTI)

...in High Performance Computing. K.A. Hawick, Department of Computer Science, University of Adelaide, South Australia ... What used to be referred to as "Supercomputing" became "High Performance Computing" ... A possible new acronym for the collective field is "Distributed, High Performance, Computing".

Hawick, Ken

235

Investigating Operating System Noise in Extreme-Scale High-Performance Computing Systems using Simulation  

SciTech Connect

Hardware/software co-design for future-generation high-performance computing (HPC) systems aims at closing the gap between the peak capabilities of the hardware and the performance realized by applications (application-architecture performance gap). Performance profiling of architectures and applications is a crucial part of this iterative process. The work in this paper focuses on operating system (OS) noise as an additional factor to be considered for co-design. It represents the first step in including OS noise in HPC hardware/software co-design by adding a noise injection feature to an existing simulation-based co-design toolkit. It reuses an existing abstraction for OS noise with frequency (periodic recurrence) and period (duration of each occurrence) to enhance the processor model of the Extreme-scale Simulator (xSim) with synchronized and random OS noise simulation. The results demonstrate this capability by evaluating the impact of OS noise on MPI_Bcast() and MPI_Reduce() in a simulated future-generation HPC system with 2,097,152 compute nodes.
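The noise abstraction described above (a recurrence frequency and a per-occurrence duration) can be illustrated with a toy model; the function below is an illustrative sketch of the idea, not xSim's processor model or API:

```python
import random

def noisy_runtime(work_s, noise_freq_hz, noise_period_s, n_nodes,
                  synchronized=True, seed=1):
    """Toy model of OS-noise inflation for one bulk-synchronous step.

    Each node owes `work_s` seconds of computation; noise events recur
    `noise_freq_hz` times per second and each steals `noise_period_s`
    seconds. A synchronized collective finishes when the slowest node
    finishes. (Illustrative only -- not xSim's actual model.)
    """
    rng = random.Random(seed)
    times = []
    for _ in range(n_nodes):
        if synchronized:
            # Every node sees the same deterministic recurrence.
            events = work_s * noise_freq_hz
        else:
            # Random phase/amount per node; the worst node dominates.
            events = rng.uniform(0, 2 * work_s * noise_freq_hz)
        times.append(work_s + events * noise_period_s)
    return max(times)

base = noisy_runtime(1.0, 0, 0, 1024)        # no noise: exactly 1.0 s
sync = noisy_runtime(1.0, 10, 0.001, 1024)   # 10 Hz, 1 ms events: 1.01 s
```

With synchronized noise every node loses the same time per step; with random phases the slowest node gates the collective, which is why OS noise amplifies at scale.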

Engelmann, Christian [ORNL]

2013-01-01T23:59:59.000Z

236

Extending the MPI Specification for Process Fault Tolerance on High Performance Computing Systems  

E-Print Network (OSTI)

End-users and application developers of high performance computing systems today have access to larger machines and more processors than ever. Systems such as the Earth Simulator, the ASCI-Q machines or the IBM Blue Gene consist of thousands or even tens of thousands of processors. Machines comprising 100,000 processors are expected in the next few years. A critical issue for systems consisting of such large numbers of processors is the ability of the machine to deal with process failures. Concluding from the current experiences on the top-end machines, a 100,000-processor machine will experience a process failure every few minutes [1]. While on earlier massively parallel processing systems (MPPs) crashing nodes often led to a crash of the whole system, current architectures are more robust. Typically, the applications utilizing the failed processor will have to abort; the machine as an entity is, however, not affected by the failure. This robustness has been the result of improvements at the hardware level as well as at the level of system software. 1.2 Current Parallel Programming Paradigms Current parallel programming paradigms for high-performance computing systems mainly rely on message passing, especially on the Message-Passing Interface (MPI) [12][13
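The quoted failure rate follows from simple MTBF arithmetic: assuming independent, exponentially distributed node failures, the system-wide mean time between failures is the node MTBF divided by the node count. The 5-year node MTBF below is an illustrative assumption, not a measured figure from the cited machines:

```python
def system_mtbf_hours(node_mtbf_hours, n_nodes):
    """Expected time between failures anywhere in the system,
    assuming independent, exponentially distributed node failures."""
    return node_mtbf_hours / n_nodes

# Illustrative: a node MTBF of 5 years (5 * 8760 = 43,800 h)
# spread across 100,000 processors.
mtbf_h = system_mtbf_hours(43_800, 100_000)
mtbf_minutes = mtbf_h * 60   # ~26 minutes between failures
```

At these assumed numbers the system fails roughly every half hour; a shorter node MTBF yields the "every few minutes" figure cited in the abstract.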

Graham E. Fagg; Edgar Gabriel; George Bosilca; Thara Angskun; Zhizhong Chen; Jelena Pjesivac-grbovic; Kevin London; Jack J. Dongarra

2003-01-01T23:59:59.000Z

237

Effects of computer-assisted instruction on performance of senior high school biology students in Ghana  

Science Conference Proceedings (OSTI)

This study investigated the comparative efficiency of computer-assisted instruction (CAI) and the conventional teaching method in biology for senior high school students. A science class was selected in each of two randomly selected schools. The pretest-posttest ... Keywords: Achievement, Cell cycle, Computer-assisted instruction, Conventional approach, ICT and senior high school

K. A. Owusu; K. A. Monney; J. Y. Appiah; E. M. Wilmot

2010-09-01T23:59:59.000Z

238

Microsoft Word - The Essential Role of New Network Services for High Performance Distributed Computing - PARENG.CivilComp.2011.  

NLE Websites -- All DOE Office Websites (Extended Search)

International Conference on Parallel, Distributed, Grid and Cloud Computing for Engineering, 12-15 April 2011, Ajaccio, Corsica, France. In "Trends in Parallel, Distributed, Grid and Cloud Computing for Engineering," edited by P. Iványi and B.H.V. Topping, Civil-Comp Press. Network Services for High Performance Distributed Computing and Data Management. W. E. Johnston, C. Guok, J. Metzger, and B. Tierney, ESnet and Lawrence Berkeley National Laboratory, Berkeley, California, U.S.A. Keywords: high performance distributed computing and data management, high throughput networks, network services, science use of networks. Much of modern science is dependent on high performance distributed computing and data handling. This distributed infrastructure, in turn, depends on

239

A Lightweight, High-performance I/O Management Package for Data-intensive Computing  

Science Conference Proceedings (OSTI)

Our group has been working with ANL collaborators on the topic "bridging the gap between parallel file system and local file system" during the course of this project period. We visited Argonne National Lab -- Dr. Robert Ross's group -- for one week in the summer of 2007. We looked over our current project progress and planned the activities for the incoming years 2008-09. The PI met Dr. Robert Ross several times, such as at the HEC FSIO workshop '08, SC'08 and SC'10. We explored the opportunities to develop a production system by leveraging our current prototype (SOGP+PVFS) into a new PVFS version. We delivered SOGP+PVFS codes to the ANL PVFS2 group in 2008. We also talked about exploring a potential project on developing new parallel programming models and runtime systems for data-intensive scalable computing (DISC). The methodology is to evolve MPI towards DISC by incorporating some functions of the Google MapReduce parallel programming model. More recently, we are together exploring how to leverage existing works to perform (1) coordination/aggregation of local I/O operations prior to movement over the WAN, (2) efficient bulk data movement over the WAN, and (3) latency hiding techniques for latency-intensive operations. Since 2009, we have been applying Hadoop/MapReduce to some HEC applications with LANL scientists John Bent and Salman Habib. Another ongoing work is to improve checkpoint performance at the I/O forwarding layer for the Roadrunner supercomputer with James Nunez and Gary Grider at LANL. Two senior undergraduates from our research group did summer internships on high-performance file and storage system projects at LANL for three consecutive years starting in 2008. Both of them are now pursuing Ph.D. degrees in our group, will be in the 4th year of the PhD program in Fall 2011, and will go to LANL to advance the two above-mentioned works during this winter break.
Since 2009, we have been collaborating with several computer scientists (Gary Grider, John Bent, Parks Fields, James Nunez, Hsing-Bung Chen, etc.) from HPC-5 and with James Ahrens from the Advanced Computing Laboratory at Los Alamos National Laboratory. We hold weekly conference and/or video meetings on advancing work on two fronts: the hardware/software infrastructure for building large-scale data-intensive clusters, and research publications. Our group members assist in constructing several onsite LANL data-intensive clusters. The two parties have been developing software codes and research papers together using both sides' resources.

Wang, Jun

2011-06-22T23:59:59.000Z

240

Development of high performance scientific components for interoperability of computing packages  

Science Conference Proceedings (OSTI)

Three major high performance quantum chemistry computational packages, NWChem, GAMESS and MPQC, have been developed by different research efforts following different design patterns. The goal is to achieve interoperability among these packages by overcoming the challenges caused by the different communication patterns and software designs of each of these packages. A chemistry algorithm is hard and time-consuming to develop; integration of large quantum chemistry packages will allow resource sharing and thus avoid reinvention of the wheel. Creating connections between these incompatible packages is the major motivation of the proposed work. This interoperability is achieved by bringing in the benefits of Component Based Software Engineering through a plug-and-play component framework called the Common Component Architecture (CCA). In this thesis, I present a strategy and process used for interfacing two widely used and important computational chemistry methodologies: Quantum Mechanics and Molecular Mechanics. To show the feasibility of the proposed approach, the Tuning and Analysis Utility (TAU) has been coupled with the NWChem code and its CCA components. Results show that the overhead is negligible when compared to the ease and potential of organizing and coping with large-scale software applications.

Gulabani, Teena Pratap

2008-12-01T23:59:59.000Z

Note: This page contains sample records for the topic "high performance computer" from the National Library of EnergyBeta (NLEBeta).
While these samples are representative of the content of NLEBeta,
they are not comprehensive nor are they the most current set.
We encourage you to perform a real-time search of NLEBeta
to obtain the most current and comprehensive results.


241

A High Performance Computing Network and System Simulator for the Power Grid: NGNS^2  

SciTech Connect

Designing and planning next generation power grid systems composed of large power distribution networks, monitoring and control networks, and autonomous generators and consumers of power requires advanced simulation infrastructures. The objective is to predict and analyze in time the behavior of networks of systems for unexpected events such as loss of connectivity, malicious attacks and power loss scenarios. This ultimately allows one to answer questions such as: What could happen to the power grid if ...? We want to be able to answer as many questions as possible in the shortest possible time for the largest possible systems. In this paper we present a new High Performance Computing (HPC) oriented simulation infrastructure named Next Generation Network and System Simulator (NGNS^2). NGNS^2 allows for the distribution of a single simulation among multiple computing elements by using MPI and OpenMP threads. NGNS^2 provides the extensive configuration, fault tolerance and load balancing capabilities needed to simulate large and dynamic systems for long periods of time. We show the preliminary results of the simulator running approximately two million simulated entities both on a 64-node commodity InfiniBand cluster and on a 48-core SMP workstation.

Villa, Oreste; Tumeo, Antonino; Ciraci, Selim; Daily, Jeffrey A.; Fuller, Jason C.

2012-11-11T23:59:59.000Z

242

Hyperspectral Aquatic Radiative Transfer Modeling Using a High-Performance Cluster Computing Based Approach  

SciTech Connect

For aquatic studies, radiative transfer (RT) modeling can be used to compute hyperspectral above-surface remote sensing reflectance that can be utilized for inverse model development. Inverse models can provide bathymetry and inherent- and bottom-optical property estimation. Because measured oceanic field/organic datasets are often spatio-temporally sparse, synthetic data generation is useful in yielding sufficiently large datasets for inversion model development; however, these forward-modeled data are computationally expensive and time-consuming to generate. This study establishes the magnitude of wall-clock-time savings achieved for performing large, aquatic RT batch-runs using parallel computing versus a sequential approach. Given 2,600 simulations and identical compute-node characteristics, sequential architecture required ~100 hours until termination, whereas a parallel approach required only ~2.5 hours (42 compute nodes) - a 40x speed-up. Tools developed for this parallel execution are discussed.
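The reported numbers imply near-linear scaling for this embarrassingly parallel batch workload; the arithmetic from the abstract can be checked directly:

```python
# Figures quoted in the abstract: 2,600 RT simulations,
# ~100 h sequential vs ~2.5 h on 42 compute nodes.
sims = 2600
seq_hours = 100.0
par_hours = 2.5
nodes = 42

speedup = seq_hours / par_hours           # 40x, as reported
efficiency = speedup / nodes              # ~0.95 parallel efficiency
per_sim_minutes = seq_hours * 60 / sims   # ~2.3 min per RT simulation
```

A 40x speed-up on 42 nodes corresponds to roughly 95% parallel efficiency, consistent with independent batch runs that share no state.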

Filippi, Anthony [Texas A&M University]; Bhaduri, Budhendra L [ORNL]; Naughton, III, Thomas J [ORNL]; King, Amy L [ORNL]; Scott, Stephen L [ORNL]; Guneralp, Inci [Texas A&M University]

2012-01-01T23:59:59.000Z

243

Hyperspectral Aquatic Radiative Transfer Modeling Using a High-Performance Cluster Computing-Based Approach  

SciTech Connect

For aquatic studies, radiative transfer (RT) modeling can be used to compute hyperspectral above-surface remote sensing reflectance that can be utilized for inverse model development. Inverse models can provide bathymetry and inherent- and bottom-optical property estimation. Because measured oceanic field/organic datasets are often spatio-temporally sparse, synthetic data generation is useful in yielding sufficiently large datasets for inversion model development; however, these forward-modeled data are computationally expensive and time-consuming to generate. This study establishes the magnitude of wall-clock-time savings achieved for performing large, aquatic RT batch-runs using parallel computing versus a sequential approach. Given 2,600 simulations and identical compute-node characteristics, sequential architecture required ~100 hours until termination, whereas a parallel approach required only ~2.5 hours (42 compute nodes), a 40x speed-up. Tools developed for this parallel execution are discussed.

Filippi, Anthony M [ORNL]; Bhaduri, Budhendra L [ORNL]; Naughton, III, Thomas J [ORNL]; King, Amy L [ORNL]; Scott, Stephen L [ORNL]; Guneralp, Inci [Texas A&M University]

2012-01-01T23:59:59.000Z

244

National cyber defense high performance computing and analysis : concepts, planning and roadmap.  

SciTech Connect

There is a national cyber dilemma that threatens the very fabric of government, commercial and private use operations worldwide. Much is written about 'what' the problem is, and though the basis for this paper is an assessment of the problem space, we target the 'how' solution space of the wide-area national information infrastructure through the advancement of science, technology, evaluation and analysis with actionable results intended to produce a more secure national information infrastructure and a comprehensive national cyber defense capability. This cybersecurity High Performance Computing (HPC) analysis concepts, planning and roadmap activity was conducted as an assessment of cybersecurity analysis as a fertile area of research and investment for high value cybersecurity wide-area solutions. This report and a related SAND2010-4765 Assessment of Current Cybersecurity Practices in the Public Domain: Cyber Indications and Warnings Domain report are intended to provoke discussion throughout a broad audience about developing a cohesive HPC centric solution to wide-area cybersecurity problems.

Hamlet, Jason R.; Keliiaa, Curtis M.

2010-09-01T23:59:59.000Z

245

Model-driven Memory Optimizations for High Performance Computing: From Caches to I/O.  

E-Print Network (OSTI)

High performance systems are quickly evolving to keep pace with application demands, and we observe greater complexity in system design at all scales. Parallelism, in ...

Frasca, Michael

2012-01-01T23:59:59.000Z

246

Turner Construction is looking for interns at our Massachusetts Green High Performance Computing Center (MGHPCC) job site in Holyoke, MA. The ideal candidates would have the following qualifications  

E-Print Network (OSTI)

Turner Construction is looking for interns at our Massachusetts Green High Performance Computing Center (MGHPCC) job site in Holyoke, MA. Summer positions: May-June, July-August. ABOUT THE MASSACHUSETTS GREEN HIGH PERFORMANCE COMPUTING CENTER ... Engineer ... attdriscoll@tcco.com. Massachusetts Green High Performance Computing Center Intern Positions.

Spence, Harlan Ernest

247

SEPTEMBER 2011 VOLUME 4 NUMBER 3 IJSTHZ (ISSN 1939-1404) SPECIAL ISSUE ON HIGH PERFORMANCE COMPUTING IN EARTH OBSERVATION AND REMOTE SENSING  

E-Print Network (OSTI)

(ISSN 1939-1404) SPECIAL ISSUE ON HIGH PERFORMANCE COMPUTING IN EARTH OBSERVATION AND REMOTE SENSING. Foreword to the Special Issue on High Performance Computing in Earth Observation and Remote Sensing, C. A. Lee, S. D. Gasster, A. Plaza, C.-I Chang, and B. Huang, p. 508. High Performance Computing ...

Plaza, Antonio J.

248

iSSH v. Auditd: Intrusion Detection in High Performance Computing  

SciTech Connect

The goal is to provide insight into intrusions in high performance computing, focusing on tracking intruders' motions through the system. The current tools, such as pattern matching, do not provide sufficient tracking capabilities. We tested two tools: an instrumented version of SSH (iSSH) and the Linux Auditing Framework (Auditd). First discussed is Instrumented Secure Shell (iSSH): a version of SSH developed at Lawrence Berkeley National Laboratory. The goal is to audit user activity within a computer system to increase security. Capabilities are: keystroke logging; recording user names and authentication information; and catching suspicious remote and local commands. Strengths of iSSH are: (1) Good for keystroke logging, making it easier to track malicious users by catching suspicious commands; (2) Works with Bro to send alerts; could be configured to send pages to systems administrators; and (3) Creates visibility into SSH sessions. Weaknesses are: (1) Relatively new, so not very well documented; and (2) No capabilities to see if files have been edited, moved, or copied within the system. Second we discuss Auditd, the user component of the Linux Auditing System. It creates logs of user behavior, and monitors system calls and file accesses. Its goal is to improve system security by keeping track of users' actions within the system. Strengths of Auditd are: (1) Very thorough logs; (2) Wider variety of tracking abilities than iSSH; and (3) Older, so better documented. Weaknesses are: (1) Logs record everything, not just malicious behavior; (2) The size of the logs can lead to overflowing directories; and (3) This level of logging leads to a lot of false alarms. Auditd is better documented than iSSH, which would help administrators during set up and troubleshooting. iSSH has a cleaner notification system, but the logs are not as detailed as Auditd's. 
From our performance testing: (1) File transfer speed using SCP is increased when using iSSH; and (2) Network benchmarks were roughly the same regardless of which tool was running.

Karns, David M. [Los Alamos National Laboratory]; Protin, Kathryn S. [Los Alamos National Laboratory]; Wolf, Justin G. [Los Alamos National Laboratory]

2012-07-30T23:59:59.000Z

249

Computational performance of ultra-high-resolution capability in the Community Earth System Model  

Science Conference Proceedings (OSTI)

With the fourth release of the Community Climate System Model, it is now possible to perform ultra-high-resolution climate simulations, enabling eddy-resolving ocean and sea-ice models to be coupled to a finite-volume atmosphere model for a ... Keywords: Earth system modeling, Performance engineering, application optimization, climate modeling, high-resolution

John M. Dennis; Mariana Vertenstein; Patrick H. Worley; Arthur A. Mirin; Anthony P. Craig; Robert Jacob; Sheri Mickelson

2012-02-01T23:59:59.000Z

250

Systems Engineering For High Performance Computing Software: The Hdda/dagh Infrastructure For Implementation Of Parallel Structured Adaptive Mesh Refinement  

E-Print Network (OSTI)

. This paper defines, describes and illustrates a systems engineering process for development of software systems implementing high performance computing applications. The example which drives the creation of this process is development of a flexible and extendible program development infrastructure for parallel structured adaptive meshes, the HDDA/DAGH package. The fundamental systems engineering principles used (hierarchical abstractions based on separation of concerns) are well-known but are not commonly applied in the context of high performance computing software. Application of these principles will be seen to enable implementation of an infrastructure which combines breadth of applicability and portability with high performance. Key words. Software systems engineering, Structured adaptive mesh-refinement, High performance software development, Distributed dynamic data-structures. 1. Overview. This paper describes the systems engineering process which was followed in the develop...

Manish Parashar; James C. Browne

1997-01-01T23:59:59.000Z

251

On the Feasibility of Optical Circuit Switching for High Performance Computing Systems  

Science Conference Proceedings (OSTI)

The interconnect plays a key role in both the cost and performance of large-scale HPC systems. The cost of future high-bandwidth electronic interconnects mushrooms due to expensive optical transceivers needed between electronic switches. We describe ...

Kevin J. Barker; Alan Benner; Ray Hoare; Adolfy Hoisie; Alex K. Jones; Darren K. Kerbyson; Dan Li; Rami Melhem; Ram Rajamony; Eugen Schenfeld; Shuyi Shao; Craig Stunkel; Peter Walker

2005-11-01T23:59:59.000Z

252

Bayesian uncertainty quantification and propagation in molecular dynamics simulations: A high performance computing framework  

Science Conference Proceedings (OSTI)

We present a Bayesian probabilistic framework for quantifying and propagating the uncertainties in the parameters of force fields employed in molecular dynamics (MD) simulations. We propose a highly parallel implementation of the transitional Markov chain Monte Carlo for populating the posterior probability distribution of the MD force-field parameters. Efficient scheduling algorithms are proposed to handle the MD model runs and to distribute the computations in clusters with heterogeneous architectures. Furthermore
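The framework above populates a posterior over force-field parameters with a parallel transitional Markov chain Monte Carlo sampler. As a minimal illustration of the MCMC family it belongs to, here is a toy random-walk Metropolis sampler; the target density, function names and tuning values are all illustrative, and this is not the TMCMC algorithm of the paper:

```python
import math
import random

def metropolis(log_post, x0, steps, step_size, seed=0):
    """Toy 1-D random-walk Metropolis sampler.

    Proposes Gaussian steps and accepts with probability
    min(1, posterior ratio). Illustrative sketch only.
    """
    rng = random.Random(seed)
    x, lp = x0, log_post(x0)
    chain = []
    for _ in range(steps):
        xp = x + rng.gauss(0.0, step_size)
        lpp = log_post(xp)
        # Accept uphill moves always, downhill moves with prob exp(dlp).
        if lpp >= lp or rng.random() < math.exp(lpp - lp):
            x, lp = xp, lpp
        chain.append(x)
    return chain

# Target: standard normal posterior (log-density up to a constant).
chain = metropolis(lambda x: -0.5 * x * x, 0.0, 5000, 1.0)
mean = sum(chain) / len(chain)
```

TMCMC differs by evolving a population of samples through tempered intermediate distributions, which is what makes the MD model evaluations embarrassingly parallel to schedule across a cluster.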

Panagiotis Angelikopoulos; Costas Papadimitriou; Petros Koumoutsakos

2012-01-01T23:59:59.000Z

253

The Role of Models, Software Tools, and Applications in High Performance Computing  

E-Print Network (OSTI)

In this paper we identify and discuss technical issues we consider crucial to the HPCC program. The focus is on the usefulness of scalable parallel computers for National Challenge problems. We identify three interrelated aspects of usefulness: performance, programmability, and the role of an application-driven design philosophy. We discuss the importance of algorithm design and computational model development and advocate the design of libraries and software environments to bridge the gap between algorithm designer and application programmer. Finally, we consider the role of applications for solving National Challenge problems. This work was supported by the Advanced Research Projects Agency under contract DABT63-92-C-0022. The content of the information does not necessarily reflect the position or policy of the United States Government and no official endorsement should be inferred. 1 Introduction During the last several years significant progress has been made on the Grand Challe...

Leah H. Jamieson; Susanne E. Hambrusch; Ashfaq A. Khokhar; Edward J. Delp

1995-01-01T23:59:59.000Z

254

High performance systems  

SciTech Connect

This document provides a written compilation of the presentations and viewgraphs from the 1994 High Speed Computing Conference, "High Performance Systems," held at Gleneden Beach, Oregon, on April 18 through 21, 1994.

Vigil, M.B. [comp.]

1995-03-01T23:59:59.000Z

255

Proceedings of the first international workshop on High performance computing, networking and analytics for the power grid  

Science Conference Proceedings (OSTI)

It is our great pleasure to welcome you to the 1st International Workshop on High Performance Computing, Networking and Analytics for the Power Grid -- HiPCNA-PG 2011. Sensor deployments on the grid are expected to increase geometrically in the ...

Daniel Chavarría-Miranda; Bora Akyol

2011-11-01T23:59:59.000Z

256

High Performance Computing in the U.S. 1995- An Analysis on the Basis of the TOP500 List  

Science Conference Proceedings (OSTI)

In 1993, for the first time, a list of the top 500 supercomputer sites worldwide was made available. The TOP500 list allows a much more detailed and well-founded analysis of the state of high performance computing. Previously, data such as the number ...

Jack J. Dongarra; Horst D. Simon

1995-11-01T23:59:59.000Z

257

Large Scale Computing and Storage Requirements for High Energy Physics  

E-Print Network (OSTI)

... the application of high performance computing (HPC) to the ... acceleration and high performance computing. He was the ... libraries, and high performance computing. Lee is an active ...

Gerber, Richard A.

2011-01-01T23:59:59.000Z

258

Advanced Institute for Computational Science (AICS): Japanese National High-Performance Computing Research Institute and its 10-petaflops supercomputer "K"  

Science Conference Proceedings (OSTI)

Advanced Institute for Computational Science (AICS) was created in July 2010 at RIKEN under the supervision of Japanese Ministry of Education, Culture, Sports, Science, and Technology (MEXT) in order to establish the national center of excellence (COE) ... Keywords: AICS, K computer, center of excellence, supercomputer

Akinori Yonezawa; Tadashi Watanabe; Mitsuo Yokokawa; Mitsuhisa Sato; Kimihiko Hirao

2011-11-01T23:59:59.000Z

259

High Performance Scientific and Engineering Computing: Proceedings of the International Fortwihr Conference on Hpsec, Munich, March 16-18, 1998, 1st edition  

Science Conference Proceedings (OSTI)

From the Publisher: This volume contains the proceedings of an international conference on high performance scientific and engineering computing held in Munich in March 1998 and organized by FORTWIHR, the Bavarian Consortium for High Performance Scientific ...

Hans-Joachim -J Bungartz; F. Durst; C. Zenger

1999-01-01T23:59:59.000Z

260

High-Performance Computing in Remote Sensing Antonio J. Plaza1  

E-Print Network (OSTI)

Flight Center. Chapter 8: Distributed computing for inverse modeling of hyperspectral data. Chapter 11: Grid computing for remote sensing data and data analysis. Authors: Samuel D. Gasster, Craig Lee ... Laboratory, Department of Electrical and Computer Engineering, University of Maryland Baltimore County.

Chang, Chein-I


261

Money for Research, Not for Energy Bills: Finding Energy and Cost Savings in High Performance Computer Facility Designs  

SciTech Connect

High-performance computing facilities in the United States consume an enormous amount of electricity, cutting into research budgets and challenging public- and private-sector efforts to reduce energy consumption and meet environmental goals. However, these facilities can greatly reduce their energy demand through energy-efficient design of the facility itself. Using a case study of a facility under design, this article discusses strategies and technologies that can be used to help achieve energy reductions.

Drewmark Communications; Sartor, Dale; Wilson, Mark

2010-07-01T23:59:59.000Z

262

High Performance Computing in the U.S. in 1995 -- An Analysis on the Basis of the TOP500 List  

E-Print Network (OSTI)

In 1993, for the first time, a list of the top 500 supercomputer sites worldwide was made available. The TOP500 list allows a much more detailed and well-founded analysis of the state of high performance computing. Previously, data such as the number and geographical distribution of supercomputer installations were difficult to obtain, and only a few analysts undertook the effort to track the press releases by dozens of vendors. With the TOP500 report now generally and easily available, it is possible to present an analysis of the state of High Performance Computing (HPC) in the U.S. This note summarizes some of the most important observations about HPC in the U.S. as of late 1995, in particular the continued dominance of the world market in HPC by the U.S., the market penetration by commodity microprocessor based systems, and the growing industrial use of supercomputers. 1 Introduction The rapid transformation of the high performance computing market in the U.S. which began in 1994...

Jack Dongarra; Horst D. Simon

1996-01-01T23:59:59.000Z

263

High performance computing and algorithm development: application of dataset development to algorithm parameterization.  

E-Print Network (OSTI)

A number of technologies exist that capture data from biological systems. In addition, several computational tools, which aim to organize the data resulting from these ...

Jonas, Mario Ricardo Edward

2006-01-01T23:59:59.000Z

264

Tideflow: A dataflow-inspired execution model for high performance computing programs.  

E-Print Network (OSTI)

Traditional programming, execution and optimization techniques have been shown to be inadequate to exploit the features of computer processors with many cores. In particular, ...

Orozco, Daniel A.

2012-01-01T23:59:59.000Z

265

Detection and Correction of Silent Data Corruption for Large-Scale High-Performance Computing  

SciTech Connect

Faults have become the norm rather than the exception for high-end computing on clusters with 10s/100s of thousands of cores. Exacerbating this situation, some of these faults remain undetected, manifesting themselves as silent errors that corrupt memory while applications continue to operate and report incorrect results. This paper studies the potential for redundancy to both detect and correct soft errors in MPI message-passing applications. Our study investigates the challenges inherent to detecting soft errors within MPI applications while providing transparent MPI redundancy. By assuming a model wherein corruption in application data manifests itself by producing differing MPI message data between replicas, we study the protocols best suited for detecting and correcting MPI data that is the result of corruption. To experimentally validate our proposed detection and correction protocols, we introduce RedMPI, an MPI library which resides in the MPI profiling layer. RedMPI is capable of both online detection and correction of soft errors that occur in MPI applications, without requiring any modifications to the application source, by utilizing either double or triple redundancy. Our results indicate that our most efficient consistency protocol can successfully protect applications experiencing even high rates of silent data corruption with runtime overheads between 0% and 30% as compared to unprotected applications without redundancy. Using our fault injector within RedMPI, we observe that even a single soft error can have profound effects on running applications, causing a cascading pattern of corruption that in most cases spreads to all other processes. RedMPI's protection has been shown to successfully mitigate the effects of soft errors while allowing applications to complete with correct results even in the face of errors.
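The correction scheme described above amounts to majority voting over replica copies of each message. A minimal sketch of the voting logic (illustrative only, not RedMPI's implementation):

```python
from collections import Counter

def vote(replica_msgs):
    """Majority-vote over replica copies of one MPI message payload.

    With triple redundancy a single corrupted replica is both detected
    and corrected; with double redundancy a mismatch can only be
    detected, never corrected. (Toy sketch, not RedMPI code.)
    """
    counts = Counter(replica_msgs)
    value, n = counts.most_common(1)[0]
    if n == len(replica_msgs):
        return value, False   # all replicas agree: no corruption seen
    if n > len(replica_msgs) // 2:
        return value, True    # corruption detected and corrected
    raise ValueError("corruption detected but not correctable")

# One of three replicas delivers a corrupted payload.
msg, corrected = vote([b"payload", b"payload", b"payl0ad"])
```

This is why the paper distinguishes double redundancy (detection only) from triple redundancy (detection plus correction): two replicas cannot break a tie.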

Fiala, David J [ORNL]; Mueller, Frank [North Carolina State University]; Engelmann, Christian [ORNL]; Ferreira, Kurt Brian [Sandia National Laboratories (SNL)]; Brightwell, Ron [Sandia National Laboratories (SNL)]; Riesen, Rolf [IBM Research, Ireland]

2013-01-01T23:59:59.000Z

266

Detection and Correction of Silent Data Corruption for Large-Scale High-Performance Computing  

SciTech Connect

Faults have become the norm rather than the exception for high-end computing on clusters with tens to hundreds of thousands of cores. Exacerbating this situation, some of these faults remain undetected, manifesting themselves as silent errors that corrupt memory while applications continue to operate and report incorrect results. This paper studies the potential for redundancy to both detect and correct soft errors in MPI message-passing applications. Our study investigates the challenges inherent to detecting soft errors within MPI applications while providing transparent MPI redundancy. By assuming a model wherein corruption in application data manifests itself by producing differing MPI message data between replicas, we study the protocols best suited for detecting and correcting MPI data that is the result of corruption. To experimentally validate our proposed detection and correction protocols, we introduce RedMPI, an MPI library that resides in the MPI profiling layer. RedMPI is capable of both online detection and correction of soft errors that occur in MPI applications, without requiring any modifications to the application source, by utilizing either double or triple redundancy. Our results indicate that our most efficient consistency protocol can successfully protect applications experiencing even high rates of silent data corruption with runtime overheads between 0% and 30% as compared to unprotected applications without redundancy. Using our fault injector within RedMPI, we observe that even a single soft error can have profound effects on running applications, causing a cascading pattern of corruption that, in most cases, spreads to all other processes. RedMPI's protection has been shown to successfully mitigate the effects of soft errors while allowing applications to complete with correct results even in the face of errors.

Fiala, David J [ORNL]; Mueller, Frank [North Carolina State University]; Engelmann, Christian [ORNL]; Ferreira, Kurt Brian [Sandia National Laboratories (SNL)]; Brightwell, Ron [Sandia National Laboratories (SNL)]; Riesen, Rolf [IBM Research, Ireland]

2012-07-01T23:59:59.000Z

267

Detection and Correction of Silent Data Corruption for Large-Scale High-Performance Computing  

SciTech Connect

Faults have become the norm rather than the exception for high-end computing on clusters with tens to hundreds of thousands of cores. Exacerbating this situation, some of these faults remain undetected, manifesting themselves as silent errors that corrupt memory while applications continue to operate and report incorrect results. This paper studies the potential for redundancy to both detect and correct soft errors in MPI message-passing applications. Our study investigates the challenges inherent to detecting soft errors within MPI applications while providing transparent MPI redundancy. By assuming a model wherein corruption in application data manifests itself by producing differing MPI message data between replicas, we study the protocols best suited for detecting and correcting MPI data that is the result of corruption. To experimentally validate our proposed detection and correction protocols, we introduce RedMPI, an MPI library that resides in the MPI profiling layer. RedMPI is capable of both online detection and correction of soft errors that occur in MPI applications, without requiring any modifications to the application source, by utilizing either double or triple redundancy. Our results indicate that our most efficient consistency protocol can successfully protect applications experiencing even high rates of silent data corruption with runtime overheads between 0% and 30% as compared to unprotected applications without redundancy. Using our fault injector within RedMPI, we observe that even a single soft error can have profound effects on running applications, causing a cascading pattern of corruption that, in most cases, spreads to all other processes. RedMPI's protection has been shown to successfully mitigate the effects of soft errors while allowing applications to complete with correct results even in the face of errors.

Fiala, David J [ORNL]; Mueller, Frank [North Carolina State University]; Engelmann, Christian [ORNL]; Ferreira, Kurt Brian [Sandia National Laboratories (SNL)]; Brightwell, Ron [Sandia National Laboratories (SNL)]; Riesen, Rolf [IBM Research, Ireland]

2012-07-01T23:59:59.000Z

268

Coordinated Fault-Tolerance for High-Performance Computing Final Project Report  

Science Conference Proceedings (OSTI)

With the Coordinated Infrastructure for Fault Tolerance Systems (CIFTS, as the original project came to be called) project, our aim has been to understand and tackle the following broad research questions, the answers to which will help the HEC community analyze and shape the direction of research in the field of fault tolerance and resiliency on future high-end leadership systems.
- Will availability of global fault information, obtained by fault information exchange between the different HEC software on a system, allow individual system software to better detect, diagnose, and adaptively respond to faults? If fault awareness is raised throughout the system through fault information exchange, is it possible to get all system software working together to provide more comprehensive end-to-end fault management on the system?
- What are the missing fault-tolerance features that widely used HEC system software lacks today that would inhibit such software from taking advantage of systemwide global fault information?
- What are the practical limitations of a systemwide approach for end-to-end fault management based on fault awareness and coordination?
- What mechanisms, tools, and technologies are needed to bring about fault awareness and coordination of responses on a leadership-class system?
- What standards, outreach, and community interaction are needed for adoption of the concept of fault awareness and coordination for fault management on future systems?
Keeping our overall objectives in mind, the CIFTS team has taken a parallel fourfold approach.
- Our central goal was to design and implement a lightweight, scalable infrastructure with a simple, standardized interface to allow communication of fault-related information through the system and facilitate coordinated responses. This work led to the development of the Fault Tolerance Backplane (FTB) publish-subscribe API specification, together with a reference implementation and several experimental implementations on top of existing publish-subscribe tools.
- We enhanced the intrinsic fault tolerance capabilities of representative implementations of a variety of key HPC software subsystems and integrated them with the FTB. Targeted software subsystems included MPI communication libraries, checkpoint/restart libraries, resource managers and job schedulers, and system monitoring tools.
- Leveraging the aforementioned infrastructure, as well as developing and utilizing additional tools, we have examined issues associated with expanded, end-to-end fault response from both system and application viewpoints. From the standpoint of system operations, we have investigated log and root-cause analysis, anomaly detection and fault prediction, and generalized notification mechanisms. Our applications work has included libraries for fault-tolerant linear algebra, application frameworks for coupled multiphysics applications, and external frameworks to support the monitoring and response for general applications.
- Our final goal was to engage the high-end computing community to increase awareness of tools and issues around coordinated end-to-end fault management.
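The publish-subscribe pattern behind the FTB's fault information exchange can be illustrated with a minimal in-process sketch in Python. The class and method names here (FaultBackplane, subscribe, publish) are hypothetical and are not the real FTB API, which is a C specification with its own event namespace.

```python
# Minimal in-process sketch of a fault-information backplane:
# subsystems subscribe to fault topics and are notified when any
# other subsystem publishes a matching fault event.
from collections import defaultdict

class FaultBackplane:
    def __init__(self):
        self._subscribers = defaultdict(list)  # topic -> callbacks

    def subscribe(self, topic, callback):
        self._subscribers[topic].append(callback)

    def publish(self, topic, event):
        # Deliver the event to every subscriber of this topic.
        for cb in self._subscribers[topic]:
            cb(event)

bus = FaultBackplane()
seen = []
bus.subscribe("node.failure", seen.append)  # e.g. the job scheduler
bus.subscribe("node.failure", seen.append)  # e.g. the MPI library
bus.publish("node.failure", {"node": "n042", "kind": "ECC"})
```

The point of the design, as the report describes it, is that both the scheduler and the MPI library learn of the same fault and can coordinate their responses instead of each discovering it independently.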

Panda, Dhabaleswar Kumar [The Ohio State University]; Beckman, Pete

2011-07-01T23:59:59.000Z

269

A High-Performance Hybrid Computing Approach to Massive Contingency Analysis in the Power Grid  

Science Conference Proceedings (OSTI)

Operating the electrical power grid to prevent power black-outs is a complex task. An important aspect of this is contingency analysis, which involves understanding and mitigating potential failures in power grid elements such as transmission lines. ... Keywords: hybrid computational systems, middleware, power grid, contingency analysis

Ian Gorton; Zhenyu Huang; Yousu Chen; Benson Kalahar; Shuangshuang Jin; Daniel Chavarría-Miranda; Doug Baxter; John Feo

2009-12-01T23:59:59.000Z

270

VNET/P: Bridging the Cloud and High Performance Computing Through Fast  

E-Print Network (OSTI)

[Slide excerpt] Monitoring and adaptation: monitor application communication/computation behavior; adaptive and autonomic mapping onto clusters and supercomputers with gigabit or 10-gigabit networks. Related work cited: A. Sundararaj, P. Dinda, "Towards Virtual Networks for Virtual Machine Grid Computing." Remaining fragments cover the VNET/P data path (packet transmission) between guest, Palacios, and the physical network.

Bustamante, Fabián E.

271

Reducing electricity cost through virtual machine placement in high performance computing clouds  

Science Conference Proceedings (OSTI)

In this paper, we first study the impact of load placement policies on cooling and maximum data center temperatures in cloud service providers that operate multiple geographically distributed data centers. Based on this study, we then propose dynamic ... Keywords: computing cloud, cooling, energy, multi-data-center

Kien Le; Ricardo Bianchini; Jingru Zhang; Yogesh Jaluria; Jiandong Meng; Thu D. Nguyen

2011-11-01T23:59:59.000Z

272

Study of Machine Round-Off response on Weather Forecasting Simulations Using High Performance Computing Systems  

Science Conference Proceedings (OSTI)

The weather forecasting model T80L18 is found to be sensitive to variations in the computing platform. The global spectral model simulation variation due to machine round off is examined using rounding mode analysis and the perturbation methods. The ...

S. Janakiraman; J. V. Ratnam; Akshara Kaginalkar

2000-05-01T23:59:59.000Z

273

Ab Initio potential grid based docking: From High Performance Computing to In Silico Screening  

Science Conference Proceedings (OSTI)

We present a new and completely parallel method for protein-ligand docking. The potential of the docking target structure is obtained directly from the electron density derived through an ab initio computation. A large subregion of the crystal structure of Isocitrate Lyase

Marc R. de Jonge; H. Maarten Vinkers; Joop H. van Lenthe; Frits Daeyaert; Ian J. Bush; Huub J. J. van Dam; Paul Sherwood; Martyn F. Guest

2007-01-01T23:59:59.000Z

274

DAGuE: A generic distributed DAG engine for High Performance Computing  

Science Conference Proceedings (OSTI)

The frenetic development of the current architectures places a strain on the current state-of-the-art programming environments. Harnessing the full potential of such architectures is a tremendous task for the whole scientific computing community. We ... Keywords: Architecture aware scheduling, HPC, Heterogeneous architectures, Micro-task DAG

George Bosilca; Aurelien Bouteiller; Anthony Danalis; Thomas Herault; Pierre Lemarinier; Jack Dongarra

2012-01-01T23:59:59.000Z

275

Accelerated computational discovery of high-performance materials for organic photovoltaics by means of cheminformatics  

E-Print Network (OSTI)

is currently derived from fossil fuels, and all renewable energy sources will be needed in order to satisfy the present and future demand for clean energy. Solar power is a prominent source of renewable energy. ... voltage and efficiency characteristics of candidate molecules. The descriptors are readily computed, which allows us

Heller, Eric

276

High-performance computing tools for the integrated assessment and modelling of social-ecological systems  

Science Conference Proceedings (OSTI)

Integrated spatio-temporal assessment and modelling of complex social-ecological systems is required to address global environmental challenges. However, the computational demands of this modelling are unlikely to be met by traditional Geographic Information ... Keywords: AML, CPU, Cluster, Concurrency, Environmental, GIS, GPU, Global challenges, Graphics processing unit (GPU), Grid, HPC, Multi-core, NPV, Parallel programming

Brett A. Bryan

2013-01-01T23:59:59.000Z

277

High Performance Computing for Stability Problems - Applications to Hydrodynamic Stability and Neutron Transport Criticality.  

E-Print Network (OSTI)

In this work we examine two kinds of applications in terms of stability and perform numerical evaluations and benchmarks on parallel platforms. We consider the (more)

Subramanian, Chandramowli

2011-01-01T23:59:59.000Z

278

Fair Share on High Performance Computing Systems: What Does Fair Really Mean?  

Science Conference Proceedings (OSTI)

We report on a performance evaluation of a Fair Share system at the ASCI Blue Mountain supercomputer cluster. We study the impacts of share allocation under Fair Share on wait times and expansion factor. We also measure the Service Ratio, a typical figure ...
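One metric the abstract mentions, the expansion factor, is commonly defined as (wait time + run time) / run time, so a value of 1.0 means a job experienced no queueing delay. A one-line sketch in Python, with illustrative numbers (the paper's own definitions may differ in detail):

```python
# Expansion factor of a batch job: total turnaround relative to run time.
def expansion_factor(wait_s, run_s):
    return (wait_s + run_s) / run_s

# A job that waited 5 minutes and ran 10 minutes:
x = expansion_factor(wait_s=300.0, run_s=600.0)  # 1.5
```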

Stephen D. Kleban; Scott H. Clearwater

2003-05-01T23:59:59.000Z

279

Asynchronous and Multiprecision Linear Solvers - Scalable and Fault-Tolerant Numerics for Energy Efficient High Performance Computing.  

E-Print Network (OSTI)

Asynchronous methods minimize idle times by removing synchronization barriers, and therefore allow the efficient usage of computer systems. The implied high tolerance with respect to (more)

Anzt, Hartwig

2012-01-01T23:59:59.000Z

280

Proceedings of the 20th international symposium on High performance distributed computing  

Science Conference Proceedings (OSTI)

Welcome to ACM HPDC 2011! This is the twentieth anniversary year of HPDC, and I am pleased to report that we continue to be a growing, engaged, and exciting community. The program consists of three days packed full of the latest developments in high ...

Arthur "Barney" Maccabe; Douglas Thain

2011-06-01T23:59:59.000Z



281

Process fault tolerance: semantics, design and applications for high performance computing  

E-Print Network (OSTI)

With increasing numbers of processors on current machines, the probability of node or link failures is also increasing. Therefore, application-level fault tolerance is becoming an increasingly important issue for both end-users and the institutions running the machines. In this paper we present the semantics of a fault-tolerant version of the message passing interface (MPI), the de-facto standard for communication in scientific applications, which gives applications the possibility of recovering from a node or link error and continuing execution in a well-defined way. We present the architecture of fault-tolerant MPI, an implementation of MPI using the semantics presented above, as well as benchmark results with various applications. An example of a fault-tolerant parallel equation solver, performance results, and the time required to recover from a process failure are furthermore detailed.

Graham E. Fagg; Edgar Gabriel; Zizhong Chen; Thara Angskun; George Bosilca; Jelena Pjesivac-Grbovic; Jack J. Dongarra

2004-01-01T23:59:59.000Z

282

ScalaTrace: Scalable Compression and Replay of Communication Traces for High Performance Computing  

Science Conference Proceedings (OSTI)

Characterizing the communication behavior of large-scale applications is a difficult and costly task due to code/system complexity and long execution times. While many tools to study this behavior have been developed, these approaches either aggregate information in a lossy way through high-level statistics or produce huge trace files that are hard to handle. We contribute an approach that provides orders of magnitude smaller, if not near-constant size, communication traces regardless of the number of nodes while preserving structural information. We introduce intra- and inter-node compression techniques of MPI events that are capable of extracting an application's communication structure. We further present a replay mechanism for the traces generated by our approach and discuss results of our implementation for BlueGene/L. Given this novel capability, we discuss its impact on communication tuning and beyond. To the best of our knowledge, such a concise representation of MPI traces in a scalable manner combined with deterministic MPI call replay are without any precedent.
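The intra-node compression the abstract describes exploits the repetitive, loop-structured nature of MPI event streams. As a loose illustration only (ScalaTrace's actual encoding also captures nested loop structure and call parameters, which a plain run-length encoding does not), consider compressing repeated events in Python:

```python
# Illustrative run-length encoding of an MPI event stream: the simplest
# form of the repetition that trace compressors exploit.
def rle(events):
    out = []
    for e in events:
        if out and out[-1][0] == e:
            out[-1][1] += 1          # extend the current run
        else:
            out.append([e, 1])       # start a new run
    return out

trace = ["MPI_Isend", "MPI_Irecv", "MPI_Wait", "MPI_Wait", "MPI_Wait"]
compressed = rle(trace)
# [["MPI_Isend", 1], ["MPI_Irecv", 1], ["MPI_Wait", 3]]
```

For a loop executing the same communication pattern millions of times, such a representation stays near-constant in size while a raw trace grows linearly, which is the behavior the paper reports.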

Noeth, M; Ratn, P; Mueller, F; Schulz, M; de Supinski, B R

2008-05-16T23:59:59.000Z

283

Photons, Photosynthesis, and High-Performance Computing: Challenges, Progress, and Promise of Modeling Metabolism in Green Algae  

DOE Green Energy (OSTI)

The complexity associated with biological metabolism considered at a kinetic level presents a challenge to quantitative modeling. In particular, the relatively sparse knowledge of parameters for enzymes with known kinetic responses is problematic. The possible space of these parameters is of high-dimension, and sampling of such a space typifies a combinatorial explosion of possible dynamic states. However, with sufficient quantitative transcriptomics, proteomics, and metabolomics data at hand, these challenges could be met by high-performance software with sampling, fitting, and optimization capabilities. With this in mind, we present the High-Performance Systems Biology Toolkit HiPer SBTK, an evolving software package to simulate, fit, and optimize metabolite concentrations and fluxes within the space of rate and binding parameters associated with detailed enzyme kinetic models. We present our chosen modeling paradigm for the formulation of metabolic pathway models, the means to address the challenge of representing such models in a precise and persistent fashion using the standardized Systems Biology Markup Language, and our second-generation model of H2-associated Chlamydomonas metabolism. Processing of such models for hierarchically parallelized simulation and optimization, job specification by the user through a GUI interface, software capabilities and initial scaling data, and the mapping of the computation to biological questions is also discussed. Moreover, we present near-term future software and model development goals.

Chang, C. H.; Graf, P.; Alber, D. M.; Kim, K.; Murray, G.; Posewitz, M.; Seibert, M.

2008-01-01T23:59:59.000Z

284

Harnessing the Department of Energy's High-Performance Computing Expertise to Strengthen the U.S. Chemical Enterprise  

DOE Green Energy (OSTI)

High-performance computing (HPC) is one area where the DOE has developed extensive expertise and capability. However, this expertise currently is not properly shared with or used by the private sector to speed product development, enable industry to move rapidly into new areas, and improve product quality. Such use would lead to substantial competitive advantages in global markets and yield important economic returns for the United States. To stimulate the dissemination of DOE's HPC expertise, the Council for Chemical Research (CCR) and the DOE jointly held a workshop on this topic. Four important energy topic areas were chosen as the focus of the meeting: Biomass/Bioenergy, Catalytic Materials, Energy Storage, and Photovoltaics. Academic, industrial, and government experts in these topic areas participated in the workshop to identify industry needs, evaluate the current state of expertise, offer proposed actions and strategies, and forecast the expected benefits of implementing those strategies.

Dixon, David A.; Dupuis, Michel; Garrett, Bruce C.; Neaton, Jeffrey B.; Plata, Charity; Tarr, Matthew A.; Tomb, Jean-Francois; Golab, Joseph T.

2012-01-17T23:59:59.000Z

285

Automatic Code Generation for High Performance Computing in Environmental Modeling (published in proceedings of the 1996 EUROSIM Int'l Conf., June 10-12, 1996, Delft, The Netherlands, pp. 421-428)  

E-Print Network (OSTI)

Automatic Code Generation for High Performance Computing in Environmental Modeling. Robert van Engelen, Lex Wolters, and Gerard Cats (High Performance Computing Division; contact: cats@knmi.nl). In this paper we will discuss automatic code generation for high performance computer

van Engelen, Robert A.

286

CMS Computing: Performance and Outlook  

E-Print Network (OSTI)

After years of development, the CMS distributed computing system is now in full operation. The LHC continues to set records for instantaneous luminosity, and CMS continues to record data at 300 Hz. Because of the intensity of the beams, there are multiple proton-proton interactions per beam crossing, leading to larger and larger event sizes and processing times. The CMS computing system has responded admirably to these challenges. We present the current status of the system, describe the recent performance, and discuss the challenges ahead and how we intend to meet them.

Kenneth Bloom; for the CMS Collaboration

2011-10-02T23:59:59.000Z

287

High Performance Rooftop Units  

NLE Websites -- All DOE Office Websites (Extended Search)

High Performance RTUs Life Cycle Cost Comparison Calculator * Web-based tool for comparing costs of standard and high performance RTUs. * Weather data for 237...

288

Measuring the performance of parallel computers with distributed memory  

Science Conference Proceedings (OSTI)

Basic techniques for measuring the performance of parallel computers with distributed memory are considered. The results obtained via the de-facto standard LINPACK benchmark suite are shown to be weakly related to the efficiency of applied parallel programs. ... Keywords: HPC, MIMD, cluster, communication expenses, data processing, high-performance computing, optimization, parallel computations, performance, supercomputer
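The contrast the abstract draws between LINPACK figures and the efficiency of applied parallel programs rests on two standard metrics, speedup and parallel efficiency. A minimal sketch in Python, with illustrative timings (the paper's own measurement methodology is more involved):

```python
# Speedup: how much faster the parallel run is than the serial run.
def speedup(t_serial, t_parallel):
    return t_serial / t_parallel

# Parallel efficiency: speedup normalized by the number of processors;
# 1.0 means perfect scaling, lower values reflect communication expenses.
def efficiency(t_serial, t_parallel, n_procs):
    return speedup(t_serial, t_parallel) / n_procs

# e.g. a program taking 120 s on 1 core and 20 s on 8 cores:
s = speedup(120.0, 20.0)         # 6.0
e = efficiency(120.0, 20.0, 8)   # 0.75
```

A machine can post excellent LINPACK numbers yet deliver low efficiency on a communication-bound application, which is the weak relation the abstract refers to.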

R. A. Iushchenko

2009-11-01T23:59:59.000Z

289

Green HPC : a system design approach to energy-efficient datacenters; Green High Performance Computing : a system design approach to energy-efficient datacenters; System design approach to energy-efficient data centers.  

E-Print Network (OSTI)

Green HPC is the new standard for High Performance Computing (HPC). This has now become the primary interest among HPC researchers because of a renewed (more)

Keville, Kurt (Kurt Lawrence)

2011-01-01T23:59:59.000Z

290

High Performance Computing in the U.S. - An Analysis on the Basis of the TOP500 List Horst D. Simon  

E-Print Network (OSTI)

In 1993, for the first time, a list of the top 500 supercomputer sites worldwide was made available. The TOP500 list allows a much more detailed and well-founded analysis of the state of high performance computing. Previously, data such as the number and geographical distribution of supercomputer installations were difficult to obtain, and only a few analysts undertook the effort to track the press releases by dozens of vendors. With the TOP500 report now generally and easily available, it is possible to present an analysis of the state of High Performance Computing (HPC) in the U.S. This note summarizes some of the most important observations about HPC in the U.S.

Applied Research Branch; Horst D. Simon

1994-01-01T23:59:59.000Z

291

Coordinated resource management for guaranteed high performance and efficient utilization in Lambda-Grids  

E-Print Network (OSTI)

Journal of High Performance Computing Applications, August ...; Conference on High Performance Computing and Communication ...; Symposium on High-Performance Computing in an Advanced ...

Taesombut, Nut

2007-01-01T23:59:59.000Z

292

High Performance Computing in Multi-Body System Design S. Baldini, L. Giraud, J. M. Jimenez, L. M. Matey and J. G. Izaguirre  

E-Print Network (OSTI)

This paper presents recent developments of high performance computing and networking techniques in the field of computer-aided multi-body analysis and design. We describe the main achievements obtained in the development of a tool to aid in the design of new industrial mechanical systems by performing parallel parametric multi-body simulations. The parallel software is composed of four main modules: two user-friendly interfaces to input the data and visualise the results, a simulation module, and a parallel manager module, which is the main focus of this paper. We show that the implementation, using PVM [1], of simple and well-known ideas leads to efficient and flexible parallel software targeted at heterogeneous networks of non-dedicated workstations, which is the parallel platform available in most mechanical design departments. We describe the main features of this module, which implements, for the sake of efficiency and robustness, a load-balancing strategy and fault tolerance c...

S. Baldini; L. Giraud; J. M. Jimenez; L. M. Matey; J. G. Izaguirre

1997-01-01T23:59:59.000Z

293

Final report for "High performance computing for advanced national electric power grid modeling and integration of solar generation resources", LDRD Project No. 149016.  

Science Conference Proceedings (OSTI)

Design and operation of the electric power grid (EPG) relies heavily on computational models. High-fidelity, full-order models are used to study transient phenomena on only a small part of the network. Reduced-order dynamic and power flow models are used when analyses involving thousands of nodes are required, due to the computational demands of simulating large numbers of nodes. The level of complexity of the future EPG will dramatically increase due to large-scale deployment of variable renewable generation, active load and distributed generation resources, adaptive protection and control systems, and price-responsive demand. High-fidelity modeling of this future grid will require significant advances in coupled, multi-scale tools and their use on high performance computing (HPC) platforms. This LDRD report demonstrates SNL's capability to apply HPC resources to three tasks: (1) high-fidelity, large-scale modeling of power system dynamics; (2) statistical assessment of grid security via Monte Carlo simulations of cyber attacks; and (3) development of models to predict variability of solar resources at locations where little or no ground-based measurements are available.

Reno, Matthew J.; Riehm, Andrew Charles; Hoekstra, Robert John; Munoz-Ramirez, Karina; Stamp, Jason Edwin; Phillips, Laurence R.; Adams, Brian M.; Russo, Thomas V.; Oldfield, Ron A.; McLendon, William Clarence, III; Nelson, Jeffrey Scott; Hansen, Clifford W.; Richardson, Bryan T.; Stein, Joshua S.; Schoenwald, David Alan; Wolfenbarger, Paul R.

2011-02-01T23:59:59.000Z

294

Building the Next Generation of High Performance Computing Researchers in Engineering and Science: The NCSA/ARL MSRC PET Summer Internship Program  

E-Print Network (OSTI)

The National Center for Supercomputing Applications (NCSA) at the University of Illinois at Urbana-Champaign (UIUC) is lead academic institution for the Army Research Laboratory Major Shared Resource Center Programming Environment and Training Program (ARL MSRC PET). This program is part of the Department of Defense (DoD) High Performance Computing Modernization Program. ARL MSRC PET has a scientific advancement, outreach and training mission. With US-wide faculty and ARL engineers and scientists, the ARL MSRC PET Training Team offered its Summer Intern Program in High Performance Computing (HPC) in 1998, 1999, and will again in 2000. It encourages young Americans to consider computer science and engineering careers in DoD and elsewhere. A program focus is outreach to underrepresented minorities and women. Mentors and program administrators play a crucial role. This paper discusses the development of this innovative government-university collaborative education program and lessons learned for those wishing to establish similar programs to introduce young Americans to real-life HPC research and applications.

Mary Bea Walker; Emma C. Grove; Virginia A. To

2000-01-01T23:59:59.000Z

295

Automatic music performance with computers  

Science Conference Proceedings (OSTI)

Developments of the Utah-BYU music project, which have culminated in the design and implementation of a portable computer-driven music-generating system capable of interpretive playing of transcribed musical scores, will be discussed.

A. C. Ashton; R. F. Bennion

1979-01-01T23:59:59.000Z

296

What is High Performance Computing?  

E-Print Network (OSTI)

What is High Performance Computing? Union College Albany Workshop on "High Performance Computing." gcf@indiana.edu, http://www.infomall.org. The meaning of this was clear 20 years ago when we were planning/starting the HPCC (High Performance Computing and Communication

Barr, Valerie

297

Supporting Computational Data Model Representation with High-performance I/O in Parallel netCDF  

E-Print Network (OSTI)

[Promotional excerpt] Stay competitive: Argonne Leadership Computing Facility, industry@alcf.anl.gov. Mira ranks third on the TOP500; create more accurate models for your business with Mira, the ALCF's new petascale IBM Blue Gene/Q system (ALCF Science Director). Cutting-edge supercomputing keeps you competitive, a key driver of our nation

Choudhary, Alok

298

High Performance Tooling Materials  

Science Conference Proceedings (OSTI)

High performance tools are necessary for the successful manufacturing of every consumer product as well as oil drilling and mining operations. Increasing...

299

Lawrence Livermore National Laboratory opens High Performance...  

NLE Websites -- All DOE Office Websites (Extended Search)

06/30/2011 | NR-11-06-08. Lawrence Livermore National Laboratory opens High Performance Computing Innovation Center for collaboration with industry. Donald B. Johnston, LLNL,...

300

High Performance Network Monitoring  

SciTech Connect

Network monitoring requires substantial data and error analysis to overcome issues with clusters. Zenoss and Splunk help monitor system log messages that report cluster issues to monitoring services. The Infiniband infrastructure on a number of clusters was upgraded to ibmon2, which requires different filters to report errors to system administrators. The focus for this summer is to: (1) implement ibmon2 filters on monitoring boxes to report system errors to system administrators using Zenoss and Splunk; (2) modify and improve scripts for monitoring and administrative usage; (3) learn more about networks, including services and maintenance for high performance computing systems; and (4) gain life experience working with professionals in real-world situations. Filters were created to account for clusters running ibmon2 v1.0.0-1. Ten filters are currently implemented for ibmon2 using Python. The filters watch thresholds on port counters; above certain counts, they report errors to on-call system administrators and update the monitoring grid to show the local host with the issue.
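The threshold filtering described above can be sketched as follows. The counter names and limits here are invented for illustration; ibmon2's real filters and the InfiniBand counters they watch are not specified in this abstract.

```python
# Hypothetical sketch of a port-counter threshold filter: flag any
# counter whose value meets or exceeds its configured limit.
THRESHOLDS = {         # illustrative limits, not ibmon2's real config
    "symbol_errors": 10,
    "link_downed": 1,
}

def check_port(counters):
    """Return the names of counters that exceed their threshold."""
    return [name for name, limit in THRESHOLDS.items()
            if counters.get(name, 0) >= limit]

# A port showing 25 symbol errors trips the first filter only.
alerts = check_port({"symbol_errors": 25, "link_downed": 0})
```

In a deployment like the one described, the returned alert list would be handed to the monitoring stack (Zenoss/Splunk) to page the on-call administrator.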

Martinez, Jesse E [Los Alamos National Laboratory]

2012-08-10T23:59:59.000Z



301

System Software and Tools for High Performance Computing Environments: A report on the findings of the Pasadena Workshop, April 14--16, 1992  

SciTech Connect

The Pasadena Workshop on System Software and Tools for High Performance Computing Environments was held at the Jet Propulsion Laboratory from April 14 through April 16, 1992. The workshop was sponsored by a number of Federal agencies committed to the advancement of high performance computing (HPC) both as a means to advance their respective missions and as a national resource to enhance American productivity and competitiveness. Over a hundred experts in related fields from industry, academia, and government were invited to participate in this effort to assess the current status of software technology in support of HPC systems. The overall objectives of the workshop were to understand the requirements and current limitations of HPC software technology and to contribute to a basis for establishing new directions in research and development for software technology in HPC environments. This report includes reports written by the participants of the workshop's seven working groups. Materials presented at the workshop are reproduced in appendices. Additional chapters summarize the findings and analyze their implications for future directions in HPC software technology development.

Sterling, T. [Universities Space Research Association, Washington, DC (United States)]; Messina, P. [Jet Propulsion Lab., Pasadena, CA (United States)]; Chen, M. [Yale Univ., New Haven, CT (United States)] [and others]

1993-04-01T23:59:59.000Z

302

High Performance Sustainable Buildings  

NLE Websites -- All DOE Office Websites (Extended Search)

become a High Performance Sustainable Building in 2013. On the former County landfill, a photovoltaic array field uses solar energy to provide power for Los Alamos County and the...

303

High Performance Window Attachments  

Energy.gov (U.S. Department of Energy (DOE)) Indexed Site

High Performance Window Attachments. D. Charlie Curcija, Lawrence Berkeley National Laboratory, dccurcija@lbl.gov, 510-495-2602. April 4, 2013. Purpose & Objectives. Impact of Project: * Motivate manufacturers to make improvements in window system U-factors, SHGC, and daylighting utilization * Increase awareness of the benefits of energy-efficient window attachments. Problem Statement: * A wide range of residential window attachments are available, but they have widely unknown

304

High Performance Computing (HPC) Technologies  

LLNL expertise and capabilities are available on a nonexclusive basis subject to DOE requirements and priorities, and subject to complementary ...

305

Applications: n High performance computing  

E-Print Network (OSTI)

Division Licensable Technologies EnergyFitTM An Equal Opportunity Employer / Operated by Los Alamos National Security LLC for DOE/NNSA www.lanl.gov/partnerships/license/technologies/ Summary: Energy use central databases. Hundreds of millions of dollars are wasted annually due to power consumption

306

Performance of Massively Parallel Computers for Spectral Atmospheric Models  

Science Conference Proceedings (OSTI)

Massively parallel processing (MPP) computer systems use high-speed interconnection networks to link hundreds or thousands of RISC microprocessors. With each microprocessor having a peak performance of 100 or more megaflops, there is ...

Ian T. Foster; Brian Toonen; Patrick H. Worley

1996-10-01T23:59:59.000Z

307

High Performance Windows Volume Purchase: About the High Performance  

NLE Websites -- All DOE Office Websites (Extended Search)


308

High performance steam development  

SciTech Connect

DOE has launched a program to make a step change in power plant performance by moving to 1500 F steam, since the highest possible performance gains can be achieved in a 1500 F steam system when using a topping turbine in a back-pressure steam turbine for cogeneration. A 500-hour proof-of-concept steam generator test module was designed, fabricated, and successfully tested. It has four once-through steam generator circuits. The complete HPSS (high performance steam system) was tested above 1500 F and 1500 psig for over 102 hours at full power.

Duffy, T.; Schneider, P.

1995-12-31T23:59:59.000Z

309

Performance tuning for high performance computing systems.  

E-Print Network (OSTI)

??A Distributed System is composed by integration between loosely coupled software components and the underlying hardware resources that can be distributed over the standard internet (more)

Pahuja, Himanshu

2011-01-01T23:59:59.000Z

310

High Performance www.rrze.uni-erlangen.de  

E-Print Network (OSTI)

High Performance Computing at RRZE 2008 (HPC@RRZE), www.rrze.uni-erlangen.de. G. Hager, T. Zeiser and G. Wellein: Concepts of High Performance Computing. In: Fehske et al., Lect. Notes Phys. 739, 681. ... Optimization Techniques for the Hitachi SR8000 architecture. In: A. Bode (Ed.): High Performance Computing

Fiebig, Peter

311

High Performance Buildings Database  

DOE Data Explorer (OSTI)

The High Performance Buildings Database is a shared resource for the building industry. The Database, developed by the U.S. Department of Energy and the National Renewable Energy Laboratory (NREL), is a unique central repository of in-depth information and data on high-performance, green building projects across the United States and abroad. The Database includes information on the energy use, environmental performance, design process, finances, and other aspects of each project. Members of the design and construction teams are listed, as are sources for additional information. In total, up to twelve screens of detailed information are provided for each project profile. Projects range in size from small single-family homes or tenant fit-outs within buildings to large commercial and institutional buildings and even entire campuses.

The Database is a data repository as well. A series of Web-based data-entry templates allows anyone to enter information about a building project into the database. Once a project has been submitted, each of the partner organizations can review the entry and choose whether or not to publish that particular project on its own Web site. Early partners using the database include:

  • The Federal Energy Management Program
  • The U.S. Green Building Council
  • The American Institute of Architects' Committee on the Environment
  • The Massachusetts Technology Collaborative
  • Efficiency Vermont
    • Copied (then edited) from http://eere.buildinggreen.com/partnering.cfm

312

Can quantum computer perform better than classical?  

E-Print Network (OSTI)

A theoretical model of a quantum device that can factorize any number N in two steps, i.e., by preparing an input state and performing a measurement, is discussed. The analysis reveals that the duration of state preparation and measurement is proportional to N, while the energy consumption grows like log N. These results suggest the existence of a Heisenberg-type relation putting limits on the efficiency of a quantum computer in terms of its total computation time, total energy consumption, and the classical complexity of the problem.
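The scalings stated in this abstract can be summarized compactly. The notation below is an illustrative paraphrase of the claims, not the paper's own formulation:

```latex
% Claimed scalings for factoring N on the proposed device:
T_{\mathrm{prep+meas}} \propto N, \qquad E \propto \log N
% together with a suggested Heisenberg-type trade-off that bounds the
% achievable efficiency in terms of total time T, total energy E, and
% the classical complexity of factoring N.
```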

Robert Alicki

2000-06-05T23:59:59.000Z

313

Can quantum computer perform better than classical?  

E-Print Network (OSTI)

A theoretical model of a quantum device that can factorize any number N in two steps, i.e., by preparing an input state and performing a measurement, is discussed. The analysis reveals that the duration of state preparation and measurement is proportional to N, while the energy consumption grows like log N. These results suggest the existence of a Heisenberg-type relation putting limits on the efficiency of a quantum computer in terms of its total computation time, total energy consumption, and the classical complexity of the problem.

Alicki, R

2000-01-01T23:59:59.000Z

314

Scaling up transit priority modelling using high-throughput computing  

Science Conference Proceedings (OSTI)

The optimization of Road Space Allocation (RSA) from a network perspective is computationally challenging. An analogue to the Network Design Problem (NDP), RSA can be classified NP-hard. In large-scale networks when the number of alternatives increases ... Keywords: genetic algorithm, high-performance computing, high-throughput computing, transport modelling
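A genetic algorithm of the kind listed in this record's keywords can be sketched generically. The bitstring objective below is a toy stand-in, not the paper's road-space-allocation model; all names are illustrative.

```python
import random

def evolve(fitness, n_bits=20, pop_size=30, generations=40, seed=1):
    """Minimal genetic algorithm: tournament selection, one-point
    crossover, bit-flip mutation. Returns the best individual found."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    best = max(pop, key=fitness)
    for _ in range(generations):
        nxt = []
        while len(nxt) < pop_size:
            # Tournament selection of two parents (tournament size 3).
            p1 = max(rng.sample(pop, 3), key=fitness)
            p2 = max(rng.sample(pop, 3), key=fitness)
            cut = rng.randrange(1, n_bits)       # one-point crossover
            child = p1[:cut] + p2[cut:]
            for i in range(n_bits):              # bit-flip mutation
                if rng.random() < 1.0 / n_bits:
                    child[i] ^= 1
            nxt.append(child)
        pop = nxt
        best = max(pop + [best], key=fitness)    # keep the elite
    return best

if __name__ == "__main__":
    # Toy objective ("one-max"): maximize the number of 1-bits.
    print(sum(evolve(sum)))
```

In a high-throughput-computing setting, the expensive fitness evaluations (here just `sum`) are what get farmed out across many machines.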

Mahmoud Mesbah, Majid Sarvi, Jefferson Tan, Fateme Karimirad

2012-01-01T23:59:59.000Z

315

Elucidating geochemical response of shallow heterogeneous aquifers to CO2 leakage using high-performance computing: Implications for monitoring of CO2 sequestration  

SciTech Connect

Predicting and quantifying impacts of potential carbon dioxide (CO2) leakage into shallow aquifers that overlie geologic CO2 storage formations is an important part of developing reliable carbon storage techniques. Leakage of CO2 through fractures, faults or faulty wellbores can reduce groundwater pH, inducing geochemical reactions that release solutes into the groundwater and pose a risk of degrading groundwater quality. In order to help quantify this risk, predictions of metal concentrations are needed during geologic storage of CO2. Here, we present regional-scale reactive transport simulations, at relatively fine-scale, of CO2 leakage into shallow aquifers run on the PFLOTRAN platform using high-performance computing. Multiple realizations of heterogeneous permeability distributions were generated using standard geostatistical methods. Increased statistical anisotropy of the permeability field resulted in more lateral and vertical spreading of the plume of impacted water, leading to increased Pb2+ (lead) concentrations and lower pH at a well down gradient of the CO2 leak. Pb2+ concentrations were higher in simulations where calcite was the source of Pb2+ compared to galena. The low solubility of galena effectively buffered the Pb2+ concentrations as galena reached saturation under reducing conditions along the flow path. In all cases, Pb2+ concentrations remained below the maximum contaminant level set by the EPA. Results from this study, compared to natural variability observed in aquifers, suggest that bicarbonate (HCO3) concentrations may be a better geochemical indicator of a CO2 leak under the conditions simulated here.

Navarre-Sitchler, Alexis K.; Maxwell, Reed M.; Siirila, Erica R.; Hammond, Glenn E.; Lichtner, Peter C.

2013-03-01T23:59:59.000Z

316

High Energy Physics from High Performance Computing  

E-Print Network (OSTI)

We discuss Quantum Chromodynamics calculations using the lattice regulator. The theory of the strong force is a cornerstone of the Standard Model of particle physics. We present USQCD collaboration results obtained on Argonne National Lab's Intrepid supercomputer that deepen our understanding of these fundamental theories of Nature and provide critical support to frontier particle physics experiments and phenomenology.

T. Blum

2009-08-06T23:59:59.000Z

317

Intelligent Management of the Power Grid: An Anticipatory, Multi-Agent, High Performance Computing Approach: EPRI/DoD Complex Intera ctive Networks/Systems Initiative: Second Annual Report  

Science Conference Proceedings (OSTI)

This report details the second-year research accomplishments for one of six research consortia established under the Complex Interactive Networks/Systems Initiative. This particular document details an anticipatory, multi-agent, high performance computing approach for intelligent management of the power grid.

2001-06-21T23:59:59.000Z

318

Java Performance for Scientific Applications on LLNL Computer Systems  

Science Conference Proceedings (OSTI)

Languages in use for high performance computing at the laboratory--Fortran (f77 and f90), C, and C++--have many years of development behind them and are generally considered the fastest available. However, Fortran and C do not readily extend to object-oriented programming models, limiting their capability for very complex simulation software. C++ facilitates object-oriented programming but is a very complex and error-prone language. Java offers a number of capabilities that these other languages do not. For instance it implements cleaner (i.e., easier to use and less prone to errors) object-oriented models than C++. It also offers networking and security as part of the language standard, and cross-platform executables that make it architecture neutral, to name a few. These features have made Java very popular for industrial computing applications. The aim of this paper is to explain the trade-offs in using Java for large-scale scientific applications at LLNL. Despite its advantages, the computational science community has been reluctant to write large-scale computationally intensive applications in Java due to concerns over its poor performance. However, considerable progress has been made over the last several years. The Java Grande Forum [1] has been promoting the use of Java for large-scale computing. Members have introduced efficient array libraries, developed fast just-in-time (JIT) compilers, and built links to existing packages used in high performance parallel computing.

Kapfer, C; Wissink, A

2002-05-10T23:59:59.000Z

319

Creating high performance enterprises  

E-Print Network (OSTI)

How do enterprises successfully conceive, design, deliver, and operate large-scale, engineered systems? These large-scale projects often involve high complexity, significant technical challenges, a large number of diverse ...

Stanke, Alexis K. (Alexis Kristen), 1977-

2006-01-01T23:59:59.000Z

320

High Performance and Sustainable Buildings Guidance | Department...  

Energy.gov (U.S. Department of Energy (DOE)) Indexed Site

High Performance and Sustainable Buildings Guidance. More Documents &...



321

MPICH | High-Performance Portable MPI  

NLE Websites -- All DOE Office Websites (Extended Search)

MPICH | High-Performance Portable MPI. MPICH is a high performance and widely portable implementation of the Message Passing Interface (MPI) standard. The goals of MPICH are: to provide an MPI implementation that efficiently supports different computation and communication platforms, including commodity clusters (desktop systems, shared-memory systems, multicore architectures), high-speed networks, and proprietary high-end computing systems (Blue Gene, Cray); and to enable cutting-edge research in MPI through an easy-to-extend

322

Poster: performance modeling and computational quality of service (CQoS) in synergia2 accelerator simulations  

Science Conference Proceedings (OSTI)

High-precision accelerator modeling is essential for particle accelerator design and optimization. However, this modeling presents a significant computational challenge. We discuss performance modeling of and computational quality of service (CQoS) results ... Keywords: accelerator simulation, computational quality of service, performance modeling, synergia

Steve Goldhaber; Stefan Muszala; Nanbor Wang; James F. Amundson; Eric G. Stern; Boyana Norris; Daihee Kim

2011-11-01T23:59:59.000Z

323

Fast algorithms and solvers in computational electromagnetics and micromagnetics on GPUs  

E-Print Network (OSTI)

... algebra, in High Performance Computing, Networking, Storage ... architecture high performance computing for computational ... Conference on High Performance Computing, Networking, Storage ...

Li, Shaojing

2012-01-01T23:59:59.000Z

324

Large Scale Computing and Storage Requirements for High Energy Physics  

E-Print Network (OSTI)

... Computing and Storage Requirements for High Energy Physics ... for High Energy Physics Computational and Storage ... for High Energy Physics Computational and Storage ...

Gerber, Richard A.

2011-01-01T23:59:59.000Z

325

High Performance Networks for High Impact Science  

Science Conference Proceedings (OSTI)

This workshop was the first major activity in developing a strategic plan for high-performance networking in the Office of Science. Held August 13 through 15, 2002, it brought together a selection of end users, especially representing the emerging, high-visibility initiatives, and network visionaries to identify opportunities and begin defining the path forward.

Scott, Mary A.; Bair, Raymond A.

2003-02-13T23:59:59.000Z

326

Emerging Computing Technologies in High Energy Physics  

E-Print Network (OSTI)

While in the early 90s High Energy Physics (HEP) led the computing industry by establishing the HTTP protocol and the first web-servers, the long time-scale for planning and building modern HEP experiments has resulted in a generally slow adoption of emerging computing technologies which rapidly become commonplace in business and other scientific fields. I will overview some of the fundamental computing problems in HEP computing and then present the current state and future potential of employing new computing technologies in addressing these problems.

Amir Farbin

2009-10-19T23:59:59.000Z

327

Computer system performance problem detection using time series models  

Science Conference Proceedings (OSTI)

Computer systems require monitoring to detect performance anomalies such as runaway processes, but problem detection and diagnosis is a complex task requiring skilled attention. Although human attention was never ideal for this task, as networks of computers ...
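The detection idea can be illustrated with a minimal rolling mean/deviation rule. Note this is a far simpler detector than the time series models the paper studies; the window size, threshold, and sample data are all illustrative.

```python
from collections import deque
from statistics import mean, stdev

def detect_anomalies(samples, window=10, nsigma=3.0):
    """Flag points that deviate from the rolling mean of the previous
    `window` samples by more than nsigma rolling standard deviations.
    Returns the indices of flagged samples."""
    history = deque(maxlen=window)
    flagged = []
    for i, x in enumerate(samples):
        if len(history) == window:
            m, s = mean(history), stdev(history)
            if s > 0 and abs(x - m) > nsigma * s:
                flagged.append(i)
        history.append(x)
    return flagged

if __name__ == "__main__":
    load = [1.0, 1.1, 0.9, 1.0, 1.2, 0.8, 1.0, 1.1, 0.9, 1.0,
            9.5,  # a runaway process shows up as a sudden spike
            1.0, 1.1]
    print(detect_anomalies(load))  # -> [10]
```

A proper time-series model additionally captures trend and seasonality (e.g. daily load cycles), which a flat rolling window cannot.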

Peter Hoogenboom; Jay Lepreau

1993-06-01T23:59:59.000Z

328

A survey of computer systems for expressive music performance  

Science Conference Proceedings (OSTI)

We present a survey of research into automated and semiautomated computer systems for expressive performance of music. We will examine the motivation for such systems and then examine the majority of the systems developed over the last 25 years. To highlight ... Keywords: Music performance, computer music, generative performance, machine learning

Alexis Kirke; Eduardo Reck Miranda

2009-12-01T23:59:59.000Z

329

INL High Performance Building Strategy  

SciTech Connect

High performance buildings, also known as sustainable buildings and green buildings, are resource efficient structures that minimize the impact on the environment by using less energy and water, reduce solid waste and pollutants, and limit the depletion of natural resources while also providing a thermally and visually comfortable working environment that increases productivity for building occupants. As Idaho National Laboratory (INL) becomes the nations premier nuclear energy research laboratory, the physical infrastructure will be established to help accomplish this mission. This infrastructure, particularly the buildings, should incorporate high performance sustainable design features in order to be environmentally responsible and reflect an image of progressiveness and innovation to the public and prospective employees. Additionally, INL is a large consumer of energy that contributes to both carbon emissions and resource inefficiency. In the current climate of rising energy prices and political pressure for carbon reduction, this guide will help new construction project teams to design facilities that are sustainable and reduce energy costs, thereby reducing carbon emissions. With these concerns in mind, the recommendations described in the INL High Performance Building Strategy (previously called the INL Green Building Strategy) are intended to form the INL foundation for high performance building standards. 
This revised strategy incorporates the latest federal and DOE orders (Executive Order [EO] 13514, Federal Leadership in Environmental, Energy, and Economic Performance [2009], EO 13423, Strengthening Federal Environmental, Energy, and Transportation Management [2007], and DOE Order 430.2B, Departmental Energy, Renewable Energy, and Transportation Management [2008]), the latest guidelines, trends, and observations in high performance building construction, and the latest changes to the Leadership in Energy and Environmental Design (LEED) Green Building Rating System (LEED 2009). The document employs a two-level approach for high performance building at INL. The first level identifies the requirements of the Guiding Principles for Sustainable New Construction and Major Renovations, and the second level recommends which credits should be met when LEED Gold certification is required.

Jennifer D. Morton

2010-02-01T23:59:59.000Z

330

Performing an allreduce operation on a plurality of compute nodes of a parallel computer  

DOE Patents (OSTI)

Methods, apparatus, and products are disclosed for performing an allreduce operation on a plurality of compute nodes of a parallel computer. Each compute node includes at least two processing cores. Each processing core has contribution data for the allreduce operation. Performing an allreduce operation on a plurality of compute nodes of a parallel computer includes: establishing one or more logical rings among the compute nodes, each logical ring including at least one processing core from each compute node; performing, for each logical ring, a global allreduce operation using the contribution data for the processing cores included in that logical ring, yielding a global allreduce result for each processing core included in that logical ring; and performing, for each compute node, a local allreduce operation using the global allreduce results for each processing core on that compute node.
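The two-phase scheme in this patent record (a global allreduce within each logical ring, then a local allreduce on each node) can be sketched in plain Python, assuming summation as the reduction operator and an idealized topology:

```python
def allreduce(nodes):
    """Sketch of the two-phase allreduce described above.

    `nodes` is a list of compute nodes; each node is a list of per-core
    contribution values. Returns the per-node results after both phases.
    """
    n_cores = len(nodes[0])

    # Phase 1: one logical ring per core index, containing that core from
    # every compute node. Each ring performs a global allreduce (a sum)
    # over its members' contribution data.
    ring_results = [sum(node[core] for node in nodes)
                    for core in range(n_cores)]

    # Phase 2: each node performs a local allreduce over the per-core ring
    # results, so every core ends up holding the total over all cores of
    # all nodes.
    return [sum(ring_results) for _ in nodes]

if __name__ == "__main__":
    # 3 nodes, 2 cores each; the total of all contributions is 21.
    nodes = [[1, 2], [3, 4], [5, 6]]
    print(allreduce(nodes))  # -> [21, 21, 21]
```

The point of the two-phase split is that the expensive inter-node traffic (Phase 1) runs in parallel rings, one per core, while Phase 2 stays entirely on-node.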

Faraj, Ahmad (Rochester, MN)

2012-04-17T23:59:59.000Z

331

Sandia's network for SC '97: Supporting visualization, distributed cluster computing, and production data networking with a wide area high performance parallel asynchronous transfer mode (ATM) network  

Science Conference Proceedings (OSTI)

The advanced networking department at Sandia National Laboratories has used the annual Supercomputing conference sponsored by the IEEE and ACM for the past several years as a forum to demonstrate and focus communication and networking developments. At SC '97, Sandia National Laboratories (SNL), Los Alamos National Laboratory (LANL), and Lawrence Livermore National Laboratory (LLNL) combined their SC '97 activities within a single research booth under the Accelerated Strategic Computing Initiative (ASCI) banner. For the second year in a row, Sandia provided the network design and coordinated the networking activities within the booth. At SC '97, Sandia elected to demonstrate the capability of the Computation Plant, the visualization of scientific data, scalable ATM encryption, and ATM video and telephony capabilities. At SC '97, LLNL demonstrated an application, called RIPTIDE, that also required significant networking resources. The RIPTIDE application had computational visualization and steering capabilities. This paper documents those accomplishments, discusses the details of their implementation, and describes how these demonstrations support Sandia's overall strategies in ATM networking.

Pratt, T.J.; Martinez, L.G.; Vahle, M.O.; Archuleta, T.V.; Williams, V.K.

1998-05-01T23:59:59.000Z

332

A high-level approach to synthesis of high-performance codes for quantum chemistry  

Science Conference Proceedings (OSTI)

This paper discusses an approach to the synthesis of high-performance parallel programs for a class of computations encountered in quantum chemistry and physics. These computations are expressible as a set of tensor contractions and arise in electronic ...
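The operation-minimization that such synthesis systems perform can be illustrated on a tiny contraction. This is a generic sketch of the idea, not the system described in the paper: evaluating a three-matrix contraction directly costs O(n^4) multiplications, while factoring it through an intermediate costs O(n^3).

```python
def contract_naive(A, B, C):
    """R[i][j] = sum over k, l of A[i][k] * B[k][l] * C[l][j],
    evaluated directly: O(n^4) multiplications."""
    n = len(A)
    R = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            for k in range(n):
                for l in range(n):
                    R[i][j] += A[i][k] * B[k][l] * C[l][j]
    return R

def contract_factored(A, B, C):
    """The same contraction via an intermediate T = A*B, then T*C:
    two matrix products, O(n^3) multiplications total."""
    def matmul(X, Y):
        n = len(X)
        return [[sum(X[i][k] * Y[k][j] for k in range(n))
                 for j in range(n)] for i in range(n)]
    return matmul(matmul(A, B), C)
```

Both routines compute identical results; a synthesis system searches the space of such factorizations (and the associated loop structures) automatically, which matters far more for the high-dimensional tensors of electronic structure codes than for matrices.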

Gerald Baumgartner; David E. Bernholdt; Daniel Cociorva; Robert Harrison; So Hirata; Chi-Chung Lam; Marcel Nooijen; Russell Pitzer; J. Ramanujam; P. Sadayappan

2002-11-01T23:59:59.000Z

333

High-performance land surface modeling with a Linux cluster  

Science Conference Proceedings (OSTI)

The Land Information System (LIS) was developed at NASA to perform global land surface simulations at a resolution of 1-km or finer in real time. Such unprecedented scales and intensity pose many computational challenges. In this article, we demonstrate ... Keywords: Beowulf cluster, Distributed computing, High-resolution simulation, Hydrology modeling, Parallel computing, Peer-to-peer network

Y. Tian; C. D. Peters-Lidard; S. V. Kumar; J. Geiger; P. R. Houser; J. L. Eastman; P. Dirmeyer; B. Doty; J. Adams

2008-11-01T23:59:59.000Z

334

High Performance Photovoltaic Project Overview  

DOE Green Energy (OSTI)

The High-Performance Photovoltaic (HiPerf PV) Project was initiated by the U.S. Department of Energy to substantially increase the viability of photovoltaics (PV) for cost-competitive applications so that PV can contribute significantly to our energy supply and environment in the 21st century. To accomplish this, the National Center for Photovoltaics (NCPV) directs in-house and subcontracted research in high-performance polycrystalline thin-film and multijunction concentrator devices. In this paper, we describe the recent research accomplishments in the in-house directed efforts and the research efforts under way in the subcontracted area.

Symko-Davies, M.; McConnell, R.

2005-01-01T23:59:59.000Z

335

Application of redundant computation in software performance analysis  

Science Conference Proceedings (OSTI)

Redundant computation is an execution of a program statement(s) that does not contribute to the program output. The same statement on one execution may exhibit redundant computation whereas on a different execution, it contributes to the program output. ... Keywords: control dependence, data dependence, dependence analysis, performance analysis, redundant code, redundant computation

Zakarya Alzamil; Bogdan Korel

2005-07-01T23:59:59.000Z

336

Performance analysis of memory hierarchies in high performance systems  

SciTech Connect

This thesis studies memory bandwidth as a performance predictor of programs. The focus of this work is on computationally intensive programs. These programs are the most likely to access large amounts of data, stressing the memory system. Computationally intensive programs are also likely to use highly optimizing compilers to produce the fastest executables possible. Methods to reduce the amount of data traffic by increasing the average number of references to each item while it resides in the cache are explored. Increasing the average number of references to each cache item reduces the number of memory requests. Chapter 2 describes the DLX architecture. This is the architecture on which all the experiments were performed. Chapter 3 studies memory moves as a performance predictor for a group of application programs. Chapter 4 introduces a model to study the performance of programs in the presence of memory hierarchies. Chapter 5 explores some compiler optimizations that can help increase the references to each item while it resides in the cache.
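The thesis's central idea, raising the number of references to each item while it resides in the cache, can be illustrated with a toy LRU cache model. The cache size and access patterns below are illustrative only.

```python
def misses(addresses, cache_lines=4):
    """Count misses for a fully associative LRU cache with `cache_lines`
    single-element lines, given a sequence of element addresses."""
    lru, count = [], 0
    for a in addresses:
        if a in lru:
            lru.remove(a)          # hit: refresh recency
        else:
            count += 1             # miss: fetch, evicting LRU if full
            if len(lru) == cache_lines:
                lru.pop(0)
        lru.append(a)
    return count

if __name__ == "__main__":
    data = range(8)                # 8 elements, but only a 4-line cache
    passes = 4
    # Naive: sweep the whole array each pass; the working set exceeds
    # the cache, so every access misses.
    naive = [i for _ in range(passes) for i in data]
    # Blocked: finish all passes over one cache-sized block before moving
    # on, raising the references per item while it is resident.
    blocked = [i for block in (0, 4) for _ in range(passes)
               for i in range(block, block + 4)]
    print(misses(naive), misses(blocked))  # -> 32 8
```

The blocked traversal touches the same data in the same total number of accesses, but generates a quarter of the memory traffic, which is exactly the loop-blocking style of compiler optimization the thesis explores.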

Yogesh, A.

1993-07-01T23:59:59.000Z

337

Performance driven multi-objective distributed scheduling for parallel computations  

Science Conference Proceedings (OSTI)

With the advent of many-core architectures and strong need for Petascale (and Exascale) performance in scientific domains and industry analytics, efficient scheduling of parallel computations for higher productivity and performance has become very important. ...

Ankur Narang; Abhinav Srivastava; Naga Praveen Kumar Katta; Rudrapatna K. Shyamasundar

2011-07-01T23:59:59.000Z

338

Teuchos C++ memory management classes, idioms, and related topics, the complete reference : a comprehensive strategy for safe and efficient memory management in C++ for high performance computing.  

SciTech Connect

The ubiquitous use of raw pointers in higher-level code is the primary cause of all memory usage problems and memory leaks in C++ programs. This paper describes what might be considered a radical approach to the problem, which is to encapsulate the use of all raw pointers and all raw calls to new and delete in higher-level C++ code. Instead, a set of cooperating template classes developed in the Trilinos package Teuchos are used to encapsulate every use of raw C++ pointers in every use case where it appears in high-level code. Included in the set of memory management classes is the typical reference-counted smart pointer class similar to boost::shared_ptr (and therefore C++0x std::shared_ptr). However, what is missing in boost and the new standard library are non-reference-counted classes for remaining use cases where raw C++ pointers would need to be used. These classes have a debug build mode where nearly all programmer errors are caught and gracefully reported at runtime. The default optimized build mode strips all runtime checks and allows the code to perform as efficiently as raw C++ pointers with reasonable usage. Also included is a novel approach for dealing with the circular references problem that imparts little extra overhead and is almost completely invisible to most of the code (unlike the boost and therefore C++0x approach). Rather than being a radical approach, encapsulating all raw C++ pointers is simply the logical progression of a trend in the C++ development and standards community that started with std::auto_ptr and is continued (but not finished) with std::shared_ptr in C++0x. Using the Teuchos reference-counted memory management classes allows one to remove unnecessary constraints in the use of objects by removing arbitrary lifetime ordering constraints, which are a type of unnecessary coupling [23]. 
The code one writes with these classes will be more likely to be correct on first writing, will be less likely to contain silent (but deadly) memory usage errors, and will be much more robust to later refactoring and maintenance. The level of debug-mode runtime checking provided by the Teuchos memory management classes is stronger in many respects than what is provided by memory checking tools like Valgrind and Purify while being much less expensive. However, tools like Valgrind and Purify perform a number of types of checks (like usage of uninitialized memory) that make these tools very valuable and therefore complement the Teuchos memory management debug-mode runtime checking. The Teuchos memory management classes and idioms largely address the technical issues in resolving the fragile built-in C++ memory management model (with the exception of circular references, which has no easy solution but can be managed as discussed). All that remains is to teach these classes and idioms and expand their usage in C++ codes. The long-term viability of C++ as a usable and productive language depends on it. Otherwise, if C++ is no safer than C, then is the greater complexity of C++ worth what one gets as extra features? Given that C is smaller and easier to learn than C++, and since most programmers don't know object-orientation (or templates or X, Y, and Z features of C++) all that well anyway, then what really are most programmers getting extra out of C++ that would outweigh the extra complexity of C++ over C? C++ zealots will argue this point, but the reality is that C++ popularity has peaked and is becoming less popular while the popularity of C has remained fairly stable over the last decade. Idioms like those advocated in this paper can help to avert this trend, but it will require wide community buy-in and a change in the way C++ is taught in order to have the greatest impact. To make these programs more secure, compiler vendors or static analysis tools (e.g. 
Klocwork) could implement a preprocessor-like language similar to OpenMP that would allow the programmer to declare (in comments) that certain blocks of code should be "pointer-free" or to allow smaller blocks to be "pointers allowed". This would signific

Bartlett, Roscoe Ainsworth

2010-05-01T23:59:59.000Z

339

High Performance Adaptive Distributed Scheduling Algorithm  

Science Conference Proceedings (OSTI)

Exascale computing requires complex runtime systems that need to consider affinity, load balancing and low time and message complexity for scheduling massive scale parallel computations. Simultaneous consideration of these objectives makes online distributed ... Keywords: Distributed Scheduling, Adaptive Scheduling, Performance Analysis

Ankur Narang, Abhinav Srivastava, R. K. Shyamasundar

2013-05-01T23:59:59.000Z

340

Scientific Computing Kernels on the Cell Processor  

E-Print Network (OSTI)

Journal of High Performance Computing Applications, 2004. [Conference on High Performance Computing in the Asia Pacific ... Meeting on High Performance Computing for Computational

Williams, Samuel W.; Shalf, John; Oliker, Leonid; Kamil, Shoaib; Husbands, Parry; Yelick, Katherine

2008-01-01T23:59:59.000Z

Note: This page contains sample records for the topic "high performance computer" from the National Library of EnergyBeta (NLEBeta).
While these samples are representative of the content of NLEBeta,
they are not comprehensive nor are they the most current set.
We encourage you to perform a real-time search of NLEBeta
to obtain the most current and comprehensive results.


341

Proceedings of the ATIP/A*CRC Workshop on Accelerator Technologies for High-Performance Computing: Does Asia Lead the Way?: Does Asia Lead the Way?  

Science Conference Proceedings (OSTI)

The June and November 2011 editions of the Top500 List of Supercomputers serve as clear evidence that within the past twelve months, Asian Supercomputing players have advanced to the very high end of the list. As of the November 2011 ranking of the world's ...

2012-05-01T23:59:59.000Z

342

High Performance Windows Volume Purchase: Events  

NLE Websites -- All DOE Office Websites (Extended Search)


343

High Performance Windows Volume Purchase: News  

NLE Websites -- All DOE Office Websites (Extended Search)


344

TORCH Computational Reference Kernels - A Testbed for Computer Science Research  

E-Print Network (OSTI)

Intl. Journal of High Performance Computing Applications [In Proc. SC2009: High performance computing, networking, andIn Proc. SC2008: High performance computing, networking, and

Kaiser, Alex

2011-01-01T23:59:59.000Z

345

High-End Computing in the 21st Century - CECM  

E-Print Network (OSTI)

Theory, Computing & Simulation. Who Needs High-End Computers? ... help scientists understand turbulent plasmas in nuclear fusion reactor designs.

346

Engineered Cathodes for High Performance SOFCs  

Science Conference Proceedings (OSTI)

Computational design analysis of a high performance cathode is a cost-effective means of exploring new microstructure and material options for solid oxide fuel cells. A two-layered porous cathode design has been developed that includes a thinner layer with smaller grain diameters at the cathode/electrolyte interface, followed by a relatively thicker outer layer with larger grains at the electrode/oxidant interface. Results are presented for the determination of spatially dependent current generation distributions, assessment of the importance of concentration polarization, and sensitivity to measurable microstructural variables. Estimates of the electrode performance in air at 700 °C indicate that performance approaching 3.1 A/cm2 at 0.078 V is theoretically possible. The limitations of the model are described, along with efforts needed to verify and refine the predictions. The feasibility of fabricating the electrode configuration is also discussed.

Williford, Rick E.; Singh, Prabhakar

2004-03-29T23:59:59.000Z

347

Mercury: Enabling Remote Procedure Call for High-Performance...  

NLE Websites -- All DOE Office Websites (Extended Search)

of High-Performance Computing (HPC), allows the execution of routines to be delegated to remote nodes, which can be set aside and dedicated to specific tasks. However, existing...

348

A HIGH PERFORMANCE/LOW COST ACCELERATOR CONTROL SYSTEM  

E-Print Network (OSTI)

LOW COST ACCELERATOR CONTROL SYSTEM, S. Magyary, J. Glatz, H. ... a high performance computer control system tailored to the

Magyary, S.

2010-01-01T23:59:59.000Z

349

Situating workplace surveillance: Ethicsand computer based performance monitoring  

Science Conference Proceedings (OSTI)

This paper examines the study of computer based performance monitoring (CBPM) in the workplace as an issue dominated by questions of ethics. Its central contention is that any investigation of ethical monitoring practice is inadequate if it ...

Kirstie S. Ball

2001-09-01T23:59:59.000Z

350

Kathy Yelick Co-authors NRC Report on Computer Performance -...  

NLE Websites -- All DOE Office Websites (Extended Search)

the lab's NERSC Division, was a panelist in a March 22 discussion of "The Future of Computer Performance: Game Over or Next Level?" a new report by the National Research Council....

351

Center Information Innovative Computing Laboratory  

E-Print Network (OSTI)

of Tennessee as a world leader in advanced scientific and high performance computing through research Computing Distributed Computing is an integral part of the high performance computing landscape

Tennessee, University of

352

High Performance Computing Data Center Metering Protocol  

NLE Websites -- All DOE Office Websites (Extended Search)

electricity used in the US at that time. The report then suggested that the overall consumption would rise to about 100 billion kWh by 2011 or about 2.9% of total US consumption....

353

Scientific Software Engineering: High Performance Computing,...  

NLE Websites -- All DOE Office Websites (Extended Search)

Home | Services | Software Quality Assurance | Software Engineering. Contacts: Group Leader Steve Painter; Deputy Group Leader Cecilia Rivenburgh; Software Engineering Lead Scott Matthews...

354

SciTech Connect: "high performance computing"  

Office of Scientific and Technical Information (OSTI)

Renewable Energy Laboratory (NREL), Golden, CO (United States) Naval Petroleum and Oil Shale Reserves (United States) Navarro Nevada Environmental Services Nevada Field...

355

THE DoD HIGH PERFORMANCE COMPUTING ...  

Science Conference Proceedings (OSTI)

... This cache-conscious factorization of the DFT including the data ... personal IT aids, sensors for the disabled, smart robots, entertainment, artistic ...

2010-11-24T23:59:59.000Z

356

Proceedings of the High Performance Computing Symposium  

Science Conference Proceedings (OSTI)

Welcome to the Spring Simulation Multi-Conference 2013 (SpringSim'13) in San Diego, CA. As the General Chair of this year's SpringSim, it is an honor and privilege to be your host for these exciting four days of activities driven and organized by the ...

Fang (Cherry) Liu, Karl Rupp, Rhonda Phillips, William I. Thacker

2013-04-01T23:59:59.000Z

357

Building Technologies Office: High Performance Windows Volume...  

NLE Websites -- All DOE Office Websites (Extended Search)


358

High Performance Windows Volume Purchase: For Builders  

NLE Websites -- All DOE Office Websites (Extended Search)


359

High Performance Windows Volume Purchase: For Manufacturers  

NLE Websites -- All DOE Office Websites (Extended Search)


360

Grid-Controlled Lightpaths for High Performance Grid Applications  

E-Print Network (OSTI)

be shared among users and easily integrated with data and computation Grids. Keywords: network support for a data Grid supported by a high-performance network. Another concern in deploying Grids over the InternetGrid-Controlled Lightpaths for High Performance Grid Applications Raouf Boutaba, Wojciech Golab

Boutaba, Raouf



361

High-Precision Computation and Mathematical Physics  

SciTech Connect

At the present time, IEEE 64-bit floating-point arithmetic is sufficiently accurate for most scientific applications. However, for a rapidly growing body of important scientific computing applications, a higher level of numeric precision is required. Such calculations are facilitated by high-precision software packages that include high-level language translation modules to minimize the conversion effort. This paper presents a survey of recent applications of these techniques and provides some analysis of their numerical requirements. These applications include supernova simulations, climate modeling, planetary orbit calculations, Coulomb n-body atomic systems, scattering amplitudes of quarks, gluons and bosons, nonlinear oscillator theory, Ising theory, quantum field theory and experimental mathematics. We conclude that high-precision arithmetic facilities are now an indispensable component of a modern large-scale scientific computing environment.

Bailey, David H.; Borwein, Jonathan M.

2008-11-03T23:59:59.000Z

362

A study of hardware performance monitoring counter selection in power modeling of computing systems  

Science Conference Proceedings (OSTI)

Power management and energy savings in high-performance computing has become an increasingly important design constraint. The foundation of many power/energy saving methods is based on power consumption models, which commonly rely on hardware performance ... Keywords: energy saving,performance monitoring counters,power modeling

Reza Zamani; Ahmad Afsahi

2012-06-01T23:59:59.000Z

363

High Performance Buildings - Alternative/Renewable Energy  

Science Conference Proceedings (OSTI)

... Buildings - Alternative/Renewable Energy. High Performance Buildings - Alternative/Renewable Energy Information at NIST. ...

2010-09-23T23:59:59.000Z

364

Guide for High-Performance Buildings Available  

SciTech Connect

This article is an overview of the new "Sustainable, High-Performance Operations and Maintenance" guidelines.

Bartlett, Rosemarie

2012-10-01T23:59:59.000Z

365

Measured Performance of Energy-Efficient Computer Systems  

E-Print Network (OSTI)

The intent of this study is to explore the potential performance of both Energy Star computers/printers and add-on control devices individually, and their expected savings if collectively applied in a typical office building in a hot and humid climate. Recent surveys have shown that the use of personal computer systems in commercial office buildings is expanding rapidly. The energy consumption of such a growing end-use also has a significant impact on the total building power demand. In warmer climates, office equipment energy use has important implications for building cooling loads as well as those directly associated with computing tasks. Recently, the Environmental Protection Agency (EPA) has developed an Energy Star (ES) rating system intended to endorse more efficient equipment. To research the comparative performance of conventional and low-energy computer systems, four Energy Star computer systems and two computer systems equipped with energy saving devices were monitored for power demand. Comparative data on the test results are summarized. In addition, a brief analysis uses the DOE-2.1E computer simulation to examine the impact of the test results and HVAC interactions if generically applied to computer systems in a modern office building in Florida's climate.

Floyd, D. B.; Parker, D. S.

1996-01-01T23:59:59.000Z

366

High performance RDMA-based MPI implementation over InfiniBand  

Science Conference Proceedings (OSTI)

Although InfiniBand Architecture is relatively new in the high performance computing area, it offers many features which help us to improve the performance of communication subsystems. One of these features is Remote Direct Memory Access (RDMA) operations. ... Keywords: InfiniBand, MPI, cluster computing, high performance computing

Jiuxing Liu; Jiesheng Wu; Sushmitha P. Kini; Pete Wyckoff; Dhabaleswar K. Panda

2003-06-01T23:59:59.000Z

367

National Energy Research Scientific Computing Center 2007 Annual Report  

E-Print Network (OSTI)

and Directions in High Performance Computing for the Office ... in the evolution of high performance computing and networks. ... Hectopascals, High performance computing, High Performance

Hules, John A.

2008-01-01T23:59:59.000Z

368

Scientific Computing Programs and Projects  

Science Conference Proceedings (OSTI)

... High Performance Computing Last Updated Date: 03/05/2012 High Performance Computing (HPC) enables work on challenging ...

2010-05-24T23:59:59.000Z

369

High Performance Windows Volume Purchase: Information Resources  

NLE Websites -- All DOE Office Websites (Extended Search)

Numerous publications will be available to help educate buyers, product

370

Multiclass Classification of Distributed Memory Parallel Computations  

E-Print Network (OSTI)

High Performance Computing (HPC) is a field ... organizing maps, High performance computing, Communication ... powerful known High Performance Computing (HPC) systems in

Whalen, Sean; Peisert, Sean; Bishop, Matt

2012-01-01T23:59:59.000Z

371

Can quantum chemistry be performed on a small quantum computer?  

E-Print Network (OSTI)

As quantum computing technology improves and quantum computers with a small but non-trivial number of N > 100 qubits appear feasible in the near future, the question of possible applications of small quantum computers gains importance. One frequently mentioned application is Feynman's original proposal of simulating quantum systems, in particular the electronic structure of molecules and materials. In this paper, we analyze the computational requirements of one of the standard algorithms for performing quantum chemistry on a quantum computer. We focus on the quantum resources required to find the ground state of a molecule twice as large as what current classical computers can solve exactly. We find that while such a problem requires about a ten-fold increase in the number of qubits over current technology, the required increase in the number of gates that can be coherently executed is many orders of magnitude larger. This suggests that for quantum computation to become useful for quantum chemistry problems, drastic algorithmic improvements will be needed.

Dave Wecker; Bela Bauer; Bryan K. Clark; Matthew B. Hastings; Matthias Troyer

2013-12-05T23:59:59.000Z

372

Fractured bodies : gesture, pleasure, and politics in contemporary computer music performance  

E-Print Network (OSTI)

Failure: Post Digital Tendencies in Contemporary Computer Music, Computer Music Journal 24(4) (Winter, 2000): 12-18. ... Politics in Contemporary Computer Music Performance. A Thesis

Ponce, Jason Benjamin

2007-01-01T23:59:59.000Z

373

IDD High Performance Resilience Program  

Science Conference Proceedings (OSTI)

... construction issues related to: Blast, earthquake, high wind, and flood resistance, and cyber ... 3D propagation ? FLEX finite element software is ...

374

Network-Theoretic Classification of Parallel Computation Patterns  

E-Print Network (OSTI)

computation in a high performance computing environment ... As the field of high performance computing (HPC) plans for

Whalen, Sean; Peisert, Sean; Bishop, Matt

2011-01-01T23:59:59.000Z

375

Statistical Power and Performance Modeling for Optimizing the Energy Efficiency of Scientific Computing  

Science Conference Proceedings (OSTI)

High-performance computing (HPC) has become an indispensable resource in science and engineering, and it has oftentimes been referred to as the "thirdpillar" of science, along with theory and experimentation. Performance tuning is a key aspect in utilizing ... Keywords: energy-efficiency tuning, green supercomputing, regression modeling

Balaji Subramaniam; Wu-chun Feng

2010-12-01T23:59:59.000Z

376

Webinar: ENERGY STAR Hot Water Systems for High Performance Homes  

Energy.gov (U.S. Department of Energy (DOE)) Indexed Site

ENERGY STAR® Hot Water Systems for High Performance Homes. Building America Program, Building Technologies Program, www.buildingamerica.gov. Date: September 30, 2011. Welcome to the Webinar! We will start at 11:00 AM Eastern. There is no call-in number; the audio will be sent through your computer speakers. All questions will be submitted via typing.

377

Performance Technology for Tera-Class Parallel Computers: Evolution of the TAU Performance System  

Science Conference Proceedings (OSTI)

In this project, we proposed to create new technology for performance observation and analysis of large-scale tera-class parallel computer systems and applications in this project.

Allen D. Malony

2005-06-21T23:59:59.000Z

378

High Performance I/O  

Science Conference Proceedings (OSTI)

Parallelisation, serial optimisation, compiler tuning, and many more techniques are used to optimise and improve the performance scaling of parallel programs. One area which is frequently not optimised is file I/O. This is because it is often not considered ... Keywords: I/O, HPC, optimisation, parallelisation, Lustre, GPFS, MPI-I/O, HDF5, NetCDF

Adrian Jackson; Fiona Reid; Joachim Hein; Alejandro Soba; Xavier Saez

2011-02-01T23:59:59.000Z

379

Ultra-high resolution computed tomography imaging  

DOE Patents (OSTI)

A method for ultra-high resolution computed tomography imaging, comprising the steps of: focusing a high energy particle beam, for example x-rays or gamma-rays, onto a target object; acquiring a 2-dimensional projection data set representative of the target object; generating a corrected projection data set by applying a deconvolution algorithm, having an experimentally determined transfer function, to the 2-dimensional data set; storing the corrected projection data set; incrementally rotating the target object through an angle of approximately 180°, and after each incremental rotation, repeating the radiating, acquiring, generating and storing steps; and, after the rotating step, applying a cone-beam algorithm, for example a modified tomographic reconstruction algorithm, to the corrected projection data sets to generate a 3-dimensional image. The size of the spot focus of the beam is reduced to not greater than approximately 1 micron, and even to not greater than approximately 0.5 microns.

Paulus, Michael J. (Knoxville, TN); Sari-Sarraf, Hamed (Knoxville, TN); Tobin, Jr., Kenneth William (Harriman, TN); Gleason, Shaun S. (Knoxville, TN); Thomas, Jr., Clarence E. (Knoxville, TN)

2002-01-01T23:59:59.000Z

380

Applied and Computational Mathematics Division  

Science Conference Proceedings (OSTI)

... Computing and Communications Theory Group; High Performance Computing and Visualization Group. Staff Directory. Employment ...

2013-05-09T23:59:59.000Z



381

High Performance Windows Volume Purchase: Contacts  

NLE Websites -- All DOE Office Websites (Extended Search)

Web site and High Performance Windows Volume Purchase Program contacts are provided below. Website Contact: Send us your comments, report problems, and/or ask questions about

382

RIKEN HPCI Program for Computational Life Sciences  

E-Print Network (OSTI)

of computational resources offered by the High Performance Computing Infrastructure, with the K computer long-term support. High Performance Computing Development Education and Outreach Strategic Programs

Fukai, Tomoki

383

Computational Fluid Dynamics Framework for Turbine Biological Performance Assessment  

SciTech Connect

In this paper, a method for turbine biological performance assessment is introduced to bridge the gap between field and laboratory studies on fish injury and turbine design. Using this method, a suite of biological performance indicators is computed based on simulated data from a computational fluid dynamics (CFD) model of a proposed turbine design. Each performance indicator is a measure of the probability of exposure to a certain dose of an injury mechanism. If the relationship between the dose of an injury mechanism and frequency of injury (dose-response) is known from laboratory or field studies, the likelihood of fish injury for a turbine design can be computed from the performance indicator. By comparing the values of the indicators from various turbine designs, the engineer can identify the more-promising designs. Discussion here is focused on Kaplan-type turbines, although the method could be extended to other designs. Following the description of the general methodology, we will present sample risk assessment calculations based on CFD data from a model of the John Day Dam on the Columbia River in the USA.

Richmond, Marshall C.; Serkowski, John A.; Carlson, Thomas J.; Ebner, Laurie L.; Sick, Mirjam; Cada, G. F.

2011-05-04T23:59:59.000Z

384

Middleware support for many-task computing  

Science Conference Proceedings (OSTI)

Many-task computing aims to bridge the gap between two computing paradigms, high throughput computing and high performance computing. Many-task computing denotes high-performance computations comprising multiple distinct activities, coupled via file ... Keywords: Computing, Data-intensive distributed computing, Falkon, High-performance computing, High-throughput, Loosely-coupled applications, Many-task computing, Petascale, Swift

Ioan Raicu; Ian Foster; Mike Wilde; Zhao Zhang; Kamil Iskra; Peter Beckman; Yong Zhao; Alex Szalay; Alok Choudhary; Philip Little; Christopher Moretti; Amitabh Chaudhary; Douglas Thain

2010-09-01T23:59:59.000Z

385

Large Scale Computing and Storage Requirements for High Energy Physics  

Science Conference Proceedings (OSTI)

The National Energy Research Scientific Computing Center (NERSC) is the leading scientific computing facility for the Department of Energy's Office of Science, providing high-performance computing (HPC) resources to more than 3,000 researchers working on about 400 projects. NERSC provides large-scale computing resources and, crucially, the support and expertise needed for scientists to make effective use of them. In November 2009, NERSC, DOE's Office of Advanced Scientific Computing Research (ASCR), and DOE's Office of High Energy Physics (HEP) held a workshop to characterize the HPC resources needed at NERSC to support HEP research through the next three to five years. The effort is part of NERSC's legacy of anticipating users needs and deploying resources to meet those demands. The workshop revealed several key points, in addition to achieving its goal of collecting and characterizing computing requirements. The chief findings: (1) Science teams need access to a significant increase in computational resources to meet their research goals; (2) Research teams need to be able to read, write, transfer, store online, archive, analyze, and share huge volumes of data; (3) Science teams need guidance and support to implement their codes on future architectures; and (4) Projects need predictable, rapid turnaround of their computational jobs to meet mission-critical time constraints. This report expands upon these key points and includes others. It also presents a number of case studies as representative of the research conducted within HEP. Workshop participants were asked to codify their requirements in this case study format, summarizing their science goals, methods of solution, current and three-to-five year computing requirements, and software and support needs. Participants were also asked to describe their strategy for computing in the highly parallel, multi-core environment that is expected to dominate HPC architectures over the next few years. 
The report includes a section that describes efforts already underway or planned at NERSC that address requirements collected at the workshop. NERSC has many initiatives in progress that address key workshop findings and are aligned with NERSC's strategic plans.

Gerber, Richard A.; Wasserman, Harvey

2010-11-24T23:59:59.000Z

386

Fermilab | Science at Fermilab | Computing  

NLE Websites -- All DOE Office Websites (Extended Search)

In this section: High-performance Computing, Grid Computing, Networking, Mass Storage. Computing is indispensable to science at Fermilab. High-energy physics experiments...

387

TOWARD END-TO-END MODELING FOR NUCLEAR EXPLOSION MONITORING: SIMULATION OF UNDERGROUND NUCLEAR EXPLOSIONS AND EARTHQUAKES USING HYDRODYNAMIC AND ANELASTIC SIMULATIONS, HIGH-PERFORMANCE COMPUTING AND THREE-DIMENSIONAL EARTH MODELS  

Science Conference Proceedings (OSTI)

This paper describes new research being performed to improve understanding of seismic waves generated by underground nuclear explosions (UNE) by using full waveform simulation, high-performance computing and three-dimensional (3D) earth models. The goal of this effort is to develop an end-to-end modeling capability to cover the range of wave propagation required for nuclear explosion monitoring (NEM) from the buried nuclear device to the seismic sensor. The goal of this work is to improve understanding of the physical basis and prediction capabilities of seismic observables for NEM including source and path-propagation effects. We are pursuing research along three main thrusts. Firstly, we are modeling the non-linear hydrodynamic response of geologic materials to underground explosions in order to better understand how source emplacement conditions impact the seismic waves that emerge from the source region and are ultimately observed hundreds or thousands of kilometers away. Empirical evidence shows that the amplitudes and frequency content of seismic waves at all distances are strongly impacted by the physical properties of the source region (e.g. density, strength, porosity). To model the near-source shock-wave motions of an UNE, we use GEODYN, an Eulerian Godunov (finite volume) code incorporating thermodynamically consistent non-linear constitutive relations, including cavity formation, yielding, porous compaction, tensile failure, bulking and damage. In order to propagate motions to seismic distances we are developing a one-way coupling method to pass motions to WPP (a Cartesian anelastic finite difference code). Preliminary investigations of UNE's in canonical materials (granite, tuff and alluvium) confirm that emplacement conditions have a strong effect on seismic amplitudes and the generation of shear waves. 
Specifically, we find that motions from an explosion in high-strength, low-porosity granite have high compressional wave amplitudes and weak shear waves, while an explosion in low strength, high-porosity alluvium results in much weaker compressional waves and low-frequency compressional and shear waves of nearly equal amplitude. Further work will attempt to model available near-field seismic data from explosions conducted at NTS, where we have accurate characterization of the sub-surface from the wealth of geological and geophysical data from the former nuclear test program. Secondly, we are modeling seismic wave propagation with free-surface topography in WPP. We have modeled the October 9, 2006 and May 25, 2009 North Korean nuclear tests to investigate the impact of rugged topography on seismic waves. Preliminary results indicate that the topographic relief causes complexity in the direct P-waves that leads to azimuthally dependent behavior, and that the topographic gradient to the northeast, east and southeast of the presumed test locations generates stronger shear waves, although each test gives a different pattern. Thirdly, we are modeling intermediate-period motions (10-50 seconds) from earthquakes and explosions at regional distances. For these simulations we run SPECFEM3D_GLOBE (a spherical-geometry spectral element code). We modeled broadband waveforms from well-characterized and well-observed events in the Middle East and central Asia, as well as the North Korean nuclear tests. For the recent North Korean test we found that the one-dimensional iasp91 model predicts the observed waveforms quite well in the band 20-50 seconds, while waveform fits for available 3D earth models are generally poor, with some exceptions. Interestingly, 3D models can predict energy on the transverse component for an isotropic source, presumably due to surface wave mode conversion and/or multipathing.

Rodgers, A; Vorobiev, O; Petersson, A; Sjogreen, B

2009-07-06T23:59:59.000Z

388

HIGH-PERFORMANCE COATING MATERIALS  

DOE Green Energy (OSTI)

Corrosion, erosion, oxidation, and fouling by scale deposits pose critical issues in selecting the metal components used at geothermal power plants operating at brine temperatures up to 300 C. Replacing these components is very costly and time consuming. Currently, components made of titanium alloy and stainless steel are commonly employed to deal with these problems. However, these metals are not only considerably more expensive than carbon steel; the corrosion-preventing passive oxide layers that develop on their outermost surfaces are also susceptible to reactions with brine-induced scales, such as silicate, silica, and calcite. Such reactions form strong interfacial bonds between the scales and oxide layers, causing the accumulation of multiple layers of scale and impairing the components' function and efficacy; furthermore, removing the scales takes a substantial amount of time. This cleaning operation, essential for reusing the components, is one of the factors driving up the plant's maintenance costs. If inexpensive carbon steel components could be coated and lined with cost-effective materials that are stable at high hydrothermal temperatures and resist corrosion, oxidation, and fouling, this would improve the power plant's economics by considerably reducing capital investment and decreasing the costs of operations and maintenance through optimized maintenance schedules.

SUGAMA,T.

2007-01-01T23:59:59.000Z

389

Centre of Excellence for High Performance Computing

E-Print Network (OSTI)

· Quantum-mechanical many-body problems · Exact diagonalization (sparse/dense) · DMRG Methods Brenner/Durst (h001y) Breuer/Durst (h001v, h0011) Fehske (h0441) He? (h023z) Hofmann (h008z) Rüde (h0671) COMPAS (SMP

Sanderson, Yasmine

390

High Performance Buildings Database | Open Energy Information  

Open Energy Info (EERE)

The High Performance Buildings Database (HPBD), developed by the United States Department of Energy and the National Renewable Energy Laboratory, is "a unique central repository of in-depth information and data on high-performance, green building projects across the United States and abroad."[1] Map of HPBD entries

391

Conversion of Ultra High Performance Carbon Fiber  

Note: The technology described above is an early-stage opportunity. Licensing rights to this intellectual property may

392

Related Links on High-Performance Schools  

Energy.gov (U.S. Department of Energy (DOE))

Below are related links to resources for incorporating energy efficiency and renewable energy into building or renovating high-performance schools.

393

MPICH | High-Performance Portable MPI  

NLE Websites -- All DOE Office Websites (Extended Search)

MPICH | High-Performance Portable MPI. Site sections: About MPICH (Overview, News and Events, Collaborators), Downloads, Documentation (Guides, MPICH Wiki, Hydra Usage, Developer Docs)...

394

Method of making a high performance ultracapacitor  

DOE Patents (OSTI)

A high performance double layer capacitor having an electric double layer formed in the interface between activated carbon and an electrolyte is disclosed. The high performance double layer capacitor includes a pair of aluminum impregnated carbon composite electrodes having an evenly distributed and continuous path of aluminum impregnated within an activated carbon fiber preform saturated with a high performance electrolytic solution. The high performance double layer capacitor is capable of delivering at least 5 Wh/kg of useful energy at power ratings of at least 600 W/kg.
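The two ratings quoted above fix a minimum full-power discharge time. A quick unit check (simple arithmetic on the stated figures, not taken from the patent):

```python
def discharge_time_s(specific_energy_wh_per_kg, specific_power_w_per_kg):
    """Seconds needed to deliver the rated specific energy at the
    rated specific power; the per-kilogram units cancel."""
    return specific_energy_wh_per_kg * 3600.0 / specific_power_w_per_kg

# 5 Wh/kg delivered at a constant 600 W/kg lasts 30 seconds
t = discharge_time_s(5.0, 600.0)
```

So the stated ratings imply a device that can sustain its peak power for roughly half a minute before the rated energy is exhausted.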

Farahmandi, C. Joseph (Auburn, AL); Dispennette, John M. (Auburn, AL)

2000-07-26T23:59:59.000Z

395

Related Links on High-Performance Buildings  

Energy.gov (U.S. Department of Energy (DOE))

Below are related links to resources for incorporating energy efficiency and renewable energy into high-performance commercial and residential buildings.

396

Gas-Filled Panels, High Performance Insulation  

NLE Websites -- All DOE Office Websites (Extended Search)

Gas-Filled Panels: high-performance insulation. Windows & Daylighting | Building Technologies | Environmental Energy Technologies Division | Berkeley Lab...

397

High-density Fuel Development for High Performance Research ...  

Science Conference Proceedings (OSTI)

Abstract Scope, High density UMo (7-12wt% Mo) fuel for high performance research ... High Energy X-ray Diffraction Study of Deformation Behavior of Alloy HT9.

398

Strategy Guideline: High Performance Residential Lighting  

SciTech Connect

The Strategy Guideline: High Performance Residential Lighting has been developed to provide a tool for the understanding and application of high performance lighting in the home. The high performance lighting strategies featured in this guide are drawn from recent advances in commercial lighting for application to typical spaces found in residential buildings. This guide offers strategies to greatly reduce lighting energy use through the application of high quality fluorescent and light emitting diode (LED) technologies. It is important to note that these strategies not only save energy in the home but also serve to satisfy the homeowner's expectations for high quality lighting.

Holton, J.

2012-02-01T23:59:59.000Z

399

In this paper, we argue that the deployment of high performance wide area networks coupled with the availability of commodity  

E-Print Network (OSTI)

with the availability of commodity middleware will produce a new paradigm of high performance computing that we call community is on the cusp of a new era in high-performance computing. In order to understand the trends the next five to ten years. Traditional High Performance Computing (HPC) - Up until only a few years ago

Stodghill, Paul

400

High school computing teachers' beliefs and practices: A case study  

Science Conference Proceedings (OSTI)

The aim of this work is threefold. Firstly, an empirical study was designed to investigate the beliefs that High School Computing (HSC) teachers hold about: (a) their motivational orientation, self-efficacy, and self-expectations as Computing ... Keywords: High school computing teachers, Secondary education, Teacher beliefs and practices, Teacher professional development, Teaching/learning strategies

Maria Kordaki

2013-10-01T23:59:59.000Z

Note: This page contains sample records for the topic "high performance computer" from the National Library of EnergyBeta (NLEBeta).
While these samples are representative of the content of NLEBeta,
they are not comprehensive nor are they the most current set.
We encourage you to perform a real-time search of NLEBeta
to obtain the most current and comprehensive results.


401

Energy Proportionality and Performance in Data Parallel Computing Clusters  

Science Conference Proceedings (OSTI)

Energy consumption in datacenters has recently become a major concern due to rising operational costs and scalability issues. Recent solutions to this problem propose the principle of energy proportionality, i.e., the amount of energy consumed by the server nodes must be proportional to the amount of work performed. For data parallelism and fault tolerance purposes, most common file systems used in MapReduce-type clusters maintain a set of replicas for each data block. A covering set is a group of nodes that together contain at least one replica of the data blocks needed for performing computing tasks. In this work, we develop and analyze algorithms to maintain energy proportionality by discovering a covering set that minimizes energy consumption while placing the remaining nodes in low-power standby mode. Our algorithms can also discover covering sets in heterogeneous computing environments. In order to allow more data parallelism, we generalize our algorithms so that they can discover k-covering sets, i.e., sets of nodes that contain at least k replicas of the data blocks. Our experimental results show that we can achieve substantial energy saving without significant performance loss in diverse cluster configurations and working environments.
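The covering-set definition maps naturally onto classic greedy set cover. The paper's actual algorithms (which also weigh energy costs and node heterogeneity) are not shown here; this hypothetical `greedy_covering_set` only illustrates the definition itself: keep activating nodes until every data block has at least one replica on an active node.

```python
def greedy_covering_set(replicas):
    """Greedy approximation of a covering set. `replicas` maps a node
    name to the set of block ids replicated on that node. Returns a
    list of nodes that together hold at least one replica of every
    block; nodes left out could be placed in low-power standby."""
    uncovered = set().union(*replicas.values())
    active = []
    while uncovered:
        # pick the node covering the most still-uncovered blocks
        node = max(replicas, key=lambda n: len(replicas[n] & uncovered))
        active.append(node)
        uncovered -= replicas[node]
    return active

# hypothetical 4-node layout with blocks 1-4 replicated across nodes
layout = {"n1": {1, 2}, "n2": {2, 3}, "n3": {1, 3, 4}, "n4": {4}}
cover = greedy_covering_set(layout)
```

A k-covering set generalization would instead track, per block, how many replicas remain uncovered and stop once every count reaches k.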

Kim, Jinoh; Chou, Jerry; Rotem, Doron

2011-02-14T23:59:59.000Z

402

An Overview of the Advanced CompuTational Software (ACTS) Collection  

E-Print Network (OSTI)

Meeting on High Performance Computing for Computational Sciences, High Performance Computing 1. MOTIVATION AND to state-of-the-art high performance computing environments.

Drummond, Leroy A.; Marques, Osni A.

2005-01-01T23:59:59.000Z

403

Optimization of a Lattice Boltzmann Computation on State-of-the-Art Multicore Platforms  

E-Print Network (OSTI)

In Proc. SC2005: High performance computing, networking, and Meeting on High Performance Computing for Computational In Proc. SC2005: High performance computing, networking, and

Williams, Samuel

2009-01-01T23:59:59.000Z

404

High-performance high-resolution semi-Lagrangian tracer transport on a sphere  

Science Conference Proceedings (OSTI)

Current climate models have a limited ability to increase spatial resolution because numerical stability requires the time step to decrease. We describe a semi-Lagrangian method for tracer transport that is stable for arbitrary Courant numbers, and we ... Keywords: Cubed sphere, High resolution, High-performance computing, Semi-Lagrangian, Spherical geometry, Tracer transport
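The key property claimed above, stability at arbitrary Courant numbers, is easiest to see in one dimension. The following is an illustrative sketch with linear interpolation on a periodic grid (the paper itself works on the cubed sphere with more sophisticated interpolation), tracing each grid point back along the flow to its departure point:

```python
import numpy as np

def semi_lagrangian_step(q, u, dt, dx):
    """One semi-Lagrangian step for 1D advection q_t + u*q_x = 0 on a
    periodic grid: trace each grid point upstream by u*dt and linearly
    interpolate. Remains stable for Courant numbers above 1, where an
    explicit Eulerian scheme would blow up. Illustrative sketch only."""
    n = len(q)
    x = np.arange(n, dtype=float)
    xd = (x - u * dt / dx) % n              # departure points, grid units
    i0 = np.floor(xd).astype(int) % n
    i1 = (i0 + 1) % n
    w = xd - np.floor(xd)                   # interpolation weight
    return (1 - w) * q[i0] + w * q[i1]

# Courant number u*dt/dx = 2.5, i.e. well past the explicit CFL limit
q = np.exp(-0.5 * ((np.arange(100) - 50) / 5.0) ** 2)
q_new = semi_lagrangian_step(q, u=2.5, dt=1.0, dx=1.0)
```

Because linear interpolation never overshoots, the peak of the advected profile cannot grow, and with periodic boundaries the total tracer mass is preserved by the constant-velocity step.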

J. B. White, III; J. J. Dongarra

2011-07-01T23:59:59.000Z

405

The Potential of the Cell Processor for Scientific Computing  

E-Print Network (OSTI)

Journal of High Performance Computing Applications, 2004. L. Novel Processor Architecture for High Performance Computing. High Performance Computing in the Asia-Pacific Region,

Williams, Samuel; Shalf, John; Oliker, Leonid; Husbands, Parry; Kamil, Shoaib; Yelick, Katherine

2005-01-01T23:59:59.000Z

406

Large Scale Computing and Storage Requirements for Nuclear Physics Research  

E-Print Network (OSTI)

proceedings of High Performance Computing 2011 (HPC-2011) In recent years, high performance computing has become NERSC is the primary high-performance computing facility for

Gerber, Richard A.

2012-01-01T23:59:59.000Z

407

ADVANCED HIGH PERFORMANCE SOLID WALL BLANKET CONCEPTS  

Science Conference Proceedings (OSTI)

OAK A271 ADVANCED HIGH PERFORMANCE SOLID WALL BLANKET CONCEPTS. First wall and blanket (FW/blanket) design is a crucial element in the performance and acceptance of a fusion power plant. High-temperature structural and breeding materials are needed for high thermal performance. A suitable combination of structural design with the selected materials is necessary for D-T fuel sufficiency. Whenever possible, low-afterheat, low-chemical-reactivity and low-activation materials are desired to achieve passive safety and minimize the amount of high-level waste. Of course, the selected fusion FW/blanket design will have to match the operational scenarios of high performance plasma. The key characteristics of eight advanced high performance FW/blanket concepts are presented in this paper. Design configurations, performance characteristics, unique advantages and issues are summarized. All reviewed designs can satisfy most of the necessary design goals. For further development, in concert with advancements in plasma control and scrape-off layer physics, additional emphasis will be needed in the areas of first wall coating material selection, design of plasma stabilization coils, and consideration of reactor startup and transient events. To validate the projected performance of the advanced FW/blanket concepts, the critical element is the need for 14 MeV neutron irradiation facilities for the generation of necessary engineering design data and the prediction of FW/blanket component lifetime and availability.

WONG, CPC; MALANG, S; NISHIO, S; RAFFRAY, R; SAGARA, S

2002-04-01T23:59:59.000Z

408

Performance Engineering: Understanding and Improving the Performance of Large-Scale Codes  

E-Print Network (OSTI)

Journal of High Performance Computing Applications, vol. component of the high-performance computing world. This is Journal of High Performance Computing Applications, vol.

2008-01-01T23:59:59.000Z

409

Performance Tools and APIs on BG/P Systems | Argonne Leadership Computing  

NLE Websites -- All DOE Office Websites (Extended Search)

Performance Tools and APIs on BG/P Systems. Topics covered include MPI and OpenMP options, tuning MPI on BGP, BG/P dgemm performance, performance FAQs, high- and low-level UPC APIs, and performance tools such as mpiP, gprof, IBM HPCT, Darshan, and PAPI. Tuning and Analysis Utilities (TAU) instruments applications and gathers information on timings, MPI activity, and hardware performance counter events; Rice HPCToolkit performs sample-based profiling of applications and

410

High Performance and Sustainable Buildings Guidance  

Energy.gov (U.S. Department of Energy (DOE)) Indexed Site

HIGH PERFORMANCE and SUSTAINABLE BUILDINGS GUIDANCE Final (12/1/08) PURPOSE The Interagency Sustainability Working Group (ISWG), as a subcommittee of the Steering Committee established by Executive Order (E.O.) 13423, initiated development of the following guidance to assist agencies in meeting the high performance and sustainable buildings goals of E.O. 13423, section 2(f). E.O. 13423, sec. 2(f) states "In implementing the policy set forth in section 1 of this order, the head of each agency shall: ensure that (i) new construction and major renovations of agency buildings comply with the Guiding Principles for Federal Leadership in High Performance and Sustainable Buildings set forth in the Federal Leadership in High Performance and Sustainable Buildings Memorandum of Understanding (2006)

411

Implementation Workshop: High Performance Work Organizations  

E-Print Network (OSTI)

Since the rise of the industrial revolution, there are few challenges that compare in scale and scope with the challenge of implementing lean principles in order to achieve high performance work systems. This report summarize ...

Klein, Jan

412

High Performance Sustainable Building Design RM  

Energy.gov (U.S. Department of Energy (DOE))

The High Performance Sustainable Building Design (HPSBD) Review Module (RM) is a tool that assists the DOE federal project review teams in evaluating the technical sufficiency for projects that may...

413

FPGA-based hardware accelerator for high-performance data-stream processing  

Science Conference Proceedings (OSTI)

An approach to solving high-performance data-stream processing is proposed based on hardware solutions that use a field-programmable gate array. The described HDG hardware solution was successfully applied to video data streams. The computation capacity ... Keywords: FPGA, application-specific processors, data-stream processing, hardware accelerators, high-performance computations, real time

K. F. Lysakov; M. Yu. Shadrin

2013-03-01T23:59:59.000Z

414

Computational Thermodynamics Aided High-Entropy Alloy Design  

Science Conference Proceedings (OSTI)

Presentation Title, Computational Thermodynamics Aided High-Entropy Alloy Design. Author(s), Fan Zhang, Chuan Zhang, Weisheng Cao, Shuanglin Chen.

415

Mathematical, Information and Computational Sciences Mathematical, Information  

E-Print Network (OSTI)

and Computational Sciences. High Performance Computing, Collaboration and Networks - Critical for DOE Science

416

Performance testing and internal probe measurements of a high specific impulse Hall thruster  

E-Print Network (OSTI)

The BHT-1000 high specific impulse Hall thruster was used for performance testing and internal plasma measurements to support the ongoing development of computational models. The thruster was performance tested in both ...

Warner, Noah Zachary, 1978-

2003-01-01T23:59:59.000Z

417

High-reliability computing for the smarter planet  

SciTech Connect

The geometric rate of improvement of transistor size and integrated circuit performance, known as Moore's Law, has been an engine of growth for our economy, enabling new products and services, creating new value and wealth, increasing safety, and removing menial tasks from our daily lives. Affordable, highly integrated components have enabled both life-saving technologies and rich entertainment applications. Anti-lock brakes, insulin monitors, and GPS-enabled emergency response systems save lives. Cell phones, internet appliances, virtual worlds, realistic video games, and mp3 players enrich our lives and connect us together. Over the past 40 years of silicon scaling, the increasing capabilities of inexpensive computation have transformed our society through automation and ubiquitous communications. In this paper, we will present the concept of the smarter planet, how reliability failures affect current systems, and methods that can be used to increase the reliable adoption of new automation in the future. We will illustrate these issues using a number of different electronic devices in a couple of different scenarios. Recently, IBM has been presenting the idea of a 'smarter planet.' In smarter planet documents, IBM discusses increased computer automation of roadways, banking, healthcare, and infrastructure, as automation could create more efficient systems. A necessary component of the smarter planet concept is ensuring that these new systems have very high reliability. Even extremely rare reliability problems can easily escalate to problematic scenarios when implemented at very large scales. For life-critical systems, such as automobiles, infrastructure, medical implantables, and avionic systems, unmitigated failures could be dangerous. As more automation moves into these types of critical systems, reliability failures will need to be managed. As computer automation continues to increase in our society, the need for greater radiation reliability grows. Already, critical infrastructure is failing too frequently. In this paper, we will introduce the Cross-Layer Reliability concept for designing more reliable computer systems.

Quinn, Heather M [Los Alamos National Laboratory; Graham, Paul [Los Alamos National Laboratory; Manuzzato, Andrea [UNIV OF PADOVA; Dehon, Andre [UNIV OF PENN; Carter, Nicholas [INTEL CORPORATION

2010-01-01T23:59:59.000Z

418

Highlighting High Performance: Whitman Hanson Regional High School; Whitman, Massachusetts  

Science Conference Proceedings (OSTI)

This brochure describes the key high-performance building features of the Whitman-Hanson Regional High School. The brochure was paid for by the Massachusetts Technology Collaborative as part of their Green Schools Initiative. High-performance features described are daylighting and energy-efficient lighting, indoor air quality, solar and wind energy, building envelope, heating and cooling systems, water conservation, and acoustics. Energy cost savings are also discussed.

Not Available

2006-06-01T23:59:59.000Z

419

Multimedia for The Visualization of High Performance ...  

Science Conference Proceedings (OSTI)

... Office of Science, the SciDAC (Scientific Discovery through Advanced Computing) program brings together computational scientists, applied ...

2012-03-05T23:59:59.000Z

420

Performance of Ultra-Scale Applications on Leading Vector and Scalar HPC Platforms  

E-Print Network (OSTI)

In Proc. SC2003: High performance computing, networking, and In Proc. SC2004: High performance computing, networking, and Meeting on High Performance Computing for Computational

2005-01-01T23:59:59.000Z



421

High Performance Flow Simulations on Graphics Processing Units  

NLE Websites -- All DOE Office Websites (Extended Search)

High Performance Flow Simulations on Graphics Processing Units Speaker(s): Wangda Zuo Date: June 17, 2010 - 12:00pm Location: 90-3122 Seminar Host/Point of Contact: Michael Wetter Building design and operation often require real-time or faster-than-real-time simulations for detailed information on air distributions. However, none of the current flow simulation techniques can satisfy this requirement. To solve this problem, a Fast Fluid Dynamics (FFD) model has been developed. The FFD can solve Navier-Stokes equations at a speed 50 times faster than Computational Fluid Dynamics (CFD). In addition, the computing speed of the FFD program has been further enhanced up to 30 times by executing in parallel on a Graphics Processing Unit (GPU) instead of a Central Processing Unit (CPU). As a whole, the FFD on a GPU

422

Maricris Lodriguito Mayes | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

Ab Initio Molecular Dynamics, Computational Material Science/Nanoscience, High Performance Computing, Reaction Mechanism and Dynamics, Theoretical and Computational Chemistry...

423

Ascend : an architecture for performing secure computation on encrypted data  

E-Print Network (OSTI)

This thesis considers encrypted computation where the user specifies encrypted inputs to an untrusted batch program controlled by an untrusted server. In batch computation, all data that the program might need is known at ...

Fletcher, Christopher W. (Christopher Wardlaw)

2013-01-01T23:59:59.000Z

424

High Performance Sustainable Building Design RM  

Energy.gov (U.S. Department of Energy (DOE)) Indexed Site

High Performance Sustainable Building Design Review Module, March 2010. OFFICE OF ENVIRONMENTAL MANAGEMENT Standard Review Plan (SRP): High Performance Sustainable Building Design Review Module, Critical Decision (CD) Applicability CD-0 through CD-4 and Post Operation. This Review Module has been piloted at the SRS SWPF and MOX FFF projects; lessons learned from the pilot have been incorporated in the Review Module. Standard Review Plan, 2nd Edition, March 2010. FOREWORD The Standard Review Plan (SRP) provides a consistent, predictable corporate review framework to ensure that issues and risks that could challenge the success of Office of Environmental Management (EM) projects are identified early and addressed proactively. The

425

Yuri Alexeev | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

for using and enabling computational methods in chemistry and biology for high-performance computing on next-generation high-performance computers. Yuri is particularly...

426

Strategy Guideline: Partnering for High Performance Homes  

SciTech Connect

High performance houses require a high degree of coordination and have significant interdependencies between various systems in order to perform properly, meet customer expectations, and minimize risks for the builder. Responsibility for the key performance attributes is shared across the project team and can be well coordinated through advanced partnering strategies. For high performance homes, traditional partnerships need to be matured to the next level and be expanded to all members of the project team including trades, suppliers, manufacturers, HERS raters, designers, architects, and building officials as appropriate. In an environment where the builder is the only source of communication between trades and consultants and where relationships are, in general, adversarial as opposed to cooperative, the chances of any one building system to fail are greater. Furthermore, it is much harder for the builder to identify and capitalize on synergistic opportunities. Partnering can help bridge the cross-functional aspects of the systems approach and achieve performance-based criteria. Critical success factors for partnering include support from top management, mutual trust, effective and open communication, effective coordination around common goals, team building, appropriate use of an outside facilitator, a partnering charter progress toward common goals, an effective problem-solving process, long-term commitment, continuous improvement, and a positive experience for all involved.

Prahl, D.

2013-01-01T23:59:59.000Z

427

PMap : unlocking the performance genes of HPC applications  

E-Print Network (OSTI)

in High Performance Computing ... 1.2.1 Parallel benchmark. In High Performance Computing, Networking, Conference for High Performance Computing, Networking,

He, Jiahua

2011-01-01T23:59:59.000Z

428

Project materials [Commercial High Performance Buildings Project  

Science Conference Proceedings (OSTI)

The Consortium for High Performance Buildings (ChiPB) is an outgrowth of DOE'S Commercial Whole Buildings Roadmapping initiatives. It is a team-driven public/private partnership that seeks to enable and demonstrate the benefit of buildings that are designed, built and operated to be energy efficient, environmentally sustainable, superior quality, and cost effective.

None

2001-01-01T23:59:59.000Z

429

A scalable high-performance I/O system  

SciTech Connect

A significant weakness of many existing parallel supercomputers is their lack of high-performance parallel I/O. This weakness has prevented, in many cases, the full exploitation of the true potential of MPP systems. As part of a joint project with IBM, we have designed a parallel I/O system for an IBM SP system that can provide sustained I/O rates of greater than 160 MB/s from collections of compute nodes to archival disk, and peak transfer rates that should exceed 400 MB/s from compute nodes to I/O servers. This testbed system is being used for a number of projects: first, it will provide a high-performance experimental I/O system for traditional computational science applications; second, it will be used as an I/O software development environment for new parallel I/O algorithms and operating systems support; and third, it will be used as the foundation for a number of new projects designed to develop enabling technology for the National Information Infrastructure. This report describes the system under development at Argonne National Laboratory, provides some preliminary performance results, and outlines future experiments and directions.

Henderson, M.; Nickless, B.; Stevens, R.

1994-05-01T23:59:59.000Z

430

A high accuracy computed water line list  

E-Print Network (OSTI)

A computed list of H$_{2}$$^{16}$O infra-red transition frequencies and intensities is presented. The list, BT2, was produced using a discrete variable representation two-step approach for solving the rotation-vibration nuclear motions. It is the most complete water line list in existence, comprising over 500 million transitions (65% more than any other list) and it is also the most accurate (over 90% of all known experimental energy levels are within 0.3 cm$^{-1}$ of the BT2 values). Its accuracy has been confirmed by extensive testing against astronomical and laboratory data. The line list has been used to identify individual water lines in a variety of objects including: comets, sunspots, a brown dwarf and the nova-like object V838 Mon. Comparison of the observed intensities with those generated by BT2 enables physical values to be derived for these objects. The line list can also be used to provide an opacity for models of the atmospheres of M-dwarf stars and assign previously unknown water lines in laboratory spectra.

R. J. Barber; J. Tennyson; G. J. Harris; R. N. Tolchenov

2006-01-11T23:59:59.000Z

431

High-Performance Mass Spectrometry Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

HPMSF Overview, Section 2-4-1: High-Performance Mass Spectrometry Facility. The High-Performance Mass Spectrometry Facility (HPMSF) provides state-of-the-art mass spectrometry (MS) and separations instrumentation that has been refined for leading-edge analysis of biological problems with a primary emphasis on proteomics. Challenging research in proteomics, cell signaling, cellular molecular machines, and high-molecular weight systems receives the highest priority for access to the facility. Current research activities in the HPMSF include proteomic analyses of whole cell lysates, analyses of organic macro-molecules and protein complexes, quantification using isotopically labeled growth media, targeted proteomics analyses of subcellular fractions, and nucleic acid analysis of

432

Transforming High School Physics with Modeling and Computation  

E-Print Network (OSTI)

The Engage to Excel (PCAST) report, the National Research Council's Framework for K-12 Science Education, and the Next Generation Science Standards all call for transforming the physics classroom into an environment that teaches students real scientific practices. This work describes the early stages of one such attempt to transform a high school physics classroom. Specifically, a series of model-building and computational modeling exercises were piloted in a ninth grade Physics First classroom. Student use of computation was assessed using a proctored programming assignment, where the students produced and discussed a computational model of a baseball in motion via a high-level programming environment (VPython). Student views on computation and its link to mechanics were assessed with a written essay and a series of think-aloud interviews. This pilot study shows computation's ability to connect scientific practice to the high school science classroom.
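The proctored assignment above asked students for a computational model of a baseball in motion. A minimal version of that kind of update loop, written here in plain Python rather than VPython and with air drag omitted for simplicity (the parameter values are illustrative, not from the study), might look like:

```python
def baseball_trajectory(v0x=30.0, v0y=20.0, dt=0.01, g=9.8):
    """Euler-Cromer integration of projectile motion: each step updates
    the velocity from the acceleration, then the position from the
    updated velocity, until the ball returns to the ground (y < 0)."""
    x = y = 0.0
    vx, vy = v0x, v0y
    path = [(x, y)]
    while y >= 0.0:
        vy -= g * dt          # gravity changes the vertical velocity
        x += vx * dt          # horizontal velocity is constant (no drag)
        y += vy * dt
        path.append((x, y))
    return path

flight = baseball_trajectory()
```

The same loop structure carries over directly to VPython, where the position and velocity become vector objects and the loop body also updates an on-screen sphere.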

Aiken, John M

2013-01-01T23:59:59.000Z

433

On the Design and Performance of the Maple System - Computer ...  

E-Print Network (OSTI)

a powerful set of facilities for symbolic mathematical computation, portability, and a .... and an external user interface are diff, expand, taylor, type, and coeff (for

434

High Performance Commercial Fenestration Framing Systems  

SciTech Connect

A major objective of the U.S. Department of Energy is to have a zero energy commercial building by the year 2025. Windows have a major influence on the energy performance of the building envelope, as they control over 55% of building energy load, and represent one important area where technologies can be developed to save energy. Aluminum framing systems are used in over 80% of commercial fenestration products (i.e. windows, curtain walls, store fronts, etc.). Aluminum framing systems are often required in commercial buildings because of their inherent good structural properties and the long service life required of commercial and architectural frames. At the same time, they are lightweight and durable, requiring very little maintenance, and offer design flexibility. An additional benefit of aluminum framing systems is their relatively low cost and easy manufacturability. Aluminum, being an easily recyclable material, also offers sustainable features. However, from an energy-efficiency point of view, aluminum frames have lower thermal performance due to the very high thermal conductivity of aluminum. Fenestration systems constructed of aluminum alloys are therefore less effective barriers to energy transfer (heat loss or gain). Despite this lower energy performance, aluminum is the material of choice for commercial framing systems and dominates the commercial/architectural fenestration market for the reasons mentioned above. In addition, there is no other cost-effective and energy-efficient replacement material available to take the place of aluminum in the commercial/architectural market. Hence it is imperative to improve the performance of aluminum framing systems in order to improve the energy performance of commercial fenestration systems, in turn reducing the energy consumption of commercial buildings toward the zero energy building goal of 2025.
The objective of this project was to develop high performance, energy efficient commercial fenestration framing systems by investigating new technologies that would improve the thermal performance of aluminum frames while maintaining their structural and life-cycle performance. The project targeted an improvement of over 30% (whole window performance) over conventional commercial framing technology.

Mike Manteghi; Sneh Kumar; Joshua Early; Bhaskar Adusumalli

2010-01-31T23:59:59.000Z

435

Corporate Information & Computing Services  

E-Print Network (OSTI)

Corporate Information & Computing Services High Performance Computing Report, March 2008. The University of Sheffield's High Performance Computing (HPC) facility is provided by CiCS. It consists of both Graduate Students and Staff.

Martin, Stephen John

436

Thermal Performance Analysis of a High-Mass Residential Building  

DOE Green Energy (OSTI)

Minimizing energy consumption in residential buildings using passive solar strategies almost always calls for the efficient use of massive building materials combined with solar gain control and adequate insulation. Using computerized simulation tools to understand the interactions among all the elements facilitates designing low-energy houses. Finally, the design team must feel confident that these tools are providing realistic results. The design team for the residential building described in this paper relied on computerized design tools to determine building envelope features that would maximize the energy performance [1]. Orientation, overhang dimensions, insulation amounts, window characteristics and other strategies were analyzed to optimize performance in the Pueblo, Colorado, climate. After construction, the actual performance of the house was monitored using both short-term and long-term monitoring approaches to verify the simulation results and document performance. Calibrated computer simulations showed that this house consumes 56% less energy than would a similar theoretical house constructed to meet the minimum residential energy code requirements. This paper discusses this high-mass house and compares the expected energy performance, based on the computer simulations, versus actual energy performance.

Smith, M.W.; Torcellini, P.A., Hayter, S.J.; Judkoff, R.

2001-01-30T23:59:59.000Z

437

The rise and fall of High Performance Fortran: an historical object lesson  

Science Conference Proceedings (OSTI)

High Performance Fortran (HPF) is a high-level data-parallel programming system based on Fortran. The effort to standardize HPF began in 1991, at the Supercomputing Conference in Albuquerque, where a group of industry leaders asked Ken Kennedy to lead ... Keywords: High Performance Fortran (HPF), compilers, parallel computing

Ken Kennedy; Charles Koelbel; Hans Zima

2007-06-01T23:59:59.000Z

438

Cloud computing and its implications for organizational design and performance  

E-Print Network (OSTI)

Cloud computing has been at the center of attention for a while now. This attention is directed towards different aspects of this concept which concern different stakeholders from IT companies to cloud adopters to simple ...

Farahani Rad, Ali

2013-01-01T23:59:59.000Z

439

Computational model design and performance estimation in registration brake control  

Science Conference Proceedings (OSTI)

Electric motorcycles are applicable to both toys and real motorcycles, and also serve as a reference for constructing larger electrical vehicles. A design computational model of regenerative braking control of electric motorcycles and an experimental identification ...

P. S. Pa; S. C. Chang

2009-06-01T23:59:59.000Z

440

Large Scale Computing and Storage Requirements for Fusion Energy Sciences Research  

E-Print Network (OSTI)

providing high-performance computing (HPC) resources to more ... of NERSC ... high performance computing (HPC) and NERSC have ... afforded by high performance computing, advanced simulations

Gerber, Richard

2012-01-01T23:59:59.000Z



441

Reducing power consumption while performing collective operations on a plurality of compute nodes  

DOE Patents (OSTI)

Methods, apparatus, and products are disclosed for reducing power consumption while performing collective operations on a plurality of compute nodes that include: receiving, by each compute node, instructions to perform a type of collective operation; selecting, by each compute node from a plurality of collective operations for the collective operation type, a particular collective operation in dependence upon power consumption characteristics for each of the plurality of collective operations; and executing, by each compute node, the selected collective operation.
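The selection step this abstract describes can be sketched as a lookup over per-implementation power models: each node picks, from the available implementations of the requested collective type, the one with the lowest modeled power draw. This is only a minimal illustration of the idea; the collective types, implementation names, and wattage figures below are assumptions, not values from the patent.

```python
# Hypothetical sketch: pick the lowest-power implementation of a collective.
# The operation names and wattage figures are illustrative assumptions.

# Assumed power-consumption characteristics (watts) per implementation.
POWER_PROFILE = {
    "allreduce": {
        "recursive_doubling": 41.0,
        "ring": 35.5,
        "binomial_tree": 38.2,
    },
    "broadcast": {
        "binomial_tree": 30.1,
        "scatter_allgather": 33.7,
    },
}

def select_collective(op_type: str) -> str:
    """Return the name of the lowest-power implementation of op_type."""
    candidates = POWER_PROFILE[op_type]
    # min over the dict keys, compared by their modeled power draw
    return min(candidates, key=candidates.get)

if __name__ == "__main__":
    print(select_collective("allreduce"))  # "ring": lowest assumed draw
```

In the patented scheme each compute node would run this selection locally before executing the chosen collective; here the "power consumption characteristics" are just a static table, whereas a real system would presumably measure or model them per message size and node count.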

Archer, Charles J. (Rochester, MN); Blocksome, Michael A. (Rochester, MN); Peters, Amanda E. (Rochester, MN); Ratterman, Joseph D. (Rochester, MN); Smith, Brian E. (Rochester, MN)

2011-10-18T23:59:59.000Z

442

CRITICAL ISSUES IN HIGH END COMPUTING - FINAL REPORT  

Science Conference Proceedings (OSTI)

High-End computing (HEC) has been a driver for advances in science and engineering for the past four decades. Increasingly, HEC has become a significant element in the national security, economic vitality, and competitiveness of the United States. Advances in HEC provide results that cut across traditional disciplinary and organizational boundaries. This program provides opportunities to share information about HEC systems and computational techniques across multiple disciplines and organizations through conferences and exhibitions of HEC advances held in Washington DC so that mission agency staff, scientists, and industry can come together with White House, Congressional and Legislative staff in an environment conducive to the sharing of technical information, accomplishments, goals, and plans. A common thread across this series of conferences is the understanding of computational science and applied mathematics techniques across a diverse set of application areas of interest to the Nation. The specific objectives of this program are: Program Objective 1. To provide opportunities to share information about advances in high-end computing systems and computational techniques between mission critical agencies, agency laboratories, academics, and industry. Program Objective 2. To gather pertinent data, address specific topics of wide interest to mission critical agencies. Program Objective 3. To promote a continuing discussion of critical issues in high-end computing. Program Objective 4. To provide a venue where a multidisciplinary scientific audience can discuss the difficulties applying computational science techniques to specific problems and can specify future research that, if successful, will eliminate these problems.

Corones, James [Krell Institute

2013-09-23T23:59:59.000Z

443

Performance evaluation of a dynamic load-balancing library for cluster computing  

Science Conference Proceedings (OSTI)

The performance of scientific applications in heterogeneous environments has been improved with the research advances in dynamic scheduling at application and runtime system levels. This paper presents the performance evaluation of a library as a result ... Keywords: cluster computing, data migration, dynamic load balancing library, dynamic scheduling, loop scheduling, overhead analysis, parallel computing, parallel runtime system, performance evaluation, resource management, task migration
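The dynamic loop scheduling idea behind such a library can be sketched as a shared work queue from which idle workers pull chunks of loop iterations, so faster workers naturally complete more chunks. This is a generic self-scheduling sketch, not the library's API; the chunk size, worker count, and loop body are illustrative assumptions.

```python
# Minimal dynamic loop self-scheduling sketch (illustrative, not the
# library's actual interface): workers pull iteration chunks from a
# shared thread-safe queue until it is empty.
import queue
import threading

def self_schedule(n_iters, n_workers=4, chunk=8, body=lambda i: i * i):
    """Run body(i) for i in range(n_iters), dynamically balanced over threads."""
    work = queue.Queue()
    for start in range(0, n_iters, chunk):
        work.put((start, min(start + chunk, n_iters)))

    results = [None] * n_iters

    def worker():
        while True:
            try:
                lo, hi = work.get_nowait()  # grab the next chunk, if any
            except queue.Empty:
                return
            for i in range(lo, hi):
                results[i] = body(i)        # per-index writes, no overlap

    threads = [threading.Thread(target=worker) for _ in range(n_workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results

if __name__ == "__main__":
    print(self_schedule(5))  # [0, 1, 4, 9, 16]
```

The paper's library additionally handles data and task migration across cluster nodes; this thread-level sketch only shows the scheduling core, where load balance emerges from the pull-based queue rather than a fixed static partition.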

Ioana Banicescu; Ricolindo L. Carino; Jaderick P. Pabico; Mahadevan Balasubramaniam

2005-05-01T23:59:59.000Z

444

Computing  

NLE Websites -- All DOE Office Websites (Extended Search)

Computing and Storage Requirements for FES. J. Candy, General Atomics, San Diego, CA. Presented at DOE Technical Program Review, Hilton Washington DC/Rockville, Rockville, MD, 19-20 March 2013. Drift waves and tokamak plasma turbulence, and their role in the context of fusion research: * Plasma performance: In tokamak plasmas, performance is limited by turbulent radial transport of both energy and particles. * Gradient-driven: This turbulent transport is caused by drift-wave instabilities, driven by free energy in plasma temperature and density gradients. * Unavoidable: These instabilities will persist in a reactor. * Various types (asymptotic theory): ITG, TIM, TEM, ETG, plus electromagnetic variants (AITG, etc.). Fokker-Planck Theory of Plasma Transport: basic equation still

445

DOE High Performance Concentrator PV Project  

DOE Green Energy (OSTI)

Much in demand are next-generation photovoltaic (PV) technologies that can be used economically to make a large-scale impact on world electricity production. The U.S. Department of Energy (DOE) initiated the High-Performance Photovoltaic (HiPerf PV) Project to substantially increase the viability of PV for cost-competitive applications so that PV can contribute significantly to both our energy supply and environment. To accomplish such results, the National Center for Photovoltaics (NCPV) directs in-house and subcontracted research in high-performance polycrystalline thin-film and multijunction concentrator devices with the goal of enabling progress of high-efficiency technologies toward commercial-prototype products. We will describe the details of the subcontractor and in-house progress in exploring and accelerating pathways of III-V multijunction concentrator solar cells and systems toward their long-term goals. By 2020, we anticipate that this project will have demonstrated 33% system efficiency and a system price of $1.00/Wp for concentrator PV systems using III-V multijunction solar cells with efficiencies over 41%.

McConnell, R.; Symko-Davies, M.

2005-08-01T23:59:59.000Z

446

Forced Air Systems in High Performance Homes  

Energy.gov (U.S. Department of Energy (DOE)) Indexed Site

FORCED AIR SYSTEMS IN HIGH PERFORMANCE HOMES. Iain Walker (LBNL), Building America Meeting 2013. What are the issues? 1. Sizing: when is too small too small? 2. Distribution: can we get good mixing at low flow? 3. Performance: humidity control; part-load efficiency; blowers and thermal losses. Sizing: part-load is not an issue with modern equipment, but be careful about predicted loads, as a small error becomes a big problem for tightly sized systems; too low capacity is not robust (extreme vs. design days, change in occupancy, party mode, recovery from setback). Sizing: conventional wisdom holds that a good envelope is easy to predict and not sensitive to indoor conditions, but heating and cooling become discretionary, with large variability depending on occupants

447

Amy W. Apon, Ph.D. Professor and Chair, Computer Science Division  

E-Print Network (OSTI)

performance computing, impact of high performance computing on research competitiveness, sustainable funding, Division of Computer Science, Clemson University. 2008-2011 Director, Arkansas High Performance Computing Center. 2004-2008 Director of High Performance Computing, University of Arkansas. 2007-2011 Professor

Duchowski, Andrew T.

448

Building America Roadmap to High Performance Homes  

Energy.gov (U.S. Department of Energy (DOE)) Indexed Site

Building America Technical Update Meeting, April 29, 2013. Building America Roadmap to High Performance Homes. Eric Werling, Building America Coordinator. Denver, CO, April 29, 2013. Building Technology Office, U.S. Department of Energy. EERE's National Mission: to create American leadership in the global transition to a clean energy economy. 1) High-impact research, development, and demonstration to make clean energy as affordable and convenient as traditional forms of energy. 2) Breaking down barriers to market entry. Why it matters to America: winning the most important global economic development race of the 21st century; creating jobs through American innovation

449

High-performance laboratories and cleanrooms  

SciTech Connect

The California Energy Commission sponsored this roadmap to guide energy efficiency research and deployment for high performance cleanrooms and laboratories. Industries and institutions utilizing these building types (termed high-tech buildings) have played an important part in the vitality of the California economy. This roadmap's key objective is to present a multi-year agenda to prioritize and coordinate research efforts. It also addresses delivery mechanisms to get the research products into the market. Because of the importance to the California economy, it is appropriate and important for California to take the lead in assessing the energy efficiency research needs, opportunities, and priorities for this market. In addition to the importance to California's economy, energy demand for this market segment is large and growing (estimated at 9400 GWh for 1996, Mills et al. 1996). With their 24-hour continuous operation, high-tech facilities are a major contributor to the peak electrical demand. Laboratories and cleanrooms constitute the high-tech building market, and although each building type has its unique features, they are similar in that they are extremely energy intensive, involve special environmental considerations, have very high ventilation requirements, and are subject to regulations--primarily safety driven--that tend to have adverse energy implications. High-tech buildings have largely been overlooked in past energy efficiency research. Many industries and institutions utilize laboratories and cleanrooms. As illustrated, there are many industries operating cleanrooms in California. These include semiconductor manufacturing, semiconductor suppliers, pharmaceutical, biotechnology, disk drive manufacturing, flat panel displays, automotive, aerospace, food, hospitals, medical devices, universities, and federal research facilities.

Tschudi, William; Sartor, Dale; Mills, Evan; Xu, Tengfang

2002-07-01T23:59:59.000Z

450

Federal Leadership in High Performance and Sustainable Buildings...  

Energy.gov (U.S. Department of Energy (DOE)) Indexed Site

Federal Leadership in High Performance and Sustainable Buildings Memorandum of Understanding Federal Leadership in High Performance and Sustainable Buildings Memorandum of...

451

Federal Leadership in High Performance and Sustainable Buildings...  

Energy.gov (U.S. Department of Energy (DOE)) Indexed Site

Leadership in High Performance and Sustainable Buildings Memorandum of Understanding Federal Leadership in High Performance and Sustainable Buildings Memorandum of Understanding...

452

Energy Design Guidelines for High Performance Schools: Hot and...  

Energy.gov (U.S. Department of Energy (DOE)) Indexed Site

Energy Design Guidelines for High Performance Schools: Hot and Humid Climates Energy Design Guidelines for High Performance Schools: Hot and Humid Climates School districts around...

453

Memorandum of American High-Performance Buildings Coalition DOE...  

Energy.gov (U.S. Department of Energy (DOE)) Indexed Site

Memorandum of American High-Performance Buildings Coalition DOE Meeting August 19, 2013 Memorandum of American High-Performance Buildings Coalition DOE Meeting August 19, 2013 This...

454

Performance evaluation of a grid computing architecture using realtime network monitoring  

Science Conference Proceedings (OSTI)

This paper integrates the concepts of realtime network monitoring and visualizations into a grid computing architecture on the Internet. We develop a Realtime Network Monitor (RNM) that performs realtime network monitoring in order to improve the performance ...

Young-Sik Jeong; Cheng-Zhong Xu

2004-12-01T23:59:59.000Z

455

Commercial remodeling : using computer graphic imagery to evaluate building energy performance during conceptual redesign  

E-Print Network (OSTI)

This research is an investigation of the relationship between commercial remodeling and building thermal performance. A computer graphic semiotic is developed to display building thermal performance based on this relationship. ...

Williams, Kyle D

1985-01-01T23:59:59.000Z

456

Computational Science and Its Applications ICCSA 2003 - Springer  

Science Conference Proceedings (OSTI)

Army High Performance Computing Research Center; 8. Department of Computer Science and Engineering, University of Minnesota; 5. Department of Computer...

457

Edmund G. Brown Jr. HIGH-PERFORMANCE HIGH-TECH  

E-Print Network (OSTI)

. DE-FG02-88ER45372); computational resources provided by NERSC. REFERENCES [1] Streintz F., Ann. Phys

458

Integrating advanced facades into high performance buildings  

SciTech Connect

Glass is a remarkable material but its functionality is significantly enhanced when it is processed or altered to provide added intrinsic capabilities. The overall performance of glass elements in a building can be further enhanced when they are designed to be part of a complete facade system. Finally, the facade system delivers the greatest performance to the building owner and occupants when it becomes an essential element of a fully integrated building design. This presentation examines the growing interest in incorporating advanced glazing elements into more comprehensive facade and building systems in a manner that increases comfort, productivity and amenity for occupants, reduces operating costs for building owners, and contributes to improving the health of the planet by reducing overall energy use and negative environmental impacts. We explore the role of glazing systems in dynamic and responsive facades that provide the following functionality: Enhanced sun protection and cooling load control while improving thermal comfort and providing most of the light needed with daylighting; Enhanced air quality and reduced cooling loads using natural ventilation schemes employing the facade as an active air control element; Reduced operating costs by minimizing lighting, cooling and heating energy use by optimizing the daylighting-thermal tradeoffs; Net positive contributions to the energy balance of the building using integrated photovoltaic systems; Improved indoor environments leading to enhanced occupant health, comfort and performance. In addressing these issues, facade system solutions must, of course, respect the constraints of latitude, location, solar orientation, acoustics, earthquake and fire safety, etc. Since climate and occupant needs are dynamic variables, in a high performance building the facade solution must have the capacity to respond and adapt to these variable exterior conditions and to changing occupant needs. 
This responsive performance capability can also offer solutions to building owners where reliable access to the electric grid is a challenge, in both less-developed countries and in industrialized countries where electric generating capacity has not kept pace with growth. We find that when properly designed and executed as part of a complete building solution, advanced facades can provide solutions to many of these challenges in building design today.

Selkowitz, Stephen E.

2001-05-01T23:59:59.000Z

459

Enabling fuzzy technologies in high performance networking via an open FPGA-based development platform  

Science Conference Proceedings (OSTI)

Soft computing techniques and particularly fuzzy inference systems are gaining momentum as tools for network traffic modeling, analysis and control. Efficient hardware implementations of these techniques that can achieve real-time operation in high-speed ... Keywords: Computer networks, Congestion control, Field programmable gate arrays, Fuzzy inference, Network performance, Network traffic control, Queuing control

Federico Montesino Pouzols; Angel Barriga Barros; Diego R. Lopez; Santiago Sánchez-Solano

2012-04-01T23:59:59.000Z

460

Cumulvs: Interacting with High-Performance Scientific Simulations, for Visualization, Steering and Fault Tolerance  

Science Conference Proceedings (OSTI)

High-performance computer simulations are an increasingly popular alternative or complement to physical experiments or prototypes. However, as these simulations grow more massive and complex, it becomes challenging to monitor and control their execution. ... Keywords: CCA, CUMULVS, ECho, Global Arrays, MPI, MxN, PVM, computational steering, fault tolerance, model coupling, visualization

James A. Kohl; Torsten Wilde; David E. Bernholdt

2006-05-01T23:59:59.000Z



461

Keeneland: Computational Science Using Heterogeneous GPU Computing  

E-Print Network (OSTI)

Contemporary High Performance Computing: From Petascale toward Exascale. 7.1 Overview of Computational Sciences, and Oak Ridge National Laboratory. NSF 08-573: High Performance Computing System ... performance computing system. The Keeneland project is led by the Georgia Institute of Technology (Georgia

Dongarra, Jack

462

USING MULTITAIL NETWORKS IN HIGH PERFORMANCE CLUSTERS  

Science Conference Proceedings (OSTI)

Using multiple independent networks (also known as rails) is an emerging technique to overcome bandwidth limitations and enhance fault-tolerance of current high-performance clusters. We present and analyze various venues for exploiting multiple rails. Different rail access policies are presented and compared, including static and dynamic allocation schemes. An analytical lower bound on the number of networks required for static rail allocation is shown. We also present an extensive experimental comparison of the behavior of various allocation schemes in terms of bandwidth and latency. Striping messages over multiple rails can substantially reduce network latency, depending on average message size, network load and allocation scheme. The methods compared include a static rail allocation, a round-robin rail allocation, a dynamic allocation based on local knowledge, and a rail allocation that reserves both end-points of a message before sending it. The latter is shown to perform better than other methods at higher loads: up to 49% better than local-knowledge allocation and 37% better than the round-robin allocation. This allocation scheme also shows lower latency and it saturates on higher loads (for messages large enough). Most importantly, this proposed allocation scheme scales well with the number of rails and message sizes.
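A few of the rail allocation policies compared in the abstract can be sketched in Python. The policy names (static, round-robin, striping) follow the abstract, but the interfaces and the striping helper are hypothetical illustrations, not the authors' code.

```python
# Illustrative sketches of multirail allocation policies; the class and
# function names here are assumptions, not the paper's implementation.
import itertools

class RoundRobinRails:
    """Round-robin policy: cycle successive messages over k independent rails."""
    def __init__(self, num_rails: int):
        self._cycle = itertools.cycle(range(num_rails))

    def next_rail(self) -> int:
        return next(self._cycle)

def static_rail(src_node: int, num_rails: int) -> int:
    """Static policy: each node always sends on the same rail, chosen by node id."""
    return src_node % num_rails

def stripe(message: bytes, num_rails: int):
    """Striping: split one large message into num_rails roughly equal chunks,
    one per rail, so they can be sent in parallel."""
    q, r = divmod(len(message), num_rails)
    chunks, start = [], 0
    for i in range(num_rails):
        end = start + q + (1 if i < r else 0)  # spread the remainder
        chunks.append(message[start:end])
        start = end
    return chunks

if __name__ == "__main__":
    rr = RoundRobinRails(3)
    print([rr.next_rail() for _ in range(6)])  # [0, 1, 2, 0, 1, 2]
    print(stripe(b"abcdefg", 3))               # [b'abc', b'de', b'fg']
```

The dynamic local-knowledge and end-point-reservation policies from the abstract need shared load state and a reservation protocol, so they are omitted here; the point of the sketch is only the contrast between fixed assignment, cycling, and striping a single message across rails.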

S. COLL; E. FRACHTEMBERG; F. PETRINI; A. HOISIE; L. GURVITS

2001-03-01T23:59:59.000Z

463

Large Scale Computing and Storage Requirements for High Energy Physics  

NLE Websites -- All DOE Office Websites (Extended Search)

for High Energy Physics: Accelerator Physics. P. Spentzouris, Fermilab. Motivation: accelerators enable many important applications, both in basic research and applied sciences. Different machine attributes are emphasized for different applications: different particle beams and operation principles; different energies and intensities. Accelerator science and technology objectives for all applications: achieve higher energy and intensity, faster and cheaper machine design, more reliable operation; a wide spectrum of requirements for very complex instruments. Assisting their design and operation requires an equally complex set of computational tools. High Energy Physics priorities. High energy frontier: use high-energy colliders to discover new particles and

464

Energy Proportionality and Performance in Data Parallel Computing Clusters  

E-Print Network (OSTI)

pages 565-572, 2010. Fig. 18: Performance and energy consumption for multi-level CS (panels show energy consumption (%) and time vs. job arrival rate for CS-1, CS-2, CS-3).

Kim, Jinoh

2011-01-01T23:59:59.000Z

465

Comparison of computation methods for CBM production performance  

E-Print Network (OSTI)

Coalbed methane (CBM) reservoirs have become a very important natural resource around the world. Because of their complexity, calculating original gas in place and analyzing production performance require consideration of special features. Coalbed methane production is somewhat complicated and has led to numerous methods of approximating production performance. Many CBM reservoirs go through a dewatering period before significant gas production occurs. With dewatering, desorption of gas in the matrix, and molecular diffusion within the matrix, the production process can be difficult to model. Several authors have presented different approaches involving the complex features related to adsorption and diffusion to describe the production performance for coalbed methane wells. Various programs are now commercially available to model production performance for CBM wells, including reservoir simulation, semi-analytic, and empirical approaches. Programs differ in their input data, description of the physical problem, and calculation techniques. This study will compare different tools available in the gas industry for CBM reservoir analysis, such as numerical reservoir simulators and semi-analytical software programs, to understand the differences in production performance when standard input data is used. Also, this study will analyze how sorption time (for modeling the diffusion process) influences the gas production performance for CBM wells.

Mora, Carlos A.

2007-08-01T23:59:59.000Z

466

Agglomeration Economies and the High-Tech Computer  

E-Print Network (OSTI)

Terminal Manuf. Other Computer Peripheral Equip. Manuf. and Testing of Elec. Computer Systems Design and Related Data Processing Services Computer Program Services Computer

Wallace, Nancy E.; Walls, Donald

2004-01-01T23:59:59.000Z

467

HPC Global File System Performance Analysis Using A Scientific-Application Derived Benchmark  

E-Print Network (OSTI)

In Proc. SC07: High performance computing, networking, and ... Conference on High Performance Computing, Bangalore, ... In Proc. SC2008: High performance computing, networking, and

Borrill, Julian

2009-01-01T23:59:59.000Z

468

High performance internal reforming unit for high temperature fuel cells  

DOE Patents (OSTI)

A fuel reformer having an enclosure with first and second opposing surfaces, a sidewall connecting the first and second opposing surfaces and an inlet port and an outlet port in the sidewall. A plate assembly supporting a catalyst and baffles are also disposed in the enclosure. A main baffle extends into the enclosure from a point of the sidewall between the inlet and outlet ports. The main baffle cooperates with the enclosure and the plate assembly to establish a path for the flow of fuel gas through the reformer from the inlet port to the outlet port. At least a first directing baffle extends in the enclosure from one of the sidewall and the main baffle and cooperates with the plate assembly and the enclosure to alter the gas flow path. A desired graded catalyst loading pattern has been defined for optimized thermal management of the internal reforming high temperature fuel cells so as to achieve high cell performance.

Ma, Zhiwen (Sandy Hook, CT); Venkataraman, Ramakrishnan (New Milford, CT); Novacco, Lawrence J. (Brookfield, CT)

2008-10-07T23:59:59.000Z

469

High-performance commercial building systems  

SciTech Connect

This report summarizes key technical accomplishments resulting from the three year PIER-funded R&D program, ''High Performance Commercial Building Systems'' (HPCBS). The program targets the commercial building sector in California, an end-use sector that accounts for about one-third of all California electricity consumption and an even larger fraction of peak demand, at a cost of over $10B/year. Commercial buildings also have a major impact on occupant health, comfort and productivity. Building design and operations practices that influence energy use are deeply engrained in a fragmented, risk-averse industry that is slow to change. Although California's aggressive standards efforts have resulted in new buildings designed to use less energy than those constructed 20 years ago, the actual savings realized are still well below technical and economic potentials. The broad goal of this program is to develop and deploy a set of energy-saving technologies, strategies, and techniques, and improve processes for designing, commissioning, and operating commercial buildings, while improving health, comfort, and performance of occupants, all in a manner consistent with sound economic investment practices. Results are to be broadly applicable to the commercial sector for different building sizes and types, e.g. offices and schools, for different classes of ownership, both public and private, and for owner-occupied as well as speculative buildings. The program aims to facilitate significant electricity use savings in the California commercial sector by 2015, while assuring that these savings are affordable and promote high quality indoor environments. The five linked technical program elements contain 14 projects with 41 distinct R&D tasks. 
Collectively they form a comprehensive Research, Development, and Demonstration (RD&D) program with the potential to capture large savings in the commercial building sector, providing significant economic benefits to building owners and health and performance benefits to occupants. At the same time this program can strengthen the growing energy efficiency industry in California by providing new jobs and growth opportunities for companies providing the technology, systems, software, design, and building services to the commercial sector. The broad objectives across all five program elements were: (1) To develop and deploy an integrated set of tools and techniques to support the design and operation of energy-efficient commercial buildings; (2) To develop open software specifications for a building data model that will support the interoperability of these tools throughout the building life-cycle; (3) To create new technology options (hardware and controls) for substantially reducing controllable lighting, envelope, and cooling loads in buildings; (4) To create and implement a new generation of diagnostic techniques so that commissioning and efficient building operations can be accomplished reliably and cost effectively and provide sustained energy savings; (5) To enhance the health, comfort and performance of building occupants. (6) To provide the information technology infrastructure for owners to minimize their energy costs and manage their energy information in a manner that creates added value for their buildings as the commercial sector transitions to an era of deregulated utility markets, distributed generation, and changing business practices. Our ultimate goal is for our R&D effort to have measurable market impact. This requires that the research tasks be carried out with a variety of connections to key market actors or trends so that they are recognized as relevant and useful and can be adopted by expected users. 
While some of this activity is directly integrated into our research tasks, the handoff from ''market-connected R&D'' to ''field deployment'' is still an art as well as a science and in many areas requires resources and a timeframe well beyond the scope of this PIER research program. The TAGs, PAC

Selkowitz, Stephen

2003-10-01T23:59:59.000Z

470

High-performance commercial building systems  

SciTech Connect

This report summarizes key technical accomplishments resulting from the three year PIER-funded R&D program, ''High Performance Commercial Building Systems'' (HPCBS). The program targets the commercial building sector in California, an end-use sector that accounts for about one-third of all California electricity consumption and an even larger fraction of peak demand, at a cost of over $10B/year. Commercial buildings also have a major impact on occupant health, comfort and productivity. Building design and operations practices that influence energy use are deeply engrained in a fragmented, risk-averse industry that is slow to change. Although California's aggressive standards efforts have resulted in new buildings designed to use less energy than those constructed 20 years ago, the actual savings realized are still well below technical and economic potentials. The broad goal of this program is to develop and deploy a set of energy-saving technologies, strategies, and techniques, and improve processes for designing, commissioning, and operating commercial buildings, while improving health, comfort, and performance of occupants, all in a manner consistent with sound economic investment practices. Results are to be broadly applicable to the commercial sector for different building sizes and types, e.g. offices and schools, for different classes of ownership, both public and private, and for owner-occupied as well as speculative buildings. The program aims to facilitate significant electricity use savings in the California commercial sector by 2015, while assuring that these savings are affordable and promote high quality indoor environments. The five linked technical program elements contain 14 projects with 41 distinct R&D tasks. 
Collectively they form a comprehensive Research, Development, and Demonstration (RD&D) program with the potential to capture large savings in the commercial building sector, providing significant economic benefits to building owners and health and performance benefits to occupants. At the same time this program can strengthen the growing energy efficiency industry in California by providing new jobs and growth opportunities for companies providing the technology, systems, software, design, and building services to the commercial sector. The broad objectives across all five program elements were: (1) To develop and deploy an integrated set of tools and techniques to support the design and operation of energy-efficient commercial buildings; (2) To develop open software specifications for a building data model that will support the interoperability of these tools throughout the building life-cycle; (3) To create new technology options (hardware and controls) for substantially reducing controllable lighting, envelope, and cooling loads in buildings; (4) To create and implement a new generation of diagnostic techniques so that commissioning and efficient building operations can be accomplished reliably and cost effectively and provide sustained energy savings; (5) To enhance the health, comfort and performance of building occupants; (6) To provide the information technology infrastructure for owners to minimize their energy costs and manage their energy information in a manner that creates added value for their buildings as the commercial sector transitions to an era of deregulated utility markets, distributed generation, and changing business practices. Our ultimate goal is for our R&D effort to have measurable market impact. This requires that the research tasks be carried out with a variety of connections to key market actors or trends so that they are recognized as relevant and useful and can be adopted by expected users.
While some of this activity is directly integrated into our research tasks, the handoff from ''market-connected R&D'' to ''field deployment'' is still an art as well as a science and in many areas requires resources and a timeframe well beyond the scope of this PIER research program. The TAGs, PAC and other industry partners have assisted directly in this effort.

Selkowitz, Stephen

2003-10-01T23:59:59.000Z

471

Summary of currently used wind turbine performance prediction computer codes  

DOE Green Energy (OSTI)

Information on currently used wind turbine aerodynamic/economic performance prediction codes is compiled and presented. Areas of interest to wind energy researchers that are not included in the reported codes are identified. Areas that are weak in experimental support are also identified.

Perkins, F.

1979-05-01T23:59:59.000Z

472

Modeling Incinerator Flue Train Performance with a Digital Computer  

E-Print Network (OSTI)

Furnace and flue train design data are fed in as a matrix of pressure/volume points, along with fan speed and horsepower requirements at the design point, allowing prediction of incinerator performance under a wide range of conditions so that requirements can be met under all conditions at a reasonable cost. The municipal design picture is complicated

Columbia University

473

Large Scale Computing and Storage Requirements for High Energy Physics  

E-Print Network (OSTI)

Acronyms: Argonne Leadership Computing Facility ... adaptive ... the Leadership Computing Facilities at Oak Ridge and Argonne

Gerber, Richard A.

2011-01-01T23:59:59.000Z

474

NIST, Computer Security Division, Computer Security ...  

Science Conference Proceedings (OSTI)

... Standards. ITL January 1999, Secure Web-Based Access to High Performance Computing Resources. ITL November ...

475

An integrated high performance Fastbus slave interface  

Science Conference Proceedings (OSTI)

A high performance CMOS Fastbus slave interface ASIC (Application Specific Integrated Circuit) supporting all addressing and data transfer modes defined in the IEEE 960 - 1986 standard is presented. The FAstbus Slave Integrated Circuit (FASIC) is an interface between the asynchronous Fastbus and a clock synchronous processor/memory bus. It can work stand-alone or together with a 32 bit microprocessor. The FASIC is a programmable device enabling its direct use in many different applications. A set of programmable address mapping windows can map Fastbus addresses to convenient memory addresses and at the same time act as address decoding logic. Data rates of 100 MBytes/sec to Fastbus can be obtained using an internal FIFO in the FASIC to buffer data between the two buses during block transfers. Message passing from Fastbus to a microprocessor on the slave module is supported. A compact (70 mm x 170 mm) Fastbus slave piggy back sub-card interface including level conversion between ECL and TTL signal levels has been implemented using surface mount components and the 208 pin FASIC chip.
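
The programmable address-mapping windows described in the abstract can be sketched in a few lines. The window sizes, base addresses, and names below are hypothetical illustrations of the idea, not the FASIC's actual register layout:

```python
# Sketch of programmable address-mapping windows: each window matches a
# range of Fastbus addresses and remaps matches into a convenient local
# memory region, so the window match doubles as address decoding.
# All base addresses and sizes here are hypothetical.

class AddressWindow:
    def __init__(self, fb_base, size, local_base):
        self.fb_base = fb_base        # first Fastbus address the window matches
        self.size = size              # window length in bytes
        self.local_base = local_base  # start of the remapped local region

    def matches(self, fb_addr):
        return self.fb_base <= fb_addr < self.fb_base + self.size

    def translate(self, fb_addr):
        return self.local_base + (fb_addr - self.fb_base)

def decode(windows, fb_addr):
    """Return the local address for fb_addr, or None if no window matches."""
    for w in windows:
        if w.matches(fb_addr):
            return w.translate(fb_addr)
    return None

windows = [
    AddressWindow(fb_base=0x1000_0000, size=0x1000, local_base=0x8000),
    AddressWindow(fb_base=0x2000_0000, size=0x0400, local_base=0xC000),
]
print(hex(decode(windows, 0x1000_0010)))  # falls in the first window
```

An address outside every window decodes to `None`, which corresponds to the slave not responding to that Fastbus address.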

Christiansen, J.; Ljuslin, C. (CERN/ECP, Geneva (Switzerland))

1993-08-01T23:59:59.000Z

476

Green and High Performance Factory Crafted Housing  

E-Print Network (OSTI)

In the U.S., factory-built housing greater than 400 square feet is built either to the U.S. Department of Housing and Urban Development (HUD) code for mobile homes or site-built codes for modular housing. During the last few years, as the production of HUD code housing has dwindled, many leading edge factory builders have started building modular homes to compete with site-built housing and stay in business. As part of the Building America Industrialized Housing Partnership (BAIHP) we have assisted in the design and construction of several green and high performance modular homes that Palm Harbor Homes, Florida Division (PHH) has built for the International Builders Show (IBS) in 2006, 2007, and 2008. This paper will summarize the design features and the green and energy-efficient certification processes conducted for the 2008 show homes, one of which received the very first E-Scale produced by BAIHP for the U.S. Department of Energy (DOE) Builders Challenge program.

Thomas-Rees, S.; Chasar, D.; Chandra, S.; Stroer, D.

2008-12-01T23:59:59.000Z

477

High-performance commercial building systems  

E-Print Network (OSTI)

HVAC engineers and operators to optimize energy performance of buildings; and Develop simulation-based test and optimization

Selkowitz, Stephen

2003-01-01T23:59:59.000Z

478

Apex-Map: a synthetic scalable benchmark probe to explore data access performance on highly parallel systems  

Science Conference Proceedings (OSTI)

With the increasing gap between processor, memory, and interconnect speed, the performance of scientific applications on high performance computing systems has become dominated by the ability to move global data. However, many benchmarks in the field ...
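
The idea behind a synthetic data-access probe can be illustrated with a minimal sketch that times sequential versus random accesses over the same array. This is only an illustration of the concept with arbitrary parameters, not the Apex-Map benchmark itself, and in pure Python the locality effect is far weaker than in compiled code:

```python
# Minimal sketch of a synthetic data-access probe: perform the same amount
# of work with two different access patterns (perfect spatial locality vs
# none) and compare elapsed time. Array size is arbitrary.
import random
import time

def probe(data, indices):
    """Sum data at the given indices; return (result, elapsed seconds)."""
    t0 = time.perf_counter()
    total = 0
    for i in indices:
        total += data[i]
    return total, time.perf_counter() - t0

n = 200_000
data = list(range(n))
seq = list(range(n))              # sequential: perfect spatial locality
rnd = random.sample(range(n), n)  # random permutation: no locality

sum_seq, t_seq = probe(data, seq)
sum_rnd, t_rnd = probe(data, rnd)
assert sum_seq == sum_rnd         # same work, different access order
print(f"sequential {t_seq:.4f}s  random {t_rnd:.4f}s")
```

Real probes of this kind sweep a locality parameter between the two extremes to map out the machine's data-access performance surface.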

Erich Strohmaier; Hongzhang Shan

2005-08-01T23:59:59.000Z

479

JOURNAL OF COMPUTATIONAL PHYSICS 131, 368-377 (1997) ARTICLE NO. CP965611  

E-Print Network (OSTI)

and Army High Performance Computing Research Center, University of Minnesota, Minneapolis, Minnesota 55455 ... High Performance Computing ... the most troublesome term

Kurien, Susan

480

Computing trends using graphic processor in high energy physics  

E-Print Network (OSTI)

One of the main challenges in High Energy Physics is the fast analysis of large amounts of experimental and simulated data. At the LHC at CERN, one p-p event is approximately 1 MB in size. The time taken to analyze the data and obtain results quickly depends on the available computational power. The main advantage of GPU (Graphics Processing Unit) programming over traditional CPU programming is that graphics cards deliver a great deal of computing power at a very low price. Today a huge number of applications (scientific, financial, etc.) are being ported to or developed for GPUs, including Monte Carlo tools and data analysis tools for High Energy Physics. In this paper, we present the current status and trends in HEP computing using GPUs.
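
The data volumes behind this challenge can be put in perspective with a back-of-the-envelope calculation. Only the ~1 MB event size comes from the abstract; the event rate below is a hypothetical figure chosen for illustration:

```python
# Back-of-the-envelope data-rate estimate for p-p event analysis.
# The ~1 MB event size is from the abstract; the recorded-event rate
# is a hypothetical illustrative figure, not a measured LHC number.
event_size_bytes = 1 * 1024 * 1024   # ~1 MB per p-p event
events_per_second = 1000             # hypothetical recorded-event rate

bytes_per_second = event_size_bytes * events_per_second
gb_per_hour = bytes_per_second * 3600 / 1024**3
print(f"{gb_per_hour:.0f} GiB/hour")
```

Even at this modest assumed rate the data stream runs to several terabytes per day, which is the scale that motivates offloading analysis to GPUs.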

Niculescu, Mihai

2011-01-01T23:59:59.000Z



481

Computing trends using graphic processor in high energy physics  

E-Print Network (OSTI)

One of the main challenges in High Energy Physics is the fast analysis of large amounts of experimental and simulated data. At the LHC at CERN, one p-p event is approximately 1 MB in size. The time taken to analyze the data and obtain results quickly depends on the available computational power. The main advantage of GPU (Graphics Processing Unit) programming over traditional CPU programming is that graphics cards deliver a great deal of computing power at a very low price. Today a huge number of applications (scientific, financial, etc.) are being ported to or developed for GPUs, including Monte Carlo tools and data analysis tools for High Energy Physics. In this paper, we present the current status and trends in HEP computing using GPUs.

Mihai Niculescu; Sorin-Ion Zgura

2011-06-30T23:59:59.000Z

482

Software defined radio a high performance embedded challenge  

Science Conference Proceedings (OSTI)

Wireless communication is one of the most computationally demanding workloads. It is performed by mobile terminals (cell phones) and must be accomplished by a small battery-powered system. An important goal of the wireless industry is to ...

Hyunseok Lee; Yuan Lin; Yoav Harel; Mark Woh; Scott Mahlke; Trevor Mudge; Krisztian Flautner

2005-11-01T23:59:59.000Z

483

Computational Human Performance Modeling For Alarm System Design  

SciTech Connect

The introduction of new technologies like adaptive automation systems and advanced alarm processing and presentation techniques in nuclear power plants is already having an impact on the safety and effectiveness of plant operations, and on the role of the control room operator. This impact is expected to escalate dramatically as more nuclear power utilities embark on upgrade projects to extend the lifetime of their plants. One of the most visible impacts in control rooms will be the need to replace aging alarm systems. Because most of these alarm systems use obsolete technologies, the methods, techniques, and tools that were used to produce the previous generation of alarm system designs are no longer effective and need to be updated. The same applies to the need to analyze and redefine operators' alarm-handling tasks. In the past, methods for analyzing human tasks and workload relied on crude, paper-based methods that often lacked traceability. New approaches are needed to allow analysts to model and represent the new concepts of alarm operation and human-system interaction. State-of-the-art task simulation tools are now available that offer a cost-effective and efficient method for examining operator performance under different conditions and operational scenarios. A discrete-event simulation system was used by human factors researchers at the Idaho National Laboratory to develop a generic alarm-handling model to examine operator performance with a simulated modern alarm system. It allowed analysts to evaluate alarm generation patterns as well as critical task times and human workload predicted by the system.
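
The kind of discrete-event simulation described here can be sketched with a minimal single-operator alarm-handling model. The arrival times, priorities, and fixed handling time are hypothetical, chosen only to illustrate the mechanics; they do not come from the study:

```python
# Minimal discrete-event simulation of alarm handling: alarms arrive at
# given times, and a single operator acknowledges pending alarms one at a
# time in priority order (lower number = more urgent). All scenario
# parameters are hypothetical illustrations.
import heapq

def simulate(alarms, handle_time):
    """alarms: iterable of (arrival_time, priority, name).
    Returns [(acknowledgement_completion_time, name), ...] in handling order."""
    arrivals = sorted(alarms)   # process arrival events in time order
    pending = []                # heap of (priority, arrival_time, name)
    handled = []
    free_at = 0.0               # when the operator next becomes idle
    i = 0
    while i < len(arrivals) or pending:
        if not pending:
            # operator sits idle until the next alarm arrives
            free_at = max(free_at, arrivals[i][0])
        # admit every alarm that has arrived by the time the operator is free
        while i < len(arrivals) and arrivals[i][0] <= free_at:
            t, prio, name = arrivals[i]
            heapq.heappush(pending, (prio, t, name))
            i += 1
        # handle the most urgent pending alarm
        prio, t, name = heapq.heappop(pending)
        free_at += handle_time
        handled.append((free_at, name))
    return handled

alarms = [(0.0, 2, "pump trip"), (0.0, 1, "high pressure"), (12.0, 3, "low level")]
print(simulate(alarms, handle_time=5.0))
```

Extending the model with stochastic arrival rates and per-alarm handling-time distributions is what lets such tools predict operator workload under different alarm generation patterns.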

Jacques Hugo

2012-07-01T23:59:59.000Z

484

High Performance Solar Control Office Windows  

E-Print Network (OSTI)

Lawrence Berkeley Laboratory, University of California, Berkeley, California 94701 ... This work was performed for the

King, William J.

2011-01-01T23:59:59.000Z

485

Large Scale Computing and Storage Requirements for High Energy Physics  

E-Print Network (OSTI)

of Science, Advanced Scientific Computing Research (ASCR) ... Office of Advanced Scientific Computing Research, Facilities ... Office of Advanced Scientific Computing Research (ASCR), and

Gerber, Richard A.

2011-01-01T23:59:59.000Z

486

Federal Energy Management Program: High-Performance Sustainable Building  

NLE Websites -- All DOE Office Websites (Extended Search)

High-Performance Sustainable Building Design for New Construction and Major Renovations

New construction and major renovations to existing buildings offer Federal agencies opportunities to create sustainable high-performance buildings. High-performance buildings can incorporate energy-efficient designs, sustainable siting and materials, and renewable energy technologies along with other innovative strategies. Also see Guiding Principles for Federal Leadership in High-Performance and Sustainable Buildings.

Performance-Based Design Build

Typically, architects, engineers, and project managers consider the potential to build a high-performance building to be limited by the initial cost. A different approach, performance-based design build, makes high performance the priority from start to finish. Contracts are developed that focus on both limiting construction costs and meeting performance targets. The approach is not a source of funding, but rather a strategy to make the most of limited appropriated funds.