Powered by Deep Web Technologies
Note: This page contains sample records for the topic "leadership computing facility" from the National Library of Energy Beta (NLEBeta).
While these samples are representative of the content of NLEBeta,
they are neither comprehensive nor the most current set.
We encourage you to perform a real-time search of NLEBeta
to obtain the most current and comprehensive results.


1

Argonne Leadership Computing Facility  

E-Print Network (OSTI)

on constant Q surface. (Credit: Anurag Gupta/GE Global) www.alcf.anl.gov The Leadership Computing Facility Division operates the Argonne Leadership Computing Facility (the ALCF) as part of the U.S. Department...

Kemner, Ken

2

Careers | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

Specialist: As a member of the Argonne Leadership Computing Facility's (ALCF) High Performance Computing (HPC) team, the appointee will participate in the technical operation, support...

3

Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites


4

Oak Ridge Leadership Computing Facility  

NLE Websites -- All DOE Office Websites

The OLCF was established at Oak Ridge National Laboratory in 2004 with the mission of standing up a supercomputer 100 times more powerful than the leading systems of the day.

5

Tony Tolbert | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

Tony Tolbert Consultant - Business Intelligence Argonne Leadership Computing Facility 9700 S. Cass Avenue Bldg. 240 Wkstn. 3D29 Argonne, IL 60439 630-252-6027 wtolbert@alcf.anl...

6

Web Articles | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

In the News: Supercomputers: New Software Needed (Information Week); UA Chemist Leads Supercomputer Effort to Aid Nuclear Understanding (UA News); Rewinding the universe (Deixis Magazine). Web Articles: ALCF and MCS Establish Joint Lab for Evaluating Computing Platforms. To centralize research activities aimed at evaluating future high performance computing platforms, a new joint laboratory at Argonne will provide significant opportunities for the Argonne Leadership Computing Facility (ALCF) and the Mathematics and Computer Science (MCS) Division, both located in the Theory and Computing Sciences building, to work collaboratively on prototype technologies for petascale and beyond. January 08, 2014

7

History | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

HPC at Argonne ALCF Continues a Tradition of Computing Innovation In 1949, because the computers they needed weren't yet available commercially for purchase, Argonne physicists...

8

Vesta | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

Vesta is the...

9

Intrepid | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

Intrepid has a highly scalable torus network, as well as a high-performance collective network that minimizes the bottlenecks common in simulations on large, parallel computers. Intrepid uses less power per teraflop than systems built around commodity microprocessors, resulting in greater energy efficiency and reduced operating costs. Blue Gene applications use common languages and standards-based MPI communications tools, so a wide range of science and engineering applications are straightforward to port, including those used by the computational science community for cutting-edge research

10

Scalasca | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

Scalasca is a software tool that supports the performance optimization of MPI and OpenMP parallel programs by measuring and analyzing their runtime behavior. The analysis identifies potential performance bottlenecks, in particular those concerning communication and synchronization, and...
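As a rough sketch of the typical Scalasca workflow (the three-step command form is Scalasca's documented front end; the application name, rank count, and experiment directory name here are hypothetical placeholders that vary by version and system):

```shell
# Instrument the application at build time by prefixing the compile/link command.
scalasca -instrument mpicc -O2 -o app app.c

# Run the instrumented binary under measurement; Scalasca writes an experiment directory.
scalasca -analyze mpiexec -n 128 ./app

# Open the analysis report to inspect communication and synchronization bottlenecks.
scalasca -examine epik_app_128_sum
```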

11

GROMACS | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

Building and Running GROMACS on Vesta/Mira: The GROMACS molecular dynamics package has a large number of executables. Some of them, such as luck, are just utilities that do not need to be built for the back end. Begin by building the serial version of GROMACS (i.e., the version that can run within one processor, with or without more than one thread) for the front end, and then build the parallel version (i.e., with MPI) for the back end. This way, a full set of executables is available for the front end and
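A minimal sketch of that two-pass build, using the classic GROMACS 4.x autotools interface (the install prefixes are hypothetical, and a real Blue Gene build also needs the site-documented cross-compiler wrappers for the back-end pass):

```shell
# Pass 1: serial build of all tools for the front-end (login) nodes.
./configure --prefix=$HOME/gromacs-fe
make && make install

# Pass 2: MPI-enabled build of the mdrun engine for the back-end compute nodes.
make distclean
./configure --enable-mpi --program-suffix=_mpi --prefix=$HOME/gromacs-be
make mdrun && make install-mdrun
```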

12

Oak Ridge Leadership Computing Facility Position Paper  

Science Conference Proceedings (OSTI)

This paper discusses the business, administration, reliability, and usability aspects of storage systems at the Oak Ridge Leadership Computing Facility (OLCF). The OLCF has developed key competencies in architecting and administration of large-scale Lustre deployments as well as HPSS archival systems. Additionally as these systems are architected, deployed, and expanded over time reliability and availability factors are a primary driver. This paper focuses on the implementation of the Spider parallel Lustre file system as well as the implementation of the HPSS archive at the OLCF.

Oral, H Sarp [ORNL]; Hill, Jason J [ORNL]; Thach, Kevin G [ORNL]; Podhorszki, Norbert [ORNL]; Klasky, Scott A [ORNL]; Rogers, James H [ORNL]; Shipman, Galen M [ORNL]

2011-01-01

13

Michael Papka | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

Both his laboratory leadership roles and his research interests relate to high-performance computing in support of scientific discovery. Dr. Papka holds a Senior Fellow...

14

Oak Ridge Leadership Computing Facility User Update: SmartTruck...  

NLE Websites -- All DOE Office Websites (Extended Search)

Leadership Computing Facility User Update: SmartTruck Systems Startup zooms to success improving fuel efficiency of long-haul trucks by more than 10 percent Supercomputing...

15

Accounts Policy | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

All holders of user accounts must abide by all appropriate Argonne Leadership Computing Facility and Argonne National Laboratory computing usage policies. These are described at the time of the account request and

16

Data Policy | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

Data Policy Contents: ALCF Data Confidentiality; ALCF Staff with Root Privileges; Use of Proprietary/Licensed Software; Prohibited Data; Export Control; Data Storage Systems; Home File System Space; Data/Parallel File System Space; Capacity and Retention Policies. ALCF Data Confidentiality: The Argonne Leadership Computing Facility (ALCF) network is an

17

Richard Coffey | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

a supercomputer for those groups, and participated in campus-wide IT and high-performance computing leadership efforts. Before that, he worked for ten years at the University...

18

High Performance Computing at the Oak Ridge Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

High Performance Computing at the Oak Ridge Leadership Computing Facility. Outline: Our Mission; Computer Systems: Present, Past, Future; Challenges Along the Way; Resources for Users. Our Mission: ORNL is the U.S. Department of Energy's largest science and energy laboratory: world's most powerful computing facility; nation's largest concentration of open source materials research; $1.3B budget; 4,250 employees; 3,900 research guests annually; $350 million invested in modernization; nation's most diverse energy portfolio; the $1.4B Spallation Neutron Source in operation; managing the billion-dollar U.S. ITER project. Computing Complex @ ORNL: world's most powerful computer for open science

19

ALCC Program | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

Getting Started How to Get an Allocation New User Guide Intrepid to Mira: Key Changes INCITE Program ALCC Program Director's Discretionary Program ALCC Program ASCR Leadership...

20

Maricris Lodriguito Mayes | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

Ab Initio Molecular Dynamics, Computational Material Science/Nanoscience, High Performance Computing, Reaction Mechanism and Dynamics, Theoretical and Computational Chemistry...



21

Yuri Alexeev | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

for using and enabling computational methods in chemistry and biology for high-performance computing on next-generation high-performance computers. Yuri is particularly...

22

Resources & Expertise | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

is dedicated to large-scale computation and builds on Argonne's strengths in high-performance computing software, advanced hardware architectures and applications expertise. It...

23

INCITE Program | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

5 Checks & 5 Tips for INCITE Mira Computational Readiness Assessment ALCC Program Director's Discretionary Program INCITE Program Innovative and Novel Computational Impact on...

24

Nichols Romero | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

Laboratory (2005-2007) and later worked as a Computational Scientist in the High-Performance Computing Modernization Program (2007-2008) for the Department of Defense. Romero was...

25

Powering Research | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

and disk storage capacity. As we move toward the next major challenge in high-performance computing, exascale levels of computation, no doubt the benefits of partnering will grow...

26

New User Guide | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

New User Guide: Step 1. Request...

27

Accelerate Your Vision | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

world's most capable resources available to researchers. The ALCF has the high performance computing resources and expertise to enable major research breakthroughs leading to...

28

About ALCF | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

and the largest-scale systems. Accelerating Transitional Discovery High-performance computing is becoming increasingly important as more scientists and engineers use...

29

Software and Libraries | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

Allinea DDT +ddt L Multithreaded, multiprocess source code debugger for high performance computing. bgqstack @default I A tool to debug and provide postmortem analysis of...

30

Scott Parker | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

at the National Center for Supercomputing Applications, where he focused on high-performance computing and scientific applications. At Argonne since 2008, he works on performance...

31

Argonne Leadership Computing Facility (ALCF) | U.S. DOE Office of Science  

Office of Science (SC) Website

Argonne Leadership Computing Facility (ALCF). Contact Information: Advanced Scientific Computing Research, U.S. Department of Energy, SC-21/Germantown Building, 1000 Independence Ave., SW, Washington, DC 20585. P: (301) 903-7486 F: (301)

32

Pullback Policy | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

In an effort to ensure that valuable ALCF computing resources are used judiciously, a pullback policy has been instituted. Projects granted allocations under the INCITE and ALCC programs that have not used a significant amount of their allocation will be evaluated and adjusted during the year, following the policies outlined on this page.

33

Machine Overview | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

Blue Gene/Q systems are composed of login nodes, I/O nodes, and compute nodes. Login Nodes: Login and compile nodes are IBM POWER7-based systems running Red Hat Linux and are the user's interface to a Blue Gene/Q system. This is where users log in, edit files, compile, and submit jobs. These are shared resources

34

BGP: Eureka / Gadzooks | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

Researchers often analyze the huge data sets generated on Intrepid by converting them into visual representations on ALCF's Eureka, a system featuring a large installation of NVIDIA Quadro Plex S4 external graphics processing units (GPUs). The ALCF also operates Gadzooks, a visualization test and deployment system. Eureka: 100 compute nodes, each with two 2.0 GHz quad-core Xeon servers and 32 GB RAM; 200 NVIDIA Quadro FX5600 GPUs in 50 S4s; memory: more than 3.2 terabytes of RAM

35

Software Policy | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

ALCF Resource Software Use: All software used on ALCF computers must be properly acquired and used according to its licensing. Possession or use of illegally copied software is prohibited. Likewise, users shall not copy copyrighted software, except as permitted by the owner of the copyright. Currently,

36

BGP: Code Saturne | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

What is Code_Saturne? Code_Saturne is EDF's general-purpose computational fluid dynamics (CFD) software. EDF stands for Électricité de France, one of the world's largest producers of electricity. Obtaining Code_Saturne: Code_Saturne is an open-source code, freely available to CFD practitioners and other scientists. You can download the latest version from the Code_Saturne Official Forum web page, where you can also follow discussions about installation problems, general usage, examples, etc. Building Code_Saturne for Blue Gene/P: The version currently available on Intrepid is the last official stable

37

Accounts & Access | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

Account Information: All computing carried out on the ALCF systems is associated with a user "account." This account is used to log in to the login servers and run jobs on the resources. Using CRYPTOCards: Useful information to guide you in using and troubleshooting your CRYPTOCard.

38

System Overview | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

Machine Overview is a reference for the login and compile nodes, I/O nodes, and compute nodes of the BG/Q system. Machine Partitions is a reference for the way that Mira, Vesta, and Cetus are partitioned and discusses the network topology of the partitions.

39

Visualization Clusters | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

Tukey is the ALCF's newest analysis and visualization cluster. Equipped with state-of-the-art graphics processing units (GPUs), Tukey converts computational data from Mira into high-resolution visual representations. The resulting images, videos, and animations help users to better analyze and understand the data generated by Mira. Tukey can also be used for statistical analysis, helping to pinpoint trends in the simulation data. Additionally, the system is capable of preprocessing efforts, such as meshing, to assist users preparing for Mira simulations. Tukey shares the Mira network and parallel file system, enabling direct access to Mira-generated results. Configuration: two 2 GHz 8-core AMD Opteron CPUs per node

40

Running Jobs | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

Running Jobs Contents: Job Submission; Submitting a Script Job; Sub-block Script Jobs; Multiple Consecutive Runs within a Script Job; Settings; Environment Variables; Script Environment; Program and Argument Length Limit; Job Dependencies; Thread Stack Size; Verbose Setting for Runjob; How do I get each line of the output labeled with the MPI rank that it came from?; Mapping of MPI Tasks to Cores
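Jobs on the ALCF Blue Gene systems are submitted through the Cobalt scheduler; a minimal submission might look like the following sketch (the project name and executable are hypothetical placeholders, and node counts and modes depend on the machine):

```shell
# Submit a 512-node, 60-minute job; -A names the allocation project,
# --mode sets the number of ranks per node on BG/Q.
qsub -n 512 -t 60 -A MyProject --mode c16 ./my_app.exe

# Check the job's position in the queue.
qstat -u $USER
```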



41

User Authentication Policy | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

Users of the Argonne production systems are required to use a CRYPTOCard one-time-password, multifactor authentication system. This document explains the policies users must follow regarding CRYPTOCard tokens for accessing the Argonne resources. Multifactor Authentication: "Authentication systems are frequently described by the authentication

42

BGP: GPAW | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

What is GPAW? GPAW is a density-functional theory (DFT) Python code based on the projector-augmented wave (PAW) method. It uses real-space uniform grids and multigrid methods, or atom-centered basis functions. Obtaining GPAW: GPAW is an open-source code that can be downloaded at https://wiki.fysik.dtu.dk/gpaw/ It relies on the following Python libraries: Atomic Simulation Environment (ASE) - https://wiki.fysik.dtu.dk/ase/ and NumPy - http://numpy.scipy.org/ plus the standard math libraries BLAS, LAPACK, and ScaLAPACK. Building GPAW for Blue Gene/P: Build instructions for GPAW can be found at https://wiki.fysik.dtu.dk/gpaw/install/BGP/surveyor.html

43

BGP: MADNESS | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

Overview: MADNESS Project on Google Code; MADNESS Project at ORNL. Downloading MADNESS: MADNESS is available under the GNU General Public License v2. You can download the source from Google Code like this: svn checkout http://m-a-d-n-e-s-s.googlecode.com/svn/local/trunk m-a-d-n-e-s-s-read-only MADNESS on Blue Gene/P: MADNESS currently exposes many shortcomings in the IBM XL C++ compiler and thus can only be compiled with the GNU C++ compiler. See below for current and past XL C++ compiler problems.

44

Machine Partitions | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

Mira: As our production machine, Mira has queues very similar to those on our BG/P system, Intrepid. In the prod-capability queue, partition sizes of 8192, 12288, 16384, 32768, and 49152 nodes are available. All partitions have a full torus network. The max runtime of jobs submitted to the

45

gprof Profiling Tools | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

gprof Profiling Tools Contents: Introduction; Profiling on the Blue Gene; Enabling Profiling; Collecting Profile Information; Profiling Threaded Applications; Using gprof; Routine Level Flat Profile; Line Level Flat Profile; Call Graph Analysis; Routine Execution Count List; Annotated Source Listing; Issues in Interpreting Profile Data; Profiling Concepts; Programs in Memory

46

Debugging & Profiling | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

Initial setups: Core file settings - this page contains some environment variables that allow you to control core file creation and contents. Using VNC with a Debugger - when displaying an X11 client (e.g., TotalView) remotely over the network, interactive response is typically slow. Using the VNC server can often help improve the situation.

47

IBM References | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

Contents: IBM Redbooks; A2 Processor Manual; QPX Vector Instruction Set Architecture; XL Compiler Documentation; MASS Documentation. IBM Redbooks: IBM System Blue Gene Solution: Blue Gene/Q Application Development Manual - application and library developers want this one; it documents options for MPI, OpenMP, and other features of interest to most users. IBM System Blue Gene Solution: Blue Gene/Q Code Development and Tools Interface - low-level tools; developers may find this useful.

48

Data Transfer | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

Using Globus Online. Using GridFTP. Data Transfer: The Blue Gene/Q will connect to other research institutions using a total of 100 Gbit/s of public network connectivity (10 Gbit/s during early access). This allows scientists to transfer datasets to and from other institutions over fast research networks such as the Energy Sciences Network (ESnet) and the Metropolitan Research and Education Network (MREN). Data Transfer Node Overview: A total of 12 data transfer nodes (DTNs) will be available to all Mira

49

Introducing Challenger | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

Introducing Challenger The Blue Gene/P resource, Challenger, is the new home for the prod-devel job submission queue. Moving the prod-devel queue to Challenger clears the way for more capability jobs on Intrepid. Challenger shares the same environment as Intrepid and is intended for small, short, interactive debugging and test runs. Production jobs are not

50

ALCF Acknowledgment Policy | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

ALCF Acknowledgment Policy By applying for ALCF resources you agree to acknowledge ALCF in all publications based on work done with those resources. Following is a sample

51

The Argonne Leadership Computing Facility 2010 annual report.  

SciTech Connect

Researchers found more ways than ever to conduct transformative science at the Argonne Leadership Computing Facility (ALCF) in 2010. Both familiar initiatives and innovative new programs at the ALCF are now serving a growing, global user community with a wide range of computing needs. The Department of Energy's (DOE) INCITE Program remained vital in providing scientists with major allocations of leadership-class computing resources at the ALCF. For calendar year 2011, 35 projects were awarded 732 million supercomputer processor-hours for computationally intensive, large-scale research projects with the potential to significantly advance key areas in science and engineering. Argonne also continued to provide Director's Discretionary allocations - 'start up' awards - for potential future INCITE projects. And DOE's new ASCR Leadership Computing (ALCC) Program allocated resources to 10 ALCF projects, with an emphasis on high-risk, high-payoff simulations directly related to the Department's energy mission, national emergencies, or for broadening the research community capable of using leadership computing resources. While delivering more science today, we've also been laying a solid foundation for high performance computing in the future. After a successful DOE Lehman review, a contract was signed to deliver Mira, the next-generation Blue Gene/Q system, to the ALCF in 2012. The ALCF is working with the 16 projects that were selected for the Early Science Program (ESP) to enable them to be productive as soon as Mira is operational. Preproduction access to Mira will enable ESP projects to adapt their codes to its architecture and collaborate with ALCF staff in shaking down the new system. We expect the 10-petaflops system to stoke economic growth and improve U.S. competitiveness in key areas such as advancing clean energy and addressing global climate change. 
Ultimately, we envision Mira as a stepping-stone to exascale-class computers that will be faster than petascale-class computers by a factor of a thousand. Pete Beckman, who served as the ALCF's Director for the past few years, has been named director of the newly created Exascale Technology and Computing Institute (ETCi). The institute will focus on developing exascale computing to extend scientific discovery and solve critical science and engineering problems. Just as Pete's leadership propelled the ALCF to great success, we know that the ETCi will benefit immensely from his expertise and experience. Without question, the future of supercomputing is in good hands. I would like to thank Pete for all his effort over the past two years, during which he oversaw the establishment of ALCF2, the deployment of the Magellan project, and increases in utilization, availability, and the number of projects using ALCF1. He managed the rapid growth of ALCF staff and made the facility what it is today. All the staff and users are better for Pete's efforts.

Drugan, C. (LCF)

2011-05-09T23:59:59.000Z

52

Oak Ridge Leadership Computing Facility (OLCF) | U.S. DOE Office of Science  

Office of Science (SC) Website

Advanced Scientific Computing Research (ASCR) ASCR Home About Research Facilities Accessing ASCR Supercomputers Oak Ridge Leadership Computing Facility (OLCF) Argonne Leadership Computing Facility (ALCF) National Energy Research Scientific Computing Center (NERSC) Energy Sciences Network (ESnet) Research & Evaluation Prototypes (REP) Innovative & Novel Computational Impact on Theory and Experiment (INCITE) ASCR Leadership Computing Challenge (ALCC) Science Highlights Benefits of ASCR Funding Opportunities Advanced Scientific Computing Advisory Committee (ASCAC) News & Resources Contact Information Advanced Scientific Computing Research U.S. Department of Energy SC-21/Germantown Building 1000 Independence Ave., SW Washington, DC 20585 P: (301) 903-7486 F: (301)

53

CONTACT Argonne Leadership Computing Facility | www.alcf.anl.gov | info@alcf.anl.gov | (877) 737-8615 LEADERSHIP-CLASS COMPUTING  

E-Print Network (OSTI)

Leadership Computing Facility (ALCF) was created and exists today as a preeminent global resource. Argonne Leadership Computing Facility Continues a Tradition of Computing Innovation--a tradition that continues today at the ALCF. The seedbed for such groundbreaking software as MPI, PETSc, PVFS

Kemner, Ken

54

High Performance Computing Facility Operational Assessment, FY 2010 Oak Ridge Leadership Computing Facility  

SciTech Connect

Oak Ridge National Laboratory's (ORNL's) Cray XT5 supercomputer, Jaguar, kicked off the era of petascale scientific computing in 2008 with applications that sustained more than a thousand trillion floating point calculations per second - or 1 petaflop. Jaguar continues to grow even more powerful as it helps researchers broaden the boundaries of knowledge in virtually every domain of computational science, including weather and climate, nuclear energy, geosciences, combustion, bioenergy, fusion, and materials science. Their insights promise to broaden our knowledge in areas that are vitally important to the Department of Energy (DOE) and the nation as a whole, particularly energy assurance and climate change. The science of the 21st century, however, will demand further revolutions in computing, supercomputers capable of a million trillion calculations a second - 1 exaflop - and beyond. These systems will allow investigators to continue attacking global challenges through modeling and simulation and to unravel longstanding scientific questions. Creating such systems will also require new approaches to daunting challenges. High-performance systems of the future will need to be codesigned for scientific and engineering applications with best-in-class communications networks and data-management infrastructures and teams of skilled researchers able to take full advantage of these new resources. The Oak Ridge Leadership Computing Facility (OLCF) provides the nation's most powerful open resource for capability computing, with a sustainable path that will maintain and extend national leadership for DOE's Office of Science (SC). The OLCF has engaged a world-class team to support petascale science and to take a dramatic step forward, fielding new capabilities for high-end science. 
This report highlights the successful delivery and operation of a petascale system and shows how the OLCF fosters application development teams, developing cutting-edge tools and resources for next-generation systems.

Bland, Arthur S Buddy [ORNL; Hack, James J [ORNL; Baker, Ann E [ORNL; Barker, Ashley D [ORNL; Boudwin, Kathlyn J. [ORNL; Kendall, Ricky A [ORNL; Messer, Bronson [ORNL; Rogers, James H [ORNL; Shipman, Galen M [ORNL; White, Julia C [ORNL

2010-08-01T23:59:59.000Z

56

Argonne Leadership Computing Facility 2011 annual report : Shaping future supercomputing.  

Science Conference Proceedings (OSTI)

The ALCF's Early Science Program aims to prepare key applications for the architecture and scale of Mira and to solidify libraries and infrastructure that will pave the way for other future production applications. Two billion core-hours have been allocated to 16 Early Science projects on Mira. The projects, in addition to promising delivery of exciting new science, are all based on state-of-the-art, petascale, parallel applications. The project teams, in collaboration with ALCF staff and IBM, have undertaken intensive efforts to adapt their software to take advantage of Mira's Blue Gene/Q architecture, which, in a number of ways, is a precursor to future high-performance-computing architecture. The Argonne Leadership Computing Facility (ALCF) enables transformative science that solves some of the most difficult challenges in biology, chemistry, energy, climate, materials, physics, and other scientific realms. Users partnering with ALCF staff have reached research milestones previously unattainable due to the ALCF's world-class supercomputing resources and expertise in computational science. In 2011, the ALCF's commitment to providing outstanding science and leadership-class resources was honored with several prestigious awards. Research on multiscale brain blood flow simulations was named a Gordon Bell Prize finalist. Intrepid, the ALCF's BG/P system, ranked No. 1 on the Graph 500 list for the second consecutive year. The next-generation BG/Q prototype again topped the Green500 list. Skilled experts at the ALCF enable researchers to conduct breakthrough science on the Blue Gene system in key ways. The Catalyst Team matches project PIs with experienced computational scientists to maximize and accelerate research in their specific scientific domains. The Performance Engineering Team facilitates the effective use of applications on the Blue Gene system by assessing and improving the algorithms used by applications and the techniques used to implement those algorithms.
The Data Analytics and Visualization Team lends expertise in tools and methods for high-performance, post-processing of large datasets, interactive data exploration, batch visualization, and production visualization. The Operations Team ensures that system hardware and software work reliably and optimally; system tools are matched to the unique system architectures and scale of ALCF resources; the entire system software stack works smoothly together; and I/O performance issues, bug fixes, and requests for system software are addressed. The User Services and Outreach Team offers frontline services and support to existing and potential ALCF users. The team also provides marketing and outreach to users, DOE, and the broader community.

Papka, M.; Messina, P.; Coffey, R.; Drugan, C. (LCF)

2012-08-16T23:59:59.000Z

57

High Performance Computing Facility Operational Assessment, FY 2011 Oak Ridge Leadership Computing Facility  

SciTech Connect

Oak Ridge National Laboratory's Leadership Computing Facility (OLCF) continues to deliver the most powerful resources in the U.S. for open science. At 2.33 petaflops peak performance, the Cray XT Jaguar delivered more than 1.5 billion core hours in calendar year (CY) 2010 to researchers around the world for computational simulations relevant to national and energy security; advancing the frontiers of knowledge in physical sciences and areas of biological, medical, environmental, and computer sciences; and providing world-class research facilities for the nation's science enterprise. Scientific achievements by OLCF users range from collaboration with university experimentalists to produce a working supercapacitor that uses atom-thick sheets of carbon materials to finely determining the resolution requirements for simulations of coal gasifiers and their components, thus laying the foundation for development of commercial-scale gasifiers. OLCF users are pushing the boundaries with software applications sustaining more than one petaflop of performance in the quest to illuminate the fundamental nature of electronic devices. Other teams of researchers are working to resolve predictive capabilities of climate models, to refine and validate genome sequencing, and to explore the most fundamental materials in nature - quarks and gluons - and their unique properties. Details of these scientific endeavors - not possible without access to leadership-class computing resources - are detailed in Section 4 of this report and in the INCITE in Review. Effective operations of the OLCF play a key role in the scientific missions and accomplishments of its users. This Operational Assessment Report (OAR) will delineate the policies, procedures, and innovations implemented by the OLCF to continue delivering a petaflop-scale resource for cutting-edge research. 
The 2010 operational assessment of the OLCF yielded recommendations that have been addressed (Reference Section 1) and where appropriate, changes in Center metrics were introduced. This report covers CY 2010 and CY 2011 Year to Date (YTD) that unless otherwise specified, denotes January 1, 2011 through June 30, 2011. User Support remains an important element of the OLCF operations, with the philosophy 'whatever it takes' to enable successful research. Impact of this center-wide activity is reflected by the user survey results that show users are 'very satisfied.' The OLCF continues to aggressively pursue outreach and training activities to promote awareness - and effective use - of U.S. leadership-class resources (Reference Section 2). The OLCF continues to meet and in many cases exceed DOE metrics for capability usage (35% target in CY 2010, delivered 39%; 40% target in CY 2011, 54% January 1, 2011 through June 30, 2011). The Schedule Availability (SA) and Overall Availability (OA) for Jaguar were exceeded in CY2010. Given the solution to the VRM problem the SA and OA for Jaguar in CY 2011 are expected to exceed the target metrics of 95% and 90%, respectively (Reference Section 3). Numerous and wide-ranging research accomplishments, scientific support, and technological innovations are more fully described in Sections 4 and 6 and reflect OLCF leadership in enabling high-impact science solutions and vision in creating an exascale-ready center. Financial Management (Section 5) and Risk Management (Section 7) are carried out using best practices approved of by DOE. The OLCF has a valid cyber security plan and Authority to Operate (Section 8). The proposed metrics for 2012 are reflected in Section 9.

Baker, Ann E [ORNL; Bland, Arthur S Buddy [ORNL; Hack, James J [ORNL; Barker, Ashley D [ORNL; Boudwin, Kathlyn J. [ORNL; Kendall, Ricky A [ORNL; Messer, Bronson [ORNL; Rogers, James H [ORNL; Shipman, Galen M [ORNL; Wells, Jack C [ORNL; White, Julia C [ORNL

2011-08-01T23:59:59.000Z

58

The Argonne Leadership Computing  

E-Print Network (OSTI)

Leadership Computing Facility (ALCF) was created and exists today as a preeminent global resource. Argonne Leadership Computing Facility Continues a Tradition of Computing Innovation--a tradition that continues today at the ALCF. The seedbed for such groundbreaking software as MPI, PETSc, PVFS

Kemner, Ken

59

Computing the Dark Universe | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

capability for critical cosmological probes by exploiting next-generation high performance computing architectures. The simulations will study the clustering of matter in the...

60

High Performance Computing Facility Operational Assessment, CY 2011 Oak Ridge Leadership Computing Facility  

Science Conference Proceedings (OSTI)

Oak Ridge National Laboratory's Leadership Computing Facility (OLCF) continues to deliver the most powerful resources in the U.S. for open science. At 2.33 petaflops peak performance, the Cray XT Jaguar delivered more than 1.4 billion core hours in calendar year (CY) 2011 to researchers around the world for computational simulations relevant to national and energy security; advancing the frontiers of knowledge in physical sciences and areas of biological, medical, environmental, and computer sciences; and providing world-class research facilities for the nation's science enterprise. Users reported more than 670 publications this year arising from their use of OLCF resources. Of these we report the 300 in this review that are consistent with guidance provided. Scientific achievements by OLCF users cut across all scales, from atomic to molecular to large-scale structures. At the atomic scale, researchers discovered that the anomalously long half-life of Carbon-14 can be explained by calculating, for the first time, the very complex three-body interactions between all the neutrons and protons in the nucleus. At the molecular scale, researchers combined experimental results from LBL's light source and simulations on Jaguar to discover how DNA replication continues past a damaged site so a mutation can be repaired later. Other researchers combined experimental results from ORNL's Spallation Neutron Source and simulations on Jaguar to reveal the molecular structure of ligno-cellulosic material used in bioethanol production. This year, Jaguar has been used to do billion-cell CFD calculations to develop shock wave compression turbo machinery as a means to meet DOE goals for reducing carbon sequestration costs. General Electric used Jaguar to calculate the unsteady flow through turbo machinery to learn what efficiencies the traditional steady flow assumption is hiding from designers.
Even a 1% improvement in turbine design can save the nation billions of gallons of fuel.

Baker, Ann E [ORNL; Barker, Ashley D [ORNL; Bland, Arthur S Buddy [ORNL; Boudwin, Kathlyn J. [ORNL; Hack, James J [ORNL; Kendall, Ricky A [ORNL; Messer, Bronson [ORNL; Rogers, James H [ORNL; Shipman, Galen M [ORNL; Wells, Jack C [ORNL; White, Julia C [ORNL; Hudson, Douglas L [ORNL

2012-02-01T23:59:59.000Z


61

Oak Ridge Leadership Computing Facility How the Jaguar supercomputer helps assure America's energy security  

E-Print Network (OSTI)

on Theory and Experiment (INCITE) program, jointly managed by Leadership Computing Facilities at Oak Ridge works best before they build. Reduce nuclear waste. One way to reduce nuclear waste is to fuel a fast reactor with it. But the enormous expense of experiments has slowed development of a commercially viable

62

Argonne Leadership Computing Facility | www.alcf.anl.gov | info@alcf.anl.gov | (877) 737-8615 Materials Science  

E-Print Network (OSTI)

CONTACT Argonne Leadership Computing Facility | www.alcf.anl.gov | info@alcf.anl.gov | (877) 737-8615 ... and computational readiness. For more information about ALCC and other programs at the ALCF, visit: http://www.alcf.anl.gov. ALCC: ASCR Leadership Computing Challenge, U.S. Department of Energy, Argonne

Kemner, Ken

63

ORNL/TM-2007/44 Leadership Computing Facility  

E-Print Network (OSTI)

E.10. Single fuel assembly of a sodium-cooled, fast-spectrum nuclear reactor; reactors, separations reprocessing facilities, and fuel fabrication/storage facilities. Acronyms: CTEM, collisionless trapped electron mode; CY, calendar year; DFT, density functional theory

64

Oak Ridge Leadership Computing Facility User Update: SmartTruck Systems |  

NLE Websites -- All DOE Office Websites (Extended Search)

Leadership Computing Facility User Update: SmartTruck Systems. Startup zooms to success, improving fuel efficiency of long-haul trucks by more than 10 percent. Supercomputing simulations at Oak Ridge National Laboratory enabled SmartTruck Systems engineers to develop the UnderTray System, some components of which are shown here. The system dramatically reduces drag - and increases fuel mileage - in long-haul trucks. Image: Michael Matheson, Oak Ridge National Laboratory

65

How to Get an Allocation | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

in the nation and provides leading scientists with next-generation, high-performance computing resources for breakthrough research to address global challenges. The ALCF...

66

IBM References for BG/P | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

Blue Gene/P System Administration, SG24-7417; IBM System Blue Gene Solution: High Performance Computing Toolkit for Blue Gene/P; Universal Performance Counters Unit User Manual V3.0...

67

Intrepid/Challenger/Surveyor | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

Intrepid/Challenger/Surveyor. The ALCF houses several IBM Blue Gene/P supercomputers, among the world's fastest computing platforms. Intrepid has a highly scalable torus network, as well as a high-performance collective network that minimizes the bottlenecks common in simulations on large, parallel computers. Intrepid uses less power per teraflop than

68

Tuning MPI on BG/Q | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

Tuning MPI on BG/Q. MPI Standard: The MPI standard is a formal specification document that is backed by the MPI Forum but not by an official standardization body (e.g., ISO or ANSI). Fortunately, it has wide vendor support in HPC and is available on all common platforms. Please see this page for all of the MPI standardization documents,

69

Connect & Log In | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

Connect & Log In Users log into ALCF resources via ssh. ssh mira.alcf.anl.gov -l Resources that are accessible via ssh are vesta.alcf.anl.gov, mira.alcf.anl.gov, surveyor.alcf.anl.gov, intrepid.alcf.anl.gov, cetus.alcf.anl.gov, challenger.alcf.anl.gov, tukey.alcf.anl.gov, eureka.alcf.anl.gov, and gadzooks.alcf.anl.gov.
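As a convenience for the ssh logins described above, a host alias can be kept in ~/.ssh/config. This is a generic OpenSSH sketch; the username shown is a placeholder, not an actual ALCF account:

```
# ~/.ssh/config -- sketch; "alcfuser" is a hypothetical username
Host mira
    HostName mira.alcf.anl.gov
    User alcfuser
```

With such an entry, `ssh mira` behaves like `ssh -l alcfuser mira.alcf.anl.gov`; analogous Host blocks can be added for the other ALCF login hosts.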

70

How to Queue a Job | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

How to Queue a Job Using the Job Resource Manager on BG/Q: Commands, Options and Examples This document provides examples of how to submit jobs on our BG/Q system. It also provides examples of commands that can be used to query the status of jobs, what partitions are available, etc. For an introduction to using the job resource manager and running jobs on BG/Q, see Running Jobs on the

71

Tuning and Analysis Utilities (TAU) | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

Tuning and Analysis Utilities (TAU) References TAU Project Site TAU Instrumentation Methods TAU Compilation Options TAU Fortran Instrumentation FAQ TAU Leap to Petascale 2009 Presentation

72

Prior BG/P Driver Information | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

Prior BG/P Driver Information This page contains information about the drivers and efixes currently installed on the ALCF resources Intrepid and Surveyor, as well as

73

How to Queue a Job | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

How to Queue a Job Using the Job Resource Manager on BG/P: Commands, Options and Examples This document provides examples of how to submit jobs on the Argonne BG/P system. It also provides examples of commands that can be used to query the status of jobs, what partitions are available, etc. For an introduction to using the job resource manager and running jobs on BG/P, see Running Jobs on the BG/P System. How To Examples and Results

74

Disk Space Quota Management | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

Account Information Accounts and Access FAQ Connect & Log In Using CRYPTOCards SSH Keys on Surveyor Disk Space Quota Management Allocations Decommissioning of BG/P Systems and Resources Blue Gene/Q Versus Blue Gene/P Mira/Cetus/Vesta Intrepid/Challenger/Surveyor Tukey Eureka / Gadzooks Policies Documentation Feedback Please provide feedback to help guide us as we continue to build documentation for our new computing resource. [Feedback Form] Disk Space Quota Management As you manage your project's disk space quota, it's important to remember that users you approve to be added as project members are also added to the project's Unix Group. Unix Group members have the ability to write to the project directory and to access project data. You can manually add or remove Unix

75

NERSC 2011: High Performance Computing Facility Operational Assessment for the National Energy Research Scientific Computing Center  

E-Print Network (OSTI)

the Argonne and Oak Ridge Leadership Computing Facilities … like Leadership Computing Facilities at Argonne and Oak

Antypas, Katie

2013-01-01T23:59:59.000Z

76

Porting Charm++/NAMD to IBM Blue Gene/Q Wei Jiang Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

Porting Charm++/NAMD to IBM Blue Gene/Q Porting Charm++/NAMD to IBM Blue Gene/Q Wei Jiang Argonne Leadership Computing Facility 7th March NAMD_esp NAMD - Parallel molecular dynamics code designed for high-performance simulation of large biomolecular systems Portable to all popular supercomputing platforms Great scalability based on Charm++ parallel objects Scientific aims on Blue Gene/Q Ensemble run that launches a large number of replicas concurrently - mainly for energetic simulation High-throughput simulation for large-scale systems ~100M atoms Requirements for charm++ New communication layer that supports parallel/parallel runs Enable charm++ programming paradigm on Parallel Active Messaging Interface (PAMI) Parallel Structure of NAMD Hybrid force/spatial decomposition Adaptive Overlap of Communication and Computation

77

HTC Mode on BG/P Systems | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

BG/P compute nodes normally run parallel programs in a mode referred to as High Performance Computing (HPC) Mode. In HPC mode, a physical partition may run only one (1) parallel...

78

HPCT HPM on BG/P Systems | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

HPM on BG/P Systems References IBM System Blue Gene Solution: High Performance Computing Toolkit for Blue Gene/P - IBM Redbook describing HPCT and other performance tools High...

79

IBM HPCT on BG/P Systems | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

IBM HPCT on BG/P Systems References IBM System Blue Gene Solution: High Performance Computing Toolkit for Blue Gene/P - IBM Redbook describing HPCT and other performance tools...

80

UPC Hardware on BG/P Systems | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

V3.0 Overview of the IBM Blue Gene/P project IBM System Blue Gene Solution: High Performance Computing Toolkit for Blue Gene/P Introduction This section gives a low-level...

81

Frederico Fiuza receives 2013 ASCR Leadership Computing Challenge...  

NLE Websites -- All DOE Office Websites (Extended Search)

Ignition of Fusion Targets." The award will provide 19.5 million CPU hours at the Argonne Leadership Computing Facility using the IBM Blue Gene/Q computer "Mira." Mira has 786,432...

82

A SLPEC/EQP Method for MPECs | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

A SLPEC/EQP Method for MPECs A SLPEC/EQP Method for MPECs Event Sponsor: Mathematics and Computing Science - LANS Seminar Start Date: Dec 4 2013 - 3:00pm Building/Room: Building 240/Room 4301 Location: Argonne National Laboratory Speaker(s): Todd Munson Sven Leyffer Speaker(s) Title: Argonne National Laboratory, MCS Host: Stefan Wild Event Website: http://www.mcs.anl.gov/research/LANS/events/listn/detail.php?id=2249 We discuss a new method for mathematical programs with complementarity constraints that is globally convergent to B-stationary points. The method solves a linear program with complementarity constraints to obtain an estimate of the active set. It then fixes the activities and solves an equality-constrained quadratic program to obtain fast convergence. The method uses a filter to promote global convergence. We establish

83

Measuring Flops on BG/P Systems | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

Tuning and Analysis Utilities (TAU) Rice HPC Toolkit IBM HPCT Mpip gprof Profiling Tools Darshan PAPI High Level UPC API Low Level UPC API UPC Hardware BG/P dgemm Performance Tuning MPI on BG/P Performance FAQs IBM References Software and Libraries Tukey Eureka / Gadzooks Policies Documentation Feedback Please provide feedback to help guide us as we continue to build documentation for our new computing resource. [Feedback Form] Measuring Flops on BG/P Systems Generally speaking, Blue Gene/P does not have a single command to return the job's number of floating point operations per second (Flops). The problem can partly be solved by using the high-level hardware counter interface library, located in /soft/apps/UPC. An example program measures performance of a simple Y(N) = Y(N) + a * X(N)
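The snippet above describes measuring Flops for a simple Y(N) = Y(N) + a * X(N) kernel. As a rough illustration of the arithmetic being counted (a hand-tallied sketch, not the ALCF UPC counter library itself; the function names are hypothetical), each element costs one multiply and one add, so the kernel performs 2*N floating point operations:

```python
import time

def daxpy(a, x, y):
    # Y(i) = Y(i) + a * X(i): one multiply and one add per element,
    # i.e. 2*N floating point operations for vectors of length N.
    for i in range(len(x)):
        y[i] += a * x[i]
    return y

def measure_flops(n=100_000):
    # Time the kernel and divide the hand-counted flop total by the
    # elapsed time to get an approximate Flops rate.
    x = [1.0] * n
    y = [2.0] * n
    start = time.perf_counter()
    daxpy(3.0, x, y)
    elapsed = time.perf_counter() - start
    flops = 2 * n  # one add + one multiply per element
    return flops, flops / elapsed
```

Hardware counter libraries automate exactly this bookkeeping, but counting by hand for a known kernel is a useful cross-check of counter output.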

84

Large Scale Computing and Storage Requirements for High Energy Physics  

E-Print Network (OSTI)

Acronyms Argonne Leadership Computing Facility adaptive … the Leadership Computing Facilities at Oak Ridge and Argonne

Gerber, Richard A.

2011-01-01T23:59:59.000Z

85

Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

publishing. Nazarewicz, W., Schunck, N., Wild, S.,* "Quality Input for Microscopic Fission Theory," Stockpile Stewardship Quarterly, May 2012, vol. 2, no. 1, pp. 6-7. ALCF | 2012...

86

ASCR Leadership Computing Challenge (ALCC) | U.S. DOE Office of Science  

Office of Science (SC) Website

ASCR ASCR Leadership Computing Challenge (ALCC) Advanced Scientific Computing Research (ASCR) ASCR Home About Research Facilities Accessing ASCR Supercomputers Oak Ridge Leadership Computing Facility (OLCF) Argonne Leadership Computing Facility (ALCF) National Energy Research Scientific Computing Center (NERSC) Energy Sciences Network (ESnet) Research & Evaluation Prototypes (REP) Innovative & Novel Computational Impact on Theory and Experiment (INCITE) ASCR Leadership Computing Challenge (ALCC) ALCC Application Details ALCC Past Awards Frequently Asked Questions Science Highlights Benefits of ASCR Funding Opportunities Advanced Scientific Computing Advisory Committee (ASCAC) News & Resources Contact Information Advanced Scientific Computing Research U.S. Department of Energy

87

The Magellan Final Report on Cloud Computing  

E-Print Network (OSTI)

at the Argonne Leadership Computing Facility (ALCF) and the …

Coghlan, Susan

2013-01-01T23:59:59.000Z

88

The Magellan Final Report on Cloud Computing  

E-Print Network (OSTI)

of the Argonne Leadership Computing Facility at Argonne … at the Argonne Leadership Computing Facility (ALCF) and the

Coghlan, Susan

2013-01-01T23:59:59.000Z

89

Exascale for Energy: The Role of Exascale Computing in Energy Security  

E-Print Network (OSTI)

on the Argonne Leadership Computing Facility's IBM Blue … of the Argonne Leadership Computing Facility's BG/P using

Authors, Various

2010-01-01T23:59:59.000Z

90

Coreprocessor | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

Coreprocessor Coreprocessor Coreprocessor is a basic parallel debugging tool that can be used to debug problems at all levels (hardware, kernel, and application). It is particularly useful when working with a large set of core files since it reveals where processors aborted, grouping them together automatically (for example, 9 died here, 500 were here, etc.). See the instructions below for using the Coreprocessor tool. References The Coreprocessor tool (IBM System Blue Gene Solution: Blue Gene/Q System Administration, Chapter 22) Location The coreprocessor.pl script is located at: /bgsys/drivers/ppcfloor/coreprocessor/bin/coreprocessor.pl or for user convenience, the SoftEnv key +utility_paths (that you get in your default environment by putting @default in your ~/.soft file) allows
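The grouping behavior described above ("9 died here, 500 were here") can be sketched in a few lines. This is an illustrative model of the idea only, not Coreprocessor's implementation; the function name and stack data are hypothetical:

```python
from collections import Counter

def group_by_abort_location(stacks):
    # Group core-file call stacks by their innermost frame, mimicking the
    # way Coreprocessor reports how many processors aborted at each spot.
    # `stacks` is a list of call stacks, innermost frame first.
    return Counter(stack[0] for stack in stacks if stack)

# Hypothetical example: three ranks aborted inside MPI_Wait, one in memcpy.
counts = group_by_abort_location([
    ["MPI_Wait", "exchange_halo", "main"],
    ["MPI_Wait", "exchange_halo", "main"],
    ["MPI_Wait", "exchange_halo", "main"],
    ["memcpy", "pack_buffer", "main"],
])
```

With thousands of core files, collapsing them into a handful of abort locations like this is what makes large-scale debugging tractable.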

91

Openspeedshop | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

Openspeedshop Openspeedshop Introduction OpenSpeedshop is an open-source performance tool for the analysis of applications using: Sampling Experiments Support for Callstack Analysis Hardware Performance Counters MPI Profiling and Tracing I/O Profiling and Tracing Floating Point Exception Analysis A more detailed list of the individual experiment names and functionality definition follows: pcsamp Periodic sampling the program counters gives a low-overhead view of where the time is being spent in the user application. usertime Periodic sampling the call path allows the user to view inclusive and exclusive time spent in application routines. It also allows the user to see which routines called which routines. Several views are available, including the "hot" path.
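The pcsamp experiment described above works by periodically sampling where the program is executing. A minimal, purely illustrative sketch of that idea in Python (a background thread samples the main thread's current function; this is not how OpenSpeedshop is implemented, and all names here are hypothetical):

```python
import sys
import threading
import time
from collections import Counter

def sample_main_thread(duration=0.3, interval=0.01):
    # Periodically record which function the main thread is executing and
    # tally the samples; the resulting histogram is a statistical view of
    # where time is spent, like a pcsamp experiment.
    main_id = threading.main_thread().ident
    samples = Counter()

    def sampler():
        end = time.time() + duration
        while time.time() < end:
            frame = sys._current_frames().get(main_id)
            if frame is not None:
                samples[frame.f_code.co_name] += 1
            time.sleep(interval)

    t = threading.Thread(target=sampler)
    t.start()
    return t, samples

def busy_work():
    # A deliberately slow loop for the sampler to observe.
    total = 0
    for i in range(10_000_000):
        total += i
    return total
```

Sampling trades exactness for low overhead: the application runs at nearly full speed while the profile converges statistically, which is why it is the default first look in most HPC performance tools.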

92

Oak Ridge Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

to perform breakthrough research in climate, materials, alternative energy sources and energy storage, chemistry, nuclear physics, astrophysics, quantum mechanics, and the gamut...

93

Publications | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

production processes," Acta Phys. Polon Supp., October 2013, vol. 6, Cracow, Poland, INSPIRE, 2013, pp. 257-262. 10.5506/APhysPolBSupp.6.257 S. Bogner, A. Bulgac, J....

94

Presentations | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

Desc Apply Presented at: Data Management: Tools and Best Practices for Intrepid's Decommissioning and Beyond Managing Data at ALCF November 2013 Data Management: Tools and Best...

95

Tukey | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

Compiling and Linking ParaView on Tukey Using Cobalt on Tukey VisIt on Tukey Eureka Gadzooks Policies Documentation Feedback Please provide feedback to help guide us as we...

96

MADNESS | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

Data Storage & File Systems Compiling & Linking Queueing & Running Jobs Data Transfer Debugging & Profiling Performance Tools & APIs Software & Libraries Boost CPMD CodeSaturne...

97

Cetus | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

Cetus Cetus and Vesta Cetus shares the same software environment and file systems as Mira. The primary role of Cetus is to run small jobs in order to debug problems that occurred...

98

BGPM | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

BGPM Introduction bgpm is the system level API for accessing the hardware performance counters on the BG/Q nodes. Location bgpm is a standard part of the BG/Q software and is...

99

Surveyor | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

Surveyor Surveyor Data and Networking Surveyor is used for tool and application porting, software testing and optimization, and systems software development. It is an IBM Blue...

100

Mira | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

Resources & Expertise Mira Vesta Intrepid Challenger Surveyor Visualization Clusters Data & Networking Our Teams User Advisory Council Mira Submitted by Anonymous on November 27,...

101

GAMESS | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

Performance Tools & APIs Software & Libraries Boost CPMD CodeSaturne GAMESS GPAW GROMACS LAMMPS MADNESS QBox IBM References Intrepid/Challenger/Surveyor Tukey Eureka / Gadzooks...

102

gdb | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

Allinea DDT Core File Settings Determining Memory Use Using VNC with a Debugger bgq_stack gdb Coreprocessor TotalView on BG/Q Systems Performance Tools & APIs Software & Libraries...

103

PAPI | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

Queueing & Running Jobs Data Transfer Debugging & Profiling Performance Tools & APIs Tuning MPI on BG/Q Tuning and Analysis Utilities (TAU) HPCToolkit HPCTW mpiP gprof Profiling...

104

Allocations | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

Allocation Management Determining Allocation Requirements Querying Allocations Using cbank Decommissioning of BG/P Systems and Resources Blue Gene/Q Versus Blue Gene/P Mira/Cetus...

105

Darshan | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

Performance Tools & APIs Tuning MPI on BG/Q Tuning and Analysis Utilities (TAU) HPCToolkit HPCTW mpiP gprof Profiling Tools Darshan PAPI BG/Q Performance Counters BGPM...

106

GPAW | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

Software & Libraries Boost CPMD CodeSaturne GAMESS GPAW GROMACS LAMMPS MADNESS QBox IBM References Intrepid/Challenger/Surveyor Tukey Eureka / Gadzooks Policies Documentation...

107

Programs | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

Featured Science Slice of the translationally-invariant proton hexadecapole density of the ground state of 8Li, Nuclear Structure and Nuclear Reactions James Vary Allocation...

108

Boost | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

Boost Overview Boost, a collection of modern, peer-reviewed C++ libraries, is installed in: /soft/libraries/boost/current/cnk-gcc/current -- for use with GCC C++ compilers:...

109

HPCToolkit | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

HPCToolkit References HPCToolkit Website HPCViewer Download Site HPCT Documentation Page Introduction HPCToolkit is an open-source suite of tools for profile-based performance...

110

HPCTW | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

HPCTW Introduction HPCTW is a set of libraries that may be linked to in order to gather MPI usage and hardware performance counter information. Location HPCTW is installed in:...

111

Challenger | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

Challenger Challenger Data and Networking Challenger is the home for the prod-devel job submission queue. Moving the prod-devel queue to Challenger clears the way for capability...

112

LAMMPS | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

System Overview Data Storage & File Systems Compiling & Linking Queueing & Running Jobs Data Transfer Debugging & Profiling Performance Tools & APIs Software & Libraries Boost...

113

Vesta | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

Resources & Expertise Mira Cetus Vesta Intrepid Challenger Surveyor Visualization Clusters Data and Networking Our Teams User Advisory Council Vesta Apply for time on Vesta Vesta...

114

Policies | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

Eureka Gadzooks Policies Pullback Policy ALCF Acknowledgment Policy Account Sponsorship & Retention Policy Accounts Policy Data Policy INCITE Quarterly Report Policy ALCC...

115

Reservations | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

System Overview Data Storage & File Systems Compiling & Linking Queueing & Running Jobs Reservations Cobalt Job Control How to Queue a Job Running Jobs FAQs Queuing and Running on BG/Q Systems Data...

116

QBox | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

Data Transfer Debugging & Profiling Performance Tools & APIs Software & Libraries Boost CPMD CodeSaturne GAMESS GPAW GROMACS LAMMPS MADNESS QBox IBM References Intrepid...

118

The Magellan Final Report on Cloud Computing  

E-Print Network (OSTI)

Leadership Computing Facility (ALCF) and the National Energy … HPC compute cluster into the ALCF Magellan testbed, allowing

Coghlan, Susan

2013-01-01T23:59:59.000Z

119

The Magellan Final Report on Cloud Computing  

E-Print Network (OSTI)

research used resources of the Argonne Leadership Computing Facility at Argonne National Laboratory, which is supported … used resources of the Argonne Leadership Computing Facility

Coghlan, Susan

2013-01-01T23:59:59.000Z

120

Technology Transfer Program - DOE Designated Facilities  

Spallation Neutron Source; Leadership Computing Facility * Pacific Northwest National Laboratory. Environmental Molecular Sciences Laboratory (EMSL)

121

Research Support Facility (RSF): Leadership in Building Performance (Brochure)  

DOE Green Energy (OSTI)

This brochure/poster provides information on the features of the Research Support Facility including a detailed illustration of the facility with call outs of energy efficiency and renewable energy technologies. Imagine an office building so energy efficient that its occupants consume only the amount of energy generated by renewable power on the building site. The building, the Research Support Facility (RSF) occupied by the U.S. Department of Energy's National Renewable Energy Laboratory (NREL) employees, uses 50% less energy than if it were built to current commercial code and achieves the U.S. Green Building Council's Leadership in Energy and Environmental Design (LEED®) Platinum rating. With 19% of the primary energy in the U.S. consumed by commercial buildings, the RSF is changing the way commercial office buildings are designed and built.

Not Available

2011-09-01T23:59:59.000Z

122

Facilities and Capabilities | ornl.gov  

NLE Websites -- All DOE Office Websites (Extended Search)

three petascale computing facilities: the Oak Ridge Leadership Computing Facility (OLCF), managed for DOE; the National Institute for Computational Sciences (NICS) computing...

123

Leadership | Department of Energy  

Energy.gov (U.S. Department of Energy (DOE)) Indexed Site

Leadership Leadership Organization Leadership Budget Laboratories & Facilities History Benefits of Research Business & Funding Opportunities ESS&H Jobs & Internships Education...

124

bgp_stack | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

bgp_stack bgp_stack Decoding Core Files The core files on the Blue Gene/P system are text files and may be read using the 'more' command. Most of the detailed information provided in the core file is not of immediate use for determining why a program failed. But the core file does contain a function call stack record that can help you identify what line of what routine was executing when the error occurred. In the core file the call stack record is at the end of the file bracketed by +++STACK and ---STACK. The call stack contains a list of instruction addresses; for these to be useful, the addresses need to be translated back to a source file and line. This may be done with the bgp_stack utility: bgp_stack [executable] [corefile] The lightweight core files produced by the runtime system do not contain
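Since the core files are plain text with the call stack bracketed by +++STACK and ---STACK, pulling the raw addresses out is simple string processing. A minimal sketch (the sample core text and function name are invented for illustration; the extracted addresses would then be handed to a tool like bgp_stack or addr2line for translation):

```python
def extract_stack_addresses(core_text):
    # Return the instruction addresses recorded between the +++STACK and
    # ---STACK markers of a BG/P-style text core file. Returns [] if the
    # markers are absent, as with some lightweight core files.
    lines = core_text.splitlines()
    try:
        start = lines.index("+++STACK")
        end = lines.index("---STACK")
    except ValueError:
        return []
    return [line.strip() for line in lines[start + 1:end] if line.strip()]

# Invented example core file contents:
sample_core = """\
some core file metadata
+++STACK
0x01001a30
0x010003c4
0x01000188
---STACK
"""
```

Scripting this extraction is handy when a job produces hundreds of core files and you want to batch-translate or compare their stacks.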

125

Petascale, Adaptive CFD | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

Petascale, Adaptive CFD Petascale, Adaptive CFD Petascale, Adaptive CFD PI Name: Kenneth Jansen PI Email: jansenke@colorado.edu Institution: U. Colorado-Boulder Allocation Program: ESP Allocation Hours at ALCF: 150 Million Year: 2010 to 2013 Research Domain: Engineering The aerodynamic simulations proposed will involve modeling of active flow control based on synthetic jet actuation that has been shown experimentally to produce large-scale flow changes (e.g., re-attachment of separated flow or virtual aerodynamic shaping of lifting surfaces) from micro-scale input (e.g., a 0.1 W piezoelectric disk resonating in a cavity alternately pushes/pulls out/in the fluid through a small slit to create small-scale vortical structures that interact with, and thereby dramatically alter, the cross flow). This is a process that has yet to be understood fundamentally.

126

David Martin | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

David Martin Industrial Outreach Lead David Martin Argonne National Laboratory 9700 South Cass Avenue Building 240 - Rm. 3126 Argonne, IL 60439 630-252-0929 dem...

127

Public Informational Materials | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

News & Events News & Events Web Articles In the News Upcoming Events Past Events Informational Materials Photo Galleries Public Informational Materials Annual Reports ALCF 2010 Annual Report ALCF 2010 Annual Report May 2011 ALCF 2010 Annual Report 2011 annual report ALCF 2011 Annual Report May 2012 2011 ALCF Annual Report 2012 ALCF Annual Report ALCF 2012 Annual Report July 2013 2012 ALCF Annual Report Fact Sheets ALCF Fact Sheet ALCF Fact Sheet September 2013 ALCF Fact Sheet Blue Gene/Q Systems and Supporting Resources Blue Gene/Q Systems and Supporting Resources June 2013 Blue Gene/Q Systems and Supporting Resources Early Science Program Projects Early Science Program Projects July 2011 Early Science Program Projects Promotional Brochures INCITE in Review INCITE in Review March 2012 INCITE in Review

128

bgq_stack | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

bgq_stack bgq_stack Location The bgq_stack script is located at: /soft/debuggers/scripts/bin/bgq_stack or for user convenience, the SoftEnv key +utility_paths (that you get in your default environment by putting @default in your ~/.soft file) allows you to use directly: bgq_stack List the possible options using -h: > bgq_stack -h Using bgq_stack on BG/Q to decode core files When a Blue Gene/Q program terminates abnormally, the system generates multiple core files, plain text files that can be viewed with the vi editor. Most of the detailed information provided in the core file is not of immediate use for determining why a program failed. But the core file does contain a function call stack record that can help identify what line of what routine was executing when the error occurred. The call stack

129

User Services | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

Services Services User Support Machine Status Presentations Training & Outreach User Survey User Services The ALCF User Assistance Center provides support for ALCF resources. The center's normal support hours are 9 a.m. until 5 p.m. (Central time) Monday through Friday, exclusive of holidays. Contact Us Email: support@alcf.anl.gov Telephone: 630-252-3111 866-508-9181 Service Desk: Building 240, 2-D-15/16 Argonne National Laboratory 9700 South Cass Avenue Argonne, IL 60439 Trouble Ticket System You can check the status of your existing tickets as well as send new support tickets. To do this, log in with your ALCF username and ALCF web accounts system password. (This is the password you chose when you requested your account, or which was assigned to you and sent with your new

130

Core File Settings | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

Core File Settings Core File Settings The following environment variables control core file creation and contents. Specify regular (non-script) jobs using the qsub argument --env (Note: two dashes). Specify script jobs (--mode script) using the --envs (Note: two dashes) or --exp_env (Note: two dashes) options of runjob. For additional information about setting environment variables in your job, visit http://www.alcf.anl.gov/user-guides/running-jobs#environment-variables. Generation The following environment variables control conditions of core file generation and naming: BG_COREDUMPONEXIT=1 Creates a core file when the application exits. This is useful when the application performed an exit() operation and the cause and location of the exit() is not known. BG_COREDUMPONERROR=1
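The variables above are passed to a regular job via the qsub `--env` argument (two dashes). As a small sketch of assembling such a command line (the colon-separated `VAR=value` form and the surrounding qsub options are assumptions based on common Cobalt usage, not taken from this page):

```python
def qsub_env_args(env):
    # Format environment variables as a Cobalt qsub '--env' argument,
    # joining VAR=value pairs with colons (assumed separator).
    return "--env " + ":".join(f"{k}={v}" for k, v in env.items())

# Hypothetical job submission enabling core dumps on exit and on error:
cmd = "qsub -n 512 -t 30 " + qsub_env_args({
    "BG_COREDUMPONEXIT": "1",
    "BG_COREDUMPONERROR": "1",
}) + " ./a.out"
```

Building the argument string programmatically avoids quoting mistakes when several BG_* settings are combined in one submission.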

131

BGP: GAMESS | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

GAMESS GAMESS What is GAMESS? The General Atomic and Molecular Electronic Structure System (GAMESS) is a general ab initio quantum chemistry package. Obtaining GAMESS Follow the instructions at the Gordon research group website: http://www.msg.chem.iastate.edu/gamess/ Building GAMESS for Blue Gene/P A number of modifications were necessary: comp.actvte - Builds the actvte.x binary; mostly for convenience compall.bgp, comp.bgp, ddi/compddi.bgp - Replacement comp scripts lked.bgp - Replacement lked script source/zunix.c - Minor modification to #include directives ddi/src/ddi_bgp.c - Completely new file gms.bgp.py - Python script for running GAMESS You will need to modify the $GMSPATH environment variable in those scripts. These modifications are available here Media:Gamess.bgp.tar.

132

Data Transfer | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

Data Transfer Data Transfer The Blue Gene/P connects to other research institutions using a total of 20 Gb/s of public network connectivity. This allows scientists to transfer datasets to and from other institutions over fast research networks such as the Energy Sciences Network (ESnet) and the Metropolitan Research and Education Network (MREN). Data Transfer Node Overview Two data transfer nodes are available to all Intrepid users, providing the ability to perform wide- and local-area data transfers. dtn01.intrepid.alcf.anl.gov (alias for gs1.intrepid.alcf.anl.gov) dtn02.intrepid.alcf.anl.gov (alias for gs2.intrepid.alcf.anl.gov) Data Transfer Utilities HSI/HTAR HSI and HTAR allow users to transfer data to and from HPSS Using HPSS on Intrepid GridFTP GridFTP provides the ability to transfer data between trusted sites such

133

Using CRYPTOCards | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

Token Enabling Your CRYPTOCard Token Logging in with Your CRYPTOCard Token Troubleshooting Your CRYPTOCard Resetting CRYPTOCard PIN CRYPTOCard Return Back to top CRYPTOCard...

134

Account Information | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

to interact with the ALCF Login Servers. This is the normal state for all accounts. Disabled: An account that still exists on the system (that is, the account continues to be...

135

Shaping Future Supercomputing Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

A second mesh independency validation was performed, but this time it used a simpler, two-phase-flow single burner with three levels of refinement (4-, 8-, and 16-million...

136

Douglas Waldron | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

Douglas Waldron Senior Data Architect Douglas Waldron Argonne National Laboratory 9700 South Cass Avenue Building 240 - Rm. 3122 Argonne, IL 60439 630-252-2884 dwaldron...

137

Mark Hereld | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

programming models and application performance analysis. Hereld is a principal architect of distributed analysis environments that support collaborative creation of...

138

User Guides | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

Allocations Decommissioning of BG/P Systems and Resources Blue Gene/Q Versus Blue Gene/P Mira/Cetus/Vesta Intrepid/Challenger/Surveyor Tukey Eureka / Gadzooks Policies...

139

CONTACT ? Argonne Leadership Computing Facility | industry...  

NLE Websites -- All DOE Office Websites (Extended Search)

will enable: Unprecedented reductions in emissions and noise in aircraft engines and wind turbines, Improved consumer products through a better understanding of how suds...

140

Allinea DDT | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

under Queue Submission Parameters. DDT Run DDT Queue Submission Parameters The Change... button allows users to modify some information set by default on System Settings and Job...

141

Argonne Leadership Computing Facility Argonne National Laboratory  

NLE Websites -- All DOE Office Websites (Extended Search)

million hours to use Large Eddy Simulation (LES) as an acoustic diagnostic and design tool to spur innovation in next-generation quieter aircraft engines and wind turbines,...

142

Staff Directory | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

Administration 630-252-0212 jstover@alcf.anl.gov Tony Tolbert Consultant - Business Intelligence AIG 630-252-6027 wtolbert@alcf.anl.gov Brian Toonen Consultant/Software...

143

Science at ALCF | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

Hours: 40 Million more science Science at ALCF Allocation Program - Any - INCITE ALCC ESP Director's Discretionary Year -Year 2008 2009 2010 2011 2012 2013 Research Domain - Any...

144

Argonne Leadership Computing Facility Argonne National Laboratory  

NLE Websites -- All DOE Office Websites (Extended Search)

Katherine Riley .... 7 Researchers Get Time on Mira Test and Development Prototypes at ESP Workshop ... 8 Call for Proposals for Mira Access...

145

Cobalt Job Control | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

& File Systems Compiling & Linking Queueing & Running Jobs Reservations Cobalt Job Control How to Queue a Job Running Jobs FAQs Queuing and Running on BG/Q Systems Data...

146

Determining Memory Use | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

Allinea DDT Core File Settings Determining Memory Use Using VNC with a Debugger bgqstack gdb Coreprocessor TotalView on BG/Q Systems Performance Tools & APIs Software & Libraries...

147

mpiP | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

mpiP Introduction mpiP is an easy-to-use library that, when linked with an application, reports information on MPI usage and performance. Information reported includes the MPI...

148

Bash Shell | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

Bash Shell A popular shell provided with the operating system, bash is available as a login shell. To change your login shell use your ALCF personal account page. Version(s):...

149

OLCF | Oak Ridge Leadership Computing Facility | ORNL  

NLE Websites -- All DOE Office Websites (Extended Search)

of new phenomena with ramifications for how we live our lives. Machines like Titan and activities like the Titan project will be the vehicles that allow us to explore...

150

Argonne Leadership Computing Facility - Fact Sheet | Argonne...  

NLE Websites -- All DOE Office Websites (Extended Search)

science - science that will change our world through major advances in biology, chemistry, energy, engineering, climate studies, astrophysics and more. ALCF Fact Sheet Sept....

151

Training & Outreach | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

User Services User Support Machine Status Presentations Training & Outreach Getting Started Videoconference Data Management Webinar Argonne Training Program on Extreme-Scale...

152

Visiting Us | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

Visitor Registration Form Visitor Registration Form As a national laboratory, formal registration is REQUIRED for all visitors coming to Argonne National Laboratory. Visitors must...

153

Machine Status | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

Machine Status Science on Mira Cores Core Hours Atomistic Simulations of Nanoscale Oxides and Oxide Interfaces 65536 1297201.9256901 Lattice QCD 16384 28881.396622179 MockBOSS :...

154

Contact Us | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

Contact Us Michael Papka Division Director (630) 252-1556 Susan Coghlan Deputy Division Director (630) 252-1637 Richard Coffey Strategic Advisor to the Director (630) 252-2725...

155

Early Science Program | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

Early Science Program The goals of the Early Science Program (ESP) were to prepare key applications for the architecture and scale of Mira, and to solidify libraries and...

156

Our Teams | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

Need Help? support@alcf.anl.gov 630-252-3111 866-508-9181 Expert Teams World-Class Expertise and Project Lifecycle Assistance To maximize your research, the ALCF has assembled a...

157

User Advisory Council | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

Resources & Expertise Mira Cetus Vesta Intrepid Challenger Surveyor Visualization Clusters Data and Networking Our Teams User Advisory Council User Advisory Council The User...

158

Data and Networking | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

Networking Data and Networking Data Storage The ALCF's data storage system is used to retain the data generated by simulations and visualizations. Disk storage provides...

159

Allocation Management | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

if there is an active allocation, check Running Jobs. For information to run the query or email support, check alcf.anl.gov and ask for all active allocations. Determining...

160

BG/P: Repast HPC | Argonne Leadership Computing Facility

NLE Websites -- All DOE Office Websites (Extended Search)

(TIME) -n (NUMBEROFPROCESSES) zombiexl.exe .config.props .model.props Modifying the Model An important aspect of the Agent-Based Modeling paradigm is the flexibility with...

161

Argonne Leadership Computing Facility Argonne National Laboratory  

NLE Websites -- All DOE Office Websites (Extended Search)

... 2 Rapid Procedure Unites Quantum Chemistry with Artificial Intelligence ... 4 Lithium-Air: The "Mount Everest" of Batteries for Electric Vehicles...

162

PREPARING FOR EXASCALE ORNL Leadership Computing Facility  

E-Print Network (OSTI)

) representatives, and International Nuclear Information System (INIS) representatives from the following source ... 2.2.9 Nuclear Energy ... 2.2.10 Nuclear Physics

163

PREPARING FOR EXASCALE: ORNL Leadership Computing Application Requirements and Strategy  

SciTech Connect

In 2009 the Oak Ridge Leadership Computing Facility (OLCF), a U.S. Department of Energy (DOE) facility at the Oak Ridge National Laboratory (ORNL) National Center for Computational Sciences (NCCS), elicited petascale computational science requirements from leading computational scientists in the international science community. This effort targeted science teams whose projects received large computer allocation awards on OLCF systems. A clear finding of this process was that in order to reach their science goals over the next several years, multiple projects will require computational resources in excess of an order of magnitude more powerful than those currently available. Additionally, for the longer term, next-generation science will require computing platforms of exascale capability in order to reach DOE science objectives over the next decade. It is generally recognized that achieving exascale in the proposed time frame will require disruptive changes in computer hardware and software. Processor hardware will become necessarily heterogeneous and will include accelerator technologies. Software must undergo the concomitant changes needed to extract the available performance from this heterogeneous hardware. This disruption promises to be substantial, not unlike the change to the message passing paradigm in the computational science community over 20 years ago. Since technological disruptions take time to assimilate, we must aggressively embark on this course of change now, to ensure that science applications and their underlying programming models are mature and ready when exascale computing arrives. This includes initiation of application readiness efforts to adapt existing codes to heterogeneous architectures, support of relevant software tools, and procurement of next-generation hardware testbeds for porting and testing codes. 
The 2009 OLCF requirements process identified numerous actions necessary to meet this challenge: (1) Hardware capabilities must be advanced on multiple fronts, including peak flops, node memory capacity, interconnect latency, interconnect bandwidth, and memory bandwidth. (2) Effective parallel programming interfaces must be developed to exploit the power of emerging hardware. (3) Science application teams must now begin to adapt and reformulate application codes to the new hardware and software, typified by hierarchical and disparate layers of compute, memory and concurrency. (4) Algorithm research must be realigned to exploit this hierarchy. (5) When possible, mathematical libraries must be used to encapsulate the required operations in an efficient and useful way. (6) Software tools must be developed to make the new hardware more usable. (7) Science application software must be improved to cope with the increasing complexity of computing systems. (8) Data management efforts must be readied for the larger quantities of data generated by larger, more accurate science models. Requirements elicitation, analysis, validation, and management comprise a difficult and inexact process, particularly in periods of technological change. Nonetheless, the OLCF requirements modeling process is becoming increasingly quantitative and actionable, as the process becomes more developed and mature, and the process this year has identified clear and concrete steps to be taken. 
This report discloses (1) the fundamental science case driving the need for the next generation of computer hardware, (2) application usage trends that illustrate the science need, (3) application performance characteristics that drive the need for increased hardware capabilities, (4) resource and process requirements that make the development and deployment of science applications on next-generation hardware successful, and (5) summary recommendations for the required next steps within the computer and computational science communities.

Joubert, Wayne [ORNL; Kothe, Douglas B [ORNL; Nam, Hai Ah [ORNL

2009-12-01T23:59:59.000Z

164

Scientific Application Requirements for Leadership Computing at the Exascale  

Science Conference Proceedings (OSTI)

The Department of Energy's Leadership Computing Facility, located at Oak Ridge National Laboratory's National Center for Computational Sciences, recently polled scientific teams that had large allocations at the center in 2007, asking them to identify computational science requirements for future exascale systems (capable of an exaflop, or 10^18 floating point operations per second). These requirements are necessarily speculative, since an exascale system will not be realized until the 2015-2020 timeframe, and are expressed where possible relative to a recent petascale requirements analysis of similar science applications [1]. Our initial findings, which beg further data collection, validation, and analysis, did in fact align with many of our expectations and existing petascale requirements, yet they also contained some surprises, complete with new challenges and opportunities. First and foremost, the breadth and depth of science prospects and benefits on an exascale computing system are striking. Without a doubt, they justify a large investment, even with its inherent risks. The possibilities for return on investment (by any measure) are too large to let us ignore this opportunity. The software opportunities and challenges are enormous. In fact, as one notable computational scientist put it, the scale of questions being asked at the exascale is tremendous and the hardware has gotten way ahead of the software. We are in grave danger of failing because of a software crisis unless concerted investments and coordinating activities are undertaken to reduce and close this hardware-software gap over the next decade. Key to success will be a rigorous requirement for natural mapping of algorithms to hardware in a way that complements (rather than competes with) compilers and runtime systems. 
The level of abstraction must be raised, and more attention must be paid to functionalities and capabilities that incorporate intent into data structures, are aware of memory hierarchy, possess fault tolerance, exploit asynchronism, and are power-consumption aware. On the other hand, we must also provide application scientists with the ability to develop software without having to become experts in the computer science components. Numerical algorithms are scattered broadly across science domains, with no one particular algorithm being ubiquitous and no one algorithm going unused. Structured grids and dense linear algebra continue to dominate, but other algorithm categories will become more common. A significant increase is projected for Monte Carlo algorithms, unstructured grids, sparse linear algebra, and particle methods, and a relative decrease foreseen in fast Fourier transforms. These projections reflect the expectation of much higher architecture concurrency and the resulting need for very high scalability. The new algorithm categories that application scientists expect to be increasingly important in the next decade include adaptive mesh refinement, implicit nonlinear systems, data assimilation, agent-based methods, parameter continuation, and optimization. The attributes of leadership computing systems expected to increase most in priority over the next decade are (in order of importance) interconnect bandwidth, memory bandwidth, mean time to interrupt, memory latency, and interconnect latency. The attributes expected to decrease most in relative priority are disk latency, archival storage capacity, disk bandwidth, wide area network bandwidth, and local storage capacity. These choices by application developers reflect the expected needs of applications or the expected reality of available hardware. 
One interpretation is that the increasing priorities reflect the desire to increase computational efficiency to take advantage of increasing peak flops [floating point operations per second], while the decreasing priorities reflect the expectation that computational efficiency will not increase. Per-core requirements appear to be relatively static, while aggregate requirements will grow with the system. This projection is consistent with a r

Ahern, Sean [ORNL; Alam, Sadaf R [ORNL; Fahey, Mark R [ORNL; Hartman-Baker, Rebecca J [ORNL; Barrett, Richard F [ORNL; Kendall, Ricky A [ORNL; Kothe, Douglas B [ORNL; Mills, Richard T [ORNL; Sankaran, Ramanan [ORNL; Tharrington, Arnold N [ORNL; White III, James B [ORNL

2007-12-01T23:59:59.000Z

165

Director's Discretionary (DD) Program | Argonne Leadership Computing...  

NLE Websites -- All DOE Office Websites (Extended Search)

is not required. Review Process: Projects must demonstrate a need for high-performance computing resources. Reviewed by ALCF. Application Period: ongoing (available year...

166

User Facility Agreement Form  

NLE Websites -- All DOE Office Websites (Extended Search)

5. Which Argonne user facility will be hosting you? * Argonne Leadership Computing Facility (ALCF) Advanced Photon Source (APS) Argonne Tandem Linear...

167

Petascale Adaptive Computational Fluid Dynamics | Argonne Leadership  

NLE Websites -- All DOE Office Websites (Extended Search)

Petascale Adaptive Computational Fluid Dynamics Petascale Adaptive Computational Fluid Dynamics PI Name: Kenneth Jansen PI Email: jansen@rpi.edu Institution: Rensselaer Polytechnic Institute The specific aim of this request for resources is to examine scalability and robustness of our code on BG/P. We have confirmed that, during the flow solve phase, our CFD flow solver does exhibit perfect strong scaling to the full 32k cores on our local machine (CCNI-BG/L at RPI) but this will be our first access to BG/P. We are also eager to study the performance of the adaptive phase of our code. Some aspects have scaled well on BG/L (e.g., refinement has produced adaptive meshes that take a 17 million element mesh and perform local adaptivity on 16k cores to match a requested size field to produce a mesh exceeding 1 billion elements) but other aspects (e.g.,

168

User Facilities | Argonne National Laboratory  

NLE Websites -- All DOE Office Websites (Extended Search)

User Facilities Advanced Photon Source Argonne Leadership Computing Facility Argonne Tandem Linear Accelerator System Center for Nanoscale Materials Electron Microscopy Center...

169

Biological Sciences Facility and Computational Sciences Facility  

E-Print Network (OSTI)

on PNNL's campus since 1997. Combined, the two facilities house about 300 staff who support PNNL ... replacing laboratory and office space PNNL has been using on the south end of the nearby Hanford Site ... financed the new buildings and is leasing them to Battelle, which operates PNNL for DOE. January 2010

170

Argonne Leadership Computing Facility www.alcf.anl.gov Katherine Riley

E-Print Network (OSTI)

Argonne Leadership Computing Facility - www.alcf.anl.gov Katherine Riley Manager and feasibility Managed By INCITE management committee (ALCF & OLCF) DOE Office. Communication with ALCF is extremely helpful. You can request time on Mira

Kemner, Ken

171

National facility for advanced computational science: A sustainable path to scientific discovery  

Science Conference Proceedings (OSTI)

Lawrence Berkeley National Laboratory (Berkeley Lab) proposes to create a National Facility for Advanced Computational Science (NFACS) and to establish a new partnership between the American computer industry and a national consortium of laboratories, universities, and computing facilities. NFACS will provide leadership-class scientific computing capability to scientists and engineers nationwide, independent of their institutional affiliation or source of funding. This partnership will bring into existence a new class of computational capability in the United States that is optimal for science and will create a sustainable path towards petaflops performance.

Simon, Horst; Kramer, William; Saphir, William; Shalf, John; Bailey, David; Oliker, Leonid; Banda, Michael; McCurdy, C. William; Hules, John; Canning, Andrew; Day, Marc; Colella, Philip; Serafini, David; Wehner, Michael; Nugent, Peter

2004-04-02T23:59:59.000Z

172

Computational Materials Design Facility (CMDF) - TMS  

Science Conference Proceedings (OSTI)

Jul 31, 2007 ... The Computational Materials Design Facility is a "simulation...

173

Characterizing I/O performance on leadership-class systems |...  

NLE Websites -- All DOE Office Websites (Extended Search)

HPC production applications. Initially run at the IBM Blue Gene systems at the Argonne Leadership Computing Facility, Darshan recently was adapted by Argonne researchers,...

174

CNST NanoFab Facility User Computer Security and Usage ...  

Science Conference Proceedings (OSTI)

Page 1. CNST NanoFab Facility User Computer Security and Usage Policy ...

2013-07-31T23:59:59.000Z

175

2-6 Molecular Science Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

MSCF Overview MSCF Overview Molecular Science Computing Facility The Molecular Science Computing Facility (MSCF) supports a wide range of computational activities in environmental molecular research, from benchmark calculations on small molecules to reliable calculations on large molecules, and from solids to simulations of large biomolecules and reactive chemical transport modeling. The MSCF provides an integrated production computing environment with links to external facilities within the U.S. Department of Energy (DOE), collaborating universities, and industry. Instrumentation & Capabilities * MPP2. Production cluster of 980 HP rx2600 nodes, 1960 1.5 gigahertz IA64 processors, 450 terabytes local disk, 6.8 terabytes memory, 11.8 teraflops * Lustre. Shared cluster

176

National Center for Computational Sciences | ornl.gov  

NLE Websites -- All DOE Office Websites (Extended Search)

two of Oak Ridge National Laboratory's (ORNL's) high-performance computing projects-the Oak Ridge Leadership Computing Facility (OLCF) and the National Climate-Computing Research...

177

Large Scale Computing and Storage Requirements for Fusion Energy Sciences Research  

E-Print Network (OSTI)

ALCF ALE AMR API ARRA ASCR CGP CICART Alfvén Eigenmode / Energetic Particle Mode Argonne Leadership Computing Facility

Gerber, Richard

2012-01-01T23:59:59.000Z

178

Darshan on BG/P Systems | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

on the Guide to darshan-parser output page. Disabling Darshan Darshan can be disabled entirely by adding +mpiwrappers to your .softenvrc file before the @default....
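The record above names the documented opt-out: listing +mpiwrappers ahead of @default in .softenvrc. As a sketch of what that file might look like (the key names come from the snippet itself; this is illustrative SoftEnv usage, not a verified ALCF configuration):

```
# ~/.softenvrc -- SoftEnv configuration (sketch)
# Placing +mpiwrappers before @default selects the plain MPI compiler
# wrappers, so the Darshan-instrumented wrappers that @default would
# otherwise supply are never picked up.
+mpiwrappers
@default
```

Per the snippet, the ordering matters: the +mpiwrappers entry must appear before the @default macro for Darshan to be disabled.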

179

Mira/Cetus/Vesta | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

bandwidth. Users will also have access to the HPSS data archive and Tukey, the new analysis and visualization cluster. All of these resources are available through high...

180

Oak Ridge Leadership Computing Facility Annual Report 2010-2011

E-Print Network (OSTI)

, a process that poses a significant obstacle to economically viable bioethanol production. Simulation

181

BG/P Driver Information | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

to automatically recover from this condition but deadlock could occur. Try enlarging your remote get fifos." Fixes for the GNU toolchain and Python...

182

BG/P File Systems | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

Blue Gene/Q Versus Blue Gene/P Mira/Cetus/Vesta Intrepid/Challenger/Surveyor Decommissioning of BG/P Systems and Resources Introducing Challenger Quick Reference Guide System...

183

Data Storage & File Systems | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

Systems: An overview of the BG/Q file systems available at ALCF. Disk Quota Disk Quota: Information on disk quotas for Mira and Vesta. Using HPSS Using HPSS: HPSS is a data...

184

LatticeQCD - Early Science | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

summary of Lattice QCD averages The current summary of Lattice QCD averages. Laiho, Lunghi, & Van de Water, Phys.Rev.D81:034503,2010. LatticeQCD - Early Science PI Name: Paul...

185

Geoffrey P. Ely Leadership Computing Facility Argonne National...  

NLE Websites -- All DOE Office Websites (Extended Search)

[Figure: Decomposition; Weak Scaling Benchmark - Runtime/step (s) vs. Cores] TACC Ranger (8M elements per core) ALCF Intrepid (1M elements per...

186

Getting Started at ALCF | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

ALCC Program Director's Discretionary Program Need Help? support@alcf.anl.gov 630-252-3111 866-508-9181 Getting Started at ALCF If you are interested in using ALCF resources,...

187

gdb on BG/P Systems | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

After booting & initial startup you will get a prompt from the gdb server. You need to query it for the IP and port numbers that correspond to each rank you want to attach the...

188

Facilities | U.S. DOE Office of Science (SC)  

Office of Science (SC) Website

Facilities Advanced Scientific Computing Research (ASCR) ASCR Home About Research Facilities Accessing ASCR Supercomputers Oak Ridge Leadership Computing Facility (OLCF) Argonne Leadership Computing Facility (ALCF) National Energy Research Scientific Computing Center (NERSC) Energy Sciences Network (ESnet) Research & Evaluation Prototypes (REP) Innovative & Novel Computational Impact on Theory and Experiment (INCITE) ASCR Leadership Computing Challenge (ALCC) Science Highlights Benefits of ASCR Funding Opportunities Advanced Scientific Computing Advisory Committee (ASCAC) News & Resources Contact Information Advanced Scientific Computing Research U.S. Department of Energy SC-21/Germantown Building 1000 Independence Ave., SW Washington, DC 20585 P: (301) 903-7486 F: (301)

189

Software and Libraries for BG/P | Argonne Leadership Computing...  

NLE Websites -- All DOE Office Websites (Extended Search)

Allinea DDT +ddt L Multithreaded, multiprocess source code debugger for high performance computing. TotalView +totalview L Multithreaded, multiprocess source code debugger for...

190

HPCT Xprofiler on BG/P Systems | Argonne Leadership Computing...  

NLE Websites -- All DOE Office Websites (Extended Search)

HPCT Xprofiler on BG/P Systems References IBM System Blue Gene Solution: High Performance Computing Toolkit for Blue Gene/P - IBM Redbook describing HPCT and other performance...

191

Blue Gene/Q: Sequoia and Mira | Argonne Leadership Computing...  

NLE Websites -- All DOE Office Websites (Extended Search)

Publication Date: April, 2013 Name of Publication Source: Contemporary High Performance Computing From Petascale toward Exascale Type of Publication: book chapter URL of the...

192

NERSC 2011: High Performance Computing Facility Operational Assessment for the National Energy Research Scientific Computing Center  

E-Print Network (OSTI)

NERSC 2011 High Performance Computing Facility Operational ... by providing high-performance computing, information, data, ... deep knowledge of high performance computing to overcome

Antypas, Katie

2013-01-01T23:59:59.000Z

193

DOE Office of Science Computing Facility Operational Assessment...  

NLE Websites -- All DOE Office Websites (Extended Search)

Assessment (OA) Review of the efficiencies in the steady-state operations of each of the DOE Office of Science High Performance Computing (HPC) Facilities. * OMB requirement for...

194

Innovative & Novel Computational Impact on Theory & Experiment (INCITE) |

Office of Science (SC) Website

Innovative & Novel Computational Impact on Theory and Experiment (INCITE) Advanced Scientific Computing Research (ASCR) ASCR Home About Research Facilities Accessing ASCR Supercomputers Oak Ridge Leadership Computing Facility (OLCF) Argonne Leadership Computing Facility (ALCF) National Energy Research Scientific Computing Center (NERSC) Energy Sciences Network (ESnet) Research & Evaluation Prototypes (REP) Innovative & Novel Computational Impact on Theory and Experiment (INCITE) ASCR Leadership Computing Challenge (ALCC) Science Highlights Benefits of ASCR Funding Opportunities Advanced Scientific Computing Advisory Committee (ASCAC) News & Resources Contact Information Advanced Scientific Computing Research U.S. Department of Energy SC-21/Germantown Building

195

National Energy Research Scientific Computing Center (NERSC) | U.S. DOE  

Office of Science (SC) Website

National Energy Research Scientific Computing Center (NERSC) Advanced Scientific Computing Research (ASCR) ASCR Home About Research Facilities Accessing ASCR Supercomputers Oak Ridge Leadership Computing Facility (OLCF) Argonne Leadership Computing Facility (ALCF) National Energy Research Scientific Computing Center (NERSC) Energy Sciences Network (ESnet) Research & Evaluation Prototypes (REP) Innovative & Novel Computational Impact on Theory and Experiment (INCITE) ASCR Leadership Computing Challenge (ALCC) Science Highlights Benefits of ASCR Funding Opportunities Advanced Scientific Computing Advisory Committee (ASCAC) News & Resources Contact Information Advanced Scientific Computing Research U.S. Department of Energy SC-21/Germantown Building 1000 Independence Ave., SW Washington, DC 20585 P: (301) 903-7486 F: (301)

196

Brookhaven Reactor Experiment Control Facility, a distributed function computer network  

SciTech Connect

A computer network for real-time data acquisition, monitoring and control of a series of experiments at the Brookhaven High Flux Beam Reactor has been developed and has been set into routine operation. This reactor experiment control facility presently services nine neutron spectrometers and one x-ray diffractometer. Several additional experiment connections are in progress. The architecture of the facility is based on a distributed function network concept. A statement of implementation and results is presented. (auth)

Dimmler, D.G.; Greenlaw, N.; Kelley, M.A.; Potter, D.W.; Rankowitz, S.; Stubblefield, F.W.

1975-11-01T23:59:59.000Z

197

HIGH PERFORMANCE INTEGRATION OF DATA PARALLEL FILE SYSTEMS AND COMPUTING  

E-Print Network (OSTI)

Facility Application Requirements and Strategy (OLCF) ... Appendix A: OLCF Overview ... ORNL Leadership Computing Facility Application Requirements and Strategy, OLCF Tables

198

Modeling a Leadership-scale Storage System , Christopher Carothers1  

E-Print Network (OSTI)

an end-to-end storage system model of the Argonne Leadership Computing Facility's (ALCF) computing ... collected from the ALCF's storage system for a variety of synthetic I/O workloads and scales ... we present ... in the ALCF. As an early study of the CODES project, our simulators can quickly and accurately simulate

199

Modeling Resource-Coupled Computations Mark Hereld,* Joseph Insley, Eric Olson,* Michael E. Papka,*  

E-Print Network (OSTI)

CONTACT Argonne Leadership Computing Facility | www.alcf.anl.gov | info@alcf.anl.gov | (877) 737 The Argonne Leadership Computing Facility (ALCF) enables breakthrough science--science that will change our world ... and more. Operated for the U.S. Department of Energy's (DOE) Office of Science, the ALCF gives scientists

200

User Facilities | ORNL  

NLE Websites -- All DOE Office Websites (Extended Search)

USER PORTAL BTRIC - Building Technologies Research Integration Center CNMS - Center for Nanophase Materials Sciences CSMB - Center for Structural Molecular Biology CFTF - Carbon Fiber Technology Facility HFIR - High Flux Isotope Reactor MDF - Manufacturing Demonstration Facility NTRC - National Transportation Research Center OLCF - Oak Ridge Leadership Computing Facility SNS - Spallation Neutron Source Keeping it fresh at the Spallation Neutron Source Nanophase material sciences' nanotech toolbox ORNL User Facilities ORNL is home to a number of highly sophisticated experimental user facilities that provide unmatched capabilities to the broader scientific community, including a growing user community from universities, industry, and other laboratories and research institutions, as well as to ORNL

201

Advanced Scientific Computing Research User Facilities | U.S. DOE Office of  

Office of Science (SC) Website

ASCR User Facilities. Contact Information: Office of Science, U.S. Department of Energy, 1000 Independence Ave., SW, Washington, DC 20585; P: (202) 586-5430. The Advanced Scientific Computing Research program supports the operation of the following national scientific user facilities: Energy Sciences Network (ESnet): The Energy Sciences Network, or ESnet, is the Department of Energy's high-speed network that provides the high-bandwidth, reliable connections that link scientists at national laboratories, universities and

202

DOE Designated User Facilities Multiple Laboratories * ARM Climate Research Facility  

Energy.gov (U.S. Department of Energy (DOE)) Indexed Site

DOE Designated User Facilities. Multiple Laboratories: ARM Climate Research Facility. Argonne National Laboratory: Advanced Photon Source (APS); Electron Microscopy Center for Materials Research; Argonne Tandem Linac Accelerator System (ATLAS); Center for Nanoscale Materials (CNM); Argonne Leadership Computing Facility (ALCF). Brookhaven National Laboratory: National Synchrotron Light Source (NSLS); Accelerator Test Facility (ATF); Relativistic Heavy Ion Collider (RHIC); Center for Functional Nanomaterials (CFN); National Synchrotron Light Source II (NSLS-II) (under construction). Fermi National Accelerator Laboratory: Fermilab Accelerator Complex. Idaho National Laboratory: Advanced Test Reactor**; Wireless National User Facility (WNUF)

203

Description of Facilities and Resources Oak Ridge National Laboratory  

E-Print Network (OSTI)

1 Description of Facilities and Resources. Oak Ridge National Laboratory and the UT-ORNL Joint ... National Laboratory (ORNL) hosts three petascale computing facilities: the Oak Ridge Leadership Computing ... Center (NCRC), formed as a collaboration between ORNL and the National Oceanographic and Atmospheric

204

0a77ceec.signing_policy | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

0a77ceec.signing_policy # ca-signing-policy.conf, see ca-signing-policy.doc for more information # # This is the configuration file describing the policy for what CAs are # allowed to sign whose certificates. # # This file is parsed from start to finish with a given CA and subject # name. # Subject names may include the following wildcard characters: # * Matches any number of characters. # ? Matches any single character. # # CA names must be specified (no wildcards). Names containing whitespaces # must be included in single quotes, e.g. 'Certification Authority'. # Names must not contain new line symbols. # The value of the condition attribute is represented as a set of regular # expressions. Each regular expression must be included in double quotes.
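The policy rules quoted above follow the Globus grid-security signing_policy format. As a purely illustrative sketch (the CA distinguished name and subject namespace below are made-up placeholders, not ALCF values), a CA entry typically consists of three lines:

```
# Hypothetical CA entry: which CA this stanza applies to (no wildcards allowed)...
access_id_CA   X509    '/C=US/O=Example Org/CN=Example Certification Authority'
# ...what that CA is permitted to do...
pos_rights     globus  CA:sign
# ...and which subject names it may sign (set of expressions, each in double quotes).
cond_subjects  globus  '"/C=US/O=Example Org/*"'
```

Each CA listed in the grid-security directory gets its own such stanza; subjects that match no stanza are rejected.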

205

UPC Events All on BG/P Systems | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

Events All on BG/P Systems. BG/P Hardware Events - Complete List:

Event Number  Mode  Counter Number  Name                        Hardware Unit  Description
0             0     0               BGP_PU0_JPIPE_INSTRUCTIONS  P0 CPU         J-pipe instructions
1             0     1               BGP_PU0_JPIPE_ADD_SUB       P0 CPU         Add/Sub in J-pipe
2             0     2               BGP_PU0_JPIPE_LOGICAL_OPS   P0 CPU         Logical operations in J-pipe
3             0     3               BGP_PU0_JPIPE_SHROTMK       P0 CPU         J-pipe shift/rotate/mask instructions
4             0     4               BGP_PU0_IPIPE_INSTRUCTIONS  P0 CPU         I-pipe instructions
5             0     5               BGP_PU0_IPIPE_MULT_DIV      P0 CPU         Mult/Div in I-pipe
6             0     6               BGP_PU0_IPIPE_ADD_SUB       P0 CPU         Add/Sub in I-pipe
7             0     7               BGP_PU0_IPIPE_LOGICAL_OPS   P0 CPU         Logical operations in I-pipe
8             0     8               BGP_PU0_IPIPE_SHROTMK       P0 CPU         I-pipe shift/rotate/mask instructions
9             0     9               BGP_PU0_IPIPE_BRANCHES      P0 CPU         Branches

206

Machine Partitions on BG/P Systems | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

Machine Partitions on BG/P Systems. Partitions of the machine: In the prod-devel queue (on Challenger), partition sizes of 16, 32, 64, 128, 256, and 512 nodes are available. Of these, only the 512-node partition has a torus network; the others have mesh networks. In the prod queue (on Intrepid), partitions of 512, 1K, 2K, 4K, 8K, 16K, 24K, 32K, and 40K nodes are available. You can see the partitions in the output of partlist, along with whether they are free, busy, or blocked by other partitions:

intrepid$ partlist
Name               Queue  State                        Backfill
=============================================================================
ANL-R00-R47-40960  off    blocked (ANL-R00-R17-16384)  -
ANL-R00-R37-32768  prod   blocked (ANL-R00-R17-16384)  -
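The partlist output quoted above has the columns Name, Queue, State, and Backfill, where the state field may itself contain spaces (e.g. "blocked (...)"). As an illustrative sketch (not an ALCF-provided tool), a few lines of Python can turn such text into tuples, keeping everything after the queue column together:

```python
def parse_partlist(text):
    """Parse partlist-style output into (name, queue, state_and_backfill) tuples.

    Assumes the layout shown above: a header line starting with "Name",
    a "=" separator line, then one partition per line.
    """
    rows = []
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith(("Name", "=")):
            continue  # skip the header and separator lines
        # Split only twice so the state/backfill text stays intact.
        name, queue, rest = line.split(None, 2)
        rows.append((name, queue, rest))
    return rows

sample = """\
Name               Queue  State                        Backfill
=============================================================================
ANL-R00-R47-40960  off    blocked (ANL-R00-R17-16384)  -
ANL-R00-R37-32768  prod   blocked (ANL-R00-R17-16384)  -
"""

for row in parse_partlist(sample):
    print(row)
```

This makes it easy, for example, to filter for partitions whose queue is "prod" before submitting a job.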

207

Allinea DDT on BG/P Systems | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

Intrepid / Challenger / Surveyor: Decommissioning of BGP Systems and Resources; Introducing Challenger; Quick Reference Guide; System Overview; Data Transfer; Data Storage & File Systems...

208

Running Jobs on BG/P Systems | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

by the OS, in which case the program continues to execute, but likely with a large performance penalty, or it may be left unhandled, in which case execution of the program is...

209

Oak Ridge Leadership Computing Facility | Annual Report 2009 Petascale Science Delivered  

E-Print Network (OSTI)

The list of accomplishments also included the largest known simulations of a nuclear reactor. This work ... reactor, paving the way for the design of next-generation nuclear power devices. Other projects included ... -AC0500OR22725. About the Cover: Neutron power distribution in a 17x17 assembly, PWR900 reactor core. Each

210

Department of Energy National Laboratories and Plants: Leadership in Cloud Computing (Book)  

SciTech Connect

A status report on the cloud computing strategy for each Department of Energy laboratory and plant, showing the movement toward a cloud-first IT strategy.

Not Available

2012-12-01T23:59:59.000Z

211

Lessons Learned from the 200 West Pump and Treatment Facility Construction Project at the US DOE Hanford Site - A Leadership for Energy and Environmental Design (LEED) Gold-Certified Facility  

SciTech Connect

CH2M Hill Plateau Remediation Company (CHPRC) designed, constructed, commissioned, and began operation of the largest groundwater pump and treatment facility in the U.S. Department of Energy's (DOE) nationwide complex. This one-of-a-kind groundwater pump and treatment facility, located at the Hanford Nuclear Reservation Site (Hanford Site) in Washington State, was built to an accelerated schedule with American Recovery and Reinvestment Act (ARRA) funds. There were many contractual, technical, configuration management, quality, safety, and Leadership in Energy and Environmental Design (LEED) challenges associated with the design, procurement, construction, and commissioning of this $95 million, 52,000-ft² groundwater pump and treatment facility to meet DOE's mission objective of treating contaminated groundwater at the Hanford Site with a new facility by June 28, 2012. The project team's successful integration of the project's core values and green energy technology throughout design, procurement, construction, and start-up of this complex, first-of-its-kind Bio Process facility resulted in successful achievement of DOE's mission objective, as well as attainment of LEED GOLD certification, which makes this Bio Process facility the first non-administrative building in the DOE Office of Environmental Management complex to earn such an award.

Dorr, Kent A.; Ostrom, Michael J.; Freeman-Pollard, Jhivaun R.

2013-01-11T23:59:59.000Z

212

NREL: Energy Executive Leadership Academy - Leadership Institute  

NLE Websites -- All DOE Office Websites (Extended Search)

Participants in NREL's Executive Energy Leadership Institute learn about renewable energy and energy efficiency from the experts through this accelerated training program, typically conducted over a three-day period. Course content includes briefings by technology experts on renewable energy and energy efficiency technologies, market assessments, and analytical and financial tools, as well as associated technology tours. Tours of NREL research facilities are a key component of the Institute. All sessions originate and end at NREL's campus in Golden, Colorado. For additional details, including a customized Leadership Institute in your region, see the sample syllabus or contact Energy Execs. Qualified individuals are invited to apply for the upcoming 2014 Institute.

213

Creating science-driven computer architecture: A new path to scientific leadership  

SciTech Connect

This document proposes a multi-site strategy for creating a new class of computing capability for the U.S. by undertaking the research and development necessary to build supercomputers optimized for science in partnership with the American computer industry.

McCurdy, C. William; Stevens, Rick; Simon, Horst; Kramer, William; Bailey, David; Johnston, William; Catlett, Charlie; Lusk, Rusty; Morgan, Thomas; Meza, Juan; Banda, Michael; Leighton, James; Hules, John

2002-10-14T23:59:59.000Z

214

Quick Reference Guide for BG/P Systems | Argonne Leadership Computing  

NLE Websites -- All DOE Office Websites (Extended Search)

Quick Reference Guide for BG/P Systems. Contents: Hardware Description; Compiling/Linking; Running/Queuing; Libraries/Applications; Performance Tools; Debugging. Hardware Description: Surveyor - 13.6 TF/s, 1-rack BG/P (1024 compute nodes / 4096 CPUs). Intrepid - 557.1 TF/s, 40-rack BG/P (40960 compute nodes / 163840 CPUs). Front-end nodes (FENs), or login nodes - regular Linux-based computers for

215

Science-driven system architecture: A new process for leadership class computing  

E-Print Network (OSTI)

eds., DOE Office of Science, Washington, D.C., 2003. ... et al., Creating Science-Driven Computer Architecture: A ... Washington, D.C., 2003. A Science-Based Case for Large-

2004-01-01T23:59:59.000Z

216

Lessons Learned From The 200 West Pump And Treatment Facility Construction Project At The US DOE Hanford Site - A Leadership For Energy And Environmental Design (LEED) Gold-Certified Facility  

SciTech Connect

CH2M Hill Plateau Remediation Company (CHPRC) designed, constructed, commissioned, and began operation of the largest groundwater pump and treatment facility in the U.S. Department of Energy's (DOE) nationwide complex. This one-of-a-kind groundwater pump and treatment facility, located at the Hanford Nuclear Reservation Site (Hanford Site) in Washington State, was built in an accelerated manner with American Recovery and Reinvestment Act (ARRA) funds and has attained Leadership in Energy and Environmental Design (LEED) GOLD certification, which makes it the first non-administrative building in the DOE Office of Environmental Management complex to earn such an award. There were many contractual, technical, configuration management, quality, safety, and LEED challenges associated with the design, procurement, construction, and commissioning of this $95 million, 52,000-ft² groundwater pump and treatment facility. This paper will present the Project and LEED accomplishments, as well as Lessons Learned by CHPRC when additional ARRA funds were used to accelerate design, procurement, construction, and commissioning of the 200 West Groundwater Pump and Treatment (2W P&T) Facility to meet DOE's mission of treating contaminated groundwater at the Hanford Site with a new facility by June 28, 2012.

Dorr, Kent A. [CH2M HILL Plateau Remediation Company, Richland, WA (United States); Ostrom, Michael J. [CH2M HILL Plateau Remediation Company, Richland, WA (United States); Freeman-Pollard, Jhivaun R. [CH2M HILL Plateau Remediation Company, Richland, WA (United States)

2012-11-14T23:59:59.000Z

217

Machine Environment FAQs on BG/P Systems | Argonne Leadership Computing  

NLE Websites -- All DOE Office Websites (Extended Search)

Machine Environment FAQs on BG/P Systems. What is the /proc filesystem? The CNK OS on the compute nodes does not provide a standard /proc file system with information about the processes running on the compute node. A /jobs directory does, however, exist which provides limited information about

218

A SPECIALIZED, MULTI-USER COMPUTER FACILITY FOR THE HIGH-SPEED, INTERACTIVE PROCESSING OF EXPERIMENTAL DATA  

E-Print Network (OSTI)

of a proposed five-user facility is shown in Figure 1. The ... A SPECIALIZED, MULTI-USER COMPUTER FACILITY FOR THE HIGH- ... to support multiple users on the facility, each capable of

Maples, C.C.

2010-01-01T23:59:59.000Z

219

Debugging & Profiling on BG/P Systems | Argonne Leadership Computing  

NLE Websites -- All DOE Office Websites (Extended Search)

Debugging & Profiling on BG/P Systems. This information is for Intrepid and Challenger. Initial setups: Core File Settings - this page contains some environment variables that allow you to control core file creation and contents. Using VNC with a Debugger - when displaying an X11 client (e.g., TotalView) remotely over the network, interactive response is typically slow. Using the VNC server can often help you improve the situation.

220

Performance Tools and APIs on BG/P Systems | Argonne Leadership Computing  

NLE Websites -- All DOE Office Websites (Extended Search)

Performance Tools and APIs on BG/P Systems. MPI and OpenMP Options: Tuning MPI on BGP. Performance Tools: Tuning and Analysis Utilities (TAU) - instruments applications and gathers information on timings, MPI activity, and hardware performance counter events. Rice HPCToolkit - performs sample-based profiling of applications and



221

Common Debugging Issues on BG/P Systems | Argonne Leadership Computing  

NLE Websites -- All DOE Office Websites (Extended Search)

Common Debugging Issues on BG/P Systems. What does the "Cannot allocate memory" error mean when starting up a debug job? {0}.0: ciod: Error forking external debugger process: Cannot allocate memory. The IO node is what loads the executable on the compute nodes, but it is running out of memory during job startup. An alternate IO node kernel profile may solve the problem. This is set by running a cobalt job with --kernel . Please contact support@alcf.anl.gov for assistance. Signals: If your stderr file indicates your run terminated due to a signal, the signal names and numbers are listed in the manpage for signal. On the login node, type "man 7 signal" to see this information.
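As the snippet notes, "man 7 signal" maps signal numbers to names. As an aside (not part of the original page), the same lookup can be done with Python's standard signal module, which is handy when a job's stderr reports only a raw signal number:

```python
import signal

def signal_name(signum):
    """Return the symbolic name (e.g. SIGSEGV) for a raw signal number."""
    try:
        return signal.Signals(signum).name
    except ValueError:
        return "unknown signal %d" % signum

# A job killed by signal 11 died on a segmentation fault:
print(signal_name(11))  # SIGSEGV on Linux and most Unix systems
```

Signal numbering is platform-dependent for some signals, which is exactly why resolving the name on the machine where the job ran (as "man 7 signal" also does) is preferable to memorizing numbers.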

222

Example Program and Makefile for BG/P | Argonne Leadership Computing  

NLE Websites -- All DOE Office Websites (Extended Search)

Example Program and Makefile for BG/P. Program Example: Here's an example of compiling a simple MPI program on ALCF Blue Gene/P systems:

> cat pi.c
#include "mpi.h"
#include <stdio.h>
#include <math.h>
int main(int argc, char** argv)

223

FAQs Queueing and Running on BG/P Systems | Argonne Leadership Computing  

NLE Websites -- All DOE Office Websites (Extended Search)

FAQs Queueing and Running on BG/P Systems. Contents: Is there a limit on stack size? What are typical boot times for a job? My job had empty stdout, and the stderr looks like it died immediately after it started. What happened? Where can I find the details of a job submission? Is there a limit on stack size? There is no strict limit on the stack size. The stack and heap grow towards each other until a collision occurs. If your job terminates with an error

224

FAQs Data Management on BG/P Systems | Argonne Leadership Computing  

NLE Websites -- All DOE Office Websites (Extended Search)

FAQs Data Management on BG/P Systems. Contents: Why is the ALCF implementing /home quotas on Intrepid? When will the quotas take effect? What is the quota amount? How can I find out how much data I have? Where/how can I find out how much data my project members each have? What will happen when I reach the quota limit? How will I know I've hit it? Will ALCF implement quotas on /intrepid-fs0?
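One of the FAQ questions above is how to find out how much data you have. Independent of any site-specific quota tool (this sketch is illustrative, not the ALCF's method), a portable fallback is to walk a directory tree and sum regular-file sizes, similar to du -sb:

```python
import os

def usage_bytes(path):
    """Total size in bytes of all regular files under `path`."""
    total = 0
    for root, _dirs, files in os.walk(path):
        for name in files:
            try:
                total += os.path.getsize(os.path.join(root, name))
            except OSError:
                pass  # file vanished or is unreadable; skip it
    return total
```

Note this counts apparent file sizes, not allocated blocks, so it can differ from what a quota system reports for sparse files or small-file overhead.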

225

Multicore and accelerator development for a leadership-class stellar astrophysics code  

Science Conference Proceedings (OSTI)

We describe recent development work on the core-collapse supernova code CHIMERA. CHIMERA has consumed more than 100 million cpu-hours on Oak Ridge Leadership Computing Facility (OLCF) platforms in the past 3 years, ranking it among the most important ... Keywords: GPU, OpenACC, openMP, stellar astrophysics, supernovae

O. E. Bronson Messer; J. Austin Harris; Suzanne Parete-Koon; Merek A. Chertkow

2012-06-01T23:59:59.000Z

226

Toward simulation-time data analysis and I/O acceleration on leadership-class systems  

E-Print Network (OSTI)

infrastructure on the systems at the Argonne Leadership Computing Facility (ALCF). We use the 40-rack Intrepid ... applications in the ALCF environment, we set up a temporary PVFS2 storage cluster on Eureka and mounted ... by a U.S. Department of Energy INCITE award and an ALCF Director's Discretionary Allocation.

227

OCIO Leadership | Department of Energy  

Energy.gov (U.S. Department of Energy (DOE)) Indexed Site

OCIO Leadership. Robert F. Brese, Chief Information Officer: Mr. Brese is the Chief Information Officer (CIO) for the Department of Energy (DOE). He provides leadership, establishes policy, and maintains oversight of DOE's annual $2 billion investment in information technology (IT), at more than 25 National Laboratories and Production Facilities, to enable urgent missions that span from open science to nuclear security. Mr. Brese is also a leader in the U.S. Government's cybersecurity community and a key contributor to the Administration's efforts in legislation, policy and technology research, development, and deployment. Donald E. Adcock, Deputy Chief Information Officer. Peter Tseronis, CTO.

228

Leadership and Leading Indicators Presentation  

NLE Websites -- All DOE Office Websites (Extended Search)

Leadership and Leading Indicators. Peter S. Winokur, Ph.D., Member, Defense Nuclear Facilities Safety Board. Thanks to Matt Moury and Doug Minnema. August 28, 2008. Objectives: A few thoughts about leadership; Actions taken by leaders; Role of leading indicators; Consider the future. Safety Culture: Safety culture is an organization's values and behaviors - modeled by its leaders and internalized by its members - that serve to make nuclear safety an overriding priority.* Dating back to SEN-35-91, it's DOE Policy; It's perishable; EFCOG/DOE ISMS Safety Culture Task Team. (*INPO, Principles for a Strong Nuclear Safety Culture, November 2004.) Leadership & Mission: Top 10 Ways To Know You Have A Safety Culture: #1 is Leadership - the talk and the walk

229

VACET: Twists and Turns State-of-the-art computational science simulations generate large-scale vector  

E-Print Network (OSTI)

Weak scaling behavior of the CASTRO code on the jaguarpf machine at the OLCF. For the two ... based parallelism, on the jaguarpf machine at the Oak Ridge Leadership Computing Facility (OLCF). A weak scaling

230

NEPA CX Determination SS-SC-12-03 for the Stanford Research Computer Facility (SRCF)  

Energy.gov (U.S. Department of Energy (DOE)) Indexed Site

NEPA CX Determination SS-SC-12-03 for the Stanford Research Computer Facility (SRCF). National Environmental Policy Act (NEPA) Categorical Exclusion (CX) Determination. A. SSO NEPA Control #: SS-SC-12-03 AN12038. B. Brief Description of Proposed Action: The project scope includes the construction of a new computer facility (21,500 square feet) capable of providing 3 MW of data center potential. The new two-story facility will provide infrastructure for a multitude of server racks. There are three fenced service yards outside the building: one for chillers, one for new electrical substation equipment, and one for emergency generators. The ground floor will be utilized for electrical and receiving area; the second floor will have a server room, mechanical room, conference

231

NNS computing facility manual P-17 Neutron and Nuclear Science  

SciTech Connect

This document describes basic policies and provides information and examples on using the computing resources provided by P-17, the Neutron and Nuclear Science (NNS) group. Information on user accounts, getting help, network access, electronic mail, disk drives, tape drives, printers, batch processing software, XSYS hints, PC networking hints, and Mac networking hints is given.

Hoeberling, M.; Nelson, R.O.

1993-11-01T23:59:59.000Z

232

Designing computational steering facilities for distributed agent based simulations  

Science Conference Proceedings (OSTI)

Agent-Based Models (ABMs) are a class of models which, by simulating the behavior of multiple agents (i.e., ndependent actions, interactions and adaptation), aim to emulate and/or predict complex phenomena. One of the general features of ABM simulations ... Keywords: agent-based simulation, computational steering, distributed systems, visualization of distributed models

Gennaro Cordasco, Rosario De Chiara, Francesco Raia, Vittorio Scarano, Carmine Spagnuolo, Luca Vicidomini

2013-05-01T23:59:59.000Z

233

Application of the Computer Program SASSI for Seismic SSI Analysis of WTP Facilities

Energy.gov (U.S. Department of Energy (DOE)) Indexed Site

Application of the Computer Program SASSI for Seismic SSI Analysis of WTP Facilities. Farhang Ostadan (BNI) & Raman Venkata (DOE-WTP-WED); presented by Lisa Anderson (BNI). US DOE NPH Workshop, October 25, 2011. Background: The SASSI computer code was developed in the early 1980's to solve Soil-Structure-Interaction (SSI) problems. The original version of SASSI was based on the direct solution method for embedded structures, which requires that each soil node in the excavated soil volume be an interaction node. The subtraction solution method was introduced in 1998

234

Facilities  

NLE Websites -- All DOE Office Websites (Extended Search)


235

A SPECIALIZED, MULTI-USER COMPUTER FACILITY FOR THE HIGH-SPEED, INTERACTIVE PROCESSING OF EXPERIMENTAL DATA  

E-Print Network (OSTI)

LBL to develop a specialized computer facility specifically to process ... a large computer (e.g., CDC7600) may require ... of modern, mid-range computers. Unfortunately the data

Maples, C.C.

2010-01-01T23:59:59.000Z

236

NERSC 2011: High Performance Computing Facility Operational Assessment for the National Energy Research Scientific Computing Center  

E-Print Network (OSTI)

utilizes a separate link to ESnet to provide data-rich large ... continues to partner with ESnet in providing quality network ... significant collaboration with ESnet and other facility

Antypas, Katie

2013-01-01T23:59:59.000Z

237

Computer software configuration management plan for 200 East/West Liquid Effluent Facilities  

Science Conference Proceedings (OSTI)

This computer software configuration management plan covers the control of the software for the monitor and control system that operates the Effluent Treatment Facility and its associated truck load-in station, and some key aspects of the Liquid Effluent Retention Facility that stores condensate to be processed. Also controlled are the Treated Effluent Disposal System's pumping stations; the system also monitors waste generator flows, as well as the Phase Two Effluent Collection System.

Graf, F.A. Jr.

1995-02-27T23:59:59.000Z

238

NREL: Energy Executive Leadership Academy - Leadership Program  

NLE Websites -- All DOE Office Websites (Extended Search)

NREL's Executive Energy Leadership Program is an in-depth training program conducted over five three-day sessions from May through September. The classroom...

239

User Facilities | U.S. DOE Office of Science (SC)  

Office of Science (SC) Website

User Facilities. Contact Information: Office of Science, U.S. Department of Energy, 1000 Independence Ave., SW, Washington, DC 20585; P: (202) 586-5430. Atom probe sample chamber at the Environmental Molecular Sciences Laboratory (Pacific Northwest National Laboratory). The Hard X-ray Nanoprobe at the Advanced Photon Source (Argonne National Laboratory). The Titan Cray XK7 supercomputer at the Oak Ridge Leadership Computing Facility (Oak Ridge National Laboratory).

240

U.S. CMS - U.S. CMS @ Work - Data and Computing - Facility Operations -  

NLE Websites -- All DOE Office Websites (Extended Search)

Data and Computing Facility Operations: Data Transfer to and from UAF. At Fermilab, access to the User Analysis Farm (UAF) goes through cmsuaf.fnal.gov. This can be accessed using Secure Copy (scp) or sftp. The following storage areas on NFS are available for users: /uscms/home/username and /uscms_data/d1/username. To transfer a file to UAF: Usage: scp file_name username@cmsuaf.fnal.gov:/uscms/home/username e.g. $ scp zprime705.jdf wenzel@cmsuaf.fnal.gov:/uscms/home/wenzel zprime705.jdf 100% |*****************************| 286 00:00



241

U.S. CMS - U.S. CMS @ Work - Data and Computing - Facility Operations - How  

NLE Websites -- All DOE Office Websites (Extended Search)

Data and Computing Facility Operations: How to use SRM on the UAF. Contents: Introduction; Prerequisites; Prepare your UAF account to use srmcp; Transferring a file; Monitoring SRM; Getting Help. Introduction: SRM (Storage Resource Management) is a grid service available on the UAF. The srmcp command allows for file transfers between sites and mass storage systems. Here we will show examples to transfer files from CASTOR at CERN to Fermilab. Since it is a grid service there are two prerequisites. Prerequisites: The whole procedure will probably take a few days but you might want to

242

contacts-by-user-facility | ornl.gov  

NLE Websites -- All DOE Office Websites (Extended Search)


243

Money for Research, Not for Energy Bills: Finding Energy and Cost Savings in High Performance Computer Facility Designs  

SciTech Connect

High-performance computing facilities in the United States consume an enormous amount of electricity, cutting into research budgets and challenging public- and private-sector efforts to reduce energy consumption and meet environmental goals. However, these facilities can greatly reduce their energy demand through energy-efficient design of the facility itself. Using a case study of a facility under design, this article discusses strategies and technologies that can be used to help achieve energy reductions.

Drewmark Communications; Sartor, Dale; Wilson, Mark

2010-07-01T23:59:59.000Z

244

National facility for advanced computational science: A sustainable path to scientific discovery  

E-Print Network (OSTI)

User Facilities ... Managing National User Facilities ... Berkeley Lab has been a ... four DOE national user facilities. The focus of Berkeley Lab ...

2004-01-01T23:59:59.000Z

245

Microsoft Word - Designated_User_Facilities_April_13_2010  

Energy.gov (U.S. Department of Energy (DOE)) Indexed Site

4/13/2010 DOE Designated Scientific User Facilities Laboratory/Facility Argonne National Laboratory Advanced Photon Source (APS) Intense Pulsed Neutron Source (IPNS) Electron Microscopy Center for Materials Research Argonne Wakefield Accelerator (AWA) Argonne Tandem Linac Accelerator System (ATLAS) Center for Nanoscale Materials Leadership Computing Facility* Brookhaven National Laboratory Scanning Transmission Electron Microscope Facility National Synchrotron Light Source (NSLS) Accelerator Test Facility (ATF) Relativistic Heavy Ion Collider (RHIC) Center for Functional Nanomaterials Fermi National Accelerator Laboratory 1,000 GeV Superconducting Accelerator System

246

Automation of a cryogenic facility by commercial process-control computer  

SciTech Connect

To ensure that Brookhaven's superconducting magnets are reliable and their field quality meets accelerator requirements, each magnet is pre-tested at operating conditions after construction. MAGCOOL, the production magnet test facility, was designed to perform these tests, having the capacity to test ten magnets per five-day week. This paper describes the control aspects of MAGCOOL and the advantages afforded the designers by the implementation of a commercial process control computer system.

Sondericker, J.H.; Campbell, D.; Zantopp, D.

1983-01-01T23:59:59.000Z

247

Fermilab Central Computing Facility: Energy conservation report and mechanical systems design optimization and cost analysis study  

SciTech Connect

This report is developed as part of the Fermilab Central Computing Facility Project Title II Design Documentation Update under the provisions of DOE Document 6430.1, Chapter XIII-21, Section 14, paragraph a. As such, it concentrates primarily on HVAC mechanical systems design optimization and cost analysis and should be considered a supplement to the Title I Design Report dated March 1986, wherein energy-related issues are discussed pertaining to building envelope and orientation as well as electrical systems design.

Krstulovich, S.F.

1986-11-12T23:59:59.000Z

248

National Ignition Facility computational fluid dynamics modeling and light fixture case studies  

SciTech Connect

This report serves as a guide to the use of computational fluid dynamics (CFD) as a design tool for the National Ignition Facility (NIF) program Title I and Title II design phases at Lawrence Livermore National Laboratory. In particular, this report provides general guidelines on the technical approach to performing and interpreting any and all CFD calculations. In addition, a complete CFD analysis is presented to illustrate these guidelines on a NIF-related thermal problem.

Martin, R.; Bernardin, J.; Parietti, L.; Dennison, B.

1998-02-01T23:59:59.000Z

249

Leadership | Argonne National Laboratory  

NLE Websites -- All DOE Office Websites (Extended Search)

Message from the Director Board of Governors Organization Chart Argonne Distinguished Fellows Emeritus Scientists & Engineers History Discoveries Prime Contract Contact Us Leadership Argonne integrates world-class science, engineering, and user facilities to deliver innovative research and technologies. We create new knowledge that addresses the scientific and societal needs of our nation. Eric D. Isaacs, Director, Argonne National Laboratory. Eric D. Isaacs, a prominent University of Chicago physicist, is President of UChicago Argonne, LLC, and Director of Argonne National Laboratory. Mark Peters, Deputy Laboratory Director for Programs

250

Leadership Constraints: Leading Global Virtual Teams Through Environmental Complexity  

Science Conference Proceedings (OSTI)

This research focused on the question: What leadership constraints contribute to the complexity of the working environment faced by global virtual team leaders and how do those leadership constraints impact the behavior of leaders when they are trying ... Keywords: Computer Mediated Communication, Constraints, Global Virtual Team, Leadership, Telework

Leslie C. Tworoger, Cynthia P. Ruppel, Baiyun Gong, Randolph A. Pohlman

2013-04-01T23:59:59.000Z

251

Matthew R. Norman Scientific Computing Group  

E-Print Network (OSTI)

...-present, Porting the Community Atmosphere Model - Spectral Element (CAM-SE) to ORNL's Titan Supercomputer ... National Laboratory, PO BOX 2008 MS6016, Oak Ridge, TN 37831, USA, normanmr@ornl.gov, (865) 576-1757 ... Education ... -scale atmospheric simulation code, to run on Oak Ridge Leadership Computing Facility's (OLCF's) Titan super...

252

Facilities  

NLE Websites -- All DOE Office Websites (Extended Search)

Facilities LANL's mission is to develop and apply science and technology to ensure the safety, security, and reliability of the U.S. nuclear deterrent; reduce global threats; and solve other emerging national security and energy challenges. Contact Operator Los Alamos National Laboratory (505) 667-5061 Some LANL facilities are available to researchers at other laboratories, universities, and industry. Unique facilities foster experimental science, support LANL's security mission DARHT accelerator DARHT's electron accelerators use large, circular aluminum structures to create magnetic fields that focus and steer a stream of electrons down the length of the accelerator. Tremendous electrical energy is added along the way. When the stream of high-speed electrons exits the accelerator it is

253

HELIOS: a computer program for modeling the solar thermal test facility, a users guide  

DOE Green Energy (OSTI)

HELIOS is a flexible computer code for evaluations of proposed designs for central tower solar energy collector systems, for safety calculations on the threat to personnel and to the facility itself, for determination of how various input parameters alter the power collected, and for design trade-offs. Input variables include atmospheric transmission effects, reflector shape and surface errors, suntracking errors, focusing and alignment strategies, receiver design, placement positions of the tower and mirrors, time-of-day, and day-of-year for the calculation. Plotting and editing computer codes are available. Complete input instructions, code-structure details, and output explanation are given. The code is in use on CDC 6600 and CDC 7600 computers.
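The power a heliostat delivers to the receiver depends in part on simple geometry of the kind HELIOS evaluates for each mirror placement and time of day. As a hedged illustration only (not HELIOS's actual algorithm, which also models atmospheric transmission, surface errors, and tracking errors), the cosine efficiency of a single flat heliostat can be computed from the sun direction and the mirror-to-tower direction; the function name is invented for this sketch:

```python
import math

def cosine_efficiency(sun_dir, mirror_to_tower):
    """Cosine efficiency of a flat heliostat: the mirror normal bisects
    the angle between the incoming sun ray and the reflected ray to the
    receiver, so the projected mirror area falls off as cos(theta),
    where 2*theta is the angle between the two rays."""
    def norm(v):
        m = math.sqrt(sum(c * c for c in v))
        return tuple(c / m for c in v)
    s = norm(sun_dir)
    t = norm(mirror_to_tower)
    cos_2theta = sum(a * b for a, b in zip(s, t))  # cosine of the full angle
    # Half-angle identity: cos(theta) = sqrt((1 + cos(2*theta)) / 2)
    return math.sqrt((1.0 + cos_2theta) / 2.0)

# Sun aligned with the tower direction -> theta = 0, efficiency 1.0
print(cosine_efficiency((0, 0, 1), (0, 0, 1)))  # 1.0
# Sun at 90 degrees to the tower ray -> theta = 45 deg, about 0.707
print(round(cosine_efficiency((1, 0, 0), (0, 0, 1)), 3))  # 0.707
```

A design trade-off study of the kind the abstract mentions would sum such per-mirror factors over a field layout and over the day-of-year and time-of-day inputs.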

Vittitoe, C.N.; Biggs, F.; Lighthill, R.E.

1977-03-01T23:59:59.000Z

254

Computer code input for thermal hydraulic analysis of Multi-Function Waste Tank Facility Title II design  

Science Conference Proceedings (OSTI)

The input files to the P/Thermal computer code are documented for the thermal hydraulic analysis of the Multi-Function Waste Tank Facility Title II design analysis.

Cramer, E.R.

1994-10-01T23:59:59.000Z

255

Facilities  

Science Conference Proceedings (OSTI)

... the gap for test data on advanced materials of ... in NCAL and Center for Theoretical and Computational Materials Science (CTCMS) cluster; ...

2013-02-22T23:59:59.000Z

256

Enhanced Computational Infrastructure for Data Analysis at the DIII-D National Fusion Facility  

SciTech Connect

Recently a number of enhancements to the computer hardware infrastructure have been implemented at the DIII-D National Fusion Facility. Utilizing these improvements to the hardware infrastructure, software enhancements are focusing on streamlined analysis, automation, and graphical user interface (GUI) systems to enlarge the user base. The adoption of the load balancing software package LSF Suite by Platform Computing has dramatically increased the availability of CPU cycles and the efficiency of their use. Streamlined analysis has been aided by the adoption of the MDSplus system to provide a unified interface to analyzed DIII-D data. The majority of MDSplus data is made available between pulses, giving the researcher critical information before setting up the next pulse. Work on data viewing and analysis tools focuses on efficient GUI design with object-oriented programming (OOP) for maximum code flexibility. Work to enhance the computational infrastructure at DIII-D has included a significant effort to aid the remote collaborator, since the DIII-D National Team consists of scientists from 9 national laboratories, 19 foreign laboratories, 16 universities, and 5 industrial partnerships. As a result of this work, DIII-D data is available on a 24 x 7 basis from a set of viewing and analysis tools that can be run either on the collaborators' or DIII-D's computer systems. Additionally, a Web-based data and code documentation system has been created to aid the novice and expert user alike.

Schissel, D.P.; Peng, Q.; Schachter, J.; Terpstra, T.B.; Casper, T.A.; Freeman, J.; Jong, R.; Keith, K.M.; Meyer, W.H.; Parker, C.T.

1999-08-01T23:59:59.000Z

257

Visual Analysis of I/O System Behavior for HighEnd Computing  

E-Print Network (OSTI)

at the Argonne Leadership Computing Facility (ALCF). On the ALCF systems, we use the 40-rack Intrepid Blue Gene network. When tracing applications in the ALCF environment, we set up a temporary PVFS2 storage cluster by the ALCF. The extra compute nodes we allocate for the I/O software are accessible only by our application

Islam, M. Saif

258

ASCR Science Network Requirements  

E-Print Network (OSTI)

Argonne Leadership Computing Facility (ALCF) ... the Argonne Leadership Computing Facility (ALCF), the Oak ... The Argonne Leadership Computing Facility (ALCF) provides

Dart, Eli

2010-01-01T23:59:59.000Z

259

ASCR Science Network Requirements  

E-Print Network (OSTI)

Argonne Leadership Computing Facility ... include the Argonne Leadership Computing Facility (ALCF) ... The Argonne Leadership Computing Facility (ALCF) provides

Dart, Eli

2010-01-01T23:59:59.000Z

260

Magellan: experiences from a Science Cloud  

E-Print Network (OSTI)

at the Argonne Leadership Computing Facility (ALCF) and the ... science at the Argonne Leadership Computing Facility and the ... of the Argonne Leadership Computing Facility at Argonne

Ramakrishnan, Lavanya

2013-01-01T23:59:59.000Z



261

Multicore and Accelerator Development for a Leadership-Class Stellar Astrophysics Code  

SciTech Connect

We describe recent development work on the core-collapse supernova code CHIMERA. CHIMERA has consumed more than 100 million CPU-hours on Oak Ridge Leadership Computing Facility (OLCF) platforms in the past 3 years, ranking it among the most important applications at the OLCF. Most of the work described has been focused on exploiting the multicore nature of the current platform (Jaguar) via, e.g., multithreading using OpenMP. In addition, we have begun a major effort to marshal the computational power of GPUs with CHIMERA. The impending upgrade of Jaguar to Titan, a 20+ PF machine with an NVIDIA GPU on many nodes, makes this work essential.

Messer, Bronson [ORNL]; Harris, James A. [ORNL]; Parete-Koon, Suzanne T. [ORNL]; Chertkow, Merek A. [ORNL]

2013-01-01T23:59:59.000Z

262

NREL: Sustainable NREL - Energy Systems Integration Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

Energy Systems Integration Facility The Energy Systems Integration Facility (ESIF), designed to Platinum-level standards of the U.S. Green Building Council's Leadership in Energy and Environmental Design (LEED®), incorporates a large number of energy efficiency and sustainability practices. Researchers housed within will help overcome challenges related to the interconnection of distributed energy systems and the integration of renewable energy technologies into the electricity grid. The ESIF will also contain advanced computational capability. Fast Facts Cost: $135M Square feet: 182,500 Occupants: 205 Labs/Equipment: 14 laboratories, an Insight Visualization Center, a

263

Leadership | Department of Energy  

Energy.gov (U.S. Department of Energy (DOE)) Indexed Site

Leadership This page features biographies for the members of the Office of Energy Efficiency and Renewable Energy (EERE) management. Executive Leadership David Danielson, Assistant Secretary. David Danielson leads EERE's energy portfolio, helping hasten the transition to a clean energy economy. He oversees six major technology and strategic areas, including the Energy Efficiency, Renewable Power, Sustainable Transportation, Strategic Programs, Financial Management, and Business Operations offices. He represents EERE before national, state, and local audiences to reinforce EERE's mission and to leverage partnerships to transform the nation's economic engine to one powered by clean energy. Mike Carr, Principal Deputy Assistant Secretary. Mike Carr provides leadership direction on cross-cutting activities in

264

National facility for advanced computational science: A sustainable path to scientific discovery  

E-Print Network (OSTI)

Office of Advanced Scientific Computing Research of the U.S. ... Office of Advanced Scientific Computing Research (OASCR) and ... OASCR: Office of Advanced Scientific Computing Research (DOE ...

2004-01-01T23:59:59.000Z

265

Fellows' Prize for Leadership  

NLE Websites -- All DOE Office Websites (Extended Search)

Fellows' Prize for Leadership Demonstrating outstanding leadership in science or engineering. Fellows' Prize for Leadership recipients 2009 David S. Moore, DE-9 For his inspirational technical leadership in the fields of shock physics and the science of explosives detection 2008 Andrew Shreve, MPA-CINT For his stimulation of young Laboratory staff to develop skills and to make personal sacrifices necessary to become effective leaders 2007 Dan Thoma, INST-OFF For his strong scientific leadership both within and outside the Laboratory, including his support to the JOWOG 22 collaboration, and his serving as a mentor in MST Division, as a senior advisor at the directorate level, in national societies, and most recently in the LANL Institutes Juan Fernandez, P-24

266

Cover: PNNL's Photovoltaic array produces electricity for our super-computing facility and adjacent car charging stations. IN THIS REPORT  

E-Print Network (OSTI)

Cover: PNNL's Photovoltaic array produces electricity for our super-computing facility ... agencies, universities, and industry. Interdisciplinary teams at PNNL address many of America's most ... 143 kBtu/ft2 ... At least 7.5% of electricity use from renewable sources by 2013 and thereafter

267

Argonne User Facility Agreements | Advanced Photon Source  

NLE Websites -- All DOE Office Websites (Extended Search)

Master proprietary agreement sample (pdf) Master non-proprietary agreement sample (pdf) Differences between non-proprietary and proprietary Argonne's National User Facilities Argonne Leadership Computing Facility (ALCF) Advanced Photon Source (APS) Argonne Tandem Linear Accelerator System (ATLAS) Center for Nanoscale Materials (CNM) Electron Microscopy Center (EMC) Argonne User Facility Agreements About User Agreements If you are not an Argonne National Laboratory employee, a user agreement signed by your home institution is a prerequisite for experimental work at any of Argonne's user facilities. The Department of Energy recently formulated master agreements that cover liability, intellectual property, and financial issues (access templates from the links in the left

268

Audit of Selected Aspects of the Unclassified Computer Security Program at a DOE Headquarters Computing Facility, AP-B-95-02  

Energy.gov (U.S. Department of Energy (DOE)) Indexed Site

OFFICE OF INSPECTOR GENERAL AUDIT OF SELECTED ASPECTS OF THE UNCLASSIFIED COMPUTER SECURITY PROGRAM AT A DOE HEADQUARTERS COMPUTING FACILITY The Office of Inspector General wants to make the distribution of its reports as customer friendly and cost effective as possible. Therefore, this report will be available electronically through the Internet five to seven days after publication at the alternative addresses: Department of Energy Headquarters Gopher gopher.hr.doe.gov Department of Energy Headquarters Anonymous FTP vm1.hqadmin.doe.gov U.S. Department of Energy Human Resources and Administration Home Page

269

Visual Analysis of I/O System Behavior for HighEnd Computing  

E-Print Network (OSTI)

Inglett, and Robert Wisniewski for their advice on the Blue Gene hardware. We thank Argonne ALCF's William, and Timothy Williams, for their advice on the ALCF early-access Blue Gene/Q machine. This work was supported ... Argonne Leadership Computing Facility: The ANL Intrepid Blue Gene System. http://www.alcf

270

Recipient: 1997 Leadership Award  

Science Conference Proceedings (OSTI)

Citation: "For her outstanding leadership in policy setting national materials ... the U.S. Department of Energy, and steering committees of the National Science...

271

Leadership | Department of Energy  

NLE Websites -- All DOE Office Websites (Extended Search)

Leadership Melanie A. Kenderdine Director of the Office of Energy Policy and Systems Analysis Melanie A. Kenderdine joined the Department of Energy as Director of the Office of...

272

Gender Diversity in Corporate Leadership  

E-Print Network (OSTI)

NOVEMBER 2011 ... Gender Diversity in Corporate Leadership ... in that greater gender diversity in top leadership positions ... GREATER GENDER DIVERSITY IN TOP LEADERSHIP POSITIONS OF

McLean, Lindsey

2011-01-01T23:59:59.000Z

273

Argonne Leadership Computing Facility 2013 Science Highlights

NLE Websites -- All DOE Office Websites (Extended Search)

Argonne Leadership Computing Facility 2013 Science Highlights. Contents: About ALCF; Mira; Science Director's Message; Allocation Programs; Early Science Program ...

274

BES Science Network Requirements  

E-Print Network (OSTI)

at the Argonne Leadership Computing Facility (ALCF). ... at the Argonne Leadership Computing Facility (ALCF). Data

Dart, Eli

2011-01-01T23:59:59.000Z

275

Supercomputing | Facilities | ORNL  

NLE Websites -- All DOE Office Websites (Extended Search)

Primary Systems Infrastructure High Performance Storage Supercomputing and Computation Home | Science & Discovery | Supercomputing and Computation | Facilities and Capabilities...

276

U.S. CMS - U.S. CMS @ Work - Data and Computing - Facility Operations...  

NLE Websites -- All DOE Office Websites (Extended Search)

File Transfer Batch Systems CRAB Quota and Usage Statistics CERN Bluearc Quota and Stats System Status U.S. CMS Grid Facility Operations: Batch System Batch Systems: The batch...

277

U.S. CMS - U.S. CMS @ Work - Data and Computing - Facility Operations...  

NLE Websites -- All DOE Office Websites (Extended Search)

File Transfer Batch Systems CRAB Quota and Usage Statistics CERN Bluearc Quota and Stats System Status U.S. CMS Grid Quota Policy and Usage Statistics Tier 1 Facility provides...

278

I/O performance challenges at leadership scale  

Science Conference Proceedings (OSTI)

Today's top high performance computing systems run applications with hundreds of thousands of processes, contain hundreds of storage nodes, and must meet massive I/O requirements for capacity and performance. These leadership-class systems face daunting ...

Samuel Lang; Philip Carns; Robert Latham; Robert Ross; Kevin Harms; William Allcock

2009-11-01T23:59:59.000Z

279

Supporting Computational Data Model Representation with High-performance I/O in Parallel netCDF  

E-Print Network (OSTI)

stay competitive ... Argonne Leadership Computing Facility, industry@alcf.anl.gov ... MIRA RANKS third on the TOP500 ... and create more accurate models for your business with Mira, the ALCF's new petascale IBM Blue Gene/Q system ... ALCF Science Director ... CUTTING-EDGE SUPERCOMPUTING KEEPS YOU COMPETITIVE ... A key driver of our nation

Choudhary, Alok

280

National facility for advanced computational science: A sustainable path to scientific discovery  

E-Print Network (OSTI)

Scientific Computing (NERSC) Center, 1996-present. ... Services and Systems at NERSC (Oct. 1, 1997 - Dec. 31, 1998) ... History: Chief Architect, NERSC Division, Lawrence Berkeley

2004-01-01T23:59:59.000Z



281

Leadership | Department of Energy  

Energy.gov (U.S. Department of Energy (DOE)) Indexed Site

About Us » Leadership David G. Huizenga, Senior Advisor, Office of Environmental Management. President Obama designated David G. Huizenga as the Acting Assistant Secretary for the Office of Environmental Management, effective July 20, 2011. More about Senior Advisor Huizenga Tracy Mustin, Principal Deputy Assistant Secretary for Environmental Management. Tracy Mustin has been the Principal Deputy Assistant Secretary for Environmental Management since August 2011. In this capacity, she is responsible for the policy direction, management, and execution of the Department of Energy's nuclear cleanup portfolio. More about Principal Deputy Assistant Secretary Mustin Alice C. Williams, Associate Principal Deputy Assistant Secretary for Environmental Management

282

Development of Parallel Computing Framework to Enhance Radiation Transport Code Capabilities for Rare Isotope Beam Facility Design  

Science Conference Proceedings (OSTI)

A parallel computing framework has been developed for use with general-purpose radiation transport codes. The framework was implemented as a C++ module that uses MPI for message passing. It is intended to be used with older radiation transport codes implemented in Fortran 77, Fortran 90, or C. The module is largely independent of the radiation transport codes it is used with, and is connected to them by means of a number of interface functions. The framework was developed and tested in conjunction with the MARS15 code. It can be used with other codes such as PHITS, FLUKA, and MCNP after certain adjustments. Besides the parallel computing functionality, the framework offers a checkpoint facility that allows restarting calculations from a saved checkpoint file. The checkpoint facility can be used in single-process calculations as well as in the parallel regime. The framework corrects some of the known problems with scheduling and load balancing found in the original implementations of the parallel computing functionality in MARS15 and PHITS. The framework can be used efficiently on homogeneous systems and on networks of workstations, where interference from other users is possible.
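The scheduling behavior this record describes can be sketched in miniature. The following is an illustrative Python sketch only (the actual framework is a C++/MPI module, and the function names here are invented for the example) of dynamic master-worker scheduling, where workers pull the next batch of histories as soon as they finish, so a slow batch does not stall the whole run:

```python
from concurrent.futures import ThreadPoolExecutor

def simulate_batch(seed, n_histories):
    # Toy, deterministic stand-in for a batch of transport histories;
    # a real framework would call into the physics code here.
    return sum(((seed * 1103515245 + i) % 1000) / 1000.0
               for i in range(n_histories))

def run(n_workers=4, n_batches=16, histories_per_batch=100):
    # Dynamic scheduling: the executor hands out batches one at a time,
    # so a worker that finishes early immediately picks up more work.
    # Results are summed in submission order, so the total tally is
    # independent of how batches happen to map onto workers.
    with ThreadPoolExecutor(max_workers=n_workers) as ex:
        futures = [ex.submit(simulate_batch, s, histories_per_batch)
                   for s in range(n_batches)]
        return sum(f.result() for f in futures)
```

Because each batch carries its own seed, the result is reproducible regardless of worker count, which is also what makes a checkpoint/restart facility of the kind described feasible: completed batch tallies can be saved and the remaining seeds resumed later.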

Kostin, Mikhail [FRIB, MSU]; Mokhov, Nikolai [FNAL]; Niita, Koji [RIST, Japan]

2013-09-25T23:59:59.000Z

283

Leadership | Department of Energy  

NLE Websites -- All DOE Office Websites (Extended Search)

About Us » Leadership Eric J. Fygi, Deputy General Counsel. Eric J. Fygi has served as the Deputy General Counsel since the Department of Energy's founding in October 1977, and periodically has served since then as the Department's Acting General Counsel. Together with the General Counsel, he is responsible to the Secretary of Energy for all the Department's legal affairs and the management of a 100-plus complement of attorneys at the Department's headquarters. More about Eric J. Fygi Gena E. Cadieux, Deputy General Counsel for Technology Transfer & Procurement. Ms. Cadieux is the Deputy General Counsel for Technology Transfer and Procurement at the United States Department of Energy (DOE). She manages a legal staff responsible for providing legal counsel to the procurement

284

Leadership | Department of Energy  

NLE Websites -- All DOE Office Websites (Extended Search)

Leadership Ingrid Ann Christner Kolb, Director, Office of Management. Ingrid Kolb was appointed Director of the Office of Management on December 1, 2005. As the Director she leads an organization comprised of nearly 260 employees with a budget of $55 million. The Office of Management (MA) is the Department of Energy's central management organization providing leadership in such mission-critical areas as project and acquisition management. MA also provides direction and policy guidance in support of efforts to reform the Department's management through implementation of the President's Management Agenda. More about Ingrid Kolb Marilyn L. Dillon, Director, Office of Resource Management and Planning. As Director of the Office of Resource Management and Planning, Ms. Dillon

285

Leadership | Department of Energy  

Energy.gov (U.S. Department of Energy (DOE)) Indexed Site

Leadership Alison Doone, Deputy Chief Financial Officer. Alison Doone is the Department of Energy's (DOE) Deputy Chief Financial Officer (CFO). More about Deputy Doone Joanne Choi, Director, Office of Finance and Accounting. Ms. Choi is the Director of the Office of Finance and Accounting, which consists of the Energy Finance and Accounting Service Center and the Office of Financial Control and Reporting. Ms. Choi is responsible for providing agency-wide accounting and financial management service to the Department. She oversees the production of accurate and timely audited financial statements. Before joining the Department of Energy, Ms. Choi worked at the Office of Management and Budget (OMB) developing financial systems policy for federal agencies. Ms.

286

Leadership | Department of Energy  

NLE Websites -- All DOE Office Websites (Extended Search)

Leadership The Honorable Dot Harris, Director, Office of Economic Impact and Diversity. LaDoris "Dot" Harris was nominated by President Obama to be the Director of the Office of Economic Impact and Diversity at the United States Department of Energy. She was confirmed by the U.S. Senate on March 29, 2012. Ms. Harris brings nearly 30 years of management and leadership experience to this position, having served at some of the world's largest firms and leading a successful energy, IT, and healthcare consulting firm. More about Dot Harris Andre H. Sayles, Principal Deputy Director of the Office of Economic Impact and Diversity and Acting Deputy Director of the Office of Minority Business and Economic Development. Andre H. Sayles, Ph.D., joined the Department of Energy as the Principal

287

Leadership | Department of Energy  

Energy.gov (U.S. Department of Energy (DOE)) Indexed Site

About Us » Leadership Robert C. Gibbs, Chief Human Capital Officer. Bob Gibbs was born and raised in Boston, Massachusetts. A retired naval officer, he holds both a B.A. in business management from the University of Washington and a J.D. from George Mason University. He is a member of the Maryland and American Bar Associations. More about Bob Gibbs Cyndi L. Mays, Deputy Chief Human Capital Officer. Ms. Cyndi Mays is a senior-level Federal manager with over twenty years of experience managing organizations, projects, and people in both large and small civilian and military agencies. She has proven success in Human Capital Strategy and Operations, Human Resources Transformation, Program Management, and Executive and Stakeholder Communications. Her portfolio

288

The grand challenge of managing the petascale facility.  

Science Conference Proceedings (OSTI)

This report is the result of a study of networks and how they may need to evolve to support petascale leadership computing and science. As Dr. Ray Orbach, director of the Department of Energy's Office of Science, says in the spring 2006 issue of SciDAC Review, 'One remarkable example of growth in unexpected directions has been in high-end computation'. In the same article Dr. Michael Strayer states, 'Moore's law suggests that before the end of the next cycle of SciDAC, we shall see petaflop computers'. Given the Office of Science's strong leadership and support for petascale computing and facilities, we should expect to see petaflop computers in operation in support of science before the end of the decade, and DOE/SC Advanced Scientific Computing Research programs are focused on making this a reality. This study took its lead from this strong focus on petascale computing and the networks required to support such facilities, but it grew to include almost all aspects of the DOE/SC petascale computational and experimental science facilities, all of which will face daunting challenges in managing and analyzing the voluminous amounts of data expected. In addition, trends indicate the increased coupling of unique experimental facilities with computational facilities, along with the integration of multidisciplinary datasets and high-end computing with data-intensive computing; and we can expect these trends to continue at the petascale level and beyond. Coupled with recent technology trends, they clearly indicate the need for including capability petascale storage, networks, and experiments, as well as collaboration tools and programming environments, as integral components of the Office of Science's petascale capability metafacility. The objective of this report is to recommend a new cross-cutting program to support the management of petascale science and infrastructure. 
The appendices of the report document current and projected DOE computation facilities, science trends, and technology trends, whose combined impact can affect the manageability and stewardship of DOE's petascale facilities. This report is not meant to be all-inclusive. Rather, the facilities, science projects, and research topics presented are to be considered examples to clarify a point.

Aiken, R. J.; Mathematics and Computer Science

2007-02-28T23:59:59.000Z

289

Leadership Development | Department of Energy  

Energy.gov (U.S. Department of Energy (DOE)) Indexed Site

Leadership Development DOE's Leadership & Development Programs are designed to strengthen the participant's capacity to lead by deepening their understanding of DOE's core values and key leadership characteristics and behaviors, which are the foundation of our model for success. These programs help individuals improve performance through the implementation of a personalized development plan that uses competency assessments as the foundation. Participants are introduced to concepts, characteristics, and behaviors needed to enhance leadership skills and/or prepare them for assignment to leadership positions at DOE and beyond. The programs consist of developmental experiences, formal and informal training, active learning

290

Speeding Up Science Data Transfers Between Department of Energy Facilities  

NLE Websites -- All DOE Office Websites (Extended Search)

Speeding Up Science Data Transfers Between Department of Energy Facilities. May 16, 2009. As scientists conduct cutting-edge research with ever more sophisticated techniques, instruments, and supercomputers, the data sets that they must move, analyze, and manage are increasing in size to unprecedented levels. The ability to move and share data is essential to scientific collaboration, and in support of this activity network and systems engineers from the Department of Energy's (DOE) Energy Sciences Network (ESnet), National Energy Research Scientific Computing Center (NERSC) and Oak Ridge Leadership Computing Facility (OLCF) are teaming up to optimize wide-area network (WAN) data transfers.
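A common starting point for the kind of WAN tuning this record describes is the bandwidth-delay product: a long-distance link only runs at full speed if TCP buffers can hold a full round trip of in-flight data. The sketch below illustrates that rule of thumb; the link speed and latency used are illustrative assumptions, not figures from the article, and the article does not specify which optimizations the ESnet/NERSC/OLCF team applied.

```python
def tcp_buffer_bytes(bandwidth_bits_per_s: float, rtt_s: float) -> int:
    """Bandwidth-delay product: bytes of in-flight data needed to keep a
    wide-area link full, a standard starting point for WAN tuning."""
    return int(bandwidth_bits_per_s * rtt_s / 8)

# Example (hypothetical path): a 10 Gb/s link with a 70 ms round-trip
# time needs roughly 87.5 MB of TCP buffer to run at line rate.
buffer_needed = tcp_buffer_bytes(10e9, 0.070)
```

In practice both endpoints' socket buffer limits must be raised to at least this value before a single TCP stream can fill the path.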

291

Guiding Principles for Federal Leadership in High-Performance and  

Energy.gov (U.S. Department of Energy (DOE)) Indexed Site

Guiding Principles for Federal Leadership in High-Performance and Sustainable Buildings. October 4, 2013 - 4:49pm. The Federal Energy Management Program (FEMP) provides guidance and assistance for compliance with the guiding principles established by the 2006 Federal Leadership in High-Performance and Sustainable Buildings Memorandum of Understanding (MOU), which became mandatory through Executive Order (E.O.) 13423 and was reinforced in E.O. 13514. The common set of guiding principles includes integrated design, energy performance, water conservation, and materials, to help: reduce the total ownership cost of facilities; improve energy efficiency and water conservation

292

HSS Work Group Leadership Meetings: Transition Elements | Department of  

Energy.gov (U.S. Department of Energy (DOE)) Indexed Site

HSS Work Group Leadership Meetings: Transition Elements. Meeting Dates: November 13 - 15, 2012. This HSS Focus Group Work Group telecom was held with the Work Group Co-Leads to discuss change elements and strategic direction to support accelerated efforts to advance progress, productivity, and performance within each of the Work Groups. Although current roles within all of the Work Groups and Focus Group efforts remain the same, centralized leadership and oversight by two representatives of the Departmental Representative to the Defense Nuclear Facilities Safety Board are established. Meeting Summaries: 851 Implementation Meeting Summary; Strategic Initiatives Meeting Summary; Workforce Retention Meeting Summary

293

Surface Field Optimization of Accelerating Structures for CLIC Using ACE3P on Remote Computing Facility  

E-Print Network (OSTI)

This paper presents a computer program for searching for the optimum shape of an accelerating structure cell by scanning a multidimensional geometry parameter space. For each geometry, RF parameters and peak surface fields are calculated using ACE3P on a remote high-performance computational system. Parameter point selection, mesh generation, result storage, and post-analysis are handled by a GUI program, AcdOptiGui, running on the user's workstation. AcdOptiGui also includes some capability for automatically selecting scan points based on results from earlier simulations, which enables rapid optimization of a given parameterized geometry. The software has previously been used as part of the design process for accelerating structures for a 500 GeV CLIC.
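The scan-and-evaluate loop the abstract describes can be sketched as follows. This is an illustrative reconstruction, not AcdOptiGui's actual code: `evaluate` stands in for a remote ACE3P run, and the parameter names in the usage example are hypothetical.

```python
from itertools import product

def scan_geometries(param_ranges, evaluate):
    """Exhaustively scan a multidimensional geometry parameter space.

    param_ranges: dict mapping parameter name -> list of candidate values.
    evaluate: callable returning a figure of merit (e.g. peak surface
    field) for one geometry dict; in the real workflow this would be a
    remote ACE3P simulation, here it is any local stand-in function.
    """
    names = sorted(param_ranges)
    results = {}
    for values in product(*(param_ranges[n] for n in names)):
        geometry = dict(zip(names, values))
        results[tuple(values)] = evaluate(geometry)
    return names, results

def best_geometry(names, results):
    """Pick the scanned point that minimizes the figure of merit."""
    values, _ = min(results.items(), key=lambda kv: kv[1])
    return dict(zip(names, values))
```

For example, scanning a hypothetical two-parameter cell with `scan_geometries({"a": [1, 2, 3], "b": [-2, -1, 0]}, fom)` evaluates all nine combinations; an adaptive version, as in AcdOptiGui, would instead choose new scan points near the current minimum.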

Sjobak, KN; Grudiev, A

2013-01-01T23:59:59.000Z

294

ASCR-Final  

NLE Websites -- All DOE Office Websites (Extended Search)

facilities - facilities that house some of the world's fastest supercomputers -- at the Oak Ridge Leadership Computing Facility (OLCF), the Argonne Leadership Computing Facility...

295

Leadership | Department of Energy  

Energy.gov (U.S. Department of Energy (DOE)) Indexed Site

Leadership: Tracey LeBeau, Director, Office of Indian Energy Policy and Programs. Tracey A. LeBeau (Cheyenne River Sioux) is Director for the U.S. Department of Energy's Office of Indian Energy Policy and Programs. She was appointed in January 2011 to establish this new Office, which is authorized by statute to manage, coordinate, create and facilitate programs and initiatives to encourage tribal energy and energy infrastructure development. Administratively, the Office was established to also coordinate, across the Department, those policies, programs and initiatives involving Indian energy and energy infrastructure development. More about Tracey LeBeau. Pilar Thomas, Deputy Director, Office of Indian Energy Policy and Programs. Pilar Thomas (Pascua Yaqui) is the Deputy Director in the Office of Indian

296

Leadership | Department of Energy  

Energy.gov (U.S. Department of Energy (DOE)) Indexed Site

Leadership: David Geiser, Director and Acting Deputy Director of the Office of Legacy Management. Dave Geiser graduated from Cornell University with a bachelor's degree in chemical engineering and received his commission in the U.S. Navy in 1981. He served in the Navy for eight years as a nuclear-trained officer on the USS Daniel Webster and at the Naval Sea Systems Command. After leaving the Navy, Mr. Geiser received a master of engineering administration degree from The George Washington University and joined Science Applications International Corporation. During his three years with SAIC, he spent two years in Paris, France, evaluating European waste management practices. More about Director David Geiser. Barbara McNeal Lloyd, Director, Office of Business Operations

297

Leadership | Department of Energy  

Energy.gov (U.S. Department of Energy (DOE)) Indexed Site

About Us » Leadership. Peter B. Lyons, Assistant Secretary for Nuclear Energy. Dr. Peter B. Lyons was confirmed by the Senate as the Assistant Secretary for Nuclear Energy on April 14, 2011. Dr. Lyons was appointed to his previous role as Principal Deputy Assistant Secretary of the Office of Nuclear Energy in September 2009. As Assistant Secretary, Dr. Lyons is responsible for all programs and activities of the Office of Nuclear Energy. More about Assistant Secretary Lyons. Dennis Michael Miotla, Chief Operating Officer & Acting Principal Deputy Assistant Secretary. Mr. Miotla currently serves as Chief Operating Officer and acting Principal Deputy Assistant Secretary for the Office of Nuclear Energy. Mr. Miotla shares responsibilities with the Assistant Secretary for all research,

298

Leadership | Department of Energy  

Energy.gov (U.S. Department of Energy (DOE)) Indexed Site

Leadership: Jonathan Elkind, Assistant Secretary for Policy & International Affairs (Acting). Jonathan Elkind serves as Acting Assistant Secretary for the Office of Policy and International Affairs (PI) and has served as the Principal Deputy Assistant Secretary for PI since June 2009. Prior to joining the Energy Department, Mr. Elkind worked as a senior fellow at the Brookings Institution, focusing on energy security and foreign policy issues. He also founded and headed EastLink Consulting, LLC, an independent consultancy focusing on energy, environment, and investment. From 1998 to 2001, Elkind served on the staff of the U.S. More about Acting Assistant Secretary Elkind. Andrea Lockwood, Deputy Assistant Secretary for Eurasia, Africa, and the Middle East

299

Leadership | Department of Energy  

Energy.gov (U.S. Department of Energy (DOE)) Indexed Site

About Us » Leadership. Melanie A. Kenderdine, Director of the Office of Energy Policy and Systems Analysis. Melanie A. Kenderdine joined the Department of Energy as Director of the Office of Energy Policy and Systems Analysis and Energy Counselor to the Secretary in May 2013. Prior to serving in her current role at DOE, Ms. Kenderdine worked as the Executive Director and Associate Director of the MIT Energy Initiative (MITEI). During her six-year tenure at MITEI, she raised over $500 million from industry and private donors for energy research and education, was a member of the research team for MIT's Future of Natural Gas Study, and was the rapporteur and editor for the MITEI Symposium Series. More about Melanie A. Kenderdine. Jonathan Pershing, Principal Deputy Director of the Office of Energy Policy and Systems

300

Leadership Development Resource Center | Department of Energy  

Energy.gov (U.S. Department of Energy (DOE)) Indexed Site

Leadership Development Resource Center. The Office of Learning and Workforce Development believes that effective leadership is central to organizational success and has implemented the Leadership Development Resource Center. This will provide current and emerging leaders with the tools and information to help them build their leadership capacity. The LDRC is a means of coordinating resources and program efforts in order to meet DOE's mission by progressing in all phases of leadership development. DOE Leadership Philosophy: Several themes describe the state of leadership development today: a growing recognition that leadership development, regardless of the theory or model that an organization adopts, involves more than just

Note: This page contains sample records for the topic "leadership computing facility" from the National Library of EnergyBeta (NLEBeta).
While these samples are representative of the content of NLEBeta,
they are not comprehensive nor are they the most current set.
We encourage you to perform a real-time search of NLEBeta
to obtain the most current and comprehensive results.


301

Leadership Philosophy | Department of Energy  

Energy.gov (U.S. Department of Energy (DOE)) Indexed Site

Leadership » Leadership Philosophy. We are diverse, talented, knowledgeable, and dedicated people committed to public service and to the success of Legacy Management. We are trustworthy stewards of DOE's intergenerational legacy responsibilities and of the American tax dollars. Our full potential is realized through teamwork, respecting each other, promoting open communication and productivity, and supporting creativity and initiative. We trust each other, and we feel responsible for and dedicated to each other's success. Through our leadership and commitment, holding ourselves accountable for producing superior quality products, we - the DOE Office of Legacy Management - can best achieve our shared goals. Each of us shares responsibility for creating a safe work environment with clear goals,

302

Leadership | National Nuclear Security Administration  

National Nuclear Security Administration (NNSA)

Leadership (Livermore Field Office): Kimberly D. Lebak, Manager. Kim Lebak became the Livermore Site Manager in January 2012 for the National Nuclear Security Administration of the U.S. Department of Energy.

303

Leadership Development | Argonne National Laboratory  

NLE Websites -- All DOE Office Websites (Extended Search)

include work-life balance, stress management and innovative solutions to career and gender issues. Photo Gallery: Strategic Laboratory Leadership Program Strategic Laboratory...

304

Leadership Team | ornl.gov  

NLE Websites -- All DOE Office Websites (Extended Search)


305

Educational Technology| Leadership and Implementation.  

E-Print Network (OSTI)

?? The purpose of this study was to evaluate two important aspects of educational technology: leadership and implementation. The research conducted in this study aimed (more)

Galla, Anthony J.

2011-01-01T23:59:59.000Z

306

NETL: Carbon Storage - Carbon Sequestration Leadership Forum  

NLE Websites -- All DOE Office Websites (Extended Search)

The Carbon Sequestration Leadership Forum (CSLF) is a voluntary climate initiative of industrially developed and...

307

Federal Energy Management Program: Leadership Institutional Change  

NLE Websites -- All DOE Office Websites (Extended Search)

Federal Energy Management Program: Leadership Institutional Change Principle. Related topics: Sustainable Buildings & Campuses; Operations & Maintenance; Greenhouse Gases; Water Efficiency; Data Center Energy Efficiency

308

Magellan: experiences from a Science Cloud  

E-Print Network (OSTI)

at the Argonne Leadership Computing Facility (ALCF) and the National Energy Research Scientific Computing Facility (NERSC)

Ramakrishnan, Lavanya

2013-01-01T23:59:59.000Z

309

Molecular Science Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

The parallel code PFLOTRAN for modeling reactive flows in porous media was developed by Peter Lichtner at Los Alamos National Laboratory and tested on the EMSL MPP2 system....

310

Implementation plan for operating alternatives for the Naval Computer and Telecommunications Station cogeneration facility at Naval Air Station North Island, San Diego, California  

SciTech Connect

The goal of the US Department of Energy (DOE) Federal Energy Management Program (FEMP) is to facilitate energy efficiency improvements at federal facilities. This is accomplished by a balanced program of technology development, facility assessment, and use of cost-sharing procurement mechanisms. Technology development focuses upon the tools, software, and procedures used to identify and evaluate energy efficiency technologies and improvements. For facility assessment, FEMP provides metering equipment and trained analysts to federal agencies exhibiting a commitment to improve energy use efficiency. To assist in procurement of energy efficiency measures, FEMP helps federal agencies devise and implement performance contracting and utility demand-side management strategies. Pacific Northwest Laboratory (PNL) supports the FEMP mission of energy systems modernization. Under this charter, the Laboratory and its contractors work with federal facility energy managers to assess and implement energy efficiency improvements at federal facilities nationwide. The SouthWestern Division of the Naval Facilities Engineering Command, in cooperation with FEMP, has tasked PNL with developing a plan for implementing recommended modifications to the Naval Computer and Telecommunications Station (NCTS) cogeneration plant at the Naval Air Station North Island (NASNI) in San Diego. That plan is detailed in this report.

Carroll, D.M.; Parker, S.A.; Stucky, D.J.

1994-04-01T23:59:59.000Z

311

Frontiers of Performance Analysis on Leadership-Class Systems  

SciTech Connect

The number of cores employed in high-end systems for scientific computing is increasing rapidly. As a result, there is a pressing need for tools that can measure, model, and diagnose performance problems in highly parallel runs. We describe two tools that take complementary approaches to analysis at scale, and we illustrate their use on DOE leadership-class systems.

Fowler, R J; Adhianto, L; de Supinski, B R; Fagan, M; Gamblin, T; Krentel, M; Mellor-Crummey, J; Schulz, M; Tallent, N

2009-06-15T23:59:59.000Z

312

INCITE Quarterly Report Policy | Argonne Leadership Computing...  

NLE Websites -- All DOE Office Websites (Extended Search)

late: The ability to submit jobs for the PI and users of the late project will be disabled. If a report is more than 90 days late: The PI and users of the late project will...

313

Discretionary Allocation Request | Argonne Leadership Computing...  

NLE Websites -- All DOE Office Websites (Extended Search)

Physics; Physics, Condensed Matter Physics; Physics, High Energy Physics; Physics, Nuclear Physics; Physics, Space Physics; Physics, Particle Physics; Physics, Plasma Physics...

314

Surveyor / Gadzooks File Systems | Argonne Leadership Computing...  

NLE Websites -- All DOE Office Websites (Extended Search)

Intrepid / Challenger / Surveyor: Decommissioning of BG/P Systems and Resources; Introducing Challenger; Quick Reference Guide; System Overview; Data Transfer; Data Storage & File Systems...

315

Getting Started Videoconferences | Argonne Leadership Computing...  

NLE Websites -- All DOE Office Websites (Extended Search)

Getting Started Videoconferences. Start Date: Jan 23 2014 - 3:16pm. Event Website: http://www.alcf.anl.gov/workshops/getting-started-videoconference-2014. Register for one of eight

316

Querying Allocations Using cbank | Argonne Leadership Computing...  

NLE Websites -- All DOE Office Websites (Extended Search)

apply to the resource (machine) to which a user is currently logged in. For example, a query on Surveyor about PROJECTNAME will return information about the Surveyor allocation...

317

Computational Studies of Nucleosome Stability | Argonne Leadership...  

NLE Websites -- All DOE Office Websites (Extended Search)

structure studies of the nucleosomes, which are complexes of DNA and proteins in chromatin and account for 75-90% of the packaging of the genomic DNA in mammalian cells....

318

Executive Leadership Program | Department of Energy  

Energy.gov (U.S. Department of Energy (DOE)) Indexed Site

Executive Leadership Program. Program Overview: This program is based on the U.S. Office of Personnel Management's Leadership Effectiveness Framework (LEF), a model for effective leadership/managerial performance. The program helps participants acquire or enhance the LEF competencies needed to become a successful government leader and manager. Participants will complete the following activities: individual needs assessment; leadership development plans; leadership development team activities; developmental work assignment; shadowing assignment; executive interviews; management readings; and four residential training sessions. In order to complete all the components of the program, participants will be away from their position of record for a

319

Computer program development specification for the air traffic control subsystem of the Man-Vehicle Systems Research Facility.  

E-Print Network (OSTI)

Functional summary: The Air Traffic Control (ATC) Subsystem of the Man-Vehicle System Research Facility (MVSRF) is a hardware/software complex which provides the MVSRF with the capability of simulating the multi-aircraft, ...

Massachusetts Institute of Technology. Flight Transportation Laboratory

1982-01-01T23:59:59.000Z

320

Vision, Leadership and Commitment...  

Energy.gov (U.S. Department of Energy (DOE)) Indexed Site

View online at energy.gov/cio. Vision, Leadership and Commitment... Enabling the Future through Technology and Information. Strategic Plan, OCIO FY 2012 - FY 2017. Transformation. Sustainability. Innovation. Teamwork. Partnerships. U.S. Department of Energy | Office of the Chief Information Officer | OCIO Strategic Plan. Table of Contents: Message from Michael Locatis, Chief Information Officer



321

Gender Diversity in Corporate Leadership  

E-Print Network (OSTI)

CSW update, November 2011. Gender Diversity in Corporate Leadership: "Female Leadership and Gender Equity: Evidence from Plant..."

McLean, Lindsey

2011-01-01T23:59:59.000Z

322

Leadership and Leading Indicators Presentation  

NLE Websites -- All DOE Office Websites (Extended Search)

"...has to do with casting vision and motivating people." John C. Maxwell. August 28, 2008. A Call for Leadership: Sampling of recent Board-to-DOE letters found 60% had safety...

323

Computing  

NLE Websites -- All DOE Office Websites (Extended Search)

Computing and Storage Requirements for FES. J. Candy, General Atomics, San Diego, CA. Presented at DOE Technical Program Review, Hilton Washington DC/Rockville, Rockville, MD, 19-20 March 2013. Drift waves and tokamak plasma turbulence: role in the context of fusion research. Plasma performance: in tokamak plasmas, performance is limited by turbulent radial transport of both energy and particles. Gradient-driven: this turbulent transport is caused by drift-wave instabilities, driven by free energy in plasma temperature and density gradients. Unavoidable: these instabilities will persist in a reactor. Various types (asymptotic theory): ITG, TIM, TEM, ETG, plus electromagnetic variants (AITG, etc.). Fokker-Planck theory of plasma transport: basic equation still

324

Volume XIV, Number 1, Fall 1999. A publication of the Academic Senate, California State University, Fullerton

E-Print Network (OSTI)

) and the Oak Ridge Leadership Computing Facility (OLCF), located in the National Center for Computational

de Lijser, Peter

325

Order No. 9056: Doctoral Thesis

E-Print Network (OSTI)

Computing Center (NERSC) and the Oak Ridge Leadership Computing Facility (OLCF), located in the National

Paris-Sud XI, Université de

326

Crystal Lattice Defects, 1982, Vol. 9, pp. 149-165

E-Print Network (OSTI)

) and the Oak Ridge Leadership Computing Facility (OLCF), located in the National Center for Computational

Macdonald, James Ross

327

A Benchmark of GW Methods for Azabenzenes: Is the GW Approximation Good Enough?  

E-Print Network (OSTI)

), the Texas Advanced Computing Center (TACC), and the Argonne Leadership Computing Facility (ALCF). A

328

Argonne materials scientist Vilas Pol (former postdoc) was recently featured on the PBS NOVA series "Making Stuff: Cleaner," where  

E-Print Network (OSTI)

) Argonne Leadership Computing Facility (ALCF) Transportation Research and Analysis Computing Center (TRACC

Kemner, Ken

329

paper_combined.dvi  

NLE Websites -- All DOE Office Websites (Extended Search)

of the Argonne Leadership Computing Facility at Argonne National Laboratory and the Oak Ridge Leadership Computing Facility at Oak Ridge National Laboratory, which are sup-...

330

ASCR Science Network Requirements  

E-Print Network (OSTI)

programs include the Argonne Leadership Computing Facility... ASCR has LCFs at Argonne National Laboratory and

Dart, Eli

2010-01-01T23:59:59.000Z

331

The Magellan Final Report on Cloud Computing  

Science Conference Proceedings (OSTI)

The goal of Magellan, a project funded through the U.S. Department of Energy (DOE) Office of Advanced Scientific Computing Research (ASCR), was to investigate the potential role of cloud computing in addressing the computing needs of the DOE Office of Science (SC), particularly the needs of mid-range computing and future data-intensive computing workloads. A set of research questions was formed to probe various aspects of cloud computing, from performance and usability to cost. To address these questions, a distributed testbed infrastructure was deployed at the Argonne Leadership Computing Facility (ALCF) and the National Energy Research Scientific Computing Center (NERSC). The testbed was designed to be flexible and capable enough to explore a variety of computing models and hardware design points in order to understand the impact on various scientific applications. During the project, the testbed also served as a valuable resource to application scientists. Applications from a diverse set of projects, such as MG-RAST (a metagenomics analysis server), the Joint Genome Institute, the STAR experiment at the Relativistic Heavy Ion Collider, and the Laser Interferometer Gravitational Wave Observatory (LIGO), were used by the Magellan project for benchmarking within the cloud, but the project teams were also able to accomplish important production science utilizing the Magellan cloud resources.

Coghlan, Susan; Yelick, Katherine

2011-12-21T23:59:59.000Z

332

Leadership Excellence Program | Department of Energy  

Energy.gov (U.S. Department of Energy (DOE)) Indexed Site

Leadership Excellence Program. Overview: The Office of Environmental Management (EM) recognizes that leadership enhancement is vital to the program and commits to the development and strengthening of leadership skills for all employees throughout their careers. EM's Leadership Excellence Program (LEP) is a competency-based program designed to develop future leaders and enhance SES leadership skills. EM's LEP is a roadmap to senior leadership in the organization. Every EM employee is a potential leader, whether he or she chooses to become a manager or elects to focus on excellence in a technical or functional role. The LEP is designed to develop team leaders, project leaders, supervisors, managers, and senior executives. The LEP does not

333

Sandia National Laboratories: About Sandia: Leadership: Information...  

NLE Websites -- All DOE Office Websites (Extended Search)

vision and leadership of Sandia's information technology, information management, and cyber security strategy. The balance between information technology and information...

334

Analysis of operating alternatives for the Naval Computer and Telecommunications Station Cogeneration Facility at Naval Air Station North Island, San Diego, California  

SciTech Connect

The Naval Facilities Engineering Command Southwestern Division commissioned Pacific Northwest Laboratory (PNL), in support of the US Department of Energy (DOE) Federal Energy Management Program (FEMP), to determine the most cost-effective approach to the operation of the cogeneration facility in the Naval Computer and Telecommunications Station (NCTS) at the Naval Air Station North Island (NASNI). Nineteen alternative scenarios were analyzed by PNL on a life-cycle cost basis to determine whether to continue operating the cogeneration facility or convert the plant to emergency-generator status. This report provides the results of the analysis performed by PNL for the 19 alternative scenarios. A narrative description of each scenario is provided, including information on the prime mover, electrical generating efficiency, thermal recovery efficiency, operational labor, and backup energy strategy. Descriptions of the energy and energy cost analysis, operations and maintenance (O&M) costs, emissions and related costs, and implementation costs are also provided for each alternative. A summary table presents the operational cost of each scenario and presents the result of the life-cycle cost analysis.

Parker, S.A.; Carroll, D.M.; McMordie, K.L.; Brown, D.R.; Daellenbach, K.K.; Shankle, S.A.; Stucky, D.J.

1993-12-01T23:59:59.000Z

335

ARM - Facility News Article  

NLE Websites -- All DOE Office Websites (Extended Search)

15, 2005 [Facility News]: Aging, Overworked Computer Network at SGP Gets Overhauled. This aerial map of instruments deployed at the SGP Central Facility provides an indication of the computer resources needed to manage data at the site, let alone communicate with other ARM sites. Established as the first ARM research facility in 1992, the Southern Great Plains (SGP) site in Oklahoma is the "old man on the block" when it comes to infrastructure. Though significant improvements have been made to facilities and equipment throughout the years, the computer network at the

336

Computing Frontier: Distributed Computing  

NLE Websites -- All DOE Office Websites (Extended Search)

Computing Frontier: Distributed Computing and Facility Infrastructures. Conveners: Kenneth Bloom (Department of Physics and Astronomy, University of Nebraska-Lincoln) and Richard Gerber (National Energy Research Scientific Computing Center (NERSC), Lawrence Berkeley National Laboratory). 1.1 Introduction. The field of particle physics has become increasingly reliant on large-scale computing resources to address the challenges of analyzing large datasets, completing specialized computations and simulations, and allowing for widespread participation of large groups of researchers. For a variety of reasons, these resources have become more distributed over a large geographic area, and some resources are highly specialized computing machines. In this report for the Snowmass Computing Frontier Study, we consider several questions about distributed computing

337

Experimental and Computational Study of the Flux Spectrum in Materials Irradiation Facilities of the High Flux Isotope Reactor  

Science Conference Proceedings (OSTI)

This report compares the available experimental neutron flux data in the High Flux Isotope Reactor (HFIR) to computational models of the HFIR loosely based on the experimental loading of cycle 400. Over the last several decades, many materials irradiation experiments have included fluence monitors, which were subsequently used to reconstruct a coarse-group energy-dependent flux spectrum. Experimental values for thermal and fast neutron flux in the flux trap about the midplane are found to be 1.78 ± 0.27 and 1.05 ± 0.06, in units of 10^15 n/cm^2-sec, respectively. The reactor physics code MCNP is used to calculate neutron flux in the HFIR at irradiation locations. The computational results are shown to correspond closely to experimental data for thermal and fast neutron flux, with calculated percent differences ranging from 0.55% to 13.20%.
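The quoted 0.55-13.20% range rests on a simple percent-difference comparison between calculated and measured flux. A minimal sketch follows; the convention of normalizing by the measured value is an assumption here (the report does not state its normalization), and the flux values in the example are hypothetical.

```python
def percent_difference(calculated: float, measured: float) -> float:
    """Unsigned percent difference of a calculated flux from the measured
    value (normalization by the measurement is an assumed convention)."""
    return 100.0 * abs(calculated - measured) / measured

# Hypothetical example in n/cm^2-sec: a calculated thermal flux of
# 1.70e15 against a measured 1.78e15 differs by about 4.5%.
diff = percent_difference(1.70e15, 1.78e15)
```

A fuller comparison would also propagate the measurement uncertainty (e.g. the ±0.27 on the thermal flux) to judge whether a given percent difference is significant.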

McDuffee, Joel Lee [ORNL]; Daly, Thomas F [ORNL]

2012-01-01T23:59:59.000Z

338

Infrastructure and Facilities Management | National Nuclear Security  

National Nuclear Security Administration (NNSA)

Infrastructure and Facilities Management | National Nuclear Security Administration. NNSA restores, rebuilds, and revitalizes the physical infrastructure of the

339

Contained Firing Facility | National Nuclear Security Administration  

National Nuclear Security Administration (NNSA)

Contained Firing Facility | National Nuclear Security Administration.

340

Supercomputing | Facilities | ORNL  

NLE Websites -- All DOE Office Websites (Extended Search)

Facilities and Capabilities: High Performance Storage and Archival Systems. To meet the needs of ORNL's diverse computational platforms, a shared parallel file system capable of meeting the performance and scalability requirements of these platforms has been successfully deployed. This shared file system, based on Lustre, Data Direct Networks (DDN), and InfiniBand technologies, is known as Spider and provides centralized access to petascale datasets from all major on-site computational platforms. Delivering more than 240 GB/s of aggregate performance,
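The quoted 240 GB/s aggregate rate translates directly into staging time for a dataset; a quick sketch of the arithmetic (the 1 PB dataset size is a hypothetical example, not a figure from the record):

```python
def transfer_time_s(dataset_bytes: float, bandwidth_bytes_per_s: float) -> float:
    """Ideal, contention-free time to move a dataset at a given aggregate rate."""
    return dataset_bytes / bandwidth_bytes_per_s

GB = 10**9
PB = 10**15

# Moving a hypothetical 1 PB dataset through Spider's quoted 240 GB/s aggregate.
seconds = transfer_time_s(1 * PB, 240 * GB)
print(round(seconds))            # seconds
print(round(seconds / 3600, 2))  # hours
```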

Note: This page contains sample records for the topic "leadership computing facility" from the National Library of EnergyBeta (NLEBeta).
While these samples are representative of the content of NLEBeta,
they are not comprehensive nor are they the most current set.
We encourage you to perform a real-time search of NLEBeta
to obtain the most current and comprehensive results.


341

W.J. Cody Associates | Argonne National Laboratory  

NLE Websites -- All DOE Office Websites (Extended Search)

computing systems. You will work in a research environment that includes the Argonne Leadership Computing Facility, with access to one of DOE's leadership-class computers. W....

342

PNNL: About PNNL - Laboratory Leadership  

NLE Websites -- All DOE Office Websites (Extended Search)

Laboratory Leadership. PNNL science and technology inspires and enables the world to live prosperously, safely, and securely. Our leaders turn this vision into action, guiding all of PNNL's efforts. They ensure that our multidisciplinary research teams perform safely, securely and sustainably while advancing science and technology to solve the nation's most pressing problems in energy, the environment and national security. Leaders: Mike Kluse, PNNL Laboratory Director - Mike Kluse establishes the vision and strategic direction of the Laboratory, which combines excellence in science and technology, management and operations, and community stewardship. Steve Ashby, Deputy Director of Science & Technology - Steve Ashby leads PNNL's strategic planning agenda and stewards efforts to

343

Leadership Institutional Change Principle | Department of Energy  

Energy.gov (U.S. Department of Energy (DOE)) Indexed Site

Leadership Institutional Change Principle. October 8, 2013 - 11:04am. For changing behavior among employees, leaders in Federal agencies should visibly communicate their own commitments to sustainability in the workplace. Such visible leadership will help achieve sustainability goals in the short term and continue to provide motivation for long-term benefits. Methods: Leaders should have a deep understanding of the politics of the organization and the ability to inspire and influence employees. Someone in a leadership position should demonstrate his or her direct involvement as the initial person to commit to sustainability instead of simply admiring others' efforts. Active leadership from managers as well as from other

344

Our Leadership | National Nuclear Security Administration  

NLE Websites -- All DOE Office Websites (Extended Search)

Our Leadership | National Nuclear Security Administration. The NNSA plays a critical role in ensuring the security of our Nation by maintaining the safety, security, and effectiveness of the U.S. nuclear weapons stockpile without nuclear testing; reducing the global danger from

345

Our Leadership | National Nuclear Security Administration  

National Nuclear Security Administration (NNSA)

Our Leadership | National Nuclear Security Administration. The NNSA plays a critical role in ensuring the security of our Nation by maintaining the safety, security, and effectiveness of the U.S. nuclear weapons stockpile without nuclear testing; reducing the global danger from

346

Supercomputing & Computation | More Science | ORNL  

NLE Websites -- All DOE Office Websites (Extended Search)

(Oak Ridge Automatic Computer and Logical Engine), the fastest computer in the world in 1954. Our experience in providing computational expertise and facilities to the U.S....

347

DOE Leadership & Career Development Programs | Department of...  

Energy.gov (U.S. Department of Energy (DOE)) Indexed Site

leadership occurs, and the broad global context of international trends and events that shape Government agendas. Since 1968, FEI has been known for the personal attention it gives...

348

Strategic Laboratory Leadership Program | Argonne National Laboratory  

NLE Websites -- All DOE Office Websites (Extended Search)

Erik Gottschalk (F); Devin Hodge (A); Jeff Chamberlain (A); Brad Ullrick (A); Bill Rainey (J). Image courtesy of Argonne National Laboratory. Strategic Laboratory Leadership...

349

ESD.801 Leadership Development, Fall 2004  

E-Print Network (OSTI)

Presents basic concepts in group dynamics and leadership. A structured set of outdoor experiences complements classroom activities. Restricted to entering students in the Technology and Policy Program.

Newman, Dava J.

350

NNSA Completes Successful Facilities and Infrastructure Recapitalization  

National Nuclear Security Administration (NNSA)

NNSA Completes Successful Facilities and Infrastructure Recapitalization Program | National Nuclear Security Administration. Press Release: NNSA Completes Successful Facilities and Infrastructure Recapitalization

351

Federal Leadership in High Performance and Sustainable Buildings...  

Energy.gov (U.S. Department of Energy (DOE)) Indexed Site

Federal Leadership in High Performance and Sustainable Buildings Memorandum of Understanding

352

Milwaukee Showcases Leadership in Energy Efficiency, Better Buildings...  

Energy.gov (U.S. Department of Energy (DOE)) Indexed Site

Milwaukee Showcases Leadership in Energy Efficiency, Better Buildings Challenge. November 6, 2013 -...

353

Federal Leadership in High Performance and Sustainable Buildings...  

Energy.gov (U.S. Department of Energy (DOE)) Indexed Site

Federal Leadership in High Performance and Sustainable Buildings Memorandum of Understanding...

354

Executive Order 13148-Greening the Government Through Leadership...  

NLE Websites -- All DOE Office Websites (Extended Search)

Executive Order 13148-Greening the Government Through Leadership in Environmental Management...

355

NNSA Defense Programs leadership meets with Sandia employees...  

NLE Websites -- All DOE Office Websites (Extended Search)

NNSA Defense Programs leadership meets with Sandia employees. NNSANews posted a photo: NNSA...

356

Executive Order 13514-Federal Leadership in Environmental, Energy...  

Energy.gov (U.S. Department of Energy (DOE)) Indexed Site

Executive Order 13514-Federal Leadership in Environmental, Energy, and Economic Performance. It is...

357

From Convection to Explosion: End-to-End Simulation of Type Ia Supernovae  

E-Print Network (OSTI)

INCITE award at the Oak Ridge Leadership Computing Facility (OLCF) at Oak Ridge National Laboratory

Bell, John B.

358

Computational

E-Print Network (OSTI)

Computational complexity. In 1965, the year Juris Hartmanis became Chair, ... "On the computational complexity of algorithms" in the Transactions of the American Mathematical Society. The paper ... the best talent to the field. Theoretical computer science was immediately broadened from automata theory

Keinan, Alon

359

Leadership Development Series: "Leadership In An Age Of Righteousness"  

Energy.gov (U.S. Department of Energy (DOE)) Indexed Site

Leadership Development Series: "Leadership In An Age Of Righteousness". January 16, 2014, 1:00PM to 3:00PM EST. Registration link: By e-mail, $0. Course type: Classroom/Auditorium, Video Cast and Teleconference. Course Location: DOE Headquarters, Forrestal Building, Washington, DC / Main Auditorium. Course Description: Dr. Jackson Nickerson, The Brookings Institution. With the pitched battle in Congress that led to the recent government shutdown and the growing debt-ceiling debate, can leadership lessons be drawn from this conflict to help leaders of organizations deal with similar internal battles? Dr. Jackson Nickerson from The Brookings Institution will be joining us at the Department of Energy to discuss the answer to this and

360

Large Scale Computing and Storage Requirements for Fusion Energy Sciences Research  

E-Print Network (OSTI)

...limit of available NERSC and OLCF computing on heterogeneous... ...perspective. Centers like the OLCF have imposed a paradigm... ...NTM: Neoclassical Tearing Mode; OLCF: Oak Ridge Leadership...

Gerber, Richard

2012-01-01T23:59:59.000Z



361

Nuclear Facilities Production Facilities  

National Nuclear Security Administration (NNSA)

Nuclear Security Administration under contract DE-AC04-94AL85000. SAND 2011-4582P. Gamma Irradiation Facility (GIF): The GIF provides test cells for...

362

Florida Atlantic University College of Engineering & Computer Science  

E-Print Network (OSTI)

· Product Development Project · Solar Water Heater Project · Sustainability Leadership for Engineers · Co · Leading Climate Change Mitigation Strategies · Transportation, Facilities, Energy, Water, Waste, Community

Fernandez, Eduardo

363

External Leadership Resources | Department of Energy  

Energy.gov (U.S. Department of Energy (DOE)) Indexed Site

External Leadership Resources. Here we provide specific links to resources, including training, guidance, blogs, newsletters, etc., for leadership development. Brainpickings - Brain Pickings is a human-powered discovery engine for interestingness, a subjective lens on what matters in the world and why, bringing you things you didn't know you were interested in - until you are. Department of Commerce - DOC has developed a succession strategy to: 1) Implement a leadership succession pipeline that links to the Department's mission critical occupations; 2) Manage a graduated series of competitive programs that identifies, selects and develops emerging leaders in engaging learning experiences; 3) Create a continuous learning environment that builds skills and enhances competencies throughout the

364

DOE LABORATORY, DOE USER FACILITY EMSL is a national scientific user facility located at Pacific  

E-Print Network (OSTI)

Northwest National Laboratory. EMSL's User Facility Program: How to Become a User - www.EMSL.PNNL.gov (announcements: emsl_announcements@lyris.pnl.gov). For general proposal inquiries, contact the EMSL User Support Office: emsl@pnnl.gov, 509-371-6003. Geochemistry. PNNL's mission is to deliver leadership and advancements in science, energy, national security

365

Enabling Green Energy and Propulsion Systems via Direct Noise Computation |  

NLE Websites -- All DOE Office Websites (Extended Search)

High-fidelity simulation of exhaust nozzle under installed configuration. Umesh Paliath, GE Global Research; Joe Insley, Argonne National Laboratory. Enabling Green Energy and Propulsion Systems via Direct Noise Computation. PI Name: Umesh Paliath. PI Email: paliath@ge.com. Institution: GE Global Research. Allocation Program: INCITE. Allocation Hours at ALCF: 105 Million. Year: 2013. Research Domain: Engineering. GE Global Research is using the Argonne Leadership Computing Facility (ALCF) to deliver significant improvements in efficiency and renewables yield, and lower emissions (noise), for advanced energy and propulsion systems. Understanding the fundamental physics of turbulent mixing has the potential to transform product design for components such as airfoils and

366

Your title  

Science Conference Proceedings (OSTI)

... Oak Ridge Leadership Computing Facility. ... Titan is the next generation Leadership Computing resource to be deployed at the OLCF in 2012 - 2013 ...

2012-06-25T23:59:59.000Z

367

Enabling event tracing at leadership-class scale through I/O forwarding middleware  

Science Conference Proceedings (OSTI)

Event tracing is an important tool for understanding the performance of parallel applications. As concurrency increases in leadership-class computing systems, the quantity of performance log data can overload the parallel file system, perturbing the ... Keywords: I/O forwarding, atomic append, event tracing
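The abstract above describes funneling trace events from many processes through I/O forwarding middleware with atomic append, so that one shared log replaces a flood of per-rank files. A minimal single-node sketch of the atomic-append idea, using POSIX O_APPEND via Python's os module (the log name and record format are hypothetical; this is an illustration of the concept, not the paper's middleware):

```python
import os
from multiprocessing import Process

LOG = "trace.log"  # hypothetical shared event log

def emit_events(rank: int, n: int) -> None:
    # O_APPEND makes each write land at the current end of file atomically,
    # so concurrent writers never overwrite each other's records.
    fd = os.open(LOG, os.O_WRONLY | os.O_CREAT | os.O_APPEND, 0o644)
    try:
        for i in range(n):
            record = f"rank={rank} event={i}\n".encode()
            os.write(fd, record)  # one record per write keeps lines intact
    finally:
        os.close(fd)

if __name__ == "__main__":
    # Four concurrent "ranks" appending to one shared log.
    workers = [Process(target=emit_events, args=(r, 100)) for r in range(4)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
    with open(LOG) as f:
        print(len(f.readlines()))  # total records appended by all workers
```

On a parallel file system the same pattern depends on the file system honoring O_APPEND semantics across clients, which is one reason the paper routes appends through forwarding middleware instead of having every rank hit the file system directly.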

Thomas Ilsche; Joseph Schuchart; Jason Cope; Dries Kimpe; Terry Jones; Andreas Knüpfer; Kamil Iskra; Robert Ross; Wolfgang E. Nagel; Stephen Poole

2012-06-01T23:59:59.000Z

368

Public Health Leadership In The 21st Century  

E-Print Network (OSTI)

Leadership in public health requires stretching the mind and soul in almost unimaginable ways. Living

Koh, Howard K.

2006-01-07T23:59:59.000Z

369

Clean Cities: Natural Gas Vehicle Technology Forum Leadership Committee  

NLE Websites -- All DOE Office Websites (Extended Search)

Clean Cities: Natural Gas Vehicle Technology Forum Leadership Committee Meeting. Goals & Accomplishments | Partnerships | National Clean Fleets Partnership

370

Leadership Development Resource Center (LDRC) | Department of Energy  

Energy.gov (U.S. Department of Energy (DOE)) Indexed Site

Leadership Development Resource Center (LDRC). Building an organization's leadership bench strength is always of critical importance. The results of successful leadership development will always manifest in helping to realize the greatest potential for mission accomplishment. Government and private industry organizations alike depend on their leaders to guide them through change, implement their strategic plans successfully and prepare for future competition. Today, effective leadership is commonly viewed as being central to organizational success, and more importance is placed on leadership development than ever. Accordingly, we have implemented the Leadership Development Resource Center (LDRC). This will

371

Summary Site Environmental Report for Calendar Year 2009  

E-Print Network (OSTI)

the Advanced Photon Source, the Center for Nanoscale Materials, and the Argonne Leadership Computing Facility. These facilities are used by scientists and engineers from around the world to conduct research on material-based problems. Argonne Leadership Computing Facility: The Argonne Leadership

Kemner, Ken

372

Towards robotics leadership: an analysis of leadership characteristics and the roles robots will inherit in future human society  

Science Conference Proceedings (OSTI)

This paper aims to present the idea of robotics leadership. By investigating leadership definitions and identifying domains where humans have failed to lead, this paper proposes how robots can step in to fill various leadership positions. This is exemplified ... Keywords: lovotics, robotics leadership

Hooman Aghaebrahimi Samani; Jeffrey Tzu Kwan Valino Koh; Elham Saadatian; Doros Polydorou

2012-03-01T23:59:59.000Z

373

High Explosives Application Facility | National Nuclear Security  

National Nuclear Security Administration (NNSA)

High Explosives Application Facility | National Nuclear Security Administration.

374

Leadership  

NLE Websites -- All DOE Office Websites (Extended Search)

EV Everywhere Challenge: Setting the Technical Targets. Jacob Ward, Vehicle Technologies Senior Analyst. August 23, 2012, Campus Center, University of Massachusetts - Boston, Boston, Massachusetts. The EV Everywhere Grand Challenge: a clean energy grand challenge to make electric-powered vehicles as affordable and convenient as gasoline-powered vehicles for the average American family within a decade. For "EV Everywhere" analysis, three scenarios: 1. PHEV40 - reduces battery size while removing range issues, but involves the higher cost of two powertrains. 2. AEV100 - minimizes vehicle purchase cost, but introduces range/vehicle use/infrastructure tradeoffs. 3. AEV300 - helps to address range issues, but a large battery leads to high vehicle cost

375

Leadership  

Energy.gov (U.S. Department of Energy (DOE)) Indexed Site

Anticipating and Responding to System Disturbances in a Self-Healing Manner. June 20, 2008, Washington DC. Addressing System Disturbances - Automated Prevention, Containment, and Restoration. Major Findings: Tweaking the Characteristic. Prevention includes real-time monitoring and other means to anticipate and avoid problems. Automated does not mean total, but cost-effective and appropriate levels of autonomy. Containment and restoration as quick as needed. Other Key Points: Strategies must address both large-scale catastrophes and smaller-scale events. Smart grid includes wide-area coverage from generation to consumption. Metrics needed for the design phase as well as for the build phase and operate phase (values). Specific targets for some metrics will vary by the grid topology

376

Leadership  

Energy.gov (U.S. Department of Energy (DOE)) Indexed Site

Optimizing Asset Utilization and Operating Efficiently. June 20, 2008, Washington DC. Major Findings/Caveats: Optimizing asset utilization and operating efficiently depends on proper integration of technologies with business processes and associated IT. Build metrics, by definition, need to be updated regularly to reflect new technology. Build metrics should not be technology prescriptive or result in narrowing technology options for Smart Grid (should be as "technology agnostic" as possible). Build metrics need to differentiate between statistics measuring the number of deployed widgets/data versus having the widgets/data available for use. Focused value metrics are probably more critical, relevant, and meaningful than "build"

377

Leadership  

Energy.gov (U.S. Department of Energy (DOE)) Indexed Site

Major Findings / Caveats: Design options to optimize system resources. Provide customers with options, capability and information to manage energy usage. Diversity of options may be unique to regions, utility, delivery point. Design rates to reflect appropriate economic signals. Informed rather than "active": let customers make their own economic decisions and manage their use and bills. Feedback loop to improve coordination between utility and consumers, to minimize customer disruptions and improve customer service. Enabling Active Participation by Consumers. Assumptions: Smart Grid investments are appropriate and will receive regulators' and legislators' support when the value exceeds consumer costs. Too early to presume targets for metrics

378

Leadership  

Energy.gov (U.S. Department of Energy (DOE)) Indexed Site

resources Provide customers with options, capability and information to manage energy usage Diversity of options may be unique to regions, utility, delivery point Design rates to...

379

Leadership  

NLE Websites -- All DOE Office Websites (Extended Search)

EV Everywhere Challenge: Setting the Technical Targets Jacob Ward, Vehicle Technologies Senior Analyst August 23, 2012 Campus Center, University of Massachusetts - Boston Boston,...

380

Magellan: experiences from a Science Cloud  

E-Print Network (OSTI)

...Leadership Computing Facility (ALCF) and the National Energy... ...Computing Facility (ALCF) and the National Energy Research...

Ramakrishnan, Lavanya

2013-01-01T23:59:59.000Z



381

Principal leadership for technology integration: a study of principal technology leadership.  

E-Print Network (OSTI)

Today's school systems require building principals to be not only managers of information and instructional practice but also leaders. This leadership is a key component (more)

Kozloski, Kristen C.

2006-01-01T23:59:59.000Z

382

LOFT facility and test program  

SciTech Connect

The Loss-of-Fluid Test (LOFT) test facility, program objectives, and the experiments planned are described. The LOFT facility is related to the smaller Semiscale facility and the larger commercial pressurized water reactors. The fact that LOFT is a computer model assessment tool rather than a demonstration test is emphasized. Various types of reactor safety experiments planned through 1983 are presented.

McPherson, G.D.

1979-11-01T23:59:59.000Z

383

PERMITTING LEADERSHIP IN THE UNITED STATES  

SciTech Connect

In accordance with the Southern States Energy Board (SSEB) proposal, as incorporated into NETL/DE-FC26-97FT34199, the objective of this agreement is to streamline the environmental technology permitting process site-to-site, state-to-state, and industry-to-industry to achieve remediation and waste processing faster, better and cheaper. SSEB is working with member Governors, legislators and regulators to build consensus on streamlining the permitting process for new and innovative technologies for addressing the legacy of environmental problems from 50 years of weapons research, development and production. This report reviews mechanisms whereby industry consortiums and the Department of Energy (DOE) have been working with State regulators and other officials in technology deployment decisions within the DOE complex. The historic development of relationships with State regulators is reviewed and the current nature of the relationships examined. The report contains observations from internal DOE reviews as well as recommendations from the General Accounting Office (GAO) and other external organizations. The report discusses reorganization initiatives leading up to a DOE Top-to-Bottom review of the Environmental Management (EM) Program and highlights points of consideration for maintaining effective linkages with State regulators. It notes how the proposed changes will place new demands upon the National Energy Technology Laboratory (NETL) and how NETL can leverage its resources by refocusing existing EM efforts specifically to states that have DOE facilities within their borders (host-states). Finally, the report discusses how SSEB's Permitting Leadership in the United States (PLUS) program can provide the foundation for elements of NETL's technical assistance program that are delivered to regulators and other decision-makers in host-states.
As a regional compact commission, SSEB provides important direct linkages to regulators and stakeholders who need technical assistance to evaluate DOE's cleanup plans. In addition, the PLUS program has facilitated the involvement of key regulators from host-states beyond the Southern region.

Ken Nemeth

2002-09-01T23:59:59.000Z

384

NREL: Sustainable NREL - Integrated Biorefinery Research Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

Integrated Biorefinery Research Facility. The Integrated Biorefinery Research Facility (IBRF) incorporates a large number of energy efficiency and sustainability practices into its cutting-edge design. This facility received a Leadership in Energy and Environmental Design (LEED®) Gold-level certification from the U.S. Green Building Council. It supports a variety of advanced biofuels projects and enables researchers and industry partners to develop, test, evaluate, and demonstrate processes for the production of bio-based products and fuels. Fast Facts - Cost: $33.5M; Square feet: 27,000; Occupants: 32; Labs/Equipment: high-bay biochemical conversion pilot plant that

385

Opportunities for discovery: Theory and computation in Basic Energy Sciences  

SciTech Connect

New scientific frontiers, recent advances in theory, and rapid increases in computational capabilities have created compelling opportunities for theory and computation to advance the scientific mission of the Office of Basic Energy Sciences (BES). The prospects for success in the experimental programs of BES will be enhanced by pursuing these opportunities. This report makes the case for an expanded research program in theory and computation in BES. The Subcommittee on Theory and Computation of the Basic Energy Sciences Advisory Committee was charged with identifying current and emerging challenges and opportunities for theoretical research within the scientific mission of BES, paying particular attention to how computing will be employed to enable that research. A primary purpose of the Subcommittee was to identify those investments that are necessary to ensure that theoretical research will have maximum impact in the areas of importance to BES, and to assure that BES researchers will be able to exploit the entire spectrum of computational tools, including leadership-class computing facilities. The Subcommittee's Findings and Recommendations are presented in Section VII of this report.

Harmon, Bruce; Kirby, Kate; McCurdy, C. William

2005-01-11T23:59:59.000Z

386

Research Facilities  

NLE Websites -- All DOE Office Websites (Extended Search)

FLEX lab image, windows testing lab, scientist inside a lab. Research Facilities: EETD maintains advanced research and test facilities for buildings, energy technologies, air...

387

Leadership Development Program Catalog | Department of Energy  

Energy.gov (U.S. Department of Energy (DOE)) Indexed Site

Leadership Development Program Catalog A well-trained workforce is vital to the long-term effectiveness of the Federal Government. As such, all Federal employees, particularly those who serve or hope to serve in senior management positions, are encouraged to take advantage of opportunities to enhance their professional skills and develop the competencies needed for success as leaders. The Leadership Development Program Catalog by ECQ is a comprehensive list of training opportunities intended to help all Federal leaders grow in the five Executive Core Qualifications (ECQs) and Fundamental Competencies. The resources listed will facilitate your growth and development as both a Federal employee and as a person, and will be helpful to all levels of

388

ESD.801 Leadership Development, Fall 2005  

E-Print Network (OSTI)

This seminar meets six times during the semester. Students work in a seminar environment to develop leadership capabilities. An initial Outward Bound experience builds trust, teamwork and communications. Readings and ...

Newman, Dava

389

Groundbreaking at National Ignition Facility | National Nuclear Security  

NLE Websites -- All DOE Office Websites (Extended Search)

Groundbreaking at National Ignition Facility | National Nuclear Security Administration. May 29, 1997, Livermore, CA.

390

Facility Microgrids  

Science Conference Proceedings (OSTI)

Microgrids are receiving a considerable interest from the power industry, partly because their business and technical structure shows promise as a means of taking full advantage of distributed generation. This report investigates three issues associated with facility microgrids: (1) Multiple-distributed generation facility microgrids' unintentional islanding protection, (2) Facility microgrids' response to bulk grid disturbances, and (3) Facility microgrids' intentional islanding.

Ye, Z.; Walling, R.; Miller, N.; Du, P.; Nelson, K.

2005-05-01T23:59:59.000Z

391

Global Energy Leadership Fellows Research and Leadership for Solving the World's Energy Problems  

E-Print Network (OSTI)

Global Energy Leadership Fellows Research and Leadership for Solving the World's Energy Problems on solving the world's most challenging energy problems. The fellowships are for one-year (with opportunity Initiative for Renewable Energy and the Environment Institute on the Environment, University of Minnesota

de Weck, Olivier L.

392

Corporate Information & Computing Services  

E-Print Network (OSTI)

Corporate Information & Computing Services High Performance Computing Report March 2008 Author The University of Sheffield's High Performance Computing (HPC) facility is provided by CiCS. It consists of both Graduate Students and Staff.

Martin, Stephen John

393

Public Reading Facilities | National Nuclear Security Administration  

National Nuclear Security Administration (NNSA)

Public Reading Facilities | National Nuclear Security Administration. The FOIA and E-FOIA require that specific types of records as well as

394

TransForum v4n4 - TTRDC Facilities  

NLE Websites -- All DOE Office Websites (Extended Search)

R&D Laboratory: Engine Research Facility, Heavy-Duty Truck Engine Test Cell, High-Performance Computing Research Facility, Tribology Laboratory, Selective Continuous Recycling of...

395

Ohio University Voinovich School of Leadership and Public Affairs | Open  

Open Energy Info (EERE)

Ohio University Voinovich School of Leadership and Public Affairs is a research institution based in Athens, Ohio. Address: The Voinovich School of Leadership and Public Affairs, Building 21, The Ridges, 1 Ohio University Place, Athens, Ohio 45701. Website: http://www.ohio.edu/voinovichs

396

Leadership in small online collaborative learning groups: a distributed perspective  

Science Conference Proceedings (OSTI)

We examined emergent leadership in small online collaborative learning groups of pre-service math and science teachers. Groups worked online to design interdisciplinary instructional units. We employed a distributed leadership framework (Spillane, 2007) ...

Julia Gressick; Sharon Derry

2008-06-01T23:59:59.000Z

397

KCP Field Office hosts leadership meeting | National Nuclear...  

National Nuclear Security Administration (NNSA)

KCP Field Office hosts leadership meeting Posted on August 14, 2013 at 4:39... managers and headquarters' senior management team on Aug. 6-8.

398

Advanced Biofuels Leadership Conference (ABLC) Next 2013 | Department...  

Energy.gov (U.S. Department of Energy (DOE)) Indexed Site

Advanced Biofuels Leadership Conference (ABLC) Next 2013 October 9, 2013 12:00PM EDT to October 11, 2013 12:00PM EDT...

399

Newest Los Alamos facility receives LEED® Gold certification  

NLE Websites -- All DOE Office Websites (Extended Search)

Newest Los Alamos facility receives LEED® Gold certification The Radiological Laboratory Utility Office Building is first to achieve both Leadership in Energy and Environmental Design (LEED) status and LEED Gold certification. June 13, 2012. Contact: Kim Powell, Communications Office, (505) 695-6159. LOS ALAMOS, New Mexico, June 13, 2012-Los Alamos National Laboratory's newest facility, the Radiological Laboratory Utility Office Building (RLUOB), is also its first to achieve both Leadership in Energy and Environmental Design (LEED) status and LEED Gold certification from the U.S. Green Building Council (USGBC).

400

Application: Facilities  

Science Conference Proceedings (OSTI)

... Option.. Papavergos, PG; 1991. Halon 1301 Use in Oil and Gas Production Facilities: Alaska's North Slope.. Ulmer, PE; 1991. ...

2011-12-22T23:59:59.000Z

401

Business Games for Leadership Development: A Systematic Review  

Science Conference Proceedings (OSTI)

Leadership development poses great challenges to modern organizations. One possible method to develop leaders is the use of experiential techniques based on business games. The objective of this article was to identify, based on literature, business ... Keywords: business games, experiential learning, games, leadership, leadership development, simulations, simulators, systematic review

Mauricio Capobianco Lopes, Francisco A. P. Fialho, Cristiano J. C. A. Cunha, Sofia Ins Niveiros

2013-08-01T23:59:59.000Z

402

Energy Leadership Forum | Department of Energy  

Energy.gov (U.S. Department of Energy (DOE)) Indexed Site

Energy Leadership Forum January 20, 2006 - 11:04am Remarks Prepared for Energy Secretary Samuel Bodman: I want to thank all of you here for your participation. I understand you have had a very productive and informative conference so far, and I look forward to participating in one of the sessions in a few minutes. What I mainly want to do there is listen. That, in fact, is the major purpose of this conference: not to come here and tell you what needs to be done, but to hear from you about how we can do better next time. Hurricanes Katrina and Rita emphasized how important it is to anticipate and plan for catastrophic events, but in the case of energy supplies, we must also remember that a disruption can occur for a variety of reasons

404

A method for the assessment of site-specific economic impacts of commercial and industrial biomass energy facilities. A handbook and computer model  

DOE Green Energy (OSTI)

A handbook on "A Method for the Assessment of Site-specific Economic Impacts of Industrial and Commercial Biomass Energy Facilities" has been prepared by Resource Systems Group Inc. under contract to the Southeastern Regional Biomass Energy Program (SERBEP). The handbook includes a user-friendly Lotus 123 spreadsheet which calculates the economic impacts of biomass energy facilities. The analysis uses a hybrid approach, combining direct site-specific data provided by the user with indirect impact multipliers from the US Forest Service IMPLAN input/output model for each state. Direct economic impacts are determined primarily from site-specific data and indirect impacts are determined from the IMPLAN multipliers. The economic impacts are given in terms of income, employment, and state and federal taxes generated directly by the specific facility and by the indirect economic activity associated with each project. A worksheet is provided which guides the user in identifying and entering the appropriate financial data on the plant to be evaluated. The IMPLAN multipliers for each state are included in a database within the program. The multipliers are applied automatically after the user has entered the site-specific data and the state in which the facility is located. Output from the analysis includes a summary of direct and indirect income, employment, and taxes. Case studies of large and small wood energy facilities and an ethanol plant are provided as examples to demonstrate the method. Although the handbook and program are intended for use by those with no previous experience in economic impact analysis, suggestions are given for the more experienced user who may wish to modify the analysis techniques.
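The hybrid approach described in the abstract (direct impacts taken from site-specific data, indirect impacts derived from state-level multipliers) can be sketched in a few lines. The function name and the multiplier values below are illustrative assumptions, not the handbook's actual IMPLAN figures:

```python
# Sketch of the handbook's hybrid method: direct impacts come from
# site-specific data supplied by the user; indirect impacts are derived
# from state-level IMPLAN-style output multipliers. The multiplier values
# here are illustrative placeholders, not real IMPLAN data.

def economic_impacts(direct_income, direct_jobs,
                     income_multiplier, employment_multiplier):
    """Return direct, indirect, and total income/employment impacts."""
    total_income = direct_income * income_multiplier
    total_jobs = direct_jobs * employment_multiplier
    return {
        "direct_income": direct_income,
        "indirect_income": total_income - direct_income,
        "total_income": total_income,
        "direct_jobs": direct_jobs,
        "indirect_jobs": total_jobs - direct_jobs,
        "total_jobs": total_jobs,
    }

# Example: a wood energy facility with $2.0M of direct income and 25 direct
# jobs, in a state with assumed multipliers of 1.6 (income) and 1.4 (jobs).
impacts = economic_impacts(2_000_000, 25, 1.6, 1.4)
print(impacts["total_income"])  # about $3.2M once indirect activity is added
print(impacts["total_jobs"])    # about 35 jobs
```

The spreadsheet applies the same idea per state: the user enters site data, the state's stored multipliers are looked up, and totals for income, employment, and taxes fall out of the multiplication.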

Not Available

1994-10-01T23:59:59.000Z

405

November 13 - 15, 2012 HSS Work Group Leadership Meeting Summary - Work Force Retention  

Energy.gov (U.S. Department of Energy (DOE)) Indexed Site

Work Force Retention Work Group Co-Lead Telecom November 16, 2012 DRAFT Discussion Overview Purpose: This HSS Focus Group Work Group telecom was held with the Work Group Co-Leads to discuss change elements and strategic direction to support accelerated efforts to advance progress, productivity, and performance within each of the Work Groups. Although current roles within all of the Work Groups and Focus Group efforts remain the same, centralized leadership and oversight by two representatives of the Departmental Representative to the Defense Nuclear Facilities Safety Board are established. 1. Leadership Transition * Co-Leads will continue to provide technical functions * Functions of the Focus Group Program will remain the same. [Lily/Stephanie]

406

November 13 - 15, 2012 HSS Work Group Leadership Meeting Summary - Strategic Initiatives  

Energy.gov (U.S. Department of Energy (DOE)) Indexed Site

Strategic Initiatives Work Group Co-Lead Telecom November 13, 2012 DRAFT Discussion Overview Purpose: This HSS Focus Group Work Group telecom was held with the Work Group Co-Leads to discuss change elements and strategic direction to support accelerated efforts to advance progress, productivity, and performance within each of the Work Groups. Although current roles within all of the Work Groups and Focus Group efforts remain the same, centralized leadership and oversight by two representatives of the Departmental Representative to the Defense Nuclear Facilities Safety Board are established. 1. Leadership Transition * Co-Leads will continue to provide technical functions * Functions of the Focus Group Program will remain the same. [Lily/Stephanie]

407

November 13 - 15, 2012 HSS Work Group Leadership Meeting Summary - 851 Implementation  

Energy.gov (U.S. Department of Energy (DOE)) Indexed Site

851 Implementation Work Group Co-Lead Telecom November 13, 2012 DRAFT Discussion Overview Purpose: This HSS Focus Group Work Group telecom was held with the Work Group Co-Leads to discuss change elements and strategic direction to support accelerated efforts to advance progress, productivity, and performance within each of the Work Groups. Although current roles within all of the Work Groups and Focus Group efforts remain the same, centralized leadership and oversight by two representatives of the Departmental Representative to the Defense Nuclear Facilities Safety Board are established. 1. Leadership Transition * Co-Leads will continue to provide technical functions * Functions of the Focus Group Program will remain the same. [Lily/Stephanie]

408

Partner of the Year profiles in leadership | ENERGY STAR Buildings & Plants  

NLE Websites -- All DOE Office Websites (Extended Search)

Partner of the Year profiles in leadership. In this section: Get started with ENERGY STAR; Make the business case; Build an energy management program; Measure, track, and benchmark; Improve energy performance; Industrial service and product providers; Earn recognition; ENERGY STAR Partner of the Year Award

409

Canada-Sustainable Communities Leadership Academy (SCLA) | Open Energy  

Open Energy Info (EERE)

Canada-Sustainable Communities Leadership Academy (SCLA). Agency/Company/Organization: Institute for Sustainable Communities (ISC). Partner: Smart Growth America, Housing and Urban Development (HUD). Sector: Climate, Energy. Focus Area: Buildings, Economic Development, Energy Efficiency, Food Supply, Greenhouse Gas, Land Use, People and Policy, Transportation, Water Conservation. Topics: Adaptation, Finance, Implementation, Low emission development planning, Policies/deployment programs. Program Start: 2008. Program End: 2015. Country: Canada (Northern America). References: Sustainable Communities Leadership Academy[1]

410

Sustainable Communities Leadership Academy (SCLA) | Open Energy Information  

Open Energy Info (EERE)

Sustainable Communities Leadership Academy (SCLA). Agency/Company/Organization: Institute for Sustainable Communities (ISC). Partner: Smart Growth America, Housing and Urban Development (HUD). Sector: Climate, Energy. Focus Area: Buildings, Economic Development, Energy Efficiency, Food Supply, Greenhouse Gas, Land Use, People and Policy, Transportation, Water Conservation. Topics: Adaptation, Finance, Implementation, Low emission development planning, Policies/deployment programs. Program Start: 2008. Program End: 2015. Country: Canada, United States (Northern America). References: Sustainable Communities Leadership Academy[1] Overview

411

DOE Leadership & Career Development Programs | Department of Energy  

Energy.gov (U.S. Department of Energy (DOE)) Indexed Site

DOE Leadership & Career Development Programs Senior Executive Service Candidate Development Program (SESCDP): This program consists of four Senior Executive Service Development Seminars designed to help position participants for selection into the SES. Each seminar reflects different key components of OPM's Executive Core Qualifications (ECQs). For more information, please contact David Rosenmarkle. Federal Executive Institute (FEI): At FEI, you will explore and build your knowledge and skills in personal leadership, transforming public organizations, the policy framework in which Government leadership occurs, and the broad global context of international trends and events that shape Government agendas. Since 1968,

412

United States-Sustainable Communities Leadership Academy (SCLA) | Open  

Open Energy Info (EERE)

United States-Sustainable Communities Leadership Academy (SCLA). Agency/Company/Organization: Institute for Sustainable Communities (ISC). Partner: Smart Growth America, Housing and Urban Development (HUD). Sector: Climate, Energy. Focus Area: Buildings, Economic Development, Energy Efficiency, Food Supply, Greenhouse Gas, Land Use, People and Policy, Transportation, Water Conservation. Topics: Adaptation, Finance, Implementation, Low emission development planning, Policies/deployment programs. Program Start: 2008. Program End: 2015. Country: United States (Northern America). References: Sustainable Communities Leadership Academy[1]

413

ARM - Facility News Article  

NLE Websites -- All DOE Office Websites (Extended Search)

August 15, 2010 [Facility News] Micropulse Lidars Get Boost from Recovery Act Shown here during installation on the aft deck of the RV Connecticut, the upgraded MPL includes a sleek new computer that can fit into smaller spaces. The laser window at the top is covered by a cone until the instrument is turned on. Through funding from the American Recovery and Reinvestment Act of 2009, ARM is upgrading the micropulse lidars (MPL) throughout the user facility. Similar to a radar, the MPL sends pulses of energy into the atmosphere.

414

SEU Test Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

The SEU Test Facility 1. Introduction The uninterrupted and progressive miniaturization of microelectronic devices, while resulting in more powerful computers, has also made these computers more susceptible to the effects of ionizing radiation. This is of particular concern for space applications due to the radiation fields encountered outside the protective terrestrial atmosphere and magnetosphere. Starting in 1987, a coalition of US government agencies (NSA, NASA, NRL, and USASSDC) collaborated with BNL to develop a powerful and user-friendly test facility for investigating space-radiation effects on micro-electronic devices[1]. The main type of effects studied are the so-called Single Event Upsets (SEUs), where ionization caused by the passage of

415

ARM - Facility News Article  

NLE Websites -- All DOE Office Websites (Extended Search)

W-Band Cloud Radar Added to ARM Mobile Facility in Africa Most of the WACR is mounted on top of one of the AMF shelters. The WACR computer and chiller (used to keep the WACR cool in temperatures up to 47 degrees C) are located in the shelter below the radar. A W-band ARM Cloud Radar (WACR) recently joined the suite of baseline capabilities offered by the ARM Mobile Facility (AMF). The term "W-band" refers to the specific radio frequency range of this radar, which is a 95 gigahertz pulse Doppler zenith pointing radar, providing profiles of cloud

416

Software and Libraries on Eureka/Gadzooks | Argonne Leadership Computing  

NLE Websites -- All DOE Office Websites (Extended Search)

Software and Libraries on Eureka/Gadzooks: Software and softenv. Many packages are installed in /soft/apps. If a binary you're looking for isn't already in your PATH, check /soft/apps. Let us know if you find any software packages that you need that are not currently installed. softenv is installed. We are using a newer version that uses ~/.softenvrc instead of ~/.soft. Keep in mind that softenv is sensitive to the order of items in your .softenvrc file. If you add items but get unexpected results, try changing the order; in particular, you may want "@default" to be the last item in .softenvrc. The command "softenv" will output a list of keys available on the system you're logged in to. To see all available keys in the softenv database, use
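As an illustration of the ordering caveat in that record, a minimal ~/.softenvrc might look like the following. This is a sketch only; +intel-compilers and +mpich2 are hypothetical key names, not keys guaranteed to exist in the ALCF softenv database:

```
# ~/.softenvrc -- softenv is sensitive to the order of these entries.
# The keys below are illustrative placeholders, not actual ALCF key names.
+intel-compilers
+mpich2
@default    # per the note above, keep @default as the last item
```

Running the "softenv" command on the target system lists the keys actually available there, so placeholder keys like these can be replaced with real ones.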

417

Overview of How to Compile and Link | Argonne Leadership Computing...  

NLE Websites -- All DOE Office Websites (Extended Search)

This version has the fine-grained MPICH locking. Error checking and assertions are disabled. This setting can provide a substantial performance improvement when an application...

418

Block and Job State Documentation | Argonne Leadership Computing...  

NLE Websites -- All DOE Office Websites (Extended Search)

Intrepid/Challenger/Surveyor Decommissioning of BG/P Systems and Resources Introducing Challenger Quick Reference Guide System Overview BG/P Driver Information Prior BG/P Driver...

419

Performance FAQs on BG/Q Systems | Argonne Leadership Computing...  

NLE Websites -- All DOE Office Websites (Extended Search)

Blue Gene/Q Versus Blue Gene/P Mira/Cetus/Vesta Intrepid/Challenger/Surveyor Decommissioning of BG/P Systems and Resources Introducing Challenger Quick Reference Guide System...

420

Mira: An Engine for Discovery | Argonne Leadership Computing...  

NLE Websites -- All DOE Office Websites (Extended Search)

Mira: An Engine for Discovery September 27, 2013 Every day, researchers around the country are working to understand mysterious phenomena, from the origins...

421

System Overview for BG/P Systems | Argonne Leadership Computing...  

NLE Websites -- All DOE Office Websites (Extended Search)

Project Allocations, Balance, and Job Charges Use the cbank command to query allocations. Please refer to this page: Query Allocations with cbank Large File...

422

ARM - Facility News Article  

NLE Websites -- All DOE Office Websites (Extended Search)

New Backup Software Improves Processing, Reliability at Data Management Facility Real-time data from all three of the ARM Climate Research Facility sites (North Slope of Alaska, Southern Great Plains, and Tropical Western Pacific) are collected and processed at the ARM Climate Research Facility Data Management Facility (DMF) each day. Processing involves the application of algorithms for performing simple averaging routines, qualitative comparisons, or more complicated experimental calculations. With continual advances in computer technology, keeping up with the volume and pace of incoming data is a daunting challenge. And because the remote sites do not provide backups, reliable backups of these data at the DMF are critical. In addition, significant numbers of value-added datasets are

423

MAX Fluid Dynamics facility  

NLE Websites -- All DOE Office Websites (Extended Search)

MAX Fluid Dynamics facility Providing high-resolution data for development of computational tools that model fluid flow and heat transfer within complex systems such as the core of a nuclear reactor. Hot and cold air jets are mixed within a glass tank while laser-based anemometers and a high-speed infrared camera characterize fluid flow and heat transfer behavior.

424

Leadership practitioner Brief Brandeis International Business School  

E-Print Network (OSTI)

energy businesses are therefore subject to different economic forces and trends. This complex, rapidly price--clean energy deserves and needs meaningful government help, even if it entails short for Global Business Leadership Clean Energy in Perspective The Perlmutter Institute for Global Business

Snider, Barry B.

425

User Facilities  

NLE Websites -- All DOE Office Websites (Extended Search)

Lawrence Berkeley National Laboratory's National User Facilities are available for cooperative research with institutions and the private sector worldwide. The Environmental...

426

C-AD Experimental Support & Facilities Division  

NLE Websites -- All DOE Office Websites (Extended Search)

Software Quark Gluon Spectroscopy RHIC Computing Facility Places of Interest Amtrak Conference Room Scheduler Government Agencies Internet Pilot to Physics LI Shore Info...

427

Status of the National Ignition Facility Project, IG-0598 | Department...  

Energy.gov (U.S. Department of Energy (DOE)) Indexed Site

conventional facility; laser system; target experimental system; integrated computers and controls; assembly, installation, and refurbishment equipment; and utilities. To...

428

ARM - Facility News Article  

NLE Websites -- All DOE Office Websites (Extended Search)

Upgrade to Millimeter Wave Cloud Radar Increases Volume of Data Collection In mid-April, hardware and software upgrades to the millimeter wave cloud radar (MMCR) at the ARM Climate Research Facility's North Slope of Alaska (NSA) were completed. Hardware upgrades included replacing the OS/2 and Solaris computers with two Windows 2000 computers. One of these computers is for the MMCR radar. It now has a new digital signal processing board that allows much more efficient processing of the radar return signals, resulting in higher temporal resolution. The receiver was also upgraded from a 12-bit to a 14-bit analog-to-digital converter. Software on the MMCR radar computer was upgraded to run a modified version of Vaisala's LAP-XM software for controlling and acquiring the radar data. The other computer,

429

Mobile Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

Mobile Facility AMF Information Science Architecture Baseline Instruments AMF1 AMF2 AMF3 Data Operations AMF Fact Sheet Images Contacts AMF Deployments Hyytiälä, Finland, 2014 Manacapuru, Brazil, 2014 Oliktok Point, Alaska, 2013 Los Angeles, California, to Honolulu, Hawaii, 2012 Cape Cod, Massachusetts, 2012 Gan Island, Maldives, 2011 Ganges Valley, India, 2011 Steamboat Springs, Colorado, 2010 Graciosa Island, Azores, 2009-2010 Shouxian, China, 2008 Black Forest, Germany, 2007 Niamey, Niger, 2006 Point Reyes, California, 2005 Mobile Facilities Pictured here in Gan, the second mobile facility is configured in a standard layout. To explore science questions beyond those addressed by ARM's fixed sites at

430

NSF UChicago: Tier 2 and Tier 3 1 Tier 2 and Tier 3 Facilities  

E-Print Network (OSTI)

generally through Open Science Grid. We have deployed a high performance computing facility (the ATLAS

431

MPA-11 Facilities  

NLE Websites -- All DOE Office Websites (Extended Search)

Our Cleanroom Facility is available for use by LANL researchers. MPA-11 Facilities: Fuel cell testing, acoustics laboratories, and a wide spectrum of characterization equipment are essential to the research conducted in our group. Many other multi-disciplinary staff and experimental/computational capabilities throughout Los Alamos National Laboratory are available to support our research. Access to enabling capabilities for the Fuel Cell Program is facilitated by the Laboratory's Institute for Hydrogen and Fuel Cell Research. Fuel Cell Testing: Experimental equipment that is essential to our fuel cell efforts is housed in 24 laboratories at the Los Alamos National Laboratory. A partial list of

432

Facility Representative Program: 2002 Facility Representative...  

NLE Websites -- All DOE Office Websites (Extended Search)

McLaughlin, LANL Root Cause Analysis Course - Marke Lane; Ken Albers, Honeywell Kansas City Plant. 10:30 a.m. Leadership Development Panel Moderator: Emil Morrow, Senior Technical...

433

Director of the National Ignition Facility, Lawrence Livermore National  

NLE Websites -- All DOE Office Websites (Extended Search)

Director of the National Ignition Facility, Lawrence Livermore National Laboratory | National Nuclear Security Administration. In the Spotlight: Edward Moses, Director of the National Ignition Facility, Lawrence Livermore National Laboratory

434

Green Building Standards for State Facilities | Department of Energy  

Energy.gov (U.S. Department of Energy (DOE)) Indexed Site

Green Building Standards for State Facilities. Eligibility: State Government. State: Arkansas. Program Type: Energy Standards for Public Buildings. Provider: Arkansas Economic Development Commission. Effective July 1, 2005, Act 1770 (the Arkansas Energy and Natural Resources Conservation Act) encourages all state agencies, including institutions of higher education, to use Leadership in Energy and Environmental Design (LEED) and Green Globes rating systems whenever possible and appropriate in

435

Enabling Event Tracing at Leadership-Class Scale through I/O Forwarding Middleware  

Science Conference Proceedings (OSTI)

Event tracing is an important tool for understanding the performance of parallel applications. As concurrency increases in leadership-class computing systems, the quantity of performance log data can overload the parallel file system, perturbing the application being observed. In this work we present a solution for event tracing at leadership scales. We enhance the I/O forwarding system software to aggregate and reorganize log data prior to writing to the storage system, significantly reducing the burden on the underlying file system for this type of traffic. Furthermore, we augment the I/O forwarding system with a write buffering capability to limit the impact of artificial perturbations from log data accesses on traced applications. To validate the approach, we modify the Vampir tracing tool to take advantage of this new capability and show that the approach increases the maximum traced application size by a factor of five, to more than 200,000 processors.

Ilsche, Thomas [Technische Universitat Dresden]; Schuchart, Joseph [Technische Universitat Dresden]; Cope, Joseph [Argonne National Laboratory (ANL)]; Kimpe, Dries [Argonne National Laboratory (ANL)]; Jones, Terry R [ORNL]; Knuepfer, Andreas [Technische Universitat Dresden]; Iskra, Kamil [Argonne National Laboratory (ANL)]; Ross, Robert [Argonne National Laboratory (ANL)]; Nagel, Wolfgang E. [Technische Universitat Dresden]; Poole, Stephen W [ORNL]

2012-01-01T23:59:59.000Z
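The aggregation-and-buffering idea in the abstract above can be sketched in miniature. The sketch below is purely illustrative and uses hypothetical names (`ForwarderBuffer`, `flush_threshold`); it is not the actual IOFSL or Vampir code:

```python
from collections import defaultdict

class ForwarderBuffer:
    """Toy model of the paper's idea: an I/O forwarding node buffers and
    aggregates per-process trace records, then issues one large, reorganized
    write to the file system instead of many small ones.
    (All names here are illustrative, not the real middleware API.)"""

    def __init__(self, flush_threshold):
        self.flush_threshold = flush_threshold  # bytes buffered before a flush
        self.pending = defaultdict(list)        # rank -> buffered trace records
        self.buffered_bytes = 0
        self.backend_writes = 0                 # writes that reach the file system

    def log(self, rank, record):
        # Compute processes hand small trace records to the forwarder;
        # nothing touches the file system yet.
        self.pending[rank].append(record)
        self.buffered_bytes += len(record)
        if self.buffered_bytes >= self.flush_threshold:
            self.flush()

    def flush(self):
        # Reorganize: group records by rank so each rank's data is contiguous,
        # then issue a single large backend write for the whole batch.
        batch = []
        for rank in sorted(self.pending):
            batch.extend(self.pending[rank])
        if batch:
            self.backend_writes += 1
        self.pending.clear()
        self.buffered_bytes = 0
        return batch

fwd = ForwarderBuffer(flush_threshold=64)
for i in range(100):
    fwd.log(rank=i % 4, record=f"evt{i:03d};")  # 7-byte records
fwd.flush()
print(fwd.backend_writes)  # → 10
```

In this toy run, 100 small trace records reach the storage backend as 10 large writes rather than 100 small ones; the paper applies the same idea inside the I/O forwarding layer at far larger scale.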

436

ADDRESSING SOCIETAL PROBLEMS WITH MOLECULAR SCIENCE  

E-Print Network (OSTI)

The Argonne Leadership Computing Facility: An Instrument of Change. A Preeminent Global Resource. The Argonne Leadership Computing Facility (ALCF) continues a tradition of computing innovation ... for researchers to understand how the Argonne Leadership Computing Facility can accelerate their research

Pritchard, Jonathan

437

Princeton Plasma Physics Lab - Lab Leadership  

NLE Websites -- All DOE Office Websites (Extended Search)

Adam Cohen: http://www.pppl.gov/people/adam-cohen

From Hot Cells to Hot Plasmas: Cohen approaches science challenges with practicality. By John Greenwald. Adam Cohen grew up as the family handyman. "I was the kid who tacked down the carpet, repaired the roof, fixed the toilet and worked on the car," he said of his youth in northern New Jersey. "I would pull apart batteries and tear apart things and try to make them work again." That Mr. Fixit

438

Sandia National Laboratories: About Sandia: Leadership  

NLE Websites -- All DOE Office Websites (Extended Search)

About Sandia: Leadership. Leadership with Paul Hommert, President and Laboratories Director. Throughout its history, Sandia has been guided by the core principle of - in the words of President Harry Truman - providing "exceptional service in the national interest." Paul Hommert is the director of Sandia National Laboratories and president of Sandia Corporation. Sandia has principal sites in Albuquerque, N.M., and Livermore, Calif., an annual budget of $2.5 billion, and approximately 9,400 employees. Kim Sawyer, Deputy Laboratories Director & Executive Vice President for Mission Support: Kimberly (Kim) C. Sawyer is the deputy Laboratories director and executive

439

Fermilab Leadership Institute Integrating Internet, Instruction and  

NLE Websites -- All DOE Office Websites (Extended Search)

Fermilab Leadership Institute Integrating Internet, Instruction and Curriculum. Online Materials, Projects, ACT Program Information, Example ACT Class Page, Example LInC Class Page. Please sign up here to be notified of future LInC program opportunities. Fermilab LInC Online is creating a cadre of educational leaders who effectively integrate technology in their classrooms to support engaged learning student investigations on real-world issues. Participants range from classroom teachers and technology coordinators through library media specialists, who create engaged learning projects that incorporate the best uses of technology. The new Fermilab LInC ACT course guides teachers through the process of evaluating, selecting and customizing an inquiry-based online project to

440

NREL: Sustainable NREL - Research Support Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

Research Support Facility. An artist's rendering of an H-shaped building. The rendering includes a key at the bottom with letters A-K that correspond with letters on the building. Each letter, when selected, provides additional information about the building feature. Use the interactive rendering to learn more about the RSF's renewable energy and energy efficiency features and design. The Research Support Facility (RSF) is the laboratory's newest sustainable green building. This 360,000 ft² Leadership in Energy and Environmental Design (LEED®) Platinum office building is a showcase for energy

Note: This page contains sample records for the topic "leadership computing facility" from the National Library of EnergyBeta (NLEBeta).
While these samples are representative of the content of NLEBeta,
they are not comprehensive nor are they the most current set.
We encourage you to perform a real-time search of NLEBeta
to obtain the most current and comprehensive results.


441

NREL: Sustainable NREL - Science and Technology Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

Science and Technology Facility. The 71,000 sq ft Science and Technology Facility (S&TF) was certified as Platinum in FY 2007 by the U.S. Green Buildings Council under its Leadership in Energy and Environmental Design (LEED®) Green Building program. It is the first federal LEED® Platinum building. Energy Use: The S&TF is a showcase for energy efficiency, designed to provide a 41% reduction in energy cost compared to a standard laboratory building. Its energy-saving features include: Public transportation is located within one-half mile of the S&TF. The roof is ENERGY STAR® compliant (high reflectivity, low emissivity). The building design exceeds ASHRAE 90.1-1999 requirements for energy efficiency. Orientation along an east-west axis so that windows on the north and south

442

GLEANing Scientific Insights More Quickly | Argonne Leadership...  

NLE Websites -- All DOE Office Websites (Extended Search)

data movement between the compute, analysis, and storage resources of high-performance computing systems. This speeds the computer's ability to read and write data, also...

443

ARM - Facility News Article  

NLE Websites -- All DOE Office Websites (Extended Search)

More Storage Space, Better Reliability Now at the ARM Data Management Facility. To support the ever-increasing file storage needs of the ARM Data Management Facility (DMF) and ARM Engineering computers, a Network Appliance (NetApp®) file server with 2.68 terabytes, or 2.95 trillion bytes, of highly reliable and extremely fast usable disk storage joined the DMF servers. The NetApp system performs nearly four times faster than the previous file server and is engineered for a higher degree of reliability, critical improvements needed to maintain uptime for ARM data availability at the DMF. A NetApp server increases ARM storage capacity and keeps the data flowing at the Data Management Facility.
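The two capacity figures quoted above agree if "terabytes" is read in the binary sense (tebibytes, 2⁴⁰ bytes). A quick arithmetic check, purely illustrative:

```python
# 2.68 "terabytes" in the binary sense (2.68 TiB), expressed as decimal trillions of bytes.
TIB = 2 ** 40                 # bytes per tebibyte
capacity_bytes = 2.68 * TIB
trillions = capacity_bytes / 1e12
print(round(trillions, 2))    # → 2.95, matching the quoted "2.95 trillion bytes"
```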

444

ORNL's Peng wins Fusion Power Associates Leadership Award | ornl...  

NLE Websites -- All DOE Office Websites (Extended Search)

ORNL's Martin Peng, recipient of Fusion Power Associates' Leadership Award, explains an ITER fusion experiment diagram. OAK RIDGE, Tenn., Aug. 17, 2010 - Martin Peng, a researcher...

445

NREL: News - NREL Fills Key Leadership Role for Energy Systems...  

NLE Websites -- All DOE Office Websites (Extended Search)

NREL Fills Key Leadership Role for Energy Systems Integration. June 17, 2013. Bryan J. Hannegan will join the Energy Department's National Renewable Energy Laboratory on June 24...

446

Memorandum, Personal Commitment to Health and Safety through Leadership,  

Energy.gov (U.S. Department of Energy (DOE)) Indexed Site

Memorandum, Personal Commitment to Health and Safety through Leadership, Employee Engagement, and Organizational Learning. September 20, 2013. Secretary of Energy Ernest J. Moniz and Deputy Secretary Daniel B. Poneman recently issued a joint Memorandum, entitled Personal Commitment to Health and Safety through Leadership, Employee Engagement, and Organizational Learning. The Department's thousands of Federal, laboratory, and contractor employees work hard every day in pursuit of energy independence, global scientific leadership, national security, and environmental stewardship. The Department's ultimate safety objective is to have zero accidents,

447


448

Executive Order 13148-Greening the Government Through Leadership in  

NLE Websites -- All DOE Office Websites (Extended Search)

Executive Order 13148 - Greening the Government Through Leadership in Environmental Management. The head of each Federal agency is responsible for ensuring that all necessary actions are taken to integrate environmental accountability into agency day-to-day decisionmaking and long-term planning processes, across all agency missions, activities, and functions. More Documents & Publications: Executive Order 12969 - Federal Acquisition and Community Right-To-Know; Executive Order 13423 - Strengthening Federal Environmental, Energy, and Transportation Management; National Defense Authorization Act for Fiscal Year 2000

449

The Effectiveness of Leadership Development Programs on Small Farm Producers  

E-Print Network (OSTI)

Although there were numerous leadership development programs throughout the country, most ignored the small producers located throughout the South. To address the needs of these traditionally underserved individuals, the National Small Farmer Agricultural Leadership Institute was created to address the concerns of small farmers in rural communities. This research specifically targeted the effectiveness of leadership development over time by exploring the factors that motivate program participants to enhance their leadership skills and the ability to transform that motivation into effective leadership. The group involved in this study is a convenience population of small farmers and ranchers from across the southern United States who graduated from the National Small Farm Leadership Institute. These participants represent the 2007 and 2009 graduating classes. A retrospective post-survey methodology was used to conduct this study. The instrument is divided into a knowledge-base assessment of the period before participants took the program (pre) and a retrospective post assessment. Each question allowed participants to rate their ability on a 5-point Likert-type scale, with responses ranging from 1 to 5: Very Poor, Poor, Fair, Good, and Very Good. The survey examined four educational constructs covered during the leadership development program: Leadership Skill Development, Leadership Theory, Agricultural Skill Enhancement, and the Transformation of leadership skills. Analysis of the four educational constructs reveals substantial increases in knowledge and skills such as group problem solving, consensus building, team building, group decision making, and obtaining information to help in decision making.
Participants were found to have increased their leadership skills through teaching of leadership philosophy, linkages to Federal and agricultural resources, appreciation of different styles of leadership, and awareness of agricultural policy issues. The study revealed that in each of the four educational construct areas of the National Small Farm Leadership Institute there were substantial increases in knowledge and changes in behavior, such as understanding and explaining a personal leadership philosophy, increased awareness of agricultural policy issues, and transferring leadership back to the community.

Malone, Allen A.

2010-08-01T23:59:59.000Z

450

Executive Order 13148, Greening the Government Through Leadership...  

NLE Websites -- All DOE Office Websites (Extended Search)

Order (EO) 13148, Greening the Government Through Leadership in Environmental Management. The report was prepared in accordance with the guidance provided in your letter to...

451

Secretary Moniz at White House Women's Leadership Summit on Climate...  

Energy.gov (U.S. Department of Energy (DOE)) Indexed Site

Secretary Moniz at White House Women's Leadership Summit on Climate and Energy

452

Executive Order 13031-Federal Alternative Fueled Vehicle Leadership...  

NLE Websites -- All DOE Office Websites (Extended Search)

in the use of alternative fueled vehicles (AFVs). Executive Order 13031 - Federal Alternative Fueled Vehicle Leadership. More Documents & Publications: NATIONAL DEFENSE...

453

An ethnographic case study of transformative learning in leadership development.  

E-Print Network (OSTI)

This qualitative study investigated how transformative learning and membership in a community of practice influenced leadership development. It sought a phenomenological understanding of how participants (more)

Powell, Linda Chastain

2009-01-01T23:59:59.000Z

454

Pantex receives United Way leadership award | National Nuclear...  

National Nuclear Security Administration (NNSA)


455

Large Scale Computing and Storage Requirements for Nuclear Physics Research  

E-Print Network (OSTI)

of Science, Advanced Scientific Computing Research (ASCR) ... Office of Advanced Scientific Computing Research, Facilities ... (NP) Office of Advanced Scientific Computing Research (ASCR)

Gerber, Richard A.

2012-01-01T23:59:59.000Z

456

Large Scale Computing and Storage Requirements for High Energy Physics  

E-Print Network (OSTI)

of Science, Advanced Scientific Computing Research (ASCR) ... Office of Advanced Scientific Computing Research, Facilities ... Office of Advanced Scientific Computing Research (ASCR), and

Gerber, Richard A.

2011-01-01T23:59:59.000Z

457

Large Scale Computing and Storage Requirements for Nuclear Physics Research  

E-Print Network (OSTI)

proceedings of High Performance Computing 2011 (HPC-2011) ... In recent years, high performance computing has become ... NERSC is the primary high-performance computing facility for

Gerber, Richard A.

2012-01-01T23:59:59.000Z

458

National Laser User Facilities Program | National Nuclear Security  

National Nuclear Security Administration (NNSA)

National Laser User Facilities Program | National Nuclear Security Administration. National Laser Users' Facility Grant Program Overview: The Laboratory for Laser Energetics (LLE) at the University of Rochester

459

National Laser User Facilities Program | National Nuclear Security  

NLE Websites -- All DOE Office Websites (Extended Search)

National Laser User Facilities Program | National Nuclear Security Administration. National Laser Users' Facility Grant Program Overview: The Laboratory for Laser Energetics (LLE) at the University of Rochester

460

ARM - Facility News Article  

NLE Websites -- All DOE Office Websites (Extended Search)

31, 2006 [Facility News]. Comprehensive Instrument Validation Campaign Concludes. As the Aqua satellite moves along, the AIRS mirror scans a "swath" across the Earth's surface and directs infrared energy into the instrument. This energy is separated into wavelengths, which are transferred from Aqua to computers on the ground for additional processing. (Source: http://airs.jpl.nasa.gov) After almost four years, the last soundings in the final phase of the



461

ARM - Facility News Article  

NLE Websites -- All DOE Office Websites (Extended Search)

November 30, 2009 [Facility News]. ARM Joins Global Reference Upper-Air Network. Similar to a standard radiosonde, the frost point hygrometer is a digitally controlled instrument attached to a weather balloon. As it rises through the air, atmospheric data collected by the sensor is recorded on the ground. This photo shows the computer chips, battery pack, and connector that make up the instrument package. One of the largest challenges from a global climate observations

462

ARM - Facility News Article  

NLE Websites -- All DOE Office Websites (Extended Search)

August 31, 2008 [Facility News]. Phase 2 of Orbiting Carbon Observatory Field Campaign Begins. A camera, weather station, and sun tracker with a protective dome are located on the roof of the fully automated FTS mobile laboratory. Inside the shelter, the spectrometer receives the reflected solar beam from the sun tracker, while the main computer system operates all the instruments and acquires the data. The Orbiting Carbon Observatory, or OCO, is a National Aeronautics and

463

ARM - Facility News Article  

NLE Websites -- All DOE Office Websites (Extended Search)

March 31, 2007 [Facility News]. Radiometers Operate in Low Water Vapor Conditions in Barrow, Alaska. A researcher checks the GVR antenna on a cold, crisp day at the ARM site in Barrow, Alaska. The radiometer is inside the insulated box beneath the antenna; the data is collected and displayed on the computer inside the instrument shelter. To provide more accurate ground-based measurements of water vapor in extremely arid environments, three types of 183.3-GHz radiometers operated simultaneously in February and March at the ARM North Slope of Alaska site

464

--No Title--  

NLE Websites -- All DOE Office Websites (Extended Search)

Center ... OLCF: Oak Ridge Leadership Computing Facility; SNS: Spallation Neutron Source. Science & Discovery: Advanced Materials, Clean Energy, National Security, Neutron Science, Nuclear...

465

Roof-and-attic system delivers year-round efficiency | ornl.gov  

NLE Websites -- All DOE Office Websites (Extended Search)

Center ... OLCF: Oak Ridge Leadership Computing Facility; SNS: Spallation Neutron Source. Science & Discovery: Advanced Materials, Clean Energy, National Security, Neutron Science, Nuclear...

466

BUILDING ECONOMIC DEVELOPMENT  

new facilities: the Spallation Neutron Source, the Center for Nanophase Materials Sciences, the National Leadership Computing Center, and the

467

Magellan at NERSC Progress Report for June 2010  

E-Print Network (OSTI)

DOE clouds (at NERSC and ALCF) provide high availability by ... Leadership Computing Facility (ALCF) and the National Energy

Broughton, Jeff; Canon, Richard; Ramakrishnan, Lavanya; Draney, Brent

2011-01-01T23:59:59.000Z

468

Efficient and Scalable Retrieval Techniques for Global File Properties  

E-Print Network (OSTI)

and Oak Ridge Leadership Computing Facility. Titan Web Page, 2011. http://www.olcf.ornl.gov/titan/. [25] C

Miller, Barton P.

469

Radiation Modeling Using the Uintah Heterogeneous CPU/GPU Runtime System  

E-Print Network (OSTI)

and Oak Ridge Leadership Computing Facility. Titan Web Page, 2011. http://www.olcf.ornl.gov/titan/. [25] C

Utah, University of

471

Simplified Parallel Domain Traversal Wesley Kendall  

E-Print Network (OSTI)

architecture [9] Jaguar is the primary system in the ORNL Leadership Computing Facility (OLCF) [9]. It consists

Tennessee, University of

472

White House Leadership Summit on Women, Climate and Energy | Department of  

Energy.gov (U.S. Department of Energy (DOE)) Indexed Site

White House Leadership Summit on Women, Climate and Energy. Photo gallery: 14 images, taken 2013-05-23. Nancy Sutley, Chair of the White House Council on Environmental Quality,

473

California Energy Commission California Leadership on Land Use  

E-Print Network (OSTI)

California Energy Commission: California Leadership on Land Use and Climate Change. Panama Bartholomy, Advisor to the Chairman, California Energy Commission. New Partners for Smart Growth, Washington, DC, February 8

474

Indonesia's Ascent: Power, Leadership and Asia's Security Order  

E-Print Network (OSTI)

Indonesia's Ascent: Power, Leadership and Asia's Security Order. Canberra Conference and Workshop ... across Australia and the broader region. Project Abstract: As Indonesia's economy grows, it is increasingly being referred to as a rising

475

ACM's Computing Professionals Face New Challenges  

E-Print Network (OSTI)

ACM's Computing Professionals Face New Challenges. Communications of the ACM, February 2002, Vol. 45, No. 2, p. 31. Viewpoint, Ben Shneiderman. The ACM community is in a position to take a leadership ... been contacted to contribute designs for improving security at airports, verifying identity at check

Shneiderman, Ben

476

March 1, 2013, DOE/Union Leadership Safety Culture Meeting - Meeting Summary  

Energy.gov (U.S. Department of Energy (DOE)) Indexed Site

3-12-13. DOE/Union Leadership Safety Culture Meeting, March 1, 2013, Meeting Summary. History: DOE's Office of Enforcement and Oversight [Independent Oversight], within HSS, conducted an independent assessment of the nuclear safety culture and management of nuclear safety concerns at DOE's Waste Treatment and Immobilization Plant (WTP) in response to a Recommendation by the Defense Nuclear Facilities Safety Board. As a result of the safety culture weaknesses unveiled, DOE embarked on a mission to determine the extent of the condition, and HSS was tasked to conduct independent assessments at 5 primary DOE nuclear facilities. DOE is currently pursuing corrective actions. A consolidated report of the Independent

477

Center for Computational Medicine and Bioinformatics fostering interdisciplinary research in computational medicine and biology  

E-Print Network (OSTI)

, and administrative activities of CCMB. Facilities include high performance computing, file and database servers, workstations, web servers, networking, and printing services. CCDU supports multiple high performance computing

Rosenberg, Noah

478

Topology-aware data movement and staging for I/O acceleration on Blue Gene/P supercomputing systems  

E-Print Network (OSTI)

of the Argonne Leadership Computing Facility, I/O forwarding in BG/P, and a summary of our previous findings for the system evaluation in this paper. 2.1 Argonne Leadership Computing Facility. Figure 1: The Argonne ... of the Argonne Leadership Computing Facility at Argonne National Laboratory. This work was supported

479

DOE Congratulates Under Secretary Johnson for Technology Leadership Award |  

Energy.gov (U.S. Department of Energy (DOE)) Indexed Site

DOE Congratulates Under Secretary Johnson for Technology Leadership Award. May 12, 2010 - 12:00am. Washington, DC - U.S. Department of Energy Under Secretary Kristina M. Johnson has been selected to receive the 2010 Women of Vision Leadership Award from the Anita Borg Institute for Women and Technology (ABI). ABI honors women making significant contributions to technology in the categories of leadership, innovation and social impact. The Women of Vision Awards ceremony will take place Wednesday, May 12th at the Mission City Ballroom in Santa Clara, CA. "Under Secretary Johnson has proven to be a leader in technology and engineering throughout her illustrious career," said Secretary Chu. "The

480

2013 National Council of La Raza Leadership Development Workshops |  

Energy.gov (U.S. Department of Energy (DOE)) Indexed Site

2013 National Council of La Raza Leadership Development Workshops. July 23, 2013, 2:15PM EDT, New Orleans, LA. The National Council of La Raza (NCLR) is hosting a series of leadership development workshops geared toward Federal employees on July 23, 2013, in New Orleans, LA. These workshops are a part of the 2013 NCLR Annual Conference from July 20-23, 2013. The workshop topics on July 23rd will include an overview of the Senior Executive Service (SES) and instruction on how to prepare and apply for a SES position. The five (5) hours dedicated to the leadership development workshops on July 23rd qualify as training in compliance with 5 U.S.C. chapter 41 and are open to all Federal employees.



481

Berkeley India Joint Leadership on Energy and Environment | Open Energy  

Open Energy Info (EERE)

Berkeley India Joint Leadership on Energy and Environment. Agency/Company/Organization: Lawrence Berkeley National Laboratory. Sector: Energy. Focus Area: Energy Efficiency. Topics: Policies/deployment programs, Pathways analysis, Background analysis. Website: http://india.lbl.gov/ Country: India (Southern Asia). References: Program Homepage[1]. Abstract: The Berkeley India Joint Leadership on Energy and Environment (BIJLEE) is a Lawrence Berkeley National Laboratory joint research and development program in which researchers work with the government and private sector of India to assist in the adoption of pathways and approaches for reducing the emissions of greenhouse gases while pursuing sustainable economic development.

482

Organization of Chinese Americans Federal Leadership Training | Department  

Energy.gov (U.S. Department of Energy (DOE)) Indexed Site

Organization of Chinese Americans Federal Leadership Training July 18, 2013 2:15PM EDT to July 19, 2013 5:15PM EDT Washington DC The Organization of Chinese Americans (OCA) will hold its Federal Leadership Training (FLT) on July 18-19, 2013, during its National Convention in Washington, D.C. The theme of this year's convention is "Celebrating 40 Years of Advocacy and Empowerment." The FLT is focused on promoting the professional development and continuing education of all employees. This FLT qualifies as training in compliance with 5 U.S.C. chapter 41, and is open to all Federal employees. It will provide training and workshops in a variety of areas including Professional Development, Leadership

483

2013 National Council of La Raza Leadership Development Workshops |  

Energy.gov (U.S. Department of Energy (DOE)) Indexed Site

2013 National Council of La Raza Leadership Development Workshops July 20, 2013 4:00PM EDT to July 23, 2013 6:00PM EDT Morial Convention Center, New Orleans, Louisiana The National Council of La Raza (NCLR) is hosting a series of leadership development workshops geared toward Federal employees on July 23, 2013, in New Orleans, LA. These workshops are part of the 2013 NCLR Annual Conference from July 20-23, 2013. The workshop topics on July 23rd will include an overview of the Senior Executive Service (SES) and instruction on how to prepare and apply for an SES position. The five (5) hours dedicated to the leadership development workshops on July 23rd qualify as training in compliance with 5 U.S.C. chapter 41 and

484

DOE/Labor Leadership Roundtable Meetings | Department of Energy  

Energy.gov (U.S. Department of Energy (DOE)) Indexed Site

DOE/Labor Leadership Roundtable Meetings The Roundtable Forum was established to foster a healthy exchange of ideas between the Department and the leadership of our Federal and contractor workforce related to safety, communications, and operations. The Department will continue to engage these elements in efforts to improve performance through enhanced labor-management relations and by encouraging opportunities for collaboration and partnerships. Meeting Documents Available for Download: May 17, 2012 DOE/Labor Leadership Roundtable Meeting (meeting agenda); January 25, 2011 DOE Roundtable Meeting with Union Leaders (meeting agenda and summary); March 16, 2010 Deputy Secretary Roundtable Meeting with Unions

485

John Bardeen Engineering Leadership Program | Overview  

NLE Websites -- All DOE Office Websites (Extended Search)

Overview Description: The John Bardeen Engineering Leadership Program is designed to provide full-time entry-level opportunities for outstanding engineering graduates who are interested in working in a cutting-edge research environment. Fermilab provides opportunities in the fields of electrical, electronics, radio frequency systems, power distribution, magnets, RF cavities, mechanical, materials science and cryogenic engineering. The program honors John Bardeen's revolutionary achievements as both a physicist and an engineer. Applications are now being accepted. Eligibility: Applicants must be recipients of a Master's or Doctoral degree in engineering from an accredited institution and apply within three years of graduation or completion of a first postdoctoral position.

486

Titan | U.S. DOE Office of Science (SC)  

Office of Science (SC) Website

Facilities » Oak Ridge Leadership Computing Facility (OLCF) » Titan. Advanced Scientific Computing Research (ASCR): ASCR Home, About, Research, Facilities, Accessing ASCR Supercomputers, Oak Ridge Leadership Computing Facility (OLCF), Argonne Leadership Computing Facility (ALCF), National Energy Research Scientific Computing Center (NERSC), Energy Sciences Network (ESnet), Research & Evaluation Prototypes (REP), Innovative & Novel Computational Impact on Theory and Experiment (INCITE), ASCR Leadership Computing Challenge (ALCC), Science Highlights, Benefits of ASCR, Funding Opportunities, Advanced Scientific Computing Advisory Committee (ASCAC), News & Resources, Contact Information. Advanced Scientific Computing Research, U.S. Department of Energy, SC-21/Germantown Building

487

Independent Activity Report, Defense Nuclear Facilities Safety Board Public  

Energy.gov (U.S. Department of Energy (DOE)) Indexed Site

Defense Nuclear Facilities Safety Board Public Meeting - October 2012 Independent Activity Report [HIAR-Y-12-2012-10-02]: Defense Nuclear Facilities Safety Board Public Meeting on the Status of Integration of Safety Into the Design of the Uranium Processing Facility. The Office of Health, Safety and Security (HSS) observed the public hearing of the DNFSB review of the UPF project status for integrating safety into design. The meeting was broken into three parts: a panel discussion and questioning of National Nuclear Security Administration (NNSA) oversight and execution; a panel discussion and questioning of the B&W Y-12 Technical Services, LLC (B&W Y-12) design project team leadership; and an open public

489

Newest LANL Facility Receives LEED Gold Certification | National Nuclear  

National Nuclear Security Administration (NNSA)

Newest LANL Facility Receives LEED Gold Certification | National Nuclear Security Administration. Posted By Office of Public Affairs. RULOB: LANL's newest facility, the Radiological Laboratory Utility Office

490

NNSA Holds Groundbreaking at MOX Facility | National Nuclear Security  

National Nuclear Security Administration (NNSA)

NNSA Holds Groundbreaking at MOX Facility | National Nuclear Security Administration. October 14, 2005, Aiken, SC: NNSA Holds Groundbreaking at MOX Facility

492

Research Support Facility - A Model of Super Efficiency (RSF) (Fact Sheet)  

SciTech Connect

This fact sheet published by the National Renewable Energy Laboratory discusses the lab's newest building, the Research Support Facility (RSF). The RSF is a showcase for ultra-efficient workplaces. Various renewable energy and energy efficiency features have been employed so that the building achieves a Leadership in Energy and Environmental Design (LEED) Platinum rating from the U.S. Green Building Council.

Not Available

2010-08-01T23:59:59.000Z

493

PROJECTIZING AN OPERATING NUCLEAR FACILITY  

SciTech Connect

This paper will discuss the evolution of an operations-based organization to a project-based organization to facilitate successful deactivation of a major nuclear facility. It will describe the plan used for scope definition, staff reorganization, method estimation, baseline schedule development, project management training, and results of this transformation. It is a story of leadership and teamwork, pride and success. Workers at the Savannah River Site's (SRS) F Canyon Complex (FCC) started with a challenge--take all the hazardous byproducts from nearly 50 years of operations in a major, first-of-its-kind nuclear complex and safely get rid of them, leaving the facility cold, dark, dry and ready for whatever end state is ultimately determined by the United States Department of Energy (DOE). And do it in four years, with a constantly changing workforce and steadily declining funding. The goal was to reduce the overall operating staff by 93% and budget by 94%. The facilities, F Canyon and its adjoined sister, FB Line, are located at SRS, a 310-square-mile nuclear reservation near Aiken, S.C., owned by DOE and managed by Washington Group International subsidiary Washington Savannah River Company (WSRC). These facilities were supported by more than 50 surrounding buildings, whose purpose was to provide support services during operations. The radiological, chemical and industrial hazards inventory in the old buildings was significant. The historical mission at F Canyon was to extract plutonium-239 and uranium-238 from irradiated spent nuclear fuel through chemical processing. FB Line's mission included conversion of plutonium solutions into metal, characterization, stabilization and packaging, and storage of both metal and oxide forms. The plutonium metal was sent to another DOE site for use in weapons. 
Deactivation in F Canyon began when chemical separations activities were completed in 2002, and a cross-functional project team concept was implemented to successfully accomplish deactivation. This concept had to allow for continued operations in FB Line until 2005, while providing distinct task-oriented teams for deactivation of the FCC. Facility workers, always the most knowledgeable about any facility, were integral parts of the project team. The team defined the scope, developed a bottom-up estimate, reorganized personnel into designated project teams, and developed a baseline schedule with about 12,000 activities. Training was implemented to prepare the facility workers to use project management tools and concepts, which were then used to execute the project, coordinate activities, and track progress. The project budget was estimated at $579 million. The team completed F Canyon and FB Line deactivation in August 2006, four months ahead of schedule and under budget.

Adams, N

2007-07-08T23:59:59.000Z

494

Security Controls for Computer Systems (U): Report of ...  

Science Conference Proceedings (OSTI)

... This first step is essential in order that ... other computing systems, any facilities for security ... management controls and procedures, facility clearance is ...

2013-04-15T23:59:59.000Z

495

Operating procedures: Fusion Experiments Analysis Facility  

SciTech Connect

The Fusion Experiments Analysis Facility (FEAF) is a computer facility based on a DEC VAX 11/780 computer. It became operational in late 1982. At that time two manuals were written to aid users and staff in their interactions with the facility. This manual is designed as a reference to assist the FEAF staff in carrying out their responsibilities. It is meant to supplement equipment and software manuals supplied by the vendors. Also this manual provides the FEAF staff with a set of consistent, written guidelines for the daily operation of the facility.

Lerche, R.A.; Carey, R.W.

1984-03-20T23:59:59.000Z

496

ARM - Facility News Article  

NLE Websites -- All DOE Office Websites (Extended Search)

29, 2012 [Facility News] Workshop Identifies Critical Climate Science Challenges. This DOE report summarizes a two-and-a-half day workshop held between U.S. and European collaborators to review outstanding climate change science questions related to clouds, aerosols and precipitation, and the observational strategies for addressing them. Clouds and aerosols remain major sources of uncertainty in computer models of Earth systems. In large part, this uncertainty is due to a lack

497

Computer Science & Computer Engineering  

E-Print Network (OSTI)

CSCE Computer Science & Computer Engineering. Computer scientists and computer engineers design and implement efficient software and hardware solutions to computer-solvable problems. They are involved in areas such as virtual reality and robotics. Within the Computer Science department, we offer four exciting majors from

Rohs, Remo

498

Computer resources Computer resources  

E-Print Network (OSTI)

Computer resources available to the LEAD group. Cédric David, 30 September 2009. Outline: UT computer resources and services · JSG computer resources and services · LEAD computers. UT Austin services: UT EID and Password. https://utdirect.utexas.edu

Yang, Zong-Liang

499

Petascale Algorithms for Reactor Hydrodynamics Paul Fischer, James Lottes, David Pointer, and Andew Siegel  

E-Print Network (OSTI)

to P = 65,000 processors on the IBM BG/P at the Argonne Leadership Computing Facility. 1. Introduction. This work was supported under contract DE-AC02-06CH11357. Computer time on the Argonne Leadership Computing Facility was provided through a 2008 ... cores. The aim is to leverage petascale platforms at DOE's Leadership Computing ... (Figure 1: Turbulence)

Fischer, Paul F.

500

Executive Order 13514-Federal Leadership in Environmental, Energy, and  

NLE Websites -- All DOE Office Websites (Extended Search)

Executive Order 13514 - Federal Leadership in Environmental, Energy, and Economic Performance. It is therefore the policy of the United States that Federal agencies shall increase energy efficiency; measure, report, and reduce their greenhouse gas emissions from direct and indirect activities; conserve and protect water resources through efficiency, reuse, and stormwater management; eliminate waste, recycle, and prevent pollution; leverage agency acquisitions to foster markets for sustainable technologies and environmentally preferable materials, products, and services; design, construct, maintain, and operate high performance sustainable buildings in sustainable locations; strengthen the vitality and livability of the