Note: This page contains sample records for the topic "nersc argonne leadership" from the National Library of EnergyBeta (NLEBeta).
While these samples are representative of the content of NLEBeta,
they are not comprehensive nor are they the most current set.
We encourage you to perform a real-time search of NLEBeta
to obtain the most current and comprehensive results.


1

Leadership | Argonne National Laboratory  

NLE Websites -- All DOE Office Websites (Extended Search)

Message from the Director Board of Governors Organization Chart Argonne Distinguished Fellows Emeritus Scientists & Engineers History Discoveries Prime Contract Contact Us Leadership Argonne integrates world-class science, engineering, and user facilities to deliver innovative research and technologies. We create new knowledge that addresses the scientific and societal needs of our nation. Eric D. Isaacs, Director, Argonne National Laboratory: a prominent University of Chicago physicist, Isaacs is President of UChicago Argonne, LLC, and Director of Argonne National Laboratory. Mark Peters, Deputy Laboratory Director for Programs

2

Argonne Leadership Computing Facility  

E-Print Network (OSTI)

on constant Q surface. (Credit: Anurag Gupta/GE Global) www.alcf.anl.gov The Leadership Computing Facility Division operates the Argonne Leadership Computing Facility -- the ALCF -- as part of the U.S. Department... 63 2010 ALCF Projects... 64

Kemner, Ken

3

The Argonne Leadership Computing  

E-Print Network (OSTI)

Leadership Computing Facility (ALCF) was created and exists today as a preeminent global resource... Argonne Leadership Computing Facility: ALCF Continues a Tradition of Computing Innovation -- a tradition that continues today at the ALCF. The seedbed for such groundbreaking software as MPI, PETSc, PVFS

Kemner, Ken

4

Careers | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

Specialist As a member of the Argonne Leadership Computing Facility's (ALCF) High Performance Computing (HPC) team, the appointee will participate in the technical operation, support...

5

Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites

Contact Us | Careers | Staff Directory | User Support Search form Search Argonne Leadership Computing Facility an Office of Science user facility Home About Overview History Staff Directory Careers Visiting Us Contact Us Resources & Expertise Mira Cetus Vesta Intrepid Challenger Surveyor Visualization Clusters Data and Networking Our Teams User Advisory Council Science at ALCF INCITE 2014 Projects ALCC 2013 Projects ESP Projects View All Projects Allocation Programs Early Science Program Publications Industry Collaborations News & Events Web Articles In the News Upcoming Events Past Events Informational Materials Photo Galleries User Services User Support Machine Status Presentations Training & Outreach User Survey Getting Started How to Get an Allocation New User Guide

6

Argonne Leadership Computing Facility (ALCF) | U.S. DOE Office of Science  

Office of Science (SC) Website

Argonne Leadership Computing Facility (ALCF) Advanced Scientific Computing Research (ASCR) ASCR Home About Research Facilities Accessing ASCR Supercomputers Oak Ridge Leadership Computing Facility (OLCF) Argonne Leadership Computing Facility (ALCF) National Energy Research Scientific Computing Center (NERSC) Energy Sciences Network (ESnet) Research & Evaluation Prototypes (REP) Innovative & Novel Computational Impact on Theory and Experiment (INCITE) ASCR Leadership Computing Challenge (ALCC) Science Highlights Benefits of ASCR Funding Opportunities Advanced Scientific Computing Advisory Committee (ASCAC) News & Resources Contact Information Advanced Scientific Computing Research U.S. Department of Energy SC-21/Germantown Building 1000 Independence Ave., SW Washington, DC 20585 P: (301) 903-7486 F: (301)

7

Tony Tolbert | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

Tony Tolbert Consultant - Business Intelligence Argonne Leadership Computing Facility 9700 S. Cass Avenue Bldg. 240 Wkstn. 3D29 Argonne, IL 60439 630-252-6027 wtolbert@alcf.anl...

8

Strategic Laboratory Leadership Program | Argonne National Laboratory  

NLE Websites -- All DOE Office Websites (Extended Search)

Erik Gottschalk (F); Devin Hodge (A); Jeff Chamberlain (A); Brad Ullrick (A); Bill Rainey (J). Image courtesy of Argonne National Laboratory. Strategic Laboratory Leadership...

9

History | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

HPC at Argonne ALCF Continues a Tradition of Computing Innovation In 1949, because the computers they needed weren't yet available commercially for purchase, Argonne physicists...

10

Web Articles | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

In the News Supercomputers: New Software Needed Information Week Story UA Chemist Leads Supercomputer Effort to Aid Nuclear Understanding UA News Rewinding the universe Deixis Magazine more news Web Articles Theory and Computing Sciences building at Argonne ALCF and MCS Establish Joint Lab for Evaluating Computing Platforms To centralize research activities aimed at evaluating future high performance computing platforms, a new joint laboratory at Argonne will provide significant opportunities for the Argonne Leadership Computing Facility (ALCF) and the Mathematics and Computer Science (MCS) Division, both located in the Theory and Computing Sciences building, to work collaboratively on prototype technologies for petascale and beyond. January 08, 2014

11

Leadership Development | Argonne National Laboratory  

NLE Websites -- All DOE Office Websites (Extended Search)

include work-life balance, stress management and innovative solutions to career and gender issues. Photo Gallery: Strategic Laboratory Leadership Program Strategic Laboratory...

12

Accounts Policy | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

Decommissioning of BG/P Systems and Resources Blue Gene/Q Versus Blue Gene/P Mira/Cetus/Vesta Intrepid/Challenger/Surveyor Tukey Eureka / Gadzooks Policies Pullback Policy ALCF Acknowledgment Policy Account Sponsorship & Retention Policy Accounts Policy Data Policy INCITE Quarterly Report Policy Job Scheduling Policy on BG/P Job Scheduling Policy on BG/Q Refund Policy Software Policy User Authentication Policy Documentation Feedback Please provide feedback to help guide us as we continue to build documentation for our new computing resource. [Feedback Form] Accounts Policy All holders of user accounts must abide by all appropriate Argonne Leadership Computing Facility and Argonne National Laboratory computing usage policies. These are described at the time of the account request and

13

Data Policy | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

Intrepid/Challenger/Surveyor Tukey Eureka / Gadzooks Policies Pullback Policy ALCF Acknowledgment Policy Account Sponsorship & Retention Policy Accounts Policy Data Policy INCITE Quarterly Report Policy Job Scheduling Policy on BG/P Job Scheduling Policy on BG/Q Refund Policy Software Policy User Authentication Policy Documentation Feedback Please provide feedback to help guide us as we continue to build documentation for our new computing resource. [Feedback Form] Data Policy Contents ALCF Data Confidentiality ALCF Staff with Root Privileges Use of Proprietary/Licensed Software Prohibited Data Export Control Data Storage Systems Home File System Space Data/Parallel File System Space Capacity and Retention Policies Back to top ALCF Data Confidentiality The Argonne Leadership Computing Facility (ALCF) network is an

14

David Martin | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

David Martin Industrial Outreach Lead David Martin Argonne National Laboratory 9700 South Cass Avenue Building 240 - Rm. 3126 Argonne, IL 60439 630-252-0929 dem...

15

Douglas Waldron | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

Douglas Waldron Senior Data Architect Douglas Waldron Argonne National Laboratory 9700 South Cass Avenue Building 240 - Rm. 3122 Argonne, IL 60439 630-252-2884 dwaldron...

16

Training & Outreach | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

User Services User Support Machine Status Presentations Training & Outreach Getting Started Videoconference Data Management Webinar Argonne Training Program on Extreme-Scale...

17

Visiting Us | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

Visitor Registration Form Because Argonne is a national laboratory, formal registration is required for all visitors coming to Argonne National Laboratory. Visitors must...

18

Resources & Expertise | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

is dedicated to large-scale computation and builds on Argonne's strengths in high-performance computing software, advanced hardware architectures and applications expertise. It...

19

Scott Parker | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

at the National Center for Supercomputing Applications, where he focused on high-performance computing and scientific applications. At Argonne since 2008, he works on performance...

20

ALCC Program | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

Getting Started How to Get an Allocation New User Guide Intrepid to Mira: Key Changes INCITE Program ALCC Program Director's Discretionary Program ALCC Program ASCR Leadership...



21

Michael Papka | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

Both his laboratory leadership roles and his research interests relate to high-performance computing in support of scientific discovery. Dr. Papka holds a Senior Fellow...

22

Richard Coffey | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

a supercomputer for those groups, and participated in campus-wide IT and high-performance computing leadership efforts. Before that, he worked for ten years at the University...

23

User Services | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

Services User Support Machine Status Presentations Training & Outreach User Survey User Services The ALCF User Assistance Center provides support for ALCF resources. The center's normal support hours are 9 a.m. until 5 p.m. (Central time) Monday through Friday, exclusive of holidays. Contact Us Email: support@alcf.anl.gov Telephone: 630-252-3111 866-508-9181 Service Desk: Building 240, 2-D-15/16 Argonne National Laboratory 9700 South Cass Avenue Argonne, IL 60439 Trouble Ticket System You can check the status of your existing tickets as well as send new support tickets. To do this, log in with your ALCF username and ALCF web accounts system password. (This is the password you chose when you requested your account, or which was assigned to you and sent with your new

24

User Authentication Policy | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

Eureka / Gadzooks Policies Pullback Policy ALCF Acknowledgment Policy Account Sponsorship & Retention Policy Accounts Policy Data Policy INCITE Quarterly Report Policy Job Scheduling Policy on BG/P Job Scheduling Policy on BG/Q Refund Policy Software Policy User Authentication Policy Documentation Feedback Please provide feedback to help guide us as we continue to build documentation for our new computing resource. [Feedback Form] User Authentication Policy Users of the Argonne production systems are required to use a CRYPTOCard one-time-password, multifactor authentication system. This document explains the policies users must follow regarding CRYPTOCard tokens for accessing the Argonne resources. Multifactor Authentication "Authentication systems are frequently described by the authentication

25

NERSC  

NLE Websites -- All DOE Office Websites (Extended Search)

Annual Report 2008-2009 Front cover image: Simulated structure of a lean hydrogen flame on a laboratory-scale low-swirl burner. See page 16. NERSC 2008-2009 Annual Report Ernest Orlando Lawrence Berkeley National Laboratory 1 Cyclotron Road, Berkeley, CA 94720-8148 This work was supported by the Director, Office of Science, Office of Advanced Scientific Computing Research of the U.S. Department of Energy under Contract No. DE-AC02-05CH11231. LBNL-3476E, May 2010 Table of Contents 1 The Year in Perspective 5 Climate and Energy Research: Expanding Our Choices 6 It's Not Too Late: Cuts in greenhouse gas emissions would save arctic ice, reduce sea level rise 10 Artificial Photosynthesis: Researchers are working to design catalysts that can convert water and carbon dioxide into fuels using solar energy

26

NERSC  

NLE Websites -- All DOE Office Websites (Extended Search)

Annual Report NERSC Ernest Orlando Lawrence Berkeley National Laboratory 1 Cyclotron Road, Berkeley, CA 94720-8148 This work was supported by the Director, Office of Science, Office of Advanced Scientific Computing Research of the U.S. Department of Energy under Contract No. DE-AC02-05CH11231. National Energy Research Scientific Computing Center 2010 Annual Report Table of Contents 1 The Year in Perspective 5 Research News 6 Overlooked Alternatives: New research restores photoisomerization and thermoelectrics to the roster of promising alternative energy sources 12 A Goldilocks Catalyst: Calculations show why a nanocluster may be "just right" for recycling carbon dioxide by converting it to methanol 16 Down with Carbon Dioxide: Detailed computational models help predict

27

The Argonne Leadership Computing Facility 2010 annual report.  

SciTech Connect

Researchers found more ways than ever to conduct transformative science at the Argonne Leadership Computing Facility (ALCF) in 2010. Both familiar initiatives and innovative new programs at the ALCF are now serving a growing, global user community with a wide range of computing needs. The Department of Energy's (DOE) INCITE Program remained vital in providing scientists with major allocations of leadership-class computing resources at the ALCF. For calendar year 2011, 35 projects were awarded 732 million supercomputer processor-hours for computationally intensive, large-scale research projects with the potential to significantly advance key areas in science and engineering. Argonne also continued to provide Director's Discretionary allocations - 'start up' awards - for potential future INCITE projects. And DOE's new ASCR Leadership Computing Challenge (ALCC) Program allocated resources to 10 ALCF projects, with an emphasis on high-risk, high-payoff simulations directly related to the Department's energy mission or national emergencies, or aimed at broadening the research community capable of using leadership computing resources. While delivering more science today, we've also been laying a solid foundation for high performance computing in the future. After a successful DOE Lehman review, a contract was signed to deliver Mira, the next-generation Blue Gene/Q system, to the ALCF in 2012. The ALCF is working with the 16 projects that were selected for the Early Science Program (ESP) to enable them to be productive as soon as Mira is operational. Preproduction access to Mira will enable ESP projects to adapt their codes to its architecture and collaborate with ALCF staff in shaking down the new system. We expect the 10-petaflops system to stoke economic growth and improve U.S. competitiveness in key areas such as advancing clean energy and addressing global climate change.
Ultimately, we envision Mira as a stepping-stone to exascale-class computers that will be faster than petascale-class computers by a factor of a thousand. Pete Beckman, who served as the ALCF's Director for the past few years, has been named director of the newly created Exascale Technology and Computing Institute (ETCi). The institute will focus on developing exascale computing to extend scientific discovery and solve critical science and engineering problems. Just as Pete's leadership propelled the ALCF to great success, we know that the ETCi will benefit immensely from his expertise and experience. Without question, the future of supercomputing is in good hands. I would like to thank Pete for all his effort over the past two years, during which he oversaw the establishment of ALCF2, the deployment of the Magellan project, and increases in utilization, availability, and the number of projects using ALCF1. He managed the rapid growth of ALCF staff and made the facility what it is today. All the staff and users are better for Pete's efforts.

Drugan, C. (LCF)

2011-05-09T23:59:59.000Z

28

National Energy Research Scientific Computing Center (NERSC) | U.S. DOE  

Office of Science (SC) Website

National Energy Research Scientific Computing Center (NERSC) Advanced Scientific Computing Research (ASCR) ASCR Home About Research Facilities Accessing ASCR Supercomputers Oak Ridge Leadership Computing Facility (OLCF) Argonne Leadership Computing Facility (ALCF) National Energy Research Scientific Computing Center (NERSC) Energy Sciences Network (ESnet) Research & Evaluation Prototypes (REP) Innovative & Novel Computational Impact on Theory and Experiment (INCITE) ASCR Leadership Computing Challenge (ALCC) Science Highlights Benefits of ASCR Funding Opportunities Advanced Scientific Computing Advisory Committee (ASCAC) News & Resources Contact Information Advanced Scientific Computing Research U.S. Department of Energy SC-21/Germantown Building 1000 Independence Ave., SW Washington, DC 20585 P: (301) 903-7486 F: (301)

29

NERSC 2011: High Performance Computing Facility Operational Assessment for the National Energy Research Scientific Computing Center  

E-Print Network (OSTI)

the Argonne and Oak Ridge Leadership Computing Facilities... like Leadership Computing Facilities at Argonne and Oak

Antypas, Katie

2013-01-01T23:59:59.000Z

30

Argonne Leadership Computing Facility 2011 annual report : Shaping future supercomputing.  

Science Conference Proceedings (OSTI)

The ALCF's Early Science Program aims to prepare key applications for the architecture and scale of Mira and to solidify libraries and infrastructure that will pave the way for other future production applications. Two billion core-hours have been allocated to 16 Early Science projects on Mira. The projects, in addition to promising delivery of exciting new science, are all based on state-of-the-art, petascale, parallel applications. The project teams, in collaboration with ALCF staff and IBM, have undertaken intensive efforts to adapt their software to take advantage of Mira's Blue Gene/Q architecture, which, in a number of ways, is a precursor to future high-performance-computing architecture. The Argonne Leadership Computing Facility (ALCF) enables transformative science that solves some of the most difficult challenges in biology, chemistry, energy, climate, materials, physics, and other scientific realms. Users partnering with ALCF staff have reached research milestones previously unattainable, due to the ALCF's world-class supercomputing resources and expertise in computational science. In 2011, the ALCF's commitment to providing outstanding science and leadership-class resources was honored with several prestigious awards. Research on multiscale brain blood flow simulations was named a Gordon Bell Prize finalist. Intrepid, the ALCF's BG/P system, ranked No. 1 on the Graph 500 list for the second consecutive year. The next-generation BG/Q prototype again topped the Green500 list. Skilled experts at the ALCF enable researchers to conduct breakthrough science on the Blue Gene system in key ways. The Catalyst Team matches project PIs with experienced computational scientists to maximize and accelerate research in their specific scientific domains. The Performance Engineering Team facilitates the effective use of applications on the Blue Gene system by assessing and improving the algorithms used by applications and the techniques used to implement those algorithms.
The Data Analytics and Visualization Team lends expertise in tools and methods for high-performance, post-processing of large datasets, interactive data exploration, batch visualization, and production visualization. The Operations Team ensures that system hardware and software work reliably and optimally; system tools are matched to the unique system architectures and scale of ALCF resources; the entire system software stack works smoothly together; and I/O performance issues, bug fixes, and requests for system software are addressed. The User Services and Outreach Team offers frontline services and support to existing and potential ALCF users. The team also provides marketing and outreach to users, DOE, and the broader community.

Papka, M.; Messina, P.; Coffey, R.; Drugan, C. (LCF)

2012-08-16T23:59:59.000Z

31

Argonne Leadership Computing Facility | www.alcf.anl.gov | info@alcf.anl.gov | (877) 737-8615 Materials Science  

E-Print Network (OSTI)

CONTACT Argonne Leadership Computing Facility | www.alcf.anl.gov | info@alcf.anl.gov | (877) 737 and computational readiness. For more information about ALCC and other programs at the ALCF, visit: http://www.alcf 2012 alcf-alcc_list-0212 ALCC ASCR Leadership Computing Challenge ENERGY U.S. DEPARTMENT OF Argonne

Kemner, Ken

32

NERSC / Cray  

NLE Websites -- All DOE Office Websites (Extended Search)

NERSC Cray COE Nuclear Science Science Highlights HPC Requirements Reviews NERSC HPC Achievement Awards Home Science at NERSC Math & Computer Science NERSC Cray COE...

33

Argonne Leadership Computing Facility www.alcf.anl.gov Katherine Riley  

E-Print Network (OSTI)

Argonne Leadership Computing Facility -- www.alcf.anl.gov Katherine Riley Manager and feasibility Managed By INCITE management committee (ALCF & OLCF) DOE Office. Communication with ALCF is extremely helpful. You can request time on Mira

Kemner, Ken

34

Petascale Simulations of Turbulent Nuclear Combustion | Argonne Leadership  

NLE Websites -- All DOE Office Websites (Extended Search)

Flame Bubble in an Open Computational Domain for Constant Flame Speed, Gravitational Acceleration, and Changing Density This image is from the simulation "Flame Bubble in an Open Computational Domain for Constant Flame Speed, Gravitational Acceleration, and Changing Density." Science: Ray Bair, Katherine Riley, Argonne National Laboratory; Anshu Dubey, Don Lamb, Dongwook Lee, University of Chicago; Robert Fisher, University of Massachusetts at Dartmouth; and Dean Townsley, University of Alabama; Visualization: Jonathan Gallagher, University of Chicago; Randy Hudson, John Norris, and Michael E. Papka, Argonne National Laboratory/University of Chicago Petascale Simulations of Turbulent Nuclear Combustion PI Name: Don Lamb PI Email: lamb@oddjob.uchicago.edu

35

CONTACT Argonne Leadership Computing Facility | www.alcf.anl.gov | info@alcf.anl.gov | (877) 737-8615 LEADERSHIP-CLASS COMPUTING  

E-Print Network (OSTI)

Leadership Computing Facility (ALCF) was created and exists today as a preeminent global resource... Argonne Leadership Computing Facility: ALCF Continues a Tradition of Computing Innovation -- a tradition that continues today at the ALCF. The seedbed for such groundbreaking software as MPI, PETSc, PVFS

Kemner, Ken

36

NERSC Acknowledgement  

NLE Websites -- All DOE Office Websites (Extended Search)

Acknowledge NERSC How to get a NERSC account Passwords Managing Your Account How usage is charged Account Policies NERSC Computer User Agreement Form Production Project Accounts...

37

NERSC Stakeholders  

NLE Websites -- All DOE Office Websites (Extended Search)

NERSC Allocation Managers at DOE Office of Advanced Scientific Computing Research (ASCR) DOE Phonebook Advisory Groups NERSC Users Group The NERSC Users Group (NUG) provides...

38

How to Queue a Job | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

Reservations Queueing Running Jobs HTC Mode MPMD and MPIEXEC FAQs Queueing and Running Debugging and Profiling Performance Tools and APIs IBM References Software and Libraries Tukey Eureka / Gadzooks Policies Documentation Feedback Please provide feedback to help guide us as we continue to build documentation for our new computing resource. [Feedback Form] How to Queue a Job Using the Job Resource Manager on BG/P: Commands, Options and Examples This document provides examples of how to submit jobs on the Argonne BG/P system. It also provides examples of commands that can be used to query the status of jobs, what partitions are available, etc. For an introduction to using the job resource manager and running jobs on BG/P, see Running Jobs on the BG/P System. How To Examples and Results

39

High-Order Surface Reconstruction and its Applications | Argonne Leadership  

NLE Websites -- All DOE Office Websites (Extended Search)

High-Order Surface Reconstruction and its Applications Event Sponsor: Mathematics and Computing Science Seminar Start Date: Dec 12 2013 - 10:30am Building/Room: Building 240/Room 4301 Location: Argonne National Laboratory Speaker(s): Navamita Ray Speaker(s) Title: Postdoc Interviewee - MCS Host: Tim Tautges Surface meshes are widely used by many numerical methods for solving partial differential equations. They not only represent computational grids for various discretization methods, but also are numerical objects in themselves. The accuracy of numerical methods, especially high-order methods, is highly dependent on the geometrical accuracy of the mesh as well as on that of differential or integral quantities defined over them. The situation is further complicated if the surface mesh does not have an

40

Magellan at NERSC Progress Report for June 2010  

E-Print Network (OSTI)

DOE clouds (at NERSC and ALCF) provide high availability by... Leadership Computing Facility (ALCF) and the National Energy

Broughton, Richard Canon, Lavanya Ramakrishnan, Brent Draney, Jeff

2011-01-01T23:59:59.000Z



41

NERSC History  

NLE Websites -- All DOE Office Websites (Extended Search)

NERSC History Powering Scientific Discovery Since 1974 Contact: Jon Bashor, jbashor@lbl.gov, +1 510 486 4236 For more information, read "25 Years of Leadership," a historical perspective written at NERSC's quarter-century mark. Download (PDF, 1.7MB) The oil crisis of 1973 did more than create long lines at the gas pumps - it jumpstarted a supercomputing revolution. The quest for alternative energy sources led to increased funding for the Department of Energy's Magnetic Fusion Energy program, and simulating the behavior of plasma in a fusion reactor required a computer center dedicated to this purpose. Founded in 1974 at Lawrence Livermore National Laboratory, the Controlled Thermonuclear Research Computer Center was the first unclassified supercomputer center and was the model for those that

42

Trinity / NERSC-8 SSP  

NLE Websites -- All DOE Office Websites (Extended Search)

NERSC Powering Scientific Discovery Since 1974 Login Site Map | My NERSC search... Go Home About Overview NERSC Mission Contact us Staff Org Chart NERSC History NERSC Stakeholders...

43

NERSC Nick Balthaser NERSC Storage Systems Group  

NLE Websites -- All DOE Office Websites (Extended Search)

Archival Storage at NERSC Nick Balthaser NERSC Storage Systems Group nabalthaser@lbl.gov NERSC User Training March 8, 2011 * NERSC Archive Technologies Overview * Use Cases for the...

44

NERSC News  

E-Print Network (OSTI)

2007 New Director for NERSC Kathy Yelick, a professor of... been named director of NERSC. Yelick has received a number... as the next director of NERSC, and only the fifth director

Wang, Ucilia

2008-01-01T23:59:59.000Z

45

About NERSC  

NLE Websites -- All DOE Office Websites (Extended Search)

Visitor Info Web Policies Home About About NERSC The National Energy Research Scientific Computing Center (NERSC) is the primary scientific computing facility for the Office of...

46

Porting Charm++/NAMD to IBM Blue Gene/Q Wei Jiang Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

Porting Charm++/NAMD to IBM Blue Gene/Q Wei Jiang Argonne Leadership Computing Facility March 7 NAMD - Parallel molecular dynamics code designed for high-performance simulation of large biomolecular systems Portable to all popular supercomputing platforms Great scalability based on Charm++ parallel objects Scientific aims on Blue Gene/Q Ensemble run that launches large number of replicas concurrently - mainly for energetic simulation High throughput simulation for large scale systems ~100M atoms Requirements for charm++ New communication layer that supports parallel/parallel runs Enable charm++ programming paradigm on Parallel Active Messaging Interface (PAMI) Parallel Structure of NAMD Hybrid force/spatial decomposition Adaptive Overlap of Communication and Computation

47

NERSC Featured Announcements  

NLE Websites -- All DOE Office Websites (Extended Search)

Seeks New Director - Job Position Posted February 19, 2012 by Francesca Verdier | 0 Comments Current NERSC Director Kathy Yelick was named Associate Lab Director for Computing Sciences at Berkeley Lab in September 2010. In order to focus more on strategic planning at the lab, she has opened a search for a new NERSC Division Director. This position provides vision and strategic leadership to establish and maintain the leading-edge computing capability available to scientists at NERSC while participating in the highest level of management at Berkeley Lab. You can view the NERSC Director Job Posting. 0 comments | Read the full post User Science Exhibition March 28-29 in Washington DC February 17, 2012 by Francesca Verdier | 0 Comments This March 28 and 29 the National User Facilities Organization is holding a

48

NERSC's Data Transfer Nodes  

NLE Websites -- All DOE Office Websites (Extended Search)

Data Transfer Nodes Overview: The data transfer nodes are NERSC servers dedicated to performing transfers between NERSC data storage resources such as HPSS and the NERSC Global Filesystem (NGF), and storage resources at other sites including the Leadership Computing Facility at ORNL (Oak Ridge National Laboratory). These nodes are being managed (and monitored for performance) as part of a collaborative effort between ESnet, NERSC, and ORNL to enable high performance data movement over the high-bandwidth 10Gb ESnet wide-area network (WAN). Restrictions: In order to keep the data transfer nodes performing optimally for data transfers, we request that users restrict interactive use of these systems to tasks that are related to preparing data for transfer or are directly

49

NERSC Annual Report 2002  

E-Print Network (OSTI)

Appendix A: NERSC Policy ... Appendix B: NERSC Computational Review ... Appendix C: NERSC Users Group Executive

Hules, John

2003-01-01T23:59:59.000Z

50

NERSC 2001 Annual Report  

E-Print Network (OSTI)

NERSC Program Advisory ... NERSC STRATEGIC PLAN High-End ... NERSC Systems and Services Oakland Facility

Hules editor, John

2001-01-01T23:59:59.000Z

51

NERSC Annual Report 2005  

E-Print Network (OSTI)

National Laboratory ... NERSC: National Energy Research ... Center ... NGF: NERSC Global Filesystem ... NMR: ... and Programming Group (NERSC) ... PB: Petabyte

Hules Ed., John

2006-01-01T23:59:59.000Z

52

NERSC User Demographics  

NLE Websites -- All DOE Office Websites (Extended Search)

NERSC Usage Demographics: 2012 NERSC Usage Demographics, 2011 NERSC Usage Demographics ...

53

ASCR Leadership Computing Challenge (ALCC) | U.S. DOE Office of Science  

Office of Science (SC) Website

ASCR ASCR Leadership Computing Challenge (ALCC) Advanced Scientific Computing Research (ASCR) ASCR Home About Research Facilities Accessing ASCR Supercomputers Oak Ridge Leadership Computing Facility (OLCF) Argonne Leadership Computing Facility (ALCF) National Energy Research Scientific Computing Center (NERSC) Energy Sciences Network (ESnet) Research & Evaluation Prototypes (REP) Innovative & Novel Computational Impact on Theory and Experiment (INCITE) ASCR Leadership Computing Challenge (ALCC) ALCC Application Details ALCC Past Awards Frequently Asked Questions Science Highlights Benefits of ASCR Funding Opportunities Advanced Scientific Computing Advisory Committee (ASCAC) News & Resources Contact Information Advanced Scientific Computing Research U.S. Department of Energy

54

NERSC Nick Balthaser NERSC Storage Systems Group  

NLE Websites -- All DOE Office Websites (Extended Search)

Introduction to HPSS at NERSC Nick Balthaser NERSC Storage Systems Group nabalthaser@lbl.gov Joint Genome Institute, Walnut Creek, CA Feb 10, 2011 * NERSC Archive Technologies...

55

NERSC Software  

NLE Websites -- All DOE Office Websites (Extended Search)

Debugging and Profiling Visualization and Analytics Grid Software and Services NERSC Software Downloads Accounts & Allocations Policies Data Analytics & Visualization...

56

User Facilities | Argonne National Laboratory  

NLE Websites -- All DOE Office Websites (Extended Search)

User Facilities Advanced Photon Source Argonne Leadership Computing Facility Argonne Tandem Linear Accelerator System Center for Nanoscale Materials Electron Microscopy Center...

57

Oak Ridge Leadership Computing Facility (OLCF) | U.S. DOE Office of Science  

Office of Science (SC) Website

Advanced Scientific Computing Research (ASCR) ASCR Home About Research Facilities Accessing ASCR Supercomputers Oak Ridge Leadership Computing Facility (OLCF) Argonne Leadership Computing Facility (ALCF) National Energy Research Scientific Computing Center (NERSC) Energy Sciences Network (ESnet) Research & Evaluation Prototypes (REP) Innovative & Novel Computational Impact on Theory and Experiment (INCITE) ASCR Leadership Computing Challenge (ALCC) Science Highlights Benefits of ASCR Funding Opportunities Advanced Scientific Computing Advisory Committee (ASCAC) News & Resources Contact Information Advanced Scientific Computing Research U.S. Department of Energy SC-21/Germantown Building 1000 Independence Ave., SW Washington, DC 20585 P: (301) 903-7486 F: (301)

58

NERSC-6 Benchmarks  

NLE Websites -- All DOE Office Websites (Extended Search)

NERSC-6 Benchmarks: The NERSC-6 application benchmarks were used in the acquisition process that resulted in the NERSC Cray XE6 ("Hopper") system. A technical report...

59

NERSC Publications and Reports  

NLE Websites -- All DOE Office Websites (Extended Search)

Publications & Reports: Download the NERSC Strategic Plan (PDF | 3.2 MB). NERSC Annual Reports: NERSC's annual reports highlight the scientific accomplishments of its users...

60

NERSC User Accounts  

NLE Websites -- All DOE Office Websites (Extended Search)

User Accounts (Logins). Acknowledge NERSC: Please acknowledge in your publications the role NERSC facilities played in your research. How to get a NERSC account...

Note: This page contains sample records for the topic "nersc argonne leadership" from the National Library of EnergyBeta (NLEBeta).
While these samples are representative of the content of NLEBeta,
they are not comprehensive nor are they the most current set.
We encourage you to perform a real-time search of NLEBeta
to obtain the most current and comprehensive results.


61

NERSC Annual Reports  

NLE Websites -- All DOE Office Websites (Extended Search)

NERSC Annual Reports. NERSC Annual Report 2000: image anrep2000.png (png, 203 KB); file:...

62

Visiting NERSC  

NLE Websites -- All DOE Office Websites (Extended Search)

stations Parking near NERSC's Oakland Scientific Facility Oakland Weather Oakland Restaurant Guide Downtown Oakland Hotels - the Downtown Oakland Courtyard by Marriott is close...

63

NERSC Mission  

NLE Websites -- All DOE Office Websites (Extended Search)

is to accelerate scientific discovery at the DOE Office of Science through high performance computing and data analysis. NERSC is the principal provider of high performance...

64

NERSC Passwords  

NLE Websites -- All DOE Office Websites (Extended Search)

(also known as a login name) and associated password that permits her/him to access NERSC resources. This username/password pair may be used by a single individual only:...

65

Getting Started at NERSC  

NLE Websites -- All DOE Office Websites (Extended Search)

Operations for: Passwords & Off-Hours Status 1-800-66-NERSC, option 1 or 510-486-6821 Account Support https://nim.nersc.gov accounts@nersc.gov 1-800-66-NERSC, option 2 or...

66

NERSC-8  

NLE Websites -- All DOE Office Websites (Extended Search)

NERSC-8 Procurement Overview: The U.S. Department of Energy (DOE) Office of Science (SC) requires a high performance production computing system in the 2015/2016 timeframe to support the rapidly increasing computational demands of the entire spectrum of DOE SC computational research. The system needs to provide a significant upgrade in computational capabilities, with a target increase of 10-30 times the sustained performance of the NERSC-6 Hopper system. In addition to increasing the computational capability available to DOE computational scientists, the system also needs to be a platform that will begin to transition DOE scientific applications to more energy-efficient, many-core architectures. This need is closely aligned with the US

67

NERSC/DOE FES Requirements Workshop Worksheet - John Ludlow  

NLE Websites -- All DOE Office Websites (Extended Search)

allocations at NERSC in Oakland, CA, NICS in Knoxville, TN, NCCS in Oak Ridge, TN, and ALCF in Argonne, IL; with pending proposals to HECToR in Edinburgh, UK and PRACE in Jülich,...

68

NERSC Calendar  

NLE Websites -- All DOE Office Websites (Extended Search)

to display correctly. You can visit the HTML-only version of this page at: https://www.google.com/calendar/hosted/lbl.gov/html/embed?title=NERSC%20EVENTS&showTabs=0&showCalendars=0&...

69

NERSC News  

Science Conference Proceedings (OSTI)

This month's issue has the following 3 articles: (1) Kathy Yelick is the new director for the National Energy Research Scientific Computing Center (NERSC); (2) Head of the Class--A Cray XT4 named Franklin passes a rigorous test and becomes an official member of the NERSC supercomputing family; and (3) Model Comparisons--A fusion research group published several recent papers examining the results of two types of turbulence simulations and their impact on tokamak designs.

Wang, Ucilia

2007-11-25T23:59:59.000Z

70

Katie Antypas Named New Head of NERSC User Services Department  

NLE Websites -- All DOE Office Websites (Extended Search)

Katie Antypas Named New Head of NERSC Services Department, September 3, 2013. Katie Antypas, who has led NERSC's User Services Group since October 2010, has been named as the new Services Department Head, effective September 23. Antypas succeeds Francesca Verdier, who will serve as Allocations Manager until her planned retirement in June 2014. Antypas is also the project lead for the NERSC-8 system procurement, a project to deploy NERSC's next-generation system in the 2015 timeframe. "Katie's leadership in ensuring that NERSC users are able to maximize their use of both our current and future systems has positioned her well to help lead NERSC users and staff into the next era of extreme scale

71

A SLPEC/EQP Method for MPECs | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

A SLPEC/EQP Method for MPECs. Event Sponsor: Mathematics and Computing Science - LANS Seminar. Start Date: Dec 4 2013 - 3:00pm. Building/Room: Building 240/Room 4301. Location: Argonne National Laboratory. Speaker(s): Todd Munson, Sven Leyffer. Speaker(s) Title: Argonne National Laboratory, MCS. Host: Stefan Wild. Event Website: http://www.mcs.anl.gov/research/LANS/events/listn/detail.php?id=2249. We discuss a new method for mathematical programs with complementarity constraints that is globally convergent to B-stationary points. The method solves a linear program with complementarity constraints to obtain an estimate of the active set. It then fixes the activities and solves an equality-constrained quadratic program to obtain fast convergence. The method uses a filter to promote global convergence. We establish
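For context, the class of problems named in the abstract (mathematical programs with complementarity constraints) is usually written in the following standard form; this is the generic textbook formulation, not the speakers' specific notation:

```latex
\begin{aligned}
\min_{x}\quad & f(x)\\
\text{s.t.}\quad & c(x) \ge 0,\\
& 0 \le G(x) \;\perp\; H(x) \ge 0,
\end{aligned}
```

where $\perp$ requires $G_i(x)\,H_i(x) = 0$ for every component $i$. Roughly, a B-stationary point is stationary for every branch problem obtained by fixing each complementarity pair to one of its two active configurations, which is why estimating the active set (as the method in the abstract does) is the computationally hard step.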

72

NERSC Computer Security  

NLE Websites -- All DOE Office Websites (Extended Search)

NERSC Computer Security: NERSC computer security efforts are aimed at protecting NERSC systems and its users' intellectual property from unauthorized access or modification. Among NERSC's security goals are: 1. To protect NERSC systems from unauthorized access. 2. To prevent the interruption of services to its users. 3. To prevent misuse or abuse of NERSC resources. Security Incidents: If you think there has been a computer security incident you should contact NERSC Security as soon as possible at security@nersc.gov. You may also call the NERSC consultants (or NERSC Operations during non-business hours) at 1-800-66-NERSC. Please save any evidence of the break-in and include as many details as possible in your communication with us. NERSC Computer Security Tutorial

73

NERSC-8 / Trinity Benchmarks  

NLE Websites -- All DOE Office Websites (Extended Search)

NERSC-8 / Trinity Benchmarks: These benchmark programs are for use as part of the joint NERSC / ACES NERSC-8/Trinity system procurement. There are two basic kinds of...

74

LAPACK at NERSC  

NLE Websites -- All DOE Office Websites (Extended Search)

http://help.nersc.gov consult@nersc.gov 1-800-66-NERSC, option 3 or 510-486-8611 Home For Users Software Programming Libraries Math Libraries LAPACK...

75

NERSC Email Lists  

NLE Websites -- All DOE Office Websites (Extended Search)

Email Lists: When you get a NERSC user account, an email alias is created for you of the form username@nersc.gov. username@nersc.gov is an email alias that will forward...

76

NERSC 2001 Annual Report  

E-Print Network (OSTI)

official DOE response to the NERSC Strategic Plan for 2002 ... NERSC strategic plan ... Intensive Support Enables DOE's Highest ... the DOE accepted the broad outline of the strategic plan and

Hules editor, John

2001-01-01T23:59:59.000Z

77

NERSC Software Downloads  

NLE Websites -- All DOE Office Websites (Extended Search)

NERSC Software Downloads HPSS Software Downloads Accounts & Allocations Policies Data Analytics & Visualization Science Gateways User Surveys NERSC Users Group User Announcements...

78

NERSC Accounts and Allocations  

NLE Websites -- All DOE Office Websites (Extended Search)

Allocations NIM (NERSC Information Management portal) Awarded projects Policies Data Analytics & Visualization Science Gateways User Surveys NERSC Users Group User Announcements...

79

Qbox at NERSC  

NLE Websites -- All DOE Office Websites (Extended Search)

pseudopotential formalism. Qbox is designed for operation on large parallel computers. How to Access Qbox Qbox is available at NERSC only on Hopper. NERSC uses modules to...

80

Connecting to NERSC  

NLE Websites -- All DOE Office Websites (Extended Search)

Transferring Data Network Performance Queues and Scheduling Job Logs & Analytics Training & Tutorials Software Accounts & Allocations Policies Data Analytics & Visualization Data Management Policies Science Gateways User Surveys NERSC Users Group User Announcements Help Operations for: Passwords & Off-Hours Status 1-800-66-NERSC, option 1 or 510-486-6821 Account Support https://nim.nersc.gov accounts@nersc.gov 1-800-66-NERSC, option 2 or 510-486-8612 Consulting http://help.nersc.gov consult@nersc.gov 1-800-66-NERSC, option 3 or 510-486-8611 Home » For Users » Network Connections » Connecting to NERSC. Connecting to NERSC: All NERSC computers (except HPSS) are reached using either the Secure Shell (SSH) communication and encryption protocol (version 2) or by Grid tools
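Since the snippet notes that NERSC systems are reached via SSH protocol version 2, a minimal client-side configuration might look like this (the host alias, hostname, and username below are illustrative placeholders, not official values; check NERSC's documentation for current login hostnames):

```
# Hypothetical ~/.ssh/config entry for a NERSC login node.
# Host alias, HostName, and User are placeholders for illustration.
Host nersc
    HostName edison.nersc.gov
    User my_nersc_username
    Protocol 2
```

With an entry like this in place, `ssh nersc` opens an SSH version 2 session to the named login node without retyping the hostname and username each time.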



81

Reliability Results of NERSC Systems  

E-Print Network (OSTI)

of various systems at NERSC. They graciously provided me ... Reliability Results of NERSC Systems, Akbar Mokhtarani, William Kramer, Jason Hick, NERSC - LBNL. Abstract: In order to

Mokhtarani, Akbar; Petascale Data Storage Institute (PDSI)

2008-01-01T23:59:59.000Z

82

Blood vessel simulation probes secrets of brain | Argonne National...  

NLE Websites -- All DOE Office Websites (Extended Search)

can be treated. Argonne's Blue Gene/P supercomputer, housed at the Argonne Leadership Computing Facility (ALCF), allows scientists to tackle these immense problems with the power...

83

Breakthrough_Science_NERSC-CTW.pptx  

NLE Websites -- All DOE Office Websites (Extended Search)

Breakthrough Science at NERSC Richard Gerber NERSC User Services Group Cray Technical Workshop, Isle of Palms, SC February 25, 2009 Outline * Overview of NERSC * NERSC's Cray XT4 -...

84

Compilers at NERSC  

NLE Websites -- All DOE Office Websites (Extended Search)

Development Tools Programming Libraries Debugging and Profiling Visualization and Analytics Grid Software and Services NERSC Software Downloads Accounts & Allocations Policies Data Analytics & Visualization Data Management Policies Science Gateways User Surveys NERSC Users Group User Announcements Help Operations for: Passwords & Off-Hours Status 1-800-66-NERSC, option 1 or 510-486-6821 Account Support https://nim.nersc.gov accounts@nersc.gov 1-800-66-NERSC, option 2 or 510-486-8612 Consulting http://help.nersc.gov consult@nersc.gov 1-800-66-NERSC, option 3 or 510-486-8611 Home » For Users » Software » Compilers. Compilers: PGI Compilers (Fortran, C, C++). The Portland Group Fortran compiler offers full support for the Fortran 77, 90 and 95 language standards, as well as C and C++. Read More »

85

Mathematical Libraries at NERSC  

NLE Websites -- All DOE Office Websites (Extended Search)

FFTW GNU Science Library (GSL) LAPACK LibSci Math Kernel Library (MKL) NAG PETSc PSPLINE ScaLAPACK SLEPc SPRNG Trilinos Math Libraries List NCAR/NCL IO Libraries ALTD Debugging and Profiling Visualization and Analytics Grid Software and Services NERSC Software Downloads Accounts & Allocations Policies Data Analytics & Visualization Data Management Policies Science Gateways User Surveys NERSC Users Group User Announcements Help Operations for: Passwords & Off-Hours Status 1-800-66-NERSC, option 1 or 510-486-6821 Account Support https://nim.nersc.gov accounts@nersc.gov 1-800-66-NERSC, option 2 or 510-486-8612 Consulting http://help.nersc.gov consult@nersc.gov 1-800-66-NERSC, option 3 or 510-486-8611 Home » For Users » Software » Programming Libraries » Math Libraries

86

NERSC Software List  

NLE Websites -- All DOE Office Websites (Extended Search)

Compilers Development Tools Programming Libraries Debugging and Profiling Visualization and Analytics Grid Software and Services NERSC Software Downloads Accounts & Allocations Policies Data Analytics & Visualization Data Management Policies Science Gateways User Surveys NERSC Users Group User Announcements Help Operations for: Passwords & Off-Hours Status 1-800-66-NERSC, option 1 or 510-486-6821 Account Support https://nim.nersc.gov accounts@nersc.gov 1-800-66-NERSC, option 2 or 510-486-8612 Consulting http://help.nersc.gov consult@nersc.gov 1-800-66-NERSC, option 3 or 510-486-8611 Home » For Users » Software » All Software List. All Software List: Package Platform Category Version Module Install Date Date Made Default edison / 1.0.0 craypkg-gen/1.0.0 2013-11-27 2013-11-27

87

W.J. Cody Associates | Argonne National Laboratory  

NLE Websites -- All DOE Office Websites (Extended Search)

computing systems. You will work in a research environment that includes the Argonne Leadership Computing Facility, with access to one of DOE's leadership-class computers. W....

88

NERSC Featured Announcements  

NLE Websites -- All DOE Office Websites (Extended Search)

Phase-1 of NERSC's Cray Edison System Has Arrived November 28, 2012 by Francesca Verdier | 0 Comments Phase-1 of the new Edison system, a Cray XC30 (Cascade), arrived at NERSC on...

89

Matlab at NERSC  

NLE Websites -- All DOE Office Websites (Extended Search)

computation. How to Use Matlab Matlab is available at NERSC on Carver, Hopper, and Edison. The number of Matlab licenses at NERSC is not very large, so users should not be...

90

NERSC 2012 awards  

NLE Websites -- All DOE Office Websites (Extended Search)

NERSC 2012 awards, December 8, 2011, by Francesca Verdier. The NERSC 2012 awards have been posted.

91

NERSC 2013 awards  

NLE Websites -- All DOE Office Websites (Extended Search)

NERSC 2013 awards, December 14, 2012, by David Turner. The NERSC 2013 awards have been posted.

92

NERSC Annual Report 1999  

Science Conference Proceedings (OSTI)

The NERSC Annual Report highlights major events and accomplishments at the National Energy Research Scientific Computing Center during FY 1999. Topics include research by NERSC clients and staff and integration of new computing technologies.

Hules (editor), John A.

2000-03-01T23:59:59.000Z

93

NERSC In the News  

NLE Websites -- All DOE Office Websites (Extended Search)

Microscopy Reveals 'Atomic Antenna' Behavior in Graphene January 21, 2012 | NERSC Managers Shed Light on 'Edison' April 29, 2013 | Named Edison, NERSC's new machine is a Cray XC30...

94

2012 NERSC User Survey  

NLE Websites -- All DOE Office Websites (Extended Search)

2012 NERSC User Survey Text: The 2012 NERSC User Survey is closed. The following is the text of the survey. Section 1: Overall Satisfaction with...

95

NERSC Annual Reports  

NLE Websites -- All DOE Office Websites (Extended Search)

NERSC Annual Reports. NERSC Annual Report 2011: image annrep2011.png (png, 2.7 MB); file: annrep2011.pdf |...

96

NERSC-BES.pptx  

NLE Websites -- All DOE Office Websites (Extended Search)

October 8, 2013. NERSC Overview. NERSC History: 1974, founded at Livermore to support fusion research with a CDC system; 1978, Cray 1 installed; 1983, expanded to s...

97

NERSC 2001 Annual Report  

SciTech Connect

The National Energy Research Scientific Computing Center (NERSC) is the primary computational resource for scientific research funded by the DOE Office of Science. The Annual Report for FY2001 includes a summary of recent computational science conducted on NERSC systems (with abstracts of significant and representative projects); information about NERSC's current systems and services; descriptions of Berkeley Lab's current research and development projects in applied mathematics, computer science, and computational science; and a brief summary of NERSC's Strategic Plan for 2002-2005.

Hules, John (editor)

2001-12-12T23:59:59.000Z

98

NERSC Annual Report 2002  

SciTech Connect

The National Energy Research Scientific Computing Center (NERSC) is the primary computational resource for scientific research funded by the DOE Office of Science. The Annual Report for FY2002 includes a summary of recent computational science conducted on NERSC systems (with abstracts of significant and representative projects), and information about NERSC's current and planned systems and service

Hules, John

2003-01-31T23:59:59.000Z

99

NERSC Annual Report 2005  

Science Conference Proceedings (OSTI)

The National Energy Research Scientific Computing Center (NERSC) is the premier computational resource for scientific research funded by the DOE Office of Science. The Annual Report includes summaries of recent significant and representative computational science projects conducted on NERSC systems as well as information about NERSC's current and planned systems and services.

Hules (Ed.), John

2006-07-31T23:59:59.000Z

100

NERSC Annual Report 2004  

Science Conference Proceedings (OSTI)

The National Energy Research Scientific Computing Center (NERSC) is the premier computational resource for scientific research funded by the DOE Office of Science. The Annual Report includes summaries of recent significant and representative computational science projects conducted on NERSC systems as well as information about NERSC's current and planned systems and services.

Hules, John; Bashor, Jon; Yarris, Lynn; McCullough, Julie; Preuss, Paul; Bethel, Wes

2005-04-15T23:59:59.000Z



101

NERSC Announcements  

NLE Websites -- All DOE Office Websites (Extended Search)

Announcements: Email Announcement Archive. Year: 2014. Select Year: 2014 2013 2012 2011 2010 2009 2008 2007 2006 2005 2004 2003 2002 2001 2000 all. Select List: all users mpp jgi hopper edison carver pdsf nug managers. Retired Systems: bassi davinci franklin jacquard pvp euclid seaborg. Search Announcements (Body and Title Search). Subject / Date / Author: Re: [Edison-users] [usgc] Edison will be reserved for system debugging every other day starting from 10/10/2013 Thursday 16:00 PDT, 2014-01-10 15:57:24, Zhengji Zhao; [Jgi-users] Jenkins talk today at Noon (Dial-in for JGI participants), 2014-01-10 09:45:33, Douglas Jacobsen; [Jgi-users] Please complete the NERSC 2013 User Survey, 2014-01-09 17:48:50, Douglas Jacobsen

102

NERSC Visualization  

NLE Websites -- All DOE Office Websites (Extended Search)

Visualization: Visualization facilitates data exploration. It often supports simulation since it allows inspection of the output varying in time or with changes in parameter values, or for the locations of interesting regions in large data sets. The term visualization often is used to describe the rendering of 3D isosurfaces and volumes, whereas the term graphics usually is applied to plots such as scatter plots and histograms. Our mission is to assist researchers in achieving their scientific goals - solving some of the world's most challenging problems in scientific data understanding - through visualization and analytics while simultaneously advancing the state-of-the-art in visualization through our own research. From the standpoint of a user of NERSC, for example, the

103

Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

publishing. Nazarewicz, W., Schunck, N., Wild, S.,* "Quality Input for Microscopic Fission Theory," Stockpile Stewardship Quarterly, May 2012, vol. 2, no. 1, pp. 6-7. ALCF | 2012...

104

Simulation Computed at NERSC Matches Historic Gamma-Ray Burst...  

NLE Websites -- All DOE Office Websites (Extended Search)

NERSC Powering Scientific Discovery Since 1974. Home About Overview NERSC Mission Contact us Staff Org Chart NERSC History NERSC Stakeholders...

105

Argonne materials scientist Vilas Pol (former postdoc) was recently featured on the PBS NOVA series "Making Stuff: Cleaner," where  

E-Print Network (OSTI)

) Argonne Leadership Computing Facility (ALCF) Transportation Research and Analysis Computing Center (TRACC

Kemner, Ken

106

Trinity NERSC-8 Draft RFP  

NLE Websites -- All DOE Office Websites (Extended Search)

NERSC-8 / Trinity Benchmarks Trinity / NERSC-8 Capability Improvement Important Trinity / NERSC-8 Documents Q & A Testbeds NERSC 800 666-3772 Passwords & Account Support Web: http://nim.nersc.gov Email: accounts@nersc.gov Phone: 510 486-8612 Consulting Web: http://help.nersc.gov Email: consult@nersc.gov Phone: 510 486-8611 Home » Systems » Trinity-NERSC8-RFP Trinity / NERSC-8 RFP. NERSC and the Alliance for Computing at Extreme Scale (ACES), a collaboration between Los Alamos National Laboratory and Sandia National Laboratory, are partnering to release a joint Request for Proposal (RFP) for two next-generation systems, Trinity and NERSC-8, to be delivered in the 2015 time frame. Interested Offerors are advised to monitor this web site and the LANL

107

Lisa Gerhardt NERSC User Services"  

NLE Websites -- All DOE Office Websites (Extended Search)

Gerhardt " NERSC User Services" September 10, 2013" Getting Started at NERSC: Running Jobs Jobs at NERSC * Most j obs a re p arallel, u sing 1 0s t o 1 00,000+ c ores * Produc8on r...

108

NERSC Workshops and Training Events  

NLE Websites -- All DOE Office Websites (Extended Search)

BerkeleyGW2013 Edison Performance New User Training HPC Using GPUs Getting Started at NERSC Training -- Edison Getting Started at NERSC NERSC Training at SC12 Effective Use of...

109

NERSC Image and Video Galleries  

NLE Websites -- All DOE Office Websites (Extended Search)

Archive User Science Images Home News & Publications Galleries. NERSC Systems: In this gallery view images and videos of the systems that undergird all NERSC...

110

NERSC Allocations Overview and Eligibility  

NLE Websites -- All DOE Office Websites (Extended Search)

NIM (NERSC Information Management portal) Awarded projects Policies Data Analytics & Visualization Science Gateways User Surveys NERSC Users Group User Announcements Help...

111

NIM (NERSC Information Management) system  

NLE Websites -- All DOE Office Websites (Extended Search)

NERSC Information Management (NIM) portal The NERSC Information Management (NIM) system is a web portal used to view and modify user account, usage, and allocations information....

112

Characterizing I/O performance on leadership-class systems |...  

NLE Websites -- All DOE Office Websites (Extended Search)

HPC production applications. Initially run on the IBM Blue Gene systems at the Argonne Leadership Computing Facility, Darshan was recently adapted by Argonne researchers,...

113

How to Contact NERSC  

NLE Websites -- All DOE Office Websites (Extended Search)

Contact us: Technical Questions, Computer Operations, Passwords, Account Support: 1-800-666-3772 (or 1-510-486-8600). Computer Operations: menu option 1 (24/7) (for passwords and off-hours problem reports/inquiries). Account Support: menu option 2 or accounts@nersc.gov or http://nim.nersc.gov. HPC Consulting: menu option 3 or consult@nersc.gov or http://help.nersc.gov (On-line Help Desk, "ServiceNow"). NERSC Center Contacts: Sudip Dosanjh, NERSC Director, Berkeley Lab, MS 50B4224, 1 Cyclotron Road, Berkeley, CA 94720-8150 USA, email: sudip@lbl.gov, phone: (510) 495-2488, fax: (510) 486-4316. Francesca Verdier, Services Department Head, Berkeley Lab, MS 943-256, 1 Cyclotron Road, Berkeley, CA 94720-8150 USA, email: fverdier@lbl.gov

114

NERSC User Environment  

NLE Websites -- All DOE Office Websites (Extended Search)

NERSC User Environment. Home Directories, Shells and Startup Files: All NERSC systems (except PDSF) use global home directories, which are pre-populated with startup files (also known as dot files) for all available shells. NERSC fully supports bash, csh, and tcsh as login shells. Other shells (ksh, sh, and zsh) are also available. The default shell at NERSC is csh. Changing Your Default Login Shell: Use the NERSC Information Management (NIM) portal if you want to change your default login shell. To do this, select Change Shell from the NIM Actions pull-down menu. Managing Your Startup Files: The "standard" dot-files are symbolic links to read-only files that NERSC controls. For each standard dot-file, there is a user-writable ".ext" file.
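The dot-file scheme described above (a read-only, NERSC-controlled standard file plus a user-writable ".ext" companion) can be sketched as follows; the customizations shown are hypothetical examples, not NERSC-supplied defaults:

```
# ~/.cshrc is a NERSC-controlled read-only symlink; personal settings
# go in the user-writable companion file instead:
#
#   ~/.cshrc.ext   (sourced by the standard ~/.cshrc)
#
# Hypothetical customizations (csh syntax, since csh is the default shell):
setenv PATH "${HOME}/bin:${PATH}"
module load gsl
```

Keeping site-managed defaults separate from user edits this way lets the center update the standard startup files for everyone without clobbering individual customizations.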

115

NERSC Holiday Schedule  

NLE Websites -- All DOE Office Websites (Extended Search)

NERSC Holiday Schedule, December 20, 2013, by Francesca Verdier. All NERSC computing and storage systems will remain in full operation throughout the holiday season (Tuesday December 24 through Wednesday January 1). From Tuesday December 24 through Wednesday January 1, NERSC Consulting and Account Support services will be available *only* on Friday December 27 and Monday December 30, from 8:00 to 5:00 Pacific Time. Normal Consulting and Account Support schedules will resume on Thursday, January 2. As always, critical system problems may be reported at any time to NERSC Operations at 1-800-666-3772, menu option 1, or 1-510-486-6821. You should also contact Operations if you need to have your password reset (or login failures cleared) over the holidays.

116

Open Science Grid at NERSC  

NLE Websites -- All DOE Office Websites (Extended Search)

Open Science Grid at NERSC | Tags: Grid. NERSC provides computing to Open Science Grid (OSG) users through a special allocation. OSG users must submit an OSG new user request form. This gives them access to NERSC resources, using the standard OSG interfaces. Users must also complete a NERSC Computer Usage Policy Form before their access to NERSC can be activated. In order to run at NERSC, an OSG user must go through the following process. If you already have a NERSC account, skip to step 5. Apply for a NERSC account through the OSG new user request form. Read and sign the NERSC Computer Usage Policy Form. Mail or fax the above form to NERSC Account support. Contact the person in charge of the group's activity at NERSC. By default, you will be associated with the OSG allocation and your P.I./P.I.

117

Exploring HPSS bandwidth - NERSC production experience  

E-Print Network (OSTI)

Exploring HPSS Bandwidth - NERSC Production Experience ... Scientific Computing Center (NERSC). These tools provide graphical ... large supercomputing sites. NERSC is a developer site within ...

Holmes, Harvard H.

2003-01-01T23:59:59.000Z

118

The NERSC Sustained System Performance (SSP) Metric  

E-Print Network (OSTI)

The NERSC Sustained System Performance (SSP) Metric, William ... (SSP) metric developed by NERSC for its procurements. The ... is important. One system at NERSC consistently slowed down ...

Kramer, William; Shalf, John; Strohmaier, Erich

2005-01-01T23:59:59.000Z

119

NERSC-ScienceHighlightsOctober2012.pptx  

NLE Websites -- All DOE Office Websites (Extended Search)

October 2012 NERSC Science Highlights. NERSC User Science Highlights: High Energy Physics. NERSC users have published the first paper to investigate the implications of...

120

Transferring Data at NERSC  

NLE Websites -- All DOE Office Websites (Extended Search)

Transferring Data: Advice and Overview. NERSC provides many facilities for storing data and performing analysis. However, transferring data - whether over the wide area network...

Note: This page contains sample records for the topic "nersc argonne leadership" from the National Library of EnergyBeta (NLEBeta).
While these samples are representative of the content of NLEBeta,
they are not comprehensive nor are they the most current set.
We encourage you to perform a real-time search of NLEBeta
to obtain the most current and comprehensive results.


121

Fusion Science at NERSC  

NLE Websites -- All DOE Office Websites (Extended Search)

Rotating Plasma Finding is Key for ITER Heavy-Ion Fusion Science (HIFS) Math & Computer Science Nuclear Science Science Highlights HPC Requirements Reviews NERSC HPC Achievement...

122

NERSC Computational Systems  

NLE Websites -- All DOE Office Websites (Extended Search)

My NERSC Getting Started Computational Systems Edison Hopper Carver Dirac PDSF Genepool Testbeds Retired Systems Data & File Systems Network Connections Queues and Scheduling Job...

123

NERSC Applications Software  

NLE Websites -- All DOE Office Websites (Extended Search)

For Users Software Applications Applications List of Applications List of math, chemistry and materials science software installed at NERSC. Mathematical Applications...

124

Energy Science at NERSC  

NLE Websites -- All DOE Office Websites (Extended Search)

Artificial Photosynthesis II - EFRC Carbon Capture and Sequestration Activities at NERSC Novel Methods for Harvesting Solar Energy Engineering Science Environmental Science Fusion...

125

NERSC Computational Systems  

NLE Websites -- All DOE Office Websites (Extended Search)

& Allocations Policies Data Analytics & Visualization Science Gateways User Surveys NERSC Users Group User Announcements Help Operations for: Passwords & Off-Hours Status...

126

NERSC Training and Tutorials  

NLE Websites -- All DOE Office Websites (Extended Search)

Online Tutorials Courses NERSC Training Accounts Request Form Training Links OSF HPC Seminars Software Accounts & Allocations Policies Data Analytics & Visualization Science...

127

NERSC Science Gateways  

NLE Websites -- All DOE Office Websites (Extended Search)

Analytics & Visualization Science Gateways Demos Database services OpenDAP User Surveys NERSC Users Group User Announcements Help Home For Users Science Gateways Science...

128

Dirac, NERSC's GPU Testbed  

NLE Websites -- All DOE Office Websites (Extended Search)

& Allocations Policies Data Analytics & Visualization Science Gateways User Surveys NERSC Users Group User Announcements Help Operations for: Passwords & Off-Hours Status...

129

NERSC's Genepool Cluster  

NLE Websites -- All DOE Office Websites (Extended Search)

& Allocations Policies Data Analytics & Visualization Science Gateways User Surveys NERSC Users Group User Announcements Help Operations for: Passwords & Off-Hours Status...

130

NERSC HPSS Troubleshooting  

NLE Websites -- All DOE Office Websites (Extended Search)

on the Lustre file system IO Formats Sharing Data Transferring Data Unix Groups at NERSC Unix File Permissions Network Connections Queues and Scheduling Job Logs & Analytics...

131

NERSC Users Group  

NLE Websites -- All DOE Office Websites (Extended Search)

Policies Data Analytics & Visualization Science Gateways User Surveys NERSC Users Group Teleconferences Annual Meetings NUGEX Elections Charter User Announcements Help Operations...

132

Douglas Jacobsen! NERSC Bioinformatics Computing Consultant  

NLE Websites -- All DOE Office Websites (Extended Search)

...GridEngine - qs; NERSC - isjobcomplete; NERSC - NERSC genepool website. * Investigating completed jobs: qacct (Univa GridEngine); qqacct (NERSC); qqacct with q...

133

NERSC In the News  

NLE Websites -- All DOE Office Websites (Extended Search)

NERSC in the News. Sort by: Default | Name | Date (low-high) | Date (high-low) | Source. Creating the First 100 Gbps Research Link across the Atlantic, May 1, 2013 | Source: Scientific Computing. NERSC Managers Shed Light on 'Edison', April 29, 2013 | Named Edison, NERSC's new machine is a Cray XC30 with 664 compute nodes and 10,624 cores. Each node has two eight-core Intel "Sandy Bridge" processors running at 2.6 GHz (16 cores per node) and has 64 GB of memory. Controlling proton source speeds catalyst in turning electricity to fuel, April 26, 2013 | Source: Phys.org. Berkeley Lab scientists to use supercomputer for research on genome, clean energy, April 16, 2013 | Source: Daily Californian. Silicon Atoms Dance on Graphene Sheet

134

NERSC-6 Workload Analysis and Benchmark Selection Process  

E-Print Network (OSTI)

Computational Characteristics for NERSC-6 Benchmarks. *CI is ... Science-Driven Computing: NERSC's Plan for 2006-2010 ... Erich Strohmaier, The NERSC Sustained System Performance (

Antypas, Katie

2008-01-01T23:59:59.000Z

135

NCAR Graphics Libraries at NERSC  

NLE Websites -- All DOE Office Websites (Extended Search)

NCAR/NCL. Operations for Passwords & Off-Hours Status: 1-800-66-NERSC, option 1, or 510-486-6821. Account Support: https://nim.nersc.gov, accounts@nersc.gov, 1-800-66-NERSC, option 2, or 510-486-8612. Consulting: http://help.nersc.gov, consult@nersc.gov, 1-800-66-NERSC, option 3, or 510-486-8611. NCAR Graphics: What is NCAR Graphics? NCAR Graphics is a collection of graphics libraries that support the display of scientific data. Several interfaces are available for

136

Developing Next-Gen Batteries With Help From NERSC  

NLE Websites -- All DOE Office Websites (Extended Search)

NERSC Helps Develop Next-Gen Batteries: A genomics approach to materials research could speed up advancements in battery performance. December 18, 2012 | Tags: Materials Science, Science Gateways. Contact: Linda Vu, lvu@lbl.gov, +1 510 495 2402. Kristin Persson. To reduce the United States' reliance on foreign oil and lower consumer energy costs, the Department of Energy (DOE) is bringing together five national laboratories, five universities, and four private firms to revolutionize next-generation battery performance. This collaboration, dubbed the Joint Center for Energy Storage Research (JCESR), will receive $120 million over five years to establish a new Batteries and Energy Storage Hub led by Argonne National Laboratory (ANL)

137

Nuclear Science at NERSC  

NLE Websites -- All DOE Office Websites (Extended Search)

Nuclear Science. Experimental and theoretical nuclear research carried out at NERSC is driven by the quest to improve our understanding of the building blocks of matter. This includes discovering the origins of nuclei and identifying the forces that transform matter. Specific topics include: nuclear astrophysics and the synthesis of nuclei in stars and elsewhere in the cosmos; nuclear forces and quantum chromodynamics (QCD), the quantum field

138

NERSC Grid Certificates  

NLE Websites -- All DOE Office Websites (Extended Search)

Grid Certificates | Tags: Grid. Grid certificates allow you to access NERSC (and other Grid-enabled computing facilities) via grid interfaces. Grid certificates are credentials that must be initialized for use with grid tools. Once a certificate is initialized, it is automatically used by the grid tools to authenticate the user to the grid resource. Getting a Short-Lived NERSC CA Certificate: The NERSC Online CA now offers a quick and painless way to obtain grid certificates. You can obtain a grid certificate with a single command using this method. If you are on a NERSC system, load the globus module to set up your environment: % module load globus or % module load osg. On the client system (assuming you have the globus binaries in your path), simply run:
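The "single command" flow might look like the following sketch. The myproxy server name and the verification step are assumptions drawn from typical Globus toolchains, not from this page, so check the current NERSC documentation for the exact host:

```shell
# On a NERSC system, load the grid tools into your environment:
module load globus        # or: module load osg

# Request a short-lived certificate from the NERSC Online CA
# (server name is an assumption; the command prompts for your NIM password):
myproxy-logon -s nerscca.nersc.gov

# Confirm the proxy credential was created:
grid-proxy-info
```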

139

NERSC Consulting services  

NLE Websites -- All DOE Office Websites (Extended Search)

Consulting Services. Consultants and Account Support staff are available to assist users from 8 AM to 5 PM Pacific time, Monday through Friday (except Berkeley Lab holidays). Questions and problems submitted through the On-Line Help Desk are immediately sent to the consulting staff; this is the preferred method of communication. NERSC staff can be reached at 1-800-66-NERSC (USA) or 510-486-8600 (local and international); email consult@nersc.gov to speak with a consultant and accounts@nersc.gov to speak with Account Support staff. Users may sometimes contact specific User Services staff members directly by phone or email, but in that case there is no guarantee of a timely response. Staff members are often involved with other time-critical assignments and may not

140

NERSC Users Group  

NLE Websites -- All DOE Office Websites (Extended Search)

NUGEX: NUG Executive Committee. NUGEX is the voice of the user community to NERSC and DOE. While all NUG events are open to all NERSC users, NUGEX members regularly participate in the monthly teleconferences and the annual face-to-face meeting. NUGEX is consulted on many NERSC policy issues, e.g., batch configurations, disk quotas, services and training offerings. Members of NUGEX also participate in their office's NERSC Requirements Reviews of High Performance Computing and Storage. There are three representatives from each office and three members-at-large. Name Institution Email Address DOE Office Term Expires PI's Project Title Mark Adams Columbia University mark.adams@columbia.edu ASCR Dec 2015 Composable Hierarchically Nested Solvers Anubhav Jain Berkeley Lab


141

NERSC Science Gateway Development  

NLE Websites -- All DOE Office Websites (Extended Search)

Science Gateway Development. Science gateways are conduits for bringing HPC to the web. NERSC assists in the development and hosting of gateways that make NERSC compute and data resources more broadly useful. To ease the development of these gateways, the NERSC Web Toolkit (NEWT) makes science gateways accessible to anyone familiar with HTML and JavaScript. You can find more detailed information about science gateway development in the related NERSC user documentation and at the NEWT website. What are some use cases? A science gateway can be tailored to the needs of a team of researchers, allowing them to share data, simulation results, and information among users who may be geographically distributed.
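As a sketch of what "HPC on the web" means in practice, a gateway backed by NEWT can reach NERSC resources over plain HTTPS. The endpoint below follows the pattern used in the NEWT documentation but is an assumption here and should be verified against the NEWT website:

```shell
# Query NERSC system status through the NEWT REST API (read-only;
# no authentication needed for this endpoint; the path is an assumption).
curl -s https://newt.nersc.gov/newt/status/ | head
```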

142

NERSC Featured Announcements  

NLE Websites -- All DOE Office Websites (Extended Search)

Cray to Install Cascade System at NERSC June 27, 2012 by Richard Gerber | 0 Comments Cray will install a next-generation supercomputer code-named "Cascade" and a next-generation...

143

HDFView at NERSC  

NLE Websites -- All DOE Office Websites (Extended Search)

HDFView is a visual tool for browsing and editing HDF4 and HDF5 files. It is a Java-based GUI application. When using HDFView on NERSC systems, connection via NX is...

144

NERSC HPSS Clients  

NLE Websites -- All DOE Office Websites (Extended Search)

HPSS Clients. There are a number of tools that can be used to access NERSC HPSS. HSI Usage: HSI provides a unix-like interface into HPSS and can be used both interactively and...

145

NERSC HPSS Client HSI  

NLE Websites -- All DOE Office Websites (Extended Search)

HSI Usage. HSI is a flexible and powerful command-line utility to access the NERSC HPSS storage systems. Like FTP, you can use it to store and retrieve files, but it has a much...
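A minimal HSI session might look like this sketch; the directory and file names are made up:

```shell
# One-shot form: archive a file into HPSS under a new directory
hsi "mkdir -p run42; cd run42; put results.tar"

# Retrieve it later, and list what is stored
hsi "get run42/results.tar"
hsi "ls -l run42"
```

Running `hsi` with no arguments drops you into an interactive session where the same commands can be typed one at a time.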

146

VASP at NERSC  

NLE Websites -- All DOE Office Websites (Extended Search)

schemes and an efficient Pulay mixing. Access to VASP VASP is available only to NERSC users who already have an existing VASP license. If you have a VASP license, send your...

147

NERSC Featured Announcements  

NLE Websites -- All DOE Office Websites (Extended Search)

Track NERSC Outages in Google Calendar March 22, 2013 by Jack Deslippe | 0 Comments Outages are now available in Google calendar form. You can subscribe to this calendar by...

148

HPSS Passwords at NERSC  

NLE Websites -- All DOE Office Websites (Extended Search)

example, see the screenshot below: This will provide you with a token (an encrypted string) in the pale yellow highlighted box that may be used on any machine in the NERSC...

149

Argonne Leadership Computing Facility Argonne National Laboratory  

NLE Websites -- All DOE Office Websites (Extended Search)

million hours to use Large Eddy Simulation (LES) as an acoustic diagnostic and design tool to spur innovation in next-generation quieter aircraft engines and wind turbines,...

150

Argonne Leadership Computing Facility - Fact Sheet | Argonne...  

NLE Websites -- All DOE Office Websites (Extended Search)

science - science that will change our world through major advances in biology, chemistry, energy, engineering, climate studies, astrophysics and more. ALCF Fact Sheet, Sept....

151

Argonne Leadership Computing Facility Argonne National Laboratory  

NLE Websites -- All DOE Office Websites (Extended Search)

... 2 Rapid Procedure Unites Quantum Chemistry with Artificial Intelligence ... 4 Lithium-Air: The "Mount Everest" of Batteries for Electric Vehicles...

152

Argonne Leadership Computing Facility Argonne National Laboratory  

NLE Websites -- All DOE Office Websites (Extended Search)

Katherine Riley .... 7 Researchers Get Time on Mira Test and Development Prototypes at ESP Workshop ... 8 Call for Proposals for Mira Access...

153

Argonne Transportation Current News  

NLE Websites -- All DOE Office Websites (Extended Search)

Current News November 21, 2013 -- Pixelligent Technologies Working with Argonne to Develop Nanoadditives under DOE SBIR Grant November 4, 2013 -- New GREET Model Released October 25, 2013 -- Argonne Creates IdleBox Toolkit for DOE's Clean Cities Initiative to Help Reduce Vehicle Idling September 23, 2013 -- New VISION Model Released for Estimating Potential Energy Use, Oil Use and Carbon Emission Impacts September 17, 2013 -- Water Assessment for Transportation Energy Resources (WATER) Tool Released September 9, 2013 -- Dileep Singh to Receive Prestigious Lee Hsun Award July 17, 2013 -- Summer 2013 TransForum now available July 10, 2013 -- Argonne Wins Four R&D 100 Awards March 23, 2013 -- White House Women's Leadership Summit on Climate and Energy recognizes Argonne scientists

154

Workflow Management Strategies at NERSC  

NLE Websites -- All DOE Office Websites (Extended Search)

Workflow Management. Workflow management refers to the process of connecting various software tools based on specific input and output parameters. The goal of workflow management is to automate specific sets of tasks that are repeated many

155

Lattice QCD and NERSC requirements  

NLE Websites -- All DOE Office Websites (Extended Search)

Rich Brower, Steven Gottlieb and Doug Toussaint, Lattice QCD at NERSC, November 26, 2012. Lattice Gauge Theory at NERSC: First-principles...

156

CS267: NERSC User Blog  

NLE Websites -- All DOE Office Websites (Extended Search)

CS267: NERSC Discussion. This area is intended for NERSC users and staff to help each other with questions about CS267.

157

NERSC "Visualization Greenbook" Future visualization needs of the DOE computational science community hosted at NERSC  

E-Print Network (OSTI)

NERSC "Visualization Greenbook": Future Visualization Needs ... Science Community Hosted at NERSC. Report Prepared by: Bernd ... Davis; E. Wes Bethel, LBNL/NERSC; Horst Simon, LBNL/NERSC; Juan ...

Hamann, Bernd; Bethel, E. Wes; Simon, Horst; Meza, Juan

2002-01-01T23:59:59.000Z

158

An Analysis of Node Asymmetries on seaborg.nersc.gov  

E-Print Network (OSTI)

Skinner and Nicholas Cardo, NERSC HPCF. Abstract: A short ... of work completed at NERSC over the past 6 months to ... resources provided by NERSC's IBM SP, seaborg.nersc.gov.

Skinner, David; Cardo, Nicholas

2003-01-01T23:59:59.000Z

159

Argonne TDC: Licensing Intellectual Property from Argonne ...  

Licensing Intellectual Property from Argonne National Laboratory. Argonne's licensing program provides companies with opportunities to acquire rights in Argonne ...

160

NERSC's Hopper Breaks Petaflops Barrier  

NLE Websites -- All DOE Office Websites (Extended Search)

NERSC's Hopper Breaks Petaflops Barrier: Ranks 5th in the World. November 14, 2010. Media Contact: Jon Bashor, jbashor@lbl.gov, 510-486-5849. (Photo: NERSC's Cray XE6, Hopper.) BERKELEY, Calif. - The Department of Energy's National Energy Research Scientific Computing Center (NERSC), already one of the world's leading centers for scientific productivity, is now home to the fifth most powerful supercomputer in the world and the second most powerful in the United States, according to the latest edition of the TOP500 list, the definitive ranking of the world's top computers. NERSC's newest supercomputer, a 153,408 processor-core Cray XE6 system, posted a performance of 1.05 petaflops (quadrillions of calculations per second) running the Linpack benchmark. In keeping with NERSC's tradition of


161

NERSC-8 Vendor Market Survey  

NLE Websites -- All DOE Office Websites (Extended Search)

Katie Antypas, NERSC-8 Project Lead. NERSC-8 Market Survey, November 15, 2012. * Seek vendor input to optimize timing, requirements and business practices. * Opportunity for vendors to provide input prior to the formal procurement process. We are starting our next procurement, NERSC-8, with a round of market surveys. NERSC's mission is to enable science. NERSC Mission: To accelerate the pace of scientific discovery by providing high-performance computing, data systems and services to the DOE Office of Science community. NERSC has over 4500 users in 650 projects that produce about 1500 publications per year! NERSC's Long Term Strategy: * New system every ~3 years, run for 5-6 years - maximizes stability rather than peak / machine

162

NERSC User Group Meeting  

NLE Websites -- All DOE Office Websites (Extended Search)

NERSC User Group Meeting, Oct 18, 2010. Outline: * About OpenMP * Parallel Regions * Worksharing Constructs * Synchronization * Data Scope * Tasks * Using OpenMP at NERSC. Common Architectures: * Shared Memory Architecture - Multiple CPUs share global memory, possibly with local cache - Uniform Memory Access (UMA) - Typical shared memory programming models: OpenMP, Pthreads, ... * Distributed Memory Architecture - Each CPU has its own memory - Non-Uniform Memory Access (NUMA) - Typical message passing programming model: MPI, ... * Hybrid Architecture - UMA within one SMP node - NUMA across nodes - Typical hybrid programming model: mixed MPI/OpenMP, ... What is OpenMP? * OpenMP is an industry-standard API for shared memory parallel programming in C/C++ and Fortran.

163

NERSC HPSS Charging  

NLE Websites -- All DOE Office Websites (Extended Search)

HPSS Charging NERSC uses Storage Resource Units (SRUs) to help manage HPSS storage. The goal is to provide a balanced computing environment with appropriate amounts of storage and adequate bandwidth to keep the compute engines fed with data. Performance and usage tracking allows NERSC to anticipate demand and maintain a responsive storage environment. Storage management also recognizes storage as a distinct resource in support of an increasing amount of data intensive computing. Storage management and the quota system are intended to encourage efficient usage by the user community. SRU Management SRUs are reported and managed through the NERSC Information Management (NIM) system. If a user is out of SRUs in all of their HPSS repositories that user will be restricted so that they can no longer write data to HPSS

164

SIESTA at NERSC  

NLE Websites -- All DOE Office Websites (Extended Search)

SIESTA. Description: SIESTA is an O(N) DFT code for electronic structure calculations and ab initio molecular dynamics simulations for molecules and solids. It uses norm-conserving pseudopotentials and a linear combination of numerical atomic orbitals (LCAO) basis set. How to Access SIESTA: NERSC uses modules to manage access to software. To use the default version of SIESTA, type: % module load siesta. Access Restrictions: SIESTA is available at NERSC only to researchers at academic or public (non-defense) labs. You must certify that this condition is met by using the siesta_register command. A NERSC consulting trouble ticket will be generated and you will be provided access in a timely manner. The procedure for doing this is as follows: module load siesta; siesta_register

165

NERSC Oakland Scientific Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

Training 2012, February 1-2, 2012, NERSC Oakland Scientific Facility. Debugging with DDT. Woo-Sun Yang, NERSC User Services Group. Why a Debugger? * It makes it easy to find a bug in your program by controlling the pace of running your program - Examine the execution flow of your code - Check values of variables * Typical usage scenario - Set breakpoints (places where you want your program to stop) and let your program run - Or advance one line in the source code at a time - Check variables when a breakpoint is reached. DDT: * Distributed Debugging Tool by Allinea * Graphical parallel debugger capable of debugging - Serial - OpenMP - MPI - CAF - UPC - CUDA - NERSC doesn't have a license on Dirac * Intuitive and simple user interfaces * Scalable * Available on Hopper, Franklin and Carver

166

Python Tools at NERSC  

NLE Websites -- All DOE Office Websites (Extended Search)

Python Tools. Description and Overview: Python is an interpreted, general-purpose, high-level programming language. Various versions of Python are installed on most of the NERSC systems, usually accompanied by computational tools such as numpy and scipy. Using Python on NERSC Systems: To use the Python installed as a module on NERSC systems, do: module load python. To run a Python script on the Hopper and Edison compute nodes, you must set an environment variable: setenv CRAY_ROOTFS DSL, which is set automatically when the python module is loaded. To execute a script on the Hopper or Edison compute nodes dedicated to your job, you need to use aprun: aprun -n 1 python ./hello_world.py or, if the script is executable and contains '#!/usr/bin/env python' as the
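The aprun invocation above can be tried end to end. This sketch substitutes python3 so it also runs on a generic machine, with the NERSC-specific launch shown in comments; the script name is made up:

```shell
# Create a trivial script to launch.
cat > hello_world.py <<'EOF'
#!/usr/bin/env python
print("hello from python")
EOF

# Generic run; on Hopper/Edison compute nodes you would instead do:
#   module load python          # also sets CRAY_ROOTFS=DSL, per the page above
#   aprun -n 1 python ./hello_world.py
python3 ./hello_world.py
```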

167

NERSC Science Highlights Presentations  

NLE Websites -- All DOE Office Websites (Extended Search)

Science Highlights Presentations. NERSC collects highlights of recent scientific work carried out by its users. If you are a user and have work that you would like us to highlight, please send e-mail to consult@nersc.gov. December 2013 Presentation [PDF]: Model Shows Arrangement of Proteins in Photosynthetic Membranes [Geissler]; How Many Earths are There? [Petigura]; IceCube is 2013 Physics Breakthrough of the Year [Gerhardt]; Simulation Couples with Experiment to Boost Energy Research [Smith]; Simulation Captures the Essence of Carbonate Crystallization [Whitelam]

168

The NERSC Allocation Request Form (ERCAP)  

NLE Websites -- All DOE Office Websites (Extended Search)

How to Submit a 2014 ERCAP Request. The NERSC Allocation Request Form (ERCAP): Requests to use NERSC resources are submitted annually via a web form known as the ERCAP (Energy Research Computing Allocations Process) Request Form.

169

NERSC: National Energy Research Scientific Computing Center  

NLE Websites -- All DOE Office Websites

NERSC - Powering Scientific Discovery Since 1974. Home About Overview NERSC Mission Contact Us Staff Org Chart NERSC History NERSC Stakeholders NERSC Usage Demographics Careers Visitor Info Web Policies Science at NERSC NERSC HPC Achievement Awards Accelerator Science Astrophysics Biological Sciences Chemistry & Materials Science Climate & Earth Science Energy Science Engineering Science Environmental Science Fusion Science Math & Computer Science Nuclear Science Science Highlights NERSC Citations HPC Requirements Reviews Systems Computational Systems Table Data Systems Table Edison Cray XC30 Hopper Cray XE6 Carver IBM iDataPlex PDSF Genepool NERSC Global Filesystem HPSS data archive Data Transfer Nodes History of Systems NERSC-8 Procurement

170

Phase-1 of Edison Arrives at NERSC  

NLE Websites -- All DOE Office Websites (Extended Search)

Phase 1 of Edison Arrives at NERSC. November 27, 2012. Photo by Roy Kaltschmidt, Berkeley Lab. Phase 1 of NERSC's newest supercomputer, named...

171

NUG 2013: Training - Getting Started at NERSC  

NLE Websites -- All DOE Office Websites (Extended Search)

- Richard Gerber & Jack Deslippe, NERSC ( 15 min ) 13:15 - Introduction to the NERSC API: NEWT. - Shreyas Cholia, NERSC ( 20 min ) 13:35 - Best Practices for Reading and...

172

NERSC Strategic Implementation Plan 2002-2006  

E-Print Network (OSTI)

LBNL/PUB-5465 Vol. 2 Rev. 2 NERSC Strategic ImplementationNo. DE-AC03-76SF00098. NERSC Strategic Implementation Plan20022006 Abstract NERSC, the National Energy Research

2002-01-01T23:59:59.000Z

173

getnim Command at NERSC  

NLE Websites -- All DOE Office Websites (Extended Search)

getnim command: NIM's Command Line Interface. This page describes the inquiry-only command called getnim that users can use interactively and in scripts to get their account balances. GETNIM(l) NERSC GETNIM(l) NAME getnim - query the NERSC banking database for remaining allocation, resources and repository information SYNOPSIS getnim [ options ] -Rrname or getnim [ options ] -Rrname { -uuid | -Uuname } or getnim [ options ][ -D ] { -uuid | -Uuname } or getnim [ options ] -Rrname { -l | -L } or getnim [ options ] -Fbatchname PARAMETERS -R to specify the repository name -U to specify the user name -u to specify the user uid -l | -L give the charge info for each user in the reposi-
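Per the SYNOPSIS above, typical invocations might look like this sketch; the repository and user names are made up:

```shell
getnim -Rm1234            # remaining allocation for repository m1234
getnim -Rm1234 -Ujdoe     # charge info for user jdoe in that repository
getnim -Rm1234 -l         # charge info for every user in the repository
getnim -Ujdoe             # balances for user jdoe across repositories
```

Because getnim is inquiry-only, calls like these are safe to embed in batch scripts to check a balance before submitting work.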

174

NERSC Staff Publications  

NLE Websites -- All DOE Office Websites (Extended Search)

NERSC Staff Publications & Presentations. Sort by: Date | Author | Type. 2013: Massimo Di Pierro, James Hetrick, Shreyas Cholia, James Simone, and Carleton DeTar, "The new "Gauge Connection" at NERSC", 21st International Lattice Data Grid Workshop, December 2013. Richard Gerber (Berkeley Lab), Ken Bloom (U. Nebraska-Lincoln), "Report of the Snowmass 2013 Computing Frontier Working Group on Distributed Computing and Facility Infrastructures", to be included in proceedings of Community Summer Study ("Snowmass") 2013, November 11, 2013. Download File: 1311.2208v1.pdf (pdf: 551 KB). Richard Gerber and Harvey Wasserman, eds., "Large Scale Computing and Storage Requirements for High Energy Physics - Target 2017", November 8,

175

NERSC Calculations Provide Independent Confirmation of Global...  

NLE Websites -- All DOE Office Websites (Extended Search)

NERSC Calculations Provide Independent Confirmation of Global Land Warming Since 1901...

176

NERSC/DOE HEP 2012 Review Logistics  

NLE Websites -- All DOE Office Websites (Extended Search)

High Energy Physics (HEP) Requirements Review Logistics: Hotel Information and Location. The review will be held at...

177

2014 NERSC allocation requests due September 22  

NLE Websites -- All DOE Office Websites (Extended Search)

2014 NERSC allocation requests due September 22. August 13, 2013 by Francesca Verdier (0 Comments). NERSC's allocation submission system is...

178

2010/2011 NERSC User Survey Results  

NLE Websites -- All DOE Office Websites (Extended Search)

Satisfaction and Importance Scores HPC Resources Software Services Comments - What can NERSC do to make you more productive? Comments - What Does NERSC Do Best Response Summary A...

179

NERSC Data Management Strategy and Policies  

NLE Websites -- All DOE Office Websites (Extended Search)

Data Management Policies NERSC Data Management Strategy and Policies Introduction NERSC provides its users with the means to store, manage, and share their research data products....

180

NERSC-Overview-NUG12.pptx  

NLE Websites -- All DOE Office Websites (Extended Search)

NERSC Accomplishments and Plans Kathy Yelick Associate Laboratory Director for Computing Sciences NERSC Facility Leads DOE in Scientific Computing Productivity Computing for...

Note: This page contains sample records for the topic "nersc argonne leadership" from the National Library of EnergyBeta (NLEBeta).
While these samples are representative of the content of NLEBeta,
they are not comprehensive nor are they the most current set.
We encourage you to perform a real-time search of NLEBeta
to obtain the most current and comprehensive results.


181

NERSC_Workload_Analysis_v3.pptx  

NLE Websites -- All DOE Office Websites (Extended Search)

Tina Butler, Richard Gerber, Cary Whitney, Nick Wright, Woo-Sun Yang, Zhengji Zhao, "NERSC Workload Analysis on Hopper," February 22, 2013. Understanding the NERSC...

182

NERSC Allocations: For Principal Investigators and Managers  

NLE Websites -- All DOE Office Websites (Extended Search)

Allocations NERSC Allocations: for Principal Investigators and Account Managers Allocations Overview and Eligibility A researcher may apply for an allocation of NERSC resources if...

183

NERSC-IntroTalkV2.pptx  

NLE Websites -- All DOE Office Websites (Extended Search)

Biological and Environmental Research Joint BER ASCR NERSC Workshop Harvey Wasserman NERSC Science Driven System Architecture Group Lawrence Berkeley National Laboratory May...

184

NERSC/DOE BER Requirements Workshop Participants  

NLE Websites -- All DOE Office Websites (Extended Search)

for Biological and Environmental Research: Target 2017 A Joint ASCR BER NERSC Requirements Review September 11-12, 2012 DOE NERSC Participants and Organizers Name...

185

Petaflops Power to NERSC- NERSC Center News, May 31, 2011  

NLE Websites -- All DOE Office Websites (Extended Search)

Petaflops Power to NERSC: Energy Department's Primary Scientific Computing Facility Accepts a New Supercomputer. May 31, 2011 | Tags: Biological and Environmental Research (BER), Climate Research, Earth Sciences, Energy Technologies, Environmental Science, Hopper, NERSC. Linda Vu, lvu@lbl.gov, +1 510 495 2402. Directors in front of Hopper, from left to right: Horst Simon (Berkeley Lab deputy director), Kathy Yelick (NERSC division director), Dan Hitchcock (DOE), and Paul Alivisatos (Berkeley Lab director), standing in front of the Hopper supercomputer system recently accepted by NERSC. The National Energy Research Scientific Computing Center (NERSC) recently marked a major milestone, putting its first petascale supercomputer into the hands of its 4,000 scientific users. The flagship Cray XE6 system is

186

Richmond's Kennedy High Visits NERSC- NERSC Center News, Feb...  

NLE Websites -- All DOE Office Websites (Extended Search)

High School's TechFutures Academy paid a visit to the National Energy Research Scientific Computing Center (NERSC) at the Berkeley Lab's Oakland Science Facility on February...

187

'Insights of the Decade' Enabled by NERSC - NERSC Center News...  

NLE Websites -- All DOE Office Websites (Extended Search)

in Lawrence Berkeley National Laboratory's (Berkeley Lab's) National Energy Research Scientific Computing Center (NERSC) and Computational Research Division (CRD). In their...

188

NERSC Helps Locate Jupiter's Missing Neon - NERSC Science News...  

NLE Websites -- All DOE Office Websites (Extended Search)

some supercomputing help from the Department of Energy's National Energy Research Scientific Computing Center (NERSC). Since 1995, astronomers have suspected that neon in...

189

NERSC-GoalsProcess.pptx  

NLE Websites -- All DOE Office Websites (Extended Search)

High Energy Physics Research High Energy Physics Research Joint HEP/ ASCR / NERSC Workshop Harvey Wasserman NERSC User Services November 12-13, 2009 Logistics: Schedule * Agenda on workshop web page - http://www.nersc.gov/projects/science_requirements/HEP/agenda.php * Mid-morning / afternoon break, lunch * Self-organization for dinner * 5 "science areas," one workshop - Science-focused but cross-science discussion - Explore areas of common need (within HEP) * Breakout sessions Friday AM in one room Why is NERSC Collecting Computational Requirements? * Help ASCR and NERSC make informed decisions for technology and services. * Input is used to guide procurements, staffing, and to improve the effectiveness of NERSC services. - Includes hardware, software, support, data, storage,

190

Parallel Scaling Characteristics of Selected NERSC User Project Codes  

E-Print Network (OSTI)

Characteristics of Selected NERSC User Project Codes DavidDurst, Richard Gerber, NERSC User Services Group, Lawrencescaling characteristics of NERSC user project codes between

Skinner, David; Verdier, Francesca; Anand, Harsh; Carter, Jonathan; Durst, Mark; Gerber, Richard

2005-01-01T23:59:59.000Z

191

Deploying Server-side File System Monitoring at NERSC  

E-Print Network (OSTI)

J. Shalf, and H. Wasserman. Nersc-6 workload analysis andDeploying Server-side File System Monitoring at NERSC AndrewUselton NERSC, Lawrence Berkeley National Laboratory

Uselton, Andrew

2009-01-01T23:59:59.000Z

192

Supporting National User Communities at NERSC and NCAR  

E-Print Network (OSTI)

National User Communities at NERSC and NCAR Timothy L.80301 and Horst D. Simon NERSC Center Division ErnestScientific Computing Center (NERSC) and the National Center

Killeen, Timothy L.; Simon, Horst D.

2006-01-01T23:59:59.000Z

193

NERSC Role in Fusion Energy Science Research Katherine Yelick  

NLE Websites -- All DOE Office Websites (Extended Search)

Fusion Energy Science Research Katherine Yelick NERSC Director Requirements Workshop NERSC Mission The mission of the National Energy Research Scientific Computing Center (NERSC)...

194

NERSC Cyber Security Challenges That Require DOE Development and Support  

E-Print Network (OSTI)

network segments. Table 1. Network Comparison: NERSC vs.Large Corporation NERSC External Network Traffic patternsLBNL-62284 NERSC Cyber Security Challenges That Require DOE

Draney, Brent; Campbell, Scott; Walter, Howard

2008-01-01T23:59:59.000Z

195

Magellan at NERSC Progress Report for June 2010  

E-Print Network (OSTI)

Antypas and H. Wasserman. Nersc-6 workload analysis andand E. Strohmaier, The NERSC Sustained System Performance (M. Shalf, and H. Wasserman, NERSC-6 workload analysis and

Broughton, Richard Canon, Lavanya Ramakrishnan, Brent Draney, Jeff

2011-01-01T23:59:59.000Z

196

NERSC HPSS New Users Guide  

NLE Websites -- All DOE Office Websites (Extended Search)

on the Lustre file system IO Formats Sharing Data Transferring Data Unix Groups at NERSC Unix File Permissions Network Connections Queues and Scheduling Job Logs & Analytics...

197

2003 NERSC User Survey Results  

NLE Websites -- All DOE Office Websites (Extended Search)

and Importance All Satisfaction Topics and Changes from Previous Years DOE and NERSC Scaling Initiatives Web, NIM, and Communications Hardware Resources Training User...

198

NERSC Account Policies and Security  

NLE Websites -- All DOE Office Websites (Extended Search)

Account Policies Account Policies There are a number of policies which apply to NERSC users. These policies originate from a number of sources, such as DOE regulations and...

199

1998 NERSC User Survey Results  

NLE Websites -- All DOE Office Websites (Extended Search)

8 User Survey Results 1998 User Survey Results Respondent Summary NERSC has completed its first user survey since its move to Lawrence Berkeley National Laboratory. The survey is...

200

Lists of NERSC Awarded Projects  

NLE Websites -- All DOE Office Websites (Extended Search)

NIM (NERSC Information Management portal) Awarded projects 2013 Awards 2012 Awards NISE 2012 Awards Data 2012 Awards 2011 Awards NISE 2011 Awards Previous Year Awards Policies...



201

NERSC Job Logs and Analytics  

NLE Websites -- All DOE Office Websites (Extended Search)

& Allocations Policies Data Analytics & Visualization Science Gateways User Surveys NERSC Users Group User Announcements Help Operations for: Passwords & Off-Hours Status...

202

NERSC's Hopper Breaks Petaflops Barrier  

NLE Websites -- All DOE Office Websites (Extended Search)

Calif.-The Department of Energy's National Energy Research Scientific Computing Center (NERSC), already one of the world's leading centers for scientific productivity, is now home...

203

Scalasca | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

Scalasca Introduction: Scalasca is a software tool that supports the performance optimization of MPI and OpenMP parallel programs by measuring and analyzing their runtime behavior. The analysis identifies potential performance bottlenecks - in particular those concerning communication and synchronization - and

204

Intrepid | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

Resources & Expertise Resources & Expertise Mira Cetus Vesta Intrepid Challenger Surveyor Visualization Clusters Data and Networking Our Teams User Advisory Council Intrepid Intrepid Data and Networking Intrepid Intrepid has a highly scalable torus network, as well as a high-performance collective network that minimizes the bottlenecks common in simulations on large, parallel computers. Intrepid uses less power per teraflop than systems built around commodity microprocessors, resulting in greater energy efficiency and reduced operating costs. Blue Gene applications use common languages and standards-based MPI communications tools, so a wide range of science and engineering applications are straightforward to port, including those used by the computational science community for cutting-edge research

205

Coreprocessor | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

Coreprocessor is a basic parallel debugging tool that can be used to debug problems at all levels (hardware, kernel, and application). It is particularly useful when working with a large set of core files, since it reveals where processors aborted, grouping them together automatically (for example, 9 died here, 500 were here, etc.). See the instructions below for using the Coreprocessor tool. References: The Coreprocessor tool (IBM System Blue Gene Solution: Blue Gene/Q System Administration, Chapter 22). Location: the coreprocessor.pl script is located at /bgsys/drivers/ppcfloor/coreprocessor/bin/coreprocessor.pl; for user convenience, the SoftEnv key +utility_paths (which you get in your default environment by putting @default in your ~/.soft file) allows

206

Openspeedshop | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

Openspeedshop Introduction: OpenSpeedshop is an open-source performance tool for the analysis of applications using: sampling experiments, support for callstack analysis, hardware performance counters, MPI profiling and tracing, I/O profiling and tracing, and floating point exception analysis. A more detailed list of the individual experiment names and functionality definitions follows. pcsamp: periodic sampling of the program counters gives a low-overhead view of where the time is being spent in the user application. usertime: periodic sampling of the call path allows the user to view inclusive and exclusive time spent in application routines. It also allows the user to see which routines called which routines. Several views are available, including the "hot" path.

207

Presentations | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

Presented at: Data Management: Tools and Best Practices for Intrepid's Decommissioning and Beyond, Managing Data at ALCF, November 2013. Data Management: Tools and Best...

208

Tukey | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

Compiling and Linking ParaView on Tukey Using Cobalt on Tukey VisIt on Tukey Eureka Gadzooks Policies Documentation Feedback Please provide feedback to help guide us as we...

209

MADNESS | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

Data Storage & File Systems Compiling & Linking Queueing & Running Jobs Data Transfer Debugging & Profiling Performance Tools & APIs Software & Libraries Boost CPMD CodeSaturne...

210

Cetus | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

Cetus Cetus and Vesta Cetus shares the same software environment and file systems as Mira. The primary role of Cetus is to run small jobs in order to debug problems that occurred...

211

BGPM | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

BGPM Introduction: bgpm is the system-level API for accessing the hardware performance counters on the BG/Q nodes. Location: bgpm is a standard part of the BG/Q software and is...

212

Vesta | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

Policies Documentation Feedback Please provide feedback to help guide us as we continue to build documentation for our new computing resource. Feedback Form Vesta Vesta is the...

213

Surveyor | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

Surveyor Surveyor Data and Networking Surveyor is used for tool and application porting, software testing and optimization, and systems software development. It is an IBM Blue...

214

Mira | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

Resources & Expertise Mira Vesta Intrepid Challenger Surveyor Visualization Clusters Data & Networking Our Teams User Advisory Council Mira Submitted by Anonymous on November 27,...

215

GAMESS | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

Performance Tools & APIs Software & Libraries Boost CPMD CodeSaturne GAMESS GPAW GROMACS LAMMPS MADNESS QBox IBM References IntrepidChallengerSurveyor Tukey Eureka Gadzooks...

216

gdb | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

Allinea DDT Core File Settings Determining Memory Use Using VNC with a Debugger bgqstack gdb Coreprocessor TotalView on BGQ Systems Performance Tools & APIs Software & Libraries...

217

PAPI | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

Queueing & Running Jobs Data Transfer Debugging & Profiling Performance Tools & APIs Tuning MPI on BGQ Tuning and Analysis Utilities (TAU) HPCToolkit HPCTW mpiP gprof Profiling...

218

Allocations | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

Allocation Management Determining Allocation Requirements Querying Allocations Using cbank Decommissioning of BG/P Systems and Resources Blue Gene/Q Versus Blue Gene/P Mira/Cetus...

219

Darshan | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

Performance Tools & APIs Tuning MPI on BGQ Tuning and Analysis Utilities (TAU) HPCToolkit HPCTW mpiP gprof Profiling Tools Darshan PAPI BGQ Performance Counters BGPM...

220

GPAW | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

Software & Libraries Boost CPMD CodeSaturne GAMESS GPAW GROMACS LAMMPS MADNESS QBox IBM References IntrepidChallengerSurveyor Tukey Eureka Gadzooks Policies Documentation...



221

Programs | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

Featured Science Slice of the translationally-invariant proton hexadecapole density of the ground state of 8Li, Nuclear Structure and Nuclear Reactions James Vary Allocation...

222

Boost | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

Boost Overview: Boost, a collection of modern, peer-reviewed C++ libraries, is installed in /soft/libraries/boost/current/cnk-gcc/current -- for use with GCC C++ compilers:...

223

HPCToolkit | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

HPCToolkit References HPCToolkit Website HPCViewer Download Site HPCT Documentation Page Introduction HPCToolkit is an open-source suite of tools for profile-based performance...

224

HPCTW | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

HPCTW Introduction HPCTW is a set of libraries that may be linked to in order to gather MPI usage and hardware performance counter information. Location HPCTW is installed in:...

225

Challenger | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

Challenger Challenger Data and Networking Challenger is the home for the prod-devel job submission queue. Moving the prod-devel queue to Challenger clears the way for capability...

226

LAMMPS | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

System Overview Data Storage & File Systems Compiling & Linking Queueing & Running Jobs Data Transfer Debugging & Profiling Performance Tools & APIs Software & Libraries Boost...

227

Vesta | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

Resources & Expertise Mira Cetus Vesta Intrepid Challenger Surveyor Visualization Clusters Data and Networking Our Teams User Advisory Council Vesta Apply for time on Vesta Vesta...

228

Policies | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

Eureka Gadzooks Policies Pullback Policy ALCF Acknowledgment Policy Account Sponsorship & Retention Policy Accounts Policy Data Policy INCITE Quarterly Report Policy ALCC...

229

Reservations | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

System Overview Data Storage & File Systems Compiling & Linking Queueing & Running Jobs Reservations Cobalt Job Control How to Queue a Job Running Jobs FAQs Queuing and Running on...

230

QBox | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

Data Transfer Debugging & Profiling Performance Tools & APIs Software & Libraries Boost CPMD CodeSaturne GAMESS GPAW GROMACS LAMMPS MADNESS QBox IBM References Intrepid...

231

Publications | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

production processes," Acta Phys. Polon. Supp., October 2013, vol. 6, Cracow, Poland, INSPIRE, 2013, pp. 257-262, 10.5506/APhysPolBSupp.6.257. S. Bogner, A. Bulgac, J....

232

GROMACS | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

GROMACS: Building and Running GROMACS on Vesta/Mira. The GROMACS Molecular Dynamics package has a large number of executables. Some of them, such as luck, are just utilities that do not need to be built for the back end. Begin by building the serial version of GROMACS (i.e., the version that can run within one processor, with or without more than one thread) for the front end, and then build the parallel version (i.e., with MPI) for the back end. This way, a full set of executables is available for the front end and

233

Why Argonne | Argonne National Laboratory  

NLE Websites -- All DOE Office Websites (Extended Search)

Science Work-Life Balance Diversity and Inclusion Sustainability Your Career Life at Argonne Benefits Apply for a Job Connect with Argonne LinkedIn Facebook Twitter YouTube...

234

Argonne Alumni | Argonne National Laboratory  

NLE Websites -- All DOE Office Websites (Extended Search)

For Employees Inside Argonne (Intranet) Emergency Information Westgate Alternate Routes Reporting Illegal/Unethical Activity Working Remotely Extracurricular Activities Library...

235

Argonne Accelerator Institute  

NLE Websites -- All DOE Office Websites (Extended Search)

Alexander Argonne National Laboratory Decker, Glenn Argonne National Laboratory Dejus, Roger Argonne National Laboratory Deriy, Boris N. Argonne National Laboratory Donley,...

236

NERSC Strategic Implementation Plan 2002-2006  

E-Print Network (OSTI)

Gflop/s Improve Effectiveness Decommissioning NERSC-5 SystemImprove Effectiveness Decommissioning Mass Storage Upgrades

2002-01-01T23:59:59.000Z

237

NERSC National Energy Research Scientific Computing Center  

NLE Websites -- All DOE Office Websites (Extended Search)

Table of contents excerpt: Science for Humanity -- NERSC users share Nobel Peace Prize, among other honors, p. 32...

238

Experiences with 100Gbps Network Applications Mehmet Balman, Eric Pouyoul, Yushu Yao, E. Wes Bethel  

E-Print Network (OSTI)

Leadership Class Facility. The ANI Testbed includes high-speed hosts at both NERSC and ALCF. The Magellan project included large clusters at both NERSC and ALCF. 16 hosts at NERSC were designated as I/O nodes. http://www.nersc.gov | Argonne Leadership Class Facility: http://www.alcf.anl.gov | Oak Ridge Leadership Class Facility: http

239

Git at NERSC  

NLE Websites -- All DOE Office Websites (Extended Search)

Git Prerequisites: In order to create CVS, Subversion, or git repositories on the NERSC Global File System (NGF), a user must first have a project directory. After the project directory is set up, access to the directory can be controlled in the NERSC Information Management system (NIM) by the project's principal investigator (PI). Users who create repositories must have write access to the project directory. Once the user has a project directory, the repository can be created under that project directory. For example, a valid repository path would be something like /project/projectdirs/MyProjectDirectory/MygitRepo. The following instructions assume you have set up a project directory on NGF and have write access. References to the project path are identified with
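The repository-creation step described above can be sketched as follows. Assumptions: /project/projectdirs/MyProjectDirectory is the NERSC-style project path, so a temporary directory stands in for it here, making the sketch self-contained and runnable anywhere git is installed.

```shell
# Hypothetical project directory; a temp dir stands in for the real
# /project/projectdirs/MyProjectDirectory path.
PROJDIR=$(mktemp -d)
if command -v git >/dev/null 2>&1; then
    # A bare repository (no working tree) is the usual choice for a
    # shared repository under a project directory.
    git init --bare --quiet "$PROJDIR/MygitRepo"
    # Anyone with write access to the project directory can clone it.
    git clone --quiet "$PROJDIR/MygitRepo" "$PROJDIR/work" 2>/dev/null
    ls "$PROJDIR/MygitRepo"
else
    echo "git not installed; skipping sketch"
fi
```

Collaborators then push to and pull from the shared path, with access mediated by the project directory's Unix permissions as managed through NIM.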

240

NERSC Modules Software Environment  

NLE Websites -- All DOE Office Websites (Extended Search)

Modules Software Environment: NERSC uses the module utility to manage nearly all software. There are two huge advantages of the module approach: NERSC can provide many different versions and/or installations of a single software package on a given machine, including a default version as well as several older and newer versions; and users can easily switch to different versions or installations without having to explicitly specify different paths. With modules, the MANPATH and related environment variables are automatically managed. Users simply "load" and "unload" modules to control their environment. The module utility consists of two parts: the module command itself and the modulefiles on which it operates. Module Command
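The load/unload workflow above can be sketched as a short session. This is illustrative only: the package name gcc and the version string are hypothetical, and the modules actually available vary by machine.

```
module avail gcc          # list the installed versions of a package
module load gcc           # load the default version
module list               # show the currently loaded modules
module swap gcc gcc/4.8   # switch to a specific (hypothetical) version
module unload gcc         # remove it from the environment again
```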



241

NERSC In the News  

NLE Websites -- All DOE Office Websites (Extended Search)

Creating the First 100 Gbps Research Link across the Atlantic. May 1, 2013 | Source: Scientific Computing | NERSC Managers Shed Light on 'Edison'. April 29, 2013 | Named Edison, NERSC's new machine is a Cray XC30 with 664 compute nodes and 10,624 cores. Each node has two eight-core Intel "Sandy Bridge" processors running at 2.6 GHz (16 cores per node) and 64 GB of memory. Controlling proton source speeds catalyst in turning electricity to fuel. April 26, 2013 | Source: Phys.org | Berkeley Lab scientists to use supercomputer for research on genome, clean energy. April 16, 2013 | Source: Daily Californian | Silicon Atoms Dance on Graphene Sheet. April 4, 2013 | Source: nano.com | Berkeley code captures retreating Antarctic ice. April 3, 2013 | Source: Phys.org |

242

NERSC File Systems  

NLE Websites -- All DOE Office Websites (Extended Search)

NERSC File Systems Overview: NERSC file systems can be divided into two categories, local and global. Local file systems are only accessible on a single platform, providing the best performance; global file systems are accessible on multiple platforms, simplifying data sharing between platforms. File systems are configured for different purposes. On each machine you have access to at least three different file systems. Home: permanent, relatively small storage for data like source code, shell scripts, etc. that you want to keep. This file system is not tuned for high performance from parallel jobs. Referenced by the environment variable $HOME. Scratch: large, high-performance file system. Place your large data files in this file system for capacity and capability computing. Data is
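The home-versus-scratch split above can be sketched as follows. Assumption: the scratch path is exposed through a $SCRATCH environment variable (a common HPC convention, not stated on this page), so the sketch falls back to a temporary directory when it is unset.

```shell
# Keep small, permanent files in $HOME; stage large job data on scratch.
SCRATCH=${SCRATCH:-$(mktemp -d)}        # fall back for illustration
printf 'large job input\n' > "$SCRATCH/run01.dat"
echo "home    (permanent, small) : $HOME"
echo "scratch (large, fast I/O)  : $SCRATCH"
ls "$SCRATCH"
```

Job scripts would read and write their bulk data under $SCRATCH and copy only results worth keeping back to $HOME or an archive system.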

243

Subversion at NERSC  

NLE Websites -- All DOE Office Websites (Extended Search)

Subversion Prerequisites: In order to create Subversion (SVN) repositories on the NERSC Global File System (NGF), a user must first have a project directory. After the project directory is set up, access to the directory can be controlled in the NERSC Information Management system (NIM) by the project's principal investigator (PI). Users who create repositories must have write access to the project directory. Once the user has a project directory, the SVN repository can be created under that project directory. For example, a valid repository path would be something like /project/projectdirs/MyProjectDirectory/MySVNRepo. Subversion and CVS repositories can be made available via the web using a software package called ViewVC. The recommended version of ViewVC can be
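A sketch of the creation step, using the example path from this page (the checkout target wc is a hypothetical working-copy name, and this assumes svnadmin is on your path):

```
# create the repository under the project directory
svnadmin create /project/projectdirs/MyProjectDirectory/MySVNRepo

# check out a working copy through the file:// scheme
svn checkout file:///project/projectdirs/MyProjectDirectory/MySVNRepo wc
```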

244

CVS at NERSC  

NLE Websites -- All DOE Office Websites (Extended Search)

CVS Prerequisites: In order to create CVS repositories on the NERSC Global File System (NGF), a user must first have a project directory. After the project directory is set up, access to the directory can be controlled in the NERSC Information Management system (NIM) by the project's principal investigator (PI). Users who create repositories must have write access to the project directory. Once the user has a project directory, the CVS repository can be created under that project directory. For example, a valid repository path would be something like /project/projectdirs/MyProjectDirectory/MyCVSRepo. Subversion and CVS repositories can be made available via the web using a software package called ViewVC. The recommended version of ViewVC can be

245

NERSC Jack Deslippe  

NLE Websites -- All DOE Office Websites (Extended Search)

BerkeleyGW at NERSC, Jack Deslippe. Part 1: Intro to GW/BSE. DFT Kohn-Sham Formulation: minimize the energy functional by solving the Kohn-Sham equations. The total energy is exact so long as the approximation for Vxc is good. Commonly used are the Local Density Approximation (LDA) and Gradient Approximations (GGA); hybrid functionals, etc. Kohn, W.; Sham, L. J. Phys. Rev. 1965, 140, A1133. Interpretation of KS Eigenvalues

246

NERSC 1998 annual report  

Science Conference Proceedings (OSTI)

This 1998 annual report from the National Scientific Energy Research Computing Center (NERSC) presents the year in review of the following categories: Computational Science; Computer Science and Applied Mathematics; and Systems and Services. Also presented are science highlights in the following categories: Basic Energy Sciences; Biological and Environmental Research; Fusion Energy Sciences; High Energy and Nuclear Physics; and Advanced Scientific Computing Research and Other Projects.

Hules, John (ed.)

1999-03-01T23:59:59.000Z

247

NERSC-FE.pptx  

NLE Websites -- All DOE Office Websites (Extended Search)

March 19, 2013. NERSC Overview. NERSC History: 1974, founded at Livermore to support fusion research with a CDC system; 1978, Cray 1 installed; 1983, expanded to support today's DOE Office of Science; 1986, ESnet established at NERSC; 1994, Cray T3D MPP testbed; 1994-2000, transitioned users from vector processing to MPP; 1996, moved to Berkeley Lab; 1996, PDSF data-intensive computing system for nuclear and high energy physics; 1999, HPSS becomes mass storage platform; 2006, facility-wide filesystem; 2010, collaboration with JGI. Cray 1 (1978), Cray 2 (1985), Cray T3E Mcurie (1996), IBM Power3 Seaborg (2001). NERSC collaborates with computer companies to deploy advanced HPC and data resources. We employ experts in high performance computing

248

Large File System Backup: NERSC Global File System Experience  

E-Print Network (OSTI)

Large File System Backup NERSC Global File System ExperienceNational Laboratory Abstract NERSCs Global File system (from all compute systems at NERSC, holds files and data from

Mokhtarani, Akbar

2008-01-01T23:59:59.000Z

249

NERSC Intro-v2.ppt  

NLE Websites -- All DOE Office Websites (Extended Search)

Status and Update. Bill Kramer, NERSC General Manager, kramer@nersc.gov. NERSC User Group Meeting, September 17, 2007. NERSC: A DOE Facility for the Future of Science. NERSC is the #7 priority: ".... NERSC ... will ... deploy a capability designed to meet the needs of an integrated science environment combining experiment, simulation, and theory by facilitating access to computing and data resources, as well as to large DOE experimental instruments. NERSC will concentrate its resources on supporting scientific challenge teams, with the goal of bridging the software gap between currently achievable and peak performance on the new terascale platforms." (page 21) NERSC is part of the #2 priority - Ultra Scale Scientific Computing Capability

250

How are we doing? A self-assessment of the quality of services and systems at NERSC (October 1, 1996--September 30, 1997)  

Science Conference Proceedings (OSTI)

Since its inception nearly 25 years ago, the National Energy Research Scientific Computing Center has provided its ever-expanding client base with the latest in scientific computing resources. A key element of NERSC's successful operation is its ability to anticipate and meet the diverse needs of clients. In order to further this strong working relationship, NERSC staff and clients meet periodically via ERSUG to share views, offer training and identify problems and solutions. The success of NERSC is measured in large part by the quality of science produced by its clients. NERSC's job is to give them the reliable tools they need -- client support, software and access to computing resources. To ensure that those needs are being met, a set of 10 performance goals pertaining to NERSC systems and service has been established. The goals that have been set out cover the following areas: Reliable and timely service; Innovative assistance; Timely and accurate information; New technologies; Wise technology integration; Progress measurement; High-performance computing center leadership; Technology transfer; Staff effectiveness; and Protected infrastructure. This report, covering work from October 1996 through September 1997, has been produced to give NERSC clients, sponsors and staff a better idea of how NERSC is performing.

Kramer, W.T.

1998-01-01T23:59:59.000Z

251

Argonne TDC: Partnering with Argonne  

Both the partnering organization and Argonne contribute to the costs of the R&D and share the results. ... U.S. Department of Energy Office of Science ...

252

Argonne TDC: Working with Argonne  

Some federal and state programs do, however, and companies may use such funding to do R&D with Argonne. Regional Economic Development - Local Links

253

Annual Planning Summaries: Argonne Site Office (Argonne) | Department...  

Energy.gov (U.S. Department of Energy (DOE)) Indexed Site

Argonne Site Office (Argonne) Annual Planning Summaries: Argonne Site Office (Argonne) Document(s) Available For Download January 9, 2012 2012 Annual Planning Summary for Argonne...

254

Biological Sciences at NERSC  

NLE Websites -- All DOE Office Websites (Extended Search)

Biological Sciences. Better knowledge of biomolecules and the processes they undergo is vital for achieving a predictive, systems-level understanding of complex biological systems that have potential use in bioenergy, carbon cycling and biosequestration, and biogeochemistry. Areas that NERSC helps to enable include: research activities using genomics and systems biology to understand plants and microbes; developing and applying atomistic-molecular to coarse-grained mathematical models of potential energy surfaces, characterizing these surfaces through sampling techniques and finally generating ensemble or time-averaged physical properties of biological phenomena; fundamental research in the redesign of microbial metabolic processes to harness their potential in the conversion of biomass to

255

Argonne Today  

NLE Websites -- All DOE Office Websites (Extended Search)

Argonne Today. Wednesday, June 6, 2007. Seminars: Submit seminar listings to seminars@anl.gov. There are no seminars scheduled today. Thursday, June 7: High Energy Physics Division Astrophysics Luncheon: "VERITAS - History, Status and First Results" by Deirdre Horan (HEP). Noon, Building 213 Cafeteria Private Dining Room A. Science update: Wakefield facility achieves acceleration milestone. Scientists at the Argonne Wakefield Accelerator facility are developing advanced technologies relevant to future high-energy physics machines. Their main goal is to identify and develop acceleration methods that may lead to more efficient, compact, and inexpensive particle accelerators. The method being pursued by the Argonne group is electron beam-driven wakefield acceleration in dielectric loaded structures, where a high-charge electron beam excites a high acceleration gradient.

256

NERSC Users Showered With Accolades - NERSC Center News, Apr...  

NLE Websites -- All DOE Office Websites (Extended Search)

29, 2011 April brought in a shower of accolades to longtime NERSC users as chemist Martin Head-Gordon was elected to the American Academy of Arts and Sciences, Darleane Hoffman...

257

NERSC Honored for Innovative Use of Globus Online- NERSC Center...  

NLE Websites -- All DOE Office Websites (Extended Search)

NERSC Cited for Innovative Use of Globus Online. Users Benefit from Drag-and-drop Archiving. April 14, 2011. Jon Bashor, jbashor@lbl.gov, +1...

258

NERSC Systems History  

NLE Websites -- All DOE Office Websites (Extended Search)

History of Systems. Established in 1974 at Lawrence Livermore National Laboratory, NERSC was moved to Berkeley Lab in 1996 with a goal of increased interactions with the UC Berkeley campus. NERSC Systems (fields per system, where listed: installed, system type, CPU type and speed, nodes, SMP size, total cores, aggregate memory, average memory/CPU, interconnect, disk, Linpack HPL/Top500 rank, peak GFlops/s):
Edison: 2013, Cray XC30, Xeon 12-Core 2.3 GHz, 5,200 nodes, SMP size 24, 124,800 cores, 332,800 GB memory, 2.67 GB/CPU
Hopper: 2010, Cray XE6, Opteron Hex-Core 2.1 GHz, 6,384 nodes, SMP size 24, 153,216 cores, 216,832 GB memory, 1.3 GB/CPU, Gemini interconnect, 2,000 TB disk, Linpack 1,054,000 (rank 5), peak 1,054,000
Carver: 2010, IBM iDataPlex, Intel Nehalem Quad-Core 2.6 GHz, 400 nodes, SMP size 8, 3,200 cores, 9,600 GB memory, 3 GB/CPU, 4X QDR InfiniBand interconnect, NGF disk, 36,856 (rank 322), 42,656

259

GNU Compilers at NERSC  

NLE Websites -- All DOE Office Websites (Extended Search)

GNU Compilers (Fortran, C, and C++). Availability: The GNU/GCC Fortran, C, and C++ compilers are available on all NERSC systems. Current NERSC GNU/GCC bugs are listed at GNU bugs.
On Hopper, use the following:
% module swap PrgEnv-pgi PrgEnv-gnu
On Edison, use this:
% module swap PrgEnv-intel PrgEnv-gnu
On Carver, type the following:
% module unload pgi openmpi
% module load gcc openmpi-gcc
Package / Platform / Category / Version / Module / Install Date / Date Made Default:
GCC carver compilers/programming 4.4.2 gcc/4.4.2 2010-02-08 2012-01-13
GCC carver compilers/programming 4.5.2 gcc/4.5.2 2012-01-13
GCC carver compilers/programming 4.6.1 gcc/4.6.1 2012-01-13
GCC carver compilers/programming 4.7.0 gcc/4.7.0 2012-03-27 2012-04-11
GCC carver compilers/programming 4.7.3 gcc-sl6/4.7.3 2013-10-24 2013-10-24
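The per-system commands above can be combined into a quick sanity check. This is a sketch, not official NERSC documentation: the verification steps assume the Cray ftn/cc/CC wrappers dispatch to whichever programming environment is loaded, as described in the Intel-compiler record elsewhere on this page.

```shell
# Hopper: swap from the default PGI environment to GNU.
module swap PrgEnv-pgi PrgEnv-gnu
# Edison: the default environment is Intel instead.
# module swap PrgEnv-intel PrgEnv-gnu

# On the Cray systems the compiler wrappers should now invoke GCC;
# confirm which compiler the wrapper resolves to (assumption: the
# wrapper forwards --version to the underlying compiler).
cc --version

# Carver (IBM iDataPlex cluster) uses plain load/unload instead:
# module unload pgi openmpi
# module load gcc openmpi-gcc
# mpicc --version
```

The commands are site-specific (they require the `module` environment on the NERSC machines), so treat this as a command transcript rather than a portable script.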

260

Instrumented SSH on NERSC Systems  

NLE Websites -- All DOE Office Websites (Extended Search)

Security » Instrumented SSH. Instrumented SSH on NERSC Systems. NERSC uses a modified version of SSH on all of our systems that allows us to record and analyze the content of interactive SSH sessions. Why are We Doing This? Credential theft represents the single greatest threat to security here at NERSC. We are addressing this problem by analyzing user command activity and looking for behavior that is recognizably hostile. Until SSH came into widespread use, it was trivial to monitor login sessions and analyze them for mischievous activity. Furthermore, this kind of intrusion detection proved to be very effective with few "false positives". Using this version of SSH at NERSC, we are simply recovering that capability. However, we recognize the importance of being candid about

Note: This page contains sample records for the topic "nersc argonne leadership" from the National Library of EnergyBeta (NLEBeta).
While these samples are representative of the content of NLEBeta,
they are not comprehensive nor are they the most current set.
We encourage you to perform a real-time search of NLEBeta
to obtain the most current and comprehensive results.


261

2006 NERSC User Survey Results  

NLE Websites -- All DOE Office Websites (Extended Search)

2006 User Survey Results Table of Contents Survey Results Users are invited to provide overall comments about NERSC: Here are the survey results: Respondent Demographics Overall Satisfaction and Importance All Satisfaction, Importance and Usefulness Ratings All Usefulness Topics Hardware Resources Software Visualization and Data Analysis HPC Consulting Services and Communications Web Interfaces Training Comments about NERSC Survey Results Many thanks to the 256 users who responded to this year's User Survey. This represents a response rate of about 13 percent of the active NERSC users. The respondents represent all six DOE Science Offices and a variety of home institutions: see Respondent Demographics. The survey responses provide feedback about every aspect of NERSC's

262

1999 NERSC User Survey Results  

NLE Websites -- All DOE Office Websites (Extended Search)

1999 User Survey Results Table of Contents Respondent Summary Overall Satisfaction User Information Visualization Consulting and Account Support Information Technology and Communication Hardware Resources Software Training Comments about NERSC All Satisfaction Questions and FY 1998 to FY 1999 Changes Respondent Summary NERSC would like to thank all the users who participated in this year's survey. Your responses provide feedback about every aspect of NERSC's operation, help us judge the quality of our services, give DOE information on how well NERSC is doing, point us to areas we can improve, and show how we compare to similar facilities. This year 177 users responded to our survey, compared with 138 last year.

263

Richard Gerber! NERSC User Services  

NLE Websites -- All DOE Office Websites (Extended Search)

refine requirements for 2017 - Case study worksheets - Discussions at this meeting - Post-meeting refinement of case studies * NERSC editors (Richard & Harvey) -...

264

NERSC Training at SC11  

NLE Websites -- All DOE Office Websites (Extended Search)

NERSC staff will be participating in a number of tutorials Nov. 13-14 at SC11 in Seattle. S10: Scaling to Petascale and Beyond: Performance Analysis and Optimization of...

265

Edison, NERSC's Cray Cascade System  

NLE Websites -- All DOE Office Websites (Extended Search)

Edison Phase I - Retired 6/24/2013. Edison Phase I system was retired on 06/24/2013 for the Phase II installation. NERSC's newest supercomputer, named Edison after U.S....

266

NERSC Training at SC11  

NLE Websites -- All DOE Office Websites (Extended Search)

Made Simple S13: HPC Archive Solutions Made Simple Sunday, Nov. 13 1:30-5:00 Alan Powers, CSC Jason Hick, NERSC Matt Cary, NASA Advanced Simulation Facility http:...

267

NERSC 2010 Initial Allocation Awards  

NLE Websites -- All DOE Office Websites (Extended Search)

m1132 of the Structure and Reactivity of the Molecular Constituents of Oil Sand and Oil Shale Paropkari, Viraj NERSC 83914 cgpgpu S 5,000 1,000 GPU cluster for High gpgpu...

268

The NERSC Global File System  

NLE Websites -- All DOE Office Websites (Extended Search)

impact of computational science: "as easy as online banking" * NEWT - NERSC Web Toolkit API - Building blocks for science on the web - Write a Science Gateway by using HTML +...

269

2012 NERSC User Survey Results  

NLE Websites -- All DOE Office Websites (Extended Search)

operation, help us judge the quality of our services, give DOE information on how well NERSC is doing, and point us to areas we can improve. The survey strives to be...

270

2001 NERSC User Survey Results  

NLE Websites -- All DOE Office Websites (Extended Search)

2001 User Survey Results Table of Contents Response Summary User Information Overall Satisfaction and Importance All Satisfaction Questions and Changes from Previous Years NERSC Information Management (NIM) System Web and Communications Hardware Resources Software Training User Services Comments about NERSC Response Summary NERSC extends its thanks to the 237 users who participated in this year's survey; this compares with 134 respondents last year. The respondents represent all five DOE Science Offices and a variety of home institutions: see User Information. Your responses provide feedback about every aspect of NERSC's operation, help us judge the quality of our services, give DOE information on how well

271

ScaLAPACK at NERSC  

NLE Websites -- All DOE Office Websites (Extended Search)

includes a subset of LAPACK routines redesigned for distributed memory MIMD parallel computers. How to Access ScaLAPACK There are several installations of ScaLAPACK at NERSC and...

272

Mark Peters | Argonne National Laboratory  

NLE Websites -- All DOE Office Websites (Extended Search)

About: Core Capabilities, Leadership, Message from the Director, Board of Governors, Organization Chart, Argonne Distinguished Fellows, Emeritus Scientists & Engineers, History, Discoveries, Prime Contract, Contact Us. Mark Peters, Deputy Laboratory Director for Programs. Dr. Mark Peters is the Deputy Laboratory Director for Programs at Argonne National Laboratory. He is responsible for the management and integration of the Laboratory's science and technology portfolio, strategic planning, Laboratory Directed Research and Development (LDRD) program and technology transfer. Dr. Peters also serves as a senior advisor to the Department of Energy on nuclear energy technologies and research and development programs, and nuclear waste policy.

273

Richard Gerber! NERSC User Services NUG Teleconference  

NLE Websites -- All DOE Office Websites (Extended Search)

PIN: 4866820. Topics: * Edison Update * NUG 2014 Annual Usage Group Meeting * 2014 NISE Call Coming * NERSC Achievement Awards * Queue Committee * NERSC "...

274

nersc-for-hep-theory.pptx  

NLE Websites -- All DOE Office Websites (Extended Search)

Ph.D., NERSC User Services. High Performance Computing and Big Data at NERSC. April 3, 2013. Outline: * What is NERSC? * Who runs at NERSC? * Can you use/do you n...

275

Cray LibSci at NERSC  

NLE Websites -- All DOE Office Websites (Extended Search)

about using LibSci at NERSC please contact the consultants at consult@nersc.gov Troubleshooting Cray has renamed the multi-threaded libsci libraries to help the performance of...

276

Cray to Install Cascade System at NERSC  

NLE Websites -- All DOE Office Websites (Extended Search)

Cray to Install Cascade System at NERSC. June 27, 2012, by Richard Gerber. Cray will install a next-generation supercomputer...

277

NERSC Signs Supercomputing Agreement with Cray  

NLE Websites -- All DOE Office Websites (Extended Search)

NERSC Signs Supercomputing Agreement with Cray. June 27, 2012. NERSC Contact: Linda Vu, lvu@lbl.gov, +1 510 495 2402. Cray Media: Nick Davis...

278

2011 NERSC User Survey (Read Only)  

NLE Websites -- All DOE Office Websites (Extended Search)

Results Survey Text 2011 NERSC User Survey (Read Only) The survey is closed. Section 1: Overall Satisfaction with NERSC When you are finished with this page click "Save & Go to...

279

Procurement | Argonne National Laboratory  

NLE Websites -- All DOE Office Websites (Extended Search)

Procurement: "Doing business with Argonne and Fermi national labs" - Aug. 21, 2013. Argonne and...

280

Argonne Accelerator Institute  

NLE Websites -- All DOE Office Websites (Extended Search)

FERMILAB Collaboration Webpage Argonne-Fermilab Collaboration Visitors to Argonne All Fermilab participants need an approved gate pass to access the Argonne Site. Please fill out...

281

NERSC Annual Report 2005  

E-Print Network (OSTI)

Lab); Robert Sanders (UC Berkeley); Mario Aguilera and Cindy Clark (Scripps Institution of Oceanography); Karen McNulty Walsh (Brookhaven National Labs Universities National Center for Atmospheric Research Sandia Oak Ridge Argonne National Renewable Energy Stanford Linear Accelerator Center 477,387 Brookhaven

Hules Ed., John

2006-01-01T23:59:59.000Z

282

LAMMPS at NERSC  

NLE Websites -- All DOE Office Websites (Extended Search)

LAMMPS. Description: LAMMPS is a large-scale classical molecular dynamics code, and stands for Large-scale Atomic/Molecular Massively Parallel Simulator. LAMMPS has potentials for soft materials (biomolecules, polymers), solid-state materials (metals, semiconductors) and coarse-grained or mesoscopic systems. It can be used to model atoms or, more generically, as a parallel particle simulator at the atomic, meso, or continuum scale. How to Access LAMMPS: NERSC uses modules to manage access to software. To use the default version of LAMMPS, type: % module load lammps Using LAMMPS on Hopper: There are two ways of running LAMMPS on Hopper: submitting a batch job, or running interactively in an interactive batch session. Sample Batch Script to Run LAMMPS on Hopper
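The snippet cuts off before the sample script itself. A minimal sketch of what such a PBS batch script could look like, modeled on the CPMD example that appears in a later record on this page; the executable name (lmp_hopper), core count, and input/output file names are assumptions, not taken from the record:

```shell
#PBS -N lammps_job
#PBS -q regular
#PBS -l mppwidth=24
#PBS -l walltime=02:00:00
#PBS -j oe
#PBS -V

cd $PBS_O_WORKDIR
module load lammps
# Executable name is an assumption; check `module show lammps`
# on the target system for the actual binary name.
aprun -n 24 lmp_hopper < in.lj > log.lj
```

This is a configuration fragment for a Cray XE6-style PBS/aprun setup; queue names and the mppwidth syntax follow the CPMD record, not independent documentation.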

283

CPMD at NERSC  

NLE Websites -- All DOE Office Websites (Extended Search)

CPMD. Description: CPMD is a plane wave/pseudopotential DFT code for ab initio molecular dynamics simulations. How to Access CPMD: NERSC uses modules to manage access to software. To use the default version of CPMD, type: % module load cpmd Using CPMD on Hopper: There are two ways of running CPMD on Hopper: submitting a batch job or running interactively in an interactive batch session. Sample Batch Script for CPMD on Hopper:
#PBS -N myjob
#PBS -q regular
#PBS -l mppwidth=16
#PBS -l walltime=08:00:00
#PBS -j oe
#PBS -V
cd $PBS_O_WORKDIR
module load cpmd
aprun -n 16 cpmd.x test.in [PP-path] > test.out
where PP-path is the directory where the pseudopotential file resides. Then submit the job script using the qsub command, e.g., assume the job script name is test_cpmd.pbs,
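The record truncates just as it reaches the submission step. A minimal sketch of the likely continuation, assuming the standard PBS `qsub`/`qstat` commands (the script filename comes from the record; the monitoring step is an addition):

```shell
# Submit the batch script from the example above:
qsub test_cpmd.pbs
# Monitor your jobs in the queue:
qstat -u $USER
```

This is a command transcript for a PBS-managed system, not a portable script.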

284

NERSC Requirements Workshop November  

NLE Websites -- All DOE Office Websites (Extended Search)

Requirements Workshop November 2009. Lattice gauge theory and some other HE theory. Doug Toussaint (University of Arizona). Help from: Paul Mackenzie (Fermilab). Crude comparison of lattice hadron spectrum to the real world. Lattice Gauge Theory: First-principles computations in QCD. Also, computations in other strongly coupled field theories. * Find hadronic factors to get fundamental physics from experiments * Understand structure and interactions of hadrons, maybe even nuclei * Understand QCD: confinement and chiral symmetry breaking * Other strongly interacting theories (what if we don't find the Higgs?) * Quark-gluon matter at high temperatures (RHIC, LHC, early universe) or high densities (neutron stars) HEP theory projects at NERSC now: * Production and analysis of QCD configurations with dynamical quarks (Doug Toussaint) (MILC collaboration) * Heavy quarks, using

285

Intel Compilers at NERSC  

NLE Websites -- All DOE Office Websites (Extended Search)

Intel (Fortran, C, and C++). Description: The Intel® compiler suite offers C, C++ and Fortran compilers with optimization features and multithreading capabilities, highly optimized performance libraries, and error-checking, security, and profiling tools. NERSC Intel compiler bugs are listed at Intel bugs. Availability: The Intel compiler suite is available on Edison, Carver, and Hopper. It is the default compiler on Edison. Using the Intel Compilers on Edison: The Intel compiler suite is the default on Edison. When you use the Cray ftn, cc, and CC wrappers, they will call the Intel compilers. Using the Intel Compilers on Carver: To use the Intel compilers you must swap both the compiler and the OpenMPI modulefiles. Do this in the following way:
% module unload pgi openmpi
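The Carver instructions truncate after the unload step. A hedged sketch of the likely completion: the Intel-side module names below are assumptions made by analogy with the GNU record's "gcc openmpi-gcc" pair on this page, so verify them with `module avail` on the actual system.

```shell
# Carver: swap from the default PGI stack to Intel.
module unload pgi openmpi
# Module names below are assumptions (by analogy with the GNU
# entry's gcc/openmpi-gcc pair); confirm with `module avail`.
module load intel openmpi-intel
# The MPI wrapper should now report the Intel Fortran compiler:
mpif90 --version
```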

286

NERSC In the News  

NLE Websites -- All DOE Office Websites (Extended Search)

Supernova to be visible for 2 nights. September 8, 2011 | Author(s): David Perlman | Source: SF Gate | 'Instant Cosmic Classic' Supernova Discovered. August 26, 2011 | Source: Slashdot | 3D Map To Compute Matter Distribution In Universe - SDSS Aftermath. January 12, 2012 | Source: CrazyEngineers VoiCE | A Stellar Explosion In The Big Dipper's Handle. September 3, 2011 | Author(s): Scott Simon | Source: NPR | Anti-Helium Discovered in the Heart of STAR. April 25, 2011 | Source: Red Orbit | Eighteen examples of the heaviest antiparticle ever found, the nucleus of antihelium-4, have been made in the STAR experiment at RHIC, the Relativistic Heavy Ion Collider at the U.S. Department of Energy's Brookhaven National Laboratory. NERSC provides computing resources to the

287

NERSC System Reports  

NLE Websites -- All DOE Office Websites (Extended Search)

Reports. Usage Reports: Batch Job Statistics (see queue wait times, hours used, top users and other summary statistics for jobs run at NERSC; login required); Parallel Job Statistics (Cray aprun); Hopper Hours Used (hours used per day on Hopper); Edison Hours Used (hours used per day on Edison); Carver Hours Used (hours used per day on Carver). Historical Data: Hopper Job Size Charts. This chart shows the fraction of hours used on Hopper in each of 5 job-core-size bins (2013, 2012, 2011). This chart shows the fraction of hours used on Hopper by jobs using greater than 16,384 cores (2013, 2012). Edison Job Size Charts. This chart shows the fraction of hours used on Edison in each of 5

288

FFTW Library at NERSC  

NLE Websites -- All DOE Office Websites (Extended Search)

FFTW. Description: FFTW is a C subroutine library for computing the discrete Fourier transform (DFT) in one or more dimensions, of arbitrary input size, and of both real and complex data (as well as of even/odd data, i.e. the discrete cosine/sine transforms). Both FFTW Version 2 and FFTW Version 3 are available on NERSC systems, but there are significant differences between these two versions, as shown below. NOTE: As of July 2013, Version 3 is the default on Hopper! Version 3 will be the default on Edison Phase 2. Version 2 remains the default on Carver. Differences between Version 2 and Version 3: Different names for include files; Different names for FFTW routines; Different arguments for FFTW routines; Different data structure methods for fftw_complex; Different approach to plan preparation and execution
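The module-based access pattern used throughout these records presumably applies to FFTW as well. A hedged sketch for picking between the two installed major versions; the exact version strings are assumptions, so list what is actually installed before loading:

```shell
# See which FFTW modules exist on this system before choosing one:
module avail fftw
# Load the system default (per the text: Version 3 on Hopper as of
# July 2013, Version 2 on Carver):
module load fftw
# Or pin a major version explicitly if your code uses the old API.
# The "2.1.5" string here is an assumption, not from the record:
# module load fftw/2.1.5
```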

289

Mathematica at NERSC  

NLE Websites -- All DOE Office Websites (Extended Search)

Mathematica. Description and Overview: Mathematica is a fully integrated environment for technical computing. It performs symbolic manipulation of equations, integrals, differential equations, and most other mathematical expressions. Numeric results can be evaluated as well. How to Use Mathematica: To use the default version of Mathematica, use
% module load mathematica
Sometimes the Mathematica module will not load and gives this error message:
% module load mathematica
mathematica/5.2(31):ERROR:102: Tcl command execution failed: if [ module-info mode load ] { if [ llength [ array get env DISPLAY ] ] { exec xset fp+ tcp/jacquard.nersc.gov:7100 } set usgsbin /usr/common/usg/sbin exec $usgsbin/libdate -m \ -f /usr/common/usg/spool/modules.log \ [ exec $usgsbin/safelogname ] MATHEMATICA_$usgversion}

290

AMBER at NERSC  

NLE Websites -- All DOE Office Websites (Extended Search)

AMBER. Description: AMBER (Assisted Model Building with Energy Refinement) is the collective name for a suite of programs designed to carry out molecular mechanical force field simulations, particularly on biomolecules (see Amber force fields). AMBER consists of about 50 programs. Two major ones are: sander, simulated annealing with NMR-derived energy restraints; and pmemd, an extensively-modified version of sander, optimized for periodic PME simulations and for GB simulations. It is faster than sander and scales better on parallel machines. How to Access AMBER: NERSC uses modules to manage access to software. To use the default version of AMBER, type: % module load amber To see where the amber executables reside (the bin directory) and what environment variables it defines, type
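The snippet truncates just before naming the inspection command. A minimal sketch of the likely continuation, assuming the standard `module show` subcommand of the environment-modules tool; the `which` check is an addition using the executable names from the record:

```shell
# Load the default AMBER module (command from the record above):
module load amber
# Inspect what the module sets -- the bin directory and environment
# variables (assumes the standard "module show" subcommand):
module show amber
# Locate the two main executables named in the text:
which sander pmemd
```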

291

Accessing HPSS at NERSC  

NLE Websites -- All DOE Office Websites (Extended Search)

Accessing HPSS. Once you have successfully generated an HPSS token you can access NERSC's HPSS in the different ways listed below. HSI and HTAR are usually the best ways to transfer data in and out of HPSS, but other methods are also included.
HSI. When to use: when a full-featured unix-like interface is desired. Features: high performance (parallel); unix-like user interface; firewall mode. Limitations: client is specific to HPSS version and might not work at other sites.
HTAR. When to use: when you have large collections of smaller (~10 MB or less) files. Features: high performance (parallel); creates a tar file directly in HPSS along with an index file; more efficient for large collections of files. Limitations: same client limitations as HSI; also does not have firewall mode, so using it from a remote site with a firewall will require modification of firewall rules
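The HSI/HTAR guidance above can be sketched as a short session. The file and directory names are hypothetical, and exact client options can vary by HPSS client version:

```shell
# HSI: interactive, unix-like transfers into HPSS.
hsi "put bigfile.dat"          # store a single large file
hsi "get bigfile.dat"          # retrieve it later

# HTAR: bundle many small files into one tar archive (plus an
# index file) created directly inside HPSS.
htar -cvf mydata.tar my_small_files_dir/
htar -tvf mydata.tar           # list contents via the index
htar -xvf mydata.tar somefile  # extract a single member
```

These commands require the site's HSI/HTAR clients and a valid HPSS token, so they are a transcript rather than a runnable script.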

292

Workforce Pipeline | Argonne National Laboratory  

NLE Websites -- All DOE Office Websites (Extended Search)

Diversity: Message from the Lab Director, Diversity & Inclusion Advisory Council, Workforce Pipeline, Mentoring, Leadership Development, Policies & Practices, Business Diversity, Outreach & Education. In the News: High school workshop invites girls to explore STEM possibilities (Daily Herald); EcoCAR 2 competition drives auto engineers to excel (Yuma (Ariz.) Sun); Mississippi universities collaborate with national labs (Mississippi Public Radio). Workforce Pipeline: Argonne seeks to attract, hire and retain a diverse set of talent in order to meet the laboratory's mission of excellence in science, engineering and technology. In order for Argonne to continue to carry out world-class science, the lab needs to seek out the best talent. Today, that talent is increasingly diverse. Argonne fosters an environment that welcomes and values a diverse

293

Life at Argonne | Argonne National Laboratory  

NLE Websites -- All DOE Office Websites (Extended Search)

Benefits Apply for a Job Connect with Argonne LinkedIn Facebook Twitter YouTube Google+ More Social Media Life at Argonne What's it like to work at Argonne? You've come...

294

Ian T. Foster | Argonne National Laboratory  

NLE Websites -- All DOE Office Websites (Extended Search)

Ian T. Foster Ian T. Foster Director of the Computation Institute & Argonne Distinguished Fellow Ian Foster is Director of the Computation Institute, a joint institute of the University of Chicago and Argonne National Laboratory. He is also an Argonne Senior Scientist and Distinguished Fellow and the Arthur Holly Compton Distinguished Service Professor of Computer Science. Methods and software developed under his leadership underpin many large national and international cyberinfrastructures. Foster's awards include the Global Information Infrastructure (GII) Next Generation award, the British Computer Society's Lovelace Medal, R&D Magazine's Innovator of the Year, and an honorary doctorate from the University of Canterbury, New Zealand. He is a Fellow of the American

295

Diversity & Inclusion | Argonne National Laboratory  

NLE Websites -- All DOE Office Websites (Extended Search)

Diversity: Message from the Lab Director, Diversity & Inclusion Advisory Council, Workforce Pipeline, Mentoring, Leadership Development, Policies & Practices, Business Diversity, Outreach & Education. The African American/Black Club promotes the richness and diversity of African American/Black cultures and provides professional networking opportunities for the African American/Black community and all Argonne employees. The Hispanic Latino Club promotes the richness and diversity of Hispanic/Latino cultures and provides professional networking opportunities that benefit the Hispanic/Latino community and all Argonne employees. Women in Science and Technology (WIST) aims to promote the success of women in scientific and technical positions at Argonne. Diversity & Inclusion

296

NERSC-ScienceHighlightsJuly2013.ppt  

NLE Websites -- All DOE Office Websites (Extended Search)

July 2013 NERSC Science Highlights. NERSC User Science Highlights: Materials: Model is able to predict which of a million or so potential materials might be best for carbon capture (B. Smit, LBNL). Materials: NERSC collaboration yields software that is a key enabler in the high-throughput computational materials science initiative (S. Ong, MIT). Climate: NERSC simulations contribute to a study finding that emission regulations reduced soot and climate change impact in California (W. Collins, LBNL). Climate: Independent confirmation of global land warming without the use of land thermometers (G. Compo, U. Colorado). Nuclear Physics: NERSC resources aid worldwide collaboration that discovers neutrinos of unprecedented energy (L. Gerhardt, LBNL). Chemistry

297

Argonne User Facility Agreements | Advanced Photon Source  

NLE Websites -- All DOE Office Websites (Extended Search)

Master proprietary agreement sample (pdf). Master non-proprietary agreement sample (pdf). Differences between non-proprietary and proprietary. Argonne's National User Facilities: Argonne Leadership Computing Facility (ALCF), Advanced Photon Source (APS), Argonne Tandem Linear Accelerator System (ATLAS), Center for Nanoscale Materials (CNM), Electron Microscopy Center (EMC). Argonne User Facility Agreements. About User Agreements: If you are not an Argonne National Laboratory employee, a user agreement signed by your home institution is a prerequisite for experimental work at any of Argonne's user facilities. The Department of Energy recently formulated master agreements that cover liability, intellectual property, and financial issues (access templates from the links in the left

298

Argonne TDC: Contact Us - Argonne National Laboratory  

How to Contact Us. For industrial inquiries such as information about working with Argonne, and the availability of Argonne technologies, please contact:

299

Argonne TDC: Caterpillar - Argonne National Laboratory  

A new facility at Argonne, ... Argonne has many types of contractual agreements to meet the needs and interests of industry, state and local governments, ...

300

NERSC Information Management (NIM) Guide for Users  

NLE Websites -- All DOE Office Websites (Extended Search)

NIM Guide for Users. Getting Started: The NERSC Information Management (NIM) system is a web portal that contains user, login name, usage, and allocations information. First Time Login: The first time you log in to NIM, you will have to change your NIM password. Your NIM password is also your NERSC password, and all NERSC passwords must abide by certain requirements. See Passwords. Logging In: To log into the NIM system, point your web browser to the URL http://nim.nersc.gov. Please use this URL as a bookmark for NIM. If you have bookmarked a different URL, you may have to log in twice. Enter your NERSC username and NIM password. If you are having problems with your NIM password contact the NERSC Account Support Office, 1-800-66NERSC, option 2. You can only have one active NIM session on your local computer. If you try

Note: This page contains sample records for the topic "nersc argonne leadership" from the National Library of EnergyBeta (NLEBeta).
While these samples are representative of the content of NLEBeta,
they are not comprehensive nor are they the most current set.
We encourage you to perform a real-time search of NLEBeta
to obtain the most current and comprehensive results.


301

NERSC Strategic Plan Is Now Online  

NLE Websites -- All DOE Office Websites (Extended Search)

Strategic Plan Strategic Plan Is Now Online NERSC Strategic Plan Is Now Online June 3, 2013 The NERSC Strategic Plan for FY2014-2023 is now available for download (PDF | 3.2MB). Requested by the DOE Office of Advanced Scientific Computing Research as input for ASCR's long-term planning, the strategic plan discusses NERSC's mission, goals, science drivers, planned initiatives, and technology strategy, among other topics. ≫More NERSC publications and reports. About NERSC and Berkeley Lab The National Energy Research Scientific Computing Center (NERSC) is the primary high-performance computing facility for scientific research sponsored by the U.S. Department of Energy's Office of Science. Located at Lawrence Berkeley National Laboratory, the NERSC Center serves more than

302

2004 NERSC User Survey Results  

NLE Websites -- All DOE Office Websites (Extended Search)

4 User Survey Results 4 User Survey Results Show All | 1 2 3 4 5 ... 13 | Next » 2004 User Survey Results Table of Contents Response Summary Respondent Demographics Overall Satisfaction and Importance All Satisfaction, Importance and Usefulness Ratings Hardware Resources Software Security and One Time Passwords Visualization and Data Analysis HPC Consulting Services and Communications Web Interfaces Training Comments about NERSC Response Summary Many thanks to the 209 users who responded to this year's User Survey. The respondents represent all six DOE Science Offices and a variety of home institutions: see Respondent Demographics. The survey responses provide feedback about every aspect of NERSC's operation, help us judge the quality of our services, give DOE information on how well NERSC is doing, and point us to areas we can improve. The

303

2005 NERSC User Survey Results  

NLE Websites -- All DOE Office Websites (Extended Search)

5 User Survey Results 5 User Survey Results Show All | 1 2 3 4 5 ... 10 | Next » 2005 User Survey Results Table of Contents Response Summary Respondent Demographics All Satisfaction, Importance and Usefulness Ratings Hardware Resources Software Visualization and Data Analysis Services and Communications Web Interfaces Training Comments about NERSC Response Summary Many thanks to the 201 users who responded to this year's User Survey. The respondents represent all six DOE Science Offices and a variety of home institutions: see Respondent Demographics. The survey responses provide feedback about every aspect of NERSC's operation, help us judge the quality of our services, give DOE information on how well NERSC is doing, and point us to areas we can improve. The survey results are listed below.

304

2000 NERSC User Survey Results  

NLE Websites -- All DOE Office Websites (Extended Search)

0 User Survey Results 0 User Survey Results Show All | 1 2 3 4 5 ... 10 | Next » 2000 User Survey Results Table of Contents Response Summary User Information Overall Satisfaction and Importance All Satisfaction Questions and FY 1999 to FY 2000 Changes Consulting and Account Support Web and Communications Hardware Resources Software Resources Training User Comments Response Summary NERSC extends its thanks to all the users who participated in this year's survey. Your responses provide feedback about every aspect of NERSC's operation, help us judge the quality of our services, give DOE information on how well NERSC is doing, and point us to areas we can improve. Every year we institute changes based on the survey; the FY 1999 survey resulted in the following changes: We created a long-running queue (12 hours maximum) for jobs using up

305

NERSC 2004 Initial Allocation Awards  

NLE Websites -- All DOE Office Websites (Extended Search)

4 Awards 4 Awards 2004 Initial Allocation Awards The following table lists the allocation awards for NERSC for the extended 2004 fiscal year (Oct 1, 2003 through Nov 30, 2004). The list is in alphabetical order by the last name of the Principal Investigator. Note - Letters following the repository name indicate the following: 'I' DOE ASCR INCITE award 'S' NERSC Startup award A B C D E F G H I J K L M N O P Q R S T U V W X Y Z Principal Site Request Repo SP HPSS Project Title Investigator Id Hours SRUs Agarwal, Deborah Berkeley Lab 80586 mpdsd 5,000 40,000 NERSC Distributed Systems Department Guest Accounts Ahmed, Musa Berkeley Lab 80530 m252 S 20,000 100 Chemical Dynamics and

306

Chemistry and Material Sciences Applications Training at NERSC April 5,  

NLE Websites -- All DOE Office Websites (Extended Search)

User Feedback JGI Intro to NERSC Data Transfer and Archiving Using the Cray XE6 Joint NERSC/OLCF/NICS Cray XT5 Workshop NERSC User Group Training Remote Setup Online Tutorials Courses NERSC Training Accounts Request Form Training Links OSF HPC Seminars Software Accounts & Allocations Policies Data Analytics & Visualization Data Management Policies Science Gateways User Surveys NERSC Users Group User Announcements Help Operations for: Passwords & Off-Hours Status 1-800-66-NERSC, option 1 or 510-486-6821 Account Support https://nim.nersc.gov accounts@nersc.gov 1-800-66-NERSC, option 2 or 510-486-8612 Consulting http://help.nersc.gov consult@nersc.gov 1-800-66-NERSC, option 3 or 510-486-8611 Home » For Users » Training & Tutorials » Training Events » Chemistry

307

Storing and Retrieving Data using HPSS at NERSC  

NLE Websites -- All DOE Office Websites (Extended Search)

http://help.nersc.gov consult@nersc.gov 1-800-66-NERSC, option 3 or 510-486-8611 Home » For Users » Data & File Systems » HPSS Data Archive » Storing and Retrieving...

308

An Analysis of Node Asymmetries on seaborg.nersc.gov  

Science Conference Proceedings (OSTI)

A description of work completed at NERSC over the past 6 months to identify and remedy asymmetries in the batch compute resources provided by NERSC's IBM SP seaborg.nersc.gov.

Skinner, David; Cardo, Nicholas

2003-11-01T23:59:59.000Z

309

NERSC Users Group meeting June 3-5, 2002  

NLE Websites -- All DOE Office Websites (Extended Search)

Charter User Announcements Help Operations for: Passwords & Off-Hours Status 1-800-66-NERSC, option 1 or 510-486-6821 Account Support https://nim.nersc.gov accounts@nersc.gov...

310

NERSC Users Group Meeting October 7-8, 2009 Agenda  

NLE Websites -- All DOE Office Websites (Extended Search)

8:30 Late Registration 8:50 Welcome and Introductions Stephane Ethier, NUG Chair 9:00 NERSC Accomplishments and Plans Kathy Yelick, NERSC 9:40 DOE Update Yukiko Sekine, NERSC...

311

NERSC Users Group Meeting April 8-9, 1998 Agenda  

NLE Websites -- All DOE Office Websites (Extended Search)

Breakfast 08:30 Welcome Ricky Kendall 08:35 View From Washington Tom Kitchens 08:55 New NERSC Staff Introductions Horst Simon 09:00 NERSC Research Efforts (Leverage to the NERSC...

312

NERSC Users Group Meeting June 12-13, 2006 Agenda  

NLE Websites -- All DOE Office Websites (Extended Search)

Stephane Ethier, PPPL 9:15 DOE Update Barbara Helland, DOE MICS 9:45 BREAK 10:00 NERSC Status and Five Year Plan 2006-2010 Bill Kramer, NERSC 11:15 BREAK 11:30 NERSC Metrics...

313

NERSC-ScienceHighlightSlidesSeptember2011v2.pptx  

NLE Websites -- All DOE Office Websites (Extended Search)

September 2011 NERSC User Scientific Highlights. NERSC User Scientific Accomplishments, Q3CY2011. Astrophysics: NERSC played a key role in the discovery that...

314

NERSC Users Group Meeting October 7-8, 2009  

NLE Websites -- All DOE Office Websites (Extended Search)

9:40 DOE Update Yukiko Sekine, NERSC Program Manager, DOE Office of Advanced Scientific Computing Research 10:00 Hopper, the new NERSC-6 System Jonathan Carter, NERSC 10:30...

315

Postdoctoral Society of Argonne - Mission  

NLE Websites -- All DOE Office Websites (Extended Search)

Argonne National Laboratory Educational Programs Search Argonne ... Search Argonne Home > Educational Programs > Welcome Type of Appointments Postdoctoral Newsletters Postdoctoral...

316

Postdoctoral Society of Argonne - Meetings  

NLE Websites -- All DOE Office Websites (Extended Search)

Argonne National Laboratory Educational Programs Search Argonne ... Search Argonne Home > Educational Programs > Welcome Type of Appointments Postdoctoral Newsletters Postdoctoral...

317

Energy Efficient Computing at NERSC  

NLE Websites -- All DOE Office Websites (Extended Search)

Green Flash Exascale Computing Performance & Monitoring Tools Petascale Initiative Science Gateway Development Storage and I/O Technologies Design Forward Home » R & D » Energy Efficient Computing Energy Efficient Computing Energy Monitoring to Improve Efficiency NERSC has instrumented its machine room with state-of-the-art wireless monitoring technology from SynapSense. To date, the center has installed 834 sensors, which gather information on variables important to machine room operation, including air temperature, pressure, and humidity. The sensors signal temperature changes and NERSC has already seen benefits in center reliability. For example, after cabinets of a large, decommissioned system were shut down and removed, cold air pockets developed near

318

NERSC-ProcessHarvey.pptx  

NLE Websites -- All DOE Office Websites (Extended Search)

Nuclear Physics Research Nuclear Physics Research NP / ASCR / NERSC Workshop May 26-27, 2011 Logistics: Schedule * Agenda on workshop web page - https://www.nersc.gov/science/requirements-workshops/nuclear-physics/ agenda - Need your presentation slides * Mid-morning / afternoon break, lunch * Self-organization for dinner Friday Morning Schedule * Some time available for case studies that don't finish on Thursday * Richard & Harvey summary * Further discussion case studies: how many (if needed); content - Initial table entries * Richard & Harvey available to help * Lunch available? Workshop Content * 6 "science areas," one workshop - Science-focused but cross-science discussion; Explore areas of common need - Low energy nuclear physics - Lattice QCD

319

NERSC Featured Announcements  

NLE Websites -- All DOE Office Websites (Extended Search)

OpenCL now Available on Dirac GPU Cluster OpenCL now Available on Dirac GPU Cluster October 30, 2013 by Francesca Verdier | 0 Comments As requested by many Dirac users, NERSC is migrating Dirac to a newer Linux version to enable capabilities such as OpenCL. We aim to convert the whole Dirac system to Scientific Linux 6.3 by Mid-November. We have migrated the beta nodes to Scientific Linux 6.3. Please run some test jobs to make sure your code will work in the new system. In particular, please test your MPI+Cuda codes in a multi-node environment. You can get to one of these nodes with: qsub -I -V -q dirac_int -l nodes=1:ppn=8:beta Or to get 2 nodes: qsub -I -V -q dirac_reg -l nodes=2:ppn=8:beta Inside the job you need to load the latest cuda and SL6 flavor of gcc: module unload cuda module load cuda/5.5
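The interactive-job and module commands quoted in the announcement above can be collected into one session sketch. The queue names (`dirac_int`, `dirac_reg`), the `beta` node property, and the `cuda/5.5` module version are taken directly from the announcement; everything here is specific to the Dirac migration period and is a sketch, not a general recipe:

```shell
# Request one interactive node carrying the 'beta' (Scientific Linux 6.3) property
qsub -I -V -q dirac_int -l nodes=1:ppn=8:beta

# Or request two beta nodes for multi-node MPI+CUDA testing
# qsub -I -V -q dirac_reg -l nodes=2:ppn=8:beta

# Inside the job: swap to the CUDA 5.5 stack before rebuilding and testing codes
module unload cuda
module load cuda/5.5
```

These commands only run on the Dirac cluster's login environment, where the TORQUE/PBS scheduler and the Modules system are available.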

320

Richard Gerber! NERSC Senior Science Advisor! NERSC Training...  

NLE Websites -- All DOE Office Websites (Extended Search)

10, 2013. The mission of the National Energy Research Scientific Computing Center (NERSC) is to accelerate scientific discovery at the DOE Office of S...

321

Argonne's Vulnerability  

NLE Websites -- All DOE Office Websites (Extended Search)

Finding and fixing security flaws: Argonne's Vulnerability Assessment Team. VAT researchers spend their workdays devising and demonstrating ways to defeat a wide variety of security devices, systems, and programs, ranging from electronic voting machines and global positioning systems (GPS) to nuclear safeguards programs and biometrics-based access control. This involves analyzing the security features, reverse-engineering the technology or

322

EFRC Carbon Capture and Sequestration Activities at NERSC  

NLE Websites -- All DOE Office Websites (Extended Search)

EFRC Carbon Capture and Sequestration Activities at NERSC EFRC Carbon Capture and Sequestration Activities at NERSC Why it Matters: Carbon dioxide (CO2) gas is considered to be...

323

Nominations Open for 2013 NERSC HPC Achievement Awards  

NLE Websites -- All DOE Office Websites (Extended Search)

Nominations Open for 2013 NERSC HPC Achievement Awards Nominations Open for 2013 NERSC HPC Achievement Awards January 1, 2013 by Richard Gerber (0 Comments) Nominations are open...

324

NERSC Supercomputers Help Explain the Last Big Freeze  

NLE Websites -- All DOE Office Websites (Extended Search)

NERSC Supercomputers Help Explain the Last Big Freeze NERSC Supercomputers Help Explain the Last Big Freeze January 31, 2013 | Tags: Biological and Environmental Research (BER),...

325

NERSC Supercomputers to Analyze Hurricane Coastal Surges, Help...  

NLE Websites -- All DOE Office Websites (Extended Search)

NERSC Supercomputers to Analyze Hurricane Coastal Surges, Help Plan Rebuilding in Louisiana, Gulf Coast NERSC Supercomputers to Analyze Hurricane Coastal Surges, Help Plan...

326

NERSC Initiative for Scientific Exploration (NISE) 2012 Awards  

NLE Websites -- All DOE Office Websites (Extended Search)

Awards NERSC Initiative for Scientific Exploration (NISE) 2012 Awards NISE is a mechanism used for allocating the NERSC reserve (10% of the total allocation). It is a competitive...

327

NERSC/DOE HPC Requirements Workshops Case Study FAQ  

NLE Websites -- All DOE Office Websites (Extended Search)

Case Study FAQs Case Study FAQ General Questions What is NERSC? NERSC is the National Energy Research Scientific Computing Center, the high-end scientific computing facility for...

328

NERSC to Provide Resources to INCITE Projects Studying Combustion...  

NLE Websites -- All DOE Office Websites (Extended Search)

NERSC to Provide Resources to INCITE Projects Studying Combustion, Fusion Energy, Materials and Accelerator Design NERSC to Provide Resources to INCITE Projects Studying...

329

NERSC Uses Stimulus Funds to Overcome Software Challenges for...  

NLE Websites -- All DOE Office Websites (Extended Search)

NERSC Uses Stimulus Funds to Overcome Software Challenges for Scientific Computing NERSC Uses Stimulus Funds to Overcome Software Challenges for Scientific Computing October 30,...

330

NERSC Users Group Meeting January 12 - 13, 1995 Presentations  

NLE Websites -- All DOE Office Websites (Extended Search)

interface through which a user can display information independent of its location, NERSC will present a service interface through which NERSC users request computing services...

331

NERSC Releases Software Test for Its Next Supercomputer  

NLE Websites -- All DOE Office Websites (Extended Search)

Home News & Publications News Center News NERSC Releases Software Test for Its Next Supercomputer NERSC Releases Software Test for Its Next Supercomputer September 12,...

332

NERSC Users Group meeting June 3-5, 2002 Presentations  

NLE Websites -- All DOE Office Websites (Extended Search)

NERSC "Visualization Greenbook": Future Visualization Needs of the DOE Computational Science Community Hosted at NERSC October 1, 2002 | Author(s): Bernd Hamann, E. Wes Bethel,...

333

NERSC Users Group Meeting January 12 - 13, 1995 Attendee List  

NLE Websites -- All DOE Office Websites (Extended Search)

Attendee List Attendee List ERSUG - Attendees John Allen - NERSC Bas Braams - NYU Jack Byers - LLNL Bruce Curtis - NERSC David Feller - PNL Judith Giarrusso - PPPL Brent Gorda -...

334

NERSC-ScienceHighlightSlidesSeptember2010.ppt  

NLE Websites -- All DOE Office Websites (Extended Search)

September, 2010 NERSC Science Highlights NERSC Scientific Accomplishments, Q3CY2010 2 Energy Resources State-of-the-art electronic structure and first-principles molecular-...

335

NERSC Played Key Role in Nobel Laureate's Discovery  

NLE Websites -- All DOE Office Websites (Extended Search)

Played Key Role in Nobel Laureate's Discovery NERSC Played Key Role in Nobel Laureate's Discovery NERSC, Berkeley Lab Now Centers for Computational Cosmology Community October 4,...

336

NERSC Launches Data-intensive Science Pilot Program  

NLE Websites -- All DOE Office Websites (Extended Search)

NERSC Launches Data-intensive Science Pilot Program NERSC Launches Data-intensive Science Pilot Program DOE Researchers Eligible to Apply for Resources, Expertise April 12, 2012...

337

PISCEES-NERSC-2012.ppt  

NLE Websites -- All DOE Office Websites (Extended Search)

* Hours used in 2012*: - NERSC: 900e3 hrs (ice sheet), 1,400e3 hrs (ocean) - OLCF: 3,275e3 hrs (ice sheet) total: 4.1 million (ice only) + 1.4 million (ocean)...

338

New Ultra-High Speed Network Connection for Researchers and Educators...  

Energy.gov (U.S. Department of Energy (DOE)) Indexed Site

supercomputing centers: the National Energy Research Scientific Computing Center (NERSC) at Berkeley Lab, Oak Ridge Leadership Computing Facility (OLCF), and Argonne...

339

Using SCP and SFTP at NERSC  

NLE Websites -- All DOE Office Websites (Extended Search)

SCP/SFTP SCP/SFTP Using SCP/SFTP At NERSC Overview Secure Copy (SCP) and Secure FTP (SFTP) are used to securely transfer files between two hosts using the Secure Shell (SSH) protocol. Suggested for smaller files (<~10GB) Availability SCP and SFTP are available on all NERSC systems. Requirements To transfer files into/out of NERSC using SCP or SFTP, you need an SSH client: Linux/Unix/Cygwin: command ssh, scp or sftp Windows: Many GUI tools such as WinSCP MacOS: Many GUI tools such as Fugu Usage All example commands below are executed on your local machine, not the NERSC machine: The scp command Get a file from Data Transfer Node scp user_name@dtn01.nersc.gov:/remote/path/myfile.txt /local/path Send a file to Data Transfer Node scp /local/path/myfile.txt user_name@dtn01.nersc.gov:/remote/path
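The transfer commands in the record above generalize to a short session sketch. The data transfer node hostname `dtn01.nersc.gov` comes from the page itself; `user_name` and the file paths are placeholders to be replaced with real values:

```shell
# Pull a file from a NERSC data transfer node to the local machine
scp user_name@dtn01.nersc.gov:/remote/path/myfile.txt /local/path/

# Push a local file to a NERSC data transfer node
scp /local/path/myfile.txt user_name@dtn01.nersc.gov:/remote/path/

# Interactive transfers over the same SSH authentication
sftp user_name@dtn01.nersc.gov
```

Both commands run on the local workstation, not on the NERSC machine, matching the page's note that all example commands are executed locally.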

340

NERSC 2011: High Performance Computing Facility Operational Assessment for the National Energy Research Scientific Computing Center  

E-Print Network (OSTI)

5 of 6) Section 7. Is NERSC effectively managing risk? (continued) Recommendation NERSC Response DOEProgram Manager Response NERSC should consider developing

Antypas, Katie

2013-01-01T23:59:59.000Z

341

A survey of codes and algorithms used in NERSC material science allocations  

E-Print Network (OSTI)

used in Material Science on NERSC machines. N_user is theand algorithms used in NERSC material science allocationsLin-Wang Wang NERSC System Architecture Team Lawrence

Wang, Lin-Wang

2006-01-01T23:59:59.000Z

342

Assessment of Applying the PMaC Prediction Framework to NERSC-5 SSP Benchmarks  

E-Print Network (OSTI)

PMaC Prediction Framework to NERSC-5 SSP Benchmarks SummerAuthor: Noel Keen Introduction NERSC procurement depends onbenchmarks, in particular the NERSC SSP. Machine vendors are

Keen, Noel

2008-01-01T23:59:59.000Z

343

NERSC-BES-Requirements-Yelick10.ppt  

NLE Websites -- All DOE Office Websites (Extended Search)

Basic Energy Science Research Katherine Yelick NERSC Director Requirements Workshop NERSC Mission Accelerate the pace of scientific discovery for all DOE Office of Science (SC) research through computing and data systems and services. Efficient algorithms + flexible software + effective machines = great computational science. 2010 Allocations NERSC is the Production Facility for DOE Office of Science * NERSC serves a large population Approximately 3000 users, 400 projects, 500 code instances * Focus on "unique" resources - Expert consulting and other services - High end computing systems - High end storage systems - Interface to high speed networking * Science-driven - Machine procured competitively using

344

2011 Call for Proposals for NERSC Resources  

NLE Websites -- All DOE Office Websites (Extended Search)

allocations of high performance computing resources at the National Energy Research Scientific Computing Center (NERSC) at Lawrence Berkeley National Laboratory. Proposals must...

345

NERSC: National Energy Research Scientific Computing Center  

NLE Websites -- All DOE Office Websites (Extended Search)

and share massive bio-imaging datasets. Read More National Energy Research Scientific Computing Center Computing at NERSC OURSYSTEMS GETTINGSTARTED DOCUMENTATIONFOR USERS...

346

Chemistry and Material Sciences Codes at NERSC  

NLE Websites -- All DOE Office Websites (Extended Search)

Chemistry and Material Sciences Codes at NERSC April 6, 2011. Last edited: 2012-02-24 15:12:59...

347

NERSC/DOE FES Requirements Workshop Participants  

NLE Websites -- All DOE Office Websites (Extended Search)

Simulations; FES HPC Allocations Yukiko Sekine ASCR NERSC Program Manager Lee Berry ORNL Magnetic Fusion Energy Jeff Candy General Atomics Magnetic Fusion Energy CS Chang NYU...

348

NERSC-ScienceHighlightsSept2013.ppt  

NLE Websites -- All DOE Office Websites (Extended Search)

Science Highlights --- 1 --- NERSC User Science Highlights Materials Simulation takes solar power in a new direction: world's thinnest solar cell (J. Grossman, MIT) Geoscience...

349

NERSC Users Group Meeting Nov. 15, 1999  

NLE Websites -- All DOE Office Websites (Extended Search)

some highlights from the discussions (excepting the items contributed by ERSUG Chair, Bas Braams below): During the state of NERSC presentation by Jim Craw a primary topic of...

350

NERSC Strategic Plan Is Now Online  

NLE Websites -- All DOE Office Websites (Extended Search)

of Advanced Scientific Computing Research as input for ASCR's long-term planning, the strategic plan discusses NERSC's mission, goals, science drivers, planned initiatives, and...

351

NERSC/DOE HEP Requirements Workshop Logistics  

NLE Websites -- All DOE Office Websites (Extended Search)

at NERSC HPC Requirements Reviews Requirements for Science: Target 2014 High Energy Physics (HEP) Logistics Workshop Logistics Workshop Location Hilton Washington...

352

National Energy Research Scientific Computing Center (NERSC)...  

NLE Websites -- All DOE Office Websites (Extended Search)

Contract to Cray August 5, 2009 BERKELEY, CA - The Department of Energy's (DOE) National Energy Research Scientific Computing Center (NERSC) at Lawrence Berkeley National...

353

NERSC Strategic Implementation Plan 2002-2006  

E-Print Network (OSTI)

Strategic Implementation Plan 20022006 Abstract NERSC, the National Energy Research Scientific Computing Center, is DOEStrategic Proposal approved by DOE in November 2001. The plan

2002-01-01T23:59:59.000Z

354

2009/2010 NERSC User Survey Results  

NLE Websites -- All DOE Office Websites (Extended Search)

Demographics Overall Satisfaction All Satisfaction and Importance Ratings HPC Resources NERSC Software Services Comments Survey Text Response Summary Many thanks to the 395 users...

355

NERSC/DOE HPC Requirements Workshops Overview  

NLE Websites -- All DOE Office Websites (Extended Search)

Overview NERSC HPC Program Requirements Reviews Overview Scope These workshops are focused on determining the computational challenges facing research teams and the computational...

356

2007/2008 NERSC User Survey Results  

NLE Websites -- All DOE Office Websites (Extended Search)

Data Analysis HPC Consulting Services and Communications Web Interfaces Comments about NERSC Response Summary Many thanks to the 467 users who responded to this year's User...

357

NERSC-Intro2013Wasserman.pptx  

NLE Websites -- All DOE Office Websites (Extended Search)

3, 2013 Harvey Wasserman, User Services Group. NERSC Overview * National Energy Research Scientific Computing Center - Established 1974, first unclassified supercomputer center -...

358

NERSC/DOE ASCR Requirements Workshop Logistics  

NLE Websites -- All DOE Office Websites (Extended Search)

Workshop Logistics Large Scale Computing and Storage Requirements for Advanced Scientific Computing Research January 5-6, 2011 Location The workshop will be held at NERSC's...

359

NERSC/DOE ASCR Requirements Workshop Agenda  

NLE Websites -- All DOE Office Websites (Extended Search)

Workshop Agenda Large Scale Computing and Storage Requirements for Advanced Scientific Computing Research ASCR NERSC Workshop January 5-6, 2011 >> Download and View these...

360

NERSC/DOE ASCR Requirements Workshop Presentations  

NLE Websites -- All DOE Office Websites (Extended Search)

Presentations Large Scale Computing and Storage Requirements for Advanced Scientific Computing Research An ASCR NERSC Workshop January 5-6, 2011 Sort by: Default | Name |...

361

NERSC/DOE ASCR Requirements Workshop Participants  

NLE Websites -- All DOE Office Websites (Extended Search)

Participants Large Scale Computing and Storage Requirements for Advanced Scientific Computing Research An ASCR NERSC Workshop January 5-6, 2011 On-Site Participants Name...

362

Mentoring | Argonne National Laboratory  

NLE Websites -- All DOE Office Websites (Extended Search)

and career development, to formal sponsorships. Resources are provided by Argonne's Gender Diversity Specialist and are shared via Argonne's Mentoring Blog. Excellence in...

363

Argonne Accelerator Institute  

NLE Websites -- All DOE Office Websites (Extended Search)

Accelerators at Argonne Argonne has a long and continuing history of participation in accelerator based, and user oriented facilities. The Zero-Gradient Synchrotron, which began...

364

Argonne Accelerator Institute  

NLE Websites -- All DOE Office Websites (Extended Search)

Welcome Accelerators at Argonne Mission Organization History Document Collection Conferences & Workshops Beams and Applications Seminar Argonne-Fermilab Collaboration Lee Teng...

365

Outreach | Argonne National Laboratory  

NLE Websites -- All DOE Office Websites (Extended Search)

labs Mississippi Public Radio Featured Multimedia Argonne OutLoud Public Lecture Series: Nuclear Energy Argonne OutLoud Public Lecture Series 3: Unraveling the Higgs Boson...

366

Argonne Accelerator Institute  

NLE Websites -- All DOE Office Websites (Extended Search)

Argonne Accelerator Institute: Mission The mission of the Argonne Accelerator Institute is centered upon the following related goals: Locate next generation accelerator facilities...

367

Procurement | Argonne National Laboratory  

NLE Websites -- All DOE Office Websites (Extended Search)

Video "Doing business with Argonne and Fermi national labs" - Aug. 21, 2013 Procurement Argonne spends approximately $300,000,000 annually through procurements to a diverse group...

368

Careers | Argonne National Laboratory  

NLE Websites -- All DOE Office Websites (Extended Search)

for a Job External Applicants Internal Applicants Postdoctoral Applicants Students Why Argonne Your Career Life at Argonne Benefits Apply for a Job FAQs Answers to frequently asked...

369

Argonne Accelerator Institute  

NLE Websites -- All DOE Office Websites (Extended Search)

Fermilab Collaboration Lee Teng Scholarship Program Useful Links Argonne Accelerator Institute: For Industrial Collaborators -- Working with Argonne This link is addressed to...

370

Argonne Accelerator Institute  

NLE Websites -- All DOE Office Websites (Extended Search)

HEARTHFIRE - Inertial Confinement Fusion (1974 - 1980 at Argonne) At Argonne, the concept of using intense pulsed proton or deuteron beams for inertial confinement fusion (ICF) of...

371

Argonne Software Shop  

Argonne Software Shop. Argonne's researchers have created a wealth of powerful software and models with broad-ranging applications. In addition to ...

372

Argonne Transportation Current News  

NLE Websites -- All DOE Office Websites (Extended Search)

Jeff Chamberlain Argonne's Jeff Chamberlain testifies before Congress on grid technology Power grid Argonne's George Crabtree co-chairs new APS study Integrating Renewable...

373

Andrew Siegel | Argonne National Laboratory  

NLE Websites -- All DOE Office Websites (Extended Search)

Andrew Siegel, Computational Scientist & Project Lead - Nuclear Simulation. Andrew Siegel has led several major projects in the past decade. For example, he was lead software architect with the FLASH project at the University of Chicago; he built the applications team for the Blue Gene Consortium; he spearheaded the establishment of a nuclear engineering simulation program at Argonne; and he was director of SHARP (Simulation for High Accuracy Reactor Program) at Argonne. Siegel received his B.A. in philosophy at the University of Chicago and his Ph.D. in the Department of Astrophysical, Planetary, and Atmospheric Sciences at the University of Colorado. Research Interests Large-scale computational science simulations Parallel code for leadership-class computers

374

NERSC-InitialSummary.pptx  

NLE Websites -- All DOE Office Websites (Extended Search)

Day-1 Summary Day-1 Summary Large Scale Computing and Storage Requirements for Biological and Environmental Research Joint BER / ASCR / NERSC Workshop NERSC Lawrence Berkeley National Laboratory May 7-8, 2009 Summary * Users need for more resources for DOE SC computing - But we need more concrete, science-based justification * Need for predictable throughput - Microbial Genomics and GFDL ESM - Need to differentiate between real-time needs and higher desired batch turnaround * Slow batch turnaround time may be because of queue policy or because of insufficient resources Summary * Data management issues transcend all science areas, multiple projects - Exponentially increasing - Some disagreement on volume at a single site Summary * Users see need for guidance / help in

375

NERSC 2007 Initial Allocation Awards  

NLE Websites -- All DOE Office Websites (Extended Search)

7 Awards 7 Awards 2007 Initial Allocation Awards The following table lists the initial allocation awards for NERSC for the 2007 allocation year (Jan 9, 2007 through Jan 7, 2008). The list is in alphabetical order by the last name of the Principal Investigator. Note - Letters following the repository name indicate the following: 'I' DOE ASCR INCITE award 'S' NERSC Startup award A B C D E F G H I J K L M N O P Q R S T U V W X Y Z Principal Site Request Repo SP HPSS Project Title Investigator Id Hours SRUs Agarwal, Deborah Berkeley Lab 81665 bwc m510 S 50,000 5,000 Berkeley Water Center (formerly National Center for Hydrology Synthesis) Agarwal, Deborah Berkeley Lab 82063 mpdsd 10,000 100,000 CRD Distributed Systems

376

CP2K at NERSC  

NLE Websites -- All DOE Office Websites (Extended Search)

CP2K Description CP2K performs atomistic and molecular simulations of solid state, liquid, molecular and biological systems. It provides a general framework for different methods, such as density functional theory (DFT) using a mixed Gaussian and plane waves approach (GPW), and classical pair and many-body potentials. How to Access CP2K This program is currently available at NERSC on Carver, Hopper, and Edison. NERSC uses modules to manage access to software. To use the default version of CP2K, type: % module load cp2k Using CP2K on Hopper There are two ways of running CP2K on Hopper: submitting a batch job, or running interactively in an interactive batch session. Sample Batch Script for CP2K on Hopper #PBS -N myjob #PBS -q regular #PBS -l mppwidth=24
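The sample batch script in the snippet is truncated. A sketch of what a complete CP2K batch job on Hopper might look like follows; the job name, walltime, and input/output file names are placeholders, and the aprun core count is assumed to match the requested mppwidth:

```shell
#!/bin/bash
#PBS -N myjob             # job name (placeholder)
#PBS -q regular           # submit to the regular queue
#PBS -l mppwidth=24       # request 24 cores (one Hopper node)
#PBS -l walltime=01:00:00
#PBS -j oe                # join stdout and stderr into one file

cd $PBS_O_WORKDIR         # run from the directory the job was submitted from
module load cp2k          # load the default CP2K module

# Launch the MPI build of CP2K across the requested cores;
# input and output file names here are examples only.
aprun -n 24 cp2k.popt -i myjob.inp -o myjob.out
```

The script would be submitted with `qsub myscript.pbs`; for an interactive test, `qsub -I -q interactive -l mppwidth=24` gives a session where the same aprun line can be run by hand.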

377

NetCDF at NERSC  

NLE Websites -- All DOE Office Websites (Extended Search)

NetCDF Description and Overview NetCDF (Network Common Data Form) is a set of software libraries and machine-independent data formats that support the creation, access, and sharing of array-oriented scientific data. This includes the libnetcdf.a library as well as the NetCDF Operators (NCO), Climate Data Operators (CDO), NCCMP, and NCVIEW packages. Files written with previous versions can be read or written with the current version. Using NetCDF on Cray Systems There are separate NetCDF installations provided by Cray and by NERSC. On Hopper and Edison, the Cray installations are recommended because they are simpler to use. To see the available Cray installations and versions, use the following command: module avail cray-netcdf To see the NERSC installations and versions, use the following command:
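As a sketch, the module commands described above might be used like this on a Cray system such as Hopper or Edison; the key point (stated in Cray's documentation) is that once a cray-netcdf module is loaded, the compiler wrappers supply the include and link flags automatically:

```shell
# List the Cray-provided NetCDF installations and versions
module avail cray-netcdf

# Load the default Cray NetCDF module
module load cray-netcdf

# With the Cray compiler wrappers, no explicit -I/-L/-lnetcdf flags
# are needed; the wrappers add them once the module is loaded.
ftn -o read_data read_data.f90   # Fortran code using the netcdf module
cc  -o read_data read_data.c     # C code including netcdf.h
```

The source file names are placeholders for illustration.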

378

NERSC Training at SC12  

NLE Websites -- All DOE Office Websites (Extended Search)

Intro to PGAS (UPC and CAF) and Hybrid for Multicore Programming Monday, Nov. 12, 8:30-5:00 Alice Koniges - NERSC / Lawrence Berkeley National Laboratory Katherine Yelick - NERSC / Lawrence Berkeley National Laboratory Rolf Rabenseifner - High Performance Computing Center Stuttgart Reinhold Bader - Leibniz Supercomputing Centre David Eder - Lawrence Livermore National Laboratory ABSTRACT: PGAS (Partitioned Global Address Space) languages offer both an alternative to traditional parallelization approaches (MPI and OpenMP), and the possibility of being combined with MPI for a multicore hybrid programming model. In this tutorial we cover PGAS concepts and two commonly used PGAS languages, Coarray Fortran (CAF, as specified in the Fortran standard) and

379

NERSC 2009 Initial Allocation Awards  

NLE Websites -- All DOE Office Websites (Extended Search)

2009 Initial Allocation Awards The following table lists the initial allocation awards for NERSC for the 2009 allocation year (Jan 13, 2009 through Jan 11, 2010). The list is in alphabetical order by the last name of the Principal Investigator. Note - Letters following the repository name indicate the following: 'S' NERSC Startup award; 'I' INCITE award. Table columns: Principal Investigator, Site, Request Id, Repo, MPP Hours, HPSS SRUs, Project Title. Adams, Paul Berkeley Lab 83193 m811 80,000 200,000 The Joint Bioenergy Institute (JBEI) Adams, Paul Berkeley Lab 83306 jbei 600,000 JBEI Archive

380

BES_NERSC_Wang.ppt  

NLE Websites -- All DOE Office Websites (Extended Search)

A DFT nanostructure calculation: a case study
Lin-Wang Wang, Lawrence Berkeley National Lab
For the BES/NERSC large scale simulation workshop. A summary of the mp304 NERSC account:
* Allocated and used computer time in 2009: ~1 M hours
* Total number of users: ~7 active users
* Main codes used: VASP, LAMMP, Petot, Escan, LS3DF
* Number of topics (number of published papers): ~15
* Number of processors for typical jobs: 16 to 1000, sometimes 10,000
* Duration of the jobs: 20 minutes, to several hours, to a few days
* The main considerations which determine the jobs we run: the physics problem, queue time and computer time
* Other facilities: no group cluster, INCITE project at NCCS and ALCF (but not discussed here)



381

Open Science Grid User Request Form to Use NERSC  

NLE Websites -- All DOE Office Websites (Extended Search)

Debugging and Profiling Debugging and Profiling Visualization and Analytics Grid Software and Services Globus Online at NERSC Grid Certificates Grid Data Transfer Running Grid Jobs Client Tools Open Science Grid NERSC Software Downloads Accounts & Allocations Policies Data Analytics & Visualization Data Management Policies Science Gateways User Surveys NERSC Users Group User Announcements Help Operations for: Passwords & Off-Hours Status 1-800-66-NERSC, option 1 or 510-486-6821 Account Support https://nim.nersc.gov accounts@nersc.gov 1-800-66-NERSC, option 2 or 510-486-8612 Consulting http://help.nersc.gov consult@nersc.gov 1-800-66-NERSC, option 3 or 510-486-8611 Home » For Users » Software » Grid Software and Services » Open Science Grid » OSG New User Form OSG New User Form

382

Microsoft PowerPoint - NERSC-Science-NUG08  

NLE Websites -- All DOE Office Websites (Extended Search)

Science at NERSC Katherine Yelick, NERSC Director. NERSC Mission The mission of the National Energy Research Scientific Computing Center (NERSC) is to accelerate the pace of scientific discovery by providing high performance computing, information, data, and communications services for all DOE Office of Science (SC) research. NERSC is the Production Facility for DOE SC * NERSC serves a large population of users: ~3000 users, ~400 projects, ~500 codes * Allocations managed by DOE - 10% INCITE awards: * Created at NERSC; now used throughout SC * Open to all of science, not just DOE or DOE/SC mission * Large allocations, extra service - 70% Production (ERCAP) awards: * From 10K hour (startup) to 5M hour; Only at NERSC, not LCFs - 10% each NERSC and DOE/SC reserve * Award mixture offers

383

Argonne TDC: Ceramicrete - Argonne National Laboratory  

Ceramicrete: Chemically Bonded Ceramic. Argonne National Laboratory has developed a novel, versatile phosphate ceramic, called Ceramicrete, with many different ...

384

Making Effective Use of Compilers at NERSC  

NLE Websites -- All DOE Office Websites (Extended Search)

Effective Use of Compilers at NERSC Michael Stewart, NERSC User Services Group, August 15, 2012. Introduction ● Description of the Hopper compiling environment. ● Strengths and weaknesses of each compiler. ● Advice on choosing the most appropriate compiler for your work. ● Comparative results on benchmarks and other codes. ● How to use the compilers effectively. ● Carver compiling environment.

385

NERSC Users Group Meeting June 24-25, 2004 Attendee List  

NLE Websites -- All DOE Office Websites (Extended Search)

Bethel 510-486-7353 ewbethel@lbl.gov LBNL here here Greg Butler 510-486-8691 gbutler@nersc.gov NERSC here Tina Butler 510-495-2379 tbutler@nersc.gov NERSC here Paolo Calafiura...

386

Argonne's 2013 Summer Training Program a Success | Argonne Leadership...  

NLE Websites -- All DOE Office Websites (Extended Search)

will help us develop technologies that solve real problems faced by the HPC community." Jordan Musser: Jordan works as a research engineer in the DOE's National Energy Technology...

387

Argonne Leadership Computing Facility 2013 Science Highlights  

NLE Websites -- All DOE Office Websites (Extended Search)

Argonne Leadership Computing Facility 2013 Science Highlights. Contents: About ALCF (p. 2); Mira (p. 3); Science Director's Message (p. 4); Allocation Programs (p. 5); Early Science Program (...)

388

NERSC HPC Program Requirements Review Reports  

NLE Websites -- All DOE Office Websites (Extended Search)

Published Reports NERSC HPC Program Requirements Review Reports These publications comprise the final reports from the HPC requirements reviews presented to the Department of Energy. Downloads NERSC-PRR-HEP-2017.pdf | Adobe Acrobat PDF file Large Scale Computing and Storage Requirements for High Energy Physics - Target 2017 BER2017FinalJune7.pdf | Adobe Acrobat PDF file Large Scale Computing and Storage Requirements for Biological and Environmental Research - Target 2017 NERSC-ASCR-WorkshopReport.pdf | Adobe Acrobat PDF file Large Scale Computing and Storage Requirements for Advanced Scientific Computing Research NERSC-NP-WorkshopReport.pdf | Adobe Acrobat PDF file Large Scale Computing and Storage Requirements for Nuclear Physics Research NERSC-FES-WorkshopReport.pdf | Adobe Acrobat PDF file

389

Argonne TDC: Licensing Intellectual Property from Argonne National...  

NLE Websites -- All DOE Office Websites (Extended Search)

Property from Argonne National Laboratory Argonne's licensing program provides companies with opportunities to acquire rights in Argonne inventions and copyrights. Licenses...

390

NERSC8_Mission_Need_Final  

NLE Websites -- All DOE Office Websites (Extended Search)

Mission Need Statement for the Next Generation High Performance Production Computing System Project (NERSC-8) (Non-major acquisition project). Office of Advanced Scientific Computing Research, Office of Science, U.S. Department of Energy. Date Approved: Month / Year. Submitted by: David Goodwin, Program Manager, Advanced Scientific Computing Research, Office of Science, DOE. Concurrence: Daniel Lehman, Director, Office of Project Assessment, Office of Science, DOE. Approval: Daniel Hitchcock, Acquisition Executive, Associate Director, Advanced Scientific Computing Research, Office of Science, DOE

391

NERSC-ScienceHighlightsMarch2013.pptx  

NLE Websites -- All DOE Office Websites (Extended Search)

March 2013 NERSC Science Highlights. NERSC User Science Highlights: Materials - High-temp superconductivity findings net researchers the first NERSC Award for High Impact Scientific Achievement (T. Das, LANL). Fusion - Simulations show for the first time intrinsic stochasticity in magnetically confined toroidal plasma edges (L. Sugiyama, MIT). Fusion - Direct simulation of freely decaying turbulence in 2-D electrostatic gyrokinetics (W. Dorland, U. Maryland). Fusion - NIMROD simulations explain DIII-D shot variability (V. Izzo, General Atomics). Materials - Semiconductor exciton binding energy variation explained (Z. Wu, Colo. Sch. Mines). Chemistry - Study points the way toward more efficient catalysts (S. Chen, PNNL). January 2013: Origin of the Variation of Exciton Binding

392

Climate and Earth Science at NERSC  

NLE Websites -- All DOE Office Websites (Extended Search)

Engineering Science Environmental Science Fusion Science Math & Computer Science Nuclear Science Science Highlights NERSC Citations HPC Requirements Reviews Home » Science at NERSC » Climate & Earth Science Climate & Earth Science NERSC users have made significant and long-lasting improvements to the scientific basis for assessing the potential consequences of climatic changes and costs of alternative response options. Efforts using higher resolution, improved physical, chemical, and biological process representations, and more precise uncertainty estimates continue to explore potential ecological, social, and economic implications of climatic change. There has also been a significant increase in the number of computational studies involving the application of molecular dynamics in

393

Argonne TDC: South Bay Technologies - Argonne National Laboratory  

... transmission, scanning ... "The bottom line is that the Argonne Technical Services Program made it possible for us to transfer Argonne ...

394

Debugging & Profiling | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

Debugging & Profiling Allinea DDT Core File Settings Determining Memory Use Using VNC with a Debugger bgq_stack gdb Coreprocessor TotalView on BG/Q Systems Performance Tools & APIs Software & Libraries IBM References Intrepid/Challenger/Surveyor Tukey Eureka / Gadzooks Policies Documentation Feedback Please provide feedback to help guide us as we continue to build documentation for our new computing resource. [Feedback Form] Debugging & Profiling Initial setups Core file settings - this page contains some environment variables that allow you to control core file creation and contents. Using VNC with a Debugger - when displaying an X11 client (e.g., TotalView) remotely over the network, interactive response is typically slow. Using the VNC server can often help improve the situation.

395

IBM References | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

Data Transfer Debugging & Profiling Performance Tools & APIs Software & Libraries IBM References Intrepid/Challenger/Surveyor Tukey Eureka / Gadzooks Policies Documentation Feedback Please provide feedback to help guide us as we continue to build documentation for our new computing resource. [Feedback Form] IBM References Contents IBM Redbooks A2 Processor Manual QPX Vector Instruction Set Architecture XL Compiler Documentation MASS Documentation Back to top IBM Redbooks IBM System Blue Gene Solution: Blue Gene/Q Application Development Manual - Application and library developers want this one. This documents options for MPI, OpenMP, and other features of interest to most users. IBM System Blue Gene Solution: Blue Gene/Q Code Development and Tools Interface - Low-level tools; developers may find this useful.

396

Petascale Adaptive Computational Fluid Dynamics | Argonne Leadership  

NLE Websites -- All DOE Office Websites (Extended Search)

Petascale Adaptive Computational Fluid Dynamics PI Name: Kenneth Jansen PI Email: jansen@rpi.edu Institution: Rensselaer Polytechnic Institute The specific aim of this request for resources is to examine scalability and robustness of our code on BG/P. We have confirmed that, during the flow solve phase, our CFD flow solver does exhibit perfect strong scaling to the full 32k cores on our local machine (CCNI-BG/L at RPI), but this will be our first access to BG/P. We are also eager to study the performance of the adaptive phase of our code. Some aspects have scaled well on BG/L (e.g., refinement has produced adaptive meshes that take a 17 million element mesh and perform local adaptivity on 16k cores to match a requested size field to produce a mesh exceeding 1 billion elements) but other aspects (e.g.,

397

Public Informational Materials | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

News & Events Web Articles In the News Upcoming Events Past Events Informational Materials Photo Galleries Public Informational Materials Annual Reports: ALCF 2010 Annual Report (May 2011); ALCF 2011 Annual Report (May 2012); ALCF 2012 Annual Report (July 2013). Fact Sheets: ALCF Fact Sheet (September 2013); Blue Gene/Q Systems and Supporting Resources (June 2013). Early Science Program Projects (July 2011). Promotional Brochures: INCITE in Review (March 2012)

398

bgq_stack | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

bgq_stack Location The bgq_stack script is located at: /soft/debuggers/scripts/bin/bgq_stack or, for user convenience, the SoftEnv key +utility_paths (which you get in your default environment by putting @default in your ~/.soft file) allows you to use directly: bgq_stack List the possible options using -h: > bgq_stack -h Using bgq_stack on BG/Q to decode core files When a Blue Gene/Q program terminates abnormally, the system generates multiple core files: plain text files that can be viewed with the vi editor. Most of the detailed information provided in the core file is not of immediate use for determining why a program failed. But the core file does contain a function call stack record that can help identify what line of what routine was executing when the error occurred. The call stack
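A sketch of the decoding workflow described above; the executable and core file names are placeholders:

```shell
# Assumes "@default" (which pulls in the +utility_paths SoftEnv key)
# is already in ~/.soft, so bgq_stack is on the PATH.

# Show the available options:
bgq_stack -h

# Translate the instruction addresses recorded in the core file's
# call stack back to source files and line numbers, using the
# executable that produced the core:
bgq_stack my_app core.0

# The script can also be invoked by its full path:
/soft/debuggers/scripts/bin/bgq_stack my_app core.0
```

Because the core files are plain text, a quick `grep -c . core.0` or a look with `vi` is often enough to confirm which ranks dumped before running the decoder.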

399

BGP: Eureka / Gadzooks | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

File Systems Compiling and Linking Scheduling Software and Libraries Using Data Analytics & Visualization Policies Documentation Feedback Please provide feedback to help guide us as we continue to build documentation for our new computing resource. [Feedback Form] BGP: Eureka / Gadzooks Researchers often analyze the huge data sets generated on Intrepid by converting them into visual representations on ALCF's Eureka, a system featuring a large installation of NVIDIA Quadro Plex S4 external graphics processing units (GPUs). The ALCF also operates Gadzooks, a visualization test and deployment system. Eureka 100 compute nodes: each with (2) 2.0 GHz quad-core Xeon servers with 32 GB RAM 200 NVIDIA Quadro FX5600 GPUs in 50 S4s Memory: More than 3.2 terabytes of RAM

400

Introducing Challenger | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

Introducing Challenger Quick Reference Guide System Overview Data Transfer Data Storage & File Systems Compiling and Linking Queueing and Running Jobs Debugging and Profiling Performance Tools and APIs IBM References Software and Libraries Tukey Eureka / Gadzooks Policies Documentation Feedback Please provide feedback to help guide us as we continue to build documentation for our new computing resource. [Feedback Form] Introducing Challenger The Blue Gene/P resource, Challenger, is the new home for the prod-devel job submission queue. Moving the prod-devel queue to Challenger clears the way for more capability jobs on Intrepid. Challenger shares the same environment as Intrepid and is intended for small, short, interactive debugging and test runs. Production jobs are not



401

Software Policy | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

Intrepid/Challenger/Surveyor Tukey Eureka / Gadzooks Policies Pullback Policy ALCF Acknowledgment Policy Account Sponsorship & Retention Policy Accounts Policy Data Policy INCITE Quarterly Report Policy Job Scheduling Policy on BG/P Job Scheduling Policy on BG/Q Refund Policy Software Policy User Authentication Policy Documentation Feedback Please provide feedback to help guide us as we continue to build documentation for our new computing resource. [Feedback Form] Software Policy ALCF Resource Software Use All software used on ALCF computers must be appropriately acquired and used according to the appropriate licensing. Possession or use of illegally copied software is prohibited. Likewise, users shall not copy copyrighted software, except as permitted by the owner of the copyright. Currently,

402

BGP: GAMESS | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

GAMESS What is GAMESS? The General Atomic and Molecular Electronic Structure System (GAMESS) is a general ab initio quantum chemistry package. Obtaining GAMESS Follow the instructions at the Gordon research group website: http://www.msg.chem.iastate.edu/gamess/ Building GAMESS for Blue Gene/P A number of modifications were necessary: comp.actvte - Builds the actvte.x binary; mostly for convenience compall.bgp, comp.bgp, ddi/compddi.bgp - Replacement comp scripts lked.bgp - Replacement lked script source/zunix.c - Minor modification to #include directives ddi/src/ddi_bgp.c - Completely new file gms.bgp.py - Python script for running GAMESS You will need to modify the $GMSPATH environment variable in those scripts. These modifications are available here: Media:Gamess.bgp.tar

403

BGP: Code Saturne | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

Documentation Feedback Please provide feedback to help guide us as we continue to build documentation for our new computing resource. [Feedback Form] BGP: Code Saturne What is Code_Saturne? Code_Saturne is EDF's general purpose Computational Fluid Dynamics (CFD) software. EDF stands for Électricité de France, one of the world's largest producers of electricity. Obtaining Code_Saturne Code_Saturne is an open source code, freely available to CFD practitioners and other scientists. You can download the latest version from the Code_Saturne Official Forum Web Page, and you can also follow the Forum with interesting questions about installation problems, general usage, examples, etc. Building Code_Saturne for Blue Gene/P The version currently available on Intrepid is the last official stable

404

Accounts & Access | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

Account Information Accounts and Access FAQ Connect & Log In Using CRYPTOCards SSH Keys on Surveyor Disk Space Quota Management Allocations Decommissioning of BG/P Systems and Resources Blue Gene/Q Versus Blue Gene/P Mira/Cetus/Vesta Intrepid/Challenger/Surveyor Tukey Eureka / Gadzooks Policies Documentation Feedback Please provide feedback to help guide us as we continue to build documentation for our new computing resource. [Feedback Form] Accounts & Access Account Information Account Information: All computing carried out on the ALCF systems is associated with a user "account." This account is used to log onto the login servers and run jobs on the resources. Using CRYPTOcards Using CRYPTOCards: Useful information to guide you in using and troubleshooting your CRYPTOcard.

405

Data Transfer | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

Data Transfer The Blue Gene/P connects to other research institutions using a total of 20 Gb/s of public network connectivity. This allows scientists to transfer datasets to and from other institutions over fast research networks such as the Energy Sciences Network (ESnet) and the Metropolitan Research and Education Network (MREN). Data Transfer Node Overview Two data transfer nodes, available to all Intrepid users, provide the ability to perform wide and local area data transfers. dtn01.intrepid.alcf.anl.gov (alias for gs1.intrepid.alcf.anl.gov) dtn02.intrepid.alcf.anl.gov (alias for gs2.intrepid.alcf.anl.gov) Data Transfer Utilities HSI/HTAR HSI and HTAR allow users to transfer data to and from HPSS. Using HPSS on Intrepid GridFTP GridFTP provides the ability to transfer data between trusted sites such
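As a sketch, the HSI/HTAR transfers mentioned above might look like this when run from one of the data transfer nodes; the HPSS paths, directory, and file names are placeholders:

```shell
# Bundle a local results directory into an archive written directly
# into HPSS (htar creates the tar file on the HPSS side):
htar -cvf /myproject/results_2013.tar ./results

# List the members of the stored archive without retrieving it:
htar -tvf /myproject/results_2013.tar

# Retrieve a single file from HPSS into the current directory with HSI:
hsi get /myproject/summary.dat
```

For large transfers between sites, the page's GridFTP route (e.g. via the dtn01/dtn02 nodes) is generally preferred over copying through login nodes.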

406

System Overview | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

System Overview BG/Q Drivers Status Machine Overview Machine Partitions Torus Network Data Storage & File Systems Compiling & Linking Queueing & Running Jobs Data Transfer Debugging & Profiling Performance Tools & APIs Software & Libraries IBM References Intrepid/Challenger/Surveyor Tukey Eureka / Gadzooks Policies Documentation Feedback Please provide feedback to help guide us as we continue to build documentation for our new computing resource. [Feedback Form] System Overview Machine Overview Machine Overview is a reference for the login and compile nodes, I/O nodes, and compute nodes of the BG/Q system. Machine Partitions Machine Partitions is a reference for the way that Mira, Vesta and Cetus are partitioned and discusses the network topology of the partitions.

407

bgp_stack | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

bgp_stack Decoding Core Files The core files on the Blue Gene/P system are text files and may be read using the 'more' command. Most of the detailed information provided in the core file is not of immediate use for determining why a program failed. But the core file does contain a function call stack record that can help you identify what line of what routine was executing when the error occurred. In the core file, the call stack record is at the end of the file, bracketed by +++STACK and ---STACK. The call stack contains a list of instruction addresses; for these to be useful, the addresses need to be translated back to a source file and line. This may be done with the bgp_stack utility: bgp_stack [executable] [corefile] The lightweight core files produced by the runtime system do not contain

408

Visualization Clusters | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

Eureka Analytics and Visualization Visualization Clusters Tukey Tukey is the ALCF's newest analysis and visualization cluster. Equipped with state-of-the-art graphics processing units (GPUs), Tukey converts computational data from Mira into high-resolution visual representations. The resulting images, videos, and animations help users to better analyze and understand the data generated by Mira. Tukey can also be used for statistical analysis, helping to pinpoint trends in the simulation data. Additionally, the system is capable of preprocessing efforts, such as meshing, to assist users preparing for Mira simulations. Tukey shares the Mira network and parallel file system, enabling direct access to Mira-generated results. Configuration Two 2 GHz 8-core AMD Opteron CPUs per node

409

Running Jobs | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

Reservations Cobalt Job Control How to Queue a Job Running Jobs FAQs Queuing and Running on BG/Q Systems Data Transfer Debugging & Profiling Performance Tools & APIs Software & Libraries IBM References Intrepid/Challenger/Surveyor Tukey Eureka / Gadzooks Policies Documentation Feedback Please provide feedback to help guide us as we continue to build documentation for our new computing resource. [Feedback Form] Running Jobs Contents Job Submission Submitting a Script Job Sub-block Script Jobs Multiple Consecutive Runs within a Script Job Settings Environment Variables Script Environment Program and Argument Length Limit Job Dependencies Thread Stack Size Verbose Setting for Runjob How do I get each line of the output labeled with the MPI rank that it came from? Mapping of MPI Tasks to Cores
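For the FAQ item above about getting each output line labeled with the MPI rank it came from, a hedged sketch follows, based on the BG/Q runjob --label option; the Cobalt project name, node/rank counts, and executable are placeholders:

```shell
# Submit a 512-node script-mode job via Cobalt (placeholder project name):
qsub -n 512 -t 30 -A MyProject --mode script ./run.sh

# Inside run.sh, launch with runjob and ask it to label output lines.
# $COBALT_PARTNAME is the block Cobalt assigned to the job; the colon
# separates runjob's own options from the application command line.
runjob --block $COBALT_PARTNAME --np 8192 --ranks-per-node 16 --label : ./my_app

# With --label, each stdout/stderr line is prefixed with its source
# rank, so interleaved output from different MPI ranks can be told apart.
```

This is a sketch of the common pattern rather than a verbatim ALCF recipe; the exact runjob flags accepted depend on the driver version installed.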

410

Petascale, Adaptive CFD | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

Petascale, Adaptive CFD PI Name: Kenneth Jansen PI Email: jansenke@colorado.edu Institution: U. Colorado-Boulder Allocation Program: ESP Allocation Hours at ALCF: 150 Million Year: 2010 to 2013 Research Domain: Engineering The aerodynamic simulations proposed will involve modeling of active flow control based on synthetic jet actuation that has been shown experimentally to produce large-scale flow changes (e.g., re-attachment of separated flow or virtual aerodynamic shaping of lifting surfaces) from micro-scale input (e.g., a 0.1 W piezoelectric disk resonating in a cavity alternately pushes/pulls out/in the fluid through a small slit to create small-scale vortical structures that interact with, and thereby dramatically alter, the cross flow). This is a process that has yet to be understood fundamentally.

411

BGP: GPAW | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

Documentation Feedback Please provide feedback to help guide us as we continue to build documentation for our new computing resource. [Feedback Form] BGP: GPAW What is GPAW? GPAW is a density-functional theory (DFT) Python code based on the projector-augmented wave (PAW) method. It uses real-space uniform grids and multi-grid methods or atom-centered basis functions. Obtaining GPAW GPAW is an open-source code which can be downloaded at https://wiki.fysik.dtu.dk/gpaw/ It relies on the following Python libraries: Atomic Simulation Environment (ASE) - https://wiki.fysik.dtu.dk/ase/ NumPy - http://numpy.scipy.org/ Plus the standard math libraries: BLAS, LAPACK, ScaLAPACK. Building GPAW for Blue Gene/P Build instructions for GPAW can be found here: https://wiki.fysik.dtu.dk/gpaw/install/BGP/surveyor.html

412

Pullback Policy | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

Eureka / Gadzooks Policies Pullback Policy ALCF Acknowledgment Policy Account Sponsorship & Retention Policy Accounts Policy Data Policy INCITE Quarterly Report Policy Job Scheduling Policy on BG/P Job Scheduling Policy on BG/Q Refund Policy Software Policy User Authentication Policy Documentation Feedback Please provide feedback to help guide us as we continue to build documentation for our new computing resource. [Feedback Form] Pullback Policy In an effort to ensure that valuable ALCF computing resources are used judiciously, a pullback policy has been instituted. Projects granted allocations under the INCITE and ALCC programs that have not used a significant amount of their allocation will be evaluated and adjusted during the year, following the policies outlined on this page.

413

BGP: MADNESS | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

IBM References Software and Libraries BGP: GROMACS BGP: MADNESS Tukey Eureka / Gadzooks Policies Documentation Feedback Please provide feedback to help guide us as we continue to build documentation for our new computing resource. [Feedback Form] BGP: MADNESS Overview MADNESS Project on Google Code MADNESS Project at ORNL Downloading MADNESS MADNESS is available under the GNU General Public License v2. You can download the source from Google Code like this: svn checkout http://m-a-d-n-e-s-s.googlecode.com/svn/local/trunk m-a-d-n-e-s-s-read-only MADNESS on Blue Gene/P MADNESS currently provokes many shortcomings in the IBM XL C++ compiler and thus can only be compiled with the GNU C++ compiler. Please see below for the XL C++ compiler errors for current and past problems.

414

Machine Overview | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

BG/Q Drivers Status Machine Overview Machine Partitions Torus Network Data Storage & File Systems Compiling & Linking Queueing & Running Jobs Data Transfer Debugging & Profiling Performance Tools & APIs Software & Libraries IBM References Intrepid/Challenger/Surveyor Tukey Eureka / Gadzooks Policies Documentation Feedback Please provide feedback to help guide us as we continue to build documentation for our new computing resource. [Feedback Form] Machine Overview Blue Gene/Q systems are composed of login nodes, I/O nodes, and compute nodes. Login Nodes Login and compile nodes are IBM Power 7-based systems running Red Hat Linux and are the user's interface to a Blue Gene/Q system. This is where users log in, edit files, compile, and submit jobs. These are shared resources

415

Machine Partitions | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

BG/Q Drivers Status Machine Overview Machine Partitions Torus Network Data Storage & File Systems Compiling & Linking Queueing & Running Jobs Data Transfer Debugging & Profiling Performance Tools & APIs Software & Libraries IBM References Intrepid/Challenger/Surveyor Tukey Eureka / Gadzooks Policies Documentation Feedback Please provide feedback to help guide us as we continue to build documentation for our new computing resource. [Feedback Form] Machine Partitions Mira As our production machine, Mira has queues very similar to those on our BG/P system, Intrepid. In the prod-capability queue, partition sizes of 8192, 12288, 16384, 32768, and 49152 nodes are available. All partitions have a full torus network. The max runtime of jobs submitted to the
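A capability-class submission to the queue this record describes might look like the following sketch. The queue name and node count come from the record; the wall time, project name, ranks-per-node mode, and binary are placeholders:

```shell
# Hypothetical Cobalt submission to the prod-capability queue on Mira.
# 8192 nodes is the smallest capability partition listed above; everything
# after -q and -n (wall time, project, mode, binary) is a placeholder.
qsub -q prod-capability -n 8192 -t 120 -A MyProject --mode c16 ./my_app
```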

416

gprof Profiling Tools | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

Tuning MPI on BG/Q Tuning and Analysis Utilities (TAU) HPCToolkit HPCTW mpiP gprof Profiling Tools Darshan PAPI BG/Q Performance Counters BGPM Openspeedshop Scalasca BG/Q DGEMM Performance Software & Libraries IBM References Intrepid/Challenger/Surveyor Tukey Eureka / Gadzooks Policies Documentation Feedback Please provide feedback to help guide us as we continue to build documentation for our new computing resource. [Feedback Form] gprof Profiling Tools Contents Introduction Profiling on the Blue Gene Enabling Profiling Collecting Profile Information Profiling Threaded Applications Using gprof Routine Level Flat Profile Line Level Flat Profile Call Graph Analysis Routine Execution Count List Annotated Source Listing Issues in Interpreting Profile Data Profiling Concepts Programs in Memory
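The flat-profile workflow outlined in the record's table of contents typically proceeds as sketched below. The compiler wrapper name, job parameters, and per-rank gmon.out naming are assumptions, not details from the page:

```shell
# Hedged sketch of the gprof workflow: compile with profiling enabled,
# run under the scheduler, then post-process the resulting gmon.out file.
mpixlc_r -g -pg -O2 -o my_app my_app.c   # wrapper name and flags are assumptions
qsub -n 128 -t 30 ./my_app               # each rank may write its own gmon.out.<rank>
gprof ./my_app gmon.out.0 > flat_profile.txt
```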

417

ALCF Acknowledgment Policy | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

User Guides User Guides How to Get an Allocation New User Guide Accounts & Access Allocations Decommissioning of BG/P Systems and Resources Blue Gene/Q Versus Blue Gene/P Mira/Cetus/Vesta Intrepid/Challenger/Surveyor Tukey Eureka / Gadzooks Policies Pullback Policy ALCF Acknowledgment Policy Account Sponsorship & Retention Policy Accounts Policy Data Policy INCITE Quarterly Report Policy Job Scheduling Policy on BG/P Job Scheduling Policy on BG/Q Refund Policy Software Policy User Authentication Policy Documentation Feedback Please provide feedback to help guide us as we continue to build documentation for our new computing resource. [Feedback Form] ALCF Acknowledgment Policy By applying for ALCF resources you agree to acknowledge ALCF in all publications based on work done with those resources. Following is a sample

418

Staff Directory | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

Administration 630-252-0212 jstover@alcf.anl.gov Tony Tolbert Consultant - Business Intelligence AIG 630-252-6027 wtolbert@alcf.anl.gov Brian Toonen Consultant - Software...

419

Discretionary Allocation Request | Argonne Leadership Computing...  

NLE Websites -- All DOE Office Websites (Extended Search)

Physics Physics, Condensed Matter Physics Physics, High Energy Physics Physics, Nuclear Physics Physics, Space Physics Physics, Particle Physics Physics, Plasma Physics...

420

New User Guide | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

Documentation Feedback Please provide feedback to help guide us as we continue to build documentation for our new computing resource. Feedback Form New User Guide Step 1. Request...



421

User Guides | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

Allocations Decommissioning of BG/P Systems and Resources Blue Gene/Q Versus Blue Gene/P Mira/Cetus/Vesta Intrepid/Challenger/Surveyor Tukey Eureka / Gadzooks Policies...

422

CONTACT ? Argonne Leadership Computing Facility | industry...  

NLE Websites -- All DOE Office Websites (Extended Search)

will enable: Unprecedented reductions in emissions and noise in aircraft engines and wind turbines, Improved consumer products through a better understanding of how suds...

423

Allinea DDT | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

under Queue Submission Parameters. DDT Run DDT Queue Submission Parameters The Change... button allows users to modify some information set by default on System Settings and Job...

424

Nuclear Structure and Nuclear Reactions | Argonne Leadership...  

NLE Websites -- All DOE Office Websites (Extended Search)

value of 92.16 MeV and the point rms radius is 2.35 fm vs 2.33 from experiment. Nuclear Structure and Nuclear Reactions PI Name: James Vary PI Email: jvary@iastate.edu...

425

Nuclear Structure and Nuclear Reactions | Argonne Leadership...  

NLE Websites -- All DOE Office Websites (Extended Search)

the ab initio no-core full configuration approach," Phys. Rev. C 86, 034325 (2012) Nuclear Structure and Nuclear Reactions PI Name: James Vary PI Email: jvary@iastate.edu...

426

Machine Status | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

Machine Status Science on Mira Cores Core Hours Atomistic Simulations of Nanoscale Oxides and Oxide Interfaces 65536 1297201.9256901 Lattice QCD 16384 28881.396622179 MockBOSS :...

427

Contact Us | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

Contact Us Michael Papka Division Director (630) 252-1556 Susan Coghlan Deputy Division Director (630) 252-1637 Richard Coffey Strategic Advisor to the Director (630) 252-2725...

428

Maricris Lodriguito Mayes | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

Ab Initio Molecular Dynamics, Computational Material ScienceNanoscience, High Performance Computing, Reaction Mechanism and Dynamics, Theoretical and Computational Chemistry...

429

Getting Started Videoconferences | Argonne Leadership Computing...  

NLE Websites -- All DOE Office Websites (Extended Search)

Getting Started Videoconferences Start Date: Jan 23 2014 - 3:16pm Event Website: http://www.alcf.anl.gov/workshops/getting-started-videoconference-2014 Register for one of eight...

430

INCITE Program | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

5 Checks & 5 Tips for INCITE Mira Computational Readiness Assessment ALCC Program Director's Discretionary Program INCITE Program Innovative and Novel Computational Impact on...

431

Early Science Program | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

Early Science Program The goals of the Early Science Program (ESP) were to prepare key applications for the architecture and scale of Mira, and to solidify libraries and...

432

Our Teams | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

Need Help? support@alcf.anl.gov 630-252-3111 866-508-9181 Expert Teams World-Class Expertise and Project Lifecycle Assistance To maximize your research, the ALCF has assembled a...

433

User Advisory Council | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

Resources & Expertise Mira Cetus Vesta Intrepid Challenger Surveyor Visualization Clusters Data and Networking Our Teams User Advisory Council User Advisory Council The User...

434

Data and Networking | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

Networking Data and Networking Data Storage The ALCF's data storage system is used to retain the data generated by simulations and visualizations. Disk storage provides...

435

Surveyor / Gadzooks File Systems | Argonne Leadership Computing...  

NLE Websites -- All DOE Office Websites (Extended Search)

Intrepid/Challenger/Surveyor Decommissioning of BG/P Systems and Resources Introducing Challenger Quick Reference Guide System Overview Data Transfer Data Storage & File Systems...

436

Intrepid / Challenger / Eureka File Systems | Argonne Leadership...  

NLE Websites -- All DOE Office Websites (Extended Search)

Intrepid/Challenger/Surveyor Decommissioning of BG/P Systems and Resources Introducing Challenger Quick Reference Guide System Overview Data Transfer Data Storage & File Systems...

437

Determining Memory Use | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

Allinea DDT Core File Settings Determining Memory Use Using VNC with a Debugger bgqstack gdb Coreprocessor TotalView on BG/Q Systems Performance Tools & APIs Software & Libraries...

438

Director's Discretionary (DD) Program | Argonne Leadership Computing...  

NLE Websites -- All DOE Office Websites (Extended Search)

is not required. Review Process: Projects must demonstrate a need for high-performance computing resources. Reviewed by ALCF. Application Period: ongoing (available year...

439

GLEANing Scientific Insights More Quickly | Argonne Leadership...  

NLE Websites -- All DOE Office Websites (Extended Search)

data movement between the compute, analysis, and storage resources of high-performance computing systems. This speeds the computer's ability to read and write data, also...

440

Yuri Alexeev | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

for using and enabling computational methods in chemistry and biology for high-performance computing on next-generation high-performance computers. Yuri is particularly...

Note: This page contains sample records for the topic "nersc argonne leadership" from the National Library of EnergyBeta (NLEBeta).
While these samples are representative of the content of NLEBeta,
they are not comprehensive nor are they the most current set.
We encourage you to perform a real-time search of NLEBeta
to obtain the most current and comprehensive results.


441

Nichols Romero | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

Laboratory (2005-2007) and later worked as a Computational Scientist in the High-Performance Computing Modernization Program (2007-2008) for the Department of Defense. Romero was...

442

Accelerate Your Vision | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

world's most capable resources available to researchers. The ALCF has the high performance computing resources and expertise to enable major research breakthroughs leading to...

443

About ALCF | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

and the largest-scale systems. Accelerating Transitional Discovery High-performance computing is becoming increasingly important as more scientists and engineers use...

444

Powering Research | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

and disk storage capacity. As we move towards the next major challenge in high-performance computing, exascale levels of computation, no doubt the benefits of partnering will grow...

445

Software and Libraries | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

Allinea DDT +ddt L Multithreaded, multiprocess source code debugger for high performance computing. bgqstack @default I A tool to debug and provide postmortem analysis of...

446

mpiP | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

mpiP Introduction mpiP is an easy-to-use library that, when linked with an application, reports information on MPI usage and performance. Information reported includes the MPI...
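Because mpiP works at link time, using it typically amounts to adding it to the link line, as in the sketch below. The install path, wrapper name, and exact support libraries are assumptions, not taken from the record:

```shell
# Hypothetical link line: mpiP intercepts MPI calls when linked into the binary.
# $MPIP_HOME and the support libraries (-lbfd -liberty) are assumptions.
mpicc -g -o my_app my_app.c -L$MPIP_HOME/lib -lmpiP -lbfd -liberty -lm
# Running the instrumented binary writes an *.mpiP text report at MPI_Finalize.
```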

447

Using CRYPTOCards | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

Token Enabling Your CRYPTOCard Token Logging in with Your CRYPTOCard Token Troubleshooting Your CRYPTOCard Resetting CRYPTOCard PIN CRYPTOCard Return Back to top CRYPTOCard...

448

INCITE Quarterly Report Policy | Argonne Leadership Computing...  

NLE Websites -- All DOE Office Websites (Extended Search)

late: The ability to submit jobs for the PI and users of the late project will be disabled. If a report is more than 90 days late: The PI and users of the late project will...

449

Account Information | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

to interact with the ALCF Login Servers. This is the normal state for all accounts. Disabled: An account that still exists on the system (that is, the account continues to be...

450

Shaping Future Supercomputing Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

A second mesh independence validation was performed, but this time it used a simpler, two-phase-flow single burner with three levels of refinement (4-, 8-, and 16-million...

451

Mark Hereld | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

programming models and application performance analysis. Hereld is a principal architect of distributed analysis environments that support collaborative creation of...

452

Querying Allocations Using cbank | Argonne Leadership Computing...  

NLE Websites -- All DOE Office Websites (Extended Search)

apply to the resource (machine) to which a user is currently logged in. For example, a query on Surveyor about PROJECTNAME will return information about the Surveyor allocation...
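A query of the sort this record describes might look like the sketch below; the subcommand and option names are assumptions rather than documented cbank syntax:

```shell
# Hypothetical cbank queries, run while logged in to Surveyor so that results
# apply to that resource. Subcommand and flag names are assumptions.
cbank -p PROJECTNAME          # summarize the project's allocation on this resource
cbank charges -p PROJECTNAME  # itemize jobs charged against the allocation
```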

453

Allocation Management | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

if there is an active allocation, check Running Jobs. For information to run the query or email support, check alcf.anl.gov and ask for all active allocations. Determining...

454

BGP: Repast HPC | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

(TIME) -n (NUMBER_OF_PROCESSES) zombiexl.exe ./config.props ./model.props Modifying the Model An important aspect of the Agent-Based Modeling paradigm is the flexibility with...

455

Computational Studies of Nucleosome Stability | Argonne Leadership...  

NLE Websites -- All DOE Office Websites (Extended Search)

structure studies of the nucleosomes, which are complexes of DNA and proteins in chromatin and account for 75-90% of the packaging of the genomic DNA in mammalian cells....

456

Bash Shell | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

Bash Shell A popular shell provided with the operating system, bash is available as a login shell. To change your login shell use your ALCF personal account page. Version(s):...

457

Early Science Program Investigators Meeting | Argonne Leadership...  

NLE Websites -- All DOE Office Websites (Extended Search)

through ambitious scientific computations enabled by the ALCF's Early Science Program (ESP). Investigators from each of the 16 ESP projects will overview their simulation...

458

Science at ALCF | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

Hours: 40 Million more science Science at ALCF Allocation Program - Any - INCITE ALCC ESP Director's Discretionary Year -Year 2008 2009 2010 2011 2012 2013 Research Domain - Any...

459

Cobalt Job Control | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

& File Systems Compiling & Linking Queueing & Running Jobs Reservations Cobalt Job Control How to Queue a Job Running Jobs FAQs Queuing and Running on BG/Q Systems Data...

460

Core File Settings | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

Core File Settings The following environment variables control core file creation and contents. Specify regular (non-script) jobs using the qsub argument --env (Note: two dashes). Specify script jobs (--mode script) using the --envs (Note: two dashes) or --exp_env (Note: two dashes) options of runjob. For additional information about setting environment variables in your job, visit http://www.alcf.anl.gov/user-guides/running-jobs#environment-variables. Generation The following environment variables control conditions of core file generation and naming: BG_COREDUMPONEXIT=1 Creates a core file when the application exits. This is useful when the application performed an exit() operation and the cause and location of the exit() is not known. BG_COREDUMPONERROR=1
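Put together, a non-script submission that enables the core-file variables described above might look like this sketch. The node count, wall time, and binary are placeholders, and the colon as the separator between variables is an assumption:

```shell
# Hypothetical qsub invocation using the environment variables described above.
# Note the two dashes on --env; multiple variables are assumed colon-separated.
qsub -n 512 -t 30 --env BG_COREDUMPONEXIT=1:BG_COREDUMPONERROR=1 ./my_app
```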



461

Data Transfer | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

Using Globus Online Using GridFTP Debugging & Profiling Performance Tools & APIs Software & Libraries IBM References Intrepid/Challenger/Surveyor Tukey Eureka / Gadzooks Policies Documentation Feedback Please provide feedback to help guide us as we continue to build documentation for our new computing resource. [Feedback Form] Data Transfer The Blue Gene/Q will connect to other research institutions using a total of 100 Gbit/s of public network connectivity (10 Gbit/s during early access). This allows scientists to transfer datasets to and from other institutions over fast research networks such as the Energy Science Network (ESNet) and the Metropolitan Research and Education Network (MREN). Data Transfer Node Overview A total of 12 data transfer nodes (DTNs) will be available to all Mira
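A GridFTP transfer through the data transfer nodes this record describes might look like the following sketch; the hostnames and paths are placeholders, and the flags shown are common globus-url-copy options rather than site-documented settings:

```shell
# Hypothetical GridFTP transfer from a local file system to a remote DTN.
# -p 8 opens 8 parallel TCP streams; -vb prints transfer performance.
globus-url-copy -vb -p 8 \
    file:///home/user/results.tar \
    gsiftp://dtn.example.gov/scratch/user/results.tar
```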

462

Frederico Fiuza receives 2013 ASCR Leadership Computing Challenge...  

NLE Websites -- All DOE Office Websites (Extended Search)

Ignition of Fusion Targets." The award will provide 19.5 million CPU hours at the Argonne Leadership Computing Facility using the IBM Blue Gene/Q computer "Mira." Mira has 786,432...

463

Argonne Accelerator Institute  

NLE Websites -- All DOE Office Websites (Extended Search)

AAI Home AAI Home Welcome Accelerators at Argonne Mission Organization History Document Collection Conferences & Workshops Beams and Applications Seminar Argonne-Fermilab Collaboration Lee Teng Scholarship Program Useful Links Argonne Accelerator Institute In 2006, Argonne Laboratory Director Robert Rosner formed the AAI as a focal point for accelerator initiatives. The institute works to utilize Argonne's extensive accelerator resources, to enhance existing facilities, to determine the future of accelerator development and construction, and to oversee a dynamic and acclaimed accelerator physics portfolio. More Information for: Members * Students Industrial Collaborators - Working with Argonne Link to: Accelerators for America's Future Upcoming Events and News 4th International Particle Accelerator Conference (IPAC'13)

464

Argonne TDC: PCx Overview - Argonne National Laboratory  

More Information. More technical information... To license PCx from Argonne, print a copy of the agreement, arrange for signature by your organization, and send the ...

465

Argonne TDC: Packer Engineering - Argonne National Laboratory  

Simulation work with small business leads to 250 jobs in Illinois. Packer Engineering Naperville, IL. Packer Engineering worked with Argonne to share advanced ...

466

Argonne TDC: Phase Metrics - Argonne National Laboratory  

A magneto-optical imaging capability developed by Argonne, Phase Metrics, and the Institute of Solid State Physics (Moscow, Russia) could be the key to developing ...

467

Argonne TDC: Magneco Metrel - Argonne National Laboratory  

Castable ceramic allows small business to add jobs, increase sales. Magneco/Metrel Addison, IL. Magneco/Metrel, Inc., and Argonne have worked together to make better ...

468

Argonne TDC: Ombudsman - Argonne National Laboratory  

Ombudsman. The ombudsman assigned to Argonne National Laboratory will: Serve as a point of contact for the public as an independent and impartial person that can ...

469

Argonne TDC: Blake Industries - Argonne National Laboratory  

New configuration leads to popular new instrument. Blake Industries Scotch Plain, NJ. Argonne materials scientists needed a new type of two-circle diffractometer ...

470

Argonne's computing Zen | Argonne National Laboratory  

NLE Websites -- All DOE Office Websites (Extended Search)

is dedicated to large-scale computation and builds on Argonne's strengths in high-performance computing software, advanced hardware architectures and applications expertise....

471

Argonne TDC: Ionwerks Corporation - Argonne National Laboratory  

CRADA gives small business a new market. Ionwerks Corporation Houston, TX. Joint research by Ionwerks Corporation and Argonne has given this small business new market ...

472

Downloads - Nuclear Engineering Division (Argonne)  

NLE Websites -- All DOE Office Websites (Extended Search)

Assessment Team (VAT) Argonne's National Security Information Systems Argonne's Facility Decommissioning Training Course Reactor Safety Experimentation Nuclear Energy Advanced...

473

Reliability Results of NERSC Systems  

SciTech Connect

In order to address the needs of future scientific applications for storing and accessing large amounts of data in an efficient way, one needs to understand the limitations of current technologies and how they may cause system instability or unavailability. A number of factors can impact system availability, ranging from a facility-wide power outage to a single point of failure such as network switches or global file systems. In addition, individual component failure in a system can degrade the performance of that system. This paper focuses on analyzing both of these factors and their impacts on the computational and storage systems at NERSC. Component failure data presented in this report primarily focuses on disk drive failure in one of the computational systems and tape drive failure in HPSS. NERSC collected available component failure data and system-wide outages for its computational and storage systems over a six-year period and made them available to the HPC community through the Petascale Data Storage Institute.
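The availability analysis the abstract describes reduces to comparing scheduled time against outage time. The following stdlib sketch (with made-up outage intervals, not the paper's data) illustrates the arithmetic:

```python
# Hedged sketch, not the paper's actual method: estimating system availability
# from a log of system-wide outage intervals, the kind of record NERSC collected.
from datetime import datetime

def availability(period_start, period_end, outages):
    """Fraction of the period the system was up, given (start, end) outage pairs."""
    total = (period_end - period_start).total_seconds()
    down = sum((end - start).total_seconds() for start, end in outages)
    return (total - down) / total

start = datetime(2008, 1, 1)
end = datetime(2008, 12, 31)
outages = [(datetime(2008, 3, 2, 6), datetime(2008, 3, 2, 18)),   # 12 h power outage
           (datetime(2008, 7, 10), datetime(2008, 7, 11))]        # 24 h file-system failure
print(round(availability(start, end, outages), 4))  # prints 0.9959
```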

Petascale Data Storage Institute (PDSI); Mokhtarani, Akbar; Kramer, William; Hick, Jason

2008-05-27T23:59:59.000Z

474

2011 NERSC User Survey (Read Only)  

NLE Websites -- All DOE Office Websites (Extended Search)

Results » Survey Text 2011 NERSC User Survey (Read Only) The survey is closed. Section 1: Overall Satisfaction with NERSC When you are finished with this page click "Save & Go to Next Section" or your responses will be lost. Please do not answer a specific question or rate a specific item if you have no opinion on it. For each item you use, please indicate both your satisfaction and its importance to you. Please rate: How satisfied are you? How important is this to you? Overall satisfaction with NERSC Not Answered Very Satisfied Mostly Satisfied Somewhat Sat. Neutral Somewhat Dissat. Mostly Dissat. Very Dissatisfied I Do Not Use This Not Answered Very Important Somewhat Important Not Important NERSC services Not Answered Very Satisfied Mostly Satisfied Somewhat Sat. Neutral Somewhat Dissat. Mostly Dissat. Very Dissatisfied I Do Not Use This Not Answered Very Important Somewhat Important Not Important

475

Microsoft Word - NERSC_Results.doc  

NLE Websites -- All DOE Office Websites (Extended Search)

Reliability Results of NERSC Systems Akbar Mokhtarani, William Kramer, Jason Hick NERSC - LBNL Abstract In order to address the needs of future scientific applications for storing and accessing large amounts of data in an efficient way, one needs to understand the limitations of current technologies and how they may cause system instability or unavailability. A number of factors can impact system availability ranging from facility-wide power outage to a single point of failure such as network switches or global file systems. In addition, individual component failure in a system can degrade the performance of that system. This paper focuses on analyzing both of these factors and their impacts on the computational and storage systems at NERSC. Component failure data

476

Richard Gerber! NERSC User Services NUG Teleconference  

NLE Websites -- All DOE Office Websites (Extended Search)

PIN: 4866820 Agenda * Allocation Year Changeover issues * NUG 2013 annual meeting - http://www.nersc.gov/users/NUG/annual-meetings/2013 - Registration Open - Schedule -...

477

NERSC-ScienceHighlightsDecember2011.pptx  

NLE Websites -- All DOE Office Websites (Extended Search)

cars. (D. Mei, PNNL, L-W. Wang, LBNL) Energy NERSC resources were used to model a real coal gasifier with a Large Eddy Simulation code. (P. Smith, U. Utah) Climate New techniques...

478

Jack Deslippe! NERSC User Services Group  

NLE Websites -- All DOE Office Websites (Extended Search)

Jack Deslippe NERSC User Services Group Using Python on Hopper, Carver and Edison February 15, 2013 Python up to 2.7.2 * All-in-one installation a...

479

NERSC/DOE FES Requirements Workshop Presentations  

NLE Websites -- All DOE Office Websites (Extended Search)

Fusion Energy Sciences An FES ASCR NERSC Workshop August 3-4, 2010 Sort by: Default | Name | Date (low-high) | Date (high-low) FES/NERSC Requirements Gathering Workshop August...

480

2008/2009 NERSC User Survey Results  

NLE Websites -- All DOE Office Websites (Extended Search)

13.4%. The MPP hours used by the survey respondents represents 70.2 percent of total NERSC MPP usage as of the end of the survey period. The PDSF hours used by the PDSF survey...



481

NERSC Cited for HPC Innovations Excellence  

NLE Websites -- All DOE Office Websites (Extended Search)

Honored for HPC Innovations Excellence NERSC Honored for HPC Innovations Excellence 20th Century Reanalysis Project Support Cited June 20, 2011 Linda Vu, lvu@lbl.gov, +1 510 486...

482

BER-NERSC-YS invitation2  

NLE Websites -- All DOE Office Websites (Extended Search)

on the Red Line) in Maryland on May 7-8, 2009. Please respond to BER-workshop-committee@nersc.gov by COB March 9, 2009 (note the new due date), confirming your participation. The...

483

NERSC Releases Mobile Apps to Users  

NLE Websites -- All DOE Office Websites (Extended Search)

Releases Mobile Apps to Users NERSC Releases Mobile Apps to Users Job Status, MOTD and Pilot of VASP Submission Available with More to Come April 23, 2012 In an effort to make...

484

Friedman-FES_NERSC-2013casestudy  

NLE Websites -- All DOE Office Websites (Extended Search)

W. M. Sharp, LLNL and LBNL; I. D. Kaganovich, PPPL; A. E. Koniges and W. Liu, LBNL/NERSC NERSC Repositories: MP42 (HIFS-VNL) and M74 (PPPL Non-neutral Plasmas) 1.1 Project D...

485

Barbara Helland Advanced Scientific Computing Research NERSC...  

NLE Websites -- All DOE Office Websites (Extended Search)

7-28, 2012 Barbara Helland Advanced Scientific Computing Research NERSC-HEP Requirements Review Science Case Studies drive discussions Program Requirements Reviews ...

486

NERSC-BER-Yelick.v5.ppt  

NLE Websites -- All DOE Office Websites (Extended Search)

Biological and Environmental Research Katherine Yelick NERSC Director Requirements Workshop NERSC is the Production Facility for DOE SC * NERSC serves a large population Approximately 3000 users, 400 projects, 500 code instances * Focus on "unique" resources - High end computing systems - High end storage systems * Large shared file system * Tape archive - Interface to high speed networking * ESnet soon to be 100 Gb/s * Allocate time / storage - Current processor hours and tape storage 2009 Allocations: ASCR 7% BER 19% BES 29% FES 18% HEP 17% NP 10% What's Changed in DOE Priorities for NERSC? [Chart: Usage by Science Type as a Percent of Total Usage, 2002-2009]

487

Managing Your User Account at NERSC  

NLE Websites -- All DOE Office Websites (Extended Search)

Managing Managing Your Account Managing Your User Account Use the NERSC Information Management (NIM) system to customize your user account and keep your personal information up-to-date. See the NIM User's Guide, especially the "Managing Your User Account with NIM" section. Applying for your first NERSC Allocation Starting a new Allocation Request Renewing an Allocation Request Tips and Instructions for Filling out Allocation Request Form