National Library of Energy BETA

Sample records for interactive computer games

  1. Supporting collaborative computing and interaction

    SciTech Connect (OSTI)

    Agarwal, Deborah; McParland, Charles; Perry, Marcia

    2002-05-22

    To enable collaboration on the daily tasks involved in scientific research, collaborative frameworks should provide lightweight and ubiquitous components that support a wide variety of interaction modes. We envision a collaborative environment as one that provides a persistent space within which participants can locate each other, exchange synchronous and asynchronous messages, share documents and applications, share workflow, and hold videoconferences. We are developing the Pervasive Collaborative Computing Environment (PCCE) as such an environment. The PCCE will provide integrated tools to support shared computing and task control and monitoring. This paper describes the PCCE and the rationale for its design.

  2. Computer Defeats Video Game System in #EnergyFaceoff Round One | Department

    Office of Environmental Management (EM)

    of Energy Defeats Video Game System in #EnergyFaceoff Round One Computer Defeats Video Game System in #EnergyFaceoff Round One November 5, 2014 - 3:30pm The computer takes the efficiency title in round one of #EnergyFaceoff. | Graphic courtesy of Stacy Buchanan, National Renewable Energy Laboratory Allison Casey Senior Communicator, NREL

  3. Computer vs. Video Game System: Ready to Rumble in the #EnergyFaceoff

    Office of Environmental Management (EM)

    Jungle | Department of Energy vs. Video Game System: Ready to Rumble in the #EnergyFaceoff Jungle Computer vs. Video Game System: Ready to Rumble in the #EnergyFaceoff Jungle November 4, 2014 - 10:20am Q&A Which appliance do you think is more efficient? Tell Us Addthis Round one of #EnergyFaceoff begins with the computer (CPU) vs. the video game system. Which is more energy efficient? | Graphic courtesy of Stacy Buchanan, National Renewable Energy Laboratory Round one of #EnergyFaceoff

  4. Gender, Lies and Video Games: the Truth about Females and Computing

    ScienceCinema (OSTI)

    Klawe, Maria M. [Princeton University, Princeton, New Jersey, United States]

    2009-09-01

    This talk explores how girls and women differ from boys and men in their uses of and attitudes towards computers and computing. From playing computer games to pursuing computing careers, the participation of females tends to be very low compared to that of males. Why is this? Opinions range from girls wanting to avoid the math and/or the geek image of programming to girls having better things to do with their lives. We discuss research findings on this issue, as well as initiatives designed to increase the participation of females in computing.

  5. Computes Generalized Electromagnetic Interactions Between Structures

    Energy Science and Technology Software Center (OSTI)

    1999-02-20

    Object-oriented software for computing generalized electromagnetic interactions between structures in the frequency domain. The software is based on integral equations. There is also a static integral equation capability.

  6. Computer vs. Video Game System: Ready to Rumble in the #EnergyFaceoff...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    And in the right corner we have a video game system and LCD television. Gamers use both to race cars, slay monsters, and rescue princesses, but games aside, if you were choosing ...

  7. Catalyst Support Interactions | Argonne Leadership Computing...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    on the reactivity of metal catalyst particles. The research team will also study the adhesion properties by simulating the interactions between metal particles of different sizes...

  8. Human-computer interface including haptically controlled interactions

    DOE Patents [OSTI]

    Anderson, Thomas G.

    2005-10-11

    The present invention provides a method of human-computer interfacing that provides haptic feedback to control interface interactions such as scrolling or zooming within an application. Haptic feedback in the present method allows the user more intuitive control of the interface interactions, and allows the user's visual focus to remain on the application. The method comprises providing a control domain within which the user can control interactions. For example, a haptic boundary can be provided corresponding to scrollable or scalable portions of the application domain. The user can position a cursor near such a boundary, feeling its presence haptically (reducing the requirement for visual attention for control of scrolling of the display). The user can then apply force relative to the boundary, causing the interface to scroll the domain. The rate of scrolling can be related to the magnitude of applied force, providing the user with additional intuitive, non-visual control of scrolling.
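
    A minimal sketch of the force-to-scroll mapping described above, assuming a one-dimensional cursor pressing against a single haptic boundary; the force law, threshold, and gain below are invented for illustration and are not the patented implementation.

```python
# Minimal sketch of a force-to-scroll-rate mapping: below a small threshold
# the boundary just feels solid; above it, scroll speed grows with the
# magnitude of the applied force. All constants are illustrative.

def scroll_rate(applied_force, threshold=0.5, gain=120.0, max_rate=600.0):
    """Return a scroll speed (pixels/s) from the force (N) applied against a boundary."""
    excess = abs(applied_force) - threshold
    if excess <= 0.0:
        return 0.0                              # boundary felt, no scrolling yet
    direction = 1.0 if applied_force > 0 else -1.0
    return direction * min(gain * excess, max_rate)


if __name__ == "__main__":
    for force in (0.2, 0.8, 2.0, -3.5):
        print(f"force={force:+.1f} N -> rate={scroll_rate(force):+.1f} px/s")
```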

  9. Semantic Interaction for Visual Analytics: Toward Coupling Cognition and Computation

    SciTech Connect (OSTI)

    Endert, Alexander

    2014-07-01

    The dissertation discussed in this article [1] was written in the midst of an era of digitization. The world is becoming increasingly instrumented with sensors, monitoring, and other methods for generating data describing social, physical, and natural phenomena. Thus, data exist with the potential of being analyzed to uncover, or discover, the phenomena from which they were created. However, as the analytic models leveraged to analyze these data continue to increase in complexity and computational capability, how can visualizations and user interaction methodologies adapt and evolve to continue to foster discovery and sensemaking?

  10. MapReduce SVM Game

    SciTech Connect (OSTI)

    Vineyard, Craig M.; Verzi, Stephen J.; James, Conrad D.; Aimone, James B.; Heileman, Gregory L.

    2015-08-10

    Despite technological advances making computing devices faster, smaller, and more prevalent in today's age, data generation and collection have outpaced data processing capabilities. Simply having more compute platforms does not provide a means of addressing challenging problems in the big data era. Rather, alternative processing approaches are needed, and the application of machine learning to big data is hugely important. The MapReduce programming paradigm is an alternative to conventional supercomputing approaches, and requires problem decompositions with less stringent data-passing constraints. Instead, MapReduce relies upon defining a means of partitioning the desired problem so that subsets may be computed independently and recombined to yield the net desired result. However, not all machine learning algorithms are amenable to such an approach. Game-theoretic algorithms are often innately distributed, consisting of local interactions between players without requiring a central authority, and are iterative by nature rather than requiring extensive retraining. Effectively, a game-theoretic approach to machine learning is well suited for the MapReduce paradigm and provides a novel, alternative perspective on addressing the big data problem. In this paper we present a variant of our Support Vector Machine (SVM) Game classifier which may be used in a distributed manner, and show an illustrative example of applying this algorithm.
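
    To make the partition-and-recombine idea concrete, here is a toy map/reduce-style ensemble in Python. It is not the authors' SVM Game: it uses a trivial nearest-centroid classifier per partition and a majority vote, and all names and data are invented.

```python
# Toy map/reduce-style ensemble, NOT the SVM Game from the paper:
# each partition independently trains a tiny nearest-centroid classifier
# ("map"), and predictions are combined by majority vote ("reduce").
from collections import Counter
from statistics import mean

def train_partition(samples):
    """Map step: fit per-class feature means on one data partition."""
    by_label = {}
    for features, label in samples:
        by_label.setdefault(label, []).append(features)
    return {label: tuple(mean(col) for col in zip(*rows))
            for label, rows in by_label.items()}

def predict_one(model, features):
    """Assign the class whose centroid is nearest to the feature vector."""
    distance = lambda c: sum((a - b) ** 2 for a, b in zip(features, c))
    return min(model, key=lambda label: distance(model[label]))

def reduce_vote(models, features):
    """Reduce step: majority vote across the independently trained models."""
    votes = Counter(predict_one(m, features) for m in models)
    return votes.most_common(1)[0][0]

if __name__ == "__main__":
    partitions = [
        [((0.1, 0.2), "a"), ((0.0, 0.1), "a"), ((1.0, 1.1), "b")],
        [((0.2, 0.0), "a"), ((0.9, 1.0), "b"), ((1.2, 0.9), "b")],
    ]
    models = [train_partition(p) for p in partitions]   # each runs independently
    print(reduce_vote(models, (0.95, 1.05)))            # -> "b"
```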

  11. MapReduce SVM Game

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Vineyard, Craig M.; Verzi, Stephen J.; James, Conrad D.; Aimone, James B.; Heileman, Gregory L.

    2015-08-10

    Despite technological advances making computing devices faster, smaller, and more prevalent in today's age, data generation and collection have outpaced data processing capabilities. Simply having more compute platforms does not provide a means of addressing challenging problems in the big data era. Rather, alternative processing approaches are needed, and the application of machine learning to big data is hugely important. The MapReduce programming paradigm is an alternative to conventional supercomputing approaches, and requires problem decompositions with less stringent data-passing constraints. Instead, MapReduce relies upon defining a means of partitioning the desired problem so that subsets may be computed independently and recombined to yield the net desired result. However, not all machine learning algorithms are amenable to such an approach. Game-theoretic algorithms are often innately distributed, consisting of local interactions between players without requiring a central authority, and are iterative by nature rather than requiring extensive retraining. Effectively, a game-theoretic approach to machine learning is well suited for the MapReduce paradigm and provides a novel, alternative perspective on addressing the big data problem. In this paper we present a variant of our Support Vector Machine (SVM) Game classifier which may be used in a distributed manner, and show an illustrative example of applying this algorithm.

  12. The Particle Beam Optics Interactive Computer Laboratory (Journal...

    Office of Scientific and Technical Information (OSTI)

    The primary computational engine is provided by the third-order TRANSPORT code. Augmenting TRANSPORT is the multiple ray tracing program TURTLE and a first-order matrix program ...

  13. University Prosperity Game. Final report

    SciTech Connect (OSTI)

    Boyack, K.W.; Berman, M.

    1996-03-01

    Prosperity Games are an outgrowth and adaptation of move/countermove and seminar War Games. Prosperity Games are simulations that explore complex issues in a variety of areas including economics, politics, sociology, environment, education and research. These issues can be examined from a variety of perspectives ranging from a global, macroeconomic and geopolitical viewpoint down to the details of customer/supplier/market interactions in specific industries. All Prosperity Games are unique in that both the game format and the player contributions vary from game to game. This report documents the University Prosperity Game conducted under the sponsorship of the Anderson Schools of Management at the University of New Mexico. This Prosperity Game was initially designed for the roadmap-making effort of the National Electronics Manufacturing Initiative (NEMI) of the Electronics Subcommittee of the Civilian Industrial Technology Committee under the aegis of the National Science and Technology Council. The game was modified to support course material in MGT 508, Ethical, Political, and Social Environment of Business. Thirty-five students participated as role players. In this educational context the game's main objectives were to: (1) introduce and teach global competitiveness and business cultures in an experiential classroom setting; (2) explore ethical, political, and social issues and address them in the context of global markets and competition; and (3) obtain non-government views regarding the technical and non-technical (i.e., policy) issues developed in the NEMI roadmap-making endeavor. The negotiations and agreements made during the game, along with the student journals detailing the players' feelings and reactions to the gaming experience, provide valuable insight into the benefits of simulation as an advanced learning tool in higher education.

  14. Game Center | Jefferson Lab

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Game Center September 2, 2010 It's a feature of Thomas Jefferson National Accelerator Laboratory that it has at least two other names, including Jefferson Lab and JLab. Similarly, parts of our organization go by different names - the Theory group, the Theory Department, the Theory Center and the Center for Theoretical and Computational Physics. But a new name might be "Game Center." Let me explain. Large-scale computing has been a major deal for the Department of Energy for many years.

  15. Environmental Prosperity Game. Final report

    SciTech Connect (OSTI)

    Berman, M.; Boyack, K.; VanDevender, J.P.

    1995-12-01

    Prosperity Games are an outgrowth and adaptation of move/countermove and seminar War Games. Prosperity Games are simulations that explore complex issues in a variety of areas including economics, politics, sociology, environment, education and research. These issues can be examined from a variety of perspectives ranging from a global, macroeconomic and geopolitical viewpoint down to the details of customer/supplier/market interactions in specific industries. All Prosperity Games are unique in that both the game format and the player contributions vary from game to game. This report documents the Environmental Prosperity Game conducted under the sponsorship of the Silicon Valley Environmental Partnership. Players were drawn from all stakeholders involved in environmental technologies including small and large companies, government, national laboratories, universities, environmentalists, the legal profession, finance, and the media. The primary objectives of this game were to: investigate strategies for developing a multi-agency (national/state/regional), one-step regulatory approval process for certifying and implementing environmental technologies and evaluating the simulated results; identify the regulatory hurdles and requirements, and the best approaches for surmounting them; identify technical problems and potential resources (environmental consultants, labs, universities) for solving them. The deliberations and recommendations of these players provided valuable insights as to the views of this diverse group of decision makers concerning environmental issues, including the development, licensing, and commercialization of new technologies.

  16. Nuclear physicists use video gaming to build Hampton Roads' Fastest...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    0-10-24newsdp-nws-cp-fastest-computer-201010221computer-system-video-games-jeffers... Submitted: Friday, October 22, 2010 - 11:00pm...

  17. Other: First Video Game | ScienceCinema

    Office of Scientific and Technical Information (OSTI)

    First Video Game Citation Details Title: First Video Game

  18. Nuclear physicists use video gaming to build Hampton Roads' Fastest

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Computer (Daily Press) | Jefferson Lab 0-10-24/news/dp-nws-cp-fastest-computer-20101022_1_computer-system-video-games-jeffers... Submitted: Saturday, October 23, 2010 - 12

  19. Method and system for rendering and interacting with an adaptable computing environment

    DOE Patents [OSTI]

    Osbourn, Gordon Cecil; Bouchard, Ann Marie

    2012-06-12

    An adaptable computing environment is implemented with software entities termed "s-machines", which self-assemble into hierarchical data structures capable of rendering and interacting with the computing environment. A hierarchical data structure includes a first hierarchical s-machine bound to a second hierarchical s-machine. The first hierarchical s-machine is associated with a first layer of a rendering region on a display screen and the second hierarchical s-machine is associated with a second layer of the rendering region overlaying at least a portion of the first layer. A screen element s-machine is linked to the first hierarchical s-machine. The screen element s-machine manages data associated with a screen element rendered to the display screen within the rendering region at the first layer.
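
    The hierarchy described above can be sketched structurally as follows; the abstract does not specify s-machine internals, so the class, fields, and bind/link methods below are purely hypothetical stand-ins.

```python
# Illustrative sketch of the hierarchy in the patent abstract: layered
# rendering nodes bound parent-to-child, with a screen-element node linked
# to the first layer. Class and method names are hypothetical.

class Node:
    def __init__(self, name, layer=None):
        self.name, self.layer = name, layer
        self.children, self.linked = [], []

    def bind(self, child):            # first hierarchical node bound to a second
        self.children.append(child)
        return child

    def link(self, element):          # screen-element node linked to this layer
        self.linked.append(element)
        return element

    def render(self, indent=0):
        tag = f" (layer {self.layer})" if self.layer is not None else ""
        print(" " * indent + self.name + tag)
        for node in self.linked + self.children:
            node.render(indent + 2)

if __name__ == "__main__":
    base = Node("rendering region", layer=1)
    base.bind(Node("overlay region", layer=2))    # overlays part of layer 1
    base.link(Node("screen element: button"))     # managed at layer 1
    base.render()
```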

  20. Event heap: a coordination infrastructure for dynamic heterogeneous application interactions in ubiquitous computing environments

    DOE Patents [OSTI]

    Johanson, Bradley E.; Fox, Armando; Winograd, Terry A.; Hanrahan, Patrick M.

    2010-04-20

    An efficient and adaptive middleware infrastructure called the Event Heap system dynamically coordinates application interactions and communications in a ubiquitous computing environment, e.g., an interactive workspace, having heterogeneous software applications running on various machines and devices across different platforms. Applications exchange events via the Event Heap. Each event is characterized by a set of unordered, named fields. Events are routed by matching certain attributes in the fields. The source and target versions of each field are automatically set when an event is posted or used as a template. The Event Heap system implements a unique combination of features, both intrinsic to tuplespaces and specific to the Event Heap, including content based addressing, support for routing patterns, standard routing fields, limited data persistence, query persistence/registration, transparent communication, self-description, flexible typing, logical/physical centralization, portable client API, at most once per source first-in-first-out ordering, and modular restartability.
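
    To illustrate the tuplespace-style coordination the abstract describes, here is a toy Python sketch in which events are dictionaries of named fields and are retrieved by matching a template's fields. The class and method names are hypothetical; this is not the patented Event Heap implementation or its API.

```python
# Toy tuplespace-style coordinator: events are dicts of named fields and are
# retrieved by matching the fields of a template (content-based addressing).
# Purely illustrative; not the Event Heap implementation.

class ToyEventHeap:
    def __init__(self):
        self._events = []                      # posted events, oldest first

    def post(self, **fields):
        """Post an event described by a set of named fields."""
        self._events.append(dict(fields))

    def take(self, **template):
        """Remove and return the oldest event whose fields match the template."""
        for i, event in enumerate(self._events):
            if all(event.get(k) == v for k, v in template.items()):
                return self._events.pop(i)
        return None

if __name__ == "__main__":
    heap = ToyEventHeap()
    heap.post(kind="projector", action="on", room="iRoom")
    heap.post(kind="lights", action="dim", level=30)
    print(heap.take(kind="lights"))   # {'kind': 'lights', 'action': 'dim', 'level': 30}
```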

  1. computers

    National Nuclear Security Administration (NNSA)

    Each successive generation of computing system has provided greater computing power and energy efficiency.

    CTS-1 clusters will support NNSA's Life Extension Program and...

  2. Computational Model of Population Dynamics Based on the Cell Cycle and Local Interactions

    SciTech Connect (OSTI)

    Oprisan, Sorinel Adrian; Oprisan, Ana

    2005-03-31

    Our study bridges cellular (mesoscopic) level interactions and global population (macroscopic) dynamics of carcinoma. The morphological differences and transitions between well- and smoothly defined benign tumors and tentacular malignant tumors suggest a theoretical analysis of tumor invasion based on the development of mathematical models exhibiting bifurcations of spatial patterns in the density of tumor cells. Our computational model views the most representative and clinically relevant features of oncogenesis as a fight between two distinct sub-systems: the immune system of the host and the neoplastic system. We implemented the neoplastic sub-system using a three-stage cell cycle: active, dormant, and necrosis. The second considered sub-system consists of cytotoxic active (effector) cells -- EC, with a very broad phenotype ranging from NK cells to CTL cells, macrophages, etc. Based on extensive numerical simulations, we correlated the fractal dimensions for carcinoma, which could be obtained from tumor imaging, with the malignant stage. Our computational model was also able to simulate the effects of surgical, chemotherapeutical, and radiotherapeutical treatments.
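
    A heavily simplified discrete-time sketch of the two-subsystem bookkeeping described above (active/dormant/necrotic tumor cells versus effector cells); every rate constant and update rule below is invented for illustration and none is taken from the paper.

```python
# Heavily simplified discrete-time sketch of a two-subsystem population model
# (active/dormant/necrotic tumor cells vs. immune effector cells).
# All rate constants and update rules are invented for illustration only.

def step(state, grow=0.08, to_dormant=0.02, to_necrosis=0.01, kill=1e-4):
    active, dormant, necrotic, effectors = state
    killed = kill * active * effectors                 # immune attack on active cells
    new_active = max(active * (1 + grow) - to_dormant * active - killed, 0.0)
    new_dormant = dormant + to_dormant * active - to_necrosis * dormant
    new_necrotic = necrotic + to_necrosis * dormant + killed
    return (new_active, new_dormant, new_necrotic, effectors)  # effectors held fixed

if __name__ == "__main__":
    state = (1_000.0, 0.0, 0.0, 5_000.0)               # initial populations
    for t in range(5):
        state = step(state)
        print(f"t={t + 1}: active={state[0]:.1f} dormant={state[1]:.1f} "
              f"necrotic={state[2]:.1f}")
```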

  3. Thrusts in High Performance Computing

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Exascale computers (1000x Hopper) in next decade: - Manycore processors using graphics, games, embedded cores, or other low power designs offer 100x in power efficiency -...

  4. computers

    National Nuclear Security Administration (NNSA)

    California.

    Retired computers used for cybersecurity research at Sandia National...

  5. Computer

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    I. INTRODUCTION This paper presents several computational tools required for processing images of a heavy ion beam and estimating the magnetic field within a plasma. The...

  6. Computational Nanophotonics: modeling optical interactions and transport in tailored nanosystem architectures

    SciTech Connect (OSTI)

    Schatz, George; Ratner, Mark

    2014-02-27

    This report describes research by George Schatz and Mark Ratner that was done over the period 10/03-5/09 at Northwestern University. This research project was part of a larger research project with the same title led by Stephen Gray at Argonne. A significant amount of our work involved collaborations with Gray, and there were many joint publications as summarized later. In addition, a lot of this work involved collaborations with experimental groups at Northwestern, Argonne, and elsewhere. The research was primarily concerned with developing theory and computational methods that can be used to describe the interaction of light with noble metal nanoparticles (especially silver) that are capable of plasmon excitation. Classical electrodynamics provides a powerful approach for performing these studies, so much of this research project involved the development of methods for solving Maxwell’s equations, including both linear and nonlinear effects, and examining a wide range of nanostructures, including particles, particle arrays, metal films, films with holes, and combinations of metal nanostructures with polymers and other dielectrics. In addition, our work broke new ground in the development of quantum mechanical methods to describe plasmonic effects based on the use of time dependent density functional theory, and we developed new theory concerned with the coupling of plasmons to electrical transport in molecular wire structures. Applications of our technology were aimed at the development of plasmonic devices as components of optoelectronic circuits, plasmons for spectroscopy applications, and plasmons for energy-related applications.
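
    Since the abstract centers on classical electrodynamics solvers, here is a minimal one-dimensional FDTD (Yee-scheme) update in vacuum, included only to illustrate the class of time-domain Maxwell solvers mentioned; it is not the group's code and omits materials, dispersion, nonlinearity, and absorbing boundaries.

```python
# Minimal 1-D FDTD (Yee) update in vacuum with a soft Gaussian source,
# illustrating the class of time-domain Maxwell solvers the abstract
# mentions. Normalized units, Courant factor 0.5, reflecting ends.
import math

def run_fdtd(nz=200, steps=300, source_k=100):
    ez = [0.0] * nz                    # electric field
    hy = [0.0] * nz                    # magnetic field
    for n in range(steps):
        for k in range(nz - 1):        # update H from the curl of E
            hy[k] += 0.5 * (ez[k + 1] - ez[k])
        for k in range(1, nz):         # update E from the curl of H
            ez[k] += 0.5 * (hy[k] - hy[k - 1])
        ez[source_k] += math.exp(-((n - 40) / 12.0) ** 2)   # soft source pulse
    return ez

if __name__ == "__main__":
    field = run_fdtd()
    print(f"peak |Ez| after propagation: {max(abs(v) for v in field):.3f}")
```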

  7. Biomedical technology prosperity game{trademark}

    SciTech Connect (OSTI)

    Berman, M.; Boyack, K.W.; Wesenberg, D.L.

    1996-07-01

    Prosperity Games{trademark} are an outgrowth and adaptation of move/countermove and seminar War Games. Prosperity Games{trademark} are simulations that explore complex issues in a variety of areas including economics, politics, sociology, environment, education and research. These issues can be examined from a variety of perspectives ranging from a global, macroeconomic and geopolitical viewpoint down to the details of customer/supplier/market interactions in specific industries. All Prosperity Games{trademark} are unique in that both the game format and the player contributions vary from game to game. This report documents the Biomedical Technology Prosperity Game{trademark} conducted under the sponsorship of Sandia National Laboratories, the Defense Advanced Research Projects Agency, and the Koop Foundation, Inc. Players were drawn from all stakeholders involved in biomedical technologies including patients, hospitals, doctors, insurance companies, legislators, suppliers/manufacturers, regulators, funding organizations, universities/laboratories, and the legal profession. The primary objectives of this game were to: (1) Identify advanced/critical technology issues that affect the cost and quality of health care. (2) Explore the development, patenting, manufacturing and licensing of needed technologies that would decrease costs while maintaining or improving quality. (3) Identify policy and regulatory changes that would reduce costs and improve quality and timeliness of health care delivery. (4) Identify and apply existing resources and facilities to develop and implement improved technologies and policies. (5) Begin to develop Biomedical Technology Roadmaps for industry and government cooperation. The deliberations and recommendations of these players provided valuable insights as to the views of this diverse group of decision makers concerning biomedical issues. Significant progress was made in the roadmapping of key areas in the biomedical technology field.

  8. Computational Nanophotonics: Model Optical Interactions and Transport in Tailored Nanosystem Architectures

    SciTech Connect (OSTI)

    Stockman, Mark; Gray, Steven

    2014-02-21

    The program is directed toward development of new computational approaches to photoprocesses in nanostructures whose geometry and composition are tailored to obtain desirable optical responses. The emphasis of this specific program is on the development of computational methods and prediction and computational theory of new phenomena of optical energy transfer and transformation on the extreme nanoscale (down to a few nanometers).

  9. Request queues for interactive clients in a shared file system of a parallel computing system

    DOE Patents [OSTI]

    Bent, John M.; Faibish, Sorin

    2015-08-18

    Interactive requests are processed from users of log-in nodes. A metadata server node is provided for use in a file system shared by one or more interactive nodes and one or more batch nodes. The interactive nodes comprise interactive clients to execute interactive tasks and the batch nodes execute batch jobs for one or more batch clients. The metadata server node comprises a virtual machine monitor; an interactive client proxy to store metadata requests from the interactive clients in an interactive client queue; a batch client proxy to store metadata requests from the batch clients in a batch client queue; and a metadata server to store the metadata requests from the interactive client queue and the batch client queue in a metadata queue based on an allocation of resources by the virtual machine monitor. The metadata requests can be prioritized, for example, based on one or more of a predefined policy and predefined rules.
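
    A simplified sketch of the two-queue arrangement described above: interactive and batch metadata requests are staged separately and then drained into a single metadata queue according to a fixed share that stands in for the virtual machine monitor's allocation. All names and numbers are illustrative, not the patented design.

```python
# Simplified sketch: interactive and batch metadata requests are staged in
# separate queues, then merged into one metadata queue according to a share
# (a stand-in for the allocation decided by the virtual machine monitor).
from collections import deque

class ToyMetadataServer:
    def __init__(self, interactive_share=3):
        self.interactive = deque()
        self.batch = deque()
        self.metadata_queue = deque()
        self.share = interactive_share    # interactive requests taken per batch request

    def submit(self, request, interactive):
        (self.interactive if interactive else self.batch).append(request)

    def schedule(self):
        """Drain both staging queues into the metadata queue, share-weighted."""
        while self.interactive or self.batch:
            for _ in range(self.share):
                if self.interactive:
                    self.metadata_queue.append(self.interactive.popleft())
            if self.batch:
                self.metadata_queue.append(self.batch.popleft())

if __name__ == "__main__":
    server = ToyMetadataServer()
    for i in range(4):
        server.submit(f"batch-{i}", interactive=False)
    for i in range(6):
        server.submit(f"inter-{i}", interactive=True)
    server.schedule()
    print(list(server.metadata_queue))
```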

  10. CORCON-MOD3: An integrated computer model for analysis of molten core-concrete interactions. User's manual

    SciTech Connect (OSTI)

    Bradley, D.R.; Gardner, D.R.; Brockmann, J.E.; Griffith, R.O.

    1993-10-01

    The CORCON-Mod3 computer code was developed to mechanistically model the important core-concrete interaction phenomena, including those phenomena relevant to the assessment of containment failure and radionuclide release. The code can be applied to a wide range of severe accident scenarios and reactor plants. The code represents the current state of the art for simulating core debris interactions with concrete. This document comprises the user's manual and gives a brief description of the models and the assumptions and limitations in the code. Also discussed are the input parameters and the code output. Two sample problems are also given.

  11. Computing

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Office of Advanced Scientific Computing Research in the Department of Energy Office of Science under contract number DE-AC02-05CH11231. ! Application and System Memory Use, Configuration, and Problems on Bassi Richard Gerber Lawrence Berkeley National Laboratory NERSC User Services ScicomP 13 Garching bei München, Germany, July 17, 2007 ScicomP 13, July 17, 2007, Garching Overview * About Bassi * Memory on Bassi * Large Page Memory (It's Great!) * System Configuration * Large Page

  12. Computations

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Computations - Sandia Energy

  13. Future{at}Labs.Prosperity Game{trademark}

    SciTech Connect (OSTI)

    Beck, D.F.; Boyack, K.W.; Berman, M.

    1996-10-01

    Prosperity Games{trademark} are an outgrowth and adaptation of move/countermove and seminar War Games. Prosperity Games{trademark} are simulations that explore complex issues in a variety of areas including economics, politics, sociology, environment, education, and research. These issues can be examined from a variety of perspectives ranging from a global, macroeconomic and geopolitical viewpoint down to the details of customer/supplier/market interactions in specific industries. All Prosperity Games{trademark} are unique in that both the game format and the player contributions vary from game to game. This report documents the Future{at}Labs.Prosperity Game{trademark} conducted under the sponsorship of the Industry Advisory Boards of the national labs, the national labs, Lockheed Martin Corporation, and the University of California. Players were drawn from all stakeholders involved including government, industry, labs, and academia. The primary objectives of this game were to: (1) explore ways to optimize the role of the multidisciplinary labs in serving national missions and needs; (2) explore ways to increase collaboration and partnerships among government, laboratories, universities, and industry; and (3) create a network of partnership champions to promote findings and policy options. The deliberations and recommendations of these players provided valuable insights as to the views of this diverse group of decision makers concerning the future of the labs.

  14. A combined experimental and computational study of the molecular interactions between anionic ibuprofen and water

    SciTech Connect (OSTI)

    Zapata-Escobar, Andy; Manrique-Moreno, Marcela; Guerra, Doris; Hadad, C. Z.; Restrepo, Albeiro

    2014-05-14

    In this work, we report a detailed study of the microsolvation of anionic ibuprofen, Ibu{sup -}. Stochastic explorations of the configurational spaces for the interactions of Ibu{sup -} with up to three water molecules at the DFT level lead to very rich and complex potential energy surfaces. Our results suggest that instead of only one preponderant structure, a collection of isomers with very similar energies would have significant contributions to the properties of the solvated drug. One of these properties is the shift in the vibrational frequencies of the asymmetric stretching band of the carboxylate group in hydrated Ibu{sup -} with respect to the anhydrous drug, whose experimental values are nicely reproduced using the weighted contribution of the structures. We found at least three types of stabilizing interactions, including conventional CO{sub 2}{sup -}⋯H{sub 2}O and H{sub 2}O⋯H{sub 2}O charge-assisted hydrogen bonds (HBs), and less common H{sub 2}O⋯HC and H{sub 2}O⋯π interactions. Biological water molecules, those in direct contact with Ibu{sup -}, prefer to cluster around the carboxylate oxygen atoms via cyclic or bridged charge-assisted hydrogen bonds. Many of those interactions are strongly affected by the formal carboxylate charge, resulting in enhanced HBs with increased strengths and degree of covalency. We found striking similarities between this case and the microsolvation of dimethylphosphate, which lead us to hypothesize that since microsolvation of phosphatidylcholine depends mainly on the formal charge of its ionic PO{sub 2}{sup -} group in the polar head, then microsolvation of anionic ibuprofen and interactions of water molecules with eukaryotic cell membranes are governed by the same types of physical interactions.

  15. About the Game

    Broader source: Energy.gov [DOE]

    Terrachanics is a puzzle game developed for the public by the Department of Energy. It is designed as a recruitment tool to help drive interest in careers with the department, and inspire the...

  16. Computer simulation of the probability that endangered whales will interact with oil spills

    SciTech Connect (OSTI)

    Reed, M.; Jayko, K.; Bowles, A.; Anderson, E.; Leatherwood, S.

    1987-03-01

    A numerical model system was developed to assess quantitatively the probability that endangered bowhead and gray whales will encounter spilled oil in Alaskan waters. Bowhead and gray whale migration and diving-surfacing models, and an oil-spill trajectory model comprise the system. The migration models were developed from conceptual considerations, then calibrated with and tested against observations. The movement of a whale point is governed by a random walk algorithm which stochastically follows a migratory pathway. The oil-spill model, developed under a series of other contracts, accounts for transport and spreading behavior in open water and in the presence of sea ice. Historical wind records and heavy, normal, or light ice cover data sets are selected at random to provide stochastic oil-spill scenarios for whale-oil interaction simulations.
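
    The stochastic structure described above can be sketched roughly as follows: a random-walk step biased toward a migratory waypoint, with an ice-cover scenario drawn at random. The step rule and all parameters are invented, and no real whale, wind, or spill data are used.

```python
# Toy random-walk migration step biased toward a waypoint, with an ice-cover
# scenario drawn at random -- an illustration of the stochastic structure in
# the abstract, not the actual model system.
import math
import random

def migrate(position, waypoint, speed=1.0, heading_noise=0.6):
    """One random-walk step that stochastically follows the migratory pathway."""
    dx, dy = waypoint[0] - position[0], waypoint[1] - position[1]
    heading = math.atan2(dy, dx) + random.gauss(0.0, heading_noise)
    return (position[0] + speed * math.cos(heading),
            position[1] + speed * math.sin(heading))

if __name__ == "__main__":
    random.seed(1)
    scenario = random.choice(["light ice", "normal ice", "heavy ice"])
    whale, waypoint = (0.0, 0.0), (20.0, 5.0)
    for _ in range(50):
        whale = migrate(whale, waypoint)
    print(f"scenario: {scenario}; final position: ({whale[0]:.1f}, {whale[1]:.1f})")
```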

  17. ALCF summer students gain experience with high-performance computing...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    of computing that my textbooks couldn't keep up with," said Brown, who is majoring in computer science and computer game design. "Getting exposed to many-core machines and...

  18. Computing Videos

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Computing Videos Computing

  19. How Energy Works: Explaining Game-Changing Energy Technologies | Department

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    of Energy Works: Explaining Game-Changing Energy Technologies How Energy Works: Explaining Game-Changing Energy Technologies June 16, 2014 - 10:50am Q&A What How Energy Works topic should we cover next? Vote Now! Vote now using our interactive voting tool. | Graphic by Sarah Gerrity, Energy Department. What How Energy Works topic should we cover next? Vote now using our interactive

  20. Energy Simulation Games Lesson

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Ken Walz Unit Title: Energy Efficiency and Renewable Energy (EERE) Subject: Physical, Env, and Social Sciences Lesson Title: Energy Simulation Games Grade Level(s): 6-12 Lesson Length: 1 hour (+ optional time outside class) Date(s): 7/14/2014 * Learning Goal(s) By the end of this lesson, students will have a deeper understanding of Energy Management, Policy, and Decision Making. * Connection to Energy/ Renewable Energy In this assignment you will be using two different energy simulation tools

    1. Industrial ecology Prosperity Game{trademark}

      SciTech Connect (OSTI)

      Beck, D.; Boyack, K.; Berman, M.

      1998-03-01

      Industrial ecology (IE) is an emerging scientific field that views industrial activities and the environment as an interactive whole. The IE approach simultaneously optimizes activities with respect to cost, performance, and environmental impact. Industrial Ecology provides a dynamic systems-based framework that enables management of human activity on a sustainable basis by: minimizing energy and materials usage; insuring acceptable quality of life for people; minimizing the ecological impact of human activity to levels that natural systems can sustain; and maintaining the economic viability of systems for industry, trade and commerce. Industrial ecology applies systems science to industrial systems, defining the system boundary to incorporate the natural world. Its overall goal is to optimize industrial activities within the constraints imposed by ecological viability, globally and locally. In this context, Industrial systems applies not just to private sector manufacturing and services but also to government operations, including provision of infrastructure. Sandia conducted its seventeenth Prosperity Game{trademark} on May 23--25, 1997, at the Hyatt Dulles Hotel in Herndon, Virginia. The primary sponsors of the event were Sandia National Laboratories and Los Alamos National Laboratory, who were interested in using the format of a Prosperity Game to address some of the issues surrounding Industrial Ecology. Honorary game sponsors were: The National Science Foundation; the Committee on Environmental Improvement, American Chemical Society; the Industrial and Engineering Chemistry Division, American Chemical Society; the US EPA--The Smart Growth Network, Office of Policy Development; and the US DOE-Center of Excellence for Sustainable Development.

    2. Free energy of RNA-counterion interactions in a tight-binding model computed by a discrete space mapping

      SciTech Connect (OSTI)

      Henke, Paul S.; Mak, Chi H.

      2014-08-14

      The thermodynamic stability of a folded RNA is intricately tied to the counterions and the free energy of this interaction must be accounted for in any realistic RNA simulations. Extending a tight-binding model published previously, in this paper we investigate the fundamental structure of charges arising from the interaction between small functional RNA molecules and divalent ions such as Mg{sup 2+} that are especially conducive to stabilizing folded conformations. The characteristic nature of these charges is utilized to construct a discretely connected energy landscape that is then traversed via a novel application of a deterministic graph search technique. This search method can be incorporated into larger simulations of small RNA molecules and provides a fast and accurate way to calculate the free energy arising from the interactions between an RNA and divalent counterions. The utility of this algorithm is demonstrated within a fully atomistic Monte Carlo simulation of the P4-P6 domain of the Tetrahymena group I intron, in which it is shown that the counterion-mediated free energy conclusively directs folding into a compact structure.
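
      To make "a discretely connected energy landscape traversed by a deterministic graph search" concrete, the sketch below runs a generic lowest-cost-path search (Dijkstra) over a toy graph whose nodes stand in for discrete ion-binding states. The graph, the edge weights, and the correspondence to the authors' algorithm are all assumptions made for illustration.

```python
# Generic lowest-cost-path search (Dijkstra) over a toy graph whose nodes
# stand in for discrete ion-binding states and whose edge weights stand in
# for free-energy differences. Illustrative only; not the authors' algorithm.
import heapq

def lowest_cost_path(graph, start, goal):
    """Return (total cost, path) for the cheapest route from start to goal."""
    frontier = [(0.0, start, [start])]
    best = {start: 0.0}
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, path
        for neighbor, weight in graph.get(node, {}).items():
            new_cost = cost + weight
            if new_cost < best.get(neighbor, float("inf")):
                best[neighbor] = new_cost
                heapq.heappush(frontier, (new_cost, neighbor, path + [neighbor]))
    return float("inf"), []

if __name__ == "__main__":
    # Nodes: hypothetical binding configurations; weights: energy penalties.
    states = {
        "unbound": {"site1": 1.2, "site2": 2.0},
        "site1":   {"site1+2": 0.8},
        "site2":   {"site1+2": 0.3},
        "site1+2": {},
    }
    print(lowest_cost_path(states, "unbound", "site1+2"))
```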

    3. Los Alamos to study future computing technology capabilities

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      "The highest return we can expect on any investment this early in the evolution of a game-changing idea such as quantum annealing computing is to facilitate exploration by a...

    4. Interactive Jobs

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Interactive Jobs Interactive Jobs To run an interactive job on Hopper's compute nodes you must request the number of nodes you want and have the system allocate resources from the pool of free nodes. The following command requests 2 nodes using the interactive queue. hopper% qsub -I -q debug -l mppwidth=48 The -I flag specifies an interactive job. The -q flag specifies the name of the queue and -l mppwidth determines the number of nodes to allocate for your job, but not as you might expect. The

    5. Idaho Department of Fish & Game | Open Energy Information

      Open Energy Info (EERE)

      Fish & Game Jump to: navigation, search Logo: Idaho Department of Fish and Game Name: Idaho Department of Fish and Game Address: 600 S. Walnut Place: Boise, Idaho Zip: 83712...

    6. Alaska Department of Fish and Game | Open Energy Information

      Open Energy Info (EERE)

      Game Jump to: navigation, search Logo: Alaska Department of Fish and Game Name: Alaska Department of Fish and Game Address: 1255 W. 8th Street Place: Juneau, Alaska Zip: 99811-5526...

    7. New Mexico Department of Game and Fish | Open Energy Information

      Open Energy Info (EERE)

      Game and Fish Jump to: navigation, search Logo: New Mexico Department of Game and Fish Name: New Mexico Department of Game and Fish Abbreviation: NMDGF Address: 1 Wildlife Way...

    8. Video Games - Did They Begin at Brookhaven

      Office of Scientific and Technical Information (OSTI)

      Video Games – Did They Begin at Brookhaven? Additional Web Pages The following account, written in 1981, tells how a Department of Energy research and development program led to the pioneering development of video games. William Higinbotham William Higinbotham First Pong, now Space Invaders, next Star Castle – video games have mesmerized children of at all ages across the country and around the world. Where did it all begin? Possibly at Brookhaven National Laboratory. In 1958, William

    9. Wyoming Game and Fish Department | Open Energy Information

      Open Energy Info (EERE)

      Game and Fish Department Jump to: navigation, search Name: Wyoming Game and Fish Department Abbreviation: WGFD Address: 5400 Bishop Boulevard Place: Cheyenne, Wyoming Zip: 82006...

    10. Electrolyte Genome Could Be Battery Game-Changer

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Electrolyte Genome Could Be Battery Game-Changer Electrolyte Genome Could Be Battery Game-Changer The Materials Project screens molecules to accelerate electrolyte discovery April...

    11. Technology for Increasing Geothermal Energy Productivity. Computer Models to Characterize the Chemical Interactions of Geothermal Fluids and Injectates with Reservoir Rocks, Wells, Surface Equipment

      SciTech Connect (OSTI)

      Nancy Moller Weare

      2006-07-25

      This final report describes the results of a research program we carried out over a five-year (3/1999-9/2004) period with funding from a Department of Energy geothermal FDP grant (DE-FG07-99ID13745) and from other agencies. The goal of the research projects in this program was to develop modeling technologies that can increase the understanding of geothermal reservoir chemistry and chemistry-related energy production processes. The ability of computer models to handle many chemical variables and complex interactions makes them an essential tool for building a fundamental understanding of a wide variety of complex geothermal resource and production chemistry. With careful choice of methodology and parameterization, the research objectives were to show that chemical models can correctly simulate behavior for the ranges of fluid compositions, formation minerals, temperature and pressure associated with present and near-future geothermal systems as well as for the very high PT chemistry of deep resources that is intractable with traditional experimental methods. Our research results successfully met these objectives. We demonstrated that advances in physical chemistry theory can be used to accurately describe the thermodynamics of solid-liquid-gas systems via their free energies for wide ranges of composition (X), temperature and pressure. Eight articles on this work were published in peer-reviewed journals and in conference proceedings. Four are in preparation. Our work has been presented at many workshops and conferences. We also considerably improved our interactive web site (geotherm.ucsd.edu), which was in preliminary form prior to the grant. This site, which includes several model codes treating different XPT conditions, is an effective means to transfer our technologies and is used by the geothermal community and other researchers worldwide. Our models have wide application to many energy-related and other important problems (e.g., scaling prediction in petroleum production systems, stripping towers for mineral production processes, nuclear waste storage, CO2 sequestration strategies, global warming). Although funding decreases cut short completion of several research activities, we made significant progress on these abbreviated projects.

    12. Interactive Jobs

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Interactive Jobs Interactive Jobs Serial Code or Commands Franklin is a massively parallel high-performance computing platform and is intended and designed to run large parallel codes. While it is possible to run serial jobs on Franklin, it is discouraged. Any code or command that is not preceded by the aprun command will execute serially on a service (usually login) node. The login nodes are for executing general UNIX shell commands, building code, and submitting jobs intended to run on the

    13. Cyber-Physical Correlations for Infrastructure Resilience: A Game-Theoretic Approach

      SciTech Connect (OSTI)

      Rao, Nageswara S; He, Fei; Ma, Chris Y. T.; Yao, David K. Y.; Zhuang, Jun

      2014-01-01

      In several critical infrastructures, the cyber and physical parts are correlated so that disruptions to one affect the other and hence the whole system. These correlations may be exploited to strategically launch component attacks, and hence must be accounted for in ensuring the infrastructure's resilience, specified by its survival probability. We characterize the cyber-physical interactions at two levels: (i) the failure correlation function specifies the conditional survival probability of the cyber sub-infrastructure given the physical sub-infrastructure, as a function of their marginal probabilities, and (ii) the individual survival probabilities of both sub-infrastructures are characterized by first-order differential conditions. We formulate a resilience problem for infrastructures composed of discrete components as a game between the provider and attacker, wherein their utility functions consist of an infrastructure survival probability term and a cost term expressed in terms of the number of components attacked and reinforced. We derive Nash Equilibrium conditions and sensitivity functions that highlight the dependence of infrastructure resilience on the cost term, correlation function, and sub-infrastructure survival probabilities. These results generalize earlier ones based on linear failure correlation functions and independent component failures. We apply the results to models of cloud computing infrastructures and energy grids.
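
      As a worked miniature of the provider-versus-attacker formulation described above (each utility combines a survival-probability term and a cost term over discrete component counts), the sketch below enumerates pure-strategy Nash equilibria. The survival model and every number are invented for illustration and are not the paper's model.

```python
# Tiny provider-vs-attacker game: utilities combine a survival-probability
# term and a cost term, and pure-strategy Nash equilibria are found by
# enumerating best responses. All numbers are invented for illustration.
import itertools

N_COMPONENTS = 4
VALUE, REINFORCE_COST, ATTACK_COST = 10.0, 2.0, 1.5

def survival(reinforced, attacked):
    """Toy survival probability: attacks hurt, reinforcement helps, clamped to [0, 1]."""
    return min(1.0, max(0.0, 1.0 - 0.20 * attacked + 0.15 * reinforced))

def utilities(reinforced, attacked):
    p = survival(reinforced, attacked)
    provider = VALUE * p - REINFORCE_COST * reinforced
    attacker = VALUE * (1.0 - p) - ATTACK_COST * attacked
    return provider, attacker

def pure_nash():
    choices = range(N_COMPONENTS + 1)
    equilibria = []
    for r, a in itertools.product(choices, repeat=2):
        u_p, u_a = utilities(r, a)
        if (all(u_p >= utilities(r2, a)[0] for r2 in choices) and
                all(u_a >= utilities(r, a2)[1] for a2 in choices)):
            equilibria.append((r, a, round(u_p, 2), round(u_a, 2)))
    return equilibria

if __name__ == "__main__":
    # With these toy numbers reinforcement does not pay off, so the only pure
    # equilibrium is (0 components reinforced, 4 attacked).
    print(pure_nash())   # [(0, 4, 2.0, 2.0)]
```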

    14. Interactive Simulation of Nuclear Materials Safeguards and Security

      Energy Science and Technology Software Center (OSTI)

      1994-03-14

      THIEF is an interactive computer simulation or computer game of the safeguards and security (S&S) systems of a nuclear facility. The user is placed in the role of a non-violent insider attempting to remove special nuclear material from the facility. All portions of the S&S system that are relevant to the non-violent insider threat are included. The computer operates the S&S systems and attempts to detect the loss of the nuclear material. Both the physical protection system and the materials control and accounting system are modeled. The description of the facility and its S&S systems is defined by the user with the aid of an input module. All aspects of the facility description are provided by the user. The program has a custom graphical user interface to facilitate its use by people with limited computer experience. The custom interface also allows it to run on relatively small computer systems.

    15. Indian Gaming 2012 Tradeshow and Convention

      Broader source: Energy.gov [DOE]

      The National Indian Gaming Association (NIGA) 2012 tradeshow and convention will take place April 1-4, 2012, in San Diego, California. The event features seminars and trainings and other activities...

    16. Computational mechanics

      SciTech Connect (OSTI)

      Raboin, P J

      1998-01-01

      The Computational Mechanics thrust area is a vital and growing facet of the Mechanical Engineering Department at Lawrence Livermore National Laboratory (LLNL). This work supports the development of computational analysis tools in the areas of structural mechanics and heat transfer. Over 75 analysts depend on thrust area-supported software running on a variety of computing platforms to meet the demands of LLNL programs. Interactions with the Department of Defense (DOD) High Performance Computing and Modernization Program and the Defense Special Weapons Agency are of special importance as they support our ParaDyn project in its development of new parallel capabilities for DYNA3D. Working with DOD customers has been invaluable to driving this technology in directions mutually beneficial to the Department of Energy. Other projects associated with the Computational Mechanics thrust area include work with the Partnership for a New Generation Vehicle (PNGV) for ''Springback Predictability'' and with the Federal Aviation Administration (FAA) for the ''Development of Methodologies for Evaluating Containment and Mitigation of Uncontained Engine Debris.'' In this report for FY-97, there are five articles detailing three code development activities and two projects that synthesized new code capabilities with new analytic research in damage/failure and biomechanics. The articles this year are: (1) Energy- and Momentum-Conserving Rigid-Body Contact for NIKE3D and DYNA3D; (2) Computational Modeling of Prosthetics: A New Approach to Implant Design; (3) Characterization of Laser-Induced Mechanical Failure Damage of Optical Components; (4) Parallel Algorithm Research for Solid Mechanics Applications Using Finite Element Analysis; and (5) An Accurate One-Step Elasto-Plasticity Algorithm for Shell Elements in DYNA3D.

    17. Computation & Simulation > Theory & Computation > Research >...

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Extensive combinatorial results and ongoing basic...

    18. Innovative Computational Tools for Reducing Exploration Risk...

      Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

      of Water-Rock Interactions and Magnetotelluric Surveys Innovative Computational Tools for Reducing Exploration Risk Through Integration of Water-Rock Interactions and ...

    19. New Mexico Department of Fish and Game webpage | Open Energy...

      Open Energy Info (EERE)

      New Mexico Department of Fish and Game webpage Jump to: navigation, search OpenEI Reference LibraryAdd to library Web Site: New Mexico Department of Fish and Game webpage Author...

    20. California Fish and Game Code Section 86 | Open Energy Information

      Open Energy Info (EERE)

      California Fish and Game Code Section 86 Jump to: navigation, search OpenEI Reference LibraryAdd to library Legal Document- StatuteStatute: California Fish and Game Code Section...

    1. New Mexico Department of Fish and Game Mining Guidelines webpage...

      Open Energy Info (EERE)

      New Mexico Department of Fish and Game Mining Guidelines webpage Jump to: navigation, search OpenEI Reference LibraryAdd to library Web Site: New Mexico Department of Fish and Game...

    2. Wyoming Game and Fish Department Geospatial Data | Open Energy...

      Open Energy Info (EERE)

      Wyoming Game and Fish Department Geospatial Data Jump to: navigation, search OpenEI Reference LibraryAdd to library Map: Wyoming Game and Fish Department Geospatial DataInfo...

    3. Game Changing Technology | GE Global Research

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      I Want to See Game-Changing Technology In launching our new website, we asked some of our scientists and engineers what they would want the world to see. Check out this short video to see what they said.

    4. Non-covalent interactions of nitrous oxide with aromatic compounds: Spectroscopic and computational evidence for the formation of 1:1 complexes

      SciTech Connect (OSTI)

      Cao, Qian; School of Chemistry and Chemical Engineering, Sun Yat-Sen University, Guangzhou 510275 ; Gor, Gennady Y.; Krogh-Jespersen, Karsten; Khriachtchev, Leonid

      2014-04-14

      We present the first study of intermolecular interactions between nitrous oxide (N{sub 2}O) and three representative aromatic compounds (ACs): phenol, cresol, and toluene. The infrared spectroscopic experiments were performed in a Ne matrix and were supported by high-level quantum chemical calculations. Comparisons of the calculated and experimental vibrational spectra provide direct identification and characterization of the 1:1 N{sub 2}O-AC complexes. Our results show that N{sub 2}O is capable of forming non-covalently bonded complexes with ACs. Complex formation is dominated by dispersion forces, and the interaction energies are relatively low (about -3 kcal mol{sup -1}); however, the complexes are clearly detected by frequency shifts of the characteristic bands. These results suggest that N{sub 2}O can be bound to the amino-acid residues tyrosine or phenylalanine in the form of π complexes.

    5. Computed solid phases limiting the concentration of dissolved constituents in basalt aquifers of the Columbia Plateau in eastern Washington. Geochemical modeling and nuclide/rock/groundwater interaction studies

      SciTech Connect (OSTI)

      Deutsch, W.J.; Jenne, E.A.; Krupka, K.M.

      1982-08-01

      A speciation-solubility geochemical model, WATEQ2, was used to analyze geographically-diverse, ground-water samples from the aquifers of the Columbia Plateau basalts in eastern Washington. The ground-water samples compute to be at equilibrium with calcite, which provides both a solubility control for dissolved calcium and a pH buffer. Amorphic ferric hydroxide, Fe(OH)/sub 3/(A), is at saturation or modestly oversaturated in the few water samples with measured redox potentials. Most of the ground-water samples compute to be at equilibrium with amorphic silica (glass) and wairakite, a zeolite, and are saturated to oversaturated with respect to allophane, an amorphic aluminosilicate. The water samples are saturated to undersaturated with halloysite, a clay, and are variably oversaturated with regard to other secondary clay minerals. Equilibrium between the ground water and amorphic silica presumably results from the dissolution of the glassy matrix of the basalt. The oversaturation of the clay minerals other than halloysite indicates that their rate of formation lags the dissolution rate of the basaltic glass. The modeling results indicate that metastable amorphic solids limit the concentration of dissolved silicon and suggest the same possibility for aluminum and iron, and that the processes of dissolution of basaltic glass and formation of metastable secondary minerals are continuing even though the basalts are of Miocene age. The computed solubility relations are found to agree with the known assemblages of alteration minerals in the basalt fractures and vesicles. Because the chemical reactivity of the bedrock will influence the transport of solutes in ground water, the observed solubility equilibria are important factors with regard to chemical-retention processes associated with the possible migration of nuclear waste stored in the earth's crust.

    6. Game Imaging Meets Nuclear Reality

      SciTech Connect (OSTI)

      Michel, Kelly; Watkins, Adam

      2011-03-21

      At Los Alamos National Laboratory, a team of artists and animators, nuclear engineers and computer scientists is teaming to provide 3-D models of nuclear facilities to train IAEA safeguards inspectors and others who need fast familiarity with specific nuclear sites.

    7. Game Imaging Meets Nuclear Reality

      ScienceCinema (OSTI)

      Michel, Kelly; Watkins, Adam

      2014-08-12

      At Los Alamos National Laboratory, a team of artists and animators, nuclear engineers and computer scientists is teaming to provide 3-D models of nuclear facilities to train IAEA safeguards inspectors and others who need fast familiarity with specific nuclear sites.

    8. Density functional theory study of the interaction of vinyl radical, ethyne, and ethene with benzene, aimed to define an affordable computational level to investigate stability trends in large van der Waals complexes

      SciTech Connect (OSTI)

      Maranzana, Andrea; Giordana, Anna; Indarto, Antonius; Tonachini, Glauco; Barone, Vincenzo; Causà, Mauro; Pavone, Michele

      2013-12-28

      Our purpose is to identify a computational level sufficiently dependable and affordable to assess trends in the interaction of a variety of radical or closed shell unsaturated hydro-carbons A adsorbed on soot platelet models B. These systems, of environmental interest, would unavoidably have rather large sizes, thus prompting to explore in this paper the performances of relatively low-level computational methods and compare them with higher-level reference results. To this end, the interaction of three complexes between non-polar species, vinyl radical, ethyne, or ethene (A) with benzene (B) is studied, since these species, involved themselves in growth processes of polycyclic aromatic hydrocarbons (PAHs) and soot particles, are small enough to allow high-level reference calculations of the interaction energy ΔE{sub AB}. Counterpoise-corrected interaction energies ΔE{sub AB} are used at all stages. (1) Density Functional Theory (DFT) unconstrained optimizations of the A−B complexes are carried out, using the B3LYP-D, ωB97X-D, and M06-2X functionals, with six basis sets: 6-31G(d), 6-311 (2d,p), and 6-311++G(3df,3pd); aug-cc-pVDZ and aug-cc-pVTZ; N07T. (2) Then, unconstrained optimizations by Møller-Plesset second order Perturbation Theory (MP2), with each basis set, allow subsequent single point Coupled Cluster Singles Doubles and perturbative estimate of the Triples energy computations with the same basis sets [CCSD(T)//MP2]. (3) Based on an additivity assumption of (i) the estimated MP2 energy at the complete basis set limit [E{sub MP2/CBS}] and (ii) the higher-order correlation energy effects in passing from MP2 to CCSD(T) at the aug-cc-pVTZ basis set, ΔE{sub CC-MP}, a CCSD(T)/CBS estimate is obtained and taken as a computational energy reference. At DFT, variations in ΔE{sub AB} with basis set are not large for the title molecules, and the three functionals perform rather satisfactorily even with rather small basis sets [6-31G(d) and N07T], exhibiting deviation from the computational reference of less than 1 kcal mol{sup −1}. The zero-point vibrational energy corrected estimates Δ(E{sub AB}+ZPE), obtained with the three functionals and the 6-31G(d) and N07T basis sets, are compared with experimental D{sub 0} measures, when available. In particular, this comparison is finally extended to the naphthalene and coronene dimers and to three π−π associations of different PAHs (R, made by 10, 16, or 24 C atoms) and P (80 C atoms)
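
      In symbols, the two quantities this abstract combines are the counterpoise-corrected interaction energy of the A-B complex and the additivity estimate of the CCSD(T)/CBS reference; the rendering below is a standard way of writing what the abstract describes, not a formula quoted from the paper.

```latex
% Counterpoise-corrected interaction energy of the A-B complex
\Delta E_{AB} = E_{AB}^{\,AB\ \text{basis}} - E_{A}^{\,AB\ \text{basis}} - E_{B}^{\,AB\ \text{basis}}

% Additivity estimate of the CCSD(T)/CBS reference: MP2 at the complete-basis-set
% limit plus the MP2 -> CCSD(T) correction evaluated with aug-cc-pVTZ
\Delta E_{\mathrm{CCSD(T)/CBS}} \approx \Delta E_{\mathrm{MP2/CBS}} + \Delta E_{\mathrm{CC\text{-}MP}},
\qquad
\Delta E_{\mathrm{CC\text{-}MP}} = \Delta E_{\mathrm{CCSD(T)}}^{\text{aug-cc-pVTZ}} - \Delta E_{\mathrm{MP2}}^{\text{aug-cc-pVTZ}}
```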

    9. Compute nodes

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Compute nodes Compute nodes Click here to see a more detailed hierarchical map of the topology of a compute node. Last edited: 2016-02-01 08:07:08

    10. Computing Information

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      here you can find information relating to: Obtaining the right computer accounts. Using NIC terminals. Using BooNE's Computing Resources, including: Choosing your desktop....

    11. Computer System,

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      undergraduate summer institute http://isti.lanl.gov (Educational Prog) 2016 Computer System, Cluster, and Networking Summer Institute Purpose The Computer System,...

    12. Human-computer interface

      DOE Patents [OSTI]

      Anderson, Thomas G.

      2004-12-21

      The present invention provides a method of human-computer interfacing. Force feedback allows intuitive navigation and control near a boundary between regions in a computer-represented space. For example, the method allows a user to interact with a virtual craft, then push through the windshield of the craft to interact with the virtual world surrounding the craft. As another example, the method allows a user to feel transitions between different control domains of a computer representation of a space. The method can provide for force feedback that increases as a user's locus of interaction moves near a boundary, then perceptibly changes (e.g., abruptly drops or changes direction) when the boundary is traversed.
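
      As a rough sketch of the behavior described above (not the patented implementation; the function name, gain, and onset threshold below are hypothetical), a force profile that grows as the locus of interaction approaches a boundary and drops abruptly once it is traversed could look like:

        def boundary_force(distance_to_boundary, gain=2.0, onset=0.1):
            """Toy force-feedback profile: zero far from the boundary, increasing as the
            locus of interaction approaches it, and dropping abruptly once it is crossed.
            A negative distance means the boundary has been traversed."""
            if distance_to_boundary < 0.0:       # boundary traversed: force released
                return 0.0
            if distance_to_boundary > onset:     # far from the boundary: no resistance
                return 0.0
            # Inside the onset region the force ramps up as the boundary is approached.
            return gain * (onset - distance_to_boundary) / onset

        # Example: sample the force while moving toward and then through the boundary.
        for d in (0.2, 0.08, 0.04, 0.01, -0.01):
            print(f"distance={d:+.2f}  force={boundary_force(d):.2f}")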

    13. Powerpedia Games Encourages Employees to Enhance Internal Wiki | Department

      Energy Savers [EERE]

      of Energy Games Encourages Employees to Enhance Internal Wiki Powerpedia Games Encourages Employees to Enhance Internal Wiki November 20, 2014 - 4:28pm Addthis Powerpedia Games Encourages Employees to Enhance Internal Wiki Tom O'Neill Tom O'Neill Lead Powerpedia Curator and Ambassador The Department's internal wiki, Powerpedia, is holding an editing competition as part of its 5th Birthday celebration. Users will collect points by making edits to the wiki. The points will be used to determine

    14. Compute Reservation Request Form

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Compute Reservation Request Form Compute Reservation Request Form Users can request a scheduled reservation of machine resources if their jobs have special needs that cannot be accommodated through the regular batch system. A reservation brings some portion of the machine to a specific user or project for an agreed upon duration. Typically this is used for interactive debugging at scale or real time processing linked to some experiment or event. It is not intended to be used to guarantee fast

    15. New Mexico Department of Fish and Game Powerline Project Guidelines...

      Open Energy Info (EERE)

      New Mexico Department of Fish and Game Powerline Project Guidelines OpenEI Reference Library Permitting/Regulatory Guidance - Guide...

    16. Innovative Computational Tools for Reducing Exploration Risk Through

      Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

      Integration of Water-Rock Interactions and Magnetotelluric Surveys | Department of Energy Computational Tools for Reducing Exploration Risk Through Integration of Water-Rock Interactions and Magnetotelluric Surveys Innovative Computational Tools for Reducing Exploration Risk Through Integration of Water-Rock Interactions and Magnetotelluric Surveys Innovative Computational Tools for Reducing Exploration Risk Through Integration of Water-Rock Interactions and Magnetotelluric Surveys

    17. Computing and Computational Sciences Directorate - Computer Science...

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      AWARD Winners: Jess Gehin; Jackie Isaacs; Douglas Kothe; Debbie McCoy; Bonnie Nestor; John Turner; Gilbert Weigand Organization(s): Nuclear Technology Program; Computing and...

    18. Computing Sciences

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Division The Computational Research Division conducts research and development in mathematical modeling and simulation, algorithm design, data storage, management and...

    19. Computing Resources

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Cluster-Image TRACC RESEARCH Computational Fluid Dynamics Computational Structural Mechanics Transportation Systems Modeling Computing Resources The TRACC Computational Clusters With the addition of a new cluster called Zephyr that was made operational in September of this year (2012), TRACC now offers two clusters to choose from: Zephyr and our original cluster that has now been named Phoenix. Zephyr was acquired from Atipa technologies, and it is a 92-node system with each node having two AMD

    20. The name of the game | Jefferson Lab

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      The name of the game April 24, 2014 Over the past several months, we have had reasons to discuss the way we have handled difficulties, such as transitions from shutdown to commissioning, the potential for future physics, and a visit from the Secretary of Energy. But there is one subject that perhaps surpasses all others in terms of what we are expected to deliver. Physics! When I came to the lab in 2008, it did not take me long to hear the name Q-weak. At the time, the experimental apparatus was

    1. Compute Nodes

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Compute Nodes Compute Nodes Quad-core AMD Opteron processor Compute Node Configuration 9,572 nodes 1 quad-core AMD 'Budapest' 2.3 GHz processor per node 4 cores per node (38,288 total cores) 8 GB DDR3 800 MHz memory per node Peak Gflop rate 9.2 Gflops/core 36.8 Gflops/node 352 Tflops for the entire machine Each core has its own L1 and L2 caches, with 64 KB and 512 KB respectively 2 MB L3 cache shared among the 4 cores Compute Node Software By default the compute nodes run a restricted low-overhead
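
      The quoted peak rates are consistent with one another; a quick arithmetic check using only the figures given in this record:

        # Peak-rate arithmetic for the node configuration quoted above.
        gflops_per_core = 9.2
        cores_per_node = 4
        nodes = 9572

        gflops_per_node = gflops_per_core * cores_per_node   # 36.8 Gflops/node
        tflops_machine = gflops_per_node * nodes / 1000.0    # ~352 Tflops total

        print(f"{gflops_per_node:.1f} Gflops/node, {tflops_machine:.0f} Tflops for the machine")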

    2. Computes Generalized Electromagnetic Interactions Between Structures

      Energy Science and Technology Software Center (OSTI)

      2006-05-18

      Eiger is primarily an integral equation code for both frequency-domain electromagnetics and electrostatics. There is also some finite element capability. In the frequency-domain version there are different Green's functions in the code: 2D and 3D free-space, symmetry-plane, periodic, and layered-media Green's functions. There are thin slot models for coupling into cavities. There is a thin wire algorithm as well as junction basis functions for attachment of a wire to a conducting surface. The code is written in Fortran 90 using object oriented design. The code has the capability to run both in parallel and serial modes. The code is a suite consisting of a pre-processor (Jungfrau), the physics code (EIGER), and a post-processor (Moench).

    3. Computational Nanophotonics: modeling optical interactions and...

      Office of Scientific and Technical Information (OSTI)

      This research project was part of a larger research project with the same title led by Stephen Gray at Argonne. A significant amount of our work involved collaborations with Gray,...

    4. Quantum steady computation

      SciTech Connect (OSTI)

      Castagnoli, G. )

      1991-08-10

      This paper reports that current conceptions of quantum mechanical computers inherit from conventional digital machines two apparently interacting features, machine imperfection and temporal development of the computational process. On account of machine imperfection, the process would become ideally reversible only in the limiting case of zero speed. Therefore the process is irreversible in practice and cannot be considered to be a fundamental quantum one. By giving up classical features and using a linear, reversible and non-sequential representation of the computational process - not realizable in classical machines - the process can be identified with the mathematical form of a quantum steady state. This form of steady quantum computation would seem to have an important bearing on the notion of cognition.

    5. Computer Science

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Cite Seer Department of Energy provided open access science research citations in chemistry, physics, materials, engineering, and computer science IEEE Xplore Full text...

    6. Computer Security

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Computer Security All JLF participants must fully comply with all LLNL computer security regulations and procedures. A laptop entering or leaving B-174 for the sole use of a US citizen and so configured, and requiring no IP address, need not be registered for use in the JLF. By September 2009, it is expected that computers for use by Foreign National Investigators will have no special provisions. Notify maricle1@llnl.gov of all other computers entering, leaving, or being moved within B-174. Use

    7. Computing and Computational Sciences Directorate - Divisions

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      CCSD Divisions Computational Sciences and Engineering Computer Sciences and Mathematics Information Technology Services Joint Institute for Computational Sciences National Center for Computational Sciences

    8. Extreme Scale Computing, Co-design

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Information Science, Computing, Applied Math » Extreme Scale Computing, Co-design Extreme Scale Computing, Co-design Computational co-design may facilitate revolutionary designs in the next generation of supercomputers. Get Expertise Tim Germann Physics and Chemistry of Materials Email Allen McPherson Energy and Infrastructure Analysis Email Turab Lookman Physics and Condensed Matter and Complex Systems Email Computational co-design involves developing the interacting components of a

    9. System for Analysis of Soil-Structure Interaction (SASSI) Verification...

      Broader source: Energy.gov (indexed) [DOE]

      the System for Analysis of Soil-Structure Interaction, a computer code for performing finite element analyses of soil-structure interaction during seismic ground motions. It was...

    10. Title 16 Alaska Statutes Chapter 20 Fish and Game Conservation...

      Open Energy Info (EERE)

      Title 16 Alaska Statutes Chapter 20 Fish and Game Conservation OpenEI Reference Library Legal Document - Statute: Title 16 Alaska...

    11. It's Your Career, What's Your Game Plan? | Argonne National Laboratory

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      It's Your Career, What's Your Game Plan? January 20, 2016 11:45AM to 1:30PM Presenter Philip Clifford, University of Illinois at Chicago Location Building 241, Room D172 Type...

    12. Video Game Device Haptic Interface for Robotic Arc Welding

      SciTech Connect (OSTI)

      Corrie I. Nichol; Milos Manic

      2009-05-01

      Recent advances in technology for video games have made a broad array of haptic feedback devices available at low cost. This paper presents a bi-manual haptic system to enable an operator to weld remotely using a commercially available haptic feedback video game device for the user interface. The system showed good performance in initial tests, demonstrating the utility of low cost input devices for remote haptic operations.
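
      This record does not spell out the control mapping; purely as an illustration of the kind of mapping such an interface needs (the function name, dead zone, and speed limit below are hypothetical, not the authors' system), a thumbstick-to-torch-velocity mapping might look like:

        def stick_to_torch_velocity(stick_x, stick_y, dead_zone=0.1, max_speed_mm_s=20.0):
            """Map a thumbstick deflection in [-1, 1] to a torch velocity command,
            with a dead zone so small hand tremors do not move the torch."""
            def axis(v):
                if abs(v) < dead_zone:
                    return 0.0
                # Rescale the remaining range so motion starts smoothly at the dead-zone edge.
                sign = 1.0 if v > 0 else -1.0
                return sign * (abs(v) - dead_zone) / (1.0 - dead_zone) * max_speed_mm_s
            return axis(stick_x), axis(stick_y)

        # Example: half deflection to the right, slight upward drift inside the dead zone.
        print(stick_to_torch_velocity(0.5, 0.05))   # -> (~8.9 mm/s, 0.0 mm/s)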

    13. Electrolyte Genome Could Be Battery Game-Changer

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Electrolyte Genome Could Be Battery Game-Changer Electrolyte Genome Could Be Battery Game-Changer The Materials Project screens molecules to accelerate electrolyte discovery April 15, 2015 Julie Chao, JHChao@lbl.gov, +1 510 486 6491 Berkeley Lab scientist Kristin Persson (right) and her Electrolyte Genome team, Nav Nidhi Rajput and Xiaohui Qu. (Roy Kaltschmidt, Berkeley Lab) A new breakthrough battery - one that has significantly higher energy, lasts longer, and

    14. Lab hosts multi-lab cyber security games

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Lab hosts multi-lab cyber security games Lab hosts multi-lab cyber security games Eventide brought together cyber and IT leaders from 20 sites to develop recommendations on resources they need from the Joint Cyber Coordination Center. April 12, 2012 Los Alamos National Laboratory sits on top of a once-remote mesa in northern New Mexico with the Jemez mountains as a backdrop to research and innovation covering multiple disciplines, from bioscience and sustainable energy sources to plasma physics and

    15. Research Lab Wins Prestigious Award for Changing the Energy Game |

      Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

      Department of Energy Research Lab Wins Prestigious Award for Changing the Energy Game Research Lab Wins Prestigious Award for Changing the Energy Game March 28, 2014 - 11:38am Addthis Energy Systems Integration Facility 1 of 7 Energy Systems Integration Facility The Energy Department's Energy Systems Integration Facility (ESIF) at the National Renewable Energy Laboratory in Golden, Colorado. The 182,500-square-foot facility houses 15 experimental laboratories and several outdoor test beds.

    16. Compute Nodes

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Compute Nodes Compute Nodes There are currently 2632 nodes available on PDSF. The compute (batch) nodes at PDSF are heterogeneous, reflecting the periodic procurement of new nodes (and the eventual retirement of old nodes). From the user's perspective they are essentially all equivalent except that some have more memory per job slot. If your jobs have memory requirements beyond the default maximum of 1.1GB you should specify that in your job submission and the batch system will run your job on an

    17. Compute Nodes

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Nodes Quad-core AMD Opteron processor Compute Node Configuration 9,572 nodes 1 quad-core AMD 'Budapest' 2.3 GHz processor per node 4 cores per node (38,288 total cores) 8 GB...

    18. Exascale Computing

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Computing Exascale Computing CoDEx Project: A Hardware/Software Codesign Environment for the Exascale Era The next decade will see a rapid evolution of HPC node architectures as power and cooling constraints are limiting increases in microprocessor clock speeds and constraining data movement. Applications and algorithms will need to change and adapt as node architectures evolve. A key element of the strategy as we move forward is the co-design of applications, architectures and programming

    19. LHC Computing

      SciTech Connect (OSTI)

      Lincoln, Don

      2015-07-28

      The LHC is the world’s highest-energy particle accelerator and scientists use it to record an unprecedented amount of data. This data is recorded in electronic format and it requires an enormous computational infrastructure to convert the raw data into conclusions about the fundamental rules that govern matter. In this video, Fermilab’s Dr. Don Lincoln gives us a sense of just how much data is involved and the incredible computer resources that make it all possible.

    20. Computational mechanics

      SciTech Connect (OSTI)

      Goudreau, G.L.

      1993-03-01

      The Computational Mechanics thrust area sponsors research into the underlying solid, structural and fluid mechanics and heat transfer necessary for the development of state-of-the-art general purpose computational software. The scale of computational capability spans office workstations, departmental computer servers, and Cray-class supercomputers. The DYNA, NIKE, and TOPAZ codes have achieved world fame through our broad collaborators program, in addition to their strong support of on-going Lawrence Livermore National Laboratory (LLNL) programs. Several technology transfer initiatives have been based on these established codes, teaming LLNL analysts and researchers with counterparts in industry, extending code capability to specific industrial interests of casting, metalforming, and automobile crash dynamics. The next-generation solid/structural mechanics code, ParaDyn, is targeted toward massively parallel computers, which will extend performance from gigaflop to teraflop power. Our work for FY-92 is described in the following eight articles: (1) Solution Strategies: New Approaches for Strongly Nonlinear Quasistatic Problems Using DYNA3D; (2) Enhanced Enforcement of Mechanical Contact: The Method of Augmented Lagrangians; (3) ParaDyn: New Generation Solid/Structural Mechanics Codes for Massively Parallel Processors; (4) Composite Damage Modeling; (5) HYDRA: A Parallel/Vector Flow Solver for Three-Dimensional, Transient, Incompressible Viscous Flow; (6) Development and Testing of the TRIM3D Radiation Heat Transfer Code; (7) A Methodology for Calculating the Seismic Response of Critical Structures; and (8) Reinforced Concrete Damage Modeling.

    1. Computing and Computational Sciences Directorate - Contacts

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Home › About Us Contacts Jeff Nichols Associate Laboratory Director Computing and Computational Sciences Becky Verastegui Directorate Operations Manager Computing and Computational Sciences Directorate Michael Bartell Chief Information Officer Information Technologies Services Division Jim Hack Director, Climate Science Institute National Center for Computational Sciences Shaun Gleason Division Director Computational Sciences and Engineering Barney Maccabe Division Director Computer Science

    2. Compute Nodes

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Compute Nodes Compute Nodes Compute Node Configuration 6,384 nodes 2 twelve-core AMD 'MagnyCours' 2.1-GHz processors per node (see die image to the right and schematic below) 24 cores per node (153,216 total cores) 32 GB DDR3 1333-MHz memory per node (6,000 nodes) 64 GB DDR3 1333-MHz memory per node (384 nodes) Peak Gflop/s rate: 8.4 Gflops/core 201.6 Gflops/node 1.28 Peta-flops for the entire machine Each core has its own L1 and L2 caches, with 64 KB and 512 KB respectively One 6-MB

    3. Computing Resources

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Resources This page is the repository for sundry items of information relevant to general computing on BooNE. If you have a question or problem that isn't answered here, or a suggestion for improving this page or the information on it, please mail boone-computing@fnal.gov and we'll do our best to address any issues. Note about this page Some links on this page point to www.everything2.com, and are meant to give an idea about a concept or thing without necessarily wading through a whole website

    4. NMAC 19.34.3 Wildlife Habitat and Lands Use of State Game Commission...

      Open Energy Info (EERE)

      Mexico state game commission with the authority to acquire lands, to provide for use of game and fish for use and development for public recreation. Published NA Year Signed or...

    5. EERE Success Story-Oregon: DOE Advances Game-Changing EGS Geothermal...

      Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

      Oregon: DOE Advances Game-Changing EGS Geothermal Technology at the Newberry Volcano EERE Success Story-Oregon: DOE Advances Game-Changing EGS Geothermal Technology at the Newberry ...

    6. Interactive Jobs

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Interactive Jobs Interactive Jobs Interactive Batch Jobs The login nodes on Genepool should not be used for heavy interactive work. These login nodes are shared amongst all Genepool users, so heavy CPU or memory usage will affect other Genepool users. 10 nodes have been reserved on Genepool for high priority and interactive work. Each user can use up to 2 slots at a time in the high priority queue. Use the qlogin command to run jobs interactively. The example below shows how to request an

    7. Advanced Scientific Computing Research

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Advanced Scientific Computing Research Advanced Scientific Computing Research Discovering, developing, and deploying computational and networking capabilities to analyze, model,...

    8. Sandia Energy - Computational Science

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Computational Science Home Energy Research Advanced Scientific Computing Research (ASCR) Computational Science

    9. Interactive Jobs

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      The following command requests 2 nodes using the interactive queue. hopper% qsub -I -q debug -l mppwidth=48 The -I flag specifies an interactive job. The -q flag specifies the...

    10. Computer System,

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      System, Cluster, and Networking Summer Institute New Mexico Consortium and Los Alamos National Laboratory HOW TO APPLY Applications will be accepted JANUARY 5 - FEBRUARY 13, 2016 Computing and Information Technology undergraduate students are encouraged to apply. Must be a U.S. citizen. * Submit a current resume; * Official University Transcript (with spring courses posted and/or a copy of spring 2016 schedule) 3.0 GPA minimum; * One Letter of Recommendation from a Faculty Member; and * Letter of

    11. Computing Events

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Events Computing Events Spotlighting the most advanced scientific and technical applications in the world! Featuring exhibits of the latest and greatest technologies from industry, academia and government research organizations; many of these technologies will be seen for the first time in Denver. Supercomputing Conference 13 Denver, Colorado November 17-22, 2013 Spotlighting the most advanced scientific and technical applications in the world, SC13 will bring together the international

    12. Game-Changing Advancements in Solar Energy | Department of Energy

      Office of Environmental Management (EM)

      Game-Changing Advancements in Solar Energy Game-Changing Advancements in Solar Energy Addthis Record-Breaking Solar 1 of 5 Record-Breaking Solar This concentrating photovoltaic (CPV) cell -- which uses a focused lens to magnify light to 418 times the intensity of the sun -- earned an R&D100 Award and set a new world record of 43.5 percent for solar cell conversion efficiency. The technology is based on high-efficiency multijunction research pioneered by the National Renewable Energy

    13. Changing the Advanced Energy Manufacturing Game in America's Heartland |

      Office of Environmental Management (EM)

      Department of Energy Advanced Energy Manufacturing Game in America's Heartland Changing the Advanced Energy Manufacturing Game in America's Heartland December 16, 2010 - 9:32am Addthis Andy Oare Andy Oare Former New Media Strategist, Office of Public Affairs What does this mean for me? Clean energy manufacturing is expanding across the Midwest. This was spurred in large part by the Advanced Energy Manufacturing Tax Credit, also known as 48C, which was part of the Recovery Act. The $2.3

    14. developing-compute-efficient

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Developing Compute-efficient, Quality Models with LS-PrePost® 3 on the TRACC Cluster Oct. 21-22, 2010 Argonne TRACC Dr. Cezary Bojanowski Dr. Ronald F. Kulak Announcement (PDF) The LS-PrePost Introductory Course was held October 21-22, 2010 at TRACC in West Chicago with interactive participation on-site as well as remotely via the Internet. Intended primarily for finite element analysts with

    15. Computing and Computational Sciences Directorate - Computer Science and

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Mathematics Division Computer Science and Mathematics Division The Computer Science and Mathematics Division (CSMD) is ORNL's premier source of basic and applied research in high-performance computing, applied mathematics, and intelligent systems. Our mission includes basic research in computational sciences and application of advanced computing systems, computational, mathematical and analysis techniques to the solution of scientific problems of national importance. We seek to work

    16. Computing at JLab

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Jefferson Lab Jefferson Lab Home Search Contact JLab Computing at JLab ---------------------- Accelerator Controls CAD CDEV CODA Computer Center High Performance Computing Scientific Computing JLab Computer Silo

    17. Radiological Worker Computer Based Training

      Energy Science and Technology Software Center (OSTI)

      2003-02-06

      Argonne National Laboratory has developed an interactive computer based training (CBT) version of the standardized DOE Radiological Worker training program. This CD-ROM based program utilizes graphics, animation, photographs, sound and video to train users in ten topical areas: radiological fundamentals, biological effects, dose limits, ALARA, personnel monitoring, controls and postings, emergency response, contamination controls, high radiation areas, and lessons learned.

    18. Interactive Jobs

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Interactive Jobs Interactive Jobs Overview To run interactive jobs on Cori, you must request the number of nodes you want and have the system allocate resources from the pool of free nodes. To request an interactive session, the salloc command must be issued. For example, the following command requests 2 nodes in the debug partition for 30 min. % salloc -N 2 -p debug -t 00:30:00 salloc may be issued with several options. For the complete list of all options for salloc, refer to the SLURM salloc

    19. Computing Resources | Argonne Leadership Computing Facility

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Computing Resources Mira Cetus and Vesta Visualization Cluster Data and Networking Software JLSE Computing Resources Theory and Computing Sciences Building Argonne's Theory and Computing Sciences (TCS) building houses a wide variety of computing systems including some of the most powerful supercomputers in the world. The facility has 25,000 square feet of raised computer floor space and a pair of redundant 20 megavolt amperes electrical feeds from a 90 megawatt substation. The building also

    20. Prototype prosperity-diversity game for the Laboratory Development Division of Sandia National Laboratories

      SciTech Connect (OSTI)

      VanDevender, P.; Berman, M.; Savage, K.

      1996-02-01

      The Prosperity Game conducted for the Laboratory Development Division of Sandia National Laboratories on May 24--25, 1995, focused on the individual and organizational autonomy plaguing the Department of Energy (DOE)-Congress-Laboratories' ability to manage the wrenching change of declining budgets. Prosperity Games are an outgrowth and adaptation of move/countermove and seminar War Games. Each Prosperity Game is unique in that both the game format and the player contributions vary from game to game. This particular Prosperity Game was played by volunteers from Sandia National Laboratories, Eastman Kodak, IBM, and AT&T. Since the participants fully control the content of the games, the specific outcomes will be different when the team for each laboratory, Congress, DOE, and the Laboratory Operating Board (now Laboratory Operations Board) is composed of executives from those respective organizations. Nevertheless, the strategies and implementing agreements suggest that the Prosperity Games stimulate cooperative behaviors and may permit the executives of the institutions to safely explore the consequences of a family of DOE concert.

    1. High Performance Computing

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      HPC INL Logo Home High-Performance Computing INL's high-performance computing center provides general use scientific computing capabilities to support the lab's efforts in advanced...

    2. Computer Architecture Lab

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      User Defined Images Archive APEX Home R & D Exascale Computing CAL Computer Architecture Lab The goal of the Computer Architecture Laboratory (CAL) is to engage in...

    3. Computational and experimental techniques for coupled acoustic/structure

      Office of Scientific and Technical Information (OSTI)

      interactions. (Technical Report) | SciTech Connect Computational and experimental techniques for coupled acoustic/structure interactions. Citation Details In-Document Search Title: Computational and experimental techniques for coupled acoustic/structure interactions. This report documents the results obtained during a one-year Laboratory Directed Research and Development (LDRD) initiative aimed at investigating coupled structural acoustic interactions by means of algorithm development and

    4. James Osborn | Argonne Leadership Computing Facility

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Osborn Computational Scientist / Catalyst Team Lead James Osborn Argonne National Laboratory 9700 South Cass Avenue Building 240 - Rm. 2126 Argonne, IL 60439 630-252-6327 osborn@alcf.anl.gov James Osborn is a Computational Scientist at the ALCF and a Fellow of the Computation Institute at The University of Chicago and Argonne. He specializes in the application of Lattice Field Theory, Random Matrix Theory, and cluster algorithms to strongly interacting systems. He is also interested in

    5. Measuring the Monitoring User Interactive Experiences on Franklin Interactive Nodes

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Interactive Node Responsiveness Richard Gerber User Services Group National Energy Research Scientific Computing Center Lawrence Berkeley National Laboratory Berkeley, CA June 9, 2008 Introduction Anecdotal reports of slow interactive response on Franklin's login nodes have been documented via comments on the 2007 NERSC User Survey. Users report that sluggish command-line response at times makes it difficult to work. The cause, or causes, of the poor response time is unknown. In an attempt to
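
      The report's own measurements are not reproduced here; as a generic illustration of how login-node responsiveness can be quantified, one can repeatedly time a trivial command and report summary statistics (the command choice and sample count below are arbitrary, not the report's methodology):

        import subprocess
        import time

        def time_command(cmd=("ls", "/tmp"), samples=20):
            """Measure wall-clock response time of a trivial command, as a rough proxy
            for interactive (command-line) responsiveness on a login node."""
            times = []
            for _ in range(samples):
                start = time.perf_counter()
                subprocess.run(cmd, stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
                times.append(time.perf_counter() - start)
            times.sort()
            return {"min": times[0], "median": times[len(times) // 2], "max": times[-1]}

        if __name__ == "__main__":
            print(time_command())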

    6. Surprising Quasiparticle Interactions in Graphene

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Surprising Quasiparticle Interactions in Graphene Surprising Quasiparticle Interactions in Graphene Wednesday, 31 October 2007 00:00 Until now, the world's electronics have been dominated by silicon, whose properties, while excellent, significantly limit the size and power consumption of today's computer chips. In order to develop ever smaller and more efficient devices, scientists have turned their attention to carbon, which can be formed into nanostructures like nanotubes, whose

    7. Idealization, uncertainty and heterogeneity : game frameworks defined with formal concept analysis.

      SciTech Connect (OSTI)

      Racovitan, M. T.; Sallach, D. L.; Decision and Information Sciences; Northern Illinois Univ.

      2006-01-01

      The present study begins with Formal Concept Analysis, and undertakes to demonstrate how a succession of game frameworks may, by design, address increasingly complex and interesting social phenomena. We develop a series of multi-agent exchange games, each of which incorporates an additional dimension of complexity. All games are based on coalition patterns in exchanges where diverse cultural markers provide a basis for trust and reciprocity. The first game is characterized by an idealized concept of trust. A second game framework introduces uncertainty regarding the reciprocity of prospective transactions. A third game framework retains idealized trust and uncertainty, and adds additional agent heterogeneity. Cultural markers are not equally salient in conferring or withholding trust, and the result is a richer transactional process.

    8. 2013 JSA Postdoctoral Research Grant Winner to Compute Quarks | Jefferson

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Lab 3 JSA Postdoctoral Research Grant Winner to Compute Quarks Chris Monahan Chris Monahan NEWPORT NEWS, Va., March 27 - Scientists have long puzzled over how the smallest bits of matter add up to the world around us. Now, Chris Monahan will use the power of a video gaming system to attempt a new method of exploring those bits. Monahan is the recipient of the 2013 JSA Postdoctoral Research Grant at the U.S. Department of Energy's Thomas Jefferson National Accelerator Facility, which will

    9. Fermilab | Science at Fermilab | Computing | Grid Computing

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      In the early 2000s, members of Fermilab's Computing Division looked ahead to experiments like those at the Large Hadron Collider, which would collect more data than any computing ...

    10. Mira Computational Readiness Assessment | Argonne Leadership Computing

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Facility INCITE Program 5 Checks & 5 Tips for INCITE Mira Computational Readiness Assessment ALCC Program Director's Discretionary (DD) Program Early Science Program INCITE 2016 Projects ALCC 2015 Projects ESP Projects View All Projects Publications ALCF Tech Reports Industry Collaborations Mira Computational Readiness Assessment Assess your project's computational readiness for Mira A review of the following computational readiness points in relation to scaling, porting, I/O, memory

    11. Certain irregularities in the use of computer facilities at Sandia Laboratory

      SciTech Connect (OSTI)

      Not Available

      1980-10-22

      This report concerns irregularities in the use of computer systems at Sandia Laboratories (Sandia) in Albuquerque, New Mexico. Our interest in this subject was triggered when we learned late last year that the Federal Bureau of Investigation (FBI) was planning to undertake an investigation into possible misuse of the computer systems at Sandia. That investigation, which was carried out with the assistance of our staff, disclosed that an employee of Sandia was apparently using the Sandia computer system to assist in running a bookmaking operation for local gamblers. As a result of that investigation, we decided to conduct a separate review of Sandia's computer systems to determine the extent of computer misuse at Sandia. We found that over 200 employees of Sandia had stored games, personal items, classified material, and otherwise sensitive material on their computer files.

    12. Sandia Energy - Computations

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Computations Home Transportation Energy Predictive Simulation of Engines Reacting Flow Applied Math & Software Computations

    13. 2011 Computation Directorate Annual Report

      SciTech Connect (OSTI)

      Crawford, D L

      2012-04-11

      From its founding in 1952 until today, Lawrence Livermore National Laboratory (LLNL) has made significant strategic investments to develop high performance computing (HPC) and its application to national security and basic science. Now, 60 years later, the Computation Directorate and its myriad resources and capabilities have become a key enabler for LLNL programs and an integral part of the effort to support our nation's nuclear deterrent and, more broadly, national security. In addition, the technological innovation HPC makes possible is seen as vital to the nation's economic vitality. LLNL, along with other national laboratories, is working to make supercomputing capabilities and expertise available to industry to boost the nation's global competitiveness. LLNL is on the brink of an exciting milestone with the 2012 deployment of Sequoia, the National Nuclear Security Administration's (NNSA's) 20-petaFLOP/s resource that will apply uncertainty quantification to weapons science. Sequoia will bring LLNL's total computing power to more than 23 petaFLOP/s-all brought to bear on basic science and national security needs. The computing systems at LLNL provide game-changing capabilities. Sequoia and other next-generation platforms will enable predictive simulation in the coming decade and leverage industry trends, such as massively parallel and multicore processors, to run petascale applications. Efficient petascale computing necessitates refining accuracy in materials property data, improving models for known physical processes, identifying and then modeling for missing physics, quantifying uncertainty, and enhancing the performance of complex models and algorithms in macroscale simulation codes. Nearly 15 years ago, NNSA's Accelerated Strategic Computing Initiative (ASCI), now called the Advanced Simulation and Computing (ASC) Program, was the critical element needed to shift from test-based confidence to science-based confidence. Specifically, ASCI/ASC accelerated the development of simulation capabilities necessary to ensure confidence in the nuclear stockpile-far exceeding what might have been achieved in the absence of a focused initiative. While stockpile stewardship research pushed LLNL scientists to develop new computer codes, better simulation methods, and improved visualization technologies, this work also stimulated the exploration of HPC applications beyond the standard sponsor base. As LLNL advances to a petascale platform and pursues exascale computing (1,000 times faster than Sequoia), ASC will be paramount to achieving predictive simulation and uncertainty quantification. Predictive simulation and quantifying the uncertainty of numerical predictions where little-to-no data exists demands exascale computing and represents an expanding area of scientific research important not only to nuclear weapons, but to nuclear attribution, nuclear reactor design, and understanding global climate issues, among other fields. Aside from these lofty goals and challenges, computing at LLNL is anything but 'business as usual.' International competition in supercomputing is nothing new, but the HPC community is now operating in an expanded, more aggressive climate of global competitiveness. More countries understand how science and technology research and development are inextricably linked to economic prosperity, and they are aggressively pursuing ways to integrate HPC technologies into their native industrial and consumer products. 
In the interest of the nation's economic security and the science and technology that underpins it, LLNL is expanding its portfolio and forging new collaborations. We must ensure that HPC remains an asymmetric engine of innovation for the Laboratory and for the U.S. and, in doing so, protect our research and development dynamism and the prosperity it makes possible. One untapped area of opportunity LLNL is pursuing is to help U.S. industry understand how supercomputing can benefit their business. Industrial investment in HPC applications has historically been limited by the prohibitive cost of entry, the inaccessibility of software to run the powerful systems, and the years it takes to grow the expertise to develop codes and run them in an optimal way. LLNL is helping industry better compete in the global market place by providing access to some of the world's most powerful computing systems, the tools to run them, and the experts who are adept at using them. Our scientists are collaborating side by side with industrial partners to develop solutions to some of industry's toughest problems. The goal of the Livermore Valley Open Campus High Performance Computing Innovation Center is to allow American industry the opportunity to harness the power of supercomputing by leveraging the scientific and computational expertise at LLNL in order to gain a competitive advantage in the global economy.

    14. Computational and experimental techniques for coupled acoustic/structure

      Office of Scientific and Technical Information (OSTI)

      interactions. (Technical Report) | SciTech Connect Computational and experimental techniques for coupled acoustic/structure interactions. Citation Details In-Document Search Title: Computational and experimental techniques for coupled acoustic/structure interactions. × You are accessing a document from the Department of Energy's (DOE) SciTech Connect. This site is a product of DOE's Office of Scientific and Technical Information (OSTI) and is provided as a public service. Visit OSTI to

    15. Security Analysis of Selected AMI Failure Scenarios Using Agent Based Game Theoretic Simulation

      SciTech Connect (OSTI)

      Abercrombie, Robert K; Schlicher, Bob G; Sheldon, Frederick T

      2014-01-01

      Information security analysis can be performed using game theory implemented in dynamic Agent Based Game Theoretic (ABGT) simulations. Such simulations can be verified with the results from game theory analysis and further used to explore larger-scale, real-world scenarios involving multiple attackers, defenders, and information assets. We concentrated our analysis on the Advanced Metering Infrastructure (AMI) functional domain, for which the National Electric Sector Cybersecurity Organization Resource (NESCOR) working group has currently documented 29 failure scenarios. The strategy for the game was developed by analyzing five electric sector representative failure scenarios contained in the AMI functional domain. We characterize these five selected scenarios into three specific threat categories affecting confidentiality, integrity, and availability (CIA). The analysis using our ABGT simulation demonstrates how to model the AMI functional domain using a set of rationalized game theoretic rules decomposed from the failure scenarios in terms of how those scenarios might impact the AMI network with respect to CIA.
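
      The NESCOR scenarios and the paper's rules are not reproduced here; purely as background on the game-theoretic machinery involved (best responses over attacker/defender payoff tables keyed to CIA impacts), a toy, entirely hypothetical example is:

        # Hypothetical attacker/defender payoff table (illustrative numbers only, not the
        # paper's model); rows: attacker targets a CIA property, columns: defender postures.
        attacker_strategies = ["confidentiality", "integrity", "availability"]
        defender_strategies = ["harden", "monitor"]

        attacker_payoff = {
            ("confidentiality", "harden"): 2, ("confidentiality", "monitor"): 3,
            ("integrity", "harden"): 1,       ("integrity", "monitor"): 1,
            ("availability", "harden"): 3,    ("availability", "monitor"): 4,
        }
        defender_payoff = {k: -v for k, v in attacker_payoff.items()}  # zero-sum assumption

        def best_response_attacker(defense):
            return max(attacker_strategies, key=lambda a: attacker_payoff[(a, defense)])

        def best_response_defender(attack):
            return max(defender_strategies, key=lambda d: defender_payoff[(attack, d)])

        # Check every strategy pair for a pure-strategy equilibrium (mutual best responses).
        for a in attacker_strategies:
            for d in defender_strategies:
                if best_response_attacker(d) == a and best_response_defender(a) == d:
                    print("pure-strategy equilibrium:", a, "vs", d)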

    16. Computer hardware fault administration

      DOE Patents [OSTI]

      Archer, Charles J. (Rochester, MN); Megerian, Mark G. (Rochester, MN); Ratterman, Joseph D. (Rochester, MN); Smith, Brian E. (Rochester, MN)

      2010-09-14

      Computer hardware fault administration carried out in a parallel computer, where the parallel computer includes a plurality of compute nodes. The compute nodes are coupled for data communications by at least two independent data communications networks, where each data communications network includes data communications links connected to the compute nodes. Typical embodiments carry out hardware fault administration by identifying a location of a defective link in the first data communications network of the parallel computer and routing communications data around the defective link through the second data communications network of the parallel computer.
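
      A minimal sketch of the fallback idea described in this abstract (two independent networks, with traffic routed through the second when a link in the first is identified as defective); the node names, link sets, and route() helper below are hypothetical, not the patented mechanism:

        # Toy model: two independent networks, each a set of working links between nodes.
        network1_links = {("n0", "n1"), ("n1", "n2")}
        network2_links = {("n0", "n1"), ("n1", "n2")}
        defective = {("n0", "n1")}            # fault identified in network 1

        def route(src, dst):
            """Choose which network carries traffic for the (src, dst) link."""
            link = (src, dst)
            if link in network1_links and link not in defective:
                return "network1"
            if link in network2_links:
                return "network2"             # route around the defective link
            raise RuntimeError("no working path for link %s" % (link,))

        print(route("n0", "n1"))   # -> network2 (fault bypassed)
        print(route("n1", "n2"))   # -> network1 (healthy link)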

    17. Molecular Science Computing | EMSL

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      computational and state-of-the-art experimental tools, providing a cross-disciplinary environment to further research. Additional Information Computing user policies Partners...

    18. Applied & Computational Math

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      & Computational Math - Sandia Energy

    19. advanced simulation and computing

      National Nuclear Security Administration (NNSA)

      Each successive generation of computing system has provided greater computing power and energy efficiency.

      CTS-1 clusters will support NNSA's Life Extension Program and...

    20. NERSC Computer Security

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Security NERSC Computer Security NERSC computer security efforts are aimed at protecting NERSC systems and its users' intellectual property from unauthorized access or...

    1. Computing for Finance

      ScienceCinema (OSTI)

      None

      2011-10-06

      The finance sector is one of the driving forces for the use of distributed or Grid computing for business purposes. The speakers will review the state-of-the-art of high performance computing in the financial sector, and provide insight into how different types of Grid computing - from local clusters to global networks - are being applied to financial applications. They will also describe the use of software and techniques from physics, such as Monte Carlo simulations, in the financial world. There will be four talks of 20 min each. The talk abstracts and speaker bios are listed below. This will be followed by a Q&A panel session with the speakers. From 19:00 onwards there will be a networking cocktail for audience and speakers. This is an EGEE / CERN openlab event organized in collaboration with the regional business network rezonance.ch. A webcast of the event will be made available for subsequent viewing, along with powerpoint material presented by the speakers. Attendance is free and open to all. Registration is mandatory via www.rezonance.ch, including for CERN staff. 1. Overview of High Performance Computing in the Financial Industry - Michael Yoo, Managing Director, Head of the Technical Council, UBS. The presentation will describe the key business challenges driving the need for HPC solutions, describe the means by which those challenges are being addressed within UBS (such as GRID) as well as the limitations of some of these solutions, and assess some of the newer HPC technologies which may also play a role in the Financial Industry in the future. Speaker Bio: Michael originally joined the former Swiss Bank Corporation in 1994 in New York as a developer on a large data warehouse project. In 1996 he left SBC and took a role with Fidelity Investments in Boston. Unable to stay away for long, he returned to SBC in 1997 while working for Perot Systems in Singapore. Finally, in 1998 he formally returned to UBS in Stamford following the merger with SBC and has remained with UBS for the past 9 years. During his tenure at UBS, he has had a number of leadership roles within IT in development, support and architecture. In 2006 Michael relocated to Switzerland to take up his current role as head of the UBS IB Technical Council, responsible for the overall technology strategy and vision of the Investment Bank. One of Michael's key responsibilities is to manage the UBS High Performance Computing Research Lab and he has been involved in a number of initiatives in the HPC space. 2. Grid in the Commercial World - Fred Gedling, Chief Technology Officer EMEA and Senior Vice President Global Services, DataSynapse. Grid computing gets mentions in the press for community programs starting last decade with "Seti@Home". Government, national and supranational initiatives in grid receive some press. One of the IT industry's best-kept secrets is the use of grid computing by commercial organizations with spectacular results. Grid Computing and its evolution into Application Virtualization is discussed, and how this is key to the next-generation data center. Speaker Bio: Fred Gedling holds the joint roles of Chief Technology Officer for EMEA and Senior Vice President of Global Services at DataSynapse, a global provider of application virtualisation software.
Based in London and working closely with organisations seeking to optimise their IT infrastructures, Fred offers unique insights into the technology of virtualisation as well as the methodology of establishing ROI and rapid deployment to the immediate advantage of the business. Fred has more than fifteen years' experience of enterprise middleware and high-performance infrastructures. Prior to DataSynapse he worked in high performance CRM middleware and was the CTO EMEA for New Era of Networks (NEON) during the rapid growth of Enterprise Application Integration. His 25-year career in technology also includes management positions at Goldman Sachs and Stratus Computer. Fred holds a First Class BSc (Hons) degree in Physics with Astrophysics from the University of Leeds and had the privilege of being a summer student at CERN. 3. Opportunities for gLite in finance and related industries - Adam Vile, Head of Grid, HPC and Technical Computing, Excelian Ltd. gLite, the Grid software developed by the EGEE project, has been exceedingly successful as an enabling infrastructure, and has been a massive success in bringing together scientific and technical communities to provide the compute power to address previously incomputable problems. Not so in the finance industry. In its current form gLite would be a business disabler. There are other middleware tools that solve the finance community's compute problems much better. Things are moving on, however. There are moves afoot in the open source community to evolve the technology to address other, more sophisticated needs such as utility and interactive computing. In this talk, I will describe how Excelian is providing Grid consultancy services for the finance community and how, through its relationship to the EGEE project, Excelian is helping to identify and exploit opportunities as the research and business worlds converge. Because of the strong third party presence in the finance industry, such opportunities are few and far between, but they are there, especially as we expand sideways into related verticals such as the smaller hedge funds and energy companies. This talk will give an overview of the barriers to adoption of gLite in the finance industry and highlight some of the opportunities offered in this and related industries as the ideas around Grid mature. Speaker Bio: Dr Adam Vile is a senior consultant and head of the Grid and HPC practice at Excelian, a consultancy that focuses on financial markets professional services. He has spent many years in investment banking, as a developer, project manager and architect in both front and back office. Before joining Excelian he was senior Grid and HPC architect at Barclays Capital. Prior to joining investment banking, Adam spent a number of years lecturing in IT and mathematics at a UK University and maintains links with academia through lectures, research and through validation and steering of postgraduate courses. He is a chartered mathematician and was the conference chair of the Institute of Mathematics and its Applications' first conference in computational finance. 4. From Monte Carlo to Wall Street - Daniel Egloff, Head of Financial Engineering Computing Unit, Zürich Cantonal Bank. High performance computing techniques provide new means to solve computationally hard problems in the financial service industry. First I consider Monte Carlo simulation and illustrate how it can be used to implement a sophisticated credit risk management and economic capital framework.
From an HPC perspective, basic Monte Carlo simulation is embarrassingly parallel and can be implemented efficiently on distributed memory clusters. Additional difficulties arise for adaptive variance reduction schemes, if the information content in a sample is very small, and if the amount of simulated data becomes huge such that incremental processing algorithms are indispensable. We discuss the business value of an advanced credit risk quantification which is particularly compelling these days. While Monte Carlo simulation is a very versatile tool it is not always the preferred solution for the pricing of complex products like multi-asset options, structured products, or credit derivatives. As a second application I show how operator methods can be used to develop a pricing framework. The scalability of operator methods relies heavily on optimized dense matrix-matrix multiplications and requires specialized BLAS level-3 implementations provided by specialized FPGA or GPU boards. Speaker Bio: Daniel Egloff studied mathematics, theoretical physics, and computer science at the University of Zurich and the ETH Zurich. He holds a PhD in Mathematics from the University of Fribourg, Switzerland. After his PhD he started to work for a large Swiss insurance company in the area of asset and liability management. He continued his professional career in the consulting industry. At KPMG and Arthur Andersen he consulted international clients and implemented quantitative risk management solutions for financial institutions and insurance companies. In 2002 he joined Zurich Cantonal Bank. He was assigned to develop and implement credit portfolio risk and economic capital methodologies. He built up a competence center for high performance and cluster computing. Currently, Daniel Egloff is heading the Financial Computing unit in the ZKB Financial Engineering division. He and his team are engineering and operating high performance cluster applications for computationally intensive problems in financial risk management.
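
      As a generic illustration (not drawn from the talk itself) of why basic Monte Carlo is called embarrassingly parallel, independent batches of samples can be generated with no communication between workers and averaged at the end; the toy payoff, seeds, and batch sizes below are arbitrary:

        import random
        from multiprocessing import Pool

        def batch_mean(args):
            """One independent Monte Carlo batch: average of a simple payoff over n draws."""
            seed, n = args
            rng = random.Random(seed)
            return sum(max(rng.gauss(0.0, 1.0), 0.0) for _ in range(n)) / n

        if __name__ == "__main__":
            batches = [(seed, 100_000) for seed in range(8)]   # 8 independent batches
            with Pool() as pool:                               # no inter-batch communication
                means = pool.map(batch_mean, batches)
            print("estimate:", sum(means) / len(means))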

    2. Computing for Finance

      ScienceCinema (OSTI)

      None

      2011-10-06

      The finance sector is one of the driving forces for the use of distributed or Grid computing for business purposes. The speakers will review the state-of-the-art of high performance computing in the financial sector, and provide insight into how different types of Grid computing ? from local clusters to global networks - are being applied to financial applications. They will also describe the use of software and techniques from physics, such as Monte Carlo simulations, in the financial world. There will be four talks of 20min each. The talk abstracts and speaker bios are listed below. This will be followed by a Q&A; panel session with the speakers. From 19:00 onwards there will be a networking cocktail for audience and speakers. This is an EGEE / CERN openlab event organized in collaboration with the regional business network rezonance.ch. A webcast of the event will be made available for subsequent viewing, along with powerpoint material presented by the speakers. Attendance is free and open to all. Registration is mandatory via www.rezonance.ch, including for CERN staff. 1. Overview of High Performance Computing in the Financial Industry Michael Yoo, Managing Director, Head of the Technical Council, UBS Presentation will describe the key business challenges driving the need for HPC solutions, describe the means in which those challenges are being addressed within UBS (such as GRID) as well as the limitations of some of these solutions, and assess some of the newer HPC technologies which may also play a role in the Financial Industry in the future. Speaker Bio: Michael originally joined the former Swiss Bank Corporation in 1994 in New York as a developer on a large data warehouse project. In 1996 he left SBC and took a role with Fidelity Investments in Boston. Unable to stay away for long, he returned to SBC in 1997 while working for Perot Systems in Singapore. Finally, in 1998 he formally returned to UBS in Stamford following the merger with SBC and has remained with UBS for the past 9 years. During his tenure at UBS, he has had a number of leadership roles within IT in development, support and architecture. In 2006 Michael relocated to Switzerland to take up his current role as head of the UBS IB Technical Council, responsible for the overall technology strategy and vision of the Investment Bank. One of Michael's key responsibilities is to manage the UBS High Performance Computing Research Lab and he has been involved in a number of initiatives in the HPC space. 2. Grid in the Commercial WorldFred Gedling, Chief Technology Officer EMEA and Senior Vice President Global Services, DataSynapse Grid computing gets mentions in the press for community programs starting last decade with "Seti@Home". Government, national and supranational initiatives in grid receive some press. One of the IT-industries' best-kept secrets is the use of grid computing by commercial organizations with spectacular results. Grid Computing and its evolution into Application Virtualization is discussed and how this is key to the next generation data center. Speaker Bio: Fred Gedling holds the joint roles of Chief Technology Officer for EMEA and Senior Vice President of Global Services at DataSynapse, a global provider of application virtualisation software. 
Based in London and working closely with organisations seeking to optimise their IT infrastructures, Fred offers unique insights into the technology of virtualisation as well as the methodology of establishing ROI and rapid deployment to the immediate advantage of the business. Fred has more than fifteen years' experience of enterprise middleware and high-performance infrastructures. Prior to DataSynapse he worked in high performance CRM middleware and was the CTO EMEA for New Era of Networks (NEON) during the rapid growth of Enterprise Application Integration. His 25-year career in technology also includes management positions at Goldman Sachs and Stratus Computer. Fred holds a First Class BSc (Hons) degree in Physics with Astrophysics from the University of Leeds and had the privilege of being a summer student at CERN. 3. Opportunities for gLite in Finance and Related Industries Adam Vile, Head of Grid, HPC and Technical Computing, Excelian Ltd. gLite, the Grid software developed by the EGEE project, has been exceedingly successful as an enabling infrastructure, and has been a massive success in bringing together scientific and technical communities to provide the compute power to address previously incomputable problems. Not so in the finance industry. In its current form gLite would be a business disabler. There are other middleware tools that solve the finance community's compute problems much better. Things are moving on, however. There are moves afoot in the open source community to evolve the technology to address other, more sophisticated needs such as utility and interactive computing. In this talk, I will describe how Excelian is providing Grid consultancy services for the finance community and how, through its relationship with the EGEE project, Excelian is helping to identify and exploit opportunities as the research and business worlds converge. Because of the strong third-party presence in the finance industry, such opportunities are few and far between, but they are there, especially as we expand sideways into related verticals such as the smaller hedge funds and energy companies. This talk will give an overview of the barriers to adoption of gLite in the finance industry and highlight some of the opportunities offered in this and related industries as the ideas around Grid mature. Speaker Bio: Dr Adam Vile is a senior consultant and head of the Grid and HPC practice at Excelian, a consultancy that focuses on financial markets professional services. He has spent many years in investment banking, as a developer, project manager and architect in both front and back office. Before joining Excelian he was senior Grid and HPC architect at Barclays Capital. Prior to joining investment banking, Adam spent a number of years lecturing in IT and mathematics at a UK university and maintains links with academia through lectures, research and through validation and steering of postgraduate courses. He is a chartered mathematician and was the conference chair of the Institute of Mathematics and its Applications' first conference in Computational Finance. 4. From Monte Carlo to Wall Street Daniel Egloff, Head of the Financial Engineering Computing Unit, Zürich Cantonal Bank High performance computing techniques provide new means to solve computationally hard problems in the financial services industry. First I consider Monte Carlo simulation and illustrate how it can be used to implement a sophisticated credit risk management and economic capital framework.
From an HPC perspective, basic Monte Carlo simulation is embarrassingly parallel and can be implemented efficiently on distributed memory clusters. Additional difficulties arise for adaptive variance reduction schemes, if the information content in a sample is very small, and if the amount of simulated data becomes huge, such that incremental processing algorithms are indispensable. We discuss the business value of an advanced credit risk quantification, which is particularly compelling these days. While Monte Carlo simulation is a very versatile tool, it is not always the preferred solution for the pricing of complex products like multi-asset options, structured products, or credit derivatives. As a second application I show how operator methods can be used to develop a pricing framework. The scalability of operator methods relies heavily on optimized dense matrix-matrix multiplications and requires specialized BLAS level-3 implementations, such as those provided by FPGA or GPU boards. Speaker Bio: Daniel Egloff studied mathematics, theoretical physics, and computer science at the University of Zurich and ETH Zurich. He holds a PhD in Mathematics from the University of Fribourg, Switzerland. After his PhD he started to work for a large Swiss insurance company in the area of asset and liability management. He continued his professional career in the consulting industry. At KPMG and Arthur Andersen he advised international clients and implemented quantitative risk management solutions for financial institutions and insurance companies. In 2002 he joined Zurich Cantonal Bank, where he was assigned to develop and implement credit portfolio risk and economic capital methodologies. He built up a competence center for high performance and cluster computing. Currently, Daniel Egloff heads the Financial Computing unit in the ZKB Financial Engineering division. He and his team engineer and operate high performance cluster applications for computationally intensive problems in financial risk management.

    3. Computing for Finance

      SciTech Connect (OSTI)

      2010-03-24

      The finance sector is one of the driving forces for the use of distributed or Grid computing for business purposes. The speakers will review the state of the art of high performance computing in the financial sector, and provide insight into how different types of Grid computing – from local clusters to global networks – are being applied to financial applications. They will also describe the use of software and techniques from physics, such as Monte Carlo simulations, in the financial world. There will be four talks of 20 minutes each. The talk abstracts and speaker bios are listed below. This will be followed by a Q&A panel session with the speakers. From 19:00 onwards there will be a networking cocktail for audience and speakers. This is an EGEE / CERN openlab event organized in collaboration with the regional business network rezonance.ch. A webcast of the event will be made available for subsequent viewing, along with PowerPoint material presented by the speakers. Attendance is free and open to all. Registration is mandatory via www.rezonance.ch, including for CERN staff. 1. Overview of High Performance Computing in the Financial Industry Michael Yoo, Managing Director, Head of the Technical Council, UBS The presentation will describe the key business challenges driving the need for HPC solutions, describe the means by which those challenges are being addressed within UBS (such as Grid) as well as the limitations of some of these solutions, and assess some of the newer HPC technologies which may also play a role in the financial industry in the future. Speaker Bio: Michael originally joined the former Swiss Bank Corporation in 1994 in New York as a developer on a large data warehouse project. In 1996 he left SBC and took a role with Fidelity Investments in Boston. Unable to stay away for long, he returned to SBC in 1997 while working for Perot Systems in Singapore. Finally, in 1998 he formally returned to UBS in Stamford following the merger with SBC and has remained with UBS for the past 9 years. During his tenure at UBS, he has had a number of leadership roles within IT in development, support and architecture. In 2006 Michael relocated to Switzerland to take up his current role as head of the UBS IB Technical Council, responsible for the overall technology strategy and vision of the Investment Bank. One of Michael's key responsibilities is to manage the UBS High Performance Computing Research Lab, and he has been involved in a number of initiatives in the HPC space. 2. Grid in the Commercial World Fred Gedling, Chief Technology Officer EMEA and Senior Vice President Global Services, DataSynapse Grid computing gets mentions in the press for community programs, starting last decade with SETI@home. Government, national and supranational initiatives in Grid receive some press. One of the IT industry's best-kept secrets is the use of Grid computing by commercial organizations with spectacular results. The talk discusses Grid computing and its evolution into application virtualization, and how this is key to the next-generation data center. Speaker Bio: Fred Gedling holds the joint roles of Chief Technology Officer for EMEA and Senior Vice President of Global Services at DataSynapse, a global provider of application virtualisation software.
Based in London and working closely with organisations seeking to optimise their IT infrastructures, Fred offers unique insights into the technology of virtualisation as well as the methodology of establishing ROI and rapid deployment to the immediate advantage of the business. Fred has more than fifteen years' experience of enterprise middleware and high-performance infrastructures. Prior to DataSynapse he worked in high performance CRM middleware and was the CTO EMEA for New Era of Networks (NEON) during the rapid growth of Enterprise Application Integration. His 25-year career in technology also includes management positions at Goldman Sachs and Stratus Computer. Fred holds a First Class BSc (Hons) degree in Physics with Astrophysics from the University of Leeds and had the privilege of being a summer student at CERN. 3. Opportunities for gLite in Finance and Related Industries Adam Vile, Head of Grid, HPC and Technical Computing, Excelian Ltd. gLite, the Grid software developed by the EGEE project, has been exceedingly successful as an enabling infrastructure, and has been a massive success in bringing together scientific and technical communities to provide the compute power to address previously incomputable problems. Not so in the finance industry. In its current form gLite would be a business disabler. There are other middleware tools that solve the finance community's compute problems much better. Things are moving on, however. There are moves afoot in the open source community to evolve the technology to address other, more sophisticated needs such as utility and interactive computing. In this talk, I will describe how Excelian is providing Grid consultancy services for the finance community and how, through its relationship with the EGEE project, Excelian is helping to identify and exploit opportunities as the research and business worlds converge. Because of the strong third-party presence in the finance industry, such opportunities are few and far between, but they are there, especially as we expand sideways into related verticals such as the smaller hedge funds and energy companies. This talk will give an overview of the barriers to adoption of gLite in the finance industry and highlight some of the opportunities offered in this and related industries as the ideas around Grid mature. Speaker Bio: Dr Adam Vile is a senior consultant and head of the Grid and HPC practice at Excelian, a consultancy that focuses on financial markets professional services. He has spent many years in investment banking, as a developer, project manager and architect in both front and back office. Before joining Excelian he was senior Grid and HPC architect at Barclays Capital. Prior to joining investment banking, Adam spent a number of years lecturing in IT and mathematics at a UK university and maintains links with academia through lectures, research and through validation and steering of postgraduate courses. He is a chartered mathematician and was the conference chair of the Institute of Mathematics and its Applications' first conference in Computational Finance. 4. From Monte Carlo to Wall Street Daniel Egloff, Head of the Financial Engineering Computing Unit, Zürich Cantonal Bank High performance computing techniques provide new means to solve computationally hard problems in the financial services industry. First I consider Monte Carlo simulation and illustrate how it can be used to implement a sophisticated credit risk management and economic capital framework.
From an HPC perspective, basic Monte Carlo simulation is embarrassingly parallel and can be implemented efficiently on distributed memory clusters. Additional difficulties arise for adaptive variance reduction schemes, if the information content in a sample is very small, and if the amount of simulated data becomes huge, such that incremental processing algorithms are indispensable. We discuss the business value of an advanced credit risk quantification, which is particularly compelling these days. While Monte Carlo simulation is a very versatile tool, it is not always the preferred solution for the pricing of complex products like multi-asset options, structured products, or credit derivatives. As a second application I show how operator methods can be used to develop a pricing framework. The scalability of operator methods relies heavily on optimized dense matrix-matrix multiplications and requires specialized BLAS level-3 implementations, such as those provided by FPGA or GPU boards. Speaker Bio: Daniel Egloff studied mathematics, theoretical physics, and computer science at the University of Zurich and ETH Zurich. He holds a PhD in Mathematics from the University of Fribourg, Switzerland. After his PhD he started to work for a large Swiss insurance company in the area of asset and liability management. He continued his professional career in the consulting industry. At KPMG and Arthur Andersen he advised international clients and implemented quantitative risk management solutions for financial institutions and insurance companies. In 2002 he joined Zurich Cantonal Bank, where he was assigned to develop and implement credit portfolio risk and economic capital methodologies. He built up a competence center for high performance and cluster computing. Currently, Daniel Egloff heads the Financial Computing unit in the ZKB Financial Engineering division. He and his team engineer and operate high performance cluster applications for computationally intensive problems in financial risk management.

    4. MaRIE theory, modeling and computation roadmap executive summary

      Office of Scientific and Technical Information (OSTI)

      (Conference) | SciTech Connect Conference: MaRIE theory, modeling and computation roadmap executive summary Citation Details In-Document Search Title: MaRIE theory, modeling and computation roadmap executive summary The confluence of MaRIE (Matter-Radiation Interactions in Extreme) and extreme (exascale) computing timelines offers a unique opportunity in co-designing the elements of materials discovery, with theory and high performance computing, itself co-designed by constrained

    5. Cosmic Reionization On Computers | Argonne Leadership Computing...

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      its Cosmic Reionization On Computers (CROC) project, using the Adaptive Refinement Tree (ART) code as its main simulation tool. An important objective of this research is to make...

    6. Computing and Computational Sciences Directorate - Information...

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      cost-effective, state-of-the-art computing capabilities for research and development. ... communicates and manages strategy, policy and finance across the portfolio of IT assets. ...

    7. Parallel computing works

      SciTech Connect (OSTI)

      Not Available

      1991-10-23

      An account of the Caltech Concurrent Computation Program (C{sup 3}P), a five-year project that focused on answering the question: Can parallel computers be used to do large-scale scientific computations? As the title indicates, the question is answered in the affirmative, by implementing numerous scientific applications on real parallel computers and doing computations that produced new scientific results. In the process of doing so, C{sup 3}P helped design and build several new computers, designed and implemented basic system software, developed algorithms for frequently used mathematical computations on massively parallel machines, devised performance models and measured the performance of many computers, and created a high performance computing facility based exclusively on parallel computers. While the initial focus of C{sup 3}P was the hypercube architecture developed by C. Seitz, many of the methods developed and lessons learned have been applied successfully on other massively parallel architectures.
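The performance models mentioned in this account can be illustrated with the simplest of them, Amdahl's law, which bounds the speedup of a partially serial program on p processors. The sketch below is a generic illustration of that kind of model, not a formula or data taken from the C{sup 3}P report.

```python
# Amdahl's law: if a fraction f of the work parallelizes perfectly and the
# remaining (1 - f) stays serial, the speedup on p processors is bounded by
# 1 / ((1 - f) + f / p). The 99% parallel fraction below is illustrative only.
def amdahl_speedup(parallel_fraction: float, processors: int) -> float:
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / processors)

for p in (16, 64, 256, 1024):
    print(f"{p:5d} processors -> speedup {amdahl_speedup(0.99, p):7.1f}")
```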

    8. Computers-BSA.ppt

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Computers! Boy Scout Troop 405! What is a computer? Is this a computer? Charles Babbage: Father of the Computer! In the 1830s he designed mechanical calculators to reduce human error. *Input device *Memory to store instructions and results *A processor *Output device! Vacuum Tube! Edison (1883) and Lee de Forest (1906) discovered that "vacuum tubes" could serve as electrical switches and amplifiers. A switch can be ON (1) or OFF (0). Electronic computers use Boolean (George Boole, 1850) logic
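The slides reduce a computer to switches that are ON (1) or OFF (0) combined by Boolean logic; the tiny truth-table example below is ours, not part of the presentation, and simply shows the three basic gates on such 0/1 values.

```python
# The three basic Boolean gates on 0/1 "switch" values, printed as a truth table.
def AND(a, b):
    return a & b

def OR(a, b):
    return a | b

def NOT(a):
    return 1 - a

for a in (0, 1):
    for b in (0, 1):
        print(f"a={a} b={b}  AND={AND(a, b)}  OR={OR(a, b)}  NOT a={NOT(a)}")
```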

    9. Computational Fluid Dynamics

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      TRACC RESEARCH Computational Fluid Dynamics Computational Structural Mechanics Transportation Systems Modeling Computational Fluid Dynamics Overview of CFD: Video Clip with Audio Computational fluid dynamics (CFD) research uses mathematical and computational models of flowing fluids to describe and predict fluid response in problems of interest, such as the flow of air around a moving vehicle or the flow of water and sediment in a river. Coupled with appropriate and prototypical

    10. Theory & Computation > Research > The Energy Materials Center...

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Theory & Computation In This Section Computation & Simulation Theory & Computation Computation & Simulation...

    11. Game-Changing Process Mitigates CO2 Emissions Using Renewable Energy |

      Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

      Department of Energy Game-Changing Process Mitigates CO2 Emissions Using Renewable Energy Game-Changing Process Mitigates CO2 Emissions Using Renewable Energy October 21, 2015 - 7:58am Addthis Game-Changing Process Mitigates CO2 Emissions Using Renewable Energy Gold nanoparticles are at the heart of a new process conceived and developed by researchers at the U.S. Department of Energy's National Energy Technology Laboratory (NETL) that can efficiently convert carbon dioxide (CO2) into usable

    12. Lab Game-Changers in Our Past and Future | Department of Energy

      Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

      Game-Changers in Our Past and Future Lab Game-Changers in Our Past and Future March 20, 2012 - 1:17pm Addthis A researcher at the Joint Bioenergy Institute at Berkeley National Lab chooses bacteria colonies in their efforts to create a game-changing biofuel from sustainable, energy-dense plants, such as switchgrass. The JBEI is one example of the ability for Energy Department labs to form scientific partnerships designed to hurdle an energy barrier with transformative technology. | Photo

    13. Modeling attacker-defender interactions in information networks.

      SciTech Connect (OSTI)

      Collins, Michael Joseph

      2010-09-01

      The simplest conceptual model of cybersecurity implicitly views attackers and defenders as acting in isolation from one another: an attacker seeks to penetrate or disrupt a system that has been protected to a given level, while a defender attempts to thwart particular attacks. Such a model also views all non-malicious parties as having the same goal of preventing all attacks. But in fact, attackers and defenders are interacting parts of the same system, and different defenders have their own individual interests: defenders may be willing to accept some risk of successful attack if the cost of defense is too high. We have used game theory to develop models of how non-cooperative but non-malicious players in a network interact when there is a substantial cost associated with effective defensive measures. Although game theory has been applied in this area before, we have introduced some novel aspects of player behavior in our work, including: (1) A model of how players attempt to avoid the costs of defense and force others to assume these costs; (2) A model of how players interact when the cost of defending one node can be shared by other nodes; and (3) A model of the incentives for a defender to choose less expensive, but less effective, defensive actions.
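To make the flavor of these models concrete, here is a toy two-defender game in which each player can either pay for a defense that also protects the other node or free-ride. It follows models (1) and (2) only in spirit; the payoff numbers and function names are our assumptions, not values from the report.

```python
# Toy two-defender game: defending costs COST_DEFENSE but protects both nodes,
# so each player is tempted to free-ride. We enumerate pure-strategy Nash
# equilibria by brute force. All numbers are illustrative assumptions.
from itertools import product

ACTIONS = ("Defend", "FreeRide")
COST_DEFENSE = 3.0
LOSS_IF_BREACHED = 10.0

def payoff(my_action, other_action):
    defended = my_action == "Defend" or other_action == "Defend"
    cost = COST_DEFENSE if my_action == "Defend" else 0.0
    loss = 0.0 if defended else LOSS_IF_BREACHED
    return -(cost + loss)

def is_nash(a1, a2):
    # Neither player can improve by a unilateral deviation.
    no_dev_1 = all(payoff(a1, a2) >= payoff(alt, a2) for alt in ACTIONS)
    no_dev_2 = all(payoff(a2, a1) >= payoff(alt, a1) for alt in ACTIONS)
    return no_dev_1 and no_dev_2

for a1, a2 in product(ACTIONS, repeat=2):
    tag = "  <- Nash equilibrium" if is_nash(a1, a2) else ""
    print(f"{a1:8s} vs {a2:8s}: payoffs ({payoff(a1, a2):5.1f}, {payoff(a2, a1):5.1f}){tag}")
```

With these numbers the only pure equilibria are the asymmetric ones in which exactly one player pays for the defense, which is the cost-shifting behavior described in model (1).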

    14. Polymorphous computing fabric

      DOE Patents [OSTI]

      Wolinski, Christophe Czeslaw (Los Alamos, NM); Gokhale, Maya B. (Los Alamos, NM); McCabe, Kevin Peter (Los Alamos, NM)

      2011-01-18

      Fabric-based computing systems and methods are disclosed. A fabric-based computing system can include a polymorphous computing fabric that can be customized on a per application basis and a host processor in communication with said polymorphous computing fabric. The polymorphous computing fabric includes a cellular architecture that can be highly parameterized to enable a customized synthesis of fabric instances for a variety of enhanced application performances thereof. A global memory concept can also be included that provides the host processor random access to all variables and instructions associated with the polymorphous computing fabric.

    15. An Arbitrary Precision Computation Package

      Energy Science and Technology Software Center (OSTI)

      2003-06-14

      This package permits a scientist to perform computations using an arbitrarily high level of numeric precision (the equivalent of hundreds or even thousands of digits), by making only minor changes to conventional C++ or Fortran-90 source code. This software takes advantage of certain properties of IEEE floating-point arithmetic, together with advanced numeric algorithms, custom data types and operator overloading. Also included in this package is the "Experimental Mathematician's Toolkit", which incorporates many of these facilities into an easy-to-use interactive program.
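The package itself is not reproduced here, but the idea of raising precision through custom data types and operator overloading can be seen in miniature with Python's standard-library decimal module; this is a generic illustration of the concept, not the C++/Fortran-90 package described above.

```python
# Arbitrary-precision arithmetic via a numeric type with overloaded operators:
# Python's decimal module plays the role that custom data types and operator
# overloading play in the package described above (illustration only).
from decimal import Decimal, getcontext

getcontext().prec = 50                 # carry 50 significant digits
one_seventh = Decimal(1) / Decimal(7)
root_two = Decimal(2).sqrt()

print(one_seventh)                     # 0.142857142857... to 50 digits
print(root_two * root_two - 2)         # residual far smaller than double precision
```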

    16. Surprising Quasiparticle Interactions in Graphene

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Surprising Quasiparticle Interactions in Graphene Until now, the world's electronics have been dominated by silicon, whose properties, while excellent, significantly limit the size and power consumption of today's computer chips. In order to develop ever smaller and more efficient devices, scientists have turned their attention to carbon, which can be formed into nanostructures like nanotubes, whose properties can be tuned from metallic to semiconducting. However, using carbon nanotubes

    1. Fermilab | Science at Fermilab | Computing | High-performance Computing

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Lattice QCD Farm at the Grid Computing Center at Fermilab. Lattice QCD Farm at the Grid Computing Center at Fermilab. Computing High-performance Computing A workstation computer can perform billions of multiplication and addition operations each second. High-performance parallel computing becomes necessary when computations become too large or too long to complete on a single such machine. In parallel computing, computations are divided up so that many computers can work on the same problem at
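The idea of dividing a computation so that many machines work on the same problem can be shown with the simplest possible example, a long sum split into independent partial sums. This is a generic sketch of the pattern, not anything taken from the lattice QCD codes themselves.

```python
# Simplest parallel decomposition: split a long sum into independent chunks,
# compute the partial sums in separate processes, then combine the results.
from multiprocessing import Pool

def partial_sum(bounds):
    lo, hi = bounds
    return sum(i * i for i in range(lo, hi))

if __name__ == "__main__":
    n, workers = 2_000_000, 4
    step = n // workers
    chunks = [(k * step, n if k == workers - 1 else (k + 1) * step) for k in range(workers)]
    with Pool(workers) as pool:
        parallel_total = sum(pool.map(partial_sum, chunks))
    print(parallel_total == sum(i * i for i in range(n)))    # matches the serial answer
```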

    2. Computers in Commercial Buildings

      U.S. Energy Information Administration (EIA) Indexed Site

      Government-owned buildings of all types had, on average, more than one computer per employee (1,104 computers per thousand employees). They also had a fairly high ratio of...

    3. Computers for Learning

      Broader source: Energy.gov [DOE]

      Through Executive Order 12999, the Computers for Learning Program was established to provide Federal agencies a quick and easy system for donating excess and surplus computer equipment to schools...

    4. Cognitive Computing for Security.

      SciTech Connect (OSTI)

      Debenedictis, Erik; Rothganger, Fredrick; Aimone, James Bradley; Marinella, Matthew; Evans, Brian Robert; Warrender, Christina E.; Mickel, Patrick

      2015-12-01

      Final report for the Cognitive Computing for Security LDRD 165613. It reports on the development of a hybrid general-purpose/neuromorphic computer architecture, with an emphasis on potential implementation with memristors.

    5. Getting Computer Accounts

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Computer Accounts When you first arrive at the lab, you will be presented with lots of forms that must be read and signed in order to get an ID and computer access. You must ensure...

    6. Advanced Scientific Computing Research

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Advanced Scientific Computing Research Advanced Scientific Computing Research Discovering, developing, and deploying computational and networking capabilities to analyze, model, simulate, and predict complex phenomena important to the Department of Energy. Get Expertise Pieter Swart (505) 665 9437 Email Pat McCormick (505) 665-0201 Email Dave Higdon (505) 667-2091 Email Fulfilling the potential of emerging computing systems and architectures beyond today's tools and techniques to deliver

    7. Computational Structural Mechanics

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      TRACC RESEARCH Computational Fluid Dynamics Computational Structural Mechanics Transportation Systems Modeling Computational Structural Mechanics Overview of CSM Computational structural mechanics is a well-established methodology for the design and analysis of many components and structures found in the transportation field. Modern finite-element models (FEMs) play a major role in these evaluations, and sophisticated software, such as the commercially available LS-DYNA® code, is

    8. Computing and Computational Sciences Directorate - Information Technology

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Computational Sciences and Engineering The Computational Sciences and Engineering Division (CSED) is ORNL's premier source of basic and applied research in the field of data sciences and knowledge discovery. CSED's science agenda is focused on research and development related to knowledge discovery enabled by the explosive growth in the availability, size, and variability of dynamic and disparate data sources. This science agenda encompasses data sciences as well as advanced modeling and

    9. Title 16 Alaska Statutes Chapter 5 Fish and Game | Open Energy...

      Open Energy Info (EERE)

      Title 16 Alaska Statutes Chapter 5 Fish and Game. OpenEI Reference Library entry. Legal Document - Statute. Statute:...

    10. BNL ATLAS Grid Computing

      ScienceCinema (OSTI)

      Michael Ernst

      2010-01-08

      As the sole Tier-1 computing facility for ATLAS in the United States and the largest ATLAS computing center worldwide Brookhaven provides a large portion of the overall computing resources for U.S. collaborators and serves as the central hub for storing,

    11. Computing environment logbook

      DOE Patents [OSTI]

      Osbourn, Gordon C; Bouchard, Ann M

      2012-09-18

      A computing environment logbook logs events occurring within a computing environment. The events are displayed as a history of past events within the logbook of the computing environment. The logbook provides search functionality to search through the history of past events to find one or more selected past events, and further, enables an undo of the one or more selected past events.
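A minimal sketch of the idea, with event logging, search, and undo in a few lines of Python, is shown below; the class and method names are ours, and this is an illustration of the concept rather than the patented implementation.

```python
# Minimal logbook: records events, searches the history, and undoes a selected
# past event via a stored callback. Illustrative sketch only, not the patent's
# implementation.
class Logbook:
    def __init__(self):
        self.events = []                          # history of (description, undo_fn)

    def log(self, description, undo_fn=None):
        self.events.append((description, undo_fn))

    def search(self, text):
        return [d for d, _ in self.events if text.lower() in d.lower()]

    def undo(self, text):
        # Undo the most recent event whose description contains `text`.
        for i in range(len(self.events) - 1, -1, -1):
            description, undo_fn = self.events[i]
            if text.lower() in description.lower():
                if undo_fn is not None:
                    undo_fn()
                del self.events[i]
                return description
        return None

state = {"x": 1}
book = Logbook()
book.log("set x to 2 (was 1)", undo_fn=lambda: state.update(x=1))
state["x"] = 2
print(book.search("x"))                           # ['set x to 2 (was 1)']
book.undo("set x")
print(state)                                      # back to {'x': 1}
```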

    12. Mathematical and Computational Epidemiology

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Mathematical and Computational Epidemiology (MCEpi), Los Alamos National Laboratory. Research areas: agent-based modeling; mixing patterns and social networks; mathematical epidemiology; social internet research; uncertainty quantification. Quantifying model uncertainty in agent-based simulations for

    13. Stories of Discovery & Innovation: Beating Nature at her Own Game? | U.S.

      Office of Science (SC) Website

      DOE Office of Science (SC), Energy Frontier Research Centers (EFRCs). 08.24.11 Stories of Discovery & Innovation: Beating Nature at her Own Game? New catalyst speeds conversion of electricity to hydrogen fuel. This work, featured in the Office of Science's Stories of

    14. The emerging multi-polar world and China's grand game (Journal Article) |

      Office of Scientific and Technical Information (OSTI)

      SciTech Connect Journal Article: The emerging multi-polar world and China's grand game Citation Details In-Document Search Title: The emerging multi-polar world and China's grand game This talk outlines a scenario describing an emerging multipolar world that is aligned with geographical regions. The stability and security of this multipolar world is examined with respect to demographics, trade (economics), resource constraints, and development. In particular I focus on Asia which has two

    15. 3 Ways Our Manufacturing Institutes Are Changing the Clean Energy Game |

      Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

      Department of Energy Our Manufacturing Institutes Are Changing the Clean Energy Game 3 Ways Our Manufacturing Institutes Are Changing the Clean Energy Game September 24, 2015 - 3:30pm Addthis National Network for Manufacturing Innovation institute Power America focuses on advanced power electronics based on wide bandgap semiconductors. Learn how wide bandgap semiconductors could impact clean energy technology and our daily lives. | Video by Sarah Gerrity and Matty Greene, Energy Department.

    16. Oregon: DOE Advances Game-Changing EGS Geothermal Technology at the

      Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

      Newberry Volcano | Department of Energy DOE Advances Game-Changing EGS Geothermal Technology at the Newberry Volcano Oregon: DOE Advances Game-Changing EGS Geothermal Technology at the Newberry Volcano April 9, 2013 - 12:00am Addthis The AltaRock Enhanced Geothermal Systems (EGS) demonstration project, at Newberry Volcano near Bend, Oregon, represents a key step in geothermal energy development, demonstrating that an engineered geothermal reservoir can be developed at a greenfield site.

    17. Lab-Corps Pilot Accelerates Private-Sector Adoption of Game-Changing

      Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

      Technologies | Department of Energy Pilot Accelerates Private-Sector Adoption of Game-Changing Technologies Lab-Corps Pilot Accelerates Private-Sector Adoption of Game-Changing Technologies November 20, 2015 - 4:29pm Addthis Energy Department investments in the Lab-Corps initiative are teaming innovative scientists with entrepreneurs to bring late-breaking technologies to market.

    18. EERE Success Story-Oregon: DOE Advances Game-Changing EGS Geothermal

      Office of Environmental Management (EM)

      Technology at the Newberry Volcano | Department of Energy Oregon: DOE Advances Game-Changing EGS Geothermal Technology at the Newberry Volcano EERE Success Story-Oregon: DOE Advances Game-Changing EGS Geothermal Technology at the Newberry Volcano April 9, 2013 - 12:00am Addthis The AltaRock Enhanced Geothermal Systems (EGS) demonstration project, at Newberry Volcano near Bend, Oregon, represents a key step in geothermal energy development, demonstrating that an engineered geothermal

    19. UFO (UnFold Operator) computer program abstract

      SciTech Connect (OSTI)

      Kissel, L.; Biggs, F.

      1982-11-01

      UFO (UnFold Operator) is an interactive user-oriented computer program designed to solve a wide range of problems commonly encountered in physical measurements. This document provides a summary of the capabilities of version 3A of UFO.

    20. National labs offer computing time to Japanese physicists | Jefferson...

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Japanese physicists in their quest to understand the interactions that lie at the heart of matter. From now until the end of 2011, while computing facilities in eastern Japan...

    1. Scalable optical quantum computer

      SciTech Connect (OSTI)

      Manykin, E A; Mel'nichenko, E V [Institute for Superconductivity and Solid-State Physics, Russian Research Centre 'Kurchatov Institute', Moscow (Russian Federation)

      2014-12-31

      A way of designing a scalable optical quantum computer based on the photon echo effect is proposed. Individual rare earth ions Pr{sup 3+}, regularly located in the lattice of the orthosilicate (Y{sub 2}SiO{sub 5}) crystal, are suggested to be used as optical qubits. Operations with qubits are performed using coherent and incoherent laser pulses. The operation protocol includes both the method of measurement-based quantum computations and the technique of optical computations. Modern hybrid photon echo protocols, which provide a sufficient quantum efficiency when reading recorded states, are considered as most promising for quantum computations and communications. (quantum computer)

    2. COMPUTATIONAL SCIENCE CENTER

      SciTech Connect (OSTI)

      DAVENPORT, J.

      2005-11-01

      The Brookhaven Computational Science Center brings together researchers in biology, chemistry, physics, and medicine with applied mathematicians and computer scientists to exploit the remarkable opportunities for scientific discovery which have been enabled by modern computers. These opportunities are especially great in computational biology and nanoscience, but extend throughout science and technology and include, for example, nuclear and high energy physics, astrophysics, materials and chemical science, sustainable energy, environment, and homeland security. To achieve our goals we have established a close alliance with applied mathematicians and computer scientists at Stony Brook and Columbia Universities.

    3. Sandia Energy - High Performance Computing

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      High Performance Computing Home Energy Research Advanced Scientific Computing Research (ASCR) High Performance Computing ...

    4. COMPUTATIONAL SCIENCE CENTER

      SciTech Connect (OSTI)

      DAVENPORT, J.

      2006-11-01

      Computational Science is an integral component of Brookhaven's multi-science mission, and is a reflection of the increased role of computation across all of science. Brookhaven currently has major efforts in data storage and analysis for the Relativistic Heavy Ion Collider (RHIC) and the ATLAS detector at CERN, and in quantum chromodynamics. The Laboratory is host to the QCDOC machines (quantum chromodynamics on a chip), 10 teraflop/s computers which boast 12,288 processors each. There are two here, one for the Riken/BNL Research Center and the other supported by DOE for the US Lattice Gauge Community and other scientific users. A 100 teraflop/s supercomputer will be installed at Brookhaven in the coming year, managed jointly by Brookhaven and Stony Brook, and funded by a grant from New York State. This machine will be used for computational science across Brookhaven's entire research program, and also by researchers at Stony Brook and across New York State. With Stony Brook, Brookhaven has formed the New York Center for Computational Science (NYCCS) as a focal point for interdisciplinary computational science, which is closely linked to Brookhaven's Computational Science Center (CSC). The CSC has established a strong program in computational science, with an emphasis on nanoscale electronic structure and molecular dynamics, accelerator design, computational fluid dynamics, medical imaging, parallel computing and numerical algorithms. We have been an active participant in DOE's SciDAC program (Scientific Discovery through Advanced Computing). We are also planning a major expansion in computational biology in keeping with Laboratory initiatives. Additional laboratory initiatives with a dependence on a high level of computation include the development of hydrodynamics models for the interpretation of RHIC data, computational models for the atmospheric transport of aerosols, and models for combustion and for energy utilization. The CSC was formed to bring together researchers in these areas and to provide a focal point for the development of computational expertise at the Laboratory. These efforts will connect to and support the Department of Energy's long-range plans to provide leadership-class computing to researchers throughout the Nation. Recruitment for six new positions at Stony Brook to strengthen its computational science programs is underway. We expect some of these to be held jointly with BNL.

    5. Nuclear Forces and High-Performance Computing: The Perfect Match

      Office of Scientific and Technical Information (OSTI)

      (Conference) | SciTech Connect Conference: Nuclear Forces and High-Performance Computing: The Perfect Match Citation Details In-Document Search Title: Nuclear Forces and High-Performance Computing: The Perfect Match High-performance computing is now enabling the calculation of certain nuclear interaction parameters directly from Quantum Chromodynamics, the quantum field theory that governs the behavior of quarks and gluons and is ultimately responsible for the nuclear strong force. We

    6. Edison Electrifies Scientific Computing

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Edison Electrifies Scientific Computing Edison Electrifies Scientific Computing NERSC Flips Switch on New Flagship Supercomputer January 31, 2014 Contact: Margie Wylie, mwylie@lbl.gov, +1 510 486 7421 The National Energy Research Scientific Computing (NERSC) Center recently accepted "Edison," a new flagship supercomputer designed for scientific productivity. Named in honor of American inventor Thomas Alva Edison, the Cray XC30 will be dedicated in a ceremony held at the Department of

    7. Energy Aware Computing

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Energy Aware Computing Dynamic Frequency Scaling One means to lower the energy required to compute is to reduce the power usage on a node. One way to accomplish this is by lowering the frequency at which the CPU operates. However, reducing the clock speed increases the time to solution, creating a potential tradeoff. NERSC continues to examine how such methods impact its operations and its
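The tradeoff can be made concrete with a deliberately simplified model in which dynamic power scales roughly with frequency times voltage squared and only the CPU-bound part of the runtime stretches as the clock slows. All constants and the 70% CPU-bound fraction below are assumptions for illustration, not NERSC measurements.

```python
# Simplified frequency-scaling model: lower clock -> lower power but longer
# runtime, so total energy and time to solution pull in opposite directions.
# Every constant here is an illustrative assumption.
def energy_and_time(freq_ghz, volts, cpu_bound_fraction=0.7,
                    base_time_s=100.0, power_coeff=10.0):
    # Runtime normalized to a 2.3 GHz baseline; only the CPU-bound part slows down.
    time_s = base_time_s * (cpu_bound_fraction * (2.3 / freq_ghz)
                            + (1.0 - cpu_bound_fraction))
    power_w = power_coeff * freq_ghz * volts ** 2      # dynamic power ~ f * V^2
    return power_w * time_s, time_s                    # (joules, seconds)

for freq, volts in [(2.3, 1.00), (1.9, 0.90), (1.5, 0.80)]:
    energy, time_s = energy_and_time(freq, volts)
    print(f"{freq} GHz: {time_s:6.1f} s, {energy:7.0f} J")
```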

    8. Using Light to Control How X Rays Interact with Matter

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Using Light to Control How X Rays Interact with Matter Using Light to Control How X Rays Interact with Matter Wednesday, 27 January 2010 00:00 Schemes that use one light pulse to manipulate interactions of another with matter are well developed in the visible-light regime where an optical control pulse influences how an optical probe pulse interacts with a medium. This approach has opened new research directions in fields like quantum computing and nonlinear optics, while also spawning

    9. Personal Computer Inventory System

      Energy Science and Technology Software Center (OSTI)

      1993-10-04

      PCIS is a database software system that is used to maintain a personal computer hardware and software inventory, track transfers of hardware and software, and provide reports.

    10. Applied Computer Science

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Results from a climate simulation computed using the Model for Prediction Across Scales (MPAS) code. This visualization shows the temperature of ocean currents using a green and ...

    11. Announcement of Computer Software

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      All Other Editions Are Obsolete UNITED STATES DEPARTMENT OF ENERGY ANNOUNCEMENT OF COMPUTER SOFTWARE OMB Control Number 1910-1400 (OMB Burden Disclosure Statement is on last...

    12. 60 Years of Computing | Department of Energy

      Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

      60 Years of Computing

    13. Information Science, Computing, Applied Math

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Capabilities Information Science, Computing, Applied Math National security ...

    14. Computer Processor Allocator

      Energy Science and Technology Software Center (OSTI)

      2004-03-01

      The Compute Processor Allocator (CPA) provides an efficient and reliable mechanism for managing and allotting processors in a massively parallel (MP) computer. It maintains information in a database on the health, configuration and allocation of each processor. This persistent information is factored into each allocation decision. The CPA runs in a distributed fashion to avoid a single point of failure.
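A stripped-down sketch of such an allocator is shown below: it tracks health and allocation state per processor and refuses requests it cannot satisfy. The class and field names are ours, and unlike the real CPA this toy keeps its state in memory rather than in a persistent, distributed database.

```python
# Toy processor allocator: tracks per-processor health and job assignment and
# hands out only healthy, idle processors. Illustration only, not the CPA.
class ProcessorAllocator:
    def __init__(self, n_processors):
        self.state = {p: {"healthy": True, "job": None} for p in range(n_processors)}

    def mark_unhealthy(self, proc):
        self.state[proc]["healthy"] = False

    def allocate(self, job_id, count):
        free = [p for p, s in self.state.items() if s["healthy"] and s["job"] is None]
        if len(free) < count:
            return None                      # not enough healthy, idle processors
        chosen = free[:count]
        for p in chosen:
            self.state[p]["job"] = job_id
        return chosen

    def release(self, job_id):
        for s in self.state.values():
            if s["job"] == job_id:
                s["job"] = None

alloc = ProcessorAllocator(8)
alloc.mark_unhealthy(3)
print(alloc.allocate("job-42", 4))           # e.g. [0, 1, 2, 4], skipping node 3
alloc.release("job-42")
```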

    15. SC e-journals, Computer Science

      Office of Scientific and Technical Information (OSTI)

      & Mathematical Organization Theory Computational Complexity Computational Economics Computational Management ... Technology EURASIP Journal on Information Security ...

    16. L3 Interactive Data Language

      Energy Science and Technology Software Center (OSTI)

      2006-09-05

      The L3 system is a computational steering environment for image processing and scientific computing. It consists of an interactive graphical language and interface. Its purpose is to help advanced users in controlling their computational software and assist in the management of data accumulated during numerical experiments. L3 provides a combination of features not found in other environments; these are: - textual and graphical construction of programs - persistence of programs and associated data - direct mapping between the scripts, the parameters, and the produced data - implicit hierarchical data organization - full programmability, including conditionals and functions - incremental execution of programs. The software includes the l3 language and the graphical environment. The language is a single-assignment functional language; the implementation consists of lexer, parser, interpreter, storage handler, and editing support. The graphical environment is an event-driven nested list viewer/editor providing graphical elements corresponding to the language. These elements are both the representation of a user's program and active interfaces to the values computed by that program.
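The single-assignment property mentioned above can be illustrated with a few lines of Python; this toy environment simply refuses to rebind a name, and is our sketch of the concept rather than anything from the l3 implementation.

```python
# Toy single-assignment environment: a name can be bound exactly once, which is
# the language property mentioned above (concept sketch only, not l3 itself).
class SingleAssignmentEnv:
    def __init__(self):
        self._bindings = {}

    def bind(self, name, value):
        if name in self._bindings:
            raise ValueError(f"'{name}' is already bound; rebinding is not allowed")
        self._bindings[name] = value
        return value

    def __getitem__(self, name):
        return self._bindings[name]

env = SingleAssignmentEnv()
env.bind("a", 3)
env.bind("b", env["a"] * 2)
print(env["b"])              # 6
# env.bind("a", 5)           # would raise ValueError: single assignment only
```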

    17. Decision support models for solid waste management: Review and game-theoretic approaches

      SciTech Connect (OSTI)

      Karmperis, Athanasios C.; Aravossis, Konstantinos; Tatsiopoulos, Ilias P.; Sotirchos, Anastasios

      2013-05-15

      Highlights: • The mainly used decision support frameworks for solid waste management are reviewed. • The LCA, CBA and MCDM models are presented and their strengths, weaknesses, similarities and possible combinations are analyzed. • The game-theoretic approach in a solid waste management context is presented. • The waste management bargaining game is introduced as a specific decision support framework. • Cooperative and non-cooperative game-theoretic approaches to decision support for solid waste management are discussed. - Abstract: This paper surveys decision support models that are commonly used in the solid waste management area. Most models are mainly developed within three decision support frameworks, which are life-cycle assessment, cost-benefit analysis and multi-criteria decision-making. These frameworks are reviewed and their strengths and weaknesses as well as their critical issues are analyzed, while their possible combinations and extensions are also discussed. Furthermore, the paper presents how cooperative and non-cooperative game-theoretic approaches can be used for the purpose of modeling and analyzing decision-making in situations with multiple stakeholders. Specifically, since a waste management model is sustainable when considering not only environmental and economic but also social aspects, the waste management bargaining game is introduced as a specific decision support framework in which future models can be developed.
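As a pointer to what a bargaining-game formulation looks like, the sketch below computes the classic two-player Nash bargaining split of a joint saving by maximizing the product of gains over the disagreement payoffs. The numbers and the reduction to two stakeholders are our assumptions, not the authors' waste management model.

```python
# Two-player Nash bargaining sketch: split a joint saving S so that the product
# of each player's gain over their disagreement payoff is maximized. All
# numbers are illustrative assumptions, not taken from the paper.
S = 100.0                      # joint cost saving to be divided
d1, d2 = 10.0, 30.0            # disagreement payoffs if bargaining fails

best_share, best_product = 0.0, -1.0
steps = 1000
for i in range(steps + 1):
    x = S * i / steps                          # player 1's share of the saving
    gain1, gain2 = x, S - x                    # gains over the disagreement point
    product = gain1 * gain2
    if product > best_product:
        best_share, best_product = x, product

print(f"player 1 payoff {d1 + best_share:.1f}, player 2 payoff {d2 + S - best_share:.1f}")
# For transferable utility the maximum lies at an equal split of the surplus.
```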

    18. DockingShop: A Tool for Interactive Molecular Docking Ting-Cheng...

      Office of Scientific and Technical Information (OSTI)

      Genetics; I.3.6 Computer Graphics: Methodology and Techniques-Interaction Techniques; ... Institute for Quantitative Biomedical Research, Berkeley, California, tlu@lbl.gov, ...

    19. Computers as tools

      SciTech Connect (OSTI)

      Eriksson, I.V.

      1994-12-31

      The following message was recently posted on a bulletin board and clearly shows the relevance of the conference theme: "The computer and digital networks seem poised to change whole regions of human activity -- how we record knowledge, communicate, learn, work, understand ourselves and the world. What's the best framework for understanding this digitalization, or virtualization, of seemingly everything? ... Clearly, symbolic tools like the alphabet, book, and mechanical clock have changed some of our most fundamental notions -- self, identity, mind, nature, time, space. Can we say what the computer, a purely symbolic "machine," is doing to our thinking in these areas? Or is it too early to say, given how much more powerful and less expensive the technology seems destined to become in the next few decades?" (Verity, 1994) Computers certainly affect our lives and way of thinking, but what have computers to do with ethics? A narrow approach would be that on the one hand people can and do abuse computer systems and on the other hand people can be abused by them. Well-known examples of the former are computer crimes such as the theft of money, services and information. The latter can be exemplified by violation of privacy, health hazards and computer monitoring. Broadening the concept from computers to information systems (ISs) and information technology (IT) gives a wider perspective. Computers are just the hardware part of information systems, which also include software, people and data. Information technology is the concept preferred today. It extends to communication, which is an essential part of information processing. Now let us repeat the question: What has IT to do with ethics? Verity mentioned changes in "how we record knowledge, communicate, learn, work, understand ourselves and the world".

    20. Applications of Parallel Computers

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Computers Applications of Parallel Computers UCB CS267 Spring 2015 Tuesday & Thursday, 9:30-11:00 Pacific Time Applications of Parallel Computers, CS267, is a graduate-level course offered at the University of California, Berkeley. The course is being taught by UC Berkeley professor and LBNL Faculty Scientist Jim Demmel. CS267 is broadcast live over the internet and all NERSC users are invited to monitor the broadcast course, but course credit is available only to students registered for the

    1. Cloud computing security.

      SciTech Connect (OSTI)

      Shin, Dongwan; Claycomb, William R.; Urias, Vincent E.

      2010-10-01

      Cloud computing is a paradigm rapidly being embraced by government and industry as a solution for cost-savings, scalability, and collaboration. While a multitude of applications and services are available commercially for cloud-based solutions, research in this area has yet to fully embrace the full spectrum of potential challenges facing cloud computing. This tutorial aims to provide researchers with a fundamental understanding of cloud computing, with the goals of identifying a broad range of potential research topics, and inspiring a new surge in research to address current issues. We will also discuss real implementations of research-oriented cloud computing systems for both academia and government, including configuration options, hardware issues, challenges, and solutions.

    2. Theory, Modeling and Computation

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      modeling and simulation will be enhanced not only by the wealth of data available from MaRIE but by the increased computational capacity made possible by the advent of extreme...

    3. Argonne Leadership Computing Facility

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Argonne Leadership Computing Facility Annual Report 2012. Contents include the Director's Message, About ALCF, and Introducing Mira.

    4. Materials Frontiers to Empower Quantum Computing

      SciTech Connect (OSTI)

      Taylor, Antoinette Jane; Sarrao, John Louis; Richardson, Christopher

      2015-06-11

      This is an exciting time at the nexus of quantum computing and materials research. The materials frontiers described in this report represent a significant advance in electronic materials and our understanding of the interactions between the local material and a manufactured quantum state. Simultaneously, directed efforts to solve materials issues related to quantum computing provide an opportunity to control and probe the fundamental arrangement of matter that will impact all electronic materials. An opportunity exists to extend our understanding of materials functionality from electronic-grade to quantum-grade by achieving a predictive understanding of noise and decoherence in qubits and their origins in materials defects and environmental coupling. Realizing this vision systematically and predictively will be transformative for quantum computing and will represent a qualitative step forward in materials prediction and control.

    5. Ohio State Develops Game-Changing CO2 Capture Membranes in DOE-Funded

      Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

      Project | Department of Energy Ohio State Develops Game-Changing CO2 Capture Membranes in DOE-Funded Project Ohio State Develops Game-Changing CO2 Capture Membranes in DOE-Funded Project November 15, 2012 - 12:00pm Addthis Washington, DC - In a project funded by the U.S. Department of Energy's Office of Fossil Energy (FE), researchers at The Ohio State University have developed a groundbreaking new hybrid membrane that combines the separation performance of inorganic membranes with the

    6. Altran and GE Announce Intention to Form an Alliance to Drive Game-Changing

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Outcomes Across Industry | GE Global Research Altran and GE Announce Intention to Form an Alliance to Drive Game-Changing Outcomes Across Industry Both companies to

    7. Beating Nature at her Own Game? | U.S. DOE Office of Science (SC)

      Office of Science (SC) Website

      Beating Nature at her Own Game? 08.24.11 Beating Nature at her Own Game? New catalyst speeds conversion of electricity to hydrogen fuel.

    8. Energy Department Support Brings Game-Changing Advancements in Solar Energy

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

| Department of Energy. Energy Department Support Brings Game-Changing Advancements in Solar Energy. November 29, 2012 - 10:37am. Record-Breaking Solar: This concentrating photovoltaic (CPV) cell -- which uses a focused lens to magnify light to 418 times the intensity of the sun -- earned an R&D100 Award and set a new world record of 43.5 percent for solar cell conversion efficiency. The technology

    9. A Look Inside 1366 and Sun Catalytix, Two "Game-Changing" Innovation

      Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

Projects | Department of Energy. A Look Inside 1366 and Sun Catalytix, Two "Game-Changing" Innovation Projects. February 3, 2011 - 1:53pm. Andy Oare, Former New Media Strategist, Office of Public Affairs. What will the project do? These six innovative technology recipients received a combined $23.6 million in funding from the Recovery Act. In a little over a year, they have generated

    10. Applied Computer Science

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

ADTSC » CCS » CCS-7 Applied Computer Science. Innovative co-design of applications, algorithms, and architectures in order to enable scientific simulations at extreme scale. Leadership: Group Leader Linn Collins; Deputy Group Leader (Acting) Bryan Lally. Climate modeling visualization: Results from a climate simulation computed using the Model for Prediction Across Scales (MPAS) code. This visualization shows the temperature of ocean currents using a green and blue color scale. These

    11. Computational Earth Science

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

Computational Earth Science. We develop and apply a range of high-performance computational methods and software tools to Earth science projects in support of environmental health, cleaner energy, and national security. Contact Us: Group Leader Carl Gable; Deputy Group Leader Gilles Bussod. Hari Viswanathan inspects a microfluidic cell used to study the extraction of hydrocarbon fuels from a complex fracture network. EES-16's Subsurface Flow

    12. Computational Physics and Methods

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

Computational Physics and Methods. Performing innovative simulations of physics phenomena on tomorrow's scientific computing platforms. Growth and emissivity of young galaxy hosting a supermassive black hole as calculated in cosmological code ENZO and post-processed with radiative transfer code AURORA. Rayleigh-Taylor turbulence imaging: the largest turbulence simulations to date. Advanced multi-scale modeling. Turbulence datasets. Density iso-surfaces

    13. New TRACC Cluster Computer

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

TRACC Cluster Computer With the addition of a new cluster called Zephyr that was made operational in September of this year (2012), TRACC now offers two clusters to choose from: Zephyr and our original cluster, which has now been named Phoenix. Zephyr was acquired from Atipa Technologies, and it is a 92-node system, with each node having two 16-core, 2.3 GHz, 32 GB AMD processors. See also Computing Resources.

    14. Advanced Simulation and Computing

      National Nuclear Security Administration (NNSA)

NA-ASC-117R-09-Vol.1-Rev.0 Advanced Simulation and Computing PROGRAM PLAN FY09, October 2008. ASC Focal Point: Robert Meisner, Director, DOE/NNSA NA-121.2, 202-586-0908. Program Plan Focal Point for NA-121.2: Njema Frazier, DOE/NNSA NA-121.2, 202-586-5789. A Publication of the Office of Advanced Simulation & Computing, NNSA Defense Programs. Contents: Executive Summary; I. Introduction

    15. Computing | Department of Energy

      Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

Computing. Fun fact: Most systems require air conditioning or chilled water to cool super powerful supercomputers, but the Olympus supercomputer at Pacific Northwest National Laboratory is cooled by the location's 65 degree groundwater. Traditional cooling systems could cost up to $61,000 in electricity each year, but this more efficient setup uses 70 percent less energy. | Photo courtesy of PNNL.

    16. Can Cloud Computing Address the Scientific Computing Requirements for DOE

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

Researchers? Well, Yes, No and Maybe. Can Cloud Computing Address the Scientific Computing Requirements for DOE Researchers? Well, Yes, No and Maybe. January 30, 2012. Jon Bashor, Jbashor@lbl.gov, +1 510-486-5849. Magellan at NERSC. After a two-year study of the feasibility of cloud computing systems for meeting the ever-increasing computational needs of scientists,

    17. Computing and Computational Sciences Directorate - Joint Institute for

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Computational Sciences Joint Institute for Computational Sciences To help realize the full potential of new-generation computers for advancing scientific discovery, the University of Tennessee (UT) and Oak Ridge National Laboratory (ORNL) have created the Joint Institute for Computational Sciences (JICS). JICS combines the experience and expertise in theoretical and computational science and engineering, computer science, and mathematics in these two institutions and focuses these skills on

    18. Computing and Computational Sciences Directorate - National Center for

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Computational Sciences Home National Center for Computational Sciences The National Center for Computational Sciences (NCCS), formed in 1992, is home to two of Oak Ridge National Laboratory's (ORNL's) high-performance computing projects-the Oak Ridge Leadership Computing Facility (OLCF) and the National Climate-Computing Research Center (NCRC). The OLCF (www.olcf.ornl.gov) was established at ORNL in 2004 with the mission of standing up a supercomputer 100 times more powerful than the leading

    19. in High Performance Computing Computer System, Cluster, and Networking...

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      iSSH v. Auditd: Intrusion Detection in High Performance Computing Computer System, Cluster, and Networking Summer Institute David Karns, New Mexico State University Katy Protin,...

    20. Extensible Computational Chemistry Environment

      Energy Science and Technology Software Center (OSTI)

      2012-08-09

ECCE provides a sophisticated graphical user interface, scientific visualization tools, and the underlying data management framework enabling scientists to efficiently set up calculations and store, retrieve, and analyze the rapidly growing volumes of data produced by computational chemistry studies. ECCE was conceived as part of the Environmental Molecular Sciences Laboratory construction to solve the problem of researchers being able to effectively utilize complex computational chemistry codes and massively parallel high performance compute resources. Bringing the power of these codes and resources to the desktops of researchers, and thus enabling world-class research without users needing a detailed understanding of the inner workings of either the theoretical codes or the supercomputers needed to run them, was a grand challenge problem in the original version of the EMSL. ECCE allows collaboration among researchers using a web-based data repository where the inputs and results for all calculations done within ECCE are organized. ECCE is a first-of-its-kind end-to-end problem-solving environment for all phases of computational chemistry research: setting up calculations with a sophisticated GUI and direct manipulation visualization tools, submitting and monitoring calculations on remote high performance supercomputers without having to be familiar with the details of using these compute resources, and performing results visualization and analysis, including creating publication-quality images. ECCE is a suite of tightly integrated applications that are employed as the user moves through the modeling process.

    1. Information Science, Computing, Applied Math

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

Capabilities » Information Science, Computing, Applied Math. National security depends on science and technology. The United States relies on Los Alamos National Laboratory for the best of both. No place on Earth pursues a broader array of world-class scientific endeavors. Computer, Computational, and Statistical Sciences (CCS); High Performance Computing (HPC); Extreme Scale Computing, Co-design

    2. Super recycled water: quenching computers

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Super recycled water: quenching computers Super recycled water: quenching computers New facility and methods support conserving water and creating recycled products. Using reverse...

    3. Computer simulation | Open Energy Information

      Open Energy Info (EERE)

Computer simulation. OpenEI Reference Library. Web Site: Computer simulation. Author: wikipedia. Published: wikipedia, 2013. DOI: Not Provided...

    4. NREL: Computational Science Home Page

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      high-performance computing, computational science, applied mathematics, scientific data management, visualization, and informatics. NREL is home to the largest high performance...

    5. SCC: The Strategic Computing Complex

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      computer room, which is an open room about three-fourths the size of a football field. The Strategic Computing Complex (SCC) at the Los Alamos National Laboratory...

    6. LHC INTERACTION REGION CORRECTION IN HEAVY ION OPERATION

      SciTech Connect (OSTI)

      PTITSIN,V.; FISCHER,W.; WEI,J.

      1999-09-07

      In heavy ion operation the LHC interaction region at IP2 will have a low-{beta} optics for collisions. The dynamic aperture is therefore sensitive to magnetic field errors in the interaction region quadrupoles and dipoles. The authors investigate the effect of the magnetic field errors on the dynamic aperture and evaluate the effectiveness of local interaction region correctors. The dynamic aperture and the tune space are computed for different crossing angles.

    7. Using Light to Control How X Rays Interact with Matter

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

Using Light to Control How X Rays Interact with Matter. Schemes that use one light pulse to manipulate interactions of another with matter are well developed in the visible-light regime where an optical control pulse influences how an optical probe pulse interacts with a medium. This approach has opened new research directions in fields like quantum computing and nonlinear optics, while also spawning entirely new research areas, such as electromagnetically induced transparency and slow

    12. Synchronizing compute node time bases in a parallel computer

      DOE Patents [OSTI]

      Chen, Dong; Faraj, Daniel A; Gooding, Thomas M; Heidelberger, Philip

      2014-12-30

      Synchronizing time bases in a parallel computer that includes compute nodes organized for data communications in a tree network, where one compute node is designated as a root, and, for each compute node: calculating data transmission latency from the root to the compute node; configuring a thread as a pulse waiter; initializing a wakeup unit; and performing a local barrier operation; upon each node completing the local barrier operation, entering, by all compute nodes, a global barrier operation; upon all nodes entering the global barrier operation, sending, to all the compute nodes, a pulse signal; and for each compute node upon receiving the pulse signal: waking, by the wakeup unit, the pulse waiter; setting a time base for the compute node equal to the data transmission latency between the root node and the compute node; and exiting the global barrier operation.
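
To make the sequence of steps in this abstract easier to follow, here is a minimal, purely illustrative Python sketch that models latency-compensated time bases on a simulated tree of nodes; the Node class, helper names, and latency values are all invented for the example and are not part of the patented implementation.

```python
# Hypothetical illustration of latency-compensated time-base synchronization
# on a tree of compute nodes (a simulation, not the patented implementation).

class Node:
    def __init__(self, name, parent=None, hop_latency=0.0):
        self.name = name
        self.parent = parent
        self.hop_latency = hop_latency   # latency of the link to the parent
        self.time_base = None

    def latency_from_root(self):
        # Sum per-hop latencies along the path back to the root.
        node, total = self, 0.0
        while node.parent is not None:
            total += node.hop_latency
            node = node.parent
        return total

def synchronize(nodes):
    # Conceptually: all nodes enter a global barrier, the root sends a pulse,
    # and each node sets its time base to its transmission latency from the root.
    for node in nodes:                    # "pulse" received by every node
        node.time_base = node.latency_from_root()

root = Node("root")
children = [Node(f"child{i}", parent=root, hop_latency=1.5e-6) for i in range(2)]
grandchild = Node("grandchild", parent=children[0], hop_latency=2.0e-6)
nodes = [root, *children, grandchild]
synchronize(nodes)
for n in nodes:
    print(n.name, n.time_base)
```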

    13. Synchronizing compute node time bases in a parallel computer

      DOE Patents [OSTI]

      Chen, Dong; Faraj, Daniel A; Gooding, Thomas M; Heidelberger, Philip

      2015-01-27

      Synchronizing time bases in a parallel computer that includes compute nodes organized for data communications in a tree network, where one compute node is designated as a root, and, for each compute node: calculating data transmission latency from the root to the compute node; configuring a thread as a pulse waiter; initializing a wakeup unit; and performing a local barrier operation; upon each node completing the local barrier operation, entering, by all compute nodes, a global barrier operation; upon all nodes entering the global barrier operation, sending, to all the compute nodes, a pulse signal; and for each compute node upon receiving the pulse signal: waking, by the wakeup unit, the pulse waiter; setting a time base for the compute node equal to the data transmission latency between the root node and the compute node; and exiting the global barrier operation.

    14. Computer Security Risk Assessment

      Energy Science and Technology Software Center (OSTI)

      1992-02-11

      LAVA/CS (LAVA for Computer Security) is an application of the Los Alamos Vulnerability Assessment (LAVA) methodology specific to computer and information security. The software serves as a generic tool for identifying vulnerabilities in computer and information security safeguards systems. Although it does not perform a full risk assessment, the results from its analysis may provide valuable insights into security problems. LAVA/CS assumes that the system is exposed to both natural and environmental hazards and tomore » deliberate malevolent actions by either insiders or outsiders. The user in the process of answering the LAVA/CS questionnaire identifies missing safeguards in 34 areas ranging from password management to personnel security and internal audit practices. Specific safeguards protecting a generic set of assets (or targets) from a generic set of threats (or adversaries) are considered. There are four generic assets: the facility, the organization''s environment; the hardware, all computer-related hardware; the software, the information in machine-readable form stored both on-line or on transportable media; and the documents and displays, the information in human-readable form stored as hard-copy materials (manuals, reports, listings in full-size or microform), film, and screen displays. Two generic threats are considered: natural and environmental hazards, storms, fires, power abnormalities, water and accidental maintenance damage; and on-site human threats, both intentional and accidental acts attributable to a perpetrator on the facility''s premises.« less
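
As a rough illustration of the asset/threat bookkeeping the abstract describes, the following sketch tallies missing safeguards per generic asset and threat; the question text, function names, and data layout are assumptions made for the example, not LAVA/CS's actual questionnaire or format.

```python
# Toy sketch of a LAVA/CS-style survey: the generic assets and threats named in
# the abstract, with missing safeguards tallied per (asset, threat) pair.
# All question text here is made up for illustration.

ASSETS = ["facility", "hardware", "software", "documents and displays"]
THREATS = ["natural and environmental hazards", "on-site human threats"]

def missing_safeguards(answers):
    """answers maps (asset, threat, question) -> True if the safeguard is in place."""
    gaps = {}
    for (asset, threat, question), present in answers.items():
        if not present:
            gaps.setdefault((asset, threat), []).append(question)
    return gaps

answers = {
    ("software", "on-site human threats", "passwords rotated regularly"): False,
    ("facility", "natural and environmental hazards", "fire suppression installed"): True,
}

gaps = missing_safeguards(answers)
for asset in ASSETS:
    for threat in THREATS:
        if (asset, threat) in gaps:
            print(f"Missing safeguards for {asset} vs {threat}: {gaps[(asset, threat)]}")
```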

    15. MHD computations for stellarators

      SciTech Connect (OSTI)

      Johnson, J.L.

      1985-12-01

      Considerable progress has been made in the development of computational techniques for studying the magnetohydrodynamic equilibrium and stability properties of three-dimensional configurations. Several different approaches have evolved to the point where comparison of results determined with different techniques shows good agreement. 55 refs., 7 figs.

    16. Running Interactive Batch Jobs

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

Interactive Batch Jobs. Running Interactive Batch Jobs. You cannot log in to the PDSF batch nodes directly, but you can run an interactive session on a batch node using either qlogin or qsh. This can be useful if you are doing something that is potentially disruptive or if the interactive nodes are overloaded. qlogin will give you an interactive session in the same window as your original session on PDSF; however, you must have your ssh keys in place. You can do this locally on PDSF by following

    17. Interactive (login) Nodes

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

Interactive (login) Nodes. There are 3 interactive nodes at PDSF, pdsf[6-8].nersc.gov, that should be accessed via ssh to pdsf.nersc.gov. These are the gateways to accessing the rest of PDSF. Users can submit batch jobs as well as view and manipulate their files and directories from the interactive nodes. The configuration of the interactive nodes is shown in the table below (columns: Processor, Clock Speed (GHz), Architecture, Cores, Total Memory (GB), Scratch Space (GB)). Intel Xeon

    18. Los Alamos computer simulation improves offshore drill rig safety

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

computer simulation improves offshore drill rig safety. Los Alamos computer simulation improves offshore drill rig safety. Researchers focused on the motion of the floating structure resulting from complex fluid-structure interaction and vortex shedding from sea currents. May 1, 2015. A simulation of vortex induced motion shows how ocean currents affect offshore oil rigs.

    19. Solvate Structures and Computational/Spectroscopic Characterization of

      Office of Scientific and Technical Information (OSTI)

LiPF6 Electrolytes (Journal Article) | SciTech Connect. Title: Solvate Structures and Computational/Spectroscopic Characterization of LiPF6 Electrolytes. Raman spectroscopy is a powerful method for identifying ion-ion interactions, but only if the vibrational band signature for the anion coordination modes can be accurately deciphered. The present study characterizes

    20. Sandia National Laboratories: Advanced Simulation and Computing...

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      ASC Advanced Simulation and Computing Computational Systems & Software Environment Crack Modeling The Computational Systems & Software Environment program builds integrated,...

    1. Gas-Alloy Interactions at Elevated Temperatures

      SciTech Connect (OSTI)

      Arroyave, Raymundo; Gao, Michael

      2012-12-01

Problems ranging from the stability of metals and alloys against oxidation and other detrimental reactions, to the catalysis of important chemical reactions, to the minimization of defects associated with processing and synthesis have one thing in common: at the most fundamental level, all of these scientific and engineering problems involve interactions between metals and alloys (in the solid or liquid state) and gaseous atmospheres at elevated temperatures. In this special issue, we have collected a series of articles that illustrate the application of different theoretical, computational, and experimental techniques to investigate gas-alloy interactions.

    2. National Computational Infrastructure for Lattice Gauge Theory

      SciTech Connect (OSTI)

      Brower, Richard C.

      2014-04-15

SciDAC-2 Project The Secret Life of Quarks: National Computational Infrastructure for Lattice Gauge Theory, from March 15, 2011 through March 14, 2012. The objective of this project is to construct the software needed to study quantum chromodynamics (QCD), the theory of the strong interactions of sub-atomic physics, and other strongly coupled gauge field theories anticipated to be of importance in the energy regime made accessible by the Large Hadron Collider (LHC). It builds upon the successful efforts of the SciDAC-1 project National Computational Infrastructure for Lattice Gauge Theory, in which a QCD Applications Programming Interface (QCD API) was developed that enables lattice gauge theorists to make effective use of a wide variety of massively parallel computers. This project serves the entire USQCD Collaboration, which consists of nearly all the high energy and nuclear physicists in the United States engaged in the numerical study of QCD and related strongly interacting quantum field theories. All software developed in it is publicly available, and can be downloaded from a link on the USQCD Collaboration web site, or directly from the github repositories with entrance link http://usqcd-software.github.io

    3. Extreme Scale Computing, Co-design

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

Information Science, Computing, Applied Math » Extreme Scale Computing, Co-design. Computational co-design may facilitate revolutionary designs ...

    4. Visitor Hanford Computer Access Request - Hanford Site

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

Visitor Hanford Computer Access Request

    5. Software and High Performance Computing

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

Software and High Performance Computing. Providing world-class high performance computing capability that enables unsurpassed solutions to complex problems of strategic national interest. Contact: Kathleen McDonald, Head of Intellectual Property, Business Development Executive, Richard P. Feynman Center for Innovation, (505) 667-5844. Software: Computational physics, computer science, applied mathematics, statistics and the

    6. Magellan: A Cloud Computing Testbed

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

Magellan: A Cloud Computing Testbed. Cloud computing is gaining a foothold in the business world, but can clouds meet the specialized needs of scientists? That was one of the questions NERSC's Magellan cloud computing testbed explored between 2009 and 2011. The goal of Magellan, a project funded through the U.S. Department of Energy (DOE) Office

    7. Momentum-space Argonne V18 interaction

      SciTech Connect (OSTI)

      Veerasamy, S.; Polyzou, W. N.

      2011-09-15

      This paper gives a momentum-space representation of the Argonne V18 potential as an expansion in products of spin-isospin operators with scalar coefficient functions of the momentum transfer. Two representations of the scalar coefficient functions for the strong part of the interaction are given. One is as an expansion in an orthonormal basis of rational functions and the other as an expansion in Chebyshev polynomials on different intervals. Both provide practical and efficient representations for computing the momentum-space potential that do not require integration or interpolation. Programs based on both expansions are available as supplementary material. Analytic expressions are given for the scalar coefficient functions of the Fourier transform of the electromagnetic part of the Argonne V18. A simple method for computing the partial-wave projections of these interactions from the operator expressions is also given.
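
As a generic illustration of the Chebyshev-on-an-interval representation mentioned above (not the actual V18 coefficient functions or the supplementary programs), a minimal evaluation might look like this; the coefficients and interval below are placeholders.

```python
# Generic illustration of evaluating a Chebyshev expansion on an interval [a, b],
# the kind of representation the abstract describes (coefficients here are made up).
import numpy as np
from numpy.polynomial import chebyshev as C

def eval_chebyshev(coeffs, x, a, b):
    # Map x from [a, b] onto [-1, 1], then evaluate the Chebyshev series there.
    t = (2.0 * x - (a + b)) / (b - a)
    return C.chebval(t, coeffs)

coeffs = [1.0, -0.5, 0.25, -0.125]          # placeholder expansion coefficients
q = np.linspace(0.0, 10.0, 5)               # momentum-transfer grid (illustrative units)
print(eval_chebyshev(coeffs, q, 0.0, 10.0))
```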

    8. Computer Algebra System

      Energy Science and Technology Software Center (OSTI)

      1992-05-04

DOE-MACSYMA (Project MAC's SYmbolic MAnipulation system) is a large computer programming system written in LISP. With DOE-MACSYMA the user can differentiate, integrate, take limits, solve systems of linear or polynomial equations, factor polynomials, expand functions in Laurent or Taylor series, solve differential equations (using direct or transform methods), compute Poisson series, plot curves, and manipulate matrices and tensors. A language similar to ALGOL-60 permits users to write their own programs for transforming symbolic expressions. Franz Lisp OPUS 38 provides the environment for the Encore, Celerity, and DEC VAX11 UNIX, SUN(OPUS) versions under UNIX and the Alliant version under Concentrix. Kyoto Common Lisp (KCL) provides the environment for the SUN(KCL), Convex, and IBM PC under UNIX and Data General under AOS/VS.
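
The operations listed above have modern open-source analogues; the following SymPy snippet performs the same kinds of symbolic manipulations for readers who want a feel for them. This is not MACSYMA syntax, just an illustration of the capabilities described.

```python
# SymPy analogue of the symbolic operations the abstract attributes to DOE-MACSYMA
# (not MACSYMA syntax, just the same kinds of manipulations).
import sympy as sp

x = sp.symbols("x")
f = sp.sin(x) * sp.exp(-x)

print(sp.diff(f, x))                   # differentiate
print(sp.integrate(f, (x, 0, sp.oo)))  # definite integral
print(sp.limit(sp.sin(x) / x, x, 0))   # take a limit
print(sp.series(f, x, 0, 5))           # Taylor series to 5th order
print(sp.solve(x**2 - 2, x))           # solve a polynomial equation
```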

    9. Exploratory Experimentation and Computation

      SciTech Connect (OSTI)

      Bailey, David H.; Borwein, Jonathan M.

      2010-02-25

      We believe the mathematical research community is facing a great challenge to re-evaluate the role of proof in light of recent developments. On one hand, the growing power of current computer systems, of modern mathematical computing packages, and of the growing capacity to data-mine on the Internet, has provided marvelous resources to the research mathematician. On the other hand, the enormous complexity of many modern capstone results such as the Poincare conjecture, Fermat's last theorem, and the classification of finite simple groups has raised questions as to how we can better ensure the integrity of modern mathematics. Yet as the need and prospects for inductive mathematics blossom, the requirement to ensure the role of proof is properly founded remains undiminished.

    10. GPU Computational Screening

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      GPU Computational Screening of Carbon Capture Materials J. Kim 1 , A Koniges 1 , R. Martin 1 , M. Haranczyk 1 , J. Swisher 2 , and B. Smit 1,2 1 Lawrence Berkeley National Laboratory, Berkeley, CA 94720 2 Department of Chemical Engineering, University of California, Berkeley, Berkeley, CA 94720 E-mail: jihankim@lbl.gov Abstract. In order to reduce the current costs associated with carbon capture technologies, novel materials such as zeolites and metal-organic frameworks that are based on

    11. Cloud Computing Services

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

Computing Services - Sandia Energy

    12. High Performance Computing

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

Performance Computing - Sandia Energy

    13. Argonne Leadership Computing Facility

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

Anti-HIV antibody. Software optimized on Mira advances design of mini-proteins for medicines, materials. Scientists at the University of Washington are using Mira to virtually design unique, artificial peptides, or short proteins. Celebrating 10 years: 10 science highlights celebrating 10 years of Argonne Leadership Computing Facility. To celebrate our 10th anniversary, we're highlighting 10 science accomplishments since we opened our doors. Bill Gropp works with students during

    14. Applied & Computational Math

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

& Computational Math - Sandia Energy

    15. From Federal Computer Week:

      National Nuclear Security Administration (NNSA)

Federal Computer Week: Energy agency launches performance-based pay system. By Richard W. Walker. Published on March 27, 2008. The Energy Department's National Nuclear Security Administration has launched a new performance-based pay system involving about 2,000 of its 2,500 employees. NNSA officials described the effort as a pilot project that will test the feasibility of the new system, which collapses the traditional 15 General Schedule pay bands into broader pay bands. The new structure

    16. Computed Tomography Status

      DOE R&D Accomplishments [OSTI]

      Hansche, B. D.

      1983-01-01

Computed tomography (CT) is a relatively new radiographic technique which has become widely used in the medical field, where it is better known as computerized axial tomographic (CAT) scanning. This technique is also being adopted by the industrial radiographic community, although the greater range of densities, variation in sample sizes, and possible requirement for finer resolution make it difficult to duplicate the excellent results that the medical scanners have achieved.

    17. Development of computer graphics

      SciTech Connect (OSTI)

      Nuttall, H.E.

      1989-07-01

The purpose of this project was to screen and evaluate three graphics packages as to their suitability for displaying concentration contour graphs. The information to be displayed is from computer code simulations describing airborne contaminant transport. The three evaluation programs were MONGO (John Tonry, MIT, Cambridge, MA, 02139), Mathematica (Wolfram Research Inc.), and NCSA Image (National Center for Supercomputing Applications at the University of Illinois at Urbana-Champaign). After a preliminary investigation of each package, NCSA Image appeared to be significantly superior for generating the desired concentration contour graphs. Hence, subsequent work and this report describe the implementation and testing of NCSA Image on both Apple Mac II and Sun 4 computers. NCSA Image includes several utilities (Layout, DataScope, HDF, and PalEdit) which were used in this study and installed on Dr. Ted Yamada's Mac II computer. Dr. Yamada provided two sets of air pollution plume data which were displayed using NCSA Image. Both sets were animated into a sequential expanding plume series.
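
For readers who want to reproduce this kind of display today, a concentration contour graph of the sort described can be drawn with matplotlib; the plume data below is synthetic and the axis labels are assumptions, since the original report used NCSA Image rather than any modern package.

```python
# A present-day way to draw the kind of concentration contour graph described,
# using matplotlib rather than NCSA Image (synthetic plume data for illustration).
import numpy as np
import matplotlib.pyplot as plt

x, y = np.meshgrid(np.linspace(-5, 5, 200), np.linspace(-5, 5, 200))
concentration = np.exp(-((x - 1) ** 2 + y ** 2) / 2.0)   # toy Gaussian plume

cs = plt.contour(x, y, concentration, levels=8)
plt.clabel(cs, inline=True, fontsize=8)
plt.xlabel("x (km)")       # assumed units, for illustration only
plt.ylabel("y (km)")
plt.title("Synthetic contaminant plume concentration contours")
plt.show()
```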

    18. 5 Checks & 5 Tips for INCITE | Argonne Leadership Computing Facility

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      for INCITE Proposal Writers 1. Is your research groundbreaking? INCITE research is game changing: During the INCITE review, panelists assess the likelihood that your proposed...

    19. Learning from Semantic Interactions

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Learning from Semantic Interactions Most machine learning tools used in geospatial mapping can only learn from labels. Learning from Semantic Interactions LANL's new machine learning tools can learn from semantic user interactions to produce more accurate mappings Point of Contact: Reid Porter, ISR Division, 665-7508, rporter@lanl.gov Current Phase - LDRD: * Develop theory and algorithms for tools and demonstrate impact in image analysis applications in materials microscopy. Phase 2 - Geospatial

    20. Weak Interaction | Jefferson Lab

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Weak Interaction February 22, 2011 Jefferson Lab has an accelerator designed to do incisive medium energy physics. This program is dominated by experiments aimed at developing our...

    1. ERHIC INTERACTION REGION DESIGN.

      SciTech Connect (OSTI)

MONTAG, C.; PARKER, B.; PTITSYN, V.; TEPIKIAN, S.; WANG, D.; WANG, F.

      2003-10-13

      This paper presents the current interaction region design status of the ring-ring version of the electron-ion collider eRHIC (release 2.0).

    2. Nerve-pulse interactions

      SciTech Connect (OSTI)

      Scott, A.C.

      1982-01-01

      Some recent experimental and theoretical results on mechanisms through which individual nerve pulses can interact are reviewed. Three modes of interactions are considered: (1) interaction of pulses as they travel along a single fiber which leads to velocity dispersion; (2) propagation of pairs of pulses through a branching region leading to quantum pulse code transformations; and (3) interaction of pulses on parallel fibers through which they may form a pulse assembly. This notion is analogous to Hebb's concept of a cell assembly, but on a lower level of the neural hierarchy.

    3. [Computer Science and Telecommunications Board activities

      SciTech Connect (OSTI)

      Blumenthal, M.S.

      1993-02-23

      The board considers technical and policy issues pertaining to computer science, telecommunications, and associated technologies. Functions include providing a base of expertise for these fields in NRC, monitoring and promoting health of these fields, initiating studies of these fields as critical resources and sources of national economic strength, responding to requests for advice, and fostering interaction among the technologies and the other pure and applied science and technology. This document describes its major accomplishments, current programs, other sponsored activities, cooperative ventures, and plans and prospects.

    4. High Performance Computing at the Oak Ridge Leadership Computing Facility

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

High Performance Computing at the Oak Ridge Leadership Computing Facility. Outline: Our Mission; Computer Systems: Present, Past, Future; Challenges Along the Way; Resources for Users. Our Mission: World's most powerful computing facility; Nation's largest concentration of open source materials research; $1.3B budget; 4,250 employees; 3,900 research guests annually; $350 million invested in modernization; Nation's most diverse energy

    5. DIP: The Database of Interacting Proteins

      DOE Data Explorer [Office of Scientific and Technical Information (OSTI)]

The DIP Database catalogs experimentally determined interactions between proteins. It combines information from a variety of sources to create a single, consistent set of protein-protein interactions. By interaction, the DIP Database creators mean that two amino acid chains were experimentally identified to bind to each other. The database lists such pairs to aid those studying a particular protein-protein interaction but also those investigating entire regulatory and signaling pathways as well as those studying the organisation and complexity of the protein interaction network at the cellular level. The data stored within the DIP database were curated both manually by expert curators and automatically using computational approaches that utilize the knowledge about the protein-protein interaction networks extracted from the most reliable, core subset of the DIP data. It is a relational database that can be searched by protein, sequence, motif, article information, and pathBLAST. The website also serves as an access point to a number of projects related to DIP, such as LiveDIP, The Database of Ligand-Receptor Partners (DLRP) and JDIP. Users have free and open access to DIP after login. [Taken from the DIP Guide and the DIP website] (Specialized Interface) (Registration Required)
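
As a minimal illustration of how such binary interaction pairs can be held and queried in memory (this is not DIP's actual schema, API, or identifiers), consider the following sketch.

```python
# Minimal in-memory sketch of querying binary protein-protein interaction pairs,
# the kind of data DIP curates (identifiers below are examples, not DIP records).
from collections import defaultdict

pairs = [("P53_HUMAN", "MDM2_HUMAN"), ("P53_HUMAN", "EP300_HUMAN")]

network = defaultdict(set)
for a, b in pairs:
    network[a].add(b)   # interactions are symmetric: store both directions
    network[b].add(a)

print(sorted(network["P53_HUMAN"]))   # partners of a given protein
```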

    6. Multiprocessor computing for images

      SciTech Connect (OSTI)

Cantoni, V.; Levialdi, S.

      1988-08-01

      A review of image processing systems developed until now is given, highlighting the weak points of such systems and the trends that have dictated their evolution through the years producing different generations of machines. Each generation may be characterized by the hardware architecture, the programmability features and the relative application areas. The need for multiprocessing hierarchical systems is discussed focusing on pyramidal architectures. Their computational paradigms, their virtual and physical implementation, their programming and software requirements, and capabilities by means of suitable languages, are discussed.

    7. computational-hydraulics

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      and Aerodynamics using STAR-CCM+ for CFD Analysis March 21-22, 2012 Argonne, Illinois Dr. Steven Lottes This email address is being protected from spambots. You need JavaScript enabled to view it. A training course in the use of computational hydraulics and aerodynamics CFD software using CD-adapco's STAR-CCM+ for analysis will be held at TRACC from March 21-22, 2012. The course assumes a basic knowledge of fluid mechanics and will make extensive use of hands on tutorials. CD-adapco will issue

    8. Computing for Finance

      ScienceCinema (OSTI)

      None

      2011-10-06

The finance sector is one of the driving forces for the use of distributed or Grid computing for business purposes. The speakers will review the state-of-the-art of high performance computing in the financial sector, and provide insight into how different types of Grid computing - from local clusters to global networks - are being applied to financial applications. They will also describe the use of software and techniques from physics, such as Monte Carlo simulations, in the financial world. There will be four talks of 20min each. The talk abstracts and speaker bios are listed below. This will be followed by a Q&A panel session with the speakers. From 19:00 onwards there will be a networking cocktail for audience and speakers. This is an EGEE / CERN openlab event organized in collaboration with the regional business network rezonance.ch. A webcast of the event will be made available for subsequent viewing, along with powerpoint material presented by the speakers. Attendance is free and open to all. Registration is mandatory via www.rezonance.ch, including for CERN staff. 1. Overview of High Performance Computing in the Financial Industry. Michael Yoo, Managing Director, Head of the Technical Council, UBS. The presentation will describe the key business challenges driving the need for HPC solutions, describe the means by which those challenges are being addressed within UBS (such as GRID) as well as the limitations of some of these solutions, and assess some of the newer HPC technologies which may also play a role in the Financial Industry in the future. Speaker Bio: Michael originally joined the former Swiss Bank Corporation in 1994 in New York as a developer on a large data warehouse project. In 1996 he left SBC and took a role with Fidelity Investments in Boston. Unable to stay away for long, he returned to SBC in 1997 while working for Perot Systems in Singapore. Finally, in 1998 he formally returned to UBS in Stamford following the merger with SBC and has remained with UBS for the past 9 years. During his tenure at UBS, he has had a number of leadership roles within IT in development, support and architecture. In 2006 Michael relocated to Switzerland to take up his current role as head of the UBS IB Technical Council, responsible for the overall technology strategy and vision of the Investment Bank. One of Michael's key responsibilities is to manage the UBS High Performance Computing Research Lab and he has been involved in a number of initiatives in the HPC space. 2. Grid in the Commercial World. Fred Gedling, Chief Technology Officer EMEA and Senior Vice President Global Services, DataSynapse. Grid computing gets mentions in the press for community programs starting last decade with "Seti@Home". Government, national and supranational initiatives in grid receive some press. One of the IT industry's best-kept secrets is the use of grid computing by commercial organizations with spectacular results. Grid Computing and its evolution into Application Virtualization is discussed, and how this is key to the next generation data center. Speaker Bio: Fred Gedling holds the joint roles of Chief Technology Officer for EMEA and Senior Vice President of Global Services at DataSynapse, a global provider of application virtualisation software.
Based in London and working closely with organisations seeking to optimise their IT infrastructures, Fred offers unique insights into the technology of virtualisation as well as the methodology of establishing ROI and rapid deployment to the immediate advantage of the business. Fred has more than fifteen years experience of enterprise middleware and high-performance infrastructures. Prior to DataSynapse he worked in high performance CRM middleware and was the CTO EMEA for New Era of Networks (NEON) during the rapid growth of Enterprise Application Integration. His 25-year career in technology also includes management positions at Goldman Sachs and Stratus Computer. Fred holds a First Class Bsc (Hons) degree in Physics with Astrophysics from the University of Leeds and had the privilege o
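
Since the talks single out Monte Carlo simulation as a physics technique reused in finance, here is a standard, minimal Monte Carlo estimate of a European call option price under geometric Brownian motion; the parameters are arbitrary and the code is illustrative, not anything presented by the speakers.

```python
# Minimal Monte Carlo pricing of a European call under geometric Brownian motion,
# the kind of physics-style simulation the talk associates with finance grids.
import numpy as np

def mc_call_price(s0, k, r, sigma, t, n_paths=100_000, seed=0):
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(n_paths)
    st = s0 * np.exp((r - 0.5 * sigma**2) * t + sigma * np.sqrt(t) * z)
    payoff = np.maximum(st - k, 0.0)
    return np.exp(-r * t) * payoff.mean()   # discounted expected payoff

print(mc_call_price(s0=100.0, k=105.0, r=0.02, sigma=0.2, t=1.0))
```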

    9. Computer generated holographic microtags

      DOE Patents [OSTI]

      Sweatt, William C.

      1998-01-01

      A microlithographic tag comprising an array of individual computer generated holographic patches having feature sizes between 250 and 75 nanometers. The tag is a composite hologram made up of the individual holographic patches and contains identifying information when read out with a laser of the proper wavelength and at the proper angles of probing and reading. The patches are fabricated in a steep angle Littrow readout geometry to maximize returns in the -1 diffracted order. The tags are useful as anti-counterfeiting markers because of the extreme difficulty in reproducing them.

    10. Computer generated holographic microtags

      DOE Patents [OSTI]

      Sweatt, W.C.

      1998-03-17

      A microlithographic tag comprising an array of individual computer generated holographic patches having feature sizes between 250 and 75 nanometers is disclosed. The tag is a composite hologram made up of the individual holographic patches and contains identifying information when read out with a laser of the proper wavelength and at the proper angles of probing and reading. The patches are fabricated in a steep angle Littrow readout geometry to maximize returns in the -1 diffracted order. The tags are useful as anti-counterfeiting markers because of the extreme difficulty in reproducing them. 5 figs.

    11. Scanning computed confocal imager

      DOE Patents [OSTI]

      George, John S. (Los Alamos, NM)

      2000-03-14

There is provided a confocal imager comprising a light source emitting a light, with a light modulator in optical communication with the light source for varying the spatial and temporal pattern of the light. A beam splitter receives the scanned light and directs the scanned light onto a target and passes light reflected from the target to a video capturing device for receiving the reflected light and transferring a digital image of the reflected light to a computer for creating a virtual aperture and outputting the digital image. In a transmissive mode of operation the invention omits the beam splitter means and captures light passed through the target.

    12. Parallel, distributed and GPU computing technologies in single-particle electron microscopy

      SciTech Connect (OSTI)

      Schmeisser, Martin; Heisen, Burkhard C.; Luettich, Mario; Busche, Boris; Hauer, Florian; Koske, Tobias; Knauber, Karl-Heinz; Stark, Holger

      2009-07-01

An introduction to the current paradigm shift towards concurrency in software. Most known methods for the determination of the structure of macromolecular complexes are limited or at least restricted at some point by their computational demands. Recent developments in information technology such as multicore, parallel and GPU processing can be used to overcome these limitations. In particular, graphics processing units (GPUs), which were originally developed for rendering real-time effects in computer games, are now ubiquitous and provide unprecedented computational power for scientific applications. Each parallel-processing paradigm alone can improve overall performance; the increased computational performance obtained by combining all paradigms, unleashing the full power of today's technology, makes certain applications feasible that were previously virtually impossible. In this article, state-of-the-art paradigms are introduced, the tools and infrastructure needed to apply these paradigms are presented, and a state-of-the-art infrastructure and solution strategy for moving scientific applications to the next generation of computer hardware is outlined.

    13. Elementary particle interactions

      SciTech Connect (OSTI)

      Bugg, W.M.; Condo, G.T.; Handler, T.; Hart, E.L.; Ward, B.F.L.; Close, F.E.; Christophorou, L.G.

      1990-10-01

      This report discusses freon bubble chamber experiments exposed to {mu}{sup +} and neutrinos, photon-proton interactions; shower counter simulations; SLD detectors at the Stanford Linear Collider, and the detectors at the Superconducting Super Collider; elementary particle interactions; physical properties of dielectric materials used in High Energy Physics detectors; and Nuclear Physics. (LSP)

    14. Introduction to High Performance Computing

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

Introduction to High Performance Computing. June 10, 2013. Download: Gerber-HPC-2.pdf...

    15. Computer Wallpaper | The Ames Laboratory

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Computer Wallpaper We've incorporated the tagline, Creating Materials and Energy Solutions, into a computer wallpaper so you can display it on your desktop as a constant reminder....

    16. Super recycled water: quenching computers

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Super recycled water: quenching computers Super recycled water: quenching computers New facility and methods support conserving water and creating recycled products. Using reverse osmosis to "super purify" water allows the system to reuse water and cool down our powerful yet thirsty computers. January 30, 2014 Super recycled water: quenching computers LANL's Sanitary Effluent Reclamation Facility, key to reducing the Lab's discharge of liquid. Millions of gallons of industrial

    17. Fermilab | Science at Fermilab | Computing

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

Computing. Computing is indispensable to science at Fermilab. High-energy physics experiments generate an astounding amount of data that physicists need to store, analyze and communicate with others. Cutting-edge technology allows scientists to work quickly and efficiently to advance our understanding of the world. Fermilab's Computing Division is recognized for its expertise in handling huge amounts of data, its success in high-speed parallel computing and its willingness to take its craft in

    18. History | Argonne Leadership Computing Facility

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Leadership Computing The Argonne Leadership Computing Facility (ALCF) was established at Argonne National Laboratory in 2004 as part of a U.S. Department of Energy (DOE) initiative dedicated to enabling leading-edge computational capabilities to advance fundamental discovery and understanding in a broad range of scientific and engineering disciplines. Supported by the Advanced Scientific Computing Research (ASCR) program within DOE's Office of Science, the ALCF is one half of the DOE Leadership

    19. Steven Weinberg, Weak Interactions, and Electromagnetic Interactions

      Office of Scientific and Technical Information (OSTI)

Steven Weinberg and Weak and Electromagnetic Interactions. Resources with Additional Information. Steven Weinberg, Courtesy Dr. Steven Weinberg. Steven Weinberg "is a professor of physics and astronomy at UT [The University of Texas] Austin and is founding director of the Theory Group in the College of Natural Sciences. [He is] well known for his development of a field theory that unifies the electromagnetic and weak nuclear forces, and for other major contributions to physics and cosmology ...

    20. Computing architecture for autonomous microgrids

      DOE Patents [OSTI]

      Goldsmith, Steven Y.

      2015-09-29

      A computing architecture that facilitates autonomously controlling operations of a microgrid is described herein. A microgrid network includes numerous computing devices that execute intelligent agents, each of which is assigned to a particular entity (load, source, storage device, or switch) in the microgrid. The intelligent agents can execute in accordance with predefined protocols to collectively perform computations that facilitate uninterrupted control of the microgrid.
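
A toy sketch of the agent-per-entity idea in the abstract might look like the following; the class, protocol, and thresholds are invented for illustration and do not reflect the patented architecture's actual interfaces.

```python
# Toy sketch of agents assigned to microgrid entities, loosely after the
# architecture the abstract describes (names and protocol are hypothetical).

class Agent:
    def __init__(self, entity_id, kind):
        self.entity_id = entity_id
        self.kind = kind          # "load", "source", "storage", or "switch"

    def step(self, bus_frequency_hz):
        # Trivial stand-in for a control protocol: shed load if frequency sags.
        if self.kind == "load" and bus_frequency_hz < 59.5:
            return f"{self.entity_id}: shedding load"
        return f"{self.entity_id}: nominal"

agents = [Agent("PV-1", "source"), Agent("HVAC-3", "load"), Agent("BATT-2", "storage")]
for a in agents:
    print(a.step(bus_frequency_hz=59.3))
```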

    1. System and method for controlling power consumption in a computer system based on user satisfaction

      DOE Patents [OSTI]

      Yang, Lei; Dick, Robert P; Chen, Xi; Memik, Gokhan; Dinda, Peter A; Shy, Alex; Ozisikyilmaz, Berkin; Mallik, Arindam; Choudhary, Alok

      2014-04-22

      Systems and methods for controlling power consumption in a computer system. For each of a plurality of interactive applications, the method changes a frequency at which a processor of the computer system runs, receives an indication of user satisfaction, determines a relationship between the changed frequency and the user satisfaction of the interactive application, and stores the determined relationship information. The determined relationship can distinguish between different users and different interactive applications. A frequency may be selected from the discrete frequencies at which the processor of the computer system runs based on the determined relationship information for a particular user and a particular interactive application running on the processor of the computer system. The processor may be adapted to run at the selected frequency.
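
The control loop in the abstract (observe satisfaction at different frequencies per user and application, then pick a frequency from the discrete set) can be sketched with a simple lookup table; the data, threshold, and function names below are assumptions for illustration, and no real frequency-scaling interface is touched.

```python
# Toy sketch of choosing a CPU frequency from recorded frequency -> satisfaction
# observations per (user, application), loosely after the patent abstract.
# The data and threshold are made up; no real DVFS interface is used.

observations = {
    ("alice", "browser"): {2_000: 0.95, 1_200: 0.90, 800: 0.60},     # MHz -> satisfaction
    ("alice", "video_game"): {2_000: 0.97, 1_200: 0.70, 800: 0.30},
}

def pick_frequency(user, app, min_satisfaction=0.85):
    history = observations[(user, app)]
    ok = [f for f, s in history.items() if s >= min_satisfaction]
    # Lowest frequency that still keeps this user satisfied with this application.
    return min(ok) if ok else max(history)

print(pick_frequency("alice", "browser"))      # -> 1200
print(pick_frequency("alice", "video_game"))   # -> 2000
```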

    2. Noise tolerant spatiotemporal chaos computing

      SciTech Connect (OSTI)

      Kia, Behnam; Kia, Sarvenaz; Ditto, William L.; Lindner, John F.; Sinha, Sudeshna

      2014-12-01

      We introduce and design a noise tolerant chaos computing system based on a coupled map lattice (CML) and the noise reduction capabilities inherent in coupled dynamical systems. The resulting spatiotemporal chaos computing system is more robust to noise than a single map chaos computing system. In this CML based approach to computing, under the coupled dynamics, the local noise from different nodes of the lattice diffuses across the lattice, and it attenuates each other's effects, resulting in a system with less noise content and a more robust chaos computing architecture.
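
A standard diffusively coupled logistic-map lattice with additive noise gives a feel for the kind of CML the abstract builds on; the parameters and lattice size below are arbitrary and this is not the authors' specific computing architecture.

```python
# Generic diffusively coupled logistic-map lattice (CML) with additive noise,
# illustrating the dynamical substrate the abstract describes (parameters arbitrary).
import numpy as np

def cml_step(x, eps=0.3, r=4.0, noise=0.001, rng=np.random.default_rng(1)):
    f = r * x * (1.0 - x)                                   # local chaotic map
    coupled = (1 - eps) * f + 0.5 * eps * (np.roll(f, 1) + np.roll(f, -1))
    return np.clip(coupled + rng.normal(0.0, noise, x.size), 0.0, 1.0)

x = np.random.default_rng(0).random(64)                     # 64-site lattice
for _ in range(100):
    x = cml_step(x)
print(x[:5])
```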

    3. AMRITA -- A computational facility

      SciTech Connect (OSTI)

      Shepherd, J.E.; Quirk, J.J.

      1998-02-23

      Amrita is a software system for automating numerical investigations. The system is driven using its own powerful scripting language, Amrita, which facilitates both the composition and archiving of complete numerical investigations, as distinct from isolated computations. Once archived, an Amrita investigation can later be reproduced by any interested party, and not just the original investigator, for no cost other than the raw CPU time needed to parse the archived script. In fact, this entire lecture can be reconstructed in such a fashion. To do this, the script: constructs a number of shock-capturing schemes; runs a series of test problems, generates the plots shown; outputs the LATEX to typeset the notes; performs a myriad of behind-the-scenes tasks to glue everything together. Thus Amrita has all the characteristics of an operating system and should not be mistaken for a common-or-garden code.

    4. Computer memory management system

      DOE Patents [OSTI]

      Kirk, III, Whitson John

      2002-01-01

      A computer memory management system utilizing a memory structure system of "intelligent" pointers in which information related to the use status of the memory structure is designed into the pointer. Through this pointer system, the present invention provides essentially automatic memory management (often referred to as garbage collection) by allowing relationships between objects to have definite memory management behavior through use of a coding protocol which describes when relationships should be maintained and when they should be broken. In one aspect, the system allows automatic breaking of strong links to facilitate object garbage collection, coupled with relationship adjectives which define deletion of associated objects. In another aspect, the present invention includes simple-to-use infinite undo/redo functionality, in that it has the capability, through a simple function call, to undo all of the changes made to a data model since the previous "valid state" was noted.
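
      The strong-link/weak-link idea can be illustrated with a short Python sketch; this is an editor's analogy using the standard weakref module, not the patented pointer design.

          # Editor's sketch: a strong link keeps its target alive; a weak link does
          # not, so breaking strong links lets garbage collection proceed.
          import gc
          import weakref

          class Node:
              def __init__(self, name):
                  self.name = name
                  self.strong = []   # ownership links: they keep targets alive
                  self.weak = []     # observation links: they do not

          parent = Node("parent")
          child = Node("child")
          parent.strong.append(child)
          parent.weak.append(weakref.ref(child))

          parent.strong.clear()            # break the strong link ...
          del child                        # ... and drop the local name as well
          gc.collect()
          print(parent.weak[0]() is None)  # the child has been collected -> True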

    5. The Macolumn - the Mac gets geophysical. [A review of geophysical software for the Apple Macintosh computer]

      SciTech Connect (OSTI)

      Busbey, A.B.

      1990-02-01

      Seismic Processing Workshop, a program by Parallel Geosciences of Austin, TX, is discussed in this column. The program is a high-speed, interactive seismic processing and computer analysis system for the Apple Macintosh II family of computers. Also reviewed in this column are three products from Wilkerson Associates of Champaign, IL. SubSide is an interactive program for basin subsidence analysis; MacFault and MacThrustRamp are programs for modeling faults.

    6. Running Interactive Batch Jobs

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      if the node of choice is not immediately available. Start an interactive session in the debug queue: qsh -l debug1 -now no or qlogin -l debug1 -now no. This is useful when the cluster...

    7. Computer modeling of the global warming effect

      SciTech Connect (OSTI)

      Washington, W.M.

      1993-12-31

      The state of knowledge of global warming will be presented and two aspects examined: observational evidence and a review of the state of computer modeling of climate change due to anthropogenic increases in greenhouse gases. Observational evidence, indeed, shows global warming, but it is difficult to prove that the changes are unequivocally due to the greenhouse-gas effect. Although observational measurements of global warming are subject to "correction," researchers are showing consistent patterns in their interpretation of the data. Since the 1960s, climate scientists have been making their computer models of the climate system more realistic. Models started as atmospheric models and, through the addition of oceans, surface hydrology, and sea-ice components, they then became climate-system models. Because of computer limitations and the limited understanding of the degree of interaction of the various components, present models require substantial simplification. Nevertheless, in their present state of development climate models can reproduce most of the observed large-scale features of the real system, such as wind, temperature, precipitation, ocean current, and sea-ice distribution. The use of supercomputers to advance the spatial resolution and realism of earth-system models will also be discussed.

    8. Laser Plasma Interactions

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Understanding and controlling laser produced plasmas for fusion and basic science. Contacts: David Montgomery, (505) 665-7994; John Kline, (505) 667-7062. Thomson scattering is widely used to measure plasma temperature, density, and flow velocity in laser-produced plasmas at Trident, and is also used to detect plasma waves driven by unstable and nonlinear processes. A typical configuration uses a low intensity laser beam (2nd, 3rd, or 4th

    9. Human-machine interactions

      DOE Patents [OSTI]

      Forsythe, J. Chris (Sandia Park, NM); Xavier, Patrick G. (Albuquerque, NM); Abbott, Robert G. (Albuquerque, NM); Brannon, Nathan G. (Albuquerque, NM); Bernard, Michael L. (Tijeras, NM); Speed, Ann E. (Albuquerque, NM)

      2009-04-28

      Digital technology utilizing a cognitive model based on human naturalistic decision-making processes, including pattern recognition and episodic memory, can reduce the dependency of human-machine interactions on the abilities of a human user and can enable a machine to more closely emulate human-like responses. Such a cognitive model can enable digital technology to use cognitive capacities fundamental to human-like communication and cooperation to interact with humans.

    10. Radionuclide Interaction and Transport in Representative Geologic Media |

      Energy Savers [EERE]

      The report presents information related to the development of a fundamental understanding of disposal-system performance in a range of environments for potential wastes that could arise from future nuclear fuel cycle alternatives. It addresses selected aspects of the development of computational modeling capability for the

    11. Intro to computer programming, no computer required! | Argonne Leadership

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Author: Laura Wolf, January 6, 2016. Pairing the volunteers with interested schools was the easy part. School administrators and teachers alike were delighted to have Argonne National Laboratory volunteers visit and help guide their Hour of Code activities last December. In all, Argonne's Educational Programs department helped place 44 volunteers in Chicago

    12. Other World Computing | Open Energy Information

      Open Energy Info (EERE)

      Name: Other World Computing. Facility: Other World Computing. Sector: Wind energy. Facility Type: Community Wind. Facility Status: In Service...

    13. CLAMR (Compute Language Adaptive Mesh Refinement)

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      CLAMR (Compute Language Adaptive Mesh Refinement) is being developed as a DOE...

    14. Computer_Vision

      Energy Science and Technology Software Center (OSTI)

      2002-10-04

      The Computer_Vision software performs object recognition using a novel multi-scale characterization and matching algorithm. To understand the multi-scale characterization and matching software, it is first necessary to understand some details of the Computer Vision (CV) Project. This project has focused on providing algorithms and software that provide an end-to-end toolset for image processing applications. At a high level, this end-to-end toolset focuses on 7 key steps. The first steps are geometric transformations. 1) Image Segmentation. This step essentially classifies pixels in the input image as either being of interest or not of interest. We have also used GENIE segmentation output for this Image Segmentation step. 2) Contour Extraction (patent submitted). This takes the output of Step 1 and extracts contours for the blobs consisting of pixels of interest. 3) Constrained Delaunay Triangulation. This is a well-known geometric transformation that creates triangles inside the contours. 4) Chordal Axis Transform (CAT). This patented geometric transformation takes the triangulation output from Step 3 and creates a concise and accurate structural representation of a contour. From the CAT, we create a linguistic string, with associated metrical information, that provides a detailed structural representation of a contour. 5) Normalization. This takes an attributed linguistic string output from Step 4 and balances it. This ensures that the linguistic representation accurately represents the major sections of the contour. Steps 6 and 7 are implemented by the multi-scale characterization and matching software. 6) Multi-scale Characterization. This takes as input the attributed linguistic string output from Normalization. Rules from a context-free grammar are applied in reverse to create a tree-like representation for each contour. For example, one of the grammar's rules is L -> (LL). When an (LL) is seen in a string, a parent node is created that points to the four child symbols '(', 'L', 'L', and ')'. Levels in the tree can then be thought of as coarser (towards the root) or finer (towards the leaves) representations of the same contours. 7) Multi-scale Matching. Having a multi-scale characterization allows us to compare objects at a coarser level before matching at finer levels of detail. Matching at a coarser level not only increases the speed of the matching process (you're comparing fewer symbols), but also increases accuracy, since small variations along contours do not significantly detract from two objects' similarity.
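
      An editor's sketch of the reverse grammar reduction in Step 6, using only the L -> (LL) rule quoted above; the input string is made up for illustration.

          # Editor's sketch: each occurrence of "(LL)" in the contour string is
          # collapsed into a single parent symbol "L", giving coarser levels.
          def reduce_once(symbols):
              out, i = [], 0
              while i < len(symbols):
                  if symbols[i:i + 4] == list("(LL)"):
                      out.append("L")      # parent node replaces its four children
                      i += 4
                  else:
                      out.append(symbols[i])
                      i += 1
              return out

          level = list("((LL)(LL))")
          while "(" in level:              # keep coarsening while the rule applies
              print("".join(level))
              level = reduce_once(level)
          print("".join(level))            # coarsest representation of the contour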

    15. Setting up boundary conditions for soil-structure interaction problems with DYNALK (a link from TENSOR to DYNA3D)

      SciTech Connect (OSTI)

      Thigpen, L.; Peterson, J.C.

      1983-08-01

      This report provides instructions on the use of the DYNALK computer program to generate boundary conditions for a soil island used in soil-structure interaction problems. DYNALK converts temporal motions from 2-D TENSOR calculations into appropriate three-dimensional boundary conditions for a DYNA3D soil-structure interaction problem. The program is operational on the CRAY-1 computer.

    16. Application of the Computer Program SASSI for Seismic SSI Analysis of WTP Facilities

      Office of Environmental Management (EM)

      Application of the Computer Program SASSI for Seismic SSI Analysis of WTP Facilities. Farhang Ostadan (BNI) and Raman Venkata (DOE-WTP-WED); presented by Lisa Anderson (BNI), US DOE NPH Workshop, October 25, 2011. Background: the SASSI computer code was developed in the early 1980's to solve Soil-Structure-Interaction (SSI) problems. The original version of SASSI was

    17. Computational Fluid Dynamics Library

      Energy Science and Technology Software Center (OSTI)

      2005-03-04

      CFDLib05 is the Los Alamos Computational Fluid Dynamics LIBrary. This is a collection of hydrocodes using a common data structure and a common numerical method, for problems ranging from single-field, incompressible flow, to multi-species, multi-field, compressible flow. The data structure is multi-block, with a so-called structured grid in each block. The numerical method is a Finite-Volume scheme employing a state vector that is fully cell-centered. This means that the integral form of the conservation laws is solved on the physical domain that is represented by a mesh of control volumes. The typical control volume is an arbitrary quadrilateral in 2D and an arbitrary hexahedron in 3D. The Finite-Volume scheme is for time-unsteady flow and remains well coupled by means of time and space centered fluxes; if a steady state solution is required, the problem is integrated forward in time until the user is satisfied that the state is stationary.
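
      For readers unfamiliar with cell-centered finite-volume schemes, here is a much simplified editor's sketch for 1-D linear advection; it is not CFDLib code, and the upwind flux and parameters are illustrative.

          # Editor's sketch of a cell-centered finite-volume update for 1-D linear
          # advection with a first-order upwind flux and periodic boundaries.
          import numpy as np

          nx, a, cfl = 100, 1.0, 0.5
          dx = 1.0 / nx
          dt = cfl * dx / a
          x = (np.arange(nx) + 0.5) * dx                    # cell centers
          u = np.where((x > 0.2) & (x < 0.4), 1.0, 0.0)     # cell-averaged state

          for _ in range(100):
              flux = a * u                                  # upwind face flux for a > 0
              u = u - dt / dx * (flux - np.roll(flux, 1))   # conservative update

          print("total 'mass' is conserved:", round(float(u.sum() * dx), 6))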

    18. Bioinformatics Computing Consultant Position Available

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Bioinformatics Computing Consultant Position Available Bioinformatics Computing Consultant Position Available October 31, 2011 by Katie Antypas NERSC and the Joint Genome Institute (JGI) are searching for two individuals who can help biologists exploit advanced computing platforms. JGI provides production sequencing and genomics for the Department of Energy. These activities are critical to the DOE missions in areas related to clean energy generation and environmental characterization and

    19. Parallel Computing Summer Research Internship

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      The Parallel Computing Summer Research Internship creates next-generation leaders in HPC research and applications development. Contacts: Program Co-Leads Robert (Bob) Robey, Gabriel Rockefeller, and Hai Ah Nam; Professional Staff Assistant Nickole Aguilar Garcia, (505) 665-3048. The Parallel Computing Summer Research Internship is an intense 10 week program aimed at providing students with a solid foundation in modern high performance

    20. computational-fluid-dynamics-training

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Computational fluid dynamics training courses (date and location):
      Advanced Hydraulic and Aerodynamic Analysis Using CFD: March 27-28, 2013, Argonne TRACC, Argonne, IL
      Computational Hydraulics and Aerodynamics using STAR-CCM+ for CFD Analysis: March 21-22, 2012, Argonne TRACC, Argonne, IL
      Computational Hydraulics and Aerodynamics using STAR-CCM+ for CFD Analysis: March 30-31, 2011, Argonne TRACC, Argonne, IL
      Computational Hydraulics for Transportation Workshop: September 23-24, 2009, Argonne TRACC, West Chicago, IL

    1. Careers | Argonne Leadership Computing Facility

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Careers at Argonne Looking for a unique opportunity to work at the forefront of high-performance computing? At the Argonne Leadership Computing Facility, we are helping to redefine what's possible in computational science. With some of the most powerful supercomputers in the world and a talented and diverse team of experts, we enable researchers to pursue groundbreaking discoveries that would otherwise not be possible. Check out our open positions below. For the most current listing of

    2. Bioinformatics Computing Consultant Position Available

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      You can read more about the positions and apply at jobs.lbl.gov: Bioinformatics High Performance Computing Consultant (job number: 73194) and Software Developer for High...

    3. Parallel Computing Summer Research Internship

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      should have basic experience with a scientific computing language, such as C, C++, or Fortran, and with the Linux operating system. Duration & Location: The program will last ten...

    4. Tukey | Argonne Leadership Computing Facility

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      The primary purpose of...

    5. QBox | Argonne Leadership Computing Facility

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      computers. Obtaining Qbox: http://eslab.ucdavis.edu/software/qbox. Building Qbox for Blue Gene/Q: Qbox requires the standard math libraries plus the Xerces-C http:...

    6. Advanced Simulation and Computing Program

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      The SSP mission is to analyze and predict the performance, safety, and reliability of nuclear weapons and certify their functionality. ASC works in partnership with computer ...

    7. Institutional computing (IC) information session

      SciTech Connect (OSTI)

      Koch, Kenneth R; Lally, Bryan R

      2011-01-19

      The LANL Institutional Computing Program (IC) will host an information session about the current state of unclassified Institutional Computing at Los Alamos, exciting plans for the future, and the current call for proposals for science and engineering projects requiring computing. Program representatives will give short presentations and field questions about the call for proposals and future planned machines, and discuss technical support available to existing and future projects. Los Alamos has started making a serious institutional investment in open computing available to our science projects, and that investment is expected to increase even more.

    8. A double-double/double-single computation package

      Energy Science and Technology Software Center (OSTI)

      2004-12-01

      The DDFUN/DSFUN software permits a new or existing Fortran-90 program to utilize double-double precision (approx. 31 digits) or double-single precision (approx. 14 digits) arithmetic. Double-double precision is required by a rapidly expanding body of scientific computations in physics and mathematics, for which the conventional 64-bit IEEE computer arithmetic (about 16 decimal digit accuracy) is not sufficient. Double-single precision permits users of systems that do not have hardware 64-bit IEEE arithmetic (such as some game systems) to perform arithmetic at a precision nearly as high as that of systems that do. Both packages run significantly faster than using multiple precision or arbitrary precision software for this purpose. The package includes an extensive set of low-level routines to perform high-precision arithmetic, including routines to calculate various algebraic and transcendental functions, such as square roots, sin, cos, exp, log and others. In addition, the package includes high-level translation facilities, so that Fortran programs can utilize these facilities by making only a few changes to conventional Fortran programs. In most cases, the only changes that are required are to change the type statements of variables that one wishes to be treated as multiple precision, plus a few other minor changes. The DDFUN package is similar in functionality to the double-double part of the QD package, which was previously written at LBNL. However, the DDFUN package is written exclusively in Fortran-90, thus avoiding difficulties that some users experience when using QD, which includes both Fortran-90 and C++ code.
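
      The double-double idea itself, carrying a value as an unevaluated sum of two machine doubles, can be sketched in a few lines; this is an editor's illustration in Python, not code from the Fortran-90 package, and the sample constant is only approximate.

          # Editor's sketch: an error-free two_sum plus a simplified double-double
          # addition that keeps the bits an ordinary float addition would drop.
          def two_sum(a, b):
              s = a + b
              bb = s - a
              err = (a - (s - bb)) + (b - bb)   # exact rounding error of a + b
              return s, err

          def dd_add(x, y):
              """Add two double-double numbers x = (xhi, xlo) and y = (yhi, ylo)."""
              s, e = two_sum(x[0], y[0])
              e += x[1] + y[1]
              return two_sum(s, e)

          one_third = (0.3333333333333333, 1.850371707708594e-17)  # roughly 1/3 split across two doubles
          print(dd_add(one_third, one_third))   # roughly 2/3 with extended precision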

    9. Achromatic Interaction Point Design

      SciTech Connect (OSTI)

      Guimei Wang,, Yaroslav Derbenev, S.Alex Bogacz, P. Chevtsov, Andre Afanaciev, Charles Ankenbrandt, Valentin Ivanov, Rolland P. Johnson

      2009-05-01

      Designers of high-luminosity energy-frontier muon colliders must provide strong beam focusing in the interaction regions. However, the construction of a strong, aberration-free beam focus is difficult and space consuming, and long straight sections generate an off-site radiation problem due to muon decay neutrinos that interact as they leave the surface of the earth. Without some way to mitigate the neutrino radiation problem, the maximum c.m. energy of a muon collider will be limited to about 3.5 TeV. A new concept for achromatic low beta design is being developed, in which the interaction region telescope and optical correction elements, are installed in the bending arcs. The concept, formulated analytically, combines space economy, a preventative approach to compensation for aberrations, and a reduction of neutrino flux concentration. An analytical theory for the aberration-free, low beta, spatially compact insertion is being developed.

    10. Dike/Drift Interactions

      SciTech Connect (OSTI)

      E. Gaffiney

      2004-11-23

      This report presents and documents the model components and analyses that represent potential processes associated with propagation of a magma-filled crack (dike) migrating upward toward the surface, intersection of the dike with repository drifts, flow of magma in the drifts, and post-magma emplacement effects on repository performance. The processes that describe upward migration of a dike and magma flow down the drift are referred to as the dike intrusion submodel. The post-magma emplacement processes are referred to as the post-intrusion submodel. Collectively, these submodels are referred to as a conceptual model for dike/drift interaction. The model components and analyses of the dike/drift interaction conceptual model provide the technical basis for assessing the potential impacts of an igneous intrusion on repository performance, including those features, events, and processes (FEPs) related to dike/drift interaction (Section 6.1).

    11. History of Weak Interactions

      DOE R&D Accomplishments [OSTI]

      Lee, T. D.

      1970-07-01

      While the phenomenon of beta-decay was discovered near the end of the last century, the notion that the weak interaction forms a separate field of physical forces evolved rather gradually. This became clear only after the experimental discoveries of other weak reactions such as muon-decay, muon-capture, etc., and the theoretical observation that all these reactions can be described by approximately the same coupling constant, thus giving rise to the notion of a universal weak interaction. Only then did one slowly recognize that the weak interaction force forms an independent field, perhaps on the same footing as the gravitational force, the electromagnetic force, and the strong nuclear and sub-nuclear forces.

    12. Interactions of Hydrogen Isotopes and Oxides with Metal Tubes

      SciTech Connect (OSTI)

      Glen R. Longhurst

      2008-08-01

      Understanding and accounting for interaction of hydrogen isotopes and their oxides with metal surfaces is important for persons working with tritium systems. Reported data from several investigators have shown that the processes of oxidation, adsorption, absorption, and permeation are all coupled and interactive. A computer model has been developed for predicting the interaction of hydrogen isotopes and their corresponding oxides in a flowing carrier gas stream with the walls of a metallic tube, particularly at low hydrogen concentrations. An experiment has been constructed to validate the predictive model. Predictions from modeling lead to unexpected experiment results.

    13. Albeni Falls Wildlife Mitigation Project; Idaho Department of Fish and Game 2007 Final Annual Report.

      SciTech Connect (OSTI)

      Cousins, Katherine

      2009-04-03

      The Idaho Department of Fish and Game maintained a total of about 2,743 acres of wildlife mitigation habitat in 2007, and protected another 921 acres. The total wildlife habitat mitigation debt has been reduced by approximately two percent (598.22 HU) through the Department's mitigation activities in 2007. Implementation of the vegetative monitoring and evaluation program continued across protected lands. For the next funding cycle, the IDFG is considering a package of restoration projects and habitat improvements, conservation easements, and land acquisitions in the project area.

    14. Manufacturing Energy and Carbon Footprint - Sector: Computer...

      Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

      Computers, Electronics and Electrical Equipment (NAICS 334, 335) Process Energy ... Carbon Footprint Sector: Computers, Electronics and Electrical Equipment (NAICS 334, ...

    15. Computer-assisted data acquisition on Josephson junctions

      SciTech Connect (OSTI)

      Pagano, S.; Costabile, G.; Fedullo, V.

      1989-09-01

      An automatic digital data-acquisition system for the test and characterization of superconducting Josephson tunnel junctions is presented. The key feature is represented by the high degree of interaction of the measurement system with the device under test. This is accomplished by an iterated sequence of data acquisitions, automatic analysis, and subsequent modifications of the control signals in the device. In this way, the basic calibration and the value of the relevant quantities involved with the Josephson junction are automatically determined. A connection with a host computer makes possible more complex data analysis, while the full control of the experiment by a dedicated computer allows the operator to perform nonroutine procedures.
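
      The iterated acquire/analyze/adjust cycle can be sketched as a simple feedback loop; the instrument model, gain, and convergence test below are hypothetical stand-ins, not the actual measurement system.

          # Editor's sketch of an iterated acquisition/analysis/adjustment cycle.
          def acquire(bias):
              return 2.0 * bias + 0.1           # toy stand-in for reading the device

          def analyse(samples):
              return sum(samples) / len(samples)

          bias, target = 0.0, 1.0
          for step in range(20):
              reading = analyse([acquire(bias) for _ in range(8)])
              error = target - reading
              if abs(error) < 1e-3:             # calibration has converged
                  break
              bias += 0.4 * error               # adjust the control signal and repeat
          print(f"converged after {step} iterations at bias {bias:.4f}")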

    16. Computer-based and web-based radiation safety training

      SciTech Connect (OSTI)

      Owen, C., LLNL

      1998-03-01

      The traditional approach to delivering radiation safety training has been to provide a stand-up lecture of the topic, with the possible aid of video, and to repeat the same material periodically. New approaches to meeting training requirements are needed to address the advent of flexible work hours and telecommuting, and to better accommodate individuals learning at their own pace. Computer-based and web-based radiation safety training can provide this alternative. Computer-based and web-based training is an interactive form of learning that the student controls, resulting in enhanced and focused learning at a time most often chosen by the student.

    17. Computation Directorate 2008 Annual Report

      SciTech Connect (OSTI)

      Crawford, D L

      2009-03-25

      Whether a computer is simulating the aging and performance of a nuclear weapon, the folding of a protein, or the probability of rainfall over a particular mountain range, the necessary calculations can be enormous. Our computers help researchers answer these and other complex problems, and each new generation of system hardware and software widens the realm of possibilities. Building on Livermore's historical excellence and leadership in high-performance computing, Computation added more than 331 trillion floating-point operations per second (teraFLOPS) of power to LLNL's computer room floors in 2008. In addition, Livermore's next big supercomputer, Sequoia, advanced ever closer to its 2011-2012 delivery date, as architecture plans and the procurement contract were finalized. Hyperion, an advanced technology cluster test bed that teams Livermore with 10 industry leaders, made a big splash when it was announced during Michael Dell's keynote speech at the 2008 Supercomputing Conference. The Wall Street Journal touted Hyperion as a 'bright spot amid turmoil' in the computer industry. Computation continues to measure and improve the costs of operating LLNL's high-performance computing systems by moving hardware support in-house, by measuring causes of outages to apply resources asymmetrically, and by automating most of the account and access authorization and management processes. These improvements enable more dollars to go toward fielding the best supercomputers for science, while operating them at less cost and greater responsiveness to the customers.

    18. Nucleon-nucleon interactions

      SciTech Connect (OSTI)

      Wiringa, R.B.

      1996-12-31

      Nucleon-nucleon interactions are at the heart of nuclear physics, bridging the gap between QCD and the effective interactions appropriate for the shell model. We discuss the current status of NN data sets, partial-wave analyses, and some of the issues that go into the construction of potential models. Our remarks are illustrated by reference to the Argonne v18 potential, one of a number of new potentials that fit elastic nucleon-nucleon data up to 350 MeV with a χ² per datum near 1. We also discuss the related issues of three-nucleon potentials, two-nucleon charge and current operators, and relativistic effects. We give some examples of calculations that can be made using these realistic descriptions of NN interactions. We conclude with some remarks on how our empirical knowledge of NN interactions may help constrain models at the quark level, and hence models of nucleon structure.

    19. Size-exclusion chromatography system for macromolecular interaction analysis

      DOE Patents [OSTI]

      Stevens, Fred J.

      1988-01-01

      A low pressure, microcomputer controlled system employing high performance liquid chromatography (HPLC) allows for precise analysis of the interaction of two reversibly associating macromolecules such as proteins. Since a macromolecular complex migrates faster than its components during size-exclusion chromatography, the difference between the elution profile of a mixture of two macromolecules and the summation of the elution profiles of the two components provides a quantifiable indication of the degree of molecular interaction. This delta profile is used to qualitatively reveal the presence or absence of significant interaction or to rank the relative degree of interaction in comparing samples and, in combination with a computer simulation, is further used to quantify the magnitude of the interaction in an arrangement wherein a microcomputer is coupled to analytical instrumentation in a novel manner.
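
      A minimal sketch of the delta-profile computation described above, using made-up Gaussian elution peaks rather than real chromatograms; all peak positions, widths, and amplitudes are assumptions.

          # Editor's sketch: subtract the summed elution profiles of the components
          # from the mixture profile; a nonzero delta profile indicates interaction.
          import numpy as np

          t = np.linspace(0.0, 10.0, 500)   # elution time (or volume) axis

          def peak(center, height, width=0.4):
              return height * np.exp(-((t - center) / width) ** 2)

          profile_A = peak(6.0, 1.0)
          profile_B = peak(7.0, 0.8)
          # In the mixture, part of A and B elutes earlier as a faster-migrating complex.
          profile_mix = 0.3 * peak(4.5, 1.5) + 0.7 * (profile_A + profile_B)

          delta = profile_mix - (profile_A + profile_B)
          print("peak magnitude of delta profile:", round(float(np.abs(delta).max()), 3))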

    20. Hellsgate Big Game Winter Range Wildlife Mitigation Project : Annual Report 2008.

      SciTech Connect (OSTI)

      Whitney, Richard P.; Berger, Matthew T.; Rushing, Samuel; Peone, Cory

      2009-01-01

      The Hellsgate Big Game Winter Range Wildlife Mitigation Project (Hellsgate Project) was proposed by the Confederated Tribes of the Colville Reservation (CTCR) as partial mitigation for hydropower's share of the wildlife losses resulting from Chief Joseph and Grand Coulee Dams. At present, the Hellsgate Project protects and manages 57,418 acres (approximately 90 square miles) for the biological requirements of managed wildlife species; most are located on or near the Columbia River (Lake Rufus Woods and Lake Roosevelt) and surrounded by Tribal land. To date we have acquired about 34,597 habitat units (HUs) toward a total of 35,819 HUs lost from original inundation due to hydropower development. In addition to the remaining 1,237 HUs left unmitigated, 600 HUs from the Washington Department of Fish and Wildlife that were traded to the Colville Tribes and 10 secure nesting islands are also yet to be mitigated. This annual report for 2008 describes the management activities of the Hellsgate Big Game Winter Range Wildlife Mitigation Project (Hellsgate Project) during the past year.

    1. Computational Aerodynamic Analysis of Offshore Upwind and Downwind Turbines

      DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

      Zhao, Qiuying; Sheng, Chunhua; Afjeh, Abdollah

      2014-01-01

      Aerodynamic interactions of the model NREL 5 MW offshore horizontal axis wind turbines (HAWT) are investigated using a high-fidelity computational fluid dynamics (CFD) analysis. Four wind turbine configurations are considered: three-bladed upwind and downwind and two-bladed upwind and downwind configurations, which operate at two different rotor speeds of 12.1 and 16 RPM. In the present study, both steady and unsteady aerodynamic loads, such as the rotor torque, blade hub bending moment, and tower base bending moment, are evaluated in detail to provide an overall assessment of the different wind turbine configurations. Aerodynamic interactions between the rotor and tower are analyzed, including the rotor wake development downstream. The computational analysis provides insight into the aerodynamic performance of the upwind and downwind, two- and three-bladed horizontal axis wind turbines.

    2. Oak Ridge National Laboratory - Computing and Computational Sciences

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Oak Ridge to acquire next generation supercomputer: The U.S. Department of Energy's (DOE) Oak Ridge Leadership Computing Facility (OLCF) has signed a contract with IBM to bring a next-generation supercomputer to Oak Ridge National Laboratory (ORNL). The OLCF's new hybrid CPU/GPU computing system, Summit, will be delivered in 2017.

    3. Introduction to computers: Reference guide

      SciTech Connect (OSTI)

      Ligon, F.V.

      1995-04-01

      The "Introduction to Computers" program establishes formal partnerships with local school districts and community-based organizations, introduces computer literacy to precollege students and their parents, and encourages students to pursue Scientific, Mathematical, Engineering, and Technical careers (SET). Hands-on assignments are given in each class, reinforcing the lesson taught. In addition, the program is designed to broaden the knowledge base of teachers in scientific/technical concepts, and Brookhaven National Laboratory continues to act as a liaison, offering educational outreach to diverse community organizations and groups. This manual contains the teacher's lesson plans and the student documentation for this introduction to computers course.

    4. Electron: Cluster interactions

      SciTech Connect (OSTI)

      Scheidemann, A.A.; Kresin, V.V.; Knight, W.D.

      1994-02-01

      Beam depletion spectroscopy has been used to measure absolute total inelastic electron-sodium cluster collision cross sections in the energy range from E ≈ 0.1 to E ≈ 6 eV. The investigation focused on the closed-shell clusters Na8, Na20, and Na40. The measured cross sections show an increase for the lowest collision energies, where electron attachment is the primary scattering channel. The electron attachment cross section can be understood in terms of Langevin scattering, connecting this measurement with the polarizability of the cluster. For energies above the dissociation energy the measured electron-cluster cross section is energy independent, thus defining an electron-cluster interaction range. This interaction range increases with the cluster size.

    5. Interactive optical panel

      DOE Patents [OSTI]

      Veligdan, James T. (Manorville, NY)

      1995-10-03

      An interactive optical panel assembly 34 includes an optical panel 10 having a plurality of ribbon optical waveguides 12 stacked together with opposite ends thereof defining panel first and second faces 16, 18. A light source 20 provides an image beam 22 to the panel first face 16 for being channeled through the waveguides 12 and emitted from the panel second face 18 in the form of a viewable light image 24a. A remote device 38 produces a response beam 40 over a discrete selection area 36 of the panel second face 18 for being channeled through at least one of the waveguides 12 toward the panel first face 16. A light sensor 42,50 is disposed across a plurality of the waveguides 12 for detecting the response beam 40 therein for providing interactive capability.

    6. Interactive optical panel

      DOE Patents [OSTI]

      Veligdan, J.T.

      1995-10-03

      An interactive optical panel assembly includes an optical panel having a plurality of ribbon optical waveguides stacked together with opposite ends thereof defining panel first and second faces. A light source provides an image beam to the panel first face for being channeled through the waveguides and emitted from the panel second face in the form of a viewable light image. A remote device produces a response beam over a discrete selection area of the panel second face for being channeled through at least one of the waveguides toward the panel first face. A light sensor is disposed across a plurality of the waveguides for detecting the response beam therein for providing interactive capability. 10 figs.

    7. Power throttling of collections of computing elements

      DOE Patents [OSTI]

      Bellofatto, Ralph E. (Ridgefield, CT); Coteus, Paul W. (Yorktown Heights, NY); Crumley, Paul G. (Yorktown Heights, NY); Gara, Alan G. (Mount Kisco, NY); Giampapa, Mark E. (Irvington, NY); Gooding, Thomas M. (Rochester, MN); Haring, Rudolf A. (Cortlandt Manor, NY); Megerian, Mark G. (Rochester, MN); Ohmacht, Martin (Yorktown Heights, NY); Reed, Don D. (Mantorville, MN); Swetz, Richard A. (Mahopac, NY); Takken, Todd (Brewster, NY)

      2011-08-16

      An apparatus and method for controlling power usage in a computer includes a plurality of computers communicating with a local control device, and a power source supplying power to the local control device and the computer. A plurality of sensors communicate with the computer for ascertaining power usage of the computer, and a system control device communicates with the computer for controlling power usage of the computer.
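
      A toy sketch of budget-based throttling for a collection of compute elements; the sensor readings and the uniform scaling policy are editor's assumptions, not the patented mechanism.

          # Editor's sketch: if measured total power exceeds the budget, scale every
          # element's duty factor down uniformly; otherwise run at full speed.
          budget_watts = 900.0
          sensors = {"node0": 320.0, "node1": 310.0, "node2": 330.0}   # measured draw

          def throttle(sensors, budget):
              total = sum(sensors.values())
              if total <= budget:
                  return {n: 1.0 for n in sensors}   # no throttling needed
              scale = budget / total                 # uniform slow-down factor
              return {n: scale for n in sensors}

          print(throttle(sensors, budget_watts))     # about 0.94 duty for each node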

    8. ComPASS Present and Future Computing Requirements

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      ComPASS Present and Future Computing Requirements. Panagiotis Spentzouris (Fermilab) for the ComPASS collaboration. NERSC BER Requirements for 2017, September 11-12, 2012, Rockville, MD. Accelerators for High Energy Physics: At the Energy Frontier, high-energy particle beam collisions seek to uncover new phenomena, such as the origin of mass, the nature of dark matter, and extra dimensions of space. At the Intensity Frontier, high-flux beams enable exploration of neutrino interactions, to answer

    9. ELEMENTARY PARTICLE INTERACTIONS

      SciTech Connect (OSTI)

      EFREMENKO, YURI; HANDLER, THOMAS; KAMYSHKOV, YURI; SIOPSIS, GEORGE; SPANIER, STEFAN

      2013-07-30

      The High-Energy Elementary Particle Interactions group at UT during the last three years worked on the following directions and projects: Collider-based Particle Physics; Neutrino Physics, particularly participation in the NOvA, Double Chooz, and KamLAND neutrino experiments; and Theory, including scattering amplitudes, quark-gluon plasma, holographic cosmology, holographic superconductors, charge density waves, striped superconductors, and holographic FFLO states.

    10. Interactive Activity Detection Tools

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Tools for detecting specified activities in video data provide a key intelligence capability. High numbers of false alarms, however, reduce tool effectiveness and analyst patience. User feedback reduces false alarms: this project will reduce the number of false alarms generated by activity detection tools (including single vehicle start/stop, multi-vehicle meetings, and coordinated driving patterns) by exploiting user feedback in a

    11. Interactive Comparative Analysis

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      We can learn the correlations between sensors and modalities that differentiate activities (or operating modes) by using transfer learning. Our new approach to data fusion and signature discovery has a number of advantages and applications: finding correlations that differentiate datasets requires less data than finding correlations that explain datasets; the differences between datasets are smaller in number, and often easier to

    12. Species interactions differ in their genetic robustness

      DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

      Chubiz, Lon M.; Granger, Brian R.; Segre, Daniel; Harcombe, William R.

      2015-04-14

      Conflict and cooperation between bacterial species drive the composition and function of microbial communities. Stability of these emergent properties will be influenced by the degree to which species' interactions are robust to genetic perturbations. We use genome-scale metabolic modeling to computationally analyze the impact of genetic changes when Escherichia coli and Salmonella enterica compete, or cooperate. We systematically knocked out in silico each reaction in the metabolic network of E. coli to construct all 2583 mutant stoichiometric models. Then, using a recently developed multi-scale computational framework, we simulated the growth of each mutant E. coli in the presence of S. enterica. The type of interaction between species was set by modulating the initial metabolites present in the environment. We found that the community was most robust to genetic perturbations when the organisms were cooperating. Species ratios were more stable in the cooperative community, and community biomass had equal variance in the two contexts. Additionally, the number of mutations that have a substantial effect is lower when the species cooperate than when they are competing. In contrast, when mutations were added to the S. enterica network the system was more robust when the bacteria were competing. These results highlight the utility of connecting metabolic mechanisms and studies of ecological stability. Cooperation and conflict alter the connection between genetic changes and properties that emerge at higher levels of biological organization.
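
      The knockout screen can be illustrated with a toy stand-in for a metabolic model (a dictionary of reaction contributions); every name, value, and threshold below is hypothetical and unrelated to the actual genome-scale framework used in the study.

          # Editor's sketch of an in-silico single-reaction knockout screen.
          wild_type = {"rxn_%d" % i: w for i, w in enumerate([0.0, 0.1, 0.5, 1.2, 0.02])}

          def growth(model):
              return sum(model.values())        # toy proxy for simulated biomass flux

          def knock_out(model, rxn):
              mutant = dict(model)
              mutant[rxn] = 0.0                 # removing a reaction zeroes its flux
              return mutant

          base = growth(wild_type)
          big_effect = [r for r in wild_type
                        if abs(growth(knock_out(wild_type, r)) - base) > 0.05 * base]
          print("knockouts with a substantial effect:", big_effect)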

    13. Quantum Computing: Solving Complex Problems

      ScienceCinema (OSTI)

      DiVincenzo, David [IBM Watson Research Center]

      2009-09-01

      One of the motivating ideas of quantum computation was that there could be a new kind of machine that would solve hard problems in quantum mechanics. There has been significant progress towards the experimental realization of these machines (which I will review), but there are still many questions about how such a machine could solve computational problems of interest in quantum physics. New categorizations of the complexity of computational problems have now been invented to describe quantum simulation. The bad news is that some of these problems are believed to be intractable even on a quantum computer, falling into a quantum analog of the NP class. The good news is that there are many other new classifications of tractability that may apply to several situations of physical interest.

    14. SSRL Computer Account Request Form

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      SSRL/LCLS Computer Account Request Form August 2009 Fill in this form and sign the security statement mentioned at the bottom of this page to obtain an account. Your Name: __________________________________________________________ Institution: ___________________________________________________________ Mailing Address: ______________________________________________________ Email Address: _______________________________________________________ Telephone:

    15. Computing at SSRL Home Page

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      contents you are looking for have moved. You will be redirected to the new location automatically in 5 seconds. Please bookmark the correct page at http://www-ssrl.slac.stanford.edu/content/staff-resources/computer-networking-group

    16. Events | Argonne Leadership Computing Facility

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      2:00 PM: Finding Multiple Local Minima of Computationally Expensive Simulations. Jeffery Larson, Postdoctoral Appointee, MCS. Building 240, Room 4301.

    17. Present and Future Computing Requirements

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Cosmology SciDAC-3 Project: Ann Almgren (LBNL), Nick Gnedin (FNAL), Dave Higdon (LANL), Rob Ross (ANL), Martin White (UC Berkeley/LBNL). Large Scale Production Computing and Storage...

    18. SSRL Computer Account Request Form

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      SSRL/LCLS Computer Account Request Form, August 2009. Fill in this form and sign the security statement mentioned at the bottom of this page to obtain an account. Your Name:...

    19. Hanford general employee training: Computer-based training instructor's manual

      SciTech Connect (OSTI)

      Not Available

      1990-10-01

      The Computer-Based Training portion of the Hanford General Employee Training course is designed to be used in a classroom setting with a live instructor. Future references to "this course" refer only to the computer-based portion of the whole. This course covers the basic Safety, Security, and Quality issues that pertain to all employees of Westinghouse Hanford Company. The topics that are covered were taken from the recommendations and requirements for General Employee Training as set forth by the Institute of Nuclear Power Operations (INPO) in INPO 87-004, Guidelines for General Employee Training, applicable US Department of Energy orders, and Westinghouse Hanford Company procedures and policy. Besides presenting fundamental concepts, this course also contains information on resources that are available to assist students. It does this using Interactive Videodisk technology, which combines computer-generated text and graphics with audio and video provided by a videodisk player.

    20. Computer Assisted Virtual Environment - CAVE

      ScienceCinema (OSTI)

      Erickson, Phillip; Podgorney, Robert; Weingartner, Shawn; Whiting, Eric

      2014-06-09

      Research at the Center for Advanced Energy Studies is taking on another dimension with a 3-D device known as a Computer Assisted Virtual Environment. The CAVE uses projection to display high-end computer graphics on three walls and the floor. By wearing 3-D glasses to create depth perception and holding a wand to move and rotate images, users can delve into data.

    1. Secure computing for the 'Everyman'

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Secure computing for the 'Everyman' Secure computing for the 'Everyman' If implemented on a wide scale, quantum key distribution technology could ensure truly secure commerce, banking, communications and data transfer. September 2, 2014 This small device developed at Los Alamos National Laboratory uses the truly random spin of light particles as defined by laws of quantum mechanics to generate a random number for use in a cryptographic key that can be used to securely transmit information

    2. computational-hydraulics-for-transportation

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Transportation Workshop, Sept. 23-24, 2009, Argonne TRACC. Contact: Dr. Steven Lottes. The Transportation Research and Analysis Computing Center at Argonne National Laboratory will hold a workshop on the use of computational hydraulics for transportation applications. The goals of the workshop are: Bring together people who are using or would benefit from the use of high performance cluster

    3. Computational Sciences and Engineering Division

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      If you have questions or comments regarding any of our research and development activities, how to work with ORNL and the Computational Sciences and Engineering (CSE) Division, or the content of this website please contact one of the following people: If you have questions regarding CSE technologies and capabilities, job opportunities, working with ORNL and the CSE Division, intellectual property, etc., contact, Shaun S. Gleason, Ph.D. Division Director, Computational Sciences and Engineering

    4. Computational Sciences and Engineering Division

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      The Computational Sciences and Engineering Division is a major research division at the Department of Energy's Oak Ridge National Laboratory. CSED develops and applies creative information technology and modeling and simulation research solutions for National Security and National Energy Infrastructure needs. The mission of the Computational Sciences and Engineering Division is to enhance the country's capabilities in achieving important objectives in the areas of national defense, homeland

    5. Mira | Argonne Leadership Computing Facility

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Mira Ushers in a New Era of Scientific Supercomputing. As one of the fastest supercomputers, Mira, our 10-petaflops IBM Blue Gene/Q system, is capable of 10 quadrillion calculations per second. With this computing power, Mira can do in one day what it would take

    6. Cooley | Argonne Leadership Computing Facility

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      The primary purpose of Cooley is to analyze and visualize data produced on Mira. Equipped with state-of-the-art graphics processing units (GPUs), Cooley converts computational data from Mira

    7. LAMMPS | Argonne Leadership Computing Facility

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      LAMMPS Overview: LAMMPS is a general-purpose molecular dynamics software package for massively parallel computers. It is written in an exceptionally clean style that makes it one of the most popular codes for users to extend and

    8. Automatic computation of transfer functions

      DOE Patents [OSTI]

      Atcitty, Stanley; Watson, Luke Dale

      2015-04-14

      Technologies pertaining to the automatic computation of transfer functions for a physical system are described herein. The physical system is one of an electrical system, a mechanical system, an electromechanical system, an electrochemical system, or an electromagnetic system. A netlist in the form of a matrix comprises data that is indicative of elements in the physical system, values for the elements in the physical system, and structure of the physical system. Transfer functions for the physical system are computed based upon the netlist.
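
      As a small illustration of going from a netlist-like description to a transfer function, here is an editor's sketch for a series-R, shunt-C divider using SymPy; the element names are made up, and real netlist parsing and general modified nodal analysis are beyond this example.

          # Editor's sketch: derive H(s) for a series R / shunt C voltage divider
          # from a tiny hand-written "netlist".
          import sympy as sp

          s, R, C = sp.symbols("s R C", positive=True)
          netlist = [("R1", "in", "out", R),             # element, node+, node-, impedance
                     ("C1", "out", "gnd", 1 / (s * C))]  # capacitor as impedance 1/(sC)

          Z_series = netlist[0][3]
          Z_shunt = netlist[1][3]
          H = sp.simplify(Z_shunt / (Z_series + Z_shunt))  # voltage-divider rule
          print(H)                                         # -> 1/(C*R*s + 1)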

    9. Proposal for grid computing for nuclear applications

      SciTech Connect (OSTI)

      Idris, Faridah Mohamad; Ismail, Saaidi; Haris, Mohd Fauzi B.; Sulaiman, Mohamad Safuan B.; Aslan, Mohd Dzul Aiman Bin.; Samsudin, Nursuliza Bt.; Ibrahim, Maizura Bt.; Ahmad, Megat Harun Al Rashid B. Megat; Yazid, Hafizal B.; Jamro, Rafhayudi B.; Azman, Azraf B.; Rahman, Anwar B. Abdul; Ibrahim, Mohd Rizal B. Mamat; Muhamad, Shalina Bt. Sheik; Hassan, Hasni; Abdullah, Wan Ahmad Tajuddin Wan; Ibrahim, Zainol Abidin; Zolkapli, Zukhaimira; Anuar, Afiq Aizuddin; Norjoharuddeen, Nurfikri; and others

      2014-02-12

      The use of computer clusters for computational sciences, including computational physics, is vital as it provides computing power to crunch big numbers at a faster rate. In compute-intensive applications that require high resolution, such as Monte Carlo simulation, the use of computer clusters in a grid form, which supplies computational power to any node within the grid that needs computing power, has now become a necessity. In this paper, we describe how clusters running a specific application could use resources within the grid to speed up the computing process.

    10. Detection of molecular interactions

      DOE Patents [OSTI]

      Groves, John T. (Berkeley, CA); Baksh, Michael M. (Fremont, CA); Jaros, Michal (Brno, CH)

      2012-02-14

      A method and assay are described for measuring the interaction between a ligand and an analyte. The assay can include a suspension of colloidal particles that are associated with a ligand of interest. The colloidal particles are maintained in the suspension at or near a phase transition state from a condensed phase to a dispersed phase. An analyte to be tested is then added to the suspension. If the analyte binds to the ligand, a phase change occurs to indicate that the binding was successful.

    11. Four-boson system with short-range interactions (Journal Article) | SciTech

      Office of Scientific and Technical Information (OSTI)

      We consider the nonrelativistic four-boson system with short-range forces and large scattering length in an effective quantum mechanics approach. We construct the effective interaction potential at leading order in the large scattering length and compute the four-body binding energies using the Yakubovsky equations. Cutoff independence of the

    12. Computation Directorate 2007 Annual Report

      SciTech Connect (OSTI)

      Henson, V E; Guse, J A

      2008-03-06

      If there is a single word that both characterized 2007 and dominated the thoughts and actions of many Laboratory employees throughout the year, it is transition. Transition refers to the major shift that took place on October 1, when the University of California relinquished management responsibility for Lawrence Livermore National Laboratory (LLNL), and Lawrence Livermore National Security, LLC (LLNS), became the new Laboratory management contractor for the Department of Energy's (DOE's) National Nuclear Security Administration (NNSA). In the 55 years under the University of California, LLNL amassed an extraordinary record of significant accomplishments, clever inventions, and momentous contributions in the service of protecting the nation. This legacy provides the new organization with a built-in history, a tradition of excellence, and a solid set of core competencies from which to build the future. I am proud to note that in the nearly seven years I have had the privilege of leading the Computation Directorate, our talented and dedicated staff has made far-reaching contributions to the legacy and tradition we passed on to LLNS. Our place among the world's leaders in high-performance computing, algorithmic research and development, applications, and information technology (IT) services and support is solid. I am especially gratified to report that through all the transition turmoil, and it has been considerable, the Computation Directorate continues to produce remarkable achievements. Our most important asset--the talented, skilled, and creative people who work in Computation--has continued a long-standing Laboratory tradition of delivering cutting-edge science even in the face of adversity. The scope of those achievements is breathtaking, and in 2007, our accomplishments span an amazing range of topics. From making an important contribution to a Nobel Prize-winning effort to creating tools that can detect malicious codes embedded in commercial software; from expanding BlueGene/L, the world's most powerful computer, by 60% and using it to capture the most prestigious prize in the field of computing, to helping create an automated control system for the National Ignition Facility (NIF) that monitors and adjusts more than 60,000 control and diagnostic points; from creating a microarray probe that rapidly detects virulent high-threat organisms, natural or bioterrorist in origin, to replacing large numbers of physical computer servers with small numbers of virtual servers, reducing operating expense by 60%, the people in Computation have been at the center of weighty projects whose impacts are felt across the Laboratory and the DOE community. The accomplishments I just mentioned, and another two dozen or so, make up the stories contained in this report. While they form an exceptionally diverse set of projects and topics, it is what they have in common that excites me. They share the characteristic of being central, often crucial, to the mission-driven business of the Laboratory. Computational science has become fundamental to nearly every aspect of the Laboratory's approach to science and even to the conduct of administration. It is difficult to consider how we would proceed without computing, which occurs at all scales, from handheld and desktop computing to the systems controlling the instruments and mechanisms in the laboratories to the massively parallel supercomputers. The reasons for the dramatic increase in the importance of computing are manifest. 
Practical, fiscal, or political realities make the traditional approach to science, the cycle of theoretical analysis leading to experimental testing, leading to adjustment of theory, and so on, impossible, impractical, or forbidden. How, for example, can we understand the intricate relationship between human activity and weather and climate? We cannot test our hypotheses by experiment, which would require controlled use of the entire earth over centuries. It is only through extremely intricate, detailed computational simulation that we can test our theories, and simulati

    13. Novel QCD Effects from Initial and Final State Interactions

      SciTech Connect (OSTI)

      Brodsky, Stanley J.

      2007-09-12

      Initial-state and final-state interactions which are conventionally neglected in the parton model, have a profound effect in QCD hard-scattering reactions. The effects, which arise from gluon exchange between the active and spectator quarks, cause leading-twist single-spin asymmetries, diffractive deep inelastic scattering, diffractive hard hadronic reactions, and the breakdown of the Lam-Tung relation in Drell-Yan reactions. Diffractive deep inelastic scattering also leads to nuclear shadowing and non-universal antishadowing of nuclear structure functions through multiple scattering reactions in the nuclear target. Factorization-breaking effects are particularly important for hard hadron interactions since both initial-state and final-state interactions appear. Related factorization breaking effects can also appear in exclusive electroproduction reactions and in deeply virtual Compton scattering. None of the effects of initial-state and final-state interactions are incorporated in the light-front wavefunctions of the target hadron computed in isolation.

    14. Interactive graphical model building using telepresence and virtual reality

      SciTech Connect (OSTI)

      Cooke, C.; Stansfield, S.

      1993-10-01

      This paper presents a prototype system developed at Sandia National Laboratories to create and verify computer-generated graphical models of remote physical environments. The goal of the system is to create an interface between an operator and a computer vision system so that graphical models can be created interactively. Virtual reality and telepresence are used to allow interaction between the operator, computer, and remote environment. A stereo view of the remote environment is produced by two CCD cameras. The cameras are mounted on a three degree-of-freedom platform which is slaved to a mechanically-tracked, stereoscopic viewing device. This gives the operator a sense of immersion in the physical environment. The stereo video is enhanced by overlaying the graphical model onto it. Overlay of the graphical model onto the stereo video allows visual verification of graphical models. Creation of a graphical model is accomplished by allowing the operator to assist the computer in modeling. The operator controls a 3-D cursor to mark objects to be modeled. The computer then automatically extracts positional and geometric information about the object and creates the graphical model.

    15. Relativistic Laser-Matter Interactions

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Relativistic Laser-Matter Interactions: Enabling the next generation of intense particle accelerators. Contact: Juan Fernandez, (505) 667-6575. Short-pulse ion acceleration: The Trident facility is a world-class performer in the area of ion acceleration from laser-solid target interactions. Trident has demonstrated over 100 MeV protons at intensities of 8x10^20 W/cm^2 with efficiencies approaching 5%. These intense relativistic interactions can be diagnosed...

    16. Snowmass Computing Frontier I2: Distributed

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Snowmass Computing Frontier I2: Distributed Computing and Facility Infrastructures. Ken Bloom and Richard Gerber, July 31, 2013. Who we are: Ken Bloom, Associate Professor, Department of Physics and Astronomy, University of Nebraska-Lincoln; Co-PI for the Nebraska CMS Tier-2 computing facility; Tier-2 program manager and Deputy Manager of Software and Computing for US CMS; Tier-2...

    17. Yuri Alexeev | Argonne Leadership Computing Facility

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Yuri Alexeev, Assistant Computational Scientist, Argonne National Laboratory, 9700 South Cass Avenue, Building 240 - Rm. 1126, Argonne IL, 60439, 630-252-0157, yuri@alcf.anl.gov. Yuri Alexeev is an Assistant Computational Scientist at the Argonne Leadership Computing Facility where he applies his skills, knowledge and experience for using and enabling computational methods in chemistry and biology for high-performance computing on next-generation high-performance computers. Yuri is...

    18. Computer Science and Information Technology Student Pipeline

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Divisions recruit and hire promising undergraduate and graduate students in the areas of Computer Science, Information Technology, Management Information Systems, Computer...

    19. Applications for Postdoctoral Fellowship in Computational Science...

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      at Berkeley Lab due November 26 October 15, 2012 by Francesca Verdier Researchers in computer science, applied mathematics or any computational science discipline who have...

    20. Sandia National Laboratories: Careers: Computer Science

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Computer Science: Sandia's supercomputing research is reaching for tomorrow's exascale performance while solving real-world problems today. Computer scientists and...

    1. Personal Computing Equipment | Open Energy Information

      Open Energy Info (EERE)

      Personal Computing Equipment. List of Personal Computing Equipment Incentives. Retrieved from "http://en.openei.org/w/index.php?title=Persona...

    2. Advanced Materials Development through Computational Design ...

      Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

      Advanced Materials Development through Computational Design: Presentation given at the 2007 Diesel Engine-Efficiency & Emissions Research ...

    3. Thermoelectric Materials by Design, Computational Theory and...

      Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

      Thermoelectric Materials by Design, Computational Theory and Structure: 2009 DOE Hydrogen Program and Vehicle Technologies Program...

    4. Extreme Scale Computing, Co-Design

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Extreme Scale Computing, Co-design. Publications: Ramon Ravelo, Qi An, Timothy C. Germann, and Brad Lee Holian, ...

    5. Energy Storage Computational Tool | Open Energy Information

      Open Energy Info (EERE)

      Energy Storage Computational Tool. Tool Summary: Name: Energy Storage Computational Tool. Agency/Company/Organization: Navigant Consulting...

    6. Integrated Computational Materials Engineering (ICME) for Mg...

      Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

      lm012li2012o.pdf More Documents & Publications Integrated Computational Materials Engineering (ICME) for Mg: International Pilot Project Integrated Computational Materials...

    7. Integrated Computational Materials Engineering (ICME) for Mg...

      Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

      lm012li2011o.pdf More Documents & Publications Integrated Computational Materials Engineering (ICME) for Mg: International Pilot Project Integrated Computational Materials...

    8. Hybrid Rotaxanes: Interlocked Structures for Quantum Computing...

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      based on molecular magnets that may make them suitable as qubits for quantum computers. Chemistry Aids Quantum Computing Quantum bits or qubits are the fundamental...

    9. Compare Activities by Number of Computers

      U.S. Energy Information Administration (EIA) Indexed Site

      Office buildings contained the most computers per square foot, followed by education and outpatient health care buildings. Education buildings were the only type...

    10. Hybrid Rotaxanes: Interlocked Structures for Quantum Computing...

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Hybrid Rotaxanes: Interlocked Structures for Quantum Computing? Wednesday, 26 August 2009. Rotaxanes are...

    11. NERSC Enhances PDSF, Genepool Computing Capabilities

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      NERSC Enhances PDSF, Genepool Computing Capabilities. Linux cluster expansion speeds data access and analysis. January 3, 2014. Christmas came early for...

    12. Solvate Structures and Computational/Spectroscopic Characterization...

      Office of Scientific and Technical Information (OSTI)

      Solvate Structures and Computational/Spectroscopic Characterization of LiPF6 Electrolytes. Citation Details. In-Document Search. Title: Solvate Structures and Computational...

    13. Solvate Structures and Computational/Spectroscopic Characterization...

      Office of Scientific and Technical Information (OSTI)

      Solvate Structures and Computational/Spectroscopic Characterization of LiBF4 Electrolytes. Citation Details. In-Document Search. Title: Solvate Structures and Computational...

    14. Improved computer models support genetics research

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Simple computer models unravel genetic stress reactions in cells. Integrated biological and...

    15. Computer Accounts | Stanford Synchrotron Radiation Lightsource

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Computer Accounts Each user group must have a computer account. Additionally, all persons using these accounts are responsible for understanding and complying with the terms...

    16. LANL computer model boosts engine efficiency

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      LANL computer model boosts engine efficiency. The KIVA model has been instrumental in helping researchers and manufacturers understand...

    17. Sandia National Laboratories: Advanced Simulation Computing:...

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      These collaborations help solve the challenges of developing computing platforms and simulation tools across a number of disciplines. Computer Science Research Institute The...

    18. Significant Enhancement of Computational Efficiency in Nonlinear Multiscale Battery Model for Computer Aided Engineering

      SciTech Connect (OSTI)

      Smith, Kandler; Graf, Peter; Jun, Myungsoo; Yang, Chuanbo; Li, Genong; Li, Shaoping; Hochman, Amit; Tselepidakis, Dimitrios

      2015-06-09

      This presentation provides an update on improvements in computational efficiency in a nonlinear multiscale battery model for computer aided engineering.

    19. District-heating strategy model: computer programmer's manual

      SciTech Connect (OSTI)

      Kuzanek, J.F.

      1982-05-01

      The US Department of Housing and Urban Development (HUD) and the US Department of Energy (DOE) cosponsor a program aimed at increasing the number of district heating and cooling (DHC) systems. Such systems can reduce the amount and costs of fuels used to heat and cool buildings in a district. Twenty-eight communities have agreed to aid HUD in a national feasibility assessment of DHC systems. The HUD/DOE program entails technical assistance by Argonne National Laboratory and Oak Ridge National Laboratory. The assistance includes a computer program, called the district heating strategy model (DHSM), that performs preliminary calculations to analyze potential DHC systems. This report describes the general capabilities of the DHSM, provides historical background on its development, and explains the computer installation and operation of the model, including the data file structures and the options. Sample problems illustrate the structure of the various input data files and the interactive computer-output listings. The report is written primarily for computer programmers responsible for installing the model on their computer systems, entering data, running the model, and implementing local modifications to the code.

    20. Argonne Leadership Computing Facility 2013 Science Highlights

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Argonne Leadership Computing Facility 2013 Science Highlights. Contents: About ALCF; Mira; Science Director's Message...

    1. Computing

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      ... The Challenge is project-based learning geared to teaching a wide range of skills. A simulation of vortex-induced motion shows how ocean currents affect offshore oil rigs. ...

    2. Computing

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      This turbulent transport is caused by drift-wave instabilities, driven by free energy in plasma temperature and density gradients. * Unavoidable: These instabilities will persist ...

    3. 2015 Final Reports from the Los Alamos National Laboratory Computational Physics Student Summer Workshop

      SciTech Connect (OSTI)

      Runnels, Scott Robert; Caldwell, Wendy; Brown, Barton Jed; Pederson, Clark; Brown, Justin; Burrill, Daniel; Feinblum, David; Hyde, David; Levick, Nathan; Lyngaas, Isaac; Maeng, Brad; Reed, Richard LeRoy; Sarno-Smith, Lois; Shohet, Gil; Skarda, Jinhie; Stevens, Josey; Zeppetello, Lucas; Grossman-Ponemon, Benjamin; Bottini, Joseph Larkin; Loudon, Tyson Shane; VanGessel, Francis Gilbert; Nagaraj, Sriram; Price, Jacob

      2015-10-15

      The two primary purposes of LANL's Computational Physics Student Summer Workshop are (1) to educate graduate and exceptional undergraduate students in the challenges and applications of computational physics of interest to LANL, and (2) to entice their interest toward those challenges. Computational physics is emerging as a discipline in its own right, combining expertise in mathematics, physics, and computer science. The mathematical aspects focus on numerical methods for solving equations on the computer as well as developing test problems with analytical solutions. The physics aspects are very broad, ranging from low-temperature material modeling to extremely high temperature plasma physics, radiation transport and neutron transport. The computer science issues are concerned with matching numerical algorithms to emerging architectures and maintaining the quality of extremely large codes built to perform multi-physics calculations. Although graduate programs associated with computational physics are emerging, it is apparent that the pool of U.S. citizens in this multi-disciplinary field is relatively small and is typically not focused on the aspects that are of primary interest to LANL. Furthermore, a more structured foundation for LANL interaction with universities in computational physics is needed; historically, interactions rely heavily on individuals' personalities and personal contacts. Thus a tertiary purpose of the Summer Workshop is to build an educational network of LANL researchers, university professors, and emerging students to advance the field and LANL's involvement in it. This report includes both the background for the program and the reports from the students.

    4. PERTURBATION APPROACH FOR QUANTUM COMPUTATION

      SciTech Connect (OSTI)

      G. P. BERMAN; D. I. KAMENEV; V. I. TSIFRINOVICH

      2001-04-01

      We discuss how to simulate errors in the implementation of simple quantum logic operations in a nuclear spin quantum computer with many qubits, using radio-frequency pulses. We verify our perturbation approach using the exact solutions for a relatively small number of qubits (L = 10).
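
      As a minimal illustration of the kind of pulse-error question described above (a sketch, not the authors' perturbation formalism), the short numpy example below shows how a small fractional error in the rotation angle of a radio-frequency pi-pulse degrades a single-qubit NOT gate; the error values are hypothetical, not taken from the paper.

        # Minimal sketch: effect of a small amplitude error in an RF pi-pulse
        # on a single-qubit gate. Only numpy is assumed; epsilon values are illustrative.
        import numpy as np

        X = np.array([[0, 1], [1, 0]], dtype=complex)   # Pauli-X

        def rx(theta):
            """Rotation about x by angle theta (an ideal pi-pulse has theta = pi)."""
            return np.cos(theta / 2) * np.eye(2) - 1j * np.sin(theta / 2) * X

        ideal = rx(np.pi)
        for epsilon in (0.0, 0.01, 0.05):                # hypothetical fractional pulse errors
            noisy = rx(np.pi * (1 + epsilon))
            # Gate overlap |Tr(U_ideal^dagger U_noisy)| / 2 as a simple fidelity proxy.
            overlap = abs(np.trace(ideal.conj().T @ noisy)) / 2
            print(f"amplitude error {epsilon:.2f}: overlap {overlap:.6f}")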

    5. New challenges in computational biochemistry

      SciTech Connect (OSTI)

      Honig, B.

      1996-12-31

      The new challenges in computational biochemistry to which the title refers include the prediction of the relative binding free energy of different substrates to the same protein, conformational sampling, and other examples of theoretical predictions matching known protein structure and behavior.

    6. Experimental Mathematics and Computational Statistics

      SciTech Connect (OSTI)

      Bailey, David H.; Borwein, Jonathan M.

      2009-04-30

      The field of statistics has long been noted for techniques to detect patterns and regularities in numerical data. In this article we explore connections between statistics and the emerging field of 'experimental mathematics'. These include both applications of experimental mathematics in statistics and statistical methods applied to computational mathematics.

    7. Computational Design of Metal Ion Sequestering Agents

      SciTech Connect (OSTI)

      Hay, Benjamin P.; Rapko, Brian M.

      2005-06-15

      Organic ligands that exhibit a high degree of metal ion recognition are essential precursors for developing separation processes and sensors for metal ions. Since the beginning of the nuclear era, much research has focused on discovering ligands that target specific radionuclides. Members of the Group 1A and 2A cations (e.g., Cs, Sr, Ra) and the f-block metals (actinides and lanthanides) are of primary concern to DOE. Although there has been some success in identifying ligand architectures that exhibit a degree of metal ion recognition, the ability to control binding affinity and selectivity remains a significant challenge. The traditional approach for discovering such ligands has involved lengthy programs of organic synthesis and testing that, in the absence of reliable methods for screening compounds before synthesis, have resulted in much wasted research effort. This project seeks to enhance and strengthen the traditional approach through computer-aided design of new and improved host molecules. Accurate electronic structure calculations are coupled with experimental data to provide fundamental information about ligand structure and the nature of metal-donor group interactions (design criteria). This fundamental information then is used in a molecular mechanics model (MM) that helps us rapidly screen proposed ligand architectures and select the best members from a set of potential candidates. By using combinatorial methods, molecule building software has been developed that generates large numbers of candidate architectures for a given set of donor groups. The specific goals of this project are to: (1) further understand the structural and energetic aspects of individual donor group-metal ion interactions and incorporate this information within the MM framework; (2) further develop and evaluate approaches for correlating ligand structure with reactivity toward metal ions, in other words, screening capability; (3) use molecule structure building software to generate large numbers of candidate ligand architectures for given sets of donor groups; and (4) screen candidates and identify ligand architectures that will exhibit enhanced metal ion recognition. These new capabilities are being applied to ligand systems identified under other DOE-sponsored projects where studies have suggested that modifying existing architectures will lead to dramatic enhancements in metal ion binding affinity and selectivity. With this in mind, we are collaborating with Professors R. T. Paine (University of New Mexico), K. N. Raymond (University of California, Berkeley), and J. E. Hutchison (University of Oregon), and Dr. B. A. Moyer (Oak Ridge National Laboratory) to obtain experimental validation of the predicted new ligand structures. Successful completion of this study will yield molecular-level insight into the role that ligand architecture plays in controlling metal ion complexation and will provide a computational approach to ligand design.
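
      The generate-and-screen workflow described above can be sketched in a few lines of Python. The donor groups, linkers, and strain-energy scorer below are hypothetical placeholders standing in for the real structure-building and molecular mechanics software; only the overall loop (enumerate candidate architectures combinatorially, score each, keep the best) reflects the approach described.

        # Sketch of a combinatorial generate-and-screen loop with placeholder
        # building blocks and a fabricated strain-energy score.
        import itertools
        import random

        DONOR_GROUPS = ["phosphine_oxide", "amide", "pyridine_N_oxide"]   # hypothetical
        LINKERS = ["ethylene", "ortho_phenylene", "methylene"]            # hypothetical

        def mm_strain_energy(architecture):
            """Placeholder for a molecular-mechanics score of how well the candidate
            preorganizes its donors for the target metal ion (lower is better);
            a real workflow would call the MM code here."""
            random.seed(hash(architecture))
            return random.uniform(0.0, 50.0)   # kcal/mol, fabricated for illustration

        candidates = [
            (d1, link, d2)
            for d1, link, d2 in itertools.product(DONOR_GROUPS, LINKERS, DONOR_GROUPS)
        ]
        ranked = sorted(candidates, key=mm_strain_energy)
        for arch in ranked[:5]:
            print(arch, f"{mm_strain_energy(arch):.1f} kcal/mol (placeholder score)")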

    8. WE-B-BRD-01: Innovation in Radiation Therapy Planning II: Cloud Computing in RT

      SciTech Connect (OSTI)

      Moore, K; Kagadis, G; Xing, L; McNutt, T

      2014-06-15

      As defined by the National Institute of Standards and Technology, cloud computing is a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction. Despite the omnipresent role of computers in radiotherapy, cloud computing has yet to achieve widespread adoption in clinical or research applications, though the transition to such on-demand access is underway. As this transition proceeds, new opportunities for aggregate studies and efficient use of computational resources are set against new challenges in patient privacy protection, data integrity, and management of clinical informatics systems. In this Session, current and future applications of cloud computing and distributed computational resources will be discussed in the context of medical imaging, radiotherapy research, and clinical radiation oncology applications. Learning Objectives: 1. Understand basic concepts of cloud computing. 2. Understand how cloud computing could be used for medical imaging applications. 3. Understand how cloud computing could be employed for radiotherapy research. 4. Understand how clinical radiotherapy software applications would function in the cloud.

    9. ENERGETIC PHOTON AND ELECTRON INTERACTIONS WITH POSITIVE IONS

      SciTech Connect (OSTI)

      Phaneuf, Ronald A.

      2013-07-01

      The objective of this research is a deeper understanding of the complex multi-electron interactions that govern inelastic processes involving positive ions in plasma environments, such as those occurring in stellar cores and atmospheres, x-ray lasers, thermonuclear fusion reactors and materials-processing discharges. In addition to precision data on ionic structure and transition probabilities, high resolution quantitative measurements of ionization test the theoretical methods that provide critical input to computer codes used for plasma modeling and photon opacity calculations. Steadily increasing computational power and a corresponding emphasis on simulations give heightened relevance to precise and accurate benchmark data. Photons provide a highly selective probe of the internal electronic structure of atomic and molecular systems, and a powerful means to better understand more complex electron-ion interactions.

    10. Computer Science and Information Technology Student Pipeline

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Science and Information Technology Student Pipeline Program Description Los Alamos National Laboratory's High Performance Computing and Information Technology Divisions recruit and hire promising undergraduate and graduate students in the areas of Computer Science, Information Technology, Management Information Systems, Computer Security, Software Engineering, Computer Engineering, and Electrical Engineering. Students are provided a mentor and challenging projects to demonstrate their

    11. Storing and managing information artifacts collected by information analysts using a computing device

      DOE Patents [OSTI]

      Pike, William A; Riensche, Roderick M; Best, Daniel M; Roberts, Ian E; Whyatt, Marie V; Hart, Michelle L; Carr, Norman J; Thomas, James J

      2012-09-18

      Systems and computer-implemented processes for storage and management of information artifacts collected by information analysts using a computing device. The processes and systems can capture a sequence of interactive operation elements that are performed by the information analyst, who is collecting an information artifact from at least one of the plurality of software applications. The information artifact can then be stored together with the interactive operation elements as a snippet on a memory device, which is operably connected to the processor. The snippet comprises a view from an analysis application, data contained in the view, and the sequence of interactive operation elements stored as a provenance representation comprising operation element class, timestamp, and data object attributes for each interactive operation element in the sequence.
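
      A minimal sketch of the data model implied by the claim follows: a snippet holds a captured view, the data in that view, and a provenance trail of interactive operation elements with class, timestamp, and data-object attributes. The field and class names are illustrative only, not the patent's actual schema.

        # Illustrative data model only; names are not taken from the patent.
        from dataclasses import dataclass, field
        from datetime import datetime
        from typing import Any, List

        @dataclass
        class OperationElement:
            element_class: str            # e.g. "copy", "annotate", "search"
            timestamp: datetime
            data_object: Any              # attributes of the object acted upon

        @dataclass
        class Snippet:
            view: str                     # view from the analysis application
            data: Any                     # data contained in that view
            provenance: List[OperationElement] = field(default_factory=list)

            def record(self, element_class: str, data_object: Any) -> None:
                """Append one interactive operation to the provenance trail."""
                self.provenance.append(
                    OperationElement(element_class, datetime.now(), data_object))

        snippet = Snippet(view="document_reader", data="excerpt text")
        snippet.record("copy", {"source": "report_42.pdf"})   # hypothetical artifact
        print(len(snippet.provenance), snippet.provenance[0].element_class)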

    12. FastBit: Interactively Searching Massive Data

      SciTech Connect (OSTI)

      Wu, Kesheng; Ahern, Sean; Bethel, E. Wes; Chen, Jacqueline; Childs, Hank; Cormier-Michel, Estelle; Geddes, Cameron; Gu, Junmin; Hagen, Hans; Hamann, Bernd; Koegler, Wendy; Lauret, Jerome; Meredith, Jeremy; Messmer, Peter; Otoo, Ekow; Perevoztchikov, Victor; Poskanzer, Arthur; Prabhat,; Rubel, Oliver; Shoshani, Arie; Sim, Alexander; Stockinger, Kurt; Weber, Gunther; Zhang, Wei-Ming

      2009-06-23

      As scientific instruments and computer simulations produce more and more data, the task of locating the essential information to gain insight becomes increasingly difficult. FastBit is an efficient software tool to address this challenge. In this article, we present a summary of the key underlying technologies, namely bitmap compression, encoding, and binning. Together these techniques enable FastBit to answer structured (SQL) queries orders of magnitude faster than popular database systems. To illustrate how FastBit is used in applications, we present three examples involving a high-energy physics experiment, a combustion simulation, and an accelerator simulation. In each case, FastBit significantly reduces the response time and enables interactive exploration on terabytes of data.
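
      To make the binning idea concrete, the toy sketch below builds one bitmap per value bin and answers a range query by OR-ing the bitmaps of the covered bins. It deliberately ignores the compression and encoding techniques that the article emphasizes, and the data are synthetic; it is only meant to show why bitmap indexes answer structured range queries quickly.

        # Toy binned bitmap index (no compression or encoding, unlike FastBit itself).
        import numpy as np

        def build_binned_bitmaps(values, bin_edges):
            """One boolean bitmap per bin: bitmaps[i][j] is True when values[j]
            falls in [bin_edges[i], bin_edges[i+1])."""
            bins = np.digitize(values, bin_edges) - 1
            return [bins == i for i in range(len(bin_edges) - 1)]

        values = np.random.default_rng(0).uniform(0, 100, size=1_000_000)
        edges = np.linspace(0, 100, 11)              # ten equal-width bins
        bitmaps = build_binned_bitmaps(values, edges)

        # Range query "values >= 70": OR the bitmaps of the covered bins.
        hits = np.zeros(values.size, dtype=bool)
        for i, lo in enumerate(edges[:-1]):
            if lo >= 70:
                hits |= bitmaps[i]
        print(hits.sum(), "rows selected (a partially covered edge bin would need a candidate check)")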

    13. Internode data communications in a parallel computer

      DOE Patents [OSTI]

      Archer, Charles J.; Blocksome, Michael A.; Miller, Douglas R.; Parker, Jeffrey J.; Ratterman, Joseph D.; Smith, Brian E.

      2013-09-03

      Internode data communications in a parallel computer that includes compute nodes that each include main memory and a messaging unit, the messaging unit including computer memory and coupling compute nodes for data communications, in which, for each compute node at compute node boot time: a messaging unit allocates, in the messaging unit's computer memory, a predefined number of message buffers, each message buffer associated with a process to be initialized on the compute node; receives, prior to initialization of a particular process on the compute node, a data communications message intended for the particular process; and stores the data communications message in the message buffer associated with the particular process. Upon initialization of the particular process, the process establishes a messaging buffer in main memory of the compute node and copies the data communications message from the message buffer of the messaging unit into the message buffer of main memory.
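
      The boot-time buffering idea in this claim can be sketched schematically: the messaging unit pre-allocates one buffer per expected process, holds any message that arrives before its target process is initialized, and hands the held messages over when the process establishes its own buffer in main memory. The Python classes below are illustrative only and do not reflect the patented implementation.

        # Schematic illustration of the buffer hand-off; names are not the patent's.
        class MessagingUnit:
            def __init__(self, expected_processes):
                # At boot time: one pre-allocated buffer per process to be initialized.
                self.buffers = {pid: [] for pid in expected_processes}

            def receive(self, pid, message):
                """Store a message that arrives before its target process exists."""
                self.buffers[pid].append(message)

            def drain(self, pid):
                """On process initialization: hand over, then clear, the held messages."""
                held, self.buffers[pid] = self.buffers[pid], []
                return held

        class Process:
            def __init__(self, pid, messaging_unit):
                self.main_memory_buffer = []                       # buffer in main memory
                self.main_memory_buffer.extend(messaging_unit.drain(pid))

        mu = MessagingUnit(expected_processes=[0, 1])
        mu.receive(1, "data sent before process 1 started")
        p1 = Process(1, mu)
        print(p1.main_memory_buffer)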

    14. Internode data communications in a parallel computer

      DOE Patents [OSTI]

      Archer, Charles J; Blocksome, Michael A; Miller, Douglas R; Parker, Jeffrey J; Ratterman, Joseph D; Smith, Brian E

      2014-02-11

      Internode data communications in a parallel computer that includes compute nodes that each include main memory and a messaging unit, the messaging unit including computer memory and coupling compute nodes for data communications, in which, for each compute node at compute node boot time: a messaging unit allocates, in the messaging unit's computer memory, a predefined number of message buffers, each message buffer associated with a process to be initialized on the compute node; receives, prior to initialization of a particular process on the compute node, a data communications message intended for the particular process; and stores the data communications message in the message buffer associated with the particular process. Upon initialization of the particular process, the process establishes a messaging buffer in main memory of the compute node and copies the data communications message from the message buffer of the messaging unit into the message buffer of main memory.

    15. Link failure detection in a parallel computer

      DOE Patents [OSTI]

      Archer, Charles J. (Rochester, MN); Blocksome, Michael A. (Rochester, MN); Megerian, Mark G. (Rochester, MN); Smith, Brian E. (Rochester, MN)

      2010-11-09

      Methods, apparatus, and products are disclosed for link failure detection in a parallel computer including compute nodes connected in a rectangular mesh network, each pair of adjacent compute nodes in the rectangular mesh network connected together using a pair of links, that includes: assigning each compute node to either a first group or a second group such that adjacent compute nodes in the rectangular mesh network are assigned to different groups; sending, by each of the compute nodes assigned to the first group, a first test message to each adjacent compute node assigned to the second group; determining, by each of the compute nodes assigned to the second group, whether the first test message was received from each adjacent compute node assigned to the first group; and notifying a user, by each of the compute nodes assigned to the second group, whether the first test message was received.
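
      The two-group assignment works like a checkerboard coloring of the mesh: every link joins one node from each group, so the first group can send test messages over every link and the second group can report links on which nothing arrived. The small simulation below illustrates the idea on a toy 2-D mesh; the grid size and the failed link are made up, and this is not the patented implementation.

        # Toy simulation of the two-group link test on a small 2-D mesh.
        WIDTH, HEIGHT = 4, 3
        failed_links = {((0, 0), (1, 0))}             # pretend this link is broken

        def neighbors(x, y):
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nx, ny = x + dx, y + dy
                if 0 <= nx < WIDTH and 0 <= ny < HEIGHT:
                    yield nx, ny

        received = set()                              # (sender, receiver) pairs that arrived
        for x in range(WIDTH):
            for y in range(HEIGHT):
                if (x + y) % 2 == 0:                  # first group sends
                    for nb in neighbors(x, y):        # all neighbors are in the second group
                        link = tuple(sorted([(x, y), nb]))
                        if link not in failed_links:
                            received.add(((x, y), nb))

        for x in range(WIDTH):
            for y in range(HEIGHT):
                if (x + y) % 2 == 1:                  # second group checks what arrived
                    for nb in neighbors(x, y):
                        if (nb, (x, y)) not in received:
                            print(f"link {nb} <-> {(x, y)} may have failed")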

    16. Broadcasting a message in a parallel computer

      DOE Patents [OSTI]

      Berg, Jeremy E. (Rochester, MN); Faraj, Ahmad A. (Rochester, MN)

      2011-08-02

      Methods, systems, and products are disclosed for broadcasting a message in a parallel computer. The parallel computer includes a plurality of compute nodes connected together using a data communications network. The data communications network is optimized for point-to-point data communications and is characterized by at least two dimensions. The compute nodes are organized into at least one operational group of compute nodes for collective parallel operations of the parallel computer. One compute node of the operational group is assigned to be a logical root. Broadcasting a message in a parallel computer includes: establishing a Hamiltonian path along all of the compute nodes in at least one plane of the data communications network and in the operational group; and broadcasting, by the logical root to the remaining compute nodes, the logical root's message along the established Hamiltonian path.
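
      One simple way to picture a Hamiltonian path over a plane of compute nodes is a serpentine (row-by-row, alternating direction) ordering; the message is then forwarded hop by hop from the logical root along that path. The sketch below is illustrative of the idea only and is not necessarily the path construction used in the patent.

        # Sketch: serpentine Hamiltonian path over a 2-D plane of compute nodes,
        # then hop-by-hop forwarding of the root's message along that path.
        WIDTH, HEIGHT = 4, 3

        def serpentine_path(width, height):
            """Visit every node of the plane exactly once, snaking row by row."""
            path = []
            for y in range(height):
                row = [(x, y) for x in range(width)]
                path.extend(row if y % 2 == 0 else reversed(row))
            return path

        path = serpentine_path(WIDTH, HEIGHT)
        message = "payload from the logical root"
        inbox = {node: None for node in path}

        inbox[path[0]] = message                      # logical root originates the message
        for current, nxt in zip(path, path[1:]):      # each node forwards to its successor
            inbox[nxt] = inbox[current]

        assert all(inbox[node] == message for node in path)
        print(f"broadcast reached all {len(path)} nodes along the Hamiltonian path")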

    17. NERSC Enhances PDSF, Genepool Computing Capabilities

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      NERSC Enhances PDSF, Genepool Computing Capabilities. Linux cluster expansion speeds data access and analysis. January 3, 2014. Christmas came early for users of the Parallel Distributed Systems Facility (PDSF) and Genepool systems at the Department of Energy's National Energy Research Scientific Computing Center (NERSC). Throughout November, members of NERSC's Computational Systems Group were busy expanding the Linux computing resources that support PDSF's...

    18. NERSC seeks Computational Systems Group Lead

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      NERSC seeks Computational Systems Group Lead. January 6, 2011, by Katie Antypas. Note: This position is now closed. Manage the Computational Systems Group (CSG), which provides production support and advanced development for the supercomputer systems at NERSC (National Energy Research Scientific Computing Center). These systems, which...

    19. Presentation: High Performance Computing Applications | Department of

      Energy Savers [EERE]

      Presentation: High Performance Computing Applications. A briefing to the Secretary's Energy Advisory Board on High Performance Computing Applications delivered by Frederick H. Streitz, Lawrence Livermore National Laboratory.

    20. Advanced Computing Tech Team | Department of Energy

      Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

      The Advanced Computing Tech Team is working with the DOE Energy Technology Offices, the Office of Science, and the National Nuclear Security Administration to deliver technologies that will be used to create new scientific insights into complex physical systems. Advanced computing technologies have been used for decades to provide better understanding of the performance and reliability of the nuclear stockpile...

    1. Energy Efficient Computer Use | Department of Energy

      Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

      Energy Efficient Computer Use. Use sleep mode and power management features on your computer to save money on your energy bill. If you wonder when you should turn off your personal computer for energy savings, here are some general guidelines to help you make that decision. Though there is a small surge in energy...

    2. Species interactions differ in their genetic robustness

      SciTech Connect (OSTI)

      Chubiz, Lon M.; Granger, Brian R.; Segre, Daniel; Harcombe, William R.

      2015-04-14

      Conflict and cooperation between bacterial species drive the composition and function of microbial communities. Stability of these emergent properties will be influenced by the degree to which species' interactions are robust to genetic perturbations. We use genome-scale metabolic modeling to computationally analyze the impact of genetic changes when Escherichia coli and Salmonella enterica compete, or cooperate. We systematically knocked out in silico each reaction in the metabolic network of E. coli to construct all 2583 mutant stoichiometric models. Then, using a recently developed multi-scale computational framework, we simulated the growth of each mutant E. coli in the presence of S. enterica. The type of interaction between species was set by modulating the initial metabolites present in the environment. We found that the community was most robust to genetic perturbations when the organisms were cooperating. Species ratios were more stable in the cooperative community, and community biomass had equal variance in the two contexts. Additionally, the number of mutations that have a substantial effect is lower when the species cooperate than when they are competing. In contrast, when mutations were added to the S. enterica network the system was more robust when the bacteria were competing. These results highlight the utility of connecting metabolic mechanisms and studies of ecological stability. Cooperation and conflict alter the connection between genetic changes and properties that emerge at higher levels of biological organization.
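
      The core computational loop of the study is a knockout screen: disable one reaction at a time, re-simulate the co-culture, and compare the resulting species ratio to the unperturbed community. The outline below illustrates that loop with tiny stub objects and a toy growth rule; the stub models and the placeholder simulate_coculture function stand in for the genome-scale metabolic framework actually used, and the reaction names and numbers are fabricated.

        # Outline of an in silico knockout screen with stub objects standing in for
        # the genome-scale models and the multi-scale co-culture simulator.
        from contextlib import contextmanager

        class Reaction:
            def __init__(self, rid, contribution):
                self.id = rid
                self.contribution = contribution     # toy growth contribution
                self.active = True

            @contextmanager
            def disabled(self):
                self.active = False
                try:
                    yield
                finally:
                    self.active = True

        class Model:
            def __init__(self, reactions):
                self.reactions = reactions

        def simulate_coculture(ecoli, senterica):
            """Placeholder: biomass of each species is just the sum of its active
            reaction contributions; the real study runs a metabolic framework here."""
            grow = lambda m: sum(r.contribution for r in m.reactions if r.active)
            return grow(ecoli), grow(senterica)

        ecoli = Model([Reaction("rxn_A", 0.5), Reaction("rxn_B", 0.3), Reaction("rxn_C", 0.2)])
        senterica = Model([Reaction("rxn_X", 1.0)])

        base_ecoli, base_sent = simulate_coculture(ecoli, senterica)
        for reaction in ecoli.reactions:                  # one in silico knockout at a time
            with reaction.disabled():
                mut_ecoli, mut_sent = simulate_coculture(ecoli, senterica)
            effect = abs(mut_ecoli / mut_sent - base_ecoli / base_sent)
            print(reaction.id, f"effect on species ratio: {effect:.2f}")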

    3. Sandian Re-Elected as President of the Association for Computing Machinery

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Sandian Re-Elected as President of the Association for Computing Machinery Special Interest Group on Graphics and Interactive Techniques - Sandia Energy.

    4. Surprising Quasiparticle Interactions in Graphene

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Surprising Quasiparticle Interactions in Graphene. Until now, the world's electronics have been dominated by silicon, whose properties, while excellent, significantly limit...

    5. Theoretical studies of molecular interactions

      SciTech Connect (OSTI)

      Lester, W.A. Jr.

      1993-12-01

      This research program is directed at extending fundamental knowledge of atoms and molecules including their electronic structure, mutual interaction, collision dynamics, and interaction with radiation. The approach combines the use of ab initio methods--Hartree-Fock (HF), multiconfiguration HF, configuration interaction, and the recently developed quantum Monte Carlo (QMC)--to describe electronic structure, intermolecular interactions, and other properties, with various methods of characterizing inelastic and reaction collision processes, and photodissociation dynamics. Present activity is focused on the development and application of the QMC method, surface catalyzed reactions, and reorientation cross sections.

    6. Industry Interactive Procurement System (IIPS)

      Broader source: Energy.gov [DOE]

      Presentation on DOE’s Industry Interactive Procurement System (IIPS) presented at the PEM fuel cell pre-solicitation meeting held May 26, 2005 in Arlington, VA.

    7. Numerical computation of Pop plot

      SciTech Connect (OSTI)

      Menikoff, Ralph

      2015-03-23

      The Pop plot — distance-of-run to detonation versus initial shock pressure — is a key characterization of shock initiation in a heterogeneous explosive. Reactive burn models for high explosives (HE) must reproduce the experimental Pop plot to have any chance of accurately predicting shock initiation phenomena. This report describes a methodology for automating the computation of a Pop plot for a specific explosive with a given HE model. Illustrative examples of the computation are shown for PBX 9502 with three burn models (SURF, WSD and Forest Fire) utilizing the xRage code, which is the Eulerian ASC hydrocode at LANL. Comparison of the numerical and experimental Pop plot can be the basis for a validation test or as an aid in calibrating the burn rate of an HE model. Issues with calibration are discussed.
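
      Because a Pop plot is conventionally close to a straight line in log-log space, automating it amounts to running the hydrocode at several initial shock pressures, extracting each distance of run to detonation, and fitting log of run distance against log of pressure. The minimal fitting sketch below uses made-up sample points, not PBX 9502 data; a real workflow would extract the (pressure, distance) pairs from the xRage runs described above.

        # Minimal Pop-plot fit: log(run distance) vs. log(initial shock pressure).
        # The (pressure, distance) pairs are illustrative placeholders only.
        import numpy as np

        pressure_gpa = np.array([8.0, 10.0, 12.0, 14.0, 16.0])      # initial shock pressure
        run_distance_mm = np.array([12.0, 6.5, 4.0, 2.7, 1.9])      # distance of run to detonation

        slope, intercept = np.polyfit(np.log10(pressure_gpa), np.log10(run_distance_mm), 1)
        print(f"Pop plot fit: log10(x_run) = {intercept:.2f} + {slope:.2f} * log10(P)")

        # Interpolate the expected run distance at an intermediate pressure.
        p_query = 11.0
        x_query = 10 ** (intercept + slope * np.log10(p_query))
        print(f"predicted run distance at {p_query} GPa: {x_query:.1f} mm")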

    8. Addressing failures in exascale computing

      SciTech Connect (OSTI)

      Snir, Marc; Wisniewski, Robert W.; Abraham, Jacob A.; Adve, Sarita; Bagchi, Saurabh; Balaji, Pavan; Belak, Jim; Bose, Pradip; Cappello, Franck; Carlson, William; Chien, Andrew A.; Coteus, Paul; Debardeleben, Nathan A.; Diniz, Pedro; Engelmann, Christian; Erez, Mattan; Saverio, Fazzari; Geist, Al; Gupta, Rinku; Johnson, Fred; Krishnamoorthy, Sriram; Leyffer, Sven; Liberty, Dean; Mitra, Subhasish; Munson, Todd; Schreiber, Robert; Stearly, Jon; Van Hensbergen, Eric

      2014-05-01

      We present here a report produced by a workshop on Addressing Failures in Exascale Computing held in Park City, Utah, August 4-11, 2012. The charter of this workshop was to establish a common taxonomy about resilience across all the levels in a computing system; discuss existing knowledge on resilience across the various hardware and software layers of an exascale system; and build on those results, examining potential solutions from both a hardware and software perspective and focusing on a combined approach. The workshop brought together participants with expertise in applications, system software, and hardware; they came from industry, government, and academia; and their interests ranged from theory to implementation. The combination allowed broad and comprehensive discussions and led to this document, which summarizes and builds on those discussions.

    9. gdb | Argonne Leadership Computing Facility

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      gdb. Using gdb. Preliminaries: You should prepare a debug version of your code. Compile using -O0 -g. If you are using the XL...

    10. Darshan | Argonne Leadership Computing Facility

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Darshan. References: Darshan...

    11. GAMESS | Argonne Leadership Computing Facility

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      GAMESS. What Is GAMESS? The General Atomic and Molecular Electronic Structure System (GAMESS) is a general ab initio quantum chemistry package. For more information on GAMESS, see the Gordon research...

    12. GROMACS | Argonne Leadership Computing Facility

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      GROMACS. Building and Running GROMACS on Vesta/Mira. The Gromacs Molecular Dynamics package has a large number of executables. Some of them, such as luck, are just utilities that do not need to be built for the back end. Begin by...

    13. Policies | Argonne Leadership Computing Facility

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Policies. Official policies and procedures of the ALCF: Accounts Policy; Account Sponsorship & Retention Policy; ALCC Quarterly Report Policy; ALCF Acknowledgment Policy; Data Policy; INCITE Quarterly Report Policy; Job Scheduling Policy on BG/Q; Job Scheduling Policies on Cooley; Pullback Policy; Refund Policy; Software Policy; User Authentication Policy.

    14. Molecular Science Computing: 2010 Greenbook

      SciTech Connect (OSTI)

      De Jong, Wibe A.; Cowley, David E.; Dunning, Thom H.; Vorpagel, Erich R.

      2010-04-02

      This 2010 Greenbook outlines the science drivers for performing integrated computational environmental molecular research at EMSL and defines the next-generation HPC capabilities that must be developed at the MSC to address this critical research. The EMSL MSC Science Panel used EMSL’s vision and science focus and white papers from current and potential future EMSL scientific user communities to define the scientific direction and resulting HPC resource requirements presented in this 2010 Greenbook.

    15. Programs | Argonne Leadership Computing Facility

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Programs: INCITE Program; ALCC Program; Director's Discretionary (DD) Program; Early Science Program. Featured Science: Snapshot of the global structure of a radiation-dominated accretion flow around a black hole computed using the Athena++ code. Magnetohydrodynamic Models of Accretion Including Radiation Transport, James Stone. Allocation Program: INCITE. Allocation Hours: 47 Million.

    16. Projects | Argonne Leadership Computing Facility

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Projects bgclang Compiler Hal Finkel Cobalt Scheduler Bill Allcock, Paul Rich, Brian Toonen, Tom Uram GLEAN: Scalable In Situ Analysis and I/O Acceleration on Leadership Computing Systems Michael E. Papka, Venkat Vishwanath, Mark Hereld, Preeti Malakar, Joe Insley, Silvio Rizzi, Tom Uram Petrel: Data Management and Sharing Pilot Ian Foster, Michael E. Papka, Bill Allcock, Ben Allen, Rachana Ananthakrishnan, Lukasz Lacinski The Swift Parallel Scripting Language for ALCF Systems Michael Wilde,

    17. MADNESS | Argonne Leadership Computing Facility

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      MADNESS. Overview: MADNESS is a numerical tool kit used to solve integral differential equations using multi-resolution analysis and a low-rank separation representation. MADNESS can solve multi-dimensional equations, currently up...

    18. Michael Levitt and Computational Biology

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Michael Levitt and Computational Biology. Michael Levitt, PhD, professor of structural biology at the Stanford University School of Medicine, has won the 2013 Nobel Prize in Chemistry. ... Levitt ... shares the ... prize with Martin Karplus ... and Arieh Warshel ... "for the development of multiscale models for complex chemical systems." Levitt's work focuses on...

    19. Allocations | Argonne Leadership Computing Facility

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Allocations. ALCF resources are primarily used for DOE INCITE and ALCC awarded projects. Additional information on the INCITE program can be found on the DOE INCITE website and the ALCC program can be found on the Office of...

    20. computation | National Nuclear Security Administration

      National Nuclear Security Administration (NNSA)


    1. computers | National Nuclear Security Administration

      National Nuclear Security Administration (NNSA)


    2. computing | National Nuclear Security Administration

      National Nuclear Security Administration (NNSA)


    3. TRIDAC host computer functional specification

      SciTech Connect (OSTI)

      Hilbert, S.M.; Hunter, S.L.

      1983-08-23

      The purpose of this document is to outline the baseline functional requirements for the Triton Data Acquisition and Control (TRIDAC) Host Computer Subsystem. The requirements presented in this document are based upon systems that currently support both the SIS and the Uranium Separator Technology Groups in the AVLIS Program at the Lawrence Livermore National Laboratory and upon the specific demands associated with the extended safe operation of the SIS Triton Facility.

    4. System for Analysis of Soil-Structure Interaction (SASSI) Verification &

      Energy Savers [EERE]

      System for Analysis of Soil-Structure Interaction (SASSI) Verification & Validation (V&V) Problem Set. SASSI is the System for Analysis of Soil-Structure Interaction, a computer code for performing finite element...

    5. A Tool for Interactive Protein Manipulation

      Energy Science and Technology Software Center (OSTI)

      2005-03-28

      ProteinShop is a graphical environment that facilitates a solution to the protein prediction problem through a combination of unique features and capabilities. These include: 1. Helping researchers automatically generate 3D protein structures from scratch by using the sequence of amino acids and secondary structure specifications as input. 2. Enabling users to apply their accumulated biochemical knowledge and intuition during the interactive manipulation of structures. 3. Facilitating interactive comparison and analysis of alternative structures through visualization of free energy computed during modeling. 4. Accelerating discovery of low-energy configurations by applying a local optimization plug-in to user-selected protein structures. ProteinShop v.2.0 includes the following new features: - Visualizes multiple-domain structures - Automatically creates a user-specified number of beta-sheet configurations - Provides the interface and the libraries for energy visualization and local minimization of protein structures - Reads standard PDB files without previous editing.

    6. Laser–plasma interactions for fast ignition

      SciTech Connect (OSTI)

      Kemp, A. J.; Fiuza, F.; Debayle, A.; Johzaki, T.; Mori, W. B.; Patel, P. K.; Sentoku, Y.; Silva, L. O.

      2014-04-17

      In the electron-driven fast-ignition approach to inertial confinement fusion, petawatt laser pulses are required to generate MeV electrons that deposit several tens of kilojoules in the compressed core of an imploded DT shell. We review recent progress in the understanding of intense laser- plasma interactions (LPI) relevant to fast ignition. Increases in computational and modeling capabilities, as well as algorithmic developments have led to enhancement in our ability to perform multidimensional particle-in-cell (PIC) simulations of LPI at relevant scales. We discuss the physics of the interaction in terms of laser absorption fraction, the laser-generated electron spectra, divergence, and their temporal evolution. Scaling with irradiation conditions such as laser intensity, f-number and wavelength are considered, as well as the dependence on plasma parameters. Different numerical modeling approaches and configurations are addressed, providing an overview of the modeling capabilities and limitations. In addition, we discuss the comparison of simulation results with experimental observables. In particular, we address the question of surrogacy of today's experiments for the full-scale fast ignition problem.

    7. Laser–plasma interactions for fast ignition

      DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

      Kemp, A. J.; Fiuza, F.; Debayle, A.; Johzaki, T.; Mori, W. B.; Patel, P. K.; Sentoku, Y.; Silva, L. O.

      2014-04-17

      In the electron-driven fast-ignition approach to inertial confinement fusion, petawatt laser pulses are required to generate MeV electrons that deposit several tens of kilojoules in the compressed core of an imploded DT shell. We review recent progress in the understanding of intense laser-plasma interactions (LPI) relevant to fast ignition. Increases in computational and modeling capabilities, as well as algorithmic developments have led to enhancement in our ability to perform multidimensional particle-in-cell (PIC) simulations of LPI at relevant scales. We discuss the physics of the interaction in terms of laser absorption fraction, the laser-generated electron spectra, divergence, and their temporal evolution. Scaling with irradiation conditions such as laser intensity, f-number and wavelength are considered, as well as the dependence on plasma parameters. Different numerical modeling approaches and configurations are addressed, providing an overview of the modeling capabilities and limitations. In addition, we discuss the comparison of simulation results with experimental observables. In particular, we address the question of surrogacy of today's experiments for the full-scale fast ignition problem.

    8. Towards Energy-Centric Computing and Computer Architecture

      SciTech Connect (OSTI)

      2011-02-09

      Technology forecasts indicate that device scaling will continue well into the next decade. Unfortunately, it is becoming extremely difficult to harness this increase in the number of transistors into performance due to a number of technological, circuit, architectural, methodological and programming challenges. In this talk, I will argue that the key emerging showstopper is power. Voltage scaling as a means to maintain a constant power envelope with an increase in transistor numbers is hitting diminishing returns. As such, to continue riding Moore's law we need to look for drastic measures to cut power. This is definitely the case for server chips in future datacenters, where abundant server parallelism, redundancy and 3D chip integration are likely to remove programming, reliability and bandwidth hurdles, leaving power as the only true limiter. I will present results backing this argument based on validated models for future server chips and parameters extracted from real commercial workloads. Then I use these results to project future research directions for datacenter hardware and software. About the speaker: Babak Falsafi is a Professor in the School of Computer and Communication Sciences at EPFL, and an Adjunct Professor of Electrical and Computer Engineering and Computer Science at Carnegie Mellon. He is the founder and the director of the Parallel Systems Architecture Laboratory (PARSA) at EPFL where he conducts research on architectural support for parallel programming, resilient systems, architectures to break the memory wall, and analytic and simulation tools for computer system performance evaluation. In 1999, in collaboration with T. N. Vijaykumar he showed for the first time that, contrary to conventional wisdom, multiprocessors do not need relaxed memory consistency models (and the resulting convoluted programming interfaces found and used in modern systems) to achieve high performance. He is a recipient of an NSF CAREER award in 2000, IBM Faculty Partnership Awards between 2001 and 2004, and an Alfred P. Sloan Research Fellowship in 2004. He is a senior member of IEEE and ACM.

    9. Wake Fields in the Super B Factory Interaction Region

      SciTech Connect (OSTI)

      Weathersby, Stephen; Novokhatski, Alexander; /SLAC

      2011-06-02

      The geometry of storage ring collider interaction regions presents an impedance to beam fields, resulting in the generation of additional electromagnetic fields (higher order modes or wake fields) which affect the beam energy and trajectory. These effects are computed for the Super B interaction region by evaluating longitudinal loss factors and averaged transverse kicks for short-range wake fields. Results indicate at least a factor of 2 lower wake field power generation in comparison with the interaction region geometry of the PEP-II B-factory collider. Wake field reduction is a consideration in the Super B design. Transverse kicks are consistent with an attractive potential from the crotch nearest the beam trajectory. The longitudinal loss factor scales as the -2.5 power of the bunch length. A factor of 60 loss factor reduction is possible with crotch geometry based on an intersecting tubes model.
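
      The quoted bunch-length dependence is worth spelling out. As a hedged illustration (the numbers below follow from the stated -2.5 power law and are not additional results from the paper), halving the bunch length raises the longitudinal loss factor by roughly a factor of 5.7:

      ```latex
      k_{\parallel}(\sigma_z) \propto \sigma_z^{-5/2}
      \quad\Longrightarrow\quad
      \frac{k_{\parallel}(\sigma_z/2)}{k_{\parallel}(\sigma_z)} = 2^{5/2} \approx 5.66
      ```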

    10. Strong interactions in air showers

      SciTech Connect (OSTI)

      Dietrich, Dennis D.

      2015-03-02

      We study the role that new gauge interactions in extensions of the standard model play in air showers initiated by ultrahigh-energy cosmic rays. Hadron-hadron events remain dominated by quantum chromodynamics, while projectiles and/or targets from beyond the standard model permit us to see qualitative differences arising from the new interactions.

    11. Human-computer interface incorporating personal and application domains

      DOE Patents [OSTI]

      Anderson, Thomas G.

      2004-04-20

      The present invention provides a human-computer interface. The interface includes provision of an application domain, for example corresponding to a three-dimensional application. The user is allowed to navigate and interact with the application domain. The interface also includes a personal domain, offering the user controls and interaction distinct from the application domain. The separation into two domains allows the most suitable interface methods in each: for example, three-dimensional navigation in the application domain, and two- or three-dimensional controls in the personal domain. Transitions between the application domain and the personal domain are under control of the user, and the transition method is substantially independent of the navigation in the application domain. For example, the user can fly through a three-dimensional application domain, and always move to the personal domain by moving a cursor near one extreme of the display.

    12. Human-computer interface incorporating personal and application domains

      DOE Patents [OSTI]

      Anderson, Thomas G. (Albuquerque, NM)

      2011-03-29

      The present invention provides a human-computer interface. The interface includes provision of an application domain, for example corresponding to a three-dimensional application. The user is allowed to navigate and interact with the application domain. The interface also includes a personal domain, offering the user controls and interaction distinct from the application domain. The separation into two domains allows the most suitable interface methods in each: for example, three-dimensional navigation in the application domain, and two- or three-dimensional controls in the personal domain. Transitions between the application domain and the personal domain are under control of the user, and the transition method is substantially independent of the navigation in the application domain. For example, the user can fly through a three-dimensional application domain, and always move to the personal domain by moving a cursor near one extreme of the display.

    13. Perspectives in Energy Research: How Can We Change the Game? (2011 Summit)

      ScienceCinema (OSTI)

      Isaacs, Eric (Director, Argonne National Laboratory)

      2012-03-14

      Eric Isaacs, Director of DOE's Argonne National Laboratory, discussed the role of the EFRC Program and National Laboratories in developing game-changing energy technologies in the EFRC Summit session titled "Leading Perspectives in Energy Research." The 2011 EFRC Summit and Forum brought together the EFRC community and science and policy leaders from universities, national laboratories, industry and government to discuss "Science for our Nation's Energy Future." In August 2009, the Office of Science established 46 Energy Frontier Research Centers. The EFRCs are collaborative research efforts intended to accelerate high-risk, high-reward fundamental research, the scientific basis for transformative energy technologies of the future. These Centers involve universities, national laboratories, nonprofit organizations, and for-profit firms, singly or in partnerships, selected by scientific peer review. They are funded at $2 to $5 million per year for a total planned DOE commitment of $777 million over the initial five-year award period, pending Congressional appropriations. These integrated, multi-investigator Centers are conducting fundamental research focusing on one or more of several "grand challenges" and use-inspired "basic research needs" recently identified in major strategic planning efforts by the scientific community. The purpose of the EFRCs is to integrate the talents and expertise of leading scientists in a setting designed to accelerate research that transforms the future of energy and the environment.

    14. Toward Molecular Catalysts by Computer

      SciTech Connect (OSTI)

      Raugei, Simone; DuBois, Daniel L.; Rousseau, Roger J.; Chen, Shentan; Ho, Ming-Hsun; Bullock, R. Morris; Dupuis, Michel

      2015-02-17

      Rational design of molecular catalysts requires a systematic approach to designing ligands with specific functionality and precisely tailored electronic and steric properties. It then becomes possible to devise computer protocols to predict accurately the required properties and ultimately to design catalysts by computer. In this account we first review how thermodynamic properties such as oxidation-reduction potentials (E0), acidities (pKa), and hydride donor abilities (ΔGH-) form the basis for a systematic design of molecular catalysts for reactions that are critical for a secure energy future (hydrogen evolution and oxidation, oxygen and nitrogen reduction, and carbon dioxide reduction). We highlight how density functional theory allows us to determine and predict these properties within “chemical” accuracy (~ 0.06 eV for redox potentials, ~ 1 pKa unit for pKa values, and ~ 1.5 kcal/mol for hydricities). These quantities determine free energy maps and profiles associated with catalytic cycles, i.e. the relative energies of intermediates, and help us distinguish between desirable and high-energy pathways and mechanisms. Good catalysts have flat profiles that avoid high activation barriers due to low and high energy intermediates. We illustrate how the criterion of a flat energy profile lends itself to the prediction of design points by computer for optimum catalysts. This research was carried out in the Center for Molecular Electro-catalysis, an Energy Frontier Research Center funded by the U.S. Department of Energy (DOE), Office of Science, Office of Basic Energy Sciences. Pacific Northwest National Laboratory (PNNL) is operated for the DOE by Battelle.
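
      To make the link between these descriptors and a free-energy map concrete, the sketch below converts a redox potential, a pKa, and a hydricity into step free energies and accumulates them along a hypothetical cycle. The conversion constants are standard thermodynamics; the step names and numerical inputs are placeholders for illustration, not values from the paper.

      ```python
      # Minimal sketch: turning computed thermodynamic descriptors into step free energies.
      # Input values below are placeholders, not results from the cited work.

      F_KCAL_PER_V = 23.061     # Faraday constant, kcal mol^-1 V^-1
      RTLN10_KCAL = 1.364       # 2.303*R*T at 298 K, kcal mol^-1 per pKa unit

      def dg_redox(E0_volts, n_electrons=1):
          """Free energy of reduction (kcal/mol) from a standard potential: dG = -nFE0."""
          return -n_electrons * F_KCAL_PER_V * E0_volts

      def dg_deprotonation(pKa):
          """Free energy of deprotonation (kcal/mol) from a pKa: dG = 2.303*R*T*pKa."""
          return RTLN10_KCAL * pKa

      # Hypothetical descriptors for a candidate catalyst
      steps = {
          "reduction (E0 = -0.9 V)":  dg_redox(-0.9),
          "deprotonation (pKa = 12)": dg_deprotonation(12.0),
          "hydride transfer":         55.0,   # hydricity in kcal/mol, placeholder
      }

      running = 0.0
      for name, dg in steps.items():
          running += dg
          print(f"{name:30s}  dG = {dg:+6.1f}  cumulative = {running:+6.1f} kcal/mol")
      ```

      A flat free-energy profile, in this picture, is simply one where the cumulative column never swings far from zero between the resting state and the product.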

    15. GPU COMPUTING FOR PARTICLE TRACKING

      SciTech Connect (OSTI)

      Nishimura, Hiroshi; Song, Kai; Muriki, Krishna; Sun, Changchun; James, Susan; Qin, Yong

      2011-03-25

      This is a feasibility study of using a modern Graphics Processing Unit (GPU) to parallelize an accelerator particle tracking code. To demonstrate the massive parallelization features provided by GPU computing, a simplified TracyGPU program is developed for dynamic aperture calculation. Performance, issues, and challenges from introducing the GPU are also discussed. General Purpose Computation on Graphics Processing Units (GPGPU) brings massive parallel computing capabilities to numerical calculation. However, the unique architecture of the GPU requires a comprehensive understanding of the hardware and programming model in order to optimize existing applications well. In the field of accelerator physics, the dynamic aperture calculation of a storage ring, which is often the most time-consuming part of accelerator modeling and simulation, can benefit from the GPU due to its embarrassingly parallel nature, which fits well with the GPU programming model. In this paper, we use the Tesla C2050 GPU, which consists of 14 multiprocessors (MPs) with 32 cores each, for a total of 448 cores, to host thousands of threads dynamically. A thread is a logical execution unit of the program on the GPU. In the GPU programming model, threads are grouped into a collection of blocks. Within each block, multiple threads share the same code and up to 48 KB of shared memory. Multiple thread blocks form a grid, which is executed as a GPU kernel. A simplified code that is a subset of Tracy++ [2] is developed to demonstrate the possibility of using the GPU to speed up the dynamic aperture calculation by having each thread track a particle.
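
      As a minimal sketch of the thread-per-particle mapping described above, written with Numba's CUDA interface rather than the CUDA C used for TracyGPU: the one-turn map here is a toy phase-space rotation, not a real accelerator lattice, so this only illustrates the parallelization pattern, not the physics.

      ```python
      # Thread-per-particle tracking sketch (illustrative; requires a CUDA-capable GPU).
      import numpy as np
      from numba import cuda

      @cuda.jit
      def track(x, xp, n_turns):
          i = cuda.grid(1)                 # global thread index = particle index
          if i < x.shape[0]:
              xi, xpi = x[i], xp[i]
              for _ in range(n_turns):
                  # toy one-turn map: phase-space rotation plus a weak nonlinearity
                  x_new = 0.8 * xi + 0.6 * xpi
                  xp_new = -0.6 * xi + 0.8 * xpi + 1e-4 * xi * xi
                  xi, xpi = x_new, xp_new
              x[i], xp[i] = xi, xpi

      n = 100_000
      x = cuda.to_device(np.random.uniform(-1e-3, 1e-3, n))
      xp = cuda.to_device(np.zeros(n))
      threads_per_block = 128
      blocks = (n + threads_per_block - 1) // threads_per_block
      track[blocks, threads_per_block](x, xp, 1024)   # each thread tracks one particle
      print(np.isfinite(x.copy_to_host()).all())
      ```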

    16. Controlling data transfers from an origin compute node to a target compute node

      DOE Patents [OSTI]

      Archer, Charles J. (Rochester, MN); Blocksome, Michael A. (Rochester, MN); Ratterman, Joseph D. (Rochester, MN); Smith, Brian E. (Rochester, MN)

      2011-06-21

      Methods, apparatus, and products are disclosed for controlling data transfers from an origin compute node to a target compute node that include: receiving, by an application messaging module on the target compute node, an indication of a data transfer from an origin compute node to the target compute node; and administering, by the application messaging module on the target compute node, the data transfer using one or more messaging primitives of a system messaging module in dependence upon the indication.

    17. Computational Capabilities for Predictions of Interactions at the Grain Boundary of Refractory Alloys

      SciTech Connect (OSTI)

      Sengupta, Debasis; Kwak, Shaun; Vasenkov, Alex; Shin, Yun Kyung; Duin, Adri van

      2014-09-30

      New high performance refractory alloys are critically required for improving efficiency and decreasing CO2 emissions of fossil energy systems. The development of these materials remains slow because it is driven by a trial-and-error experimental approach and lacks a rational design approach. Atomistic Molecular Dynamic (MD) design has the potential to accelerate this development through the prediction of mechanical properties and corrosion resistance of new materials. The success of MD simulations depends critically on the fidelity of interatomic potentials. This project, in collaboration with Penn State, has focused on developing and validating high quality quantum mechanics based reactive potentials, ReaxFF, for Ni-Fe-Al-Cr-O-S system. A larger number of accurate density functional theory (DFT) calculations were performed to generate data for parameterizing the ReaxFF potentials. These potentials were then used in molecular dynamics (MD) and molecular dynamics-Monte Carlo (MD-MC) for much larger system to study for which DFT calculation would be prohibitively expensive, and to understand a number of chemical phenomena Ni-Fe-Al-Cr-O-S based alloy systems . These include catalytic oxidation of butane on clean Cr2O3 and pyrite/Cr2O3, interfacial reaction between Cr2O3 (refractory material) and Al2O3 (slag), cohesive strength of at the grain boundary of S-enriched Cr compared to bulk Cr and Ssegregation study in Al, Al2O3, Cr and Cr2O3 with a grain structure. The developed quantum based ReaxFF potential are available from the authors upon request. During this project, a number of papers were published in peer-reviewed journals. In addition, several conference presentations were made.

    18. Scott Runnels of Computational Physics

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Scott Runnels of Computational Physics to teach at West Point March 19, 2013 LOS ALAMOS, N. M., March 19, 2013- Under an agreement between Los Alamos National Laboratory and the U.S. Military Academy, Scott Runnels has been selected for a two-year faculty post in the Department of Physics and Nuclear Engineering at West Point. The teaching position is intended to strengthen the ties between the U.S. national laboratories and the U.S. military academies by bringing in a top scientist to teach at

    19. computational-hydaulics-march-30

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Computational Hydraulics and Aerodynamics using STAR-CCM+ for CFD Analysis, March 30-31, 2011, Argonne, Illinois. Dr. Steven Lottes. A training course in the use of computational hydraulics and aerodynamics CFD software using CD-adapco's STAR-CCM+ for analysis was held at TRACC from March 30-31, 2011. The course assumed a basic knowledge of fluid mechanics and made extensive use of hands-on tutorials.

    20. computational-structural-mechanics-training

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Table of Contents (training course | date | location):
      Training Course: HyperMesh and HyperView | April 12-14, 2011 | Argonne TRACC, Argonne, IL
      Introductory Course: Developing Compute-efficient, Quality Models with LS-PrePost® 3 on the TRACC Cluster | October 21-22, 2010 | Argonne TRACC, West Chicago, IL
      Modeling and Simulation with LS-DYNA®: Insights into Modeling with a Goal of Providing Credible Predictive Simulations | February 11-12, 2010 | Argonne TRACC, West Chicago, IL
      Introductory Course: Using LS-OPT® on the TRACC

    1. Computer network control plane tampering monitor

      DOE Patents [OSTI]

      Michalski, John T.; Tarman, Thomas D.; Black, Stephen P.; Torgerson, Mark D.

      2010-06-08

      A computer network control plane tampering monitor that detects unauthorized alteration of a label-switched path setup for an information packet intended for transmission through a computer network.

    2. Fermilab | Science | Particle Physics | Scientific Computing

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Today Fermilab serves as one of two US computing centers that processes and analyzes data from experiments at the Large Hadron Collider. The worldwide LHC computing project is one ...

    3. NERSC Intern Wins Award for Computing Achievement

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      (NCWIT) Aspirations in Computing award on Saturday, March 16, 2013 in a ceremony in San Jose, CA. The award honors young women at the high school level for their computing-related...

    4. The Brain: Key To a Better Computer

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)


    5. High Performance Computational Biology: A Distributed computing Perspective (2010 JGI/ANL HPC Workshop)

      ScienceCinema (OSTI)

      Konerding, David [Google, Inc

      2011-06-08

      David Konerding from Google, Inc. gives a presentation on "High Performance Computational Biology: A Distributed Computing Perspective" at the JGI/Argonne HPC Workshop on January 26, 2010.

    6. Significant Enhancement of Computational Efficiency in Nonlinear Multiscale Battery Model for Computer Aided Engineering (Presentation)

      SciTech Connect (OSTI)

      Kim, G.; Pesaran, A.; Smith, K.; Graf, P.; Jun, M.; Yang, C.; Li, G.; Li, S.; Hochman, A.; Tselepidakis, D.; White, J.

      2014-06-01

      This presentation discusses the significant enhancement of computational efficiency in nonlinear multiscale battery model for computer aided engineering in current research at NREL.

    7. NERSC Intern Wins Award for Computing Achievement

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      NERSC Intern Wins Award for Computing Achievement. March 27, 2013. Linda Vu, lvu@lbl.gov, +1 510 495 2402. Stephanie Cabanela, a student intern in the National Energy Research Scientific Computing Center's (NERSC) Operation Technologies Group, was honored with the Bay Area Affiliate National Center for Women and Information Technology (NCWIT) Aspirations in Computing award on Saturday, March 16, 2013 in a ceremony in San Jose, CA. The award honors

    8. Validating Computer-Designed Proteins for Vaccines

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Thursday, 21 August 2014 12:05. In the struggle to keep up with microbes whose rapid mutations outpace our ability to produce vaccines, the human race has a powerful ally: computers. Researchers have now figured out a way to use computational protein design to generate small, stable proteins that accurately mimic key viral structures; these can then be used in vaccines to induce potent

    9. Future Computing Needs for Innovative Confinement Concepts

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Future Computing Needs for Innovative Confinement Concepts. Charlson C. Kim (charlson@aa.washington.edu), Plasma Science and Innovation Center, University of Washington, Seattle. Presented August 3, 2010, at the Large Scale Computing Needs for Fusion Energy Science Workshop, Rockville, MD. Outline: Introduction to the Plasma Science and Innovation Center; Current Computing Utilization and Resources; Near Term Needs; Concluding Comments.

    10. Computers and Monitors | The Ames Laboratory

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Buying a Computer or Monitor: If you have a need to purchase a computer and/or monitor, follow this How To Guide to search the registry for EPEAT products. On your purchase requisition, indicate whether or not the item is EPEAT registered. *Acceptable justifications/exceptions will be rare for a computer or monitor through the Ames Laboratory storeroom. Both items offered through the storeroom are registered as EPEAT Gold.* Office Electronics - look for ENERGY STAR and

    11. NERSC National Energy Research Scientific Computing Center

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      National Energy Research Scientific Computing Center 2007 Annual Report. Ernest Orlando Lawrence Berkeley National Laboratory, 1 Cyclotron Road, Berkeley, CA 94720-8148. This work was supported by the Director, Office of Science, Office of Advanced Scientific Computing Research of the U.S. Department of Energy under Contract No. DE-AC02-05CH11231. LBNL-1143E, October 2008.

    12. National Energy Research Scientific Computing Center

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      National Energy Research Scientific Computing Center 2001 Annual Report. This work was supported by the Director, Office of Science, Office of Advanced Scientific Computing Research of the U.S. Department of Energy under Contract No. DE-AC03-76SF00098. LBNL-49186, December 2001. NERSC aspires to be a world leader in accelerating scientific discovery through computation. Our vision is to provide high-performance computing tools to tackle science's biggest and most challenging

    13. Computing contingency statistics in parallel.

      SciTech Connect (OSTI)

      Bennett, Janine Camille; Thompson, David; Pebay, Philippe Pierre

      2010-09-01

      Statistical analysis is typically used to reduce the dimensionality of and infer meaning from data. A key challenge of any statistical analysis package aimed at large-scale, distributed data is to address the orthogonal issues of parallel scalability and numerical stability. Many statistical techniques, e.g., descriptive statistics or principal component analysis, are based on moments and co-moments and, using robust online update formulas, can be computed in an embarrassingly parallel manner, amenable to a map-reduce style implementation. In this paper we focus on contingency tables, through which numerous derived statistics such as joint and marginal probability, pointwise mutual information, information entropy, and χ² independence statistics can be directly obtained. However, contingency tables can become large as data size increases, requiring a correspondingly large amount of communication between processors. This potential increase in communication prevents optimal parallel speedup and is the main difference from moment-based statistics, where the amount of inter-processor communication is independent of data size. Here we present the design trade-offs which we made to implement the computation of contingency tables in parallel. We also study the parallel speedup and scalability properties of our open source implementation. In particular, we observe optimal speedup and scalability when the contingency statistics are used in their appropriate context, namely, when the data input is not quasi-diffuse.
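
      A toy version of the pattern described above: each worker counts category pairs in its shard of the data (the map step), and the partial tables are merged by simple addition (the reduce step), which is what makes the computation embarrassingly parallel apart from communicating the tables themselves. This is only an illustration of the idea, not the authors' distributed-memory implementation.

      ```python
      # Map-reduce style contingency table: per-shard counting plus an associative merge.
      from collections import Counter
      from functools import reduce
      from multiprocessing import Pool
      import random

      def partial_table(shard):
          """Map step: count (x, y) co-occurrences in one shard of the data."""
          return Counter((x, y) for x, y in shard)

      def merge(a, b):
          """Reduce step: contingency counts merge by simple addition."""
          a.update(b)
          return a

      if __name__ == "__main__":
          data = [(random.choice("abc"), random.choice("xy")) for _ in range(100_000)]
          shards = [data[i::4] for i in range(4)]
          with Pool(4) as pool:
              table = reduce(merge, pool.map(partial_table, shards), Counter())
          total = sum(table.values())
          # joint and marginal probabilities follow directly from the merged table
          p_xy = {k: v / total for k, v in table.items()}
          print(p_xy)
      ```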

    14. Argonne's Laboratory computing center - 2007 annual report.

      SciTech Connect (OSTI)

      Bair, R.; Pieper, G. W.

      2008-05-28

      Argonne National Laboratory founded the Laboratory Computing Resource Center (LCRC) in the spring of 2002 to help meet pressing program needs for computational modeling, simulation, and analysis. The guiding mission is to provide critical computing resources that accelerate the development of high-performance computing expertise, applications, and computations to meet the Laboratory's challenging science and engineering missions. In September 2002 the LCRC deployed a 350-node computing cluster from Linux NetworX to address Laboratory needs for mid-range supercomputing. This cluster, named 'Jazz', achieved over a teraflop of computing power (10^12 floating-point calculations per second) on standard tests, making it the Laboratory's first terascale computing system and one of the 50 fastest computers in the world at the time. Jazz was made available to early users in November 2002 while the system was undergoing development and configuration. In April 2003, Jazz was officially made available for production operation. Since then, the Jazz user community has grown steadily. By the end of fiscal year 2007, there were over 60 active projects representing a wide cross-section of Laboratory expertise, including work in biosciences, chemistry, climate, computer science, engineering applications, environmental science, geoscience, information science, materials science, mathematics, nanoscience, nuclear engineering, and physics. Most important, many projects have achieved results that would have been unobtainable without such a computing resource. The LCRC continues to foster growth in the computational science and engineering capability and quality at the Laboratory. Specific goals include expansion of the use of Jazz to new disciplines and Laboratory initiatives, teaming with Laboratory infrastructure providers to offer more scientific data management capabilities, expanding Argonne staff use of national computing facilities, and improving the scientific reach and performance of Argonne's computational applications. Furthermore, recognizing that Jazz is fully subscribed, with considerable unmet demand, the LCRC has framed a 'path forward' for additional computing resources.

    15. Interactive Map Shows Geothermal Resources

      Broader source: Energy.gov [DOE]

      The free interactive online map posted recently by the Oregon Department of Geology and Mineral Industries is part of a U.S. Department of Energy project to expand the knowledge of geothermal energy potential nationwide.

    16. Groundwater Report Goes Online, Interactive

      Broader source: Energy.gov [DOE]

      RICHLAND, Wash. – EM’s Richland Operations Office (RL) has moved its 1,200-page annual report on groundwater monitoring to a fully online and interactive web application.

    17. Surprising Quasiparticle Interactions in Graphene

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      that suggest new kinds of devices. Particle Physics in Your Pencil Quantum electrodynamics, or QED, is the theory of many-body interactions first invented in the 1950s by...

    18. Fundamental Interactions - Research - Cyclotron Institute

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      [Figure: Production of 46V with MARS; energy loss versus position on the Y axis.] The Standard Model, which unifies the strong, electromagnetic and weak forces, has been remarkably successful in describing the interactions of quarks and leptons. However, the model is incomplete, and it is the goal of this research program to sensitively probe its limits. Though in most cases we use the nucleus as a micro-laboratory for testing the Standard Model, the implications of the results extend

    19. Interactive Grid | Department of Energy

      Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

      Each time you flick a light switch or press a power button, you enjoy the benefits of the nation's incredible electric grid. The grid is a complex network of people and machinery working around the clock to produce and deliver electricity to millions of homes across the nation. The electric grid works so well, Americans often think about it only when they receive their electric bills, or in those rare instances when there is a power

    20. Adaptable Computing Environment/Self-Assembling Software

      Energy Science and Technology Software Center (OSTI)

      2007-09-25

      Complex software applications are difficult to learn to use and to remember how to use. Further, the user has no control over the functionality available in a given application. The software we use can be created and modified only by a relatively small group of elite, highly skilled artisans known as programmers. "Normal users" are powerless to create and modify software themselves, because the tools for software development, designed by and for programmers, are a barrier to entry. This software, when completed, will be a user-adaptable computing environment in which the user is really in control of his/her own software, able to adapt the system, make new parts of the system interactive, and even modify the behavior of the system itself. Some key features of the basic environment that have been implemented are (a) books in bookcases, where all data is stored, (b) context-sensitive compass menus (compass, because the buttons are located in compass directions relative to the mouse cursor position), (c) importing tabular data and displaying it in a book, (d) light-weight table querying/sorting, (e) a Reach&Get capability (sort of a "smart" copy/paste that prevents the user from copying invalid data), and (f) a LogBook that automatically logs all user actions that change data or the system itself. To bootstrap toward full end-user adaptability, we implemented a set of development tools. With the development tools, compass menus can be made and customized.

    1. Supporting large-scale computational science

      SciTech Connect (OSTI)

      Musick, R., LLNL

      1998-02-19

      Business needs have driven the development of commercial database systems since their inception. As a result, there has been a strong focus on supporting many users, minimizing the potential corruption or loss of data, and maximizing performance metrics like transactions per second, or TPC-C and TPC-D results. It turns out that these optimizations have little to do with the needs of the scientific community, and in particular have little impact on improving the management and use of large-scale high-dimensional data. At the same time, there is an unanswered need in the scientific community for many of the benefits offered by a robust DBMS. For example, tying an ad-hoc query language such as SQL together with a visualization toolkit would be a powerful enhancement to current capabilities. Unfortunately, there has been little emphasis or discussion in the VLDB community on this mismatch over the last decade. The goal of the paper is to identify the specific issues that need to be resolved before large-scale scientific applications can make use of DBMS products. This topic is addressed in the context of an evaluation of commercial DBMS technology applied to the exploration of data generated by the Department of Energy's Accelerated Strategic Computing Initiative (ASCI). The paper describes the data being generated for ASCI as well as current capabilities for interacting with and exploring this data. The attraction of applying standard DBMS technology to this domain is discussed, as well as the technical and business issues that currently make this an infeasible solution.

    2. Pervasive Collaborative Computing Environment Jabber Toolkit

      Energy Science and Technology Software Center (OSTI)

      2004-05-15

      PCCE Project background: Our experience in building distributed collaboratories has shown us that there is a growing need for simple, non-intrusive, and flexible ways to stay in touch and work together. Towards this goal we are developing a Pervasive Collaborative Computing Environment (PCCE) within which participants can rendezvous and interact with each other. The PCCE aims to support continuous or ad hoc collaboration, target daily tasks and base connectivity, be easy to use and install across multiple platforms, leverage off of existing components when possible, use standards-based components, and leverage off of Grid services (e.g., security and directory services). A key concept for this work is "incremental trust", which allows the system's "trust" of a given user to change dynamically. PCCE Jabber client software: This leverages Jabber, an open Instant Messaging (IM) protocol, and the related Internet Engineering Task Force (IETF) standards "XMPP" and "XMPP-IM" to allow collaborating parties to chat either one-on-one or in "chat rooms". Standard Jabber clients will work within this framework, but the software will also include extensions to a (multi-platform) GUI client (Gaim) for X.509-based security, search, and incremental trust. This software also includes Web interfaces for managing user registration to a Jabber server. PCCE Jabber server software: Extensions to the code, database, and configuration files for the dominant open-source Jabber server, "jabberd". Extensions for search, X.509 security, and incremental trust. Note that the jabberd software is not included as part of this software.

    3. Radiological Safety Analysis Computer Program

      Energy Science and Technology Software Center (OSTI)

      2001-08-28

      RSAC-6 is the latest version of the RSAC program. It calculates the consequences of a release of radionuclides to the atmosphere. Using a personal computer, a user can generate a fission product inventory; decay and in-grow the inventory during transport through processes, facilities, and the environment; model the downwind dispersion of the activity; and calculate doses to downwind individuals. Internal dose from the inhalation and ingestion pathways is calculated. External dose from ground surface and plume gamma pathways is calculated. New and exciting updates to the program include the ability to evaluate a release to an enclosed room, resuspension of deposited activity, and evaluation of a release up to 1 meter from the release point. Enhanced tools are included for dry deposition, building wake, occupancy factors, respirable fraction, AMAD adjustment, an updated and enhanced radionuclide inventory, and inclusion of the dose-conversion factors from FGR 11 and 12.

    4. Collective network for computer structures

      DOE Patents [OSTI]

      Blumrich, Matthias A. (Ridgefield, CT); Coteus, Paul W. (Yorktown Heights, NY); Chen, Dong (Croton On Hudson, NY); Gara, Alan (Mount Kisco, NY); Giampapa, Mark E. (Irvington, NY); Heidelberger, Philip (Cortlandt Manor, NY); Hoenicke, Dirk (Ossining, NY); Takken, Todd E. (Brewster, NY); Steinmacher-Burow, Burkhard D. (Wernau, DE); Vranas, Pavlos M. (Bedford Hills, NY)

      2011-08-16

      A system and method for enabling high-speed, low-latency global collective communications among interconnected processing nodes. The global collective network optimally enables collective reduction operations to be performed during parallel algorithm operations executing in a computer structure having a plurality of the interconnected processing nodes. Router devices are included that interconnect the nodes of the network via links to facilitate performance of low-latency global processing operations at nodes of the virtual network and class structures. The global collective network may be configured to provide global barrier and interrupt functionality in asynchronous or synchronized manner. When implemented in a massively-parallel supercomputing structure, the global collective network is physically and logically partitionable according to needs of a processing algorithm.

    5. Foundational Tools for Petascale Computing

      SciTech Connect (OSTI)

      Miller, Barton

      2014-05-19

      The Paradyn project has a history of developing algorithms, techniques, and software that push the cutting edge of tool technology for high-end computing systems. Under this funding, we are working on a three-year agenda to make substantial new advances in support of new and emerging Petascale systems. The overall goal for this work is to address the steady increase in complexity of these petascale systems. Our work covers two key areas: (1) The analysis, instrumentation and control of binary programs. Work in this area falls under the general framework of the Dyninst API tool kits. (2) Infrastructure for building tools and applications at extreme scale. Work in this area falls under the general framework of the MRNet scalability framework. Note that work done under this funding is closely related to work done under a contemporaneous grant, High-Performance Energy Applications and Systems, SC0004061/FG02-10ER25972, UW PRJ36WV.

    6. Collective network for computer structures

      DOE Patents [OSTI]

      Blumrich, Matthias A; Coteus, Paul W; Chen, Dong; Gara, Alan; Giampapa, Mark E; Heidelberger, Philip; Hoenicke, Dirk; Takken, Todd E; Steinmacher-Burow, Burkhard D; Vranas, Pavlos M

      2014-01-07

      A system and method for enabling high-speed, low-latency global collective communications among interconnected processing nodes. The global collective network optimally enables collective reduction operations to be performed during parallel algorithm operations executing in a computer structure having a plurality of the interconnected processing nodes. Router devices are included that interconnect the nodes of the network via links to facilitate performance of low-latency global processing operations at nodes of the virtual network. The global collective network may be configured to provide global barrier and interrupt functionality in asynchronous or synchronized manner. When implemented in a massively-parallel supercomputing structure, the global collective network is physically and logically partitionable according to the needs of a processing algorithm.

    7. ASCR Workshop on Quantum Computing for Science

      SciTech Connect (OSTI)

      Aspuru-Guzik, Alan; Van Dam, Wim; Farhi, Edward; Gaitan, Frank; Humble, Travis; Jordan, Stephen; Landahl, Andrew J; Love, Peter; Lucas, Robert; Preskill, John; Muller, Richard P.; Svore, Krysta; Wiebe, Nathan; Williams, Carl

      2015-06-01

      This report details the findings of the DOE ASCR Workshop on Quantum Computing for Science that was organized to assess the viability of quantum computing technologies to meet the computational requirements of the DOE’s science and energy mission, and to identify the potential impact of quantum technologies. The workshop was held on February 17-18, 2015, in Bethesda, MD, to solicit input from members of the quantum computing community. The workshop considered models of quantum computation and programming environments, physical science applications relevant to DOE's science mission as well as quantum simulation, and applied mathematics topics including potential quantum algorithms for linear algebra, graph theory, and machine learning. This report summarizes these perspectives into an outlook on the opportunities for quantum computing to impact problems relevant to the DOE’s mission as well as the additional research required to bring quantum computing to the point where it can have such impact.

    8. Cupola Furnace Computer Process Model

      SciTech Connect (OSTI)

      Seymour Katz

      2004-12-31

      The cupola furnace generates more than 50% of the liquid iron used to produce the 9+ million tons of castings annually. The cupola converts iron and steel into cast iron. The main advantages of the cupola furnace are lower energy costs than those of competing furnaces (electric) and the ability to melt less expensive metallic scrap than the competing furnaces. However, the chemical and physical processes that take place in the cupola furnace are highly complex, making it difficult to operate the furnace in optimal fashion. The results are low energy efficiency and poor recovery of important and expensive alloy elements due to oxidation. Between 1990 and 2004, under the auspices of the Department of Energy, the American Foundry Society and General Motors Corp., a computer simulation of the cupola furnace was developed that accurately describes the complex behavior of the furnace. When provided with the furnace input conditions, the model provides accurate values of the output conditions in a matter of seconds. It also provides key diagnostics. Using clues from the diagnostics, a trained specialist can infer changes in the operation that will move the system toward higher efficiency. Repeating the process in an iterative fashion leads to near-optimum operating conditions with just a few iterations. More advanced uses of the program have been examined. The program is currently being combined with an "Expert System" to permit optimization in real time. The program has been combined with "neural network" programs to effect very easy scanning of a wide range of furnace operation. Rudimentary efforts were successfully made to operate the furnace using a computer. References to these more advanced systems will be found in the "Cupola Handbook", Chapter 27, American Foundry Society, Des Plaines, IL (1999).
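
      The iterate-toward-optimum workflow sketched above (run the model, read the diagnostics, adjust the inputs, repeat) can be pictured as a short loop. The function and variable names below are hypothetical stand-ins; the real AFS/GM model exposes far richer inputs and diagnostics than this toy surrogate.

      ```python
      # Schematic of the iterate-toward-optimum workflow described in the abstract.
      # cupola_model() is a hypothetical stand-in for the real simulation code.

      def cupola_model(blast_rate, coke_ratio):
          """Placeholder model: returns (melt efficiency, diagnostic hints)."""
          eff = 1.0 - (blast_rate - 3.0) ** 2 - (coke_ratio - 0.12) ** 2
          hints = {"blast_rate": 3.0 - blast_rate, "coke_ratio": 0.12 - coke_ratio}
          return eff, hints

      blast_rate, coke_ratio = 2.0, 0.20
      for iteration in range(5):
          efficiency, hints = cupola_model(blast_rate, coke_ratio)
          print(f"iter {iteration}: efficiency = {efficiency:.3f}")
          # a trained specialist (or an expert system) adjusts inputs using the diagnostics
          blast_rate += 0.5 * hints["blast_rate"]
          coke_ratio += 0.5 * hints["coke_ratio"]
      ```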

    9. The Nuclear Energy Advanced Modeling and Simulation Enabling Computational Technologies FY09 Report

      SciTech Connect (OSTI)

      Diachin, L F; Garaizar, F X; Henson, V E; Pope, G

      2009-10-12

      In this document we report on the status of the Nuclear Energy Advanced Modeling and Simulation (NEAMS) Enabling Computational Technologies (ECT) effort. In particular, we provide the context for ECT in the broader NEAMS program and describe the three pillars of the ECT effort, namely, (1) tools and libraries, (2) software quality assurance, and (3) computational facility (computers, storage, etc.) needs. We report on our FY09 deliverables to determine the needs of the integrated performance and safety codes (IPSCs) in these three areas and lay out the general plan for software quality assurance to meet the requirements of DOE and the DOE Advanced Fuel Cycle Initiative (AFCI). We conclude with a brief description of our interactions with the Idaho National Laboratory computer center to determine what is needed to expand their role as a NEAMS user facility.

    10. Chameleon: A Computer Science Testbed as Application of Cloud...

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Chameleon: A Computer Science Testbed as Application of Cloud Computing Event Sponsor: Mathematics and Computing Science Brownbag Lunch Start Date: Dec 15 2015 - 12:00pm Building...

    11. Large Scale Computing and Storage Requirements for Advanced Scientific...

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Large Scale Computing and Storage Requirements for Advanced Scientific Computing Research: Target 2014

    12. Previous Computer Science Award Announcements | U.S. DOE Office...

      Office of Science (SC) Website


    13. Technical Standards, Guidance on MELCOR computer code - May 3...

      Office of Environmental Management (EM)

      Technical Standards, Guidance on the MELCOR computer code, May 3, 2004: MELCOR Computer Code Application Guidance...

    14. High-performance computer system installed at Los Alamos National...

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      New high-performance computer system, called Wolf,...

    15. Determining protein function and interaction from genome analysis

      DOE Patents [OSTI]

      Eisenberg, David; Marcotte, Edward M.; Thompson, Michael J.; Pellegrini, Matteo; Yeates, Todd O.

      2004-08-03

      A computational method, system, and computer program are provided for inferring functional links from genome sequences. One method is based on the observation that some pairs of proteins A' and B' have homologs in another organism fused into a single protein chain AB. A trans-genome comparison of sequences can reveal these AB sequences, which are Rosetta Stone sequences because they decipher an interaction between A' and B'. Another method compares the genomic sequence of two or more organisms to create a phylogenetic profile for each protein indicating its presence or absence across all the genomes. The profile provides information regarding functional links between different families of proteins. In yet another method a combination of the above two methods is used to predict functional links.
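
      To illustrate the phylogenetic-profile idea described above, the sketch below builds a presence/absence vector for each protein family across a handful of genomes and flags pairs with identical profiles as candidate functional links. The genomes, protein names, and homolog hits are invented for the example.

      ```python
      # Toy phylogenetic profiles: presence/absence of each protein family across genomes;
      # identical profiles suggest a candidate functional link.
      from itertools import combinations

      genomes = ["G1", "G2", "G3", "G4"]
      presence = {                       # hypothetical homolog hits per protein family
          "protA": {"G1", "G2", "G4"},
          "protB": {"G1", "G2", "G4"},
          "protC": {"G1", "G3"},
      }

      def profile(protein):
          return tuple(int(g in presence[protein]) for g in genomes)

      links = [(p, q) for p, q in combinations(presence, 2) if profile(p) == profile(q)]
      print(links)   # [('protA', 'protB')] -> candidates for a functional link
      ```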

    16. Communication: Self-interaction correction with unitary invariance in density functional theory

      SciTech Connect (OSTI)

      Pederson, Mark R.; Ruzsinszky, Adrienn; Perdew, John P.; Department of Chemistry, Temple University, Philadelphia, Pennsylvania 19122

      2014-03-28

      Standard spin-density functionals for the exchange-correlation energy of a many-electron ground state make serious self-interaction errors which can be corrected by the Perdew-Zunger self-interaction correction (SIC). We propose a size-extensive construction of SIC orbitals which, unlike earlier constructions, makes SIC computationally efficient, and a true spin-density functional. The SIC orbitals are constructed from a unitary transformation that is explicitly dependent on the non-interacting one-particle density matrix. When this SIC is applied to the local spin-density approximation, improvements are found for the atomization energies of molecules.
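
      For orientation, the Perdew-Zunger correction referred to above removes, orbital by orbital, the spurious self-Hartree and self-exchange-correlation energy from an underlying density functional approximation (DFA). This is the standard textbook form of the correction, not the unitarily invariant construction introduced in the paper:

      ```latex
      E^{\mathrm{SIC\text{-}DFA}}[\{n_{i\sigma}\}]
        = E^{\mathrm{DFA}}[n_\uparrow, n_\downarrow]
        - \sum_{i,\sigma}^{\mathrm{occ}} \Big( U[n_{i\sigma}] + E_{xc}^{\mathrm{DFA}}[n_{i\sigma}, 0] \Big),
      \qquad
      U[n] = \frac{1}{2}\iint \frac{n(\mathbf{r})\,n(\mathbf{r}')}{|\mathbf{r}-\mathbf{r}'|}\, d\mathbf{r}\, d\mathbf{r}' .
      ```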

    17. Locating hardware faults in a parallel computer

      DOE Patents [OSTI]

      Archer, Charles J.; Megerian, Mark G.; Ratterman, Joseph D.; Smith, Brian E.

      2010-04-13

      Locating hardware faults in a parallel computer, including defining within a tree network of the parallel computer two or more sets of non-overlapping test levels of compute nodes of the network that together include all the data communications links of the network, each non-overlapping test level comprising two or more adjacent tiers of the tree; defining test cells within each non-overlapping test level, each test cell comprising a subtree of the tree including a subtree root compute node and all descendant compute nodes of the subtree root compute node within a non-overlapping test level; performing, separately on each set of non-overlapping test levels, an uplink test on all test cells in a set of non-overlapping test levels; and performing, separately from the uplink tests and separately on each set of non-overlapping test levels, a downlink test on all test cells in a set of non-overlapping test levels.
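
      To make the partitioning scheme concrete, the toy sketch below splits a complete binary tree of compute nodes into one set of non-overlapping test levels of two adjacent tiers each and lists the test-cell roots in each level; the patent uses two or more offset sets so that every link is covered, and the uplink/downlink tests themselves are hardware-specific and are not modeled here.

      ```python
      # Toy partition of a complete binary tree (nodes numbered 1..2**depth - 1, root = 1)
      # into non-overlapping test levels of two adjacent tiers; each node in the upper
      # tier of a level is the root of one test cell (its subtree within that level).

      def test_levels(depth, tiers_per_level=2):
          levels = []
          for top in range(0, depth, tiers_per_level):
              tiers = list(range(top, min(top + tiers_per_level, depth)))
              cell_roots = list(range(2 ** tiers[0], 2 ** (tiers[0] + 1)))
              levels.append({"tiers": tiers, "cell_roots": cell_roots})
          return levels

      for level in test_levels(depth=4):
          print(level)   # tiers [0, 1] with cell root 1, then tiers [2, 3] with roots 4..7
      ```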

    18. Impact analysis on a massively parallel computer

      SciTech Connect (OSTI)

      Zacharia, T.; Aramayo, G.A.

      1994-06-01

      Advanced mathematical techniques and computer simulation play a major role in evaluating and enhancing the design of beverage cans, industrial, and transportation containers for improved performance. Numerical models are used to evaluate the impact requirements of containers used by the Department of Energy (DOE) for transporting radioactive materials. Many of these models are highly compute-intensive. An analysis may require several hours of computational time on current supercomputers despite the simplicity of the models being studied. As computer simulations and materials databases grow in complexity, massively parallel computers have become important tools. Massively parallel computational research at the Oak Ridge National Laboratory (ORNL) and its application to the impact analysis of shipping containers is briefly described in this paper.

    19. Experimental and Modeling Investigation of Radionuclide Interaction...

      Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

      interactions with clay minerals with results suggesting that iodide may directly interact with clays by forming ion-pairs which may concentrate within the interlayer space as...

    20. Structural Interactions within Lithium Salt Solvates: Acyclic...

      Office of Scientific and Technical Information (OSTI)


    1. Performing an allreduce operation on a plurality of compute nodes of a parallel computer

      DOE Patents [OSTI]

      Faraj, Ahmad (Rochester, MN)

      2012-04-17

      Methods, apparatus, and products are disclosed for performing an allreduce operation on a plurality of compute nodes of a parallel computer. Each compute node includes at least two processing cores. Each processing core has contribution data for the allreduce operation. Performing an allreduce operation on a plurality of compute nodes of a parallel computer includes: establishing one or more logical rings among the compute nodes, each logical ring including at least one processing core from each compute node; performing, for each logical ring, a global allreduce operation using the contribution data for the processing cores included in that logical ring, yielding a global allreduce result for each processing core included in that logical ring; and performing, for each compute node, a local allreduce operation using the global allreduce results for each processing core on that compute node.
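
      A plain-Python sketch of the two-phase scheme in the claim: cores are grouped into logical rings (one core from each node per ring), each ring reduces its members' contributions globally, and the cores on each node then combine their ring results locally. Real implementations would do this with message passing over the network; here the "rings" are just lists, so only the structure of the algorithm is shown.

      ```python
      # Toy model of the claimed allreduce pattern: global reduction per logical ring,
      # then a local reduction across the cores of each compute node.
      import operator
      from functools import reduce

      nodes = 4
      cores_per_node = 2
      # contribution[node][core] -- arbitrary example data
      contribution = [[10 * n + c for c in range(cores_per_node)] for n in range(nodes)]

      # Phase 1: one logical ring per core index, containing that core from every node.
      ring_result = []
      for c in range(cores_per_node):
          ring = [contribution[n][c] for n in range(nodes)]
          ring_result.append(reduce(operator.add, ring))    # global allreduce within the ring

      # Phase 2: each node combines the ring results held by its cores.
      allreduce_result = reduce(operator.add, ring_result)   # local allreduce on every node
      assert allreduce_result == sum(sum(row) for row in contribution)
      print(allreduce_result)
      ```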

    2. Elementary Particles and Weak Interactions

      DOE R&D Accomplishments [OSTI]

      Lee, T. D.; Yang, C. N.

      1957-01-01

      Some general patterns of interactions between various elementary particles are reviewed and some general questions concerning the symmetry properties of these particles are studied. Topics are included on the theta-tau puzzle, experimental limits on the validity of parity conservation, some general discussions on the consequences due to possible non-invariance under P, C, and T, various possible experimental tests on invariance under P, C, and T, a two-component theory of the neutrino, a possible law of conservation of leptons and the universal Fermi interactions, and time reversal invariance and Mach's principle. (M.H.R.)

    3. Multicore: Fallout from a Computing Evolution

      ScienceCinema (OSTI)

      Yelick, Kathy [Director, NERSC

      2009-09-01

      July 22, 2008 Berkeley Lab lecture: Parallel computing used to be reserved for big science and engineering projects, but in two years that's all changed. Even laptops and hand-helds use parallel processors. Unfortunately, the software hasn't kept pace. Kathy Yelick, Director of the National Energy Research Scientific Computing Center at Berkeley Lab, describes the resulting chaos and the computing community's efforts to develop exciting applications that take advantage of tens or hundreds of processors on a single chip.

    4. Validating Computer-Designed Proteins for Vaccines

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      In the struggle to keep up with microbes whose rapid mutations outpace our ability to produce vaccines, the human race has a powerful ally: computers. Researchers have now figured out a way to use computational protein design to generate small, stable proteins that accurately mimic key viral structures; these can then be used in vaccines to induce potent neutralizing antibodies. The results were validated in part using protein structures

    5. Shaping Future Supercomputing Argonne Leadership Computing Facility

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Argonne Leadership Computing Facility 2011 Annual Report: Shaping Future Supercomputing (ANL-12/22), www.alcf.anl.gov. Contents: Overview; Mira; Science Highlights; Computing Resources; 2011 ALCF Publications; 2012 INCITE Projects

    6. Present and Future Computing Requirements for PETSc

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Present and Future Computing Requirements for PETSc. Jed Brown (jedbrown@mcs.anl.gov), Mathematics and Computer Science Division, Argonne National Laboratory, and Department of Computer Science, University of Colorado Boulder. NERSC ASCR Requirements for 2017, 2014-01-15. Extending PETSc's Hierarchically Nested Solvers: ANL (Lois C. McInnes, Barry Smith, Jed Brown, Satish Balay), UChicago (Matt Knepley), IIT (Hong Zhang), LBL (Mark Adams). Linear solvers, nonlinear solvers, time integrators, optimization methods (merged TAO)

    7. High Performance Computing Student Career Resources

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      High Performance Computing Student Career Resources: Explore the multiple dimensions of a career at Los Alamos Lab: work with the best minds on the planet in an inclusive environment that is rich in intellectual vitality and opportunities for growth. Student Liaison: Josephine Kilde, (505) 667-5086. High Performance Computing Capabilities: The High Performance Computing (HPC) Division supports the Laboratory mission by managing world-class Supercomputing Centers. Our

    11. INCITE Program | Argonne Leadership Computing Facility

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      INCITE Program: Innovative and Novel Computational Impact on Theory and Experiment. The INCITE program provides allocations to computationally intensive,

    12. Improved computer models support genetics research

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Simple computer models unravel genetic stress reactions in cells. Integrated biological and computational methods provide insight into why genes are activated. February 8, 2013. This molecular structure depicts a yeast transfer ribonucleic acid (tRNA), which carries a single amino acid to the ribosome during protein construction. A combined

    13. Fermilab | Science at Fermilab | Computing | Networking

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Physicists are constantly exchanging information, within Fermilab and between Fermilab and collaborating institutions. They do this from the design phase of an experiment to long after they have finished collecting data. To move huge amounts of data from one place to another, Fermilab needs high-performance networking. For years, Fermilab has been the largest user of the Energy Sciences Network, or ESnet, a network the Department

    14. Accounts Policy | Argonne Leadership Computing Facility

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Accounts Policy All holders of user accounts must abide by all appropriate Argonne Leadership Computing Facility and Argonne National Laboratory computing usage policies. These are described at the time of the account request and include requirements such as using a sufficiently strong password, appropriate use of the system, and so on. Any user not following these requirements will have their account disabled. Furthermore, ALCF resources are intended to be used as a computing resource for

    15. Introduction to High Performance Computing Using GPUs

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      NERSC, NVIDIA, and The Portland Group presented a one-day workshop "Introduction to High Performance Computing Using GPUs" on July 11, 2013 in Room 250 of Sutardja Dai Hall on the University of California, Berkeley, campus. Registration was free and open to all NERSC users; Berkeley Lab Researchers; UC students, faculty, and staff; and users of the Oak Ridge Leadership Computing Facility. This workshop

    16. Marta Garcia Martinez | Argonne Leadership Computing Facility

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Marta Garcia Martinez, Assistant Computational Scientist, Argonne National Laboratory, 9700 South Cass Avenue, Building 240, Rm. 1132, Argonne, IL 60439, 630-252-0091, mgarcia@alcf.anl.gov, http://web.alcf.anl.gov/~mgarcia/ Marta García is an Assistant Computational Scientist at the ALCF. She is part of the Catalyst Team, where she focuses on assisting Computational Fluid Dynamics projects to maximize and accelerate their research on ALCF resources. She obtained a degree in

    17. Computer Networking Group | Stanford Synchrotron Radiation Lightsource

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Computer Networking Group: Do you need help? For assistance, please submit a CNG Help Request ticket. Chris Ramirez, SSRL Computer and Networking Group, (650) 926-2901 | email. Jerry Camuso, SSRL Computer and Networking Group, (650) 926-2994 | email. Networking Support: The Networking group provides connectivity and communications services for SSRL. The services provided by the Networking Support Group include: Local Area Network support for cable and wireless connectivity. Installation and

    18. Digital computer operation of a nuclear reactor

      DOE Patents [OSTI]

      Colley, Robert W. (Richland, WA)

      1984-01-01

      A method is described for the safe operation of a complex system, such as a nuclear reactor, using a digital computer. The computer is supplied with a database containing a list of the safe states of the reactor and a list of operating instructions for achieving a safe state. When the actual state of the reactor does not correspond to a listed safe state, the computer selects operating instructions to return the reactor to a safe state.
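
      Read as an algorithm, the abstract describes a simple supervisory loop: a database of listed safe states plus, for each, the operating instructions that restore it; each cycle compares the actual reactor state against the list and, on a mismatch, issues the stored instructions. The Python sketch below illustrates that loop under those assumptions; the class and function names are hypothetical and this is not the patented implementation.

      # A minimal sketch of the supervisory loop described in the patent abstract.
      # All names (SafeState, control_step, issue_instruction) are illustrative.

      from dataclasses import dataclass, field
      from typing import Callable, Dict, List


      @dataclass(frozen=True)
      class SafeState:
          """One entry in the database of listed safe reactor states."""
          name: str
          matches: Callable[[Dict[str, float]], bool]            # does the actual state correspond to this safe state?
          recovery_instructions: List[str] = field(default_factory=list)  # operating instructions for reaching it


      def control_step(actual_state: Dict[str, float],
                       safe_states: List[SafeState],
                       issue_instruction: Callable[[str], None]) -> bool:
          """Return True if the reactor is already in a listed safe state;
          otherwise issue the operating instructions that return it to one."""
          for state in safe_states:
              if state.matches(actual_state):
                  return True                 # actual state corresponds to a listed safe state
          # No listed safe state matched: select a recovery plan and issue it.
          # (How the patent ranks candidate safe states is not detailed in the
          # abstract, so this sketch simply takes the first entry.)
          target = safe_states[0]
          for instruction in target.recovery_instructions:
              issue_instruction(instruction)
          return False


      # Hypothetical usage: one listed safe state defined by a coolant temperature limit.
      hot_standby = SafeState(
          name="hot standby",
          matches=lambda s: s["coolant_temp_C"] < 300.0,
          recovery_instructions=["insert control rods", "increase coolant flow"],
      )
      control_step({"coolant_temp_C": 345.0}, [hot_standby], issue_instruction=print)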

    19. User Advisory Council | Argonne Leadership Computing Facility

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      The User Advisory Council meets regularly to review major policies and to provide user feedback to the facility leadership. All council members are active Principal Investigators or users of ALCF computational resources through one or more of the allocation programs. Martin Berzins, Professor, Department of Computer Science, Scientific Computing and

    20. Improved computer models support genetics research

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Simple computer models unravel genetic stress reactions in cells: Integrated biological and computational methods provide insight into why genes are activated. February 8, 2013. This molecular structure depicts a yeast transfer ribonucleic acid (tRNA), which carries a single amino acid to the ribosome during protein construction. A combined experimental and