National Library of Energy BETA

Sample records for distributed computing group

  1. NERSC seeks Computational Systems Group Lead

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    NERSC seeks Computational Systems Group Lead. January 6, 2011, by Katie Antypas. Note: This position is now closed. The Computational Systems Group (CSG) provides production support and advanced development for the supercomputer systems at NERSC (National Energy Research Scientific Computing Center). The position manages the CSG. These systems, which

  2. Computer Networking Group | Stanford Synchrotron Radiation Lightsource

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Computer Networking Group. Do you need help? For assistance please submit a CNG Help Request ticket. Contacts: Chris Ramirez, SSRL Computer and Networking Group, (650) 926-2901; Jerry Camuso, SSRL Computer and Networking Group, (650) 926-2994. Networking Support: The Networking group provides connectivity and communications services for SSRL. The services provided by the Networking Support Group include: Local Area Network support for cable and wireless connectivity. Installation and

  3. Distributed Energy Financial Group | Open Energy Information

    Open Energy Info (EERE)

    Name: Distributed Energy Financial Group. Place: Washington, DC. Zip: 20016-2512. Sector: Services. Product: The...

  4. Distributed Energy Systems Integration Group (Fact Sheet)

    SciTech Connect (OSTI)

    Not Available

    2009-10-01

    Factsheet developed to describe the activities of the Distributed Energy Systems Integration Group within NREL's Electricity, Resources, and Buildings Systems Integration center.

  5. Snowmass Computing Frontier I2: Distributed

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Snowmass Computing Frontier I2: Distributed Computing and Facility Infrastructures. Ken Bloom and Richard Gerber, July 31, 2013. Who we are: Ken Bloom, Associate Professor, Department of Physics and Astronomy, University of Nebraska-Lincoln; Co-PI for the Nebraska CMS Tier-2 computing facility; Tier-2 program manager and Deputy Manager of Software and Computing for US CMS; Tier-2

  6. Distributions of methyl group rotational barriers in polycrystalline organic solids

    SciTech Connect (OSTI)

    Beckmann, Peter A.; Conn, Kathleen G. (Division of Education and Human Services, Neumann University, One Neumann Drive, Aston, Pennsylvania 19014-1298); Mallory, Clelia W. (Department of Chemistry, Bryn Mawr College, 101 North Merion Ave., Bryn Mawr, Pennsylvania 19010-2899); Mallory, Frank B.; Rheingold, Arnold L.; Rotkina, Lolita; Wang, Xianlong (E-mail: wangxianlong@uestc.edu.cn)

    2013-11-28

    We bring together solid state ¹H spin-lattice relaxation rate measurements, scanning electron microscopy, single crystal X-ray diffraction, and electronic structure calculations for two methyl substituted organic compounds to investigate methyl group (CH₃) rotational dynamics in the solid state. Methyl group rotational barrier heights are computed using electronic structure calculations, both in isolated molecules and in molecular clusters mimicking a perfect single crystal environment. The calculations are performed on suitable clusters built from the X-ray diffraction studies. These calculations allow for an estimate of the intramolecular and the intermolecular contributions to the barrier heights. The ¹H relaxation measurements, on the other hand, are performed with polycrystalline samples which have been investigated with scanning electron microscopy. The ¹H relaxation measurements are best fitted with a distribution of activation energies for methyl group rotation and we propose, based on the scanning electron microscopy images, that this distribution arises from molecules near crystallite surfaces or near other crystal imperfections (vacancies, dislocations, etc.). An activation energy characterizing this distribution is compared with a barrier height determined from the electronic structure calculations and a consistent model for methyl group rotation is developed. The compounds are 1,6-dimethylphenanthrene and 1,8-dimethylphenanthrene and the methyl group barriers being discussed and compared are in the 2–12 kJ mol⁻¹ range.

  7. Interoperable PKI Data Distribution in Computational Grids

    SciTech Connect (OSTI)

    Pala, Massimiliano; Cholia, Shreyas; Rea, Scott A.; Smith, Sean W.

    2008-07-25

    One of the most successful working examples of virtual organizations, computational grids need authentication mechanisms that inter-operate across domain boundaries. Public Key Infrastructures (PKIs) provide sufficient flexibility to allow resource managers to securely grant access to their systems in such distributed environments. However, as PKIs grow and services are added to enhance both security and usability, users and applications must struggle to discover available resources, particularly when the Certification Authority (CA) is alien to the relying party. This article presents how to overcome these limitations of the current grid authentication model by integrating the PKI Resource Query Protocol (PRQP) into the Grid Security Infrastructure (GSI).

  8. Bio-Derived Liquids to Hydrogen Distributed Reforming Working Group

    Broader source: Energy.gov [DOE]

    The Bio-Derived Liquids to Hydrogen Distributed Reforming Working Group (BILIWG), launched in October 2006, provides a forum for effective communication and collaboration among participants in DOE...

  9. Bio-Derived Liquids to Hydrogen Distributed Reforming Working Group

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Bio-Derived Liquids to Hydrogen Distributed Reforming Working Group Kick-Off Meeting. The U.S. Department of Energy held a kick-off meeting for the Bio-Derived Liquids to Hydrogen Distributed Reforming Working Group (BILIWG) on October 24, 2006, in Baltimore, Maryland. The Working Group is addressing technical challenges to distributed reforming of biomass-derived,

  10. High Performance Computational Biology: A Distributed computing Perspective (2010 JGI/ANL HPC Workshop)

    ScienceCinema (OSTI)

    Konerding, David [Google, Inc.]

    2011-06-08

    David Konerding from Google, Inc. gives a presentation on "High Performance Computational Biology: A Distributed Computing Perspective" at the JGI/Argonne HPC Workshop on January 26, 2010.

  11. Perspectives on distributed computing : thirty people, four user types, and the distributed computing user experience.

    SciTech Connect (OSTI)

    Childers, L.; Liming, L.; Foster, I.; Mathematics and Computer Science; Univ. of Chicago

    2008-10-15

    This report summarizes the methodology and results of a user perspectives study conducted by the Community Driven Improvement of Globus Software (CDIGS) project. The purpose of the study was to document the work-related goals and challenges facing today's scientific technology users, to record their perspectives on Globus software and the distributed-computing ecosystem, and to provide recommendations to the Globus community based on the observations. Globus is a set of open source software components intended to provide a framework for collaborative computational science activities. Rather than attempting to characterize all users or potential users of Globus software, our strategy has been to speak in detail with a small group of individuals in the scientific community whose work appears to be the kind that could benefit from Globus software, learn as much as possible about their work goals and the challenges they face, and describe what we found. The result is a set of statements about specific individuals' experiences. We do not claim that these are representative of a potential user community, but we do claim to have found commonalities and differences among the interviewees that may be reflected in the user community as a whole. We present these as a series of hypotheses that can be tested by subsequent studies, and we offer recommendations to Globus developers based on the assumption that these hypotheses are representative. Specifically, we conducted interviews with thirty technology users in the scientific community. We included both people who have used Globus software and those who have not. We made a point of including individuals who represent a variety of roles in scientific projects, for example, scientists, software developers, engineers, and infrastructure providers. The following material is included in this report: (1) A summary of the reported work-related goals, significant issues, and points of satisfaction with the use of Globus software; (2) A method for characterizing users according to their technology interactions, and identification of four user types among the interviewees using the method; (3) Four profiles that highlight points of commonality and diversity in each user type; (4) Recommendations for technology developers and future studies; (5) A description of the interview protocol and overall study methodology; (6) An anonymized list of the interviewees; and (7) Interview writeups and summary data. The interview summaries in Section 3 and transcripts in Appendix D illustrate the value of distributed computing software--and Globus in particular--to scientific enterprises. They also document opportunities to make these tools still more useful both to current users and to new communities. We aim our recommendations at developers who intend their software to be used and reused in many applications. (This kind of software is often referred to as 'middleware.') Our two core recommendations are as follows. First, it is essential for middleware developers to understand and explicitly manage the multiple user products in which their software components are used. We must avoid making assumptions about the commonality of these products and, instead, study and account for their diversity. Second, middleware developers should engage in different ways with different kinds of users. Having identified four general user types in Section 4, we provide specific ideas for how to engage them in Section 5.

  12. Bio-Derived Liquids to Hydrogen Distributed Reforming Working Group

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Bio-Derived Liquids to Hydrogen Distributed Reforming Working Group (BILIWG), Hydrogen Separation and Purification Working Group (PURIWG) & Hydrogen Production Technical Team: 2007 Annual and Merit Review Reports compiled for the

  13. Distributed Design and Analysis of Computer Experiments

    Energy Science and Technology Software Center (OSTI)

    2002-11-11

    DDACE is a C++ object-oriented software library for the design and analysis of computer experiments. DDACE can be used to generate samples from a variety of sampling techniques. These samples may be used as input to an application code. DDACE also contains statistical tools such as response surface models and correlation coefficients to analyze input/output relationships between variables in an application code. DDACE can generate input values for uncertain variables within a user's application. For example, a user might like to vary a temperature variable as well as some material variables in a series of simulations. Through the series of simulations the user might be looking for optimal settings of parameters based on some user criteria. Or the user may be interested in the sensitivity to input variability shown by an output variable. In either case, the user may provide information about the suspected ranges and distributions of a set of input variables, along with a sampling scheme, and DDACE will generate input points based on these specifications. The input values generated by DDACE and the one or more outputs computed through the user's application code can be analyzed with a variety of statistical methods. This can lead to a wealth of information about the relationships between the variables in the problem. While statistical and mathematical packages may be employed to carry out the analysis on the input/output relationships, DDACE also contains some tools for analyzing the simulation data. DDACE incorporates a software package called MARS (Multivariate Adaptive Regression Splines), developed by Jerome Friedman. MARS is used for generating a spline surface fit of the data. With MARS, a model simplification may be calculated using the input and corresponding output values for the user's application problem. The MARS grid data may be used for generating 3-dimensional response surface plots of the simulation data. DDACE also contains an implementation of an algorithm by Michael McKay to compute variable correlations. DDACE can also be used to carry out a main-effects analysis to calculate the sensitivity of an output variable to each of the varied inputs taken individually.
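    The sampling-then-analysis loop described above (draw input points from user-specified ranges and distributions, run the application code on each point, then examine input/output relationships) can be sketched briefly. The snippet below is only an illustration in Python with numpy/scipy, not DDACE's C++ API; the bounds and the stand-in application_code function are hypothetical.

```python
# Minimal sketch of a design-and-analysis-of-computer-experiments loop,
# in the spirit of the DDACE description above (not the DDACE API itself).
import numpy as np
from scipy.stats import qmc

rng = np.random.default_rng(0)

# User-specified ranges for two uncertain inputs: temperature and conductivity.
lower = np.array([300.0, 0.1])   # hypothetical lower bounds
upper = np.array([400.0, 2.0])   # hypothetical upper bounds

# Generate a Latin hypercube sample over the input space.
sampler = qmc.LatinHypercube(d=2, seed=0)
X = qmc.scale(sampler.random(n=50), lower, upper)

def application_code(x):
    """Stand-in for the user's simulation: returns one output per input point."""
    temperature, conductivity = x
    return conductivity * np.exp(-1000.0 / temperature) + rng.normal(0, 1e-4)

y = np.array([application_code(x) for x in X])

# Simple input/output analysis: correlation of each input with the output
# (a crude analogue of the correlation and main-effects tools mentioned above).
for name, column in zip(["temperature", "conductivity"], X.T):
    r = np.corrcoef(column, y)[0, 1]
    print(f"correlation({name}, output) = {r:+.3f}")
```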

  14. Working Group Report: Computing for the Intensity Frontier

    SciTech Connect (OSTI)

    Rebel, B.; Sanchez, M.C.; Wolbers, S.

    2013-10-25

    This is the report of the Computing Frontier working group on Lattice Field Theory prepared for the proceedings of the 2013 Community Summer Study ("Snowmass"). We present the future computing needs and plans of the U.S. lattice gauge theory community and argue that continued support of the U.S. (and worldwide) lattice-QCD effort is essential to fully capitalize on the enormous investment in the high-energy physics experimental program. We first summarize the dramatic progress of numerical lattice-QCD simulations in the past decade, with some emphasis on calculations carried out under the auspices of the U.S. Lattice-QCD Collaboration, and describe a broad program of lattice-QCD calculations that will be relevant for future experiments at the intensity and energy frontiers. We then present details of the computational hardware and software resources needed to undertake these calculations.

  15. Parallel Computing Environments and Methods for Power Distribution System Simulation

    SciTech Connect (OSTI)

    Lu, Ning; Taylor, Zachary T.; Chassin, David P.; Guttromson, Ross T.; Studham, Scott S.

    2005-11-10

    The development of cost-effective high-performance parallel computing on multi-processor supercomputers makes it attractive to port excessively time-consuming simulation software from personal computers (PCs) to supercomputers. The power distribution system simulator (PDSS) takes a bottom-up approach and simulates load at appliance level, where detailed thermal models for appliances are used. This approach works well for a small power distribution system consisting of a few thousand appliances. When the number of appliances increases, the simulation uses up the PC memory and its run time increases to a point where the approach is no longer feasible to model a practical large power distribution system. This paper presents an effort made to port a PC-based power distribution system simulator (PDSS) to a 128-processor shared-memory supercomputer. The paper offers an overview of the parallel computing environment and a description of the modifications made to the PDSS model. The performance of the PDSS running on a standalone PC and on the supercomputer are compared. Future research directions of utilizing parallel computing in the power distribution system simulation are also addressed.

  16. Clock distribution system for digital computers

    DOE Patents [OSTI]

    Wyman, Robert H.; Loomis, Jr., Herschel H.

    1981-01-01

    Apparatus for eliminating, in each clock distribution amplifier of a clock distribution system, sequential pulse catch-up error due to one pulse "overtaking" a prior clock pulse. The apparatus includes timing means to produce a periodic electromagnetic signal with a fundamental frequency having a fundamental frequency component V′₀₁(t); an array of N signal characteristic detector means, with detector means No. 1 receiving the timing means signal and producing a change-of-state signal V₁(t) in response to receipt of a signal above a predetermined threshold; N substantially identical filter means, one filter means being operatively associated with each detector means, for receiving the change-of-state signal Vₙ(t) and producing a modified change-of-state signal V′ₙ(t) (n = 1, ..., N) having a fundamental frequency component that is substantially proportional to V′₀₁(t − θₙ(t)) with a cumulative phase shift θₙ(t) having a time derivative that may be made uniformly and arbitrarily small; and with the detector means n+1 (1 ≤ n

  17. Evaluation of distributed ANSYS for high performance computing of MEMS.

    Office of Scientific and Technical Information (OSTI)

    (Conference). No abstract prepared. Authors: Baker, Michael Sean; Yarberry, Victor R.; Wittwer, Jonathan W. Publication Date: 2007-04-01. OSTI Identifier: 908706. Report Number(s): SAND2007-2708C. TRN: US200722%%755. DOE Contract Number: AC04-94AL85000. Resource Type: Conference

  18. Computation of glint, glare, and solar irradiance distribution

    SciTech Connect (OSTI)

    Ho, Clifford Kuofei; Khalsa, Siri Sahib Singh

    2015-08-11

    Described herein are technologies pertaining to computing the solar irradiance distribution on a surface of a receiver in a concentrating solar power system or glint/glare emitted from a reflective entity. At least one camera captures images of the Sun and the entity of interest, wherein the images have pluralities of pixels having respective pluralities of intensity values. Based upon the intensity values of the pixels in the respective images, the solar irradiance distribution on the surface of the entity or glint/glare corresponding to the entity is computed.
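    As a rough illustration of the idea of turning pixel intensity values into an irradiance estimate, the hedged Python sketch below scales a receiver image by a calibration factor derived from a Sun image. The calibration constant, the image shapes, and the irradiance_map helper are assumptions for illustration; the patented method involves considerably more (optics, geometry, and camera response).

```python
# Hedged sketch: converting camera pixel intensities over a receiver region
# into a relative irradiance map. The calibration constant and image source
# are hypothetical; the patent's actual procedure is more involved.
import numpy as np

def irradiance_map(image, sun_image, sun_dni_w_m2=1000.0):
    """Scale receiver-image intensities by a factor derived from a Sun image.

    image, sun_image: 2D arrays of pixel intensity values.
    sun_dni_w_m2: assumed direct normal irradiance used for calibration.
    """
    # Calibration: total Sun-image intensity corresponds to the assumed DNI.
    flux_per_count = sun_dni_w_m2 / sun_image.sum()
    return image * flux_per_count  # relative flux units

receiver = np.random.default_rng(1).random((480, 640))  # placeholder image
sun = np.random.default_rng(2).random((480, 640))       # placeholder Sun image
flux = irradiance_map(receiver, sun)
print("peak relative flux:", flux.max())
```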

  19. First Experiences with LHC Grid Computing and Distributed Analysis

    SciTech Connect (OSTI)

    Fisk, Ian

    2010-12-01

    This presentation described the experiences of the LHC experiments using grid computing, with a focus on distributed analysis. After many years of development, preparation, exercises, and validation, the LHC (Large Hadron Collider) experiments are in operation. The computing infrastructure has been heavily utilized in the first 6 months of data collection. The general experience of exploiting the grid infrastructure for organized processing and preparation is described, as well as the successes employing the infrastructure for distributed analysis. At the end, the expected evolution and future plans are outlined.

  20. A directory service for configuring high-performance distributed computations

    SciTech Connect (OSTI)

    Fitzgerald, S.; Kesselman, C.; Foster, I.

    1997-08-01

    High-performance execution in distributed computing environments often requires careful selection and configuration not only of computers, networks, and other resources but also of the protocols and algorithms used by applications. Selection and configuration in turn require access to accurate, up-to-date information on the structure and state of available resources. Unfortunately, no standard mechanism exists for organizing or accessing such information. Consequently, different tools and applications adopt ad hoc mechanisms, or they compromise their portability and performance by using default configurations. We propose a Metacomputing Directory Service that provides efficient and scalable access to diverse, dynamic, and distributed information about resource structure and state. We define an extensible data model to represent required information and present a scalable, high-performance, distributed implementation. The data representation and application programming interface are adopted from the Lightweight Directory Access Protocol; the data model and implementation are new. We use the Globus distributed computing toolkit to illustrate how this directory service enables the development of more flexible and efficient distributed computing services and applications.
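    Since the abstract notes that the data representation and API were adopted from the Lightweight Directory Access Protocol, a hedged sketch of an LDAP-style lookup conveys the flavor of querying resource structure and state. The host, base DN, object class, and attribute names below are hypothetical and are not the actual MDS schema; the snippet uses the ldap3 Python library purely for illustration.

```python
# Hedged sketch: querying resource state from an LDAP-style directory service,
# in the spirit of the Metacomputing Directory Service described above.
# The host, base DN, object class, and attribute names are hypothetical.
from ldap3 import Server, Connection, ALL

server = Server("ldap://mds.example.org:389", get_info=ALL)
conn = Connection(server, auto_bind=True)  # anonymous bind for illustration

# Look up compute resources with at least 64 free CPUs (hypothetical attributes).
conn.search(
    search_base="o=grid,c=us",
    search_filter="(&(objectClass=computeResource)(freeCpus>=64))",
    attributes=["hostName", "freeCpus", "networkLatency"],
)
for entry in conn.entries:
    print(entry.hostName, entry.freeCpus, entry.networkLatency)
```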

  1. Institutional Computing Executive Group Review of Multi-programmatic & Institutional Computing, Fiscal Year 2005 and 2006

    SciTech Connect (OSTI)

    Langer, S; Rotman, D; Schwegler, E; Folta, P; Gee, R; White, D

    2006-12-18

    The Institutional Computing Executive Group (ICEG) review of FY05-06 Multiprogrammatic and Institutional Computing (M and IC) activities is presented in the attached report. In summary, we find that the M and IC staff does an outstanding job of acquiring and supporting a wide range of institutional computing resources to meet the programmatic and scientific goals of LLNL. The responsiveness and high quality of support given to users and the programs investing in M and IC reflect the dedication and skill of the M and IC staff. M and IC has successfully managed serial capacity, parallel capacity, and capability computing resources. Serial capacity computing supports a wide range of scientific projects which require access to a few high performance processors within a shared memory computer. Parallel capacity computing supports scientific projects that require a moderate number of processors (up to roughly 1000) on a parallel computer. Capability computing supports parallel jobs that push the limits of simulation science. M and IC has worked closely with Stockpile Stewardship, and together they have made LLNL a premier institution for computational and simulation science. Such a standing is vital to the continued success of laboratory science programs and to the recruitment and retention of top scientists. This report provides recommendations to build on M and IC's accomplishments and improve simulation capabilities at LLNL. We recommend that the institution fully fund (1) operation of the atlas cluster purchased in FY06 to support a few large projects; (2) operation of the thunder and zeus clusters to enable 'mid-range' parallel capacity simulations during normal operation and a limited number of large simulations during dedicated application time; (3) operation of the new yana cluster to support a wide range of serial capacity simulations; (4) improvements to the reliability and performance of the Lustre parallel file system; (5) support for the new GDO petabyte-class storage facility on the green network for use in data intensive external collaborations; and (6) continued support for visualization and other methods for analyzing large simulations. We also recommend that M and IC begin planning in FY07 for the next upgrade of its parallel clusters. LLNL investments in M and IC have resulted in a world-class simulation capability leading to innovative science. We thank the LLNL management for its continued support and thank the M and IC staff for its vision and dedicated efforts to make it all happen.

  2. Gaussian distributions, Jacobi group, and Siegel-Jacobi space

    SciTech Connect (OSTI)

    Molitor, Mathieu

    2014-12-15

    Let N be the space of Gaussian distribution functions over ℝ, regarded as a 2-dimensional statistical manifold parameterized by the mean μ and the deviation σ. In this paper, we show that the tangent bundle of N, endowed with its natural Kähler structure, is the Siegel-Jacobi space appearing in the context of Number Theory and Jacobi forms. Geometrical aspects of the Siegel-Jacobi space are discussed in detail (completeness, curvature, group of holomorphic isometries, space of Kähler functions, and relationship to the Jacobi group), and are related to the quantum formalism in its geometrical form, i.e., based on the Kähler structure of the complex projective space. This paper is a continuation of our previous work [M. Molitor, Remarks on the statistical origin of the geometrical formulation of quantum mechanics, Int. J. Geom. Methods Mod. Phys. 9(3), 1220001, 9 (2012); M. Molitor, Information geometry and the hydrodynamical formulation of quantum mechanics, e-print arXiv (2012); M. Molitor, Exponential families, Kähler geometry and quantum mechanics, J. Geom. Phys. 70, 54–80 (2013)], where we studied the quantum formalism from a geometric and information-theoretical point of view.

  3. GAiN: Distributed Array Computation with Python

    SciTech Connect (OSTI)

    Daily, Jeffrey A.

    2009-04-24

    Scientific computing makes use of very large, multidimensional numerical arrays - typically, gigabytes to terabytes in size - much larger than can fit on even the largest single compute node. Such arrays must be distributed across a "cluster" of nodes. Global Arrays is a cluster-based software system from Battelle Pacific Northwest National Laboratory that enables an efficient, portable, and parallel shared-memory programming interface to manipulate these arrays. Written in and for the C and FORTRAN programming languages, it takes advantage of high-performance cluster interconnections to allow any node in the cluster to access data on any other node very rapidly. The "numpy" module is the de facto standard for numerical calculation in the Python programming language, a language whose use is growing rapidly in the scientific and engineering communities. numpy provides a powerful N-dimensional array class as well as other scientific computing capabilities. However, like the majority of the core Python modules, numpy is inherently serial. Our system, GAiN (Global Arrays in NumPy), is a parallel extension to Python that accesses Global Arrays through numpy. This allows parallel processing and/or larger problem sizes to be harnessed almost transparently within new or existing numpy programs.
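    The core idea, partitioning an array too large for any single node across a cluster while still programming against a numpy-like interface, can be suggested with a hedged mpi4py/numpy sketch. This is not the GAiN or Global Arrays API; it simply shows the block distribution and global reduction that such libraries hide behind array syntax.

```python
# Hedged sketch (not the GAiN API): block-distributing a large 1D array across
# MPI ranks with mpi4py + numpy and computing a global sum, the kind of
# operation Global Arrays / GAiN expose behind a numpy-like interface.
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

n_global = 10_000_000                      # logical (global) array length
counts = [n_global // size + (1 if r < n_global % size else 0)
          for r in range(size)]            # near-equal block sizes
offset = sum(counts[:rank])

# Each rank materializes only its own block of the global array.
local = np.arange(offset, offset + counts[rank], dtype=np.float64)

local_sum = local.sum()
global_sum = comm.allreduce(local_sum, op=MPI.SUM)

if rank == 0:
    expected = (n_global - 1) * n_global / 2.0
    print(f"global sum = {global_sum:.0f} (expected {expected:.0f})")
```

    Run with, for example, mpiexec -n 4 python sketch.py; each rank allocates only its own block of the logical array.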

  4. Providing nearest neighbor point-to-point communications among compute nodes of an operational group in a global combining network of a parallel computer

    DOE Patents [OSTI]

    Archer, Charles J.; Faraj, Ahmad A.; Inglett, Todd A.; Ratterman, Joseph D.

    2012-10-23

    Methods, apparatus, and products are disclosed for providing nearest neighbor point-to-point communications among compute nodes of an operational group in a global combining network of a parallel computer, each compute node connected to each adjacent compute node in the global combining network through a link, that include: identifying each link in the global combining network for each compute node of the operational group; designating one of a plurality of point-to-point class routing identifiers for each link such that no compute node in the operational group is connected to two adjacent compute nodes in the operational group with links designated for the same class routing identifiers; and configuring each compute node of the operational group for point-to-point communications with each adjacent compute node in the global combining network through the link between that compute node and that adjacent compute node using that link's designated class routing identifier.
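    The designation rule in this abstract (no compute node may have two of its links carry the same class routing identifier) is an edge-coloring constraint, which a greedy pass over the links can satisfy when enough identifiers are available. The sketch below is a hedged illustration of that constraint only, with a hypothetical tree topology and identifier pool; it is not the patented method.

```python
# Hedged sketch: greedily designating class routing identifiers for links so
# that no node has two of its links sharing an identifier (an edge-coloring
# condition like the one described in the abstract). The tree and the
# identifier pool below are hypothetical.
def assign_class_ids(links, num_ids):
    used = {}          # node -> set of identifiers already on its links
    designation = {}   # link -> chosen identifier
    for a, b in links:
        taken = used.setdefault(a, set()) | used.setdefault(b, set())
        free = next((cid for cid in range(num_ids) if cid not in taken), None)
        if free is None:
            raise ValueError("not enough class routing identifiers")
        designation[(a, b)] = free
        used[a].add(free)
        used[b].add(free)
    return designation

# A small binary-tree-shaped combining network (hypothetical node labels).
tree_links = [("root", "n0"), ("root", "n1"),
              ("n0", "n00"), ("n0", "n01"),
              ("n1", "n10"), ("n1", "n11")]
for link, cid in assign_class_ids(tree_links, num_ids=4).items():
    print(link, "-> class id", cid)
```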

  5. Data-aware distributed scientific computing for big-data problems...

    Office of Scientific and Technical Information (OSTI)

    Technical Report: Data-aware distributed scientific computing for big-data problems in bio-surveillance Citation Details In-Document Search Title: Data-aware distributed scientific...

  6. Data-aware distributed scientific computing for big-data problems...

    Office of Scientific and Technical Information (OSTI)

    Technical Report: Data-aware distributed scientific computing for big-data problems in bio-surveillance Citation Details In-Document Search Title: Data-aware distributed scientific ...

  7. Bio-Derived Liquids to Hydrogen Distributed Reforming Working Group Background Paper

    Broader source: Energy.gov [DOE]

    Paper by Arlene Anderson and Tracy Carole presented at the Bio-Derived Liquids to Hydrogen Distributed Reforming Working Group, with a focus on key drivers, purpose, and scope.

  8. High-performance, distributed computing software libraries and services

    Energy Science and Technology Software Center (OSTI)

    2002-01-24

    The Globus toolkit provides basic Grid software infrastructure (i.e. middleware), to facilitate the development of applications which securely integrate geographically separated resources, including computers, storage systems, instruments, immersive environments, etc.

  9. High-Performance Computation of Distributed-Memory Parallel 3D Voronoi and

    Office of Scientific and Technical Information (OSTI)

    Delaunay Tessellation (Conference). Computing a Voronoi or Delaunay tessellation from a set of points is a core part of the analysis of many simulated and measured datasets: N-body simulations,

  10. Data-aware distributed scientific computing for big-data problems in

    Office of Scientific and Technical Information (OSTI)

    bio-surveillance (Technical Report). Authors: Bhattacharya, Tanmoy (Los Alamos National Laboratory). Publication Date: 2013-09-09. OSTI Identifier: 1092438. Report Number(s): LA-UR-13-27019. DOE Contract Number:

  11. Providing full point-to-point communications among compute nodes of an operational group in a global combining network of a parallel computer

    DOE Patents [OSTI]

    Archer, Charles J; Faraj, Ahmad A; Inglett, Todd A; Ratterman, Joseph D

    2013-04-16

    Methods, apparatus, and products are disclosed for providing full point-to-point communications among compute nodes of an operational group in a global combining network of a parallel computer, each compute node connected to each adjacent compute node in the global combining network through a link, that include: receiving a network packet in a compute node, the network packet specifying a destination compute node; selecting, in dependence upon the destination compute node, at least one of the links for the compute node along which to forward the network packet toward the destination compute node; and forwarding the network packet along the selected link to the adjacent compute node connected to the compute node through the selected link.
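    The forwarding step described here (pick the link that leads toward the packet's destination, then forward along it) amounts to a next-hop computation on the tree-shaped combining network. The hedged sketch below illustrates that step with a breadth-first search over a hypothetical topology; node names and the packet layout are assumptions.

```python
# Hedged sketch: choosing which link to forward a packet along in a
# tree-shaped combining network, per the selection/forwarding steps above.
# Topology, node names, and packet format are hypothetical.
from collections import deque

def next_hop(adjacency, source, destination):
    """Return the neighbor of `source` on the unique tree path to `destination`."""
    parent = {source: None}
    queue = deque([source])
    while queue:
        node = queue.popleft()
        if node == destination:
            break
        for neighbor in adjacency[node]:
            if neighbor not in parent:
                parent[neighbor] = node
                queue.append(neighbor)
    # Walk back from the destination until we reach a child of `source`.
    node = destination
    while parent[node] != source:
        node = parent[node]
    return node

adjacency = {"root": ["n0", "n1"], "n0": ["root", "n00", "n01"],
             "n1": ["root", "n10", "n11"], "n00": ["n0"], "n01": ["n0"],
             "n10": ["n1"], "n11": ["n1"]}
packet = {"destination": "n11", "payload": b"..."}
print("forward along link to", next_hop(adjacency, "n0", packet["destination"]))
```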

  12. Bio-Derived Liquids to Hydrogen Distributed Reforming Working Group Meeting- November 2007

    Broader source: Energy.gov [DOE]

    The Bio-Derived Liquids to Hydrogen Distributed Reforming Working Group participated in a Hydrogen Production Technical Team Research Review on November 6, 2007. The meeting provided the opportunity for researchers to share their experiences in converting bio-derived liquids to hydrogen with members of the Department of Energy Hydrogen Production Technical Team.

  13. Configuring compute nodes of a parallel computer in an operational group into a plurality of independent non-overlapping collective networks

    DOE Patents [OSTI]

    Archer, Charles J. (Rochester, MN); Inglett, Todd A. (Rochester, MN); Ratterman, Joseph D. (Rochester, MN); Smith, Brian E. (Rochester, MN)

    2010-03-02

    Methods, apparatus, and products are disclosed for configuring compute nodes of a parallel computer in an operational group into a plurality of independent non-overlapping collective networks, the compute nodes in the operational group connected together for data communications through a global combining network, that include: partitioning the compute nodes in the operational group into a plurality of non-overlapping subgroups; designating one compute node from each of the non-overlapping subgroups as a master node; and assigning, to the compute nodes in each of the non-overlapping subgroups, class routing instructions that organize the compute nodes in that non-overlapping subgroup as a collective network such that the master node is a physical root.
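    The sequence of steps in the abstract (partition the operational group into non-overlapping subgroups, designate a master per subgroup, and treat the master as the physical root of that subgroup's collective network) can be mimicked in a few lines. The sketch below is a hedged illustration with hypothetical node names and an arbitrary fixed subgroup size; the actual class routing instructions are not modeled.

```python
# Hedged sketch: partitioning an operational group into non-overlapping
# subgroups and designating one master (physical root) per subgroup, echoing
# the steps described above. Node names and the partitioning rule are hypothetical.
def partition(nodes, subgroup_size):
    subgroups = [nodes[i:i + subgroup_size]
                 for i in range(0, len(nodes), subgroup_size)]
    # Designate the first node of each subgroup as its master / physical root.
    return {subgroup[0]: subgroup for subgroup in subgroups}

nodes = [f"node{i:02d}" for i in range(8)]
for master, members in partition(nodes, subgroup_size=4).items():
    print(f"master {master} -> collective network {members}")
```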

  14. Computational study of ion distributions at the air/liquid methanol interface

    SciTech Connect (OSTI)

    Sun, Xiuquan; Wick, Collin D.; Dang, Liem X.

    2011-06-16

    Molecular dynamics simulations with polarizable potentials were performed to systematically investigate the distribution of NaCl, NaBr, NaI, and SrCl2 at the air/liquid methanol interface. The density profiles indicated that there is no substantial enhancement of anions at the interface for the NaX systems in contrast to what was observed at the air/aqueous interface. The surfactant-like shape of the larger, more polarizable halide anions is compensated by the surfactant nature of methanol itself. As a result, methanol hydroxy groups strongly interacted with the side of polarizable anions toward which their induced dipole points, and methanol methyl groups were more likely to be found near the positive pole of anion induced dipoles. Furthermore, salts were found to disrupt the surface structure of methanol, reducing the observed enhancement of methyl groups at the outer edge of the air/liquid methanol interface. With the addition of salts to methanol, the computed surface potentials increased, which is in contrast to what is observed in corresponding aqueous systems, where the surface potential decreases with the addition of salts. Both of these trends have been indirectly observed with experiments. This was found to be due to the propensity of anions for the air/water interface that is not present at the air/liquid methanol interface. This work was supported by the US Department of Energy Basic Energy Sciences' Chemical Sciences, Geosciences & Biosciences Division. Pacific Northwest National Laboratory is operated by Battelle for the US Department of Energy.

  15. HIGH-PERFORMANCE COMPUTATION OF DISTRIBUTED-MEMORY PARALLEL 3D VORONOI

    Office of Scientific and Technical Information (OSTI)

    AND DELAUNAY TESSELLATION. Tom Peterka, Argonne National Laboratory, 9700 S. Cass Ave., Argonne, IL 60439, USA; Dmitriy Morozov, Lawrence Berkeley National Laboratory, 1 Cyclotron Rd., Berkeley, CA 94720, USA; Carolyn Phillips, Argonne National Laboratory, 9700 S. Cass Ave., Argonne, IL 60439, USA. Abstract: Computing a Voronoi or Delaunay tessellation from a set of points is a core part of the analysis of many simulated and measured

  16. Probing the structure of complex solids using a distributed computing approach-Applications in zeolite science

    SciTech Connect (OSTI)

    French, Samuel A.; Coates, Rosie; Lewis, Dewi W.; Catlow, C. Richard A.

    2011-06-15

    We demonstrate the viability of distributed computing techniques employing idle desktop computers in investigating complex structural problems in solids. Through the use of a combined Monte Carlo and energy minimisation method, we show how a large parameter space can be effectively scanned. By controlling the generation and running of different configurations through a database engine, we are able to not only analyse the data 'on the fly' but also direct the running of jobs and the algorithms for generating further structures. As an exemplar case, we probe the distribution of Al and extra-framework cations in the structure of the zeolite Mordenite. We compare our computed unit cells with experiment and find that whilst there is excellent correlation between computed and experimentally derived unit cell volumes, cation positioning and short-range Al ordering (i.e. near neighbour environment), there remains some discrepancy in the distribution of Al throughout the framework. We also show that stability-structure correlations only become apparent once a sufficiently large sample is used. - Graphical Abstract: Aluminium distributions in zeolites are determined using e-science methods. Highlights: > Use of e-science methods to search configurational space. > Automated control of space searching. > Identify key structural features conveying stability. > Improved correlation of computed structures with experimental data.

  17. Acidity of the amidoxime functional group in aqueous solution. A combined experimental and computational study

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Mehio, Nada; Lashely, Mark A.; Nugent, Joseph W.; Tucker, Lyndsay; Correia, Bruna; Do-Thanh, Chi-Linh; Dai, Sheng; Hancock, Robert D.; Bryantsev, Vyacheslav S.

    2015-01-26

    Poly(acrylamidoxime) adsorbents are often invoked in discussions of mining uranium from seawater. It has been demonstrated repeatedly in the literature that the success of these materials is due to the amidoxime functional group. While the amidoxime-uranyl chelation mode has been established, a number of essential binding constants remain unclear. This is largely due to the wide range of conflicting pKa values that have been reported for the amidoxime functional group in the literature. To resolve this existing controversy we investigated the pKa values of the amidoxime functional group using a combination of experimental and computational methods. Experimentally, we used spectroscopic titrations to measure the pKa values of representative amidoximes, acetamidoxime and benzamidoxime. Computationally, we report on the performance of several protocols for predicting the pKa values of aqueous oxoacids. Calculations carried out at the MP2 or M06-2X levels of theory combined with solvent effects calculated using the SMD model provide the best overall performance with a mean absolute error of 0.33 pKa units and 0.35 pKa units, respectively, and a root mean square deviation of 0.46 pKa units and 0.45 pKa units, respectively. Finally, we employ our two best methods to predict the pKa values of promising, uncharacterized amidoxime ligands. Hence, our study provides a convenient means for screening suitable amidoxime monomers for future generations of poly(acrylamidoxime) adsorbents used to mine uranium from seawater.

  18. Acidity of the amidoxime functional group in aqueous solution. A combined experimental and computational study

    SciTech Connect (OSTI)

    Mehio, Nada; Lashely, Mark A.; Nugent, Joseph W.; Tucker, Lyndsay; Correia, Bruna; Do-Thanh, Chi-Linh; Dai, Sheng; Hancock, Robert D.; Bryantsev, Vyacheslav S.

    2015-01-26

    Poly(acrylamidoxime) adsorbents are often invoked in discussions of mining uranium from seawater. It has been demonstrated repeatedly in the literature that the success of these materials is due to the amidoxime functional group. While the amidoxime-uranyl chelation mode has been established, a number of essential binding constants remain unclear. This is largely due to the wide range of conflicting pKa values that have been reported for the amidoxime functional group in the literature. To resolve this existing controversy we investigated the pKa values of the amidoxime functional group using a combination of experimental and computational methods. Experimentally, we used spectroscopic titrations to measure the pKa values of representative amidoximes, acetamidoxime and benzamidoxime. Computationally, we report on the performance of several protocols for predicting the pKa values of aqueous oxoacids. Calculations carried out at the MP2 or M06-2X levels of theory combined with solvent effects calculated using the SMD model provide the best overall performance with a mean absolute error of 0.33 pKa units and 0.35 pKa units, respectively, and a root mean square deviation of 0.46 pKa units and 0.45 pKa units, respectively. Finally, we employ our two best methods to predict the pKa values of promising, uncharacterized amidoxime ligands. Hence, our study provides a convenient means for screening suitable amidoxime monomers for future generations of poly(acrylamidoxime) adsorbents used to mine uranium from seawater.

  19. Methods and apparatuses for information analysis on shared and distributed computing systems

    DOE Patents [OSTI]

    Bohn, Shawn J [Richland, WA]; Krishnan, Manoj Kumar [Richland, WA]; Cowley, Wendy E [Richland, WA]; Nieplocha, Jarek [Richland, WA]

    2011-02-22

    Apparatuses and computer-implemented methods for analyzing, on shared and distributed computing systems, information comprising one or more documents are disclosed according to some aspects. In one embodiment, information analysis can comprise distributing one or more distinct sets of documents among each of a plurality of processes, wherein each process performs operations on a distinct set of documents substantially in parallel with other processes. Operations by each process can further comprise computing term statistics for terms contained in each distinct set of documents, thereby generating a local set of term statistics for each distinct set of documents. Still further, operations by each process can comprise contributing the local sets of term statistics to a global set of term statistics, and participating in generating a major term set from an assigned portion of a global vocabulary.
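    The processing pattern described (distribute distinct document sets among processes, compute local term statistics in parallel, then contribute them to a global set from which major terms are drawn) maps naturally onto a parallel map-plus-merge. The hedged sketch below illustrates that pattern with Python's multiprocessing and a toy corpus; the threshold for a "major" term and the documents themselves are assumptions, not the patented method.

```python
# Hedged sketch: distributing distinct sets of documents among processes,
# computing local term statistics in parallel, and merging them into a global
# set, in the spirit of the method described above. The documents are toy data.
from collections import Counter
from multiprocessing import Pool

def local_term_statistics(documents):
    """Term counts for one process's distinct set of documents."""
    counts = Counter()
    for doc in documents:
        counts.update(doc.lower().split())
    return counts

if __name__ == "__main__":
    corpus = ["Distributed computing groups share resources",
              "Term statistics summarize document collections",
              "Each process handles a distinct set of documents",
              "Global statistics are merged from local statistics"]
    # Partition the corpus into distinct, non-overlapping document sets.
    num_procs = 2
    chunks = [corpus[i::num_procs] for i in range(num_procs)]

    with Pool(num_procs) as pool:
        local_sets = pool.map(local_term_statistics, chunks)

    global_stats = sum(local_sets, Counter())      # contribute to global set
    major_terms = [t for t, n in global_stats.items() if n >= 2]
    print("major terms:", major_terms)
```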

  20. The Essential Role of New Network Services for High Performance Distributed Computing

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Second International Conference on Parallel, Distributed, Grid and Cloud Computing for Engineering, 12-15 April 2011, Ajaccio, Corsica, France. In "Trends in Parallel, Distributed, Grid and Cloud Computing for Engineering," edited by P. Iványi and B.H.V. Topping, Civil-Comp Press. Network Services for High Performance Distributed Computing and Data Management. W. E. Johnston, C. Guok, J. Metzger, and B. Tierney, ESnet and Lawrence Berkeley National Laboratory, Berkeley, California, U.S.A.

  1. Computing Frontier: Distributed Computing

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    opportunistically available to other VOs. Thus we can see that the LHC experiments, the Open Science Grid, and the US LHC users all depend on each other for their...

  2. Assigning unique identification numbers to new user accounts and groups in a computing environment with multiple registries

    DOE Patents [OSTI]

    DeRobertis, Christopher V. (Hopewell Junction, NY); Lu, Yantian T. (Round Rock, TX)

    2010-02-23

    A method, system, and program storage device for creating a new user account or user group with a unique identification number in a computing environment having multiple user registries is provided. In response to receiving a command to create a new user account or user group, an operating system of a clustered computing environment automatically checks multiple registries configured for the operating system to determine whether a candidate identification number for the new user account or user group has been assigned already to one or more existing user accounts or groups, respectively. The operating system automatically assigns the candidate identification number to the new user account or user group created in a target user registry if the checking indicates that the candidate identification number has not been assigned already to any of the existing user accounts or user groups, respectively.
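    The check described above (only assign a candidate identification number if no configured registry already uses it) can be illustrated with a small hedged sketch. The registries, the starting number, and the find_unique_id helper below are hypothetical; a real system would consult its local, LDAP, or NIS registries through the operating system.

```python
# Hedged sketch of the ID-assignment check described above: before creating a
# new account, verify a candidate identification number against every
# configured registry and only assign it if no registry already uses it.
# The registries and the candidate-selection policy are hypothetical.
def find_unique_id(registries, start=1000, limit=65535):
    for candidate in range(start, limit):
        if all(candidate not in registry.values() for registry in registries):
            return candidate
    raise RuntimeError("no unused identification number available")

# Two hypothetical user registries mapping account name -> uid.
local_registry = {"alice": 1000, "bob": 1001}
ldap_registry = {"carol": 1002, "build": 1001}

registries = [local_registry, ldap_registry]
new_uid = find_unique_id(registries)
local_registry["dave"] = new_uid    # create the account in the target registry
print("assigned uid", new_uid, "to new account 'dave'")
```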

  3. System design and algorithmic development for computational steering in distributed environments

    SciTech Connect (OSTI)

    Wu, Qishi; Zhu, Mengxia; Gu, Yi; Rao, Nageswara S

    2010-03-01

    Supporting visualization pipelines over wide-area networks is critical to enabling large-scale scientific applications that require visual feedback to interactively steer online computations. We propose a remote computational steering system that employs analytical models to estimate the cost of computing and communication components and optimizes the overall system performance in distributed environments with heterogeneous resources. We formulate and categorize the visualization pipeline configuration problems for maximum frame rate into three classes according to the constraints on node reuse or resource sharing, namely no, contiguous, and arbitrary reuse. We prove all three problems to be NP-complete and present heuristic approaches based on a dynamic programming strategy. The superior performance of the proposed solution is demonstrated with extensive simulation results in comparison with existing algorithms and is further evidenced by experimental results collected on a prototype implementation deployed over the Internet.
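    To make the "contiguous reuse" variant concrete, the hedged sketch below uses a small dynamic program that maps a linear pipeline of modules onto a chain of nodes so that the bottleneck stage time (compute plus outgoing transfer) is minimized, which maximizes the frame rate. The costs, speeds, and bandwidths are hypothetical, and this is an illustration of the problem structure rather than the authors' algorithm.

```python
# Hedged sketch: a dynamic-programming heuristic for a contiguous mapping of a
# linear visualization pipeline onto a chain of nodes, minimizing the
# bottleneck stage time (compute + outgoing transfer). All numbers are toy values.
from functools import lru_cache

work = [4.0, 8.0, 2.0, 6.0]        # per-frame compute cost of each module (Gop)
data = [1.0, 0.5, 2.0, 0.2]        # data sent downstream by each module (GB/frame)
speed = [2.0, 4.0, 3.0]            # node speeds (Gop/s) along the network path
bandwidth = [1.0, 0.8, 5.0]        # bandwidth out of each node (GB/s)

def stage_time(i, j, node):
    """Time for `node` to run modules i..j and ship module j's output."""
    return sum(work[i:j + 1]) / speed[node] + data[j] / bandwidth[node]

@lru_cache(maxsize=None)
def best(last_module, node):
    """Minimum bottleneck time for modules 0..last_module on nodes 0..node."""
    if node == 0:
        return stage_time(0, last_module, 0)
    candidates = [best(last_module, node - 1)]       # leave this node unused
    for split in range(last_module):                 # modules split+1.. go on `node`
        candidates.append(max(best(split, node - 1),
                              stage_time(split + 1, last_module, node)))
    return min(candidates)

bottleneck = best(len(work) - 1, len(speed) - 1)
print(f"best bottleneck time: {bottleneck:.2f} s -> {1.0 / bottleneck:.2f} frames/s")
```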

  4. Models the Electromagnetic Response of a 3D Distribution using MP COMPUTERS

    Energy Science and Technology Software Center (OSTI)

    1999-05-01

    EM3D models the electromagnetic response of a 3D distribution of conductivity, dielectric permittivity and magnetic permeability within the earth for geophysical applications using massively parallel computers. The simulations are carried out in the frequency domain for either electric or magnetic sources for either scattered or total field formulations of Maxwell's equations. The solution is based on the method of finite differences and includes absorbing boundary conditions so that responses can be modeled up into the radar range where wave propagation is dominant. Recent upgrades in the software include the incorporation of finite-size sources, in addition to dipolar source fields, and a low induction number preconditioner that can significantly reduce computational run times. A graphical user interface (GUI) is bundled with the software so that complicated 3D models can be easily constructed and simulated with the software. The GUI also allows for plotting of the output.

  5. Bio-Derived Liquids to Hydrogen Distributed Reforming Working Group (BILIWG) Kick-Off Meeting Proceedings Hilton Garden Inn-BWI,Baltimore, MD October 24, 2006

    Broader source: Energy.gov [DOE]

    Proceedings from the October 24, 2006 Bio-Derived Liquids to Hydrogen Distributed Reforming Working Group Kick-Off Meeting.

  6. computers

    National Nuclear Security Administration (NNSA)

    Each successive generation of computing system has provided greater computing power and energy efficiency.

    CTS-1 clusters will support NNSA's Life Extension Program and...

  7. Efficient implementation of a multidimensional fast fourier transform on a distributed-memory parallel multi-node computer

    DOE Patents [OSTI]

    Bhanot, Gyan V.; Chen, Dong; Gara, Alan G.; Giampapa, Mark E.; Heidelberger, Philip; Steinmacher-Burow, Burkhard D.; Vranas, Pavlos M.

    2008-01-01

    The present invention is directed to a method, system and program storage device for efficiently implementing a multidimensional Fast Fourier Transform (FFT) of a multidimensional array comprising a plurality of elements initially distributed in a multi-node computer system comprising a plurality of nodes in communication over a network, comprising: distributing the plurality of elements of the array in a first dimension across the plurality of nodes of the computer system over the network to facilitate a first one-dimensional FFT; performing the first one-dimensional FFT on the elements of the array distributed at each node in the first dimension; re-distributing the one-dimensional FFT-transformed elements at each node in a second dimension via "all-to-all" distribution in random order across other nodes of the computer system over the network; and performing a second one-dimensional FFT on elements of the array re-distributed at each node in the second dimension, wherein the random order facilitates efficient utilization of the network thereby efficiently implementing the multidimensional FFT. The "all-to-all" re-distribution of array elements is further efficiently implemented in applications other than the multidimensional FFT on the distributed-memory parallel supercomputer.
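    The transform itself (1D FFTs along the locally complete dimension, an all-to-all redistribution, then 1D FFTs along the other dimension) can be sketched with mpi4py and numpy. The sketch below is a hedged illustration of that transpose-based 2D FFT, not the patented implementation; it omits the random-order optimization, assumes the side length divides evenly by the number of ranks, and checks the result against numpy's serial FFT.

```python
# Hedged sketch (mpi4py + numpy), not the patented implementation: a 2D FFT of a
# row-distributed array via the transpose method, i.e. 1D FFTs along the locally
# complete dimension, an all-to-all redistribution, then 1D FFTs along the other
# dimension. Run with e.g. `mpiexec -n 4 python fft2d_sketch.py`.
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

N = 8 * size                      # global array is N x N
n_local = N // size               # rows held by each rank

# Every rank builds the same global array from a fixed seed and keeps its rows.
full = np.random.default_rng(42).random((N, N))
local = full[rank * n_local:(rank + 1) * n_local, :].astype(np.complex128)

# 1) 1D FFT along the dimension that is complete on each rank (the rows).
local = np.fft.fft(local, axis=1)

# 2) All-to-all redistribution: rank j receives everyone's j-th column block.
send = np.ascontiguousarray(local.reshape(n_local, size, n_local).transpose(1, 0, 2))
recv = np.empty_like(send)
comm.Alltoall(send, recv)
# Reassemble so each rank now holds n_local complete columns (stored as rows).
transposed = recv.transpose(2, 0, 1).reshape(n_local, N)

# 3) 1D FFT along the other original dimension.
transposed = np.fft.fft(transposed, axis=1)

# Check against numpy's serial 2D FFT (the result is stored transposed).
expected = np.fft.fft2(full).T[rank * n_local:(rank + 1) * n_local, :]
assert np.allclose(transposed, expected)
if rank == 0:
    print("distributed 2D FFT matches np.fft.fft2")
```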

  8. Efficient implementation of multidimensional fast fourier transform on a distributed-memory parallel multi-node computer

    DOE Patents [OSTI]

    Bhanot, Gyan V.; Chen, Dong; Gara, Alan G.; Giampapa, Mark E.; Heidelberger, Philip; Steinmacher-Burow, Burkhard D.; Vranas, Pavlos M.

    2012-01-10

    The present invention is directed to a method, system and program storage device for efficiently implementing a multidimensional Fast Fourier Transform (FFT) of a multidimensional array comprising a plurality of elements initially distributed in a multi-node computer system comprising a plurality of nodes in communication over a network, comprising: distributing the plurality of elements of the array in a first dimension across the plurality of nodes of the computer system over the network to facilitate a first one-dimensional FFT; performing the first one-dimensional FFT on the elements of the array distributed at each node in the first dimension; re-distributing the one-dimensional FFT-transformed elements at each node in a second dimension via "all-to-all" distribution in random order across other nodes of the computer system over the network; and performing a second one-dimensional FFT on elements of the array re-distributed at each node in the second dimension, wherein the random order facilitates efficient utilization of the network thereby efficiently implementing the multidimensional FFT. The "all-to-all" re-distribution of array elements is further efficiently implemented in applications other than the multidimensional FFT on the distributed-memory parallel supercomputer.

  9. Technologies and tools for high-performance distributed computing. Final report

    SciTech Connect (OSTI)

    Karonis, Nicholas T.

    2000-05-01

    In this project we studied the practical use of the MPI message-passing interface in advanced distributed computing environments. We built on the existing software infrastructure provided by the Globus Toolkit™, the MPICH portable implementation of MPI, and the MPICH-G integration of MPICH with Globus. As a result of this project we have replaced MPICH-G with its successor MPICH-G2, which is also an integration of MPICH with Globus. MPICH-G2 delivers significant improvements in message passing performance when compared to its predecessor MPICH-G and was based on superior software design principles, resulting in a software base in which it was much easier to make the functional extensions and improvements we did. Using Globus services we replaced the default implementation of MPI's collective operations in MPICH-G2 with more efficient multilevel topology-aware collective operations which, in turn, led to the development of a new timing methodology for broadcasts [8]. MPICH-G2 was extended to include client/server functionality from the MPI-2 standard [23] to facilitate remote visualization applications and, through the use of MPI idioms, MPICH-G2 provided application-level control of quality-of-service parameters as well as application-level discovery of underlying Grid-topology information. Finally, MPICH-G2 was successfully used in a number of applications including an award-winning record-setting computation in numerical relativity. In the sections that follow we describe in detail the accomplishments of this project, we present experimental results quantifying the performance improvements, and conclude with a discussion of our applications experiences. This project resulted in a significant increase in the utility of MPICH-G2.

  10. ITP Industrial Distributed Energy: 5th Annual CHP RoadmapWorkshop Breakout Group Results, November 2004

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    5th Annual CHP Roadmap Workshop, September 20-21, 2004. Breakout Group Results, November 2004. CHP Technologies Summary: Since 1998, many improvements have been made in the efficiency of CHP technologies and the development of packaged/integrated combined heat and power systems. Integration of CHP products and systems with renewables, biofuels, and a variety of prime movers has improved the market substantially. The need to increase emphasis on "bottoming-cycle" systems remains, as well

  11. computers

    National Nuclear Security Administration (NNSA)


    Retired computers used for cybersecurity research at Sandia National...

  12. Computer

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    I. INTRODUCTION This paper presents several computational tools required for processing images of a heavy ion beam and estimating the magnetic field within a plasma. The...

  13. Parallel, distributed and GPU computing technologies in single-particle electron microscopy

    SciTech Connect (OSTI)

    Schmeisser, Martin; Heisen, Burkhard C.; Luettich, Mario; Busche, Boris; Hauer, Florian; Koske, Tobias; Knauber, Karl-Heinz; Stark, Holger

    2009-07-01

    An introduction to the current paradigm shift towards concurrency in software. Most known methods for the determination of the structure of macromolecular complexes are limited or at least restricted at some point by their computational demands. Recent developments in information technology such as multicore, parallel and GPU processing can be used to overcome these limitations. In particular, graphics processing units (GPUs), which were originally developed for rendering real-time effects in computer games, are now ubiquitous and provide unprecedented computational power for scientific applications. Each parallel-processing paradigm alone can improve overall performance; the increased computational performance obtained by combining all paradigms, unleashing the full power of today's technology, makes certain applications feasible that were previously virtually impossible. In this article, state-of-the-art paradigms are introduced, the tools and infrastructure needed to apply these paradigms are presented and a state-of-the-art infrastructure and solution strategy for moving scientific applications to the next generation of computer hardware is outlined.

  14. Computing

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Office of Advanced Scientific Computing Research in the Department of Energy Office of Science under contract number DE-AC02-05CH11231. Application and System Memory Use, Configuration, and Problems on Bassi. Richard Gerber, Lawrence Berkeley National Laboratory, NERSC User Services. ScicomP 13, July 17, 2007, Garching bei München, Germany. Overview: About Bassi; Memory on Bassi; Large Page Memory (It's Great!); System Configuration; Large Page

  15. Computations

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)


  16. Proceedings of the sixth Berkeley workshop on distributed data management and computer networks

    SciTech Connect (OSTI)

    Various Authors

    1982-01-01

    A distributed data base management system allows data to be stored at multiple locations and to be accessed as a single unified data base. In this workshop, seventeen papers were presented which have been prepared separately for the energy data base. These items deal with data transfer, protocols and management. (GHT)

  17. System and method for secure group transactions

    DOE Patents [OSTI]

    Goldsmith, Steven Y. (Rochester, MN)

    2006-04-25

    A method and a secure system, processing on one or more computers, provides a way to control a group transaction. The invention uses group consensus access control and multiple distributed secure agents in a network environment. Each secure agent can organize with the other secure agents to form a secure distributed agent collective.
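
    A minimal sketch of the group-consensus idea described in this abstract, with invented class and function names (the patent's actual agent protocol is not reproduced here): a transaction is authorized only if every secure agent in the collective approves it.

```python
# Hypothetical sketch of group-consensus access control: unanimous approval
# by all secure agents is required before a group transaction proceeds.
class SecureAgent:
    def __init__(self, name, policy):
        self.name = name
        self.policy = policy              # callable: transaction -> bool

    def approve(self, transaction):
        return self.policy(transaction)

def group_consensus(agents, transaction):
    return all(agent.approve(transaction) for agent in agents)

agents = [SecureAgent("a1", lambda t: t["amount"] <= 1000),
          SecureAgent("a2", lambda t: t["signed"])]
print(group_consensus(agents, {"amount": 500, "signed": True}))   # True
print(group_consensus(agents, {"amount": 5000, "signed": True}))  # False
```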

  18. Parallel paving: An algorithm for generating distributed, adaptive, all-quadrilateral meshes on parallel computers

    SciTech Connect (OSTI)

    Lober, R.R.; Tautges, T.J.; Vaughan, C.T.

    1997-03-01

    Paving is an automated mesh generation algorithm which produces all-quadrilateral elements. It can additionally generate these elements in varying sizes such that the resulting mesh adapts to a function distribution, such as an error function. While powerful, conventional paving is a very serial algorithm in its operation. Parallel paving is the extension of serial paving into parallel environments to perform the same meshing functions as conventional paving only on distributed, discretized models. This extension allows large, adaptive, parallel finite element simulations to take advantage of paving's meshing capabilities for h-remap remeshing. A significantly modified version of the CUBIT mesh generation code has been developed to host the parallel paving algorithm and demonstrate its capabilities on both two dimensional and three dimensional surface geometries and compare the resulting parallel produced meshes to conventionally paved meshes for mesh quality and algorithm performance. Sandia's "tiling" dynamic load balancing code has also been extended to work with the paving algorithm to retain parallel efficiency as subdomains undergo iterative mesh refinement.

  19. NERSC Enhances PDSF, Genepool Computing Capabilities

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Enhances PDSF, Genepool Computing Capabilities NERSC Enhances PDSF, Genepool Computing Capabilities Linux cluster expansion speeds data access and analysis January 3, 2014 Christmas came early for users of the Parallel Distributed Systems Facility (PDSF) and Genepool systems at the Department of Energy's National Energy Research Scientific Computing Center (NERSC). Throughout November members of NERSC's Computational Systems Group were busy expanding the Linux computing resources that support PDSF's

  20. Distributed computing strategies for processing of FT-ICR MS imaging datasets for continuous mode data visualization

    SciTech Connect (OSTI)

    Smith, Donald F.; Schulz, Carl; Konijnenburg, Marco; Kilic, Mehmet; Heeren, Ronald M.

    2015-03-01

    High-resolution Fourier transform ion cyclotron resonance (FT-ICR) mass spectrometry imaging enables the spatial mapping and identification of biomolecules from complex surfaces. The need for long time-domain transients, and thus large raw file sizes, results in a large amount of raw data (big data) that must be processed efficiently and rapidly. This can be compounded by large-area imaging and/or high spatial resolution imaging. For FT-ICR, data processing and data reduction must not compromise the high mass resolution afforded by the mass spectrometer. The continuous mode Mosaic Datacube approach allows high mass resolution visualization (0.001 Da) of mass spectrometry imaging data, but requires additional processing as compared to feature-based processing. We describe the use of distributed computing for processing of FT-ICR MS imaging datasets with generation of continuous mode Mosaic Datacubes for high mass resolution visualization. An eight-fold improvement in processing time is demonstrated using a Dutch nationally available cloud service.
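
    The sketch below illustrates, in a hedged way, the kind of embarrassingly parallel work such a pipeline scales out: transforming many time-domain transients into magnitude spectra across worker processes. It is not the authors' Mosaic Datacube code; the array sizes and worker counts are arbitrary, and NumPy is assumed to be available.

```python
# Illustrative only: per-pixel transient-to-spectrum transforms in parallel.
from concurrent.futures import ProcessPoolExecutor
import numpy as np

def transient_to_spectrum(transient):
    # FFT magnitude of one time-domain transient (one imaging pixel).
    return np.abs(np.fft.rfft(transient))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    transients = [rng.standard_normal(4096) for _ in range(256)]
    with ProcessPoolExecutor(max_workers=8) as ex:
        spectra = list(ex.map(transient_to_spectrum, transients, chunksize=16))
    print(len(spectra), "spectra computed")
```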

    1. Metal distributions out to 0.5 r {sub 180} in the intracluster medium of four galaxy groups observed with Suzaku

      SciTech Connect (OSTI)

      Sasaki, Toru; Matsushita, Kyoko; Sato, Kosuke E-mail: matusita@rs.kagu.tus.ac.jp

      2014-01-20

      We studied the distributions of metal abundances and metal-mass-to-light ratios in the intracluster medium (ICM) of four galaxy groups, MKW 4, HCG 62, the NGC 1550 group, and the NGC 5044 group, out to ~0.5 r {sub 180} observed with Suzaku. The iron abundance decreases with radius and is about 0.2-0.4 solar beyond 0.1 r {sub 180}. At a given radius in units of r {sub 180}, the iron abundance in the ICM of the four galaxy groups was consistent with or smaller than those of clusters of galaxies. The Mg/Fe and Si/Fe ratios in the ICM are nearly constant at the solar ratio out to 0.5 r {sub 180}. We also studied systematic uncertainties in the derived metal abundances, comparing the results from two versions of atomic data for astrophysicists (ATOMDB) and single- and two-temperature model fits. Since the metals have been synthesized in galaxies, we collected K-band luminosities of galaxies from the Two Micron All Sky Survey catalog and calculated the integrated iron-mass-to-light-ratios (IMLR), or the ratios of the iron mass in the ICM to light from stars in galaxies. The groups with smaller gas-mass-to-light ratios have smaller IMLR values and the IMLR is inversely correlated with the entropy excess. Based on these abundance features, we discussed the past history of metal enrichment processes in groups of galaxies.

    2. Computing Videos

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Computing Videos Computing

    3. Women's Employee Resource Group

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Group Women's Employee Resource Group The Women's Employee Resource Group encourages women's contributions, professional development opportunities, and shared support across the Laboratory. Contact Us Office of Diversity and Strategic Staffing (505) 667-2602 Email Computational scientist Hai Ah Nam, a member of the Women's Employee Resource Group, works on the Laboratory's new Trinity supercomputing system.

    4. Final report and documentation for the security enabled programmable switch for protection of distributed internetworked computers LDRD.

      SciTech Connect (OSTI)

      Van Randwyk, Jamie A.; Robertson, Perry J.; Durgin, Nancy Ann; Toole, Timothy J.; Kucera, Brent D.; Campbell, Philip LaRoche; Pierson, Lyndon George

      2010-02-01

      An increasing number of corporate security policies make it desirable to push security closer to the desktop. It is not practical or feasible to place security and monitoring software on all computing devices (e.g. printers, personal digital assistants, copy machines, legacy hardware). We have begun to prototype a hardware and software architecture that will enforce security policies by pushing security functions closer to the end user, whether in the office or home, without interfering with users' desktop environments. We are developing a specialized programmable Ethernet network switch to achieve this. Embodied in this device is the ability to detect and mitigate network attacks that would otherwise disable or compromise the end user's computing nodes. We call this device a 'Secure Programmable Switch' (SPS). The SPS is designed with the ability to be securely reprogrammed in real time to counter rapidly evolving threats such as fast moving worms, etc. This ability to remotely update the functionality of the SPS protection device is cryptographically protected from subversion. With this concept, the user cannot turn off or fail to update virus scanning and personal firewall filtering in the SPS device as he/she could if implemented on the end host. The SPS concept also provides protection to simple/dumb devices such as printers, scanners, legacy hardware, etc. This report also describes the development of a cryptographically protected processor and its internal architecture in which the SPS device is implemented. This processor executes code correctly even if an adversary holds the processor. The processor guarantees both the integrity and the confidentiality of the code: the adversary cannot determine the sequence of instructions, nor can the adversary change the instruction sequence in a goal-oriented way.

    5. Distributed computing for signal processing: modeling of asynchronous parallel computation. Appendix C. Fault-tolerant interconnection networks and image-processing applications for the PASM parallel processing systems. Final report

      SciTech Connect (OSTI)

      Adams, G.B.

      1984-12-01

      The demand for very-high-speed data processing coupled with falling hardware costs has made large-scale parallel and distributed computer systems both desirable and feasible. Two modes of parallel processing are single-instruction stream-multiple data stream (SIMD) and multiple instruction stream - multiple data stream (MIMD). PASM, a partitionable SIMD/MIMD system, is a reconfigurable multimicroprocessor system being designed for image processing and pattern recognition. An important component of these systems is the interconnection network, the mechanism for communication among the computation nodes and memories. Assuring high reliability for such complex systems is a significant task. Thus, a crucial practical aspect of an interconnection network is fault tolerance. In answer to this need, the Extra Stage Cube (ESC), a fault-tolerant, multistage cube-type interconnection network, is defined. The fault tolerance of the ESC is explored for both single and multiple faults, routing tags are defined, and consideration is given to permuting data and partitioning the ESC in the presence of faults. The ESC is compared with other fault-tolerant multistage networks. Finally, reliability of the ESC and an enhanced version of it are investigated.
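
      For orientation, the sketch below shows XOR routing tags in a plain (fault-free) cube-type multistage network, the structure the ESC extends; the extra stage and fault-bypass logic of the ESC are deliberately omitted, and the function names are invented.

```python
# Simplified cube-network routing with XOR tags; no fault tolerance shown.
def routing_tag(source, dest):
    return source ^ dest                 # bit i tells stage i: swap or straight

def route(source, dest, n_stages):
    tag = routing_tag(source, dest)
    node, path = source, [source]
    for stage in range(n_stages):
        if (tag >> stage) & 1:           # tag bit set: cross the cube-i link
            node ^= (1 << stage)
        path.append(node)
    return path

print(route(source=3, dest=6, n_stages=3))   # -> [3, 2, 2, 6]
```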

    6. BOC Group | Open Energy Information

      Open Energy Info (EERE)

      Group Jump to: navigation, search Name: BOC Group Place: United Kingdom Zip: GU20 6HJ Sector: Services Product: UK-based industrial gases, vacuum technologies and distribution...

    7. Development of an Extensible Computational Framework for Centralized Storage and Distributed Curation and Analysis of Genomic Data Genome-scale Metabolic Models

      SciTech Connect (OSTI)

      Stevens, Rick

      2010-08-01

      The DOE funded KBase project of the Stevens group at the University of Chicago was focused on four high-level goals: (i) improve extensibility, accessibility, and scalability of the SEED framework for genome annotation, curation, and analysis; (ii) extend the SEED infrastructure to support transcription regulatory network reconstructions (2.1), metabolic model reconstruction and analysis (2.2), assertions linked to data (2.3), eukaryotic annotation (2.4), and growth phenotype prediction (2.5); (iii) develop a web-API for programmatic remote access to SEED data and services; and (iv) application of all tools to bioenergy-related genomes and organisms. In response to these goals, we enhanced and improved the ModelSEED resource within the SEED to enable new modeling analyses, including improved model reconstruction and phenotype simulation. We also constructed a new website and web-API for the ModelSEED. Further, we constructed a comprehensive web-API for the SEED as a whole. We also made significant strides in building infrastructure in the SEED to support the reconstruction of transcriptional regulatory networks by developing a pipeline to identify sets of consistently expressed genes based on gene expression data. We applied this pipeline to 29 organisms, computing regulons which were subsequently stored in the SEED database and made available on the SEED website (http://pubseed.theseed.org). We developed a new pipeline and database for the use of kmers, or short 8-residue oligomer sequences, to annotate genomes at high speed. Finally, we developed the PlantSEED, or a new pipeline for annotating primary metabolism in plant genomes. All of the work performed within this project formed the early building blocks for the current DOE Knowledgebase system, and the kmer annotation pipeline, plant annotation pipeline, and modeling tools are all still in use in KBase today.
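
      The kmer-based annotation idea mentioned above can be illustrated with a short, hedged sketch (the signature table, sequence, and function names are invented, not actual SEED data): a protein is assigned the functional role supported by the most 8-residue kmer hits.

```python
# Toy kmer annotation: look up 8-residue oligomers in a signature table.
def kmers(sequence, k=8):
    return (sequence[i:i + k] for i in range(len(sequence) - k + 1))

def annotate(sequence, signature_kmers):
    hits = {}
    for kmer in kmers(sequence):
        role = signature_kmers.get(kmer)
        if role:
            hits[role] = hits.get(role, 0) + 1
    return max(hits, key=hits.get) if hits else None   # best-supported role

signatures = {"MKTAYIAK": "role A (invented)", "GVVDSEDL": "role B (invented)"}
print(annotate("MKTAYIAKQRQISFVKSHFSRQLEERLG", signatures))   # role A (invented)
```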

    8. HASQARD Focus Group

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      been distributed to the Focus Group prior to the meeting. The comments that required editorial changes to the document were made in the working electronic version. b. At the June...

    9. HASQARD Focus Group

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      change. This distribution was to allow the Focus Group time to review the proposed language and be prepared for the matter to come to a vote at the next meeting of the Focus...

    10. Unix File Groups at NERSC

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      disk and tape. At NERSC, groups are also used to control access to certain computational resources (e.g., batch queues, testbed systems, licensed software). Overview of Unix...

    11. Important role of the non-uniform Fe distribution for the ferromagnetism in group-IV-based ferromagnetic semiconductor GeFe

      SciTech Connect (OSTI)

      Wakabayashi, Yuki K.; Ohya, Shinobu; Ban, Yoshisuke; Tanaka, Masaaki [Department of Electrical Engineering and Information Systems, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8656 (Japan)

      2014-11-07

      We investigate the growth-temperature dependence of the properties of the group-IV-based ferromagnetic semiconductor Ge{sub 1-x}Fe{sub x} films (x = 6.5% and 10.5%), and reveal the correlation of the magnetic properties with the lattice constant, Curie temperature (T{sub C}), non-uniformity of Fe atoms, stacking-fault defects, and Fe-atom locations. While T{sub C} strongly depends on the growth temperature, we find a universal relationship between T{sub C} and the lattice constant, which does not depend on the Fe content x. By using the spatially resolved transmission-electron diffractions combined with the energy-dispersive X-ray spectroscopy, we find that the density of the stacking-fault defects and the non-uniformity of the Fe concentration are correlated with T{sub C}. Meanwhile, by using the channeling Rutherford backscattering and particle-induced X-ray emission measurements, we clarify that about 15% of the Fe atoms exist on the tetrahedral interstitial sites in the Ge{sub 0.935}Fe{sub 0.065} lattice and that the substitutional Fe concentration is not correlated with T{sub C}. Considering these results, we conclude that the non-uniformity of the Fe concentration plays an important role in determining the ferromagnetic properties of GeFe.

    12. Group X

      SciTech Connect (OSTI)

      Fields, Susannah

      2007-08-16

      This project is currently under contract for research through the Department of Homeland Security until 2011. The group I was responsible for studying has to remain confidential so as not to affect the current project. All dates, reference links and authors, and other distinguishing characteristics of the original group have been removed from this report. All references to the name of this group or the individual splinter groups have been changed to 'Group X'. I have been collecting texts from a variety of sources intended for the use of recruiting and radicalizing members for Group X splinter groups for the purpose of researching the motivation and intent of leaders of those groups and their influence over the likelihood of group radicalization. This work included visiting many Group X websites to find information on splinter group leaders and finding their statements to new and old members. This proved difficult because the splinter groups of Group X are united in beliefs, but differ in public opinion. They are eager to tear each other down, prove their superiority, and yet remain anonymous. After a few weeks of intense searching, a list of eight recruiting texts and eight radicalizing texts from a variety of Group X leaders was compiled.

    13. Jay Srinivasan! Group Lead, Computational Systems!

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Using SLURM (tested extensively over the last year) - SLURM provides all the same functionality as Torque/Moab - with a few differences in implementation - SLURM i...

    14. NERSC seeks Computational Systems Group Lead

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Linux clusters and large-scale proprietary technology. Serve as a consultant to senior management in long-range planning concerning new or projected areas of high performance...

    15. Computation & Simulation > Theory & Computation > Research >...

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      it. In This Section Computation & Simulation Extensive combinatorial results and ongoing basic...

    16. Galaxy groups

      SciTech Connect (OSTI)

      Brent Tully, R.

      2015-02-01

      Galaxy groups can be characterized by the radius of decoupling from cosmic expansion, the radius of the caustic of second turnaround, and the velocity dispersion of galaxies within this latter radius. These parameters can be a challenge to measure, especially for small groups with few members. In this study, results are gathered pertaining to particularly well-studied groups over four decades in group mass. Scaling relations anticipated from theory are demonstrated and coefficients of the relationships are specified. There is an update of the relationship between light and mass for groups, confirming that groups with mass of a few times 10{sup 12} M{sub ☉} are the most lit up while groups with more and less mass are darker. It is demonstrated that there is an interesting one-to-one correlation between the number of dwarf satellites in a group and the group mass. There is the suggestion that small variations in the slope of the luminosity function in groups are caused by the degree of depletion of intermediate luminosity systems rather than variations in the number per unit mass of dwarfs. Finally, returning to the characteristic radii of groups, the ratio of first to second turnaround depends on the dark matter and dark energy content of the universe and a crude estimate can be made from the current observations of Ω{sub matter} ~ 0.15 in a flat topology, with a 68% probability of being less than 0.44.

    17. Specific Group Hardware

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Specific Group Hardware Specific Group Hardware ALICE palicevo1 The Virtual Organization (VO) server. Serves as gatekeeper for ALICE jobs. Its duties include getting assignments from the ALICE file catalog (at CERN), submitting jobs to pdsfgrid (via condor) which submits jobs to the compute nodes, monitoring the cluster work load, and uploading job information to the ALICE file catalog. It is monitored with MonALISA (the monitoring page is here). It's made up of 2 Intel Xeon E5520 processors each with

    18. Link failure detection in a parallel computer

      DOE Patents [OSTI]

      Archer, Charles J. (Rochester, MN); Blocksome, Michael A. (Rochester, MN); Megerian, Mark G. (Rochester, MN); Smith, Brian E. (Rochester, MN)

      2010-11-09

      Methods, apparatus, and products are disclosed for link failure detection in a parallel computer including compute nodes connected in a rectangular mesh network, each pair of adjacent compute nodes in the rectangular mesh network connected together using a pair of links, that includes: assigning each compute node to either a first group or a second group such that adjacent compute nodes in the rectangular mesh network are assigned to different groups; sending, by each of the compute nodes assigned to the first group, a first test message to each adjacent compute node assigned to the second group; determining, by each of the compute nodes assigned to the second group, whether the first test message was received from each adjacent compute node assigned to the first group; and notifying a user, by each of the compute nodes assigned to the second group, whether the first test message was received.
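
      The checkerboard scheme in this abstract can be mimicked in a few lines of simulation (purely illustrative; the node coordinates, failure set, and function names are invented): first-group nodes send test messages to their second-group neighbors, and any expected message that never arrives marks a failed link.

```python
# Toy simulation of two-group link testing on a rectangular mesh.
def neighbors(x, y, w, h):
    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nx, ny = x + dx, y + dy
        if 0 <= nx < w and 0 <= ny < h:
            yield nx, ny

def detect_failed_links(w, h, failed_links):
    first_group = [(x, y) for x in range(w) for y in range(h) if (x + y) % 2 == 0]
    received = {((x, y), nb) for x, y in first_group
                for nb in neighbors(x, y, w, h) if ((x, y), nb) not in failed_links}
    # Every first->second link with no delivered test message is reported.
    return [((x, y), nb) for x, y in first_group
            for nb in neighbors(x, y, w, h) if ((x, y), nb) not in received]

print(detect_failed_links(4, 4, failed_links={((0, 0), (0, 1))}))
```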

    19. Computer Processor Allocator

      Energy Science and Technology Software Center (OSTI)

      2004-03-01

      The Compute Processor Allocator (CPA) provides an efficient and reliable mechanism for managing and allotting processors in a massively parallel (MP) computer. It maintains information in a database on the health, configuration, and allocation of each processor. This persistent information is factored into each allocation decision. The CPA runs in a distributed fashion to avoid a single point of failure.
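
      A minimal sketch of the allocation idea (class and method names invented; the real CPA tracks much richer state in its database): keep per-processor health and allocation status, and hand out only healthy, free processors.

```python
# Toy processor allocator keyed on per-processor state.
class ProcessorAllocator:
    def __init__(self, num_procs):
        self.state = {p: "free" for p in range(num_procs)}   # free/allocated/down

    def mark_down(self, proc):
        self.state[proc] = "down"

    def allocate(self, count):
        free = [p for p, s in self.state.items() if s == "free"]
        if len(free) < count:
            return None                   # not enough healthy processors
        for p in free[:count]:
            self.state[p] = "allocated"
        return free[:count]

    def release(self, procs):
        for p in procs:
            self.state[p] = "free"

cpa = ProcessorAllocator(8)
cpa.mark_down(2)
print(cpa.allocate(4))                    # [0, 1, 3, 4]
```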

    20. Computer Accounts | Stanford Synchrotron Radiation Lightsource

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Computer Accounts Each user group must have a computer account. Additionally, all persons using these accounts are responsible for understanding and complying with the terms...

    1. Applied Computer Science

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      ADTSC » CCS » CCS-7 Applied Computer Science Innovative co-design of applications, algorithms, and architectures in order to enable scientific simulations at extreme scale Leadership Group Leader Linn Collins Email Deputy Group Leader (Acting) Bryan Lally Email Climate modeling visualization Results from a climate simulation computed using the Model for Prediction Across Scales (MPAS) code. This visualization shows the temperature of ocean currents using a green and blue color scale. These

    2. Computational Earth Science

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      6 Computational Earth Science We develop and apply a range of high-performance computational methods and software tools to Earth science projects in support of environmental health, cleaner energy, and national security. Contact Us Group Leader Carl Gable Deputy Group Leader Gilles Bussod Email Profile pages header Search our Profile pages Hari Viswanathan inspects a microfluidic cell used to study the extraction of hydrocarbon fuels from a complex fracture network. EES-16's Subsurface Flow

    3. Compute nodes

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Compute nodes Compute nodes Click here to see a more detailed hierarchical map of the topology of a compute node. Last edited: 2016-02-01 08:07:08

    4. Computing Information

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      here you can find information relating to: Obtaining the right computer accounts. Using NIC terminals. Using BooNE's Computing Resources, including: Choosing your desktop....

    5. Computer System,

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      undergraduate summer institute http:isti.lanl.gov (Educational Prog) 2016 Computer System, Cluster, and Networking Summer Institute Purpose The Computer System,...

    6. Research Groups - Cyclotron Institute

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Research Groups Research Group Homepages: Nuclear Theory Group Dr. Sherry Yennello's Research Group Dr. Dan Melconian's Research Group Dr. Cody Folden's Group...

    7. Measurements and computations of room airflow with displacement ventilation

      SciTech Connect (OSTI)

      Yuan, X.; Chen, Q.; Glicksman, L.R.; Hu, Y.; Yang, X.

      1999-07-01

      This paper presents a set of detailed experimental data of room airflow with displacement ventilation. These data were obtained from a new environmental test facility. The measurements were conducted for three typical room configurations: a small office, a large office with partitions, and a classroom. The distributions of air velocity, air velocity fluctuation, and air temperature were measured by omnidirectional hot-sphere anemometers, and contaminant concentrations were measured by tracer gas at 54 points in the rooms. Smoke was used to observe airflow. The data also include the wall surface temperature distribution, air supply parameters, and the age of air at several locations in the rooms. A computational fluid dynamics (CFD) program with the Re-Normalization Group (RNG) {kappa}-{epsilon} model was also used to predict the indoor airflow. The agreement between the computed results and measured data of air temperature and velocity is good. However, some discrepancies exist in the computed and measured concentrations and velocity fluctuation.

    8. Prabhat Steps In as DAS Group Lead

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Prabhat Steps In as DAS Group Lead Prabhat Steps In as DAS Group Lead September 1, 2014 prabhat Prabhat has been named Group Lead of the Data and Analytics Services (DAS) Group at the Department of Energy's National Energy Research Scientific Computing Center (NERSC). The DAS group helps NERSC's users address data and analytics challenges arising from the increasing size and complexity of data from simulations and experiments. As the DAS Group Lead, Prabhat will play a key role in developing and

    9. Computing and Computational Sciences Directorate - Computer Science...

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      AWARD Winners: Jess Gehin; Jackie Isaacs; Douglas Kothe; Debbie McCoy; Bonnie Nestor; John Turner; Gilbert Weigand Organization(s): Nuclear Technology Program; Computing and...

    10. EIS Distribution

      Broader source: Energy.gov [DOE]

      This DOE guidance presents a series of recommendations related to the EIS distribution process, which includes creating and updating a distribution list, distributing an EIS, and filing an EIS with the EPA.

    11. Computing Sciences

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Division The Computational Research Division conducts research and development in mathematical modeling and simulation, algorithm design, data storage, management and...

    12. Computing Resources

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Cluster-Image TRACC RESEARCH Computational Fluid Dynamics Computational Structural Mechanics Transportation Systems Modeling Computing Resources The TRACC Computational Clusters With the addition of a new cluster called Zephyr that was made operational in September of this year (2012), TRACC now offers two clusters to choose from: Zephyr and our original cluster that has now been named Phoenix. Zephyr was acquired from Atipa technologies, and it is a 92-node system with each node having two AMD

    13. Energy Measurements Group

      Office of Legacy Management (LM)

      EG&G Energy Measurements Group SUMMARY REPORT. AERIAL RADIOLOGICAL SURVEY - NIAGARA FALLS AREA, NIAGARA FALLS, NEW YORK. DATE OF SURVEY: SEPTEMBER 1979. APPROVED FOR DISTRIBUTION: P. Stuart, EG&G, Inc.; Herbert F. Hahn, Department of Energy. PERFORMED BY EG&G, INC. UNDER CONTRACT NO. DE-AHO&76NV01163 WITH THE UNITED STATES DEPARTMENT OF ENERGY. November 30, 1979 - The Aerial Measurements System (AMS), operated by EG&G, Inc. for the United States Department of

    14. NERSC User Group Meeting 2014

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Meeting 2014 Planck @ NERSC Theodore Kisner Computational Cosmology Center, LBNL On behalf of the Planck collaboration NERSC User Group Meeting 2014 The Cosmic Microwave Background * Universe begins with hot Big Bang and then expands and cools. * After 370,000 years temperature drops to 3000K. * p+ + e- => H: Universe becomes neutral & transparent. * Photons free-stream to observers today. They are redshifted and appear as a 3K blackbody. Source: NASA Temp = 3K Today NERSC User Group

    15. Compute Nodes

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Compute Nodes Compute Nodes Quad-Core AMD Opteron processor Compute Node Configuration 9,572 nodes 1 quad-core AMD 'Budapest' 2.3 GHz processor per node 4 cores per node (38,288 total cores) 8 GB DDR3 800 MHz memory per node Peak Gflop rate 9.2 Gflops/core 36.8 Gflops/node 352 Tflops for the entire machine Each core has its own L1 and L2 caches, with 64 KB and 512 KB respectively 2 MB L3 cache shared among the 4 cores Compute Node Software By default the compute nodes run a restricted low-overhead

    16. TEC Working Group Topic Groups Archives Consolidated Grant Topic Group |

      Office of Environmental Management (EM)

      Department of Energy Consolidated Grant Topic Group TEC Working Group Topic Groups Archives Consolidated Grant Topic Group The Consolidated Grant Topic Group arose from recommendations provided by the TEC and other external parties to the DOE Senior Executive Transportation Forum in July 1998. It was proposed that the consolidation of multiple funding streams from numerous DOE sources into a single grant would provide a more equitable and efficient means of assistance to States and Tribes

    17. Broadcasting a message in a parallel computer

      DOE Patents [OSTI]

      Berg, Jeremy E. (Rochester, MN); Faraj, Ahmad A. (Rochester, MN)

      2011-08-02

      Methods, systems, and products are disclosed for broadcasting a message in a parallel computer. The parallel computer includes a plurality of compute nodes connected together using a data communications network. The data communications network is optimized for point-to-point data communications and is characterized by at least two dimensions. The compute nodes are organized into at least one operational group of compute nodes for collective parallel operations of the parallel computer. One compute node of the operational group is assigned to be a logical root. Broadcasting a message in a parallel computer includes: establishing a Hamiltonian path along all of the compute nodes in at least one plane of the data communications network and in the operational group; and broadcasting, by the logical root to the remaining compute nodes, the logical root's message along the established Hamiltonian path.
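
      The Hamiltonian-path idea can be illustrated with a boustrophedon ("snake") walk over one plane of a mesh (a hedged sketch with invented names, not the patented implementation): the root's message is forwarded hop by hop along a path that visits every node exactly once.

```python
# Toy snake-path broadcast over a width x height plane of a mesh.
def hamiltonian_path(width, height):
    path = []
    for y in range(height):
        row = [(x, y) for x in range(width)]
        path.extend(row if y % 2 == 0 else reversed(row))
    return path

def broadcast(message, width, height):
    delivered = {}
    for hop, node in enumerate(hamiltonian_path(width, height)):
        delivered[node] = (message, hop)   # each node forwards to the next
    return delivered

result = broadcast("payload", 4, 3)
print(len(result), "nodes reached; final hop:", max(h for _, h in result.values()))
```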

    18. Computer Science

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Cite Seer Department of Energy provided open access science research citations in chemistry, physics, materials, engineering, and computer science IEEE Xplore Full text...

    19. Computer Security

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Computer Security All JLF participants must fully comply with all LLNL computer security regulations and procedures. A laptop entering or leaving B-174 for the sole use by a US citizen and so configured, and requiring no IP address, need not be registered for use in the JLF. By September 2009, it is expected that computers for use by Foreign National Investigators will have no special provisions. Notify maricle1@llnl.gov of all other computers entering, leaving, or being moved within B 174. Use

    20. Computing and Computational Sciences Directorate - Divisions

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      CCSD Divisions Computational Sciences and Engineering Computer Sciences and Mathematics Information Technology Services Joint Institute for Computational Sciences National Center for Computational Sciences

    1. Computing at SSRL Home Page

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      contents you are looking for have moved. You will be redirected to the new location automatically in 5 seconds. Please bookmark the correct page at http://www-ssrl.slac.stanford.edu/content/staff-resources/computer-networking-group

    2. Compute Nodes

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Compute Nodes Compute Nodes There are currently 2632 nodes available on PDSF. The compute (batch) nodes at PDSF are heterogeneous, reflecting the periodic procurement of new nodes (and the eventual retirement of old nodes). From the user's perspective they are essentially all equivalent except that some have more memory per job slot. If your jobs have memory requirements beyond the default maximum of 1.1GB you should specify that in your job submission and the batch system will run your job on an

    3. Compute Nodes

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Nodes Quad-Core AMD Opteron processor Compute Node Configuration 9,572 nodes 1 quad-core AMD 'Budapest' 2.3 GHz processor per node 4 cores per node (38,288 total cores) 8 GB...

    4. Exascale Computing

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Computing Exascale Computing CoDEx Project: A Hardware/Software Codesign Environment for the Exascale Era The next decade will see a rapid evolution of HPC node architectures as power and cooling constraints are limiting increases in microprocessor clock speeds and constraining data movement. Applications and algorithms will need to change and adapt as node architectures evolve. A key element of the strategy as we move forward is the co-design of applications, architectures and programming

    5. LHC Computing

      SciTech Connect (OSTI)

      Lincoln, Don

      2015-07-28

      The LHC is the world’s highest energy particle accelerator and scientists use it to record an unprecedented amount of data. This data is recorded in electronic format and it requires an enormous computational infrastructure to convert the raw data into conclusions about the fundamental rules that govern matter. In this video, Fermilab’s Dr. Don Lincoln gives us a sense of just how much data is involved and the incredible computer resources that makes it all possible.

    6. Data System Sciences & Engineering Group

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      CSS Directorate ORNL Data System Sciences & Engineering Group Computational Sciences & Engineering Division Home Organization The Advanced Computing Solutions Team The Data Systems Research Integration Team Research Areas Data Systems Architectures for National Security Risk Analysis Streaming Realtime Sensor Networks Visual Analytics Opportunities Contact Us Data System Sciences & Engineering Group DSSE goes past traditional approaches to develop new methods for meeting user needs

    7. Distribution Workshop

      Broader source: Energy.gov [DOE]

      On September 24-26, 2012, the GTT presented a workshop on grid integration on the distribution system at the Sheraton Crystal City near Washington, DC.

    8. Computational Systems

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Software Policies User Surveys NERSC Users Group User Announcements Help Staff Blogs Request Repository Mailing List Operations for: Passwords & Off-Hours Status...

    9. Computational mechanics

      SciTech Connect (OSTI)

      Goudreau, G.L.

      1993-03-01

      The Computational Mechanics thrust area sponsors research into the underlying solid, structural and fluid mechanics and heat transfer necessary for the development of state-of-the-art general purpose computational software. The scale of computational capability spans office workstations, departmental computer servers, and Cray-class supercomputers. The DYNA, NIKE, and TOPAZ codes have achieved world fame through our broad collaborators program, in addition to their strong support of on-going Lawrence Livermore National Laboratory (LLNL) programs. Several technology transfer initiatives have been based on these established codes, teaming LLNL analysts and researchers with counterparts in industry, extending code capability to specific industrial interests of casting, metalforming, and automobile crash dynamics. The next-generation solid/structural mechanics code, ParaDyn, is targeted toward massively parallel computers, which will extend performance from gigaflop to teraflop power. Our work for FY-92 is described in the following eight articles: (1) Solution Strategies: New Approaches for Strongly Nonlinear Quasistatic Problems Using DYNA3D; (2) Enhanced Enforcement of Mechanical Contact: The Method of Augmented Lagrangians; (3) ParaDyn: New Generation Solid/Structural Mechanics Codes for Massively Parallel Processors; (4) Composite Damage Modeling; (5) HYDRA: A Parallel/Vector Flow Solver for Three-Dimensional, Transient, Incompressible Viscous Flow; (6) Development and Testing of the TRIM3D Radiation Heat Transfer Code; (7) A Methodology for Calculating the Seismic Response of Critical Structures; and (8) Reinforced Concrete Damage Modeling.

    10. Computing and Computational Sciences Directorate - Contacts

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Home › About Us Contacts Jeff Nichols Associate Laboratory Director Computing and Computational Sciences Becky Verastegui Directorate Operations Manager Computing and Computational Sciences Directorate Michael Bartell Chief Information Officer Information Technologies Services Division Jim Hack Director, Climate Science Institute National Center for Computational Sciences Shaun Gleason Division Director Computational Sciences and Engineering Barney Maccabe Division Director Computer Science

    11. Compute Nodes

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Compute Nodes Compute Nodes Compute Node Configuration 6,384 nodes 2 twelve-core AMD 'MagnyCours' 2.1-GHz processors per node (see die image to the right and schematic below) 24 cores per node (153,216 total cores) 32 GB DDR3 1333-MHz memory per node (6,000 nodes) 64 GB DDR3 1333-MHz memory per node (384 nodes) Peak Gflop/s rate: 8.4 Gflops/core 201.6 Gflops/node 1.28 Peta-flops for the entire machine Each core has its own L1 and L2 caches, with 64 KB and 512KB respectively One 6-MB

    12. Computational mechanics

      SciTech Connect (OSTI)

      Raboin, P J

      1998-01-01

      The Computational Mechanics thrust area is a vital and growing facet of the Mechanical Engineering Department at Lawrence Livermore National Laboratory (LLNL). This work supports the development of computational analysis tools in the areas of structural mechanics and heat transfer. Over 75 analysts depend on thrust area-supported software running on a variety of computing platforms to meet the demands of LLNL programs. Interactions with the Department of Defense (DOD) High Performance Computing and Modernization Program and the Defense Special Weapons Agency are of special importance as they support our ParaDyn project in its development of new parallel capabilities for DYNA3D. Working with DOD customers has been invaluable to driving this technology in directions mutually beneficial to the Department of Energy. Other projects associated with the Computational Mechanics thrust area include work with the Partnership for a New Generation Vehicle (PNGV) for "Springback Predictability" and with the Federal Aviation Administration (FAA) for the "Development of Methodologies for Evaluating Containment and Mitigation of Uncontained Engine Debris." In this report for FY-97, there are five articles detailing three code development activities and two projects that synthesized new code capabilities with new analytic research in damage/failure and biomechanics. The articles this year are: (1) Energy- and Momentum-Conserving Rigid-Body Contact for NIKE3D and DYNA3D; (2) Computational Modeling of Prosthetics: A New Approach to Implant Design; (3) Characterization of Laser-Induced Mechanical Failure Damage of Optical Components; (4) Parallel Algorithm Research for Solid Mechanics Applications Using Finite Element Analysis; and (5) An Accurate One-Step Elasto-Plasticity Algorithm for Shell Elements in DYNA3D.

    13. Computing Resources

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Resources This page is the repository for sundry items of information relevant to general computing on BooNE. If you have a question or problem that isn't answered here, or a suggestion for improving this page or the information on it, please mail boone-computing@fnal.gov and we'll do our best to address any issues. Note about this page Some links on this page point to www.everything2.com, and are meant to give an idea about a concept or thing without necessarily wading through a whole website

    14. ITP Industrial Distributed Energy: Distributed Energy Program...

      Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

      ITP Industrial Distributed Energy: Distributed Energy Program Project Profile: Verizon Central Office Building ITP Industrial Distributed Energy: Distributed Energy Program Project...

    15. Automatic identification of abstract online groups

      DOE Patents [OSTI]

      Engel, David W; Gregory, Michelle L; Bell, Eric B; Cowell, Andrew J; Piatt, Andrew W

      2014-04-15

      Online abstract groups, in which members aren't explicitly connected, can be automatically identified by computer-implemented methods. The methods involve harvesting records from social media and extracting content-based and structure-based features from each record. Each record includes a social-media posting and is associated with one or more entities. Each feature is stored on a data storage device and includes a computer-readable representation of an attribute of one or more records. The methods further involve grouping records into record groups according to the features of each record. Further still the methods involve calculating an n-dimensional surface representing each record group and defining an outlier as a record having feature-based distances measured from every n-dimensional surface that exceed a threshold value. Each of the n-dimensional surfaces is described by a footprint that characterizes the respective record group as an online abstract group.
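
      A hedged, much-simplified sketch of the outlier step described above (the features, threshold, and sample postings are invented, and a group centroid stands in for the patent's n-dimensional surface): a record is an outlier if it lies far from every group in feature space.

```python
# Toy feature-space outlier test over groups of social-media records.
import math

def features(record):
    text = record["text"]
    return (len(text), text.count("#"), text.count("@"))

def centroid(group):
    feats = [features(r) for r in group]
    return tuple(sum(col) / len(col) for col in zip(*feats))

def is_outlier(record, groups, threshold):
    return all(math.dist(features(record), centroid(g)) > threshold for g in groups)

group_a = [{"text": "short post #energy"}, {"text": "brief note #doe"}]
group_b = [{"text": "a much longer posting about distributed computing @lab " * 3}]
print(is_outlier({"text": "medium length posting about groups"}, [group_a, group_b], 10.0))
```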

    16. Advanced Scientific Computing Research

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Advanced Scientific Computing Research Advanced Scientific Computing Research Discovering, developing, and deploying computational and networking capabilities to analyze, model,...

    17. Secure computing for the 'Everyman'

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Secure computing for the 'Everyman' Secure computing for the 'Everyman' If implemented on a wide scale, quantum key distribution technology could ensure truly secure commerce, banking, communications and data transfer. September 2, 2014 This small device developed at Los Alamos National Laboratory uses the truly random spin of light particles as defined by laws of quantum mechanics to generate a random number for use in a cryptographic key that can be used to securely transmit information

    18. Sandia Energy - Computational Science

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Computational Science Home Energy Research Advanced Scientific Computing Research (ASCR) Computational Science

    19. Nick Wright Named Advanced Technologies Group Lead

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Nick Wright Named Advanced Technologies Group Lead Nick Wright Named Advanced Technologies Group Lead February 4, 2013 Nick Nick Wright has been named head of the National Energy Research Scientific Computing Center's (NERSC) Advanced Technologies Group (ATG), which focuses on understanding the requirements of current and emerging applications to make choices in hardware design and programming models that best serve the science needs of NERSC users. ATG specializes in benchmarking, system

    20. JLF User Group

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      JLF User Group NIF and Jupiter User Group Meeting 2016 The 2016 NIF User Group Meeting will take place the first week of February. The exact dates are Sunday evening, January 31st,...

    1. Debugging a high performance computing program

      DOE Patents [OSTI]

      Gooding, Thomas M.

      2014-08-19

      Methods, apparatus, and computer program products are disclosed for debugging a high performance computing program by gathering lists of addresses of calling instructions for a plurality of threads of execution of the program, assigning the threads to groups in dependence upon the addresses, and displaying the groups to identify defective threads.
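
      The grouping step can be sketched in a few lines (illustrative only; the thread IDs and addresses below are made up): bucket threads by their tuple of calling-instruction addresses, so a thread whose call chain differs from its peers stands out immediately.

```python
# Toy grouping of threads by call-site addresses.
from collections import defaultdict

def group_threads(call_addresses):        # {thread_id: [addr, addr, ...]}
    groups = defaultdict(list)
    for tid, addrs in call_addresses.items():
        groups[tuple(addrs)].append(tid)
    return groups

stacks = {0: [0x400a10, 0x400b20], 1: [0x400a10, 0x400b20],
          2: [0x400a10, 0x400b20], 3: [0x400a10, 0x400c30]}   # thread 3 differs
for addrs, tids in group_threads(stacks).items():
    print([hex(a) for a in addrs], "->", tids)
```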

    2. Debugging a high performance computing program

      DOE Patents [OSTI]

      Gooding, Thomas M.

      2013-08-20

      Methods, apparatus, and computer program products are disclosed for debugging a high performance computing program by gathering lists of addresses of calling instructions for a plurality of threads of execution of the program, assigning the threads to groups in dependence upon the addresses, and displaying the groups to identify defective threads.

    3. Computer System,

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      System, Cluster, and Networking Summer Institute New Mexico Consortium and Los Alamos National Laboratory HOW TO APPLY Applications will be accepted JANUARY 5 - FEBRUARY 13, 2016 Computing and Information Technology undergraduate students are encouraged to apply. Must be a U.S. citizen. * Submit a current resume; * Official University Transcript (with spring courses posted and/or a copy of spring 2016 schedule) 3.0 GPA minimum; * One Letter of Recommendation from a Faculty Member; and * Letter of

    4. Computing Events

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Events Computing Events Spotlighting the most advanced scientific and technical applications in the world! Featuring exhibits of the latest and greatest technologies from industry, academia and government research organizations; many of these technologies will be seen for the first time in Denver. Supercomputing Conference 13 Denver, Colorado November 17-22, 2013 Spotlighting the most advanced scientific and technical applications in the world, SC13 will bring together the international

    5. NERSC User's Group Meeting 2.4.14 Computational Facilities: ...

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Nam, Anne Houdusse, Robert Sauer Financial support: NIH Conformational change in ... Transition path theory and string methods B 1 B n (Hyperplane approximation) For ...

    6. TEC Working Group Topic Groups Manual Review

      Broader source: Energy.gov [DOE]

      This group is responsible for the update of DOE Manual 460.2-1, Radioactive Material Transportation Practices Manual.  This manual was issued on September 23, 2002, and establishes a set of...

    7. Computing and Computational Sciences Directorate - Computer Science and

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Mathematics Division Computer Science and Mathematics Division The Computer Science and Mathematics Division (CSMD) is ORNL's premier source of basic and applied research in high-performance computing, applied mathematics, and intelligent systems. Our mission includes basic research in computational sciences and application of advanced computing systems, computational, mathematical and analysis techniques to the solution of scientific problems of national importance. We seek to work

    8. HEP-FCE Working Group on Libraries and Tools

      SciTech Connect (OSTI)

      Borgland, Anders; Elmer, Peter; Kirby, Michael; Patton, Simon; Potekhin, Maxim; Viren, Brett; Yanny, Brian

      2014-12-19

      The High-Energy Physics Forum for Computational Excellence (HEP-FCE) was formed by the Department of Energy as a follow-up to a recent report from the Topical Panel on Computing[1] and the associated P5 recommendation[2]. It is a pilot project distributed across the DOE Labs. During this initial incubation period the Forum is to develop a plan for a robust, long-term organization structure and a functioning web presence for forum activities and outreach, and a study of hardware and software needs across the HEP program. In the following sections we give this working group's vision for aspects and qualities we wish to see in a future HEP-FCE. We then give a prioritized list of technical activities with suggested scoping and deliverables that can be expected to provide cross-experiment benefits. The remaining bulk of the report gives a technical survey of some specific areas of opportunity for cross-experiment benefit in the realm of software libs/tools. This survey serves as support for the vision and prioritized list. For each area we describe the ways that cross-experiment benefit is achieved today, as well as describe known failings or pitfalls where such benefit has failed to be achieved and which should be avoided in the future. For both cases, we try to give concrete examples. Each area then ends with an examination of what opportunities exist for improvements in that particular area.

    9. JLab Users Group

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      JLab Users Group User Liaison Home Users Group Program Advisory Committee User/Researcher Information UG Resources Background & Purpose Users Group Wiki By Laws Board of Directors Board of Directors Minutes Directory of Members Events At-A-Glance Member Institutions News Users Group Mailing

    10. Jason Hick! Storage Systems Group! NERSC User Group Meeting!

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Group! NERSC User Group Meeting! February 6, 2014. Storage Systems: 2014 and beyond. The compute and storage systems 2013: Production Clusters Carver, PDSF, JGI, KBASE, HEP; 14x QDR; Global Scratch 3.6 PB, 5 x SFA12KE; /project 5 PB, DDN9900 & NexSAN; /home 250 TB, NetApp 5460; 50 PB stored, 240 PB capacity, 35 years of community data; HPSS 16 x QDR IB; 2.2 PB Local Scratch 70 GB/s; 6.4 PB Local Scratch 140 GB/s; 16 x FDR IB; Ethernet & IB Fabric; Science Friendly Security

    11. Computing at JLab

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Jefferson Lab Jefferson Lab Home Search Contact JLab Computing at JLab ---------------------- Accelerator Controls CAD CDEV CODA Computer Center High Performance Computing Scientific Computing JLab Computer Silo maintained by webmaster@jlab.org

    12. The Ren Group - Home

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      resolution (1-2 nm). We continue to develop this approach by optimizing through empirical and computational methods to achieve high-resolution structures of single...

    13. NERSC Intern Wins Award for Computing Achievement

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Intern Wins Award for Computing Achievement NERSC Intern Wins Award for Computing Achievement March 27, 2013 Linda Vu, lvu@lbl.gov, +1 510 495 2402 ncwit1 Stephanie Cabanela, a student intern in the National Energy Research Scientific Computing Center's (NERSC) Operation Technologies Group was honored with the Bay Area Affiliate National Center for Women and Information Technology (NCWIT) Aspirations in Computing award on Saturday, March 16, 2013 in a ceremony in San Jose, CA. The award honors

    14. Agenda for the Derived Liquids to Hydrogen Distributed Reforming...

      Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

      Hydrogen Distributed Reforming Working Group (BILIWG) Hydrogen Production Technical Team Research Review This is the agenda for the working group sessions held in Laurel, Maryland...

    15. Moltech Power Systems Group MPS Group | Open Energy Information

      Open Energy Info (EERE)

      Moltech Power Systems Group MPS Group Jump to: navigation, search Name: Moltech Power Systems Group (MPS Group) Place: China Product: China-based subsidiary of Shanghai Huayi Group...

    16. Hanergy Holdings Group Company Ltd formerly Farsighted Group...

      Open Energy Info (EERE)

      Hanergy Holdings Group Company Ltd formerly Farsighted Group aka Huarui Group Jump to: navigation, search Name: Hanergy Holdings Group Company Ltd (formerly Farsighted Group, aka...

    17. MiniBooNE Pion Group

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Pion Group

    18. Distributed Merge Trees

      SciTech Connect (OSTI)

      Morozov, Dmitriy; Weber, Gunther

      2013-01-08

      Improved simulations and sensors are producing datasets whose increasing complexity exhausts our ability to visualize and comprehend them directly. To cope with this problem, we can detect and extract significant features in the data and use them as the basis for subsequent analysis. Topological methods are valuable in this context because they provide robust and general feature definitions. As the growth of serial computational power has stalled, data analysis is becoming increasingly dependent on massively parallel machines. To satisfy the computational demand created by complex datasets, algorithms need to effectively utilize these computer architectures. The main strength of topological methods, their emphasis on global information, turns into an obstacle during parallelization. We present two approaches to alleviate this problem. We develop a distributed representation of the merge tree that avoids computing the global tree on a single processor and lets us parallelize subsequent queries. To account for the increasing number of cores per processor, we develop a new data structure that lets us take advantage of multiple shared-memory cores to parallelize the work on a single node. Finally, we present experiments that illustrate the strengths of our approach as well as help identify future challenges.
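
      As background for the distributed representation described above, a sequential merge (join) tree can be sketched with a union-find sweep over vertices in decreasing function value (a hedged sketch with invented inputs, not the paper's distributed algorithm): each time two superlevel-set components meet, a merge event is recorded.

```python
# Sequential join-tree sketch via union-find on a scalar field over a graph.
def merge_tree(values, edges):
    order = sorted(range(len(values)), key=lambda v: -values[v])
    parent = {}
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v
    merges, seen = [], set()
    for v in order:
        parent[v] = v
        seen.add(v)
        for a, b in edges:
            u = b if a == v else a if b == v else None
            if u is not None and u in seen and find(u) != find(v):
                merges.append((v, values[v]))   # components merge at vertex v
                parent[find(u)] = find(v)
    return merges

values = [3.0, 1.0, 4.0, 0.5, 5.0]              # scalar field on a path graph
edges = [(0, 1), (1, 2), (2, 3), (3, 4)]
print(merge_tree(values, edges))
```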

    19. Exploratory Experimentation and Computation

      SciTech Connect (OSTI)

      Bailey, David H.; Borwein, Jonathan M.

      2010-02-25

      We believe the mathematical research community is facing a great challenge to re-evaluate the role of proof in light of recent developments. On one hand, the growing power of current computer systems, of modern mathematical computing packages, and of the growing capacity to data-mine on the Internet, has provided marvelous resources to the research mathematician. On the other hand, the enormous complexity of many modern capstone results such as the Poincare conjecture, Fermat's last theorem, and the classification of finite simple groups has raised questions as to how we can better ensure the integrity of modern mathematics. Yet as the need and prospects for inductive mathematics blossom, the requirement to ensure the role of proof is properly founded remains undiminished.

    20. Introduction to computers: Reference guide

      SciTech Connect (OSTI)

      Ligon, F.V.

      1995-04-01

      The "Introduction to Computers" program establishes formal partnerships with local school districts and community-based organizations, introduces computer literacy to precollege students and their parents, and encourages students to pursue Scientific, Mathematical, Engineering, and Technical careers (SET). Hands-on assignments are given in each class, reinforcing the lesson taught. In addition, the program is designed to broaden the knowledge base of teachers in scientific/technical concepts, and Brookhaven National Laboratory continues to act as a liaison, offering educational outreach to diverse community organizations and groups. This manual contains the teacher's lesson plans and the student documentation to this introduction to computer course.

    1. Distributed Optimization System

      DOE Patents [OSTI]

      Hurtado, John E. (Albuquerque, NM); Dohrmann, Clark R. (Albuquerque, NM); Robinett, III, Rush D. (Tijeras, NM)

      2004-11-30

      A search system and method for controlling multiple agents to optimize an objective using distributed sensing and cooperative control. The search agents can be one or more physical agents, such as robots, or software agents for searching cyberspace. The objective can be: chemical sources, temperature sources, radiation sources, light sources, evaders, trespassers, explosive sources, time-dependent sources, time-independent sources, function surfaces, maximization points, minimization points, and optimal control of a system such as a communication system, an economy, a crane, or a multi-processor computer.
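
      As a loose, hypothetical illustration of distributed sensing with cooperative control (not the patented method), the sketch below has a few software agents share the best reading of a source field seen so far and drift toward it while still exploring. The source location, the field model, and all parameters are invented.

      import random

      SOURCE = (3.0, -2.0)                     # hypothetical source location

      def measure(x, y):
          """Sensor model: signal strength increases toward the source."""
          dx, dy = x - SOURCE[0], y - SOURCE[1]
          return -(dx * dx + dy * dy)

      def cooperative_search(n_agents=5, steps=200, gain=0.2, seed=1):
          rng = random.Random(seed)
          agents = [(rng.uniform(-10, 10), rng.uniform(-10, 10)) for _ in range(n_agents)]
          best_pos, best_val = agents[0], measure(*agents[0])
          for _ in range(steps):
              # Distributed sensing: every agent samples locally; the team keeps the best reading.
              for pos in agents:
                  val = measure(*pos)
                  if val > best_val:
                      best_val, best_pos = val, pos
              # Cooperative control: each agent moves toward the shared best, plus exploration noise.
              agents = [(x + gain * (best_pos[0] - x) + rng.gauss(0, 0.05),
                         y + gain * (best_pos[1] - y) + rng.gauss(0, 0.05))
                        for x, y in agents]
          return best_pos, best_val

      print(cooperative_search())              # converges near the source at (3, -2)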

    2. Distributed Generation

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Untapped Value of Backup Generation While new guidelines and regulations such as IEEE (Institute of Electrical and Electronics Engineers) 1547 have come a long way in addressing interconnection standards for distributed generation, utilities have largely overlooked the untapped potential of these resources. Under certain conditions, these units (primarily backup generators) represent a significant source of power that can deliver utility services at lower costs than traditional centralized

    3. Distribution Category:

      Office of Legacy Management (LM)

      Distribution Category: Remedial Action and Decommissioning Program (UC-70A). DOE/EV-0005/48, ANL-OHS/HP-84-104. ARGONNE NATIONAL LABORATORY, 9700 South Cass Avenue, Argonne, Illinois 60439. FORMERLY UTILIZED MED/AEC SITES REMEDIAL ACTION PROGRAM: RADIOLOGICAL SURVEY OF THE HARSHAW CHEMICAL COMPANY, CLEVELAND, OHIO. Prepared by R. A. Wynveen, Associate Division Director, OHS; W. H. Smith, Senior Health Physicist; C. M. Sholeen, Health Physicist; A. L. Justus, Health Physicist; K. F. Flynn, Health Physicist

    4. Running Jobs by Group

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Running Jobs by Group Running Jobs by Group Daily Graph: Weekly Graph: Monthly Graph: Yearly Graph: 2 Year Graph: Last edited: 2016-02-01 08:06:40

    5. Pending Jobs by Group

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Pending Jobs by Group Pending Jobs by Group Daily Graph: Weekly Graph: Monthly Graph: Yearly Graph: 2 Year Graph: Last edited: 2016-02-01 08:07:15

    6. Running Jobs by Group

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Running Jobs by Group Running Jobs by Group Daily Graph: Weekly Graph: Monthly Graph: Yearly Graph: 2 Year Graph: Last edited: 2011-04-05 13:59:48...

    7. Pending Jobs by Group

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Pending Jobs by Group Pending Jobs by Group Daily Graph: Weekly Graph: Monthly Graph: Yearly Graph: 2 Year Graph: Last edited: 2011-04-05 14:00:14...

    8. HASQARD Focus Group

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      at the August meeting, the Focus Group Secretary continues to work on deleting the language proposed by the QA Sub-group that would have divided the section on methods into one...

    9. TEC Communications Topic Group

      Office of Environmental Management (EM)

      procurement - Routing criteria / emergency preparedness Tribal Issues Topic Group * TEPP Navajo Nation (Tom Clawson) - 1404 - Needs Assessment * Identified strengths and...

    10. Interagency Sustainability Working Group

      Broader source: Energy.gov [DOE]

      The Interagency Sustainability Working Group (ISWG) is the coordinating body for sustainable buildings in the federal government.

    11. Computing Resources | Argonne Leadership Computing Facility

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Computing Resources Mira Cetus and Vesta Visualization Cluster Data and Networking Software JLSE Computing Resources Theory and Computing Sciences Building Argonne's Theory and Computing Sciences (TCS) building houses a wide variety of computing systems including some of the most powerful supercomputers in the world. The facility has 25,000 square feet of raised computer floor space and a pair of redundant 20 megavolt amperes electrical feeds from a 90 megawatt substation. The building also

    12. SSRL ETS Group

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      STANFORD SYNCHROTRON RADIATION LABORATORY Stanford Linear Accelerator Center Engineering & Technical Services Groups: Mechanical Services Group Mechanical Services Group Sharepoint ASD: Schedule Priorities Accelerator tech support - Call List Documentation: Engineering Notes, Drawings, and Accelerator Safety Documents Mechanical Systems: Accelerator Drawings Accelerator Pictures Accelerator Vacuum Systems (SSRL) LCW Vacuum Projects: Last Updated: February 8, 2007 Ben Scott

    13. Nilsson Group Members

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Welcome to the Nilsson group. Primary research interests in the Nilsson group include using x-ray spectroscopies to understand: the structure of water; bond breakage and formation during catalytic reactions on surfaces; and fundamental studies of electrochemistry for energy conversion

    14. Advanced Large-scale Integrated Computational Environment

      Energy Science and Technology Software Center (OSTI)

      1998-10-27

      The ALICE Memory Snooper is a software applications programming interface (API) and library for use in implementing computational steering systems. It allows distributed memory parallel programs to publish variables in the computation that may be accessed over the Internet. In this way, users can examine and even change the variables in their running application remotely. The API and library ensure the consistency of the variables across the distributed memory system.
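
      The actual AMS calls are not reproduced here; as a rough sketch of the computational-steering idea, the toy program below publishes simulation variables over a local socket with a made-up GET/SET text protocol, so a remote client could inspect or change them between iterations. The variable names, port, and protocol are illustrative assumptions (Python 3.8+ for socket.create_server).

      import socket, threading, time

      published = {"timestep": 0, "relaxation": 1.0}    # hypothetical steerable variables
      lock = threading.Lock()

      def steering_server(host="127.0.0.1", port=5555):
          srv = socket.create_server((host, port))
          while True:
              conn, _ = srv.accept()
              with conn:
                  req = conn.recv(1024).decode().split()
                  with lock:
                      if len(req) == 2 and req[0] == "GET" and req[1] in published:
                          conn.sendall(str(published[req[1]]).encode())
                      elif len(req) == 3 and req[0] == "SET" and req[1] in published:
                          published[req[1]] = float(req[2])
                          conn.sendall(b"OK")
                      else:
                          conn.sendall(b"ERR")

      threading.Thread(target=steering_server, daemon=True).start()

      # The "simulation": it republishes its state and re-reads the steerable values each
      # iteration, so a remote SET takes effect on the next step.
      for step in range(5):
          with lock:
              published["timestep"] = step
              relax = published["relaxation"]
          time.sleep(0.1)                                # stand-in for real work using `relax`

      A client could, for instance, send the text "SET relaxation 0.5" to port 5555 to steer the next iteration.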

    15. Secure key storage and distribution

      DOE Patents [OSTI]

      Agrawal, Punit

      2015-06-02

      This disclosure describes a distributed, fault-tolerant security system that enables the secure storage and distribution of private keys. In one implementation, the security system includes a plurality of computing resources that independently store private keys provided by publishers and encrypted using a single security system public key. To protect against malicious activity, the security system private key necessary to decrypt the publication private keys is not stored at any of the computing resources. Rather, portions (or shares) of the security system private key are stored at each of the computing resources within the security system, and multiple security systems must communicate and share partial decryptions in order to decrypt the stored private key.
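
      The record does not spell out how the shares are formed; one standard construction that gives the stated property (shares spread across nodes, with a threshold of them needed to recover the key) is Shamir secret sharing over a prime field, sketched below. The prime, the share counts, and the demo secret are arbitrary.

      import random

      PRIME = 2**127 - 1                    # demo field; real systems use key-sized parameters

      def split(secret, n, k, rng=random.SystemRandom()):
          """Split `secret` into n shares; any k of them reconstruct it, fewer reveal nothing."""
          coeffs = [secret] + [rng.randrange(PRIME) for _ in range(k - 1)]
          def f(x):
              acc = 0
              for c in reversed(coeffs):    # Horner evaluation of the polynomial mod PRIME
                  acc = (acc * x + c) % PRIME
              return acc
          return [(x, f(x)) for x in range(1, n + 1)]

      def reconstruct(shares):
          """Lagrange interpolation at x = 0 over the prime field."""
          secret = 0
          for i, (xi, yi) in enumerate(shares):
              num, den = 1, 1
              for j, (xj, _) in enumerate(shares):
                  if i != j:
                      num = (num * -xj) % PRIME
                      den = (den * (xi - xj)) % PRIME
              secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
          return secret

      shares = split(123456789, n=5, k=3)           # e.g. one share per computing resource
      print(reconstruct(shares[:3]) == 123456789)   # True: any 3 of the 5 shares suffice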

    16. High Performance Computing

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      HPC INL Logo Home High-Performance Computing INL's high-performance computing center provides general use scientific computing capabilities to support the lab's efforts in advanced...

    17. Computer Architecture Lab

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      The goal of the Computer Architecture Laboratory (CAL) is to engage in...

    18. Grouped exposed metal heaters

      DOE Patents [OSTI]

      Vinegar, Harold J. (Bellaire, TX); Coit, William George (Bellaire, TX); Griffin, Peter Terry (Brixham, GB); Hamilton, Paul Taylor (Houston, TX); Hsu, Chia-Fu (Granada Hills, CA); Mason, Stanley Leroy (Allen, TX); Samuel, Allan James (Kular Lumpar, MY); Watkins, Ronnie Wade (Cypress, TX)

      2010-11-09

      A system for treating a hydrocarbon containing formation is described. The system includes two or more groups of elongated heaters. The group includes two or more heaters placed in two or more openings in the formation. The heaters in the group are electrically coupled below the surface of the formation. The openings include at least partially uncased wellbores in a hydrocarbon layer of the formation. The groups are electrically configured such that current flow through the formation between at least two groups is inhibited. The heaters are configured to provide heat to the formation.

    19. Grouped exposed metal heaters

      DOE Patents [OSTI]

      Vinegar, Harold J. (Bellaire, TX); Coit, William George (Bellaire, TX); Griffin, Peter Terry (Brixham, GB); Hamilton, Paul Taylor (Houston, TX); Hsu, Chia-Fu (Granada Hills, CA); Mason, Stanley Leroy (Allen, TX); Samuel, Allan James (Kular Lumpar, ML); Watkins, Ronnie Wade (Cypress, TX)

      2012-07-31

      A system for treating a hydrocarbon containing formation is described. The system includes two or more groups of elongated heaters. The group includes two or more heaters placed in two or more openings in the formation. The heaters in the group are electrically coupled below the surface of the formation. The openings include at least partially uncased wellbores in a hydrocarbon layer of the formation. The groups are electrically configured such that current flow through the formation between at least two groups is inhibited. The heaters are configured to provide heat to the formation.

    20. January 2013 Most Viewed Documents for Mathematics And Computing...

      Office of Scientific and Technical Information (OSTI)

      January 2013 Most Viewed Documents for Mathematics And Computing Cybersecurity through Real-Time Distributed Control Systems Kisner, Roger A ORNL; Manges, Wayne W ORNL; ...

    1. Supercomputing on a Shoestring: Cluster Computers at JLab | Jefferson...

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      which describe the distribution of electric charge and current inside the nucleon. To calculate the solution to a science problem, a cluster computer slices space up...

    2. DISTRIBUTION CATEGORY

      Office of Scientific and Technical Information (OSTI)

      DISTRIBUTION CATEGORY UC-11. LAWRENCE LIVERMORE LABORATORY, University of California/Livermore, California 94550. UCRL-52658: CALCULATION OF CHEMICAL EQUILIBRIUM BETWEEN AQUEOUS SOLUTION AND MINERALS: THE EQ3/6 SOFTWARE PACKAGE. T. J. Wolery. MS. date: February 1, 1979.

    3. Specific Group Hardware

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      jobs to pdsfgrid (via condor) which submits jobs to the compute nodes, monitoring the cluster work load, and uploading job information to ALICE file catalog. It is monitored with...

    4. Logistical Multicast for Data Distribution

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Logistical Multicast for Data Distribution. Jason Zurawski, Martin Swany (Department of Computer and Information Sciences, University of Delaware, Newark, DE 19716; {zurawski, swany}@cis.udel.edu); Micah Beck, Ying Ding (Department of Computer Science, University of Tennessee, Knoxville, TN 37996; {mbeck, ying}@cs.utk.edu). Abstract: This paper describes a simple scheduling procedure for use in multicast data distribution within a logistical networking infrastructure. The goal of our scheduler is to

    5. TEC Working Group Topic Groups Routing Key Documents | Department...

      Office of Environmental Management (EM)

      TEC Working Group Topic Groups Routing Key Documents. Key documents: Proposed Task Plan - Routing Topic Group. More Documents & Publications: TEC Working Group...

    6. Mobile computing device configured to compute irradiance, glint, and glare of the sun

      DOE Patents [OSTI]

      Gupta, Vipin P; Ho, Clifford K; Khalsa, Siri Sahib

      2014-03-11

      Described herein are technologies pertaining to computing the solar irradiance distribution on a surface of a receiver in a concentrating solar power system or glint/glare emitted from a reflective entity. A mobile computing device includes at least one camera that captures images of the Sun and the entity of interest, wherein the images have pluralities of pixels having respective pluralities of intensity values. Based upon the intensity values of the pixels in the respective images, the solar irradiance distribution on the surface of the entity or glint/glare corresponding to the entity is computed by the mobile computing device.
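
      As a rough sketch of the relationship the abstract describes between pixel intensities and irradiance (and not the patented calibration), the snippet below scales intensities in a target region by a factor derived from an image of the solar disk. The assumed direct normal irradiance, the masks, and the synthetic frames are invented.

      import numpy as np

      DNI = 1000.0    # assumed direct normal irradiance, W/m^2

      def irradiance_map(sun_image, target_image, sun_mask, target_mask):
          """Per-pixel irradiance estimate (W/m^2) over the target region."""
          sun_level = sun_image[sun_mask].mean()       # mean intensity of the (unsaturated) solar disk
          scale = DNI / sun_level                      # W/m^2 per intensity count
          est = np.zeros_like(target_image, dtype=float)
          est[target_mask] = target_image[target_mask] * scale
          return est

      # Synthetic frames standing in for the two camera images.
      sun_img = np.full((100, 100), 200.0)
      sun_msk = np.ones((100, 100), dtype=bool)
      tgt_img = np.random.uniform(0, 400, (100, 100))
      tgt_msk = tgt_img > 50.0
      print(irradiance_map(sun_img, tgt_img, sun_msk, tgt_msk).max())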

    7. HASQARD Focus Group

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      7, 2013 The meeting was called to order by Huei Meznarich, HASQARD Focus Group Chair at 2:09 PM on December 17, 2013 in Conference Room 308 at 2420 Stevens. Those attending were: Huei Meznarich (Focus Group Chair), Cliff Watkins (Focus Group Secretary), Taffy Almeida, Joe Archuleta, Jeff Cheadle, Glen Clark, Robert Elkins, Scot Fitzgerald, Joan Kessner, Karl Pool, Chris Sutton, Amanda Tuttle, Rich Weiss and Eric Wyse. I. Huei Meznarich asked if there were any comments on the minutes from the

    8. HASQARD Focus Group

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      6, 2015 The meeting was called to order by Cliff Watkins, HASQARD Focus Group Secretary at 2:07 PM on May 26, 2015 in Conference Room 328 at 2420 Stevens. Those attending were: Jonathan Sanwald (Mission Support Alliance (MSA), Focus Group Chair), Cliff Watkins (Corporate Allocation Services, DOE-RL Support Contractor, Focus Group Secretary), Taffy Almeida (Pacific Northwest National Laboratory (PNNL)), Glen Clark (Washington River Protection Solution (WRPS)), Fred Dunhour (DOE-ORP), Scot

    9. HASQARD Focus Group

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      22, 2015 The meeting was called to order by Cliff Watkins, HASQARD Focus Group Secretary at 2:05 PM on October 22, 2015 in Conference Room 328 at 2420 Stevens. Those attending were: Jonathan Sanwald (Mission Support Alliance (MSA), Focus Group Chair), Cliff Watkins (Corporate Allocation Services, DOE-RL Support Contractor, Focus Group Secretary), Glen Clark (Washington River Protection Solution (WRPS)), Fred Dunhour (DOE-ORP), Joan Kessner (Washington Closure Hanford (WCH)), Karl Pool (Pacific

    10. TEC Communications Topic Group

      Office of Environmental Management (EM)

      Tribal Issues Topic Group Judith Holm, Chair April 21, 2004 Albuquerque, NM Tribal Issues Topic Group * February Tribal Summit with Secretary of Energy (Kristen Ellis, CI) - Held in conjunction with NCAI mid-year conference - First Summit held in response to DOE Indian Policy - Addressed barriers to communication and developing framework for interaction Tribal Issues Topic Group * Summit (continued) - Federal Register Notice published in March soliciting input on how to improve summit process

    11. NIF User Group

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      users NIF User Group The National Ignition Facility User Group provides an organized framework and independent vehicle for interaction between the scientists who use NIF for "Science Use of NIF" experiments and NIF management. Responsibility for NIF and the research programs carried out at NIF resides with the NIF Director. The NIF User Group advises the NIF Director on matters of concern to users, as well as providing a channel for communication for NIF users with funding agencies and

    12. HASQARD Focus Group

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      6, 2012 The meeting was called to order by Huei Meznarich, HASQARD Focus Group Chair at 2:04 PM on October 16, 2012 in Conference Room 308 at 2420 Stevens. Those attending were: Huei Meznarich (Focus Group Chair), Cliff Watkins (Focus Group Secretary), Jeff Cheadle, Glen Clark, Robert Elkins, Larry Markel, Mary McCormick-Barger, Karl Pool, Noe'l Smith-Jackson, Chris Sutton, Steve Trent, Amanda Tuttle, Sam Vega, Rich Weiss and Eric Wyse. New personnel have joined the Focus Group since the last

    13. HASQARD Focus Group

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      27, 2012 The meeting was called to order by Huei Meznarich, HASQARD Focus Group Chair at 2:09 PM on November 27, 2012 in Conference Room 308 at 2420 Stevens. Those attending were: Huei Meznarich (Focus Group Chair), Cliff Watkins (Focus Group Secretary), Glen Clark, Robert Elkins, Joan Kessner, Larry Markel, Mary McCormick-Barger, Steve Trent, and Rich Weiss. I. Huei Meznarich requested comments on the minutes from the October 16, 2012 meeting. No HASQARD Focus Group members present stated any

    14. HASQARD Focus Group

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      0, 2013 The meeting was called to order by Huei Meznarich, HASQARD Focus Group Chair at 2:05 PM on August 20, 2013 in Conference Room 308 at 2420 Stevens. Those attending were: Huei Meznarich (Focus Group Chair), Cliff Watkins (Focus Group Secretary), Taffy Almeida, Glen Clark, Robert Elkins, Scot Fitzgerald, Joan Kessner, Steve Smith, Rich Weiss and Eric Wyse. I. Huei Meznarich asked if there were any comments on the minutes from the July 23, 2013 meeting. No Focus Group members stated they had

    15. HASQARD Focus Group

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      5, 2014 The meeting was called to order by Huei Meznarich, HASQARD Focus Group Chair at 2:10 PM on April 15, 2014 in Conference Room 308 at 2420 Stevens. Those attending were: Huei Meznarich (Focus Group Chair), Cliff Watkins (Focus Group Secretary), Glen Clark, Robert Elkins, Scot Fitzgerald, Mary McCormick-Barger, Karl Pool, Noe'l Smith-Jackson, and Eric Wyse. I. Huei Meznarich asked if there were any comments on the minutes from the March 18, 2014 meeting. No Focus Group members stated they

    16. ALS Communications Group

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Communications Group From left: Ashley White, Lori Tamura, Keri Troutman, and Carina Braun. The ALS Communications staff maintain the ALS Web site; write and edit all print...

    17. HASQARD Focus Group

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      deviations from a procedure or deviations from a published analytical method. Also, the language in this section of HASQARD uses the term "modification" and the Focus Group was...

    18. Photoelectrochemical Working Group

      Broader source: Energy.gov [DOE]

      The Photoelectrochemical Working Group meets regularly to review technical progress, develop synergies, and collaboratively develop common tools and processes for photoelectrochemical (PEC) water...

    19. HASQARD Focus Group

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      a. The action item related to organizing a working group to address the HASQARD language regarding independent assessments to ensure the language addresses all organizations...

    20. Annual Emergency Preparedness Grant Distributed

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      May 2, 2013 Annual Emergency Preparedness Grant Distributed The Emergency Preparedness Working Group (EPWG) recently came together to distribute approximately $415,000 in grant funding for emergency response resources. The funds, which were collected in 2012, are part of an annual U.S. Department of Energy (DOE) community grant program benefitting rural Nevada communities. "This year's grant is going toward responder equipment, personal protective clothing, like helmets and coats, and

    1. Exascale Hardware Architectures Working Group

      SciTech Connect (OSTI)

      Hemmert, S; Ang, J; Chiang, P; Carnes, B; Doerfler, D; Leininger, M; Dosanjh, S; Fields, P; Koch, K; Laros, J; Noe, J; Quinn, T; Torrellas, J; Vetter, J; Wampler, C; White, A

      2011-03-15

      The ASC Exascale Hardware Architecture working group is challenged to provide input on the following areas impacting the future use and usability of potential exascale computer systems: processor, memory, and interconnect architectures, as well as the power and resilience of these systems. Going forward, there are many challenging issues that will need to be addressed. First, power constraints in processor technologies will lead to steady increases in parallelism within a socket. Additionally, all cores may not be fully independent nor fully general purpose. Second, there is a clear trend toward less balanced machines, in terms of compute capability compared to memory and interconnect performance. In order to mitigate the memory issues, memory technologies will introduce 3D stacking, eventually moving on-socket and likely on-die, providing greatly increased bandwidth but unfortunately also likely providing smaller memory capacity per core. Off-socket memory, possibly in the form of non-volatile memory, will create a complex memory hierarchy. Third, communication energy will dominate the energy required to compute, such that interconnect power and bandwidth will have a significant impact. All of the above changes are driven by the need for greatly increased energy efficiency, as current technology will prove unsuitable for exascale, due to unsustainable power requirements of such a system. These changes will have the most significant impact on programming models and algorithms, but they will be felt across all layers of the machine. There is clear need to engage all ASC working groups in planning for how to deal with technological changes of this magnitude. The primary function of the Hardware Architecture Working Group is to facilitate codesign with hardware vendors to ensure future exascale platforms are capable of efficiently supporting the ASC applications, which in turn need to meet the mission needs of the NNSA Stockpile Stewardship Program. This issue is relatively immediate, as there is only a small window of opportunity to influence hardware design for 2018 machines. Given the short timeline a firm co-design methodology with vendors is of prime importance.

    2. Jason Hick, Storage Systems Group: NERSC User Group Storage Update

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      NERSC User Group Storage Update, February 26, 2014. The compute and storage systems in 2014: sponsored compute systems (Carver, PDSF, JGI, KBASE, HEP) connected over 8x FDR IB; /global/scratch 4 PB; /project 5 PB; /home 250 TB; HPSS archive with 45 PB stored, 240 PB capacity, and 40 years of community data at 48 GB/s; 2.2 PB local scratch at 70 GB/s; 6.4 PB local scratch at 140 GB/s; 80 GB/s Ethernet & IB fabric; science-friendly security; production monitoring; power efficiency; WAN connectivity of 2 x 10 Gb plus 1 x 100 Gb Science Data Network.

    3. Macro-Industrial Working Group: meeting 1

      Gasoline and Diesel Fuel Update (EIA)

      Macro/Industrial Working Group. Macroeconomic team: Kay Smith, Russ Tarver, Elizabeth Sendich, and Vipin Arora. Briefing on the Macroeconomic Reference Case for the Annual Energy Outlook 2015; Macro's FY2015 AEO initiatives. Kay Smith, AEO2015 Macroeconomic/Industrial Working Group, July 24, 2014. PLEASE DO NOT CITE OR DISTRIBUTE. Review incorporation of completed AEO macroeconomic initiatives: incorporation of 2009-based GDP; use of the 2007 supply matrix and its extension to 2012; the extension

    4. Bridging the Gap to 64-bit Computing

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Opteron and AMD64: A Commodity 64-bit x86 SOC. Fred Weber, Vice President and CTO, Computation Products Group, Advanced Micro Devices, 22 April 2003, Salishan HPC 2003. Opteron...

    5. TEC Working Group Topic Groups Rail Conference Call Summaries...

      Office of Environmental Management (EM)

      TEC Working Group Topic Groups Rail Conference Call Summaries, Rail Topic Group: May 17, 2007; January 16, 2007; ...

    6. Distributed processor allocation for launching applications in a massively connected processors complex

      DOE Patents [OSTI]

      Pedretti, Kevin (Goleta, CA)

      2008-11-18

      A compute processor allocator architecture for allocating compute processors to run applications in a multiple processor computing apparatus is distributed among a subset of processors within the computing apparatus. Each processor of the subset includes a compute processor allocator. The compute processor allocators can share a common database of information pertinent to compute processor allocation. A communication path permits retrieval of information from the database independently of the compute processor allocators.
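
      A toy sketch of the allocation idea only, not the patented implementation: a few allocator instances consult one shared database of free processors and claim blocks from it to launch applications. The class and method names are invented.

      import threading

      class SharedAllocationDB:
          """Common database of processor state consulted by every allocator."""
          def __init__(self, n_processors):
              self.free = set(range(n_processors))
              self.lock = threading.Lock()

          def claim(self, count):
              with self.lock:
                  if len(self.free) < count:
                      return None                          # not enough free processors
                  return [self.free.pop() for _ in range(count)]

          def release(self, procs):
              with self.lock:
                  self.free.update(procs)

      class Allocator:
          """One allocator instance; a real system would run one per service node."""
          def __init__(self, name, db):
              self.name, self.db = name, db

          def launch(self, app, nprocs):
              procs = self.db.claim(nprocs)
              if procs is None:
                  return f"{self.name}: cannot launch {app}, not enough free processors"
              return f"{self.name}: launched {app} on processors {sorted(procs)}"

      db = SharedAllocationDB(8)
      a1, a2 = Allocator("alloc-0", db), Allocator("alloc-1", db)
      print(a1.launch("app-A", 3))
      print(a2.launch("app-B", 4))
      print(a1.launch("app-C", 3))    # fails: only one processor remains free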

    7. Introduction to High Performance Computing Using GPUs

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      HPC Using GPUs Introduction to High Performance Computing Using GPUs July 11, 2013 NERSC, NVIDIA, and The Portland Group presented a one-day workshop "Introduction to High Performance Computing Using GPUs" on July 11, 2013 in Room 250 of Sutardja Dai Hall on the University of California, Berkeley, campus. Registration was free and open to all NERSC users; Berkeley Lab Researchers; UC students, faculty, and staff; and users of the Oak Ridge Leadership Computing Facility. This workshop

    8. System-wide power management control via clock distribution network

      DOE Patents [OSTI]

      Coteus, Paul W.; Gara, Alan; Gooding, Thomas M.; Haring, Rudolf A.; Kopcsay, Gerard V.; Liebsch, Thomas A.; Reed, Don D.

      2015-05-19

      An apparatus, method and computer program product for automatically controlling power dissipation of a parallel computing system that includes a plurality of processors. A computing device issues a command to the parallel computing system. A clock pulse-width modulator encodes the command in a system clock signal to be distributed to the plurality of processors. The plurality of processors in the parallel computing system receive the system clock signal including the encoded command, and adjusts power dissipation according to the encoded command.
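
      A schematic sketch of the encoding idea only: each clock cycle carries one command bit in its pulse width, and every node decodes the bit stream into a power-throttle setting. The duty-cycle values, the 4-bit command field, and the throttle interpretation are arbitrary assumptions.

      NOMINAL, WIDE = 0.50, 0.65      # duty cycles standing for bit 0 and bit 1

      def encode(bits):
          """Return per-cycle duty cycles carrying the command bits on the clock."""
          return [WIDE if b else NOMINAL for b in bits]

      def decode(duty_cycles, threshold=0.575):
          """Each node recovers the bit stream by thresholding the measured pulse width."""
          return [1 if d > threshold else 0 for d in duty_cycles]

      def apply_power_command(bits):
          """Interpret a 4-bit field as a throttle level from 0 (none) to 15 (maximum)."""
          level = int("".join(map(str, bits[:4])), 2)
          return f"set power throttle level to {level}"

      command = [1, 0, 1, 1]                       # throttle level 11
      clock_stream = encode(command)               # what the pulse-width modulator emits
      print(apply_power_command(decode(clock_stream)))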

    9. Trails Working Group

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Trails » Trails Working Group Trails Working Group Our mission is to inventory, map, and prepare historical reports on the many trails used at LANL. Contact Environmental Communication & Public Involvement P.O. Box 1663 MS M996 Los Alamos, NM 87545 (505) 667-0216 Email The LANL Trails Working Group inventories, maps, and prepares historical reports on the many trails used at LANL. Some of these trails are ancient pueblo footpaths that continue to be used for recreational hiking today. Some

    10. HASQARD Focus Group

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      2 The meeting was called to order by Huei Meznarich, HASQARD Focus Group Chair at 2:06 PM on June 12, 2012 in Conference Room 308 at 2420 Stevens. Those attending were: Huei Meznarich (Focus Group Chair), Cliff Watkins (Focus Group Secretary), Jeff Cheadle, Glen Clark, Shannan Johnson, Joan Kessner, Larry Markel, Karl Pool, Steve Smith, Noe'l Smith-Jackson, Chris Sutton, Cindy Taylor, Chris Thomson, Amanda Tuttle, Sam Vega, Rick Warriner and Eric Wyse. I. Huei Meznarich requested comments on the

    11. HASQARD Focus Group

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      6, 2013 The beginning of the meeting was delayed due to an unannounced loss of the conference room scheduled for the meeting. After securing another meeting location, the meeting was called to order by Huei Meznarich, HASQARD Focus Group Chair at 2:18 PM on April 16, 2013 in Conference Room 156 at 2420 Stevens. Those attending were: Huei Meznarich (Focus Group Chair), Cliff Watkins (Focus Group Secretary), Jeff Cheadle, Glen Clark, Joan Kessner, Larry Markel, Mary McCormick-Barger, Karl Pool,

    12. HASQARD Focus Group

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      January 28, 2014 The meeting was called to order by Huei Meznarich, HASQARD Focus Group Chair at 2:04 PM on January 28, 2014 in Conference Room 308 at 2420 Stevens. Those attending were: Huei Meznarich (Focus Group Chair), Cliff Watkins (Focus Group Secretary), Joe Archuleta, Glen Clark, Robert Elkins, Scot Fitzgerald, Joan Kessner, Mary McCormick-Barger, Karl Pool, Noe'l Smith-Jackson, Chris Sutton, Chris Thompson, Rich Weiss and Eric Wyse. I. Huei Meznarich asked if there were any comments on

    13. HASQARD Focus Group

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      5, 2014 The meeting was called to order by Huei Meznarich, HASQARD Focus Group Chair at 2:07 PM on February 25, 2014 in Conference Room 308 at 2420 Stevens. Those attending were: Huei Meznarich (Focus Group Chair), Cliff Watkins (Focus Group Secretary), Lynn Albin, Taffy Almeida, Joe Archuleta, Glen Clark, Robert Elkins, Scot Fitzgerald, Joan Kessner, Mary McCormick-Barger, Karl Pool, Noe'l Smith-Jackson, Chris Sutton, Chris Thompson, and Eric Wyse. I. Huei Meznarich asked if there were any

    14. HASQARD Focus Group

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      0, 2014 The meeting was called to order by Huei Meznarich, HASQARD Focus Group Chair at 2:05 PM on May 20, 2014 in Conference Room 308 at 2420 Stevens. Those attending were: Huei Meznarich (Focus Group Chair), Cliff Watkins (Focus Group Secretary), Lynn Albin, Taffy Almeida, Joe Archuleta, Glen Clark, Robert Elkins, Scot Fitzgerald, Shannan Johnson, Joan Kessner, Mary McCormick-Barger, Craig Perkins, Karl Pool, Noe'l Smith-Jackson, Chris Sutton, Chris Thompson and Eric Wyse. I. Acknowledging the

    15. HASQARD Focus Group

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      4 The meeting was called to order by Huei Meznarich, HASQARD Focus Group Chair at 2:07 PM on June 12, 2014 in Conference Room 308 at 2420 Stevens. Those attending were: Huei Meznarich (Focus Group Chair), Cliff Watkins (Focus Group Secretary), Joe Archuleta, Sara Champoux, Glen Clark, Jim Douglas, Robert Elkins, Scot Fitzgerald, Joan Kessner, Jan McCallum, Mary McCormick-Barger, Karl Pool, Noe'l Smith-Jackson, Rich Weiss and Eric Wyse. I. Acknowledging the presence of new and/or infrequent

    16. HASQARD Focus Group

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      7, 2014 The meeting was called to order by Huei Meznarich, HASQARD Focus Group Chair at 2:10 PM on June 17, 2014 in Conference Room 308 at 2420 Stevens. Those attending were: Huei Meznarich (Focus Group Chair), Cliff Watkins (Focus Group Secretary), Robert Elkins, Shannan Johnson, Joan Kessner, Jan McCallum, Craig Perkins, Karl Pool, Chris Sutton and Rich Weiss. I. Because of the short time since the last meeting, Huei Meznarich stated that the minutes from the June 12, 2014 meeting have not yet

    17. NERSC Users Group (NUG)

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      NERSC Users Group (NUG). Passwords & Off-Hours Status: 1-800-66-NERSC, option 1, or 510-486-6821. Account Support: https://nim.nersc.gov, accounts@nersc.gov, 1-800-66-NERSC, option 2, or 510-486-8612. Consulting: http://help.nersc.gov, consult@nersc.gov, 1-800-66-NERSC, option 3, or 510-486-8611. The NERSC Users' Group, NUG, welcomes participation from all

    18. Fermilab | Science at Fermilab | Computing | Grid Computing

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      In the early 2000s, members of Fermilab's Computing Division looked ahead to experiments like those at the Large Hadron Collider, which would collect more data than any computing ...

    19. InterGroup Protocols

      Energy Science and Technology Software Center (OSTI)

      2003-04-02

      Existing reliable ordered group communication protocols have been developed for local-area networks and do not in general scale well to a large number of nodes and wide-area networks. The InterGroup suite of protocols is a scalable group communication system that introduces an unusual approach to handling group membership, and supports a receiver-oriented selection of service. The protocols are intended for a wide-area network, with a large number of nodes, that has highly variable delays and a high message loss rate, such as the Internet. The levels of the message delivery service range from unreliable unordered to reliable timestamp ordered.
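
      To make the receiver-oriented selection of service concrete, here is a small sketch in which two receivers of the same group message stream choose different delivery guarantees; the two level names follow the abstract, while the classes, the faked transport, and the loss model are invented.

      import random
      from enum import Enum

      class Service(Enum):
          UNRELIABLE_UNORDERED = 1
          RELIABLE_TIMESTAMP_ORDERED = 2

      class Receiver:
          def __init__(self, name, service, loss_rate=0.3):
              self.name, self.service, self.loss = name, service, loss_rate
              self.buffer = []

          def on_message(self, timestamp, payload):
              if self.service is Service.UNRELIABLE_UNORDERED:
                  if random.random() > self.loss:          # occasional drops are acceptable
                      print(f"{self.name}: {payload}")
              else:
                  self.buffer.append((timestamp, payload)) # hold for ordered delivery

          def flush_ordered(self):
              for _, payload in sorted(self.buffer):       # deliver in timestamp order
                  print(f"{self.name}: {payload}")

      group = [Receiver("monitor", Service.UNRELIABLE_UNORDERED),
               Receiver("logger", Service.RELIABLE_TIMESTAMP_ORDERED)]
      for ts, msg in [(2, "b"), (1, "a"), (3, "c")]:       # messages arrive out of order
          for r in group:
              r.on_message(ts, msg)
      group[1].flush_ordered()                             # the logger prints a, b, c in order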

    20. Date Times Group Speakers

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Meetings - Spring 2014 Date Times Group Speakers Tues, 1-13 2:30-3:30pm Faculty Meeting Fri, 1-24 12:30-1:30pm Group Research Meeting Emmanuel Giannelis Fri, 1-31 12:30-1:30pm Student & Postdoc Mtg Apostolos Enotiadis; Nikki Ritzert & Megan Holtz Fri, 2-7 12:30-1:30pm Group Research Meeting CHESS Mon, 2-10 2:30-3:30pm Faculty Meeting Will Dichtel Fri, 2-14 12:30-1:30pm Student & Postdoc Mtg Frank DiSalvo Fri, 2-21 12:30-1:30pm Group Research Meeting Lynden Archer Fri, 2-28

    1. HASQARD Focus Group

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      at the September 21 meeting of the Focus Group, the concerns related to the current language in HASQARD Volume 1, Section 10.4, "Quality Systems" were discussed at the...

    2. HASQARD Focus Group

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Markel, Mary McCormick-Barger, Dave St. John, Steve Smith, Steve Trent and Eric Wyse. ... On January 31, the Secretary received a call from the QA Sub-Group Chair, Steve Smith. ...

    3. HASQARD Focus Group

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Elkins, Mary McCormick-Barger, Noe'l Smith-Jackson, Chris Sutton, Amanda Tuttle, Rick ... Noe'l Smith-Jackson stated that the HASQARD document is the work of the Focus Group not ...

    4. HASQARD Focus Group

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Markel, Huei Meznarich, Karl Pool, Noe'l Smith-Jackson, Andrew Stevens, Genesis Thomas, ... the radar of the DOE- HQ QA group. Noe'l Smith-Jackson commented that Ecology was always ...

    5. HASQARD Focus Group

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Group to review. Rich began his presentation by stating that he does not believe the language in Revision 3 works nor is it necessary anymore. The purpose of the Revision 3...

    6. HASQARD Focus Group

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      at the last Focus Group meeting to get together and see if an agreement on proposed language could be achieved that would satisfy CHPRC sampling personnel and WSCF laboratory...

    7. HASQARD Focus Group

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      the May 15 meeting, Rich Weiss sent an e-mail to the Focus Group to propose revised language for the last paragraph in Section 5.3 containing the sentence about measured...

    8. Tritium Focus Group

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      matters related to tritium. Contacts: Mike Rogers, (505) 665-2513; Chandra Savage Marsden, (505) 664-0183. The Tritium Focus Group consists of participants from member...

    9. Mira Computational Readiness Assessment | Argonne Leadership Computing

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Facility. Mira Computational Readiness Assessment: assess your project's computational readiness for Mira. A review of the following computational readiness points in relation to scaling, porting, I/O, memory

    10. Macro Industrial Working Group

      Gasoline and Diesel Fuel Update (EIA)

      September 29, 2014, Washington, DC. Working group presentation for discussion purposes; do not quote or cite, as results are subject to change. Industrial team preliminary results for AEO2015: overview. AEO2015 is a "Lite" year - the new ethane/propane pricing model is the only major update; major side cases released with the Reference case

    11. DOE STGWG Group

      Energy Savers [EERE]

      STGWG Group The State and Tribal Government Working Group (STGWG), formed in 1989, is one of the intergovernmental organizations with which the DOE EM office works; it meets twice yearly for updates on EM projects. It is composed of state legislators and tribal staff and leadership from states in proximity to DOE's environmental cleanup sites, including New York, South Carolina, Ohio, Washington, New Mexico, Idaho, California, Colorado, Georgia,

    12. ALS Communications Group

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Communications Group From left: Ashley White, Lori Tamura, Keri Troutman, and Carina Braun. The ALS Communications staff maintain the ALS Web site; write and edit all print and electronic publications for the ALS, including Science Highlights, Science Briefs, brochures, handouts, and the monthly newsletter ALSNews; and create educational and scientific outreach materials. In addition, members of the group organize bi-monthly Science Cafés, create conference and workshop Web sites and

    13. ALS Communications Group

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      ALS Communications Group From left: Ashley White, Lori Tamura, Keri Troutman, and Carina Braun. The ALS Communications staff maintain the ALS Web site; write and edit all print and electronic publications for the ALS, including Science Highlights, Science Briefs, brochures, handouts, and the monthly newsletter ALSNews; and create educational and scientific outreach materials. In addition, members of the group organize bi-monthly Science Cafés, create conference and workshop Web sites and

    14. ALS Communications Group

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      ALS Communications Group From left: Ashley White, Lori Tamura, Keri Troutman, and Carina Braun. The ALS Communications staff maintain the ALS Web site; write and edit all print and electronic publications for the ALS, including Science Highlights, Science Briefs, brochures, handouts, and the monthly newsletter ALSNews; and create educational and scientific outreach materials. In addition, members of the group organize bi-monthly Science Cafés, create conference and workshop Web sites and

    15. Presentation of the MERC work-flow for the computation of a 2D radial reflector in a PWR

      SciTech Connect (OSTI)

      Clerc, T.; Hebert, A.; Leroyer, H.; Argaud, J. P.; Poncot, A.; Bouriquet, B.

      2013-07-01

      This paper presents a work-flow for computing an equivalent 2D radial reflector in a pressurized water reactor (PWR) core, in agreement with a reference power distribution computed with the method of characteristics (MOC) of the lattice code APOLLO2. The Multi-modelling Equivalent Reflector Computation (MERC) work-flow is a coherent association of the lattice code APOLLO2 and the core code COCAGNE, structured around the ADAO (Assimilation de Donnees et Aide a l'Optimisation) module of the SALOME platform and based on data assimilation theory. This study leads to the computation of equivalent few-group reflectors, which can be spatially heterogeneous and which have been compared, as a first validation step, to those obtained with the similar OPTEX methodology developed with the core code DONJON. Subsequently, the MERC work-flow is used to compute the most accurate reflector consistent with the R&D choices made at Electricite de France (EDF) for core modelling, in terms of the number of energy groups and simplified transport solvers. We observe significant reductions in the power discrepancies over the core when using equivalent reflectors obtained with the MERC work-flow. (authors)
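
      As a toy illustration of the calibration idea behind an equivalent reflector (and in no way the MERC work-flow itself, which couples APOLLO2, COCAGNE and the ADAO module), the snippet below tunes a single made-up reflector parameter so that a stand-in core model best reproduces a reference power distribution.

      import numpy as np

      positions = np.linspace(0.0, 1.0, 20)            # radial positions across the core

      def core_model(albedo):
          """Stand-in core solver: the power shape depends on a reflector 'albedo' parameter."""
          return np.cos(np.pi * positions * (1.0 - 0.3 * albedo)) ** 2

      # Reference power distribution (here synthesized with albedo = 0.7 plus noise).
      reference = core_model(0.7) + np.random.normal(0.0, 0.002, positions.size)

      # Scan the parameter and keep the value minimizing the power discrepancy.
      candidates = np.linspace(0.0, 1.0, 201)
      misfits = [np.sum((core_model(a) - reference) ** 2) for a in candidates]
      best = candidates[int(np.argmin(misfits))]
      print(f"equivalent reflector albedo ~ {best:.2f}")   # recovers a value close to 0.7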

    16. TEC Working Group Topic Groups Section 180(c) Meeting Summaries...

      Office of Environmental Management (EM)

      TEC Working Group Topic Groups Section 180(c) Meeting Summaries. Meeting summaries: Washington, DC TEC Meeting - 180(c) Group Summary - March 15, 2006. More...

    17. TEC Working Group Topic Groups Routing Meeting Summaries | Department...

      Office of Environmental Management (EM)

      TEC Working Group Topic Groups Routing Meeting Summaries. Meeting summaries: Atlanta TEC Meeting, Routing Topic Group Summary. More Documents &...

    18. TEC Working Group Topic Groups Rail Conference Call Summaries...

      Office of Environmental Management (EM)

      TEC Working Group Topic Groups Rail Conference Call Summaries. Conference call summaries: Rail Topic Group; Inspections Subgroup; Planning Subgroup...

    19. TEC Working Group Topic Groups Archives Protocols Meeting Summaries...

      Office of Environmental Management (EM)

      TEC Working Group Topic Groups Archives Protocols Meeting Summaries. Meeting summaries: Philadelphia TEC Meeting, Protocols Topic Group Summary -...

    20. TEC Working Group Topic Groups Rail Meeting Summaries | Department...

      Office of Environmental Management (EM)

      TEC Working Group Topic Groups Rail Meeting Summaries. Meeting summaries: Kansas City TEC Meeting, Rail Topic Group Summary - July 25, 2007; Atlanta TEC...

    1. Good Energy Group Plc previously Monkton Group Plc | Open Energy...

      Open Energy Info (EERE)

      Plc previously Monkton Group Plc Name: Good Energy Group Plc (previously Monkton Group Plc) Place: Chippenham, Wiltshire, United Kingdom Zip: SN15 1EE...

    2. Computing for Finance

      ScienceCinema (OSTI)

      None

      2011-10-06

      The finance sector is one of the driving forces for the use of distributed or Grid computing for business purposes. The speakers will review the state-of-the-art of high performance computing in the financial sector, and provide insight into how different types of Grid computing – from local clusters to global networks – are being applied to financial applications. They will also describe the use of software and techniques from physics, such as Monte Carlo simulations, in the financial world. There will be four talks of 20 min each. The talk abstracts and speaker bios are listed below. This will be followed by a Q&A panel session with the speakers. From 19:00 onwards there will be a networking cocktail for audience and speakers. This is an EGEE / CERN openlab event organized in collaboration with the regional business network rezonance.ch. A webcast of the event will be made available for subsequent viewing, along with powerpoint material presented by the speakers. Attendance is free and open to all. Registration is mandatory via www.rezonance.ch, including for CERN staff. 1. Overview of High Performance Computing in the Financial Industry. Michael Yoo, Managing Director, Head of the Technical Council, UBS. The presentation will describe the key business challenges driving the need for HPC solutions, describe the means by which those challenges are being addressed within UBS (such as GRID) as well as the limitations of some of these solutions, and assess some of the newer HPC technologies which may also play a role in the Financial Industry in the future. Speaker Bio: Michael originally joined the former Swiss Bank Corporation in 1994 in New York as a developer on a large data warehouse project. In 1996 he left SBC and took a role with Fidelity Investments in Boston. Unable to stay away for long, he returned to SBC in 1997 while working for Perot Systems in Singapore. Finally, in 1998 he formally returned to UBS in Stamford following the merger with SBC and has remained with UBS for the past 9 years. During his tenure at UBS, he has had a number of leadership roles within IT in development, support and architecture. In 2006 Michael relocated to Switzerland to take up his current role as head of the UBS IB Technical Council, responsible for the overall technology strategy and vision of the Investment Bank. One of Michael's key responsibilities is to manage the UBS High Performance Computing Research Lab and he has been involved in a number of initiatives in the HPC space. 2. Grid in the Commercial World. Fred Gedling, Chief Technology Officer EMEA and Senior Vice President Global Services, DataSynapse. Grid computing gets mentions in the press for community programs starting last decade with "Seti@Home". Government, national and supranational initiatives in grid receive some press. One of the IT industry's best-kept secrets is the use of grid computing by commercial organizations with spectacular results. Grid Computing and its evolution into Application Virtualization is discussed and how this is key to the next generation data center. Speaker Bio: Fred Gedling holds the joint roles of Chief Technology Officer for EMEA and Senior Vice President of Global Services at DataSynapse, a global provider of application virtualisation software.
      Based in London and working closely with organisations seeking to optimise their IT infrastructures, Fred offers unique insights into the technology of virtualisation as well as the methodology of establishing ROI and rapid deployment to the immediate advantage of the business. Fred has more than fifteen years' experience of enterprise middleware and high-performance infrastructures. Prior to DataSynapse he worked in high performance CRM middleware and was the CTO EMEA for New Era of Networks (NEON) during the rapid growth of Enterprise Application Integration. His 25-year career in technology also includes management positions at Goldman Sachs and Stratus Computer. Fred holds a First Class BSc (Hons) degree in Physics with Astrophysics from the University of Leeds and had the privilege of being a summer student at CERN. 3. Opportunities for gLite in finance and related industries. Adam Vile, Head of Grid, HPC and Technical Computing, Excelian Ltd. gLite, the Grid software developed by the EGEE project, has been exceedingly successful as an enabling infrastructure, and has been a massive success in bringing together scientific and technical communities to provide the compute power to address previously incomputable problems. Not so in the finance industry. In its current form gLite would be a business disabler. There are other middleware tools that solve the finance community's compute problems much better. Things are moving on, however. There are moves afoot in the open source community to evolve the technology to address other, more sophisticated needs such as utility and interactive computing. In this talk, I will describe how Excelian is providing Grid consultancy services for the finance community and how, through its relationship to the EGEE project, Excelian is helping to identify and exploit opportunities as the research and business worlds converge. Because of the strong third party presence in the finance industry, such opportunities are few and far between, but they are there, especially as we expand sideways into related verticals such as the smaller hedge funds and energy companies. This talk will give an overview of the barriers to adoption of gLite in the finance industry and highlight some of the opportunities offered in this and related industries as the ideas around Grid mature. Speaker Bio: Dr Adam Vile is a senior consultant and head of the Grid and HPC practice at Excelian, a consultancy that focuses on financial markets professional services. He has spent many years in investment banking, as a developer, project manager and architect in both front and back office. Before joining Excelian he was senior Grid and HPC architect at Barclays Capital. Prior to joining investment banking, Adam spent a number of years lecturing in IT and mathematics at a UK university and maintains links with academia through lectures, research and through validation and steering of postgraduate courses. He is a chartered mathematician and was the conference chair of the Institute of Mathematics and its Applications' first conference in computational Finance. 4. From Monte Carlo to Wall Street. Daniel Egloff, Head of Financial Engineering Computing Unit, Zürich Cantonal Bank. High performance computing techniques provide new means to solve computationally hard problems in the financial service industry. First I consider Monte Carlo simulation and illustrate how it can be used to implement a sophisticated credit risk management and economic capital framework.
      From an HPC perspective, basic Monte Carlo simulation is embarrassingly parallel and can be implemented efficiently on distributed memory clusters. Additional difficulties arise for adaptive variance reduction schemes, if the information content in a sample is very small, and if the amount of simulated data becomes huge such that incremental processing algorithms are indispensable. We discuss the business value of an advanced credit risk quantification, which is particularly compelling these days. While Monte Carlo simulation is a very versatile tool, it is not always the preferred solution for the pricing of complex products like multi-asset options, structured products, or credit derivatives. As a second application I show how operator methods can be used to develop a pricing framework. The scalability of operator methods relies heavily on optimized dense matrix-matrix multiplications and requires specialized BLAS level-3 implementations provided by specialized FPGA or GPU boards. Speaker Bio: Daniel Egloff studied mathematics, theoretical physics, and computer science at the University of Zurich and the ETH Zurich. He holds a PhD in Mathematics from the University of Fribourg, Switzerland. After his PhD he started to work for a large Swiss insurance company in the area of asset and liability management. He continued his professional career in the consulting industry. At KPMG and Arthur Andersen he consulted international clients and implemented quantitative risk management solutions for financial institutions and insurance companies. In 2002 he joined Zurich Cantonal Bank. He was assigned to develop and implement credit portfolio risk and economic capital methodologies. He built up a competence center for high performance and cluster computing. Currently, Daniel Egloff is heading the Financial Computing unit in the ZKB Financial Engineering division. He and his team are engineering and operating high performance cluster applications for computationally intensive problems in financial risk management.
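
      The point in the last talk abstract that basic Monte Carlo simulation is embarrassingly parallel can be shown with a minimal sketch: independent batches of paths are priced on separate worker processes and the partial sums are combined at the end. The instrument (a European call under geometric Brownian motion) and all parameters are arbitrary.

      import math, random
      from multiprocessing import Pool

      S0, K, R, SIGMA, T = 100.0, 105.0, 0.02, 0.2, 1.0   # toy market data

      def batch_payoff(args):
          """Simulate one independent batch of terminal prices and sum the call payoffs."""
          n_paths, seed = args
          rng = random.Random(seed)
          total = 0.0
          for _ in range(n_paths):
              z = rng.gauss(0.0, 1.0)
              st = S0 * math.exp((R - 0.5 * SIGMA ** 2) * T + SIGMA * math.sqrt(T) * z)
              total += max(st - K, 0.0)
          return total

      if __name__ == "__main__":
          n_workers, paths_per_worker = 4, 250_000
          with Pool(n_workers) as pool:                   # embarrassingly parallel: no communication
              sums = pool.map(batch_payoff, [(paths_per_worker, s) for s in range(n_workers)])
          price = math.exp(-R * T) * sum(sums) / (n_workers * paths_per_worker)
          print(f"Monte Carlo call price ~ {price:.2f}")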

    3. Computing for Finance

      ScienceCinema (OSTI)

      None

      2011-10-06

      The finance sector is one of the driving forces for the use of distributed or Grid computing for business purposes. The speakers will review the state of the art of high-performance computing in the financial sector, and provide insight into how different types of Grid computing – from local clusters to global networks – are being applied to financial applications. They will also describe the use of software and techniques from physics, such as Monte Carlo simulations, in the financial world. There will be four talks of 20 minutes each. The talk abstracts and speaker bios are listed below. This will be followed by a Q&A panel session with the speakers. From 19:00 onwards there will be a networking cocktail for audience and speakers. This is an EGEE / CERN openlab event organized in collaboration with the regional business network rezonance.ch. A webcast of the event will be made available for subsequent viewing, along with PowerPoint material presented by the speakers. Attendance is free and open to all. Registration is mandatory via www.rezonance.ch, including for CERN staff. 1. Overview of High Performance Computing in the Financial Industry Michael Yoo, Managing Director, Head of the Technical Council, UBS The presentation will describe the key business challenges driving the need for HPC solutions, describe the means by which those challenges are being addressed within UBS (such as Grid) as well as the limitations of some of these solutions, and assess some of the newer HPC technologies which may also play a role in the financial industry in the future. Speaker Bio: Michael originally joined the former Swiss Bank Corporation in 1994 in New York as a developer on a large data warehouse project. In 1996 he left SBC and took a role with Fidelity Investments in Boston. Unable to stay away for long, he returned to SBC in 1997 while working for Perot Systems in Singapore. Finally, in 1998 he formally returned to UBS in Stamford following the merger with SBC and has remained with UBS for the past 9 years. During his tenure at UBS, he has had a number of leadership roles within IT in development, support and architecture. In 2006 Michael relocated to Switzerland to take up his current role as head of the UBS IB Technical Council, responsible for the overall technology strategy and vision of the Investment Bank. One of Michael's key responsibilities is to manage the UBS High Performance Computing Research Lab, and he has been involved in a number of initiatives in the HPC space. 2. Grid in the Commercial World Fred Gedling, Chief Technology Officer EMEA and Senior Vice President Global Services, DataSynapse Grid computing gets mentions in the press for community programs, starting last decade with "SETI@home". Government, national and supranational initiatives in grid receive some press. One of the IT industry's best-kept secrets is the use of grid computing by commercial organizations with spectacular results. Grid computing and its evolution into application virtualization are discussed, along with how this is key to the next-generation data center. Speaker Bio: Fred Gedling holds the joint roles of Chief Technology Officer for EMEA and Senior Vice President of Global Services at DataSynapse, a global provider of application virtualisation software. 
Based in London and working closely with organisations seeking to optimise their IT infrastructures, Fred offers unique insights into the technology of virtualisation as well as the methodology of establishing ROI and rapid deployment to the immediate advantage of the business. Fred has more than fifteen years' experience of enterprise middleware and high-performance infrastructures. Prior to DataSynapse he worked in high-performance CRM middleware and was the CTO EMEA for New Era of Networks (NEON) during the rapid growth of Enterprise Application Integration. His 25-year career in technology also includes management positions at Goldman Sachs and Stratus Computer. Fred holds a First Class BSc (Hons) degree in Physics with Astrophysics from the University of Leeds and had the privilege of being a summer student at CERN. 3. Opportunities for gLite in finance and related industries Adam Vile, Head of Grid, HPC and Technical Computing, Excelian Ltd. gLite, the Grid software developed by the EGEE project, has been exceedingly successful as an enabling infrastructure, and has been a massive success in bringing together scientific and technical communities to provide the compute power to address previously incomputable problems. Not so in the finance industry. In its current form gLite would be a business disabler. There are other middleware tools that solve the finance community's compute problems much better. Things are moving on, however. There are moves afoot in the open-source community to evolve the technology to address other, more sophisticated needs such as utility and interactive computing. In this talk, I will describe how Excelian is providing Grid consultancy services for the finance community and how, through its relationship to the EGEE project, Excelian is helping to identify and exploit opportunities as the research and business worlds converge. Because of the strong third-party presence in the finance industry, such opportunities are few and far between, but they are there, especially as we expand sideways into related verticals such as the smaller hedge funds and energy companies. This talk will give an overview of the barriers to adoption of gLite in the finance industry and highlight some of the opportunities offered in this and related industries as the ideas around Grid mature. Speaker Bio: Dr Adam Vile is a senior consultant and head of the Grid and HPC practice at Excelian, a consultancy that focuses on financial markets professional services. He has spent many years in investment banking, as a developer, project manager and architect in both front and back office. Before joining Excelian he was senior Grid and HPC architect at Barclays Capital. Prior to joining investment banking, Adam spent a number of years lecturing in IT and mathematics at a UK university and maintains links with academia through lectures, research, and the validation and steering of postgraduate courses. He is a chartered mathematician and was the conference chair of the Institute of Mathematics and its Applications' first conference on computational finance. 4. From Monte Carlo to Wall Street Daniel Egloff, Head of Financial Engineering Computing Unit, Zürich Cantonal Bank High-performance computing techniques provide new means to solve computationally hard problems in the financial services industry. First I consider Monte Carlo simulation and illustrate how it can be used to implement a sophisticated credit risk management and economic capital framework. 
From an HPC perspective, basic Monte Carlo simulation is embarrassingly parallel and can be implemented efficiently on distributed-memory clusters. Additional difficulties arise for adaptive variance reduction schemes, if the information content in a sample is very small, and if the amount of simulated data becomes huge such that incremental processing algorithms are indispensable. We discuss the business value of advanced credit risk quantification, which is particularly compelling these days. While Monte Carlo simulation is a very versatile tool, it is not always the preferred solution for the pricing of complex products like multi-asset options, structured products, or credit derivatives. As a second application I show how operator methods can be used to develop a pricing framework. The scalability of operator methods relies heavily on optimized dense matrix-matrix multiplications and requires specialized BLAS level-3 implementations, for example on FPGA or GPU boards. Speaker Bio: Daniel Egloff studied mathematics, theoretical physics, and computer science at the University of Zurich and the ETH Zurich. He holds a PhD in Mathematics from the University of Fribourg, Switzerland. After his PhD he started to work for a large Swiss insurance company in the area of asset and liability management. He continued his professional career in the consulting industry. At KPMG and Arthur Andersen he advised international clients and implemented quantitative risk management solutions for financial institutions and insurance companies. In 2002 he joined Zurich Cantonal Bank. He was assigned to develop and implement credit portfolio risk and economic capital methodologies. He built up a competence center for high-performance and cluster computing. Currently, Daniel Egloff is heading the Financial Computing unit in the ZKB Financial Engineering division. He and his team engineer and operate high-performance cluster applications for computationally intensive problems in financial risk management.
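To make the operator-method remark above concrete, here is a minimal, hypothetical sketch (not taken from the talk): stepping a whole book of instruments backward through a one-step operator is a chain of dense matrix-matrix products, the BLAS level-3 (GEMM) workload that FPGA or GPU boards accelerate. The operator, state grid, discount factor, and payoffs below are invented stand-ins, not a calibrated model.

```python
import numpy as np

# Hypothetical illustration only: operator-based pricing reduced to dense
# matrix-matrix products (GEMM). NumPy dispatches '@' to the installed BLAS.
rng = np.random.default_rng(0)
n_states, n_instruments, n_steps, discount = 400, 64, 250, 0.9998

A = np.abs(rng.standard_normal((n_states, n_states)))
A /= A.sum(axis=1, keepdims=True)                      # row-stochastic one-step operator (assumed)
grid = np.linspace(0.0, 2.0, n_states)                 # state grid (assumed)
strikes = np.linspace(0.5, 1.5, n_instruments)
V = np.maximum(grid[:, None] - strikes[None, :], 0.0)  # terminal payoffs, one column per instrument

for _ in range(n_steps):
    # Each backward step is one dense GEMM: (n_states x n_states) @ (n_states x n_instruments).
    V = discount * (A @ V)

print("present values of the first five instruments:", V[n_states // 2, :5].round(4))
```

Pricing many instruments at once is what turns each backward step into a matrix-matrix rather than a matrix-vector product, which is why BLAS level-3 performance dominates the scaling.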

    4. Computing for Finance

      SciTech Connect (OSTI)

      2010-03-24

      The finance sector is one of the driving forces for the use of distributed or Grid computing for business purposes. The speakers will review the state of the art of high-performance computing in the financial sector, and provide insight into how different types of Grid computing – from local clusters to global networks – are being applied to financial applications. They will also describe the use of software and techniques from physics, such as Monte Carlo simulations, in the financial world. There will be four talks of 20 minutes each. The talk abstracts and speaker bios are listed below. This will be followed by a Q&A panel session with the speakers. From 19:00 onwards there will be a networking cocktail for audience and speakers. This is an EGEE / CERN openlab event organized in collaboration with the regional business network rezonance.ch. A webcast of the event will be made available for subsequent viewing, along with PowerPoint material presented by the speakers. Attendance is free and open to all. Registration is mandatory via www.rezonance.ch, including for CERN staff. 1. Overview of High Performance Computing in the Financial Industry Michael Yoo, Managing Director, Head of the Technical Council, UBS The presentation will describe the key business challenges driving the need for HPC solutions, describe the means by which those challenges are being addressed within UBS (such as Grid) as well as the limitations of some of these solutions, and assess some of the newer HPC technologies which may also play a role in the financial industry in the future. Speaker Bio: Michael originally joined the former Swiss Bank Corporation in 1994 in New York as a developer on a large data warehouse project. In 1996 he left SBC and took a role with Fidelity Investments in Boston. Unable to stay away for long, he returned to SBC in 1997 while working for Perot Systems in Singapore. Finally, in 1998 he formally returned to UBS in Stamford following the merger with SBC and has remained with UBS for the past 9 years. During his tenure at UBS, he has had a number of leadership roles within IT in development, support and architecture. In 2006 Michael relocated to Switzerland to take up his current role as head of the UBS IB Technical Council, responsible for the overall technology strategy and vision of the Investment Bank. One of Michael's key responsibilities is to manage the UBS High Performance Computing Research Lab, and he has been involved in a number of initiatives in the HPC space. 2. Grid in the Commercial World Fred Gedling, Chief Technology Officer EMEA and Senior Vice President Global Services, DataSynapse Grid computing gets mentions in the press for community programs, starting last decade with "SETI@home". Government, national and supranational initiatives in grid receive some press. One of the IT industry's best-kept secrets is the use of grid computing by commercial organizations with spectacular results. Grid computing and its evolution into application virtualization are discussed, along with how this is key to the next-generation data center. Speaker Bio: Fred Gedling holds the joint roles of Chief Technology Officer for EMEA and Senior Vice President of Global Services at DataSynapse, a global provider of application virtualisation software. 
Based in London and working closely with organisations seeking to optimise their IT infrastructures, Fred offers unique insights into the technology of virtualisation as well as the methodology of establishing ROI and rapid deployment to the immediate advantage of the business. Fred has more than fifteen years' experience of enterprise middleware and high-performance infrastructures. Prior to DataSynapse he worked in high-performance CRM middleware and was the CTO EMEA for New Era of Networks (NEON) during the rapid growth of Enterprise Application Integration. His 25-year career in technology also includes management positions at Goldman Sachs and Stratus Computer. Fred holds a First Class BSc (Hons) degree in Physics with Astrophysics from the University of Leeds and had the privilege of being a summer student at CERN. 3. Opportunities for gLite in finance and related industries Adam Vile, Head of Grid, HPC and Technical Computing, Excelian Ltd. gLite, the Grid software developed by the EGEE project, has been exceedingly successful as an enabling infrastructure, and has been a massive success in bringing together scientific and technical communities to provide the compute power to address previously incomputable problems. Not so in the finance industry. In its current form gLite would be a business disabler. There are other middleware tools that solve the finance community's compute problems much better. Things are moving on, however. There are moves afoot in the open-source community to evolve the technology to address other, more sophisticated needs such as utility and interactive computing. In this talk, I will describe how Excelian is providing Grid consultancy services for the finance community and how, through its relationship to the EGEE project, Excelian is helping to identify and exploit opportunities as the research and business worlds converge. Because of the strong third-party presence in the finance industry, such opportunities are few and far between, but they are there, especially as we expand sideways into related verticals such as the smaller hedge funds and energy companies. This talk will give an overview of the barriers to adoption of gLite in the finance industry and highlight some of the opportunities offered in this and related industries as the ideas around Grid mature. Speaker Bio: Dr Adam Vile is a senior consultant and head of the Grid and HPC practice at Excelian, a consultancy that focuses on financial markets professional services. He has spent many years in investment banking, as a developer, project manager and architect in both front and back office. Before joining Excelian he was senior Grid and HPC architect at Barclays Capital. Prior to joining investment banking, Adam spent a number of years lecturing in IT and mathematics at a UK university and maintains links with academia through lectures, research, and the validation and steering of postgraduate courses. He is a chartered mathematician and was the conference chair of the Institute of Mathematics and its Applications' first conference on computational finance. 4. From Monte Carlo to Wall Street Daniel Egloff, Head of Financial Engineering Computing Unit, Zürich Cantonal Bank High-performance computing techniques provide new means to solve computationally hard problems in the financial services industry. First I consider Monte Carlo simulation and illustrate how it can be used to implement a sophisticated credit risk management and economic capital framework. 
From an HPC perspective, basic Monte Carlo simulation is embarrassingly parallel and can be implemented efficiently on distributed-memory clusters. Additional difficulties arise for adaptive variance reduction schemes, if the information content in a sample is very small, and if the amount of simulated data becomes huge such that incremental processing algorithms are indispensable. We discuss the business value of advanced credit risk quantification, which is particularly compelling these days. While Monte Carlo simulation is a very versatile tool, it is not always the preferred solution for the pricing of complex products like multi-asset options, structured products, or credit derivatives. As a second application I show how operator methods can be used to develop a pricing framework. The scalability of operator methods relies heavily on optimized dense matrix-matrix multiplications and requires specialized BLAS level-3 implementations, for example on FPGA or GPU boards. Speaker Bio: Daniel Egloff studied mathematics, theoretical physics, and computer science at the University of Zurich and the ETH Zurich. He holds a PhD in Mathematics from the University of Fribourg, Switzerland. After his PhD he started to work for a large Swiss insurance company in the area of asset and liability management. He continued his professional career in the consulting industry. At KPMG and Arthur Andersen he advised international clients and implemented quantitative risk management solutions for financial institutions and insurance companies. In 2002 he joined Zurich Cantonal Bank. He was assigned to develop and implement credit portfolio risk and economic capital methodologies. He built up a competence center for high-performance and cluster computing. Currently, Daniel Egloff is heading the Financial Computing unit in the ZKB Financial Engineering division. He and his team engineer and operate high-performance cluster applications for computationally intensive problems in financial risk management.
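To illustrate the "embarrassingly parallel" remark above with a sketch (not the framework described in the talk), a Monte Carlo loss simulation can be split across workers that share only a seed and a scenario count; on a distributed-memory cluster the workers would be nodes rather than local processes. The portfolio parameters below are invented.

```python
import numpy as np
from multiprocessing import Pool

# Invented demo portfolio: independent obligors with a common default
# probability and a fixed loss given default.
N_OBLIGORS, DEFAULT_PROB, LGD = 1000, 0.02, 0.6

def simulate_losses(args):
    seed, n_scenarios = args
    rng = np.random.default_rng(seed)
    defaults = rng.random((n_scenarios, N_OBLIGORS)) < DEFAULT_PROB
    return defaults.sum(axis=1) * LGD              # portfolio loss per scenario

if __name__ == "__main__":
    n_workers, per_worker = 4, 25_000
    with Pool(n_workers) as pool:                  # each task is independent: embarrassingly parallel
        chunks = pool.map(simulate_losses, [(seed, per_worker) for seed in range(n_workers)])
    losses = np.concatenate(chunks)
    # Economic-capital style summary: tail quantile of the simulated loss distribution.
    print("mean loss:", losses.mean(), " 99.9% quantile:", np.quantile(losses, 0.999))
```

Because the scenarios are independent, adding workers scales the scenario count with essentially no communication, which is exactly what makes this workload a natural fit for distributed-memory clusters.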

    5. Transport Modeling Working Group Meeting Reports | Department of Energy

      Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

      Modeling Working Group Meeting Reports Transport Modeling Working Group Meeting Reports View reports from meetings of the Transport Modeling Working Group, which meets twice per year to exchange information, create synergies, share experimental and computational results, and collaboratively develop methodologies for and understanding of transport phenomena in polymer electrolyte fuel cell stacks. PDF icon Report of the 7th Meeting of the Transport Modeling Working Group, May 2014 PDF icon Report

    6. Sandia Energy - Computations

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Computations Home Transportation Energy Predictive Simulation of Engines Reacting Flow Applied Math & Software Computations...

    7. Illinois Wind Workers Group

      SciTech Connect (OSTI)

      David G. Loomis

      2012-05-28

      The Illinois Wind Working Group (IWWG) was founded in 2006 with about 15 members. It has grown to over 200 members today, representing all aspects of the wind industry across the State of Illinois. In 2008, the IWWG developed a strategic plan to give direction to the group and its activities. The strategic plan identifies ways to address critical market barriers to the further penetration of wind. The key to addressing these market barriers is public education and outreach. Since Illinois has a restructured electricity market, utilities no longer have strong control over the addition of new capacity within the state. Instead, market acceptance depends on landowners willing to lease land and county officials willing to site wind farms. Many times these groups are uninformed about the benefits of wind energy and unfamiliar with the process. Therefore, many of the project objectives focus on conferences, forums, databases and research that will allow these stakeholders to make well-educated decisions.

    8. Computer hardware fault administration

      DOE Patents [OSTI]

      Archer, Charles J. (Rochester, MN); Megerian, Mark G. (Rochester, MN); Ratterman, Joseph D. (Rochester, MN); Smith, Brian E. (Rochester, MN)

      2010-09-14

      Computer hardware fault administration carried out in a parallel computer, where the parallel computer includes a plurality of compute nodes. The compute nodes are coupled for data communications by at least two independent data communications networks, where each data communications network includes data communications links connected to the compute nodes. Typical embodiments carry out hardware fault administration by identifying a location of a defective link in the first data communications network of the parallel computer and routing communications data around the defective link through the second data communications network of the parallel computer.
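A hedged sketch of the failover idea in this abstract (illustrative logic only, not the patented method): when the primary-network link to a destination is known to be defective, the message is sent over the second, independent network instead. The node and link bookkeeping below is invented for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class ComputeNode:
    node_id: int
    defective_links: set = field(default_factory=set)  # neighbors unreachable on network 1

    def send(self, dest: int, payload: str) -> str:
        # Route around a defective link by falling back to the second network.
        network = 2 if dest in self.defective_links else 1
        return f"node {self.node_id} -> node {dest} via network {network}: {payload}"

node = ComputeNode(node_id=0, defective_links={3})
print(node.send(1, "halo exchange"))   # healthy link: primary network
print(node.send(3, "halo exchange"))   # defective link: rerouted through network 2
```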

    9. Molecular Science Computing | EMSL

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      computational and state-of-the-art experimental tools, providing a cross-disciplinary environment to further research. Additional Information Computing user policies Partners...

    10. Applied & Computational Math

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      & Computational Math - Sandia Energy Energy Search Icon Sandia Home Locations Contact Us ... Twitter Google + Vimeo GovDelivery SlideShare Applied & Computational Math HomeEnergy ...

    11. advanced simulation and computing

      National Nuclear Security Administration (NNSA)

      Each successive generation of computing system has provided greater computing power and energy efficiency.

      CTS-1 clusters will support NNSA's Life Extension Program and...

    12. NERSC Computer Security

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Security NERSC Computer Security NERSC computer security efforts are aimed at protecting NERSC systems and its users' intellectual property from unauthorized access or...

    13. ENN Group aka XinAo Group | Open Energy Information

      Open Energy Info (EERE)

      ENN Group aka XinAo Group Jump to: navigation, search Name: ENN Group (aka XinAo Group) Place: Langfang, Hebei Province, China Zip: 65001 Product: Chinese private industrial...

    14. Bell, group and tangle

      SciTech Connect (OSTI)

      Solomon, A. I.

      2010-03-15

      The 'Bell' of the title refers to bipartite Bell states, and their extensions to, for example, tripartite systems. The 'Group' of the title is the Braid Group in its various representations; while 'Tangle' refers to the property of entanglement which is present in both of these scenarios. The objective of this note is to explore the relation between Quantum Entanglement and Topological Links, and to show that the use of the language of entanglement in both cases is more than one of linguistic analogy.

    15. Upgraded Coal Interest Group

      SciTech Connect (OSTI)

      Evan Hughes

      2009-01-08

      The Upgraded Coal Interest Group (UCIG) is an EPRI 'users group' that focuses on clean, low-cost options for coal-based power generation. The UCIG covers topics that involve (1) pre-combustion processes, (2) co-firing systems and fuels, and (3) reburn using coal-derived or biomass-derived fuels. The UCIG mission is to preserve and expand the economic use of coal for energy. By reducing the fuel costs and environmental impacts of coal-fired power generation, existing units become more cost effective and thus new units utilizing advanced combustion technologies are more likely to be coal-fired.

    16. The Chaninik Wind Group

      Energy Savers [EERE]

      Chaninik Wind Group "It started as a small, simple idea... now we are headed to become the heartbeat of the region." William Igkurak, President. USDOE Tribal Energy Program, Annual Program Review, November 13-16, 2012, Denver, Colorado. Department of Energy Tribal Energy Chaninik Wind Group Villages: Kongiganak pop. 359, Kwigillingok pop. 388, Kipnuk pop. 644, Tuntutuliak pop. 370. On average, 24% of families are below the poverty line. Chaninik's goal is to become "The

    17. Greenko Group | Open Energy Information

      Open Energy Info (EERE)

      Greenko Group Jump to: navigation, search Name: Greenko Group Place: Hyderabad, India Zip: 500 033 Product: Focused on clean energy projects in Asia. References: Greenko Group1...

    18. Sinocome Group | Open Energy Information

      Open Energy Info (EERE)

      Group Jump to: navigation, search Name: Sinocome Group Place: Beijing Municipality, China Sector: Solar Product: A Chinese high tech group with business in solar PV sector...

    19. Valesul Group | Open Energy Information

      Open Energy Info (EERE)

      Valesul Group Jump to: navigation, search Name: Valesul Group Place: Brazil Product: Brazilian ethanol producer. References: Valesul Group1 This article is a stub. You can help...

    20. Angeleno Group | Open Energy Information

      Open Energy Info (EERE)

      Angeleno Group Jump to: navigation, search Logo: Angeleno Group Name: Angeleno Group Address: 2029 Century Park East, Suite 2980 Place: Los Angeles, California Zip: 90067 Region:...

    1. MTorres Group | Open Energy Information

      Open Energy Info (EERE)

      Group Jump to: navigation, search Name: MTorres Group Place: Murcia, Spain Zip: 30320 Sector: Wind energy Product: Wind turbine manufacturer References: MTorres Group1 This...

    2. Ferrari Group | Open Energy Information

      Open Energy Info (EERE)

      Ferrari Group Jump to: navigation, search Name: Ferrari Group Place: Sao Paulo, Brazil Product: Sao Paulo-based ethanol producer. References: Ferrari Group1 This article is a...

    3. Climate Modeling using High-Performance Computing

      SciTech Connect (OSTI)

      Mirin, A A

      2007-02-05

      The Center for Applied Scientific Computing (CASC) and the LLNL Climate and Carbon Science Group of Energy and Environment (E and E) are working together to improve predictions of future climate by applying the best available computational methods and computer resources to this problem. Over the last decade, researchers at the Lawrence Livermore National Laboratory (LLNL) have developed a number of climate models that provide state-of-the-art simulations on a wide variety of massively parallel computers. We are now developing and applying a second generation of high-performance climate models. Through the addition of relevant physical processes, we are developing an earth systems modeling capability as well.

    4. TEC Working Group Topic Groups Archives Communications Meeting Summaries |

      Office of Environmental Management (EM)

      Department of Energy Archives Communications Meeting Summaries TEC Working Group Topic Groups Archives Communications Meeting Summaries Meeting Summaries PDF icon Milwaukee TEC Meeting, Communications Topic Group Summary - July 1998 PDF icon Inaugural Group Meeting - April 1998 More Documents & Publications TEC Working Group Topic Groups Archives Communications Conference Call Summaries TEC Meeting Summaries - January 1997 TEC Working Group Topic Groups Tribal Conference Call Summa

    5. TEC Working Group Topic Groups Rail Conference Call Summaries Inspections

      Office of Environmental Management (EM)

      Subgroup | Department of Energy Summaries Inspections Subgroup TEC Working Group Topic Groups Rail Conference Call Summaries Inspections Subgroup Inspections Subgroup PDF icon April 6, 2006 PDF icon February 23, 2006 Draft PDF icon January 24, 2006 More Documents & Publications TEC Working Group Topic Groups Rail Conference Call Summaries Planning Subgroup TEC Working Group Topic Groups Rail Conference Call Summaries Tracking Subgroup TEC Working Group Topic Groups Rail Conference Call

    6. TEC Working Group Topic Groups Rail Key Documents Radiation Monitoring

      Office of Environmental Management (EM)

      Subgroup | Department of Energy Radiation Monitoring Subgroup TEC Working Group Topic Groups Rail Key Documents Radiation Monitoring Subgroup Radiation Monitoring Subgroup PDF icon Draft Work Plan - February 4, 2008 More Documents & Publications TEC Working Group Topic Groups Rail Meeting Summaries TEC Working Group Topic Groups Rail Conference Call Summaries Radiation Monitoring Subgroup TEC Working Group Topic Groups Rail Key Documents Intermodal Subgroup

    7. TRIDAC host computer functional specification

      SciTech Connect (OSTI)

      Hilbert, S.M.; Hunter, S.L.

      1983-08-23

      The purpose of this document is to outline the baseline functional requirements for the Triton Data Acquisition and Control (TRIDAC) Host Computer Subsystem. The requirements presented in this document are based upon systems that currently support both the SIS and the Uranium Separator Technology Groups in the AVLIS Program at the Lawrence Livermore National Laboratory and upon the specific demands associated with the extended safe operation of the SIS Triton Facility.

    8. Helms Research Group - Home

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Helms Group Home Research Members Publications Collaborations Connect Physical Organic Materials Chemistry Our research is devoted to understanding transport phenomena in mesostructured systems assembled from organic, organometallic, polymeric and nanocrystalline components. Enhanced capabilities relevant to energy, health, water, and food quality are enabled by our unique approaches to the modular design of their architectures and interfaces.

    9. Durability Working Group

      Broader source: Energy.gov [DOE]

      Description, technical targets, meeting archives, and contacts for the DOE Durability Working Group, which meets twice per year to exchange information, create synergies, and collaboratively develop both an understanding of and tools for studying degradation mechanisms of polymer electrolyte fuel cell stacks.

    10. Electrostatic Cooperativity of Hydroxyl Groups at Metal Oxide Surfaces

      SciTech Connect (OSTI)

      Boily, Jean F.; Lins, Roberto D.

      2009-09-24

      The O-H bond distribution of hydroxyl groups at the {110} goethite (α-FeOOH) surface was investigated by molecular dynamics. This distribution was strongly affected by electrostatic interactions with neighboring oxo and hydroxo groups. The effects of proton surface loading, simulated by emplacing two protons at different distances of separation, were diverse and generated several sets of O-H bond distributions. DFT calculations of a representative molecular cluster were also carried out to demonstrate the impact of these effects on the orientation of oxygen lone pairs in neighboring oxo groups. These effects should have strong repercussions on O-H stretching vibrations of metal oxide surfaces.

    11. Working Group Report: Lattice Field Theory

      SciTech Connect (OSTI)

      Blum, T.; et al.,

      2013-10-22

      This is the report of the Computing Frontier working group on Lattice Field Theory prepared for the proceedings of the 2013 Community Summer Study ("Snowmass"). We present the future computing needs and plans of the U.S. lattice gauge theory community and argue that continued support of the U.S. (and worldwide) lattice-QCD effort is essential to fully capitalize on the enormous investment in the high-energy physics experimental program. We first summarize the dramatic progress of numerical lattice-QCD simulations in the past decade, with some emphasis on calculations carried out under the auspices of the U.S. Lattice-QCD Collaboration, and describe a broad program of lattice-QCD calculations that will be relevant for future experiments at the intensity and energy frontiers. We then present details of the computational hardware and software resources needed to undertake these calculations.

    12. Focus Group | Department of Energy

      Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

      Outreach Forums » Focus Group and Work Group Activities » Focus Group Focus Group The Focus Group was formed in March 2007 to initiate dialogue and interface with labor unions, DOE Program Secretarial Offices, and stakeholders in areas of mutual interest and concern related to health, safety, security, and the environment. Meeting Documents Available for Download November 13, 2012 Work Group Leadership Meetings: Transition Elements This Focus Group Work Group telecom was held with the Work

    13. Cosmic Reionization On Computers | Argonne Leadership Computing...

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      its Cosmic Reionization On Computers (CROC) project, using the Adaptive Refinement Tree (ART) code as its main simulation tool. An important objective of this research is to make...

    14. Computing and Computational Sciences Directorate - Information...

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      cost-effective, state-of-the-art computing capabilities for research and development. ... communicates and manages strategy, policy and finance across the portfolio of IT assets. ...

    15. Evaluation of distributed ANSYS for high performance computing...

      Office of Scientific and Technical Information (OSTI)

      Research Org: Sandia National Laboratories Sponsoring Org: USDOE Country of Publication: United States Language: English Subject: 42 ENGINEERING; EVALUATION; PERFORMANCE; MEMBRANES...

    16. BILIWG Meeting: DOE Hydrogen Quality Working Group Update and Recent

      Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

      Progress (Presentation) | Department of Energy DOE Hydrogen Quality Working Group Update and Recent Progress (Presentation) BILIWG Meeting: DOE Hydrogen Quality Working Group Update and Recent Progress (Presentation) Presented at the 2007 Bio-Derived Liquids to Hydrogen Distributed Reforming Working Group held November 6, 2007 in Laurel, Maryland. PDF icon 12_anl_h2_quality_working_group_update.pdf More Documents & Publications Effects of Fuel and Air Impurities on PEM Fuel Cell

    17. Parallel computing works

      SciTech Connect (OSTI)

      Not Available

      1991-10-23

      An account of the Caltech Concurrent Computation Program (C{sup 3}P), a five-year project that focused on answering the question: "Can parallel computers be used to do large-scale scientific computations?" As the title indicates, the question is answered in the affirmative, by implementing numerous scientific applications on real parallel computers and doing computations that produced new scientific results. In the process of doing so, C{sup 3}P helped design and build several new computers, designed and implemented basic system software, developed algorithms for frequently used mathematical computations on massively parallel machines, devised performance models and measured the performance of many computers, and created a high-performance computing facility based exclusively on parallel computers. While the initial focus of C{sup 3}P was the hypercube architecture developed by C. Seitz, many of the methods developed and lessons learned have been applied successfully on other massively parallel architectures.

    18. TEC Working Group Topic Groups Tribal Meeting Summaries | Department of

      Energy Savers [EERE]

      Energy Meeting Summaries TEC Working Group Topic Groups Tribal Meeting Summaries Meeting Summaries PDF icon Kansas City TEC Meeting - Tribal Group Summary - July 25, 2007 PDF icon Atlanta TEC Meeting - Tribal Group Summary - March 6, 2007 PDF icon Green Bay TEC Meeting -- Tribal Group Summary - October 26, 2006 PDF icon Washington TEC Meeting - Tribal Topic Group Summary - March 14, 2006 PDF icon Pueblo TEC Meeting - Tribal Topic Group Summary, September 22, 2005 PDF icon Phoenix TEC Meeting

    19. TEC Working Group Topic Groups | Department of Energy

      Energy Savers [EERE]

      Topic Groups TEC Working Group Topic Groups TEC Topic Groups were formed in 1991 following an evaluation of the TEC program. Interested members, DOE and other federal agency staff meet to examine specific issues related to radioactive materials transportation. TEC Topic Groups enable a small number of participants to focus intensively on key issues at a level of detail that is unattainable during the TEC semiannual meetings due to time and group size constraints. Topic Groups meet individually

    20. Working Group Reports

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Working Group Reports Special Working Session on the Role of Buoy Observations in the Tropical Western Pacific Measurement Scheme J. Downing Marine Sciences Laboratory Sequim, Washington R. M. Reynolds Brookhaven National Laboratory Upton, New York Attending W. Clements (TWPPO) F. Barnes (TWPPO) T. Ackerman (TWP Site Scientist) M. Ivey (ARCS Manager) H. Church J. Curry J. del Corral B. DeRoos S. Kinne J. Mather J. Michalsky M. Miller P. Minnett B. Porch J. Sheaffer P. Webster M. Wesely K.

    1. Schuck Group - Home

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      News Archive Research Members Publications Contacts The Schuck Research Group Home News Archive Research Members Publications Contacts Tweet We focus on investigating and controlling light-matter interactions at the nanoscale, and using light to probe local environments. We are particularly interested in understanding the nano- and meso-scale interactions between localized states in materials, and relating these properties with material and device functionality. We do this by correlating

    2. ORGANIZATION/GROUP

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      HANFORD ADVISORY BOARD MEMBERSHIP Page 1 January 22, 2016 ORGANIZATION/GROUP PRIMARY MEMBER ALTERNATE LOCAL GOVERNMENT INTERESTS (7) Benton County Bob Suyama Larry Lockrem Benton-Franklin Council of Governments Dawn Wellman Tony Benegas City of Kennewick Bob Parks Dick Smith City of Pasco Rob Davis Vacant City of Richland Pam Larsen Vince Panesko City of West Richland Jerry Peltier Richard Bloom Grant & Franklin Counties Gary Garnant Mike Korenko LOCAL BUSINESS INTERESTS (1) Tri-Cities

    3. HASQARD Focus Group

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      7, 2012 The meeting was called to order by Huei Meznarich, HASQARD Focus Group Chair at 2:04 PM on January 17, 2012 in Conference Room 308 at 2420 Stevens. Those attending were: Huei Meznarich (Chair), Cliff Watkins (Secretary), Mike Barnes, Jeff Cheadle, Glen Clark, Scot Fitzgerald, Shannan Johnson, Joan Kessner, Larry Markel, Cindy Taylor, Chris Thompson, Amanda Tuttle, Sam Vega, Rich Weiss and Eric Wyse. I. Huei Meznarich requested comments on the minutes from the December 13, 2011 meeting.

    4. HASQARD Focus Group

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      1, 2012 The meeting was called to order by Huei Meznarich, HASQARD Focus Group Chair at 2:02 PM on February 21, 2012 in Conference Room 308 at 2420 Stevens. Those attending were: Huei Meznarich (Chair), Cliff Watkins (Secretary), Lynn Albin, Taffy Almeida, Courtney Blanchard, Glen Clark, Scot Fitzgerald, Shannan Johnson, Kris Kuhl-Klinger, Larry Markel, Karl Pool, Steve Smith, Cindy Taylor, Amanda Tuttle, Sam Vega, Rick Warriner, Rich Weiss and Eric Wyse. I. Huei Meznarich requested comments on

    5. Buildings Sector Working Group

      Gasoline and Diesel Fuel Update (EIA)

      July 22, 2013 AEO2014 Model Development For discussion purposes only Not for citation Overview Buildings Working Group Forrestal 2E-069 / July 22, 2013 * Residential projects - RECS update - Lighting model - Equipment, shell subsidies - ENERGY STAR benchmarking - Housing stock formation and decay * Commercial projects - Major end-use capacity factors - Hurdle rates - ENERGY STAR buildings * Both sectors - Consumer behavior workshop - Comparisons to STEO - AER to MER - Usual annual updates -

    6. Tritium Focus Group Meeting:

      Office of Environmental Management (EM)

      32 nd Tritium Focus Group Meeting: Tritium research activities in Safety and Tritium Applied Research (STAR) facility, Idaho National Laboratory Masashi Shimada Fusion Safety Program, Idaho National Laboratory April 25 th 2013, Germantown, MD STI #: INL/MIS-13-28975 Outlines 1. Motivation of tritium research activity in STAR facility 2. Unique capabilities in STAR facility 3. Research highlights from tritium retention in HFIR neutron- irradiated tungsten April 25th 2013 Germantown, MD STAR

    7. Detector Support Group

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      search Nuclear Physics Program Hall B Navigation DSG Home Staff Presentations Notes Detector Support Group Spotlight Archive Index Rotation test for the SVT detector EPICS Interlock Testing Bundling HV DC cables Hall D N2 tank level check Parameter check of Hall D solenoid Testing of SVT Hybrid Flex Circuit

    8. Environmental/Interest Groups

      Office of Legacy Management (LM)

      Environmental/Interest Groups Miamisburg Mound Community Improvement Corporation (MMCIC) Mike J. Grauwelman President P.O. Box 232 Miamisburg, OH 45343-0232 (937) 865-4462 Email: mikeg@mound.com Mound Reuse Committee See MMCIC Mound Environmental Safety and Health Sharon Cowdrey President 5491 Weidner Road Springboro, OH 45066 (937) 748-4757 No email address available Mound Museum Association Dr. Don Sullenger President Mound Advanced Technology Center 720 Mound Road Miamisburg, OH 45342-6714

    9. Computers-BSA.ppt

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Computers! Boy Scout Troop 405! What is a computer?! Is this a computer?! Charles Babbage: Father of the Computer! 1830s: Designed mechanical calculators to reduce human error. *Input device *Memory to store instructions and results *A processor *Output device! Vacuum Tube! Edison (1883) & Lee de Forest (1906) discovered that "vacuum tubes" could serve as electrical switches and amplifiers. A switch can be ON (1) or OFF (0). Electronic computers use Boolean (George Boole, 1850) logic

    10. Computational Fluid Dynamics

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      scour-tracc-cfd TRACC RESEARCH Computational Fluid Dynamics Computational Structural Mechanics Transportation Systems Modeling Computational Fluid Dynamics Overview of CFD: Video Clip with Audio Computational fluid dynamics (CFD) research uses mathematical and computational models of flowing fluids to describe and predict fluid response in problems of interest, such as the flow of air around a moving vehicle or the flow of water and sediment in a river. Coupled with appropriate and prototypical
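As a minimal, assumption-laden illustration of the kind of model CFD builds on (a sketch, not TRACC's production codes), the snippet below diffuses a scalar on a 1D grid with an explicit finite-difference update; grid size, diffusivity, and time step are arbitrary demo values.

```python
import numpy as np

nx, nt = 101, 500
dx, dt, nu = 1.0 / (nx - 1), 1e-4, 0.1   # nu*dt/dx**2 = 0.1, within the explicit stability limit of 0.5
u = np.zeros(nx)
u[nx // 2] = 1.0                          # initial "hot spot" in the middle of the domain

for _ in range(nt):
    # Explicit update of the interior points; boundaries stay at zero (Dirichlet).
    u[1:-1] += nu * dt / dx**2 * (u[2:] - 2.0 * u[1:-1] + u[:-2])

print("peak value after diffusion:", round(float(u.max()), 4))
```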

    11. Computing for Finance

      ScienceCinema (OSTI)

      None

      2011-10-06

      The finance sector is one of the driving forces for the use of distributed or Grid computing for business purposes. The speakers will review the state of the art of high-performance computing in the financial sector, and provide insight into how different types of Grid computing – from local clusters to global networks – are being applied to financial applications. They will also describe the use of software and techniques from physics, such as Monte Carlo simulations, in the financial world. There will be four talks of 20 minutes each. The talk abstracts and speaker bios are listed below. This will be followed by a Q&A panel session with the speakers. From 19:00 onwards there will be a networking cocktail for audience and speakers. This is an EGEE / CERN openlab event organized in collaboration with the regional business network rezonance.ch. A webcast of the event will be made available for subsequent viewing, along with PowerPoint material presented by the speakers. Attendance is free and open to all. Registration is mandatory via www.rezonance.ch, including for CERN staff. 1. Overview of High Performance Computing in the Financial Industry Michael Yoo, Managing Director, Head of the Technical Council, UBS The presentation will describe the key business challenges driving the need for HPC solutions, describe the means by which those challenges are being addressed within UBS (such as Grid) as well as the limitations of some of these solutions, and assess some of the newer HPC technologies which may also play a role in the financial industry in the future. Speaker Bio: Michael originally joined the former Swiss Bank Corporation in 1994 in New York as a developer on a large data warehouse project. In 1996 he left SBC and took a role with Fidelity Investments in Boston. Unable to stay away for long, he returned to SBC in 1997 while working for Perot Systems in Singapore. Finally, in 1998 he formally returned to UBS in Stamford following the merger with SBC and has remained with UBS for the past 9 years. During his tenure at UBS, he has had a number of leadership roles within IT in development, support and architecture. In 2006 Michael relocated to Switzerland to take up his current role as head of the UBS IB Technical Council, responsible for the overall technology strategy and vision of the Investment Bank. One of Michael's key responsibilities is to manage the UBS High Performance Computing Research Lab, and he has been involved in a number of initiatives in the HPC space. 2. Grid in the Commercial World Fred Gedling, Chief Technology Officer EMEA and Senior Vice President Global Services, DataSynapse Grid computing gets mentions in the press for community programs, starting last decade with "SETI@home". Government, national and supranational initiatives in grid receive some press. One of the IT industry's best-kept secrets is the use of grid computing by commercial organizations with spectacular results. Grid computing and its evolution into application virtualization are discussed, along with how this is key to the next-generation data center. Speaker Bio: Fred Gedling holds the joint roles of Chief Technology Officer for EMEA and Senior Vice President of Global Services at DataSynapse, a global provider of application virtualisation software. 
Based in London and working closely with organisations seeking to optimise their IT infrastructures, Fred offers unique insights into the technology of virtualisation as well as the methodology of establishing ROI and rapid deployment to the immediate advantage of the business. Fred has more than fifteen years' experience of enterprise middleware and high-performance infrastructures. Prior to DataSynapse he worked in high-performance CRM middleware and was the CTO EMEA for New Era of Networks (NEON) during the rapid growth of Enterprise Application Integration. His 25-year career in technology also includes management positions at Goldman Sachs and Stratus Computer. Fred holds a First Class BSc (Hons) degree in Physics with Astrophysics from the University of Leeds and had the privilege o

    12. EIA - Coal Distribution

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Annual Coal Distribution Report > Annual Coal Distribution Archives Annual Coal Distribution Archive Release Date: February 17, 2011 Next Release Date: December 2011 Domestic coal distribution by origin State, destination State, consumer category, method of transportation; foreign coal distribution by major coal-exporting state and method of transportation; and domestic and foreign coal distribution by origin state. Year Domestic and foreign distribution of U.S. coal by State of origin

    13. Theory & Computation > Research > The Energy Materials Center...

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Theory & Computation In This Section Computation & Simulation Theory & Computation Computation & Simulation...

    14. Distributed Bio-Oil Reforming

      Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

      Distributed Bio-Oil Reforming R. Evans, S. Czernik, R. French, M. Ratcliff (National Renewable Energy Laboratory); J. Marda, A. M. Dean (Colorado School of Mines). Bio-Derived Liquids Distributed Reforming Working Group Meeting, HFC&IT Program, Baltimore, MD, October 24, 2006. Gasification (partial oxidation), biomass to syngas: CH1.46O0.67 + 0.16 O2 → CO + 0.73 H2. Water-gas shift: CO + H2O → CO2 + H2. Overall, biomass to hydrogen: CH1.46O0.67 + 0.16 O2 + H2O → CO2 + 1.73 H2 (14.3% yield). Practical yields

    15. TEC Working Group Topic Groups Archives | Department of Energy

      Office of Environmental Management (EM)

      Archives TEC Working Group Topic Groups Archives The following Topic Groups are no longer active; however, related documents and notes for these archived Topic Groups are available through the following links: Communicatons Consolidated Grant Topic Group Training - Medical Training Protocols Route Identificaiton Process Mechanics of Funding and Technical Assistance

    16. Woo-Sun Yang! NERSC User Services Group! NUG Training!

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Group NUG Training February 3, 2014 Debugging Tools Debuggers on NERSC machines * Parallel debuggers with a graphical user interface - DDT (Distributed Debugging Tool)...

    17. Traffic information computing platform for big data

      SciTech Connect (OSTI)

      Duan, Zongtao; Li, Ying; Zheng, Xibin; Liu, Yan; Dai, Jiting; Kang, Jun

      2014-10-06

      The big data environment creates the data conditions for improving the quality of traffic information services. The target of this article is to construct a traffic information computing platform for the big data environment. Through in-depth analysis of the characteristics of big data and traffic information services, a distributed traffic atomic information computing platform architecture is proposed. Under the big data environment, this type of traffic atomic information computing architecture helps to guarantee traffic safety and efficient operation, and enables more intelligent and personalized traffic information services for users.

    18. HASQARD Focus Group

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      1 The meeting was called to order by Huei Meznarich, HASQARD Focus Group Chair at 2:04 PM on December 13, 2011 in Conference Room 126 at 2420 Stevens. Those attending were: Huei Meznarich (Chair), Cliff Watkins (Secretary), Lynn Albin, Heather Anastos, Jeff Cheadle, Glen Clark, Scot Fitzgerald, Shannan Johnson, Kris Kuhl-Klinger, Joan Kessner, Karl Pool, Dave St. John, Noe'l Smith-Jackson, Chris Sutton, Cindy Taylor, Amanda Tuttle, Rich Weiss and Eric Wyse. I. Huei Meznarich requested comments

    19. HASQARD Focus Group

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      7, 2012 The meeting was called to order by Huei Meznarich, HASQARD Focus Group Chair at 2:06 PM on April 17, 2012 in Conference Room 308 at 2420 Stevens. Those attending were: Huei Meznarich (Chair), Cliff Watkins (Secretary), Lynn Albin, Taffy Almeida, Jeff Cheadle, Glen Clark, Scot Fitzgerald, Kris Kuhl-Klinger, Joan Kessner, Larry Markel, Noe'l Smith-Jackson, Cindy Taylor, Amanda Tuttle, Rich Weiss and Eric Wyse. I. Huei Meznarich requested comments on the minutes from the March 20, 2012

    20. Distributed resource management: garbage collection

      SciTech Connect (OSTI)

      Bagherzadeh, N.

      1987-01-01

      In recent years, there has been great interest in designing high-performance distributed symbolic-processing computers. These architectures have special needs for resource management and dynamic reclamation of unused memory cells and objects. The memory management, or garbage-collection, aspects of these architectures are studied. Also introduced is a synchronous distributed algorithm for garbage collection. A special data structure is defined to handle the distributed nature of the problem. The author formally expresses the algorithm and shows the results of a synchronous garbage-collection simulation and its effect on interconnection-network message traffic. He presents an asynchronous distributed garbage-collection algorithm to handle resource management for a system that does not require a global synchronization mechanism. The distributed data structure is modified to include the asynchronous aspects of the algorithm. This method is extended to a multiple-mutator scheme, and the problem of having several processors share a portion of a cyclic graph is discussed. Two models for the analytical study of the garbage-collection algorithms discussed are provided.
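To fix ideas (a toy sketch only, and not the data structure or algorithm introduced in this report), distributed mark-and-sweep can be pictured as rounds of mark propagation between nodes followed by a local sweep on each node; the heap layout and roots below are invented.

```python
# object id -> (owning node, referenced object ids); roots are per node (invented example)
heap = {
    "a": (0, ["b"]), "b": (1, ["c"]), "c": (0, []),   # reachable chain a -> b -> c across nodes
    "d": (1, ["e"]), "e": (0, ["d"]),                 # unreachable cross-node cycle d <-> e
}
roots = {0: ["a"], 1: []}

marked = set()
frontier = {obj for objs in roots.values() for obj in objs}
while frontier:                                   # one synchronous round of mark messages
    marked |= frontier
    frontier = {ref for obj in frontier for ref in heap[obj][1]} - marked

for node in (0, 1):                               # each node sweeps its own unmarked objects
    garbage = [o for o, (owner, _) in heap.items() if owner == node and o not in marked]
    print(f"node {node} sweeps: {garbage}")
```

Note that the unreachable cycle spanning two nodes is still collected, which is the kind of case that makes distributed schemes harder than per-node reference counting.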

    1. Former NERSC Consultant Mentors Math, Computer Science Students

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Former NERSC Consultant Mentors Math, Computer Science Students Former NERSC Consultant Mentors Math, Computer Science Students March 10, 2015 Frank Hale, a former consultant in NERSC's User Services Group (USG) who currently tutors math at Diablo Valley College (DVC) in Pleasant Hill, CA, recently brought a group of computer science enthusiasts from the college to NERSC for a tour. Hale, the first person hired into the USG when NERSC relocated from Lawrence Livermore National Laboratory to

    2. TEC Working Group Topic Groups Archives Communications Conference Call

      Office of Environmental Management (EM)

      Summaries | Department of Energy Communications Conference Call Summaries TEC Working Group Topic Groups Archives Communications Conference Call Summaries Conference Call Summaries PDF icon Conference Call Summary April 2000 PDF icon Conference Call Summary February 1999 PDF icon Conference Call Summary November 1998 More Documents & Publications TEC Working Group Topic Groups Archives Communications Meeting Summaries TEC Working Group Topic Groups Tribal Conference Call Summaries TEC

    3. Polymorphous computing fabric

      DOE Patents [OSTI]

      Wolinski, Christophe Czeslaw (Los Alamos, NM); Gokhale, Maya B. (Los Alamos, NM); McCabe, Kevin Peter (Los Alamos, NM)

      2011-01-18

      Fabric-based computing systems and methods are disclosed. A fabric-based computing system can include a polymorphous computing fabric that can be customized on a per application basis and a host processor in communication with said polymorphous computing fabric. The polymorphous computing fabric includes a cellular architecture that can be highly parameterized to enable a customized synthesis of fabric instances for a variety of enhanced application performances thereof. A global memory concept can also be included that provides the host processor random access to all variables and instructions associated with the polymorphous computing fabric.

    4. The Magellan Final Report on Cloud Computing

      SciTech Connect (OSTI)

      Coghlan, Susan; Yelick, Katherine

      2011-12-21

      The goal of Magellan, a project funded through the U.S. Department of Energy (DOE) Office of Advanced Scientific Computing Research (ASCR), was to investigate the potential role of cloud computing in addressing the computing needs of the DOE Office of Science (SC), particularly related to serving the needs of mid-range computing and future data-intensive computing workloads. A set of research questions was formed to probe various aspects of cloud computing, including performance, usability, and cost. To address these questions, a distributed testbed infrastructure was deployed at the Argonne Leadership Computing Facility (ALCF) and the National Energy Research Scientific Computing Center (NERSC). The testbed was designed to be flexible and capable enough to explore a variety of computing models and hardware design points in order to understand the impact on various scientific applications. During the project, the testbed also served as a valuable resource to application scientists. Applications from a diverse set of projects such as MG-RAST (a metagenomics analysis server), the Joint Genome Institute, the STAR experiment at the Relativistic Heavy Ion Collider, and the Laser Interferometer Gravitational Wave Observatory (LIGO) were used by the Magellan project for benchmarking within the cloud, but the project teams were also able to accomplish important production science utilizing the Magellan cloud resources.

    5. Distributed Reforming of Renewable Liquids via Water Splitting...

      Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

      Distributed Reforming Working Group (BILIWG) Hydrogen Production Technical Team Research Review BILIWG Meeting: High Pressure Steam Reforming of Bio-Derived Liquids (Presentation)

    6. Bio-Derived Liquids to Hydrogen Distributed Reforming Targets

      Broader source: Energy.gov [DOE]

      Presentation by Arlene Anderson at the October 24, 2006 Bio-Derived Liquids to Hydrogen Distributed Reforming Working Group Kick-Off Meeting.

    7. CORRELATION BETWEEN GROUP LOCAL DENSITY AND GROUP LUMINOSITY

      SciTech Connect (OSTI)

      Deng Xinfa; Yu Guisheng

      2012-11-10

      In this study, we investigate the correlation between group local number density and the total luminosity of groups. From four volume-limited group catalogs, we conclude that groups with high luminosity exist preferentially in high-density regions, while groups with low luminosity are located preferentially in low-density regions, and that in a volume-limited group sample with absolute magnitude limit M{sub r} = -18, the correlation between group local number density and total group luminosity is the weakest. These results are basically consistent with the environmental dependence of galaxy luminosity.
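The qualitative trend reported here can be summarized with a rank correlation; the sketch below does so on mock data with an assumed weak power-law relation plus scatter (the actual catalogs are not used, and SciPy is assumed to be available).

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(42)
local_density = rng.lognormal(mean=0.0, sigma=0.5, size=2000)                  # mock group local densities
total_luminosity = 10.0 * local_density**0.3 * rng.lognormal(0.0, 0.2, 2000)   # assumed weak power law + scatter

# Spearman is rank-based, so it captures the monotonic trend without assuming linearity.
rho, pval = spearmanr(local_density, total_luminosity)
print(f"Spearman rank correlation: {rho:.2f} (p = {pval:.1e})")
```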

    8. TEC Working Group Topic Groups Rail Conference Call Summaries...

      Office of Environmental Management (EM)

      Summaries Inspections Subgroup TEC Working Group Topic Groups Rail Conference Call Summaries Inspections Subgroup Inspections Subgroup PDF icon April 6, 2006 PDF icon February 23,...

    9. TEC Working Group Topic Groups Routing Conference Call Summaries...

      Office of Environmental Management (EM)

      Routing Conference Call Summaries TEC Working Group Topic Groups Routing Conference Call Summaries CONFERENCE CALL SUMMARIES PDF icon January 31, 2008 PDF icon December 6, 2007 PDF...

    10. TEC Working Group Topic Groups Security Meeting Summaries | Department...

      Office of Environmental Management (EM)

      Meeting Summaries TEC Working Group Topic Groups Security Meeting Summaries Meeting Summaries PDF icon Green Bay STG Meeting Summary- September 14, 2006 PDF icon Washington STG...

    11. TEC Working Group Topic Groups Archives Mechanics of Funding...

      Office of Environmental Management (EM)

      Mechanics of Funding and Techical Assistance TEC Working Group Topic Groups Archives Mechanics of Funding and Techical Assistance Mechanics of Funding and Techical Assistance Items...

    12. TEC Working Group Topic Groups Rail Archived Documents | Department...

      Office of Environmental Management (EM)

      Archived Documents TEC Working Group Topic Groups Rail Archived Documents ARCHIVED DOCUMENTS PDF icon Inspections Summary Matrix PDF icon TEC Transportation Safety WIPP-PIG Rail...

    13. TEC Working Group Topic Groups Tribal Conference Call Summaries...

      Office of Environmental Management (EM)

      Conference Call Summaries TEC Working Group Topic Groups Tribal Conference Call Summaries Conference Call Summaries PDF icon March 12, 2008 PDF icon October 3, 2007 PDF icon...

    14. TEC Working Group Topic Groups Archives Communications Conference...

      Office of Environmental Management (EM)

      Communications Conference Call Summaries TEC Working Group Topic Groups Archives Communications Conference Call Summaries Conference Call Summaries PDF icon Conference Call Summary...

    15. TEC Working Group Topic Groups Archives Communications Meeting...

      Office of Environmental Management (EM)

      Archives Communications Meeting Summaries TEC Working Group Topic Groups Archives Communications Meeting Summaries Meeting Summaries PDF icon Milwaukee TEC Meeting, Communications...

    16. TEC Working Group Topic Groups Section 180(c) Key Documents ...

      Office of Environmental Management (EM)

      Key Documents TEC Working Group Topic Groups Section 180(c) Key Documents Key Documents Briefing Package for Section 180(c) Implementation - July 2005 PDF icon Executive Summary...

    17. TEC Working Group Topic Groups Security Conference Call Summaries...

      Office of Environmental Management (EM)

      Security Conference Call Summaries TEC Working Group Topic Groups Security Conference Call Summaries Conference Call Summaries PDF icon August 17, 2006 (Draft) PDF icon July 18,...

    18. TEC Working Group Topic Groups Rail Key Documents | Department...

      Office of Environmental Management (EM)

      Rail Key Documents TEC Working Group Topic Groups Rail Key Documents KEY DOCUMENTS Radiation Monitoring Subgroup Intermodal Subgroup Planning Subgroup PDF icon Current FRA State...

    19. TEC Working Group Topic Groups Rail Key Documents Intermodal...

      Office of Environmental Management (EM)

      Intermodal Subgroup TEC Working Group Topic Groups Rail Key Documents Intermodal Subgroup Intermodal Subgroup PDF icon Draft Work Plan More Documents & Publications TEC Working...

    20. TEC Working Group Topic Groups Rail Key Documents Radiation Monitoring...

      Office of Environmental Management (EM)

      Radiation Monitoring Subgroup TEC Working Group Topic Groups Rail Key Documents Radiation Monitoring Subgroup Radiation Monitoring Subgroup PDF icon Draft Work Plan - February 4,...

    1. ARM Cloud Properties Working Group: Meeting Logistics

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Cloud Properties WG Breakout Session 2008 ARM Science Team Meeting Mar. 10, 2008, Norfolk, VA Monday March 10, 2008 1500 to 1515: R. Hogan - A Proposal for ARM support of Cloudnet Activities 1515 to 1530: M. Jensen - Cloud Properties Value-Added Product Development 1530 to 1545: C. Long - Instrument Group Report 1545 to 1600: S. Matrosov - WSR-88D data for ARM science 1600 to 1615: Y. Zhao - A Bimodal Particle Distribution Assumption in Cirrus: Comparison of retrieval results with in situ

    2. Fermilab | Science at Fermilab | Computing | High-performance Computing

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Lattice QCD Farm at the Grid Computing Center at Fermilab. Computing High-performance Computing A workstation computer can perform billions of multiplication and addition operations each second. High-performance parallel computing becomes necessary when computations become too large or too long to complete on a single such machine. In parallel computing, computations are divided up so that many computers can work on the same problem at
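As a hedged, minimal example of dividing a computation across many machines (an illustration, not Fermilab's production workflow; mpi4py and an MPI installation are assumed), each rank below integrates its own slice of 4/(1+x^2) on [0,1] with the midpoint rule and the partial sums are reduced to an estimate of pi.

```python
from mpi4py import MPI   # assumes mpi4py and an MPI library; run e.g. `mpirun -n 4 python pi_midpoint.py`
import numpy as np

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

n = 10_000_000                                   # total midpoint samples (arbitrary)
x = (np.arange(rank, n, size) + 0.5) / n         # this rank's share of the midpoints
local = float(np.sum(4.0 / (1.0 + x * x)) / n)   # partial midpoint-rule sum

pi_estimate = comm.reduce(local, op=MPI.SUM, root=0)
if rank == 0:
    print("pi estimate:", pi_estimate)
```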

    3. Information Technology Advisory Group (iTAG) | The Ames Laboratory

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Committees Information Technology Advisory Group (iTAG) The Information Technology Advisory Group (iTAG) is a standing Ames Laboratory committee consisting of Ames Lab scientists and IT professionals working together to look at and advise the computing needs for researchers. iTAG Charter The committee consists of: Diane Den Adel (Information Services Representative) Terry Herrman (Engineering Services Group Representative) Nathan Humeston (Science and Technology Representative) Cynthia Jenks

    4. EIA -Quarterly Coal Distribution

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      - Coal Distribution Home > Coal> Quarterly Coal Distribution Back Issues Quarterly Coal Distribution Archives Release Date: March 9, 2016 Next Release Date: May 2016 The Quarterly Coal Distribution Report (QCDR) provides detailed quarterly data on U.S. domestic coal distribution by coal origin, coal destination, mode of transportation and consuming sector. All data are preliminary and superseded by the final Coal Distribution - Annual Report. Year/Quarters By origin State By destination

    5. Fall 2012 Working Groups

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      2 CSTEC Working Group Schedule Thrust I - selected Thursdays; MSE Conference Room (3062 HH Dow) October 11 Dylan Bayerl (Kioupakis group) 3:00-4:00pm November 1 Andy Martin (Millunchick group) 2:00-3:00pm December 13 Brian Roberts (Ku group) 2:00-3:00pm Thrust II - selected Thursdays, 3:30-4:30pm; MSE Conference Room (3062 HH Dow) September 27 Hang Chi (Uher group) October 18 Reddy group November 29 Gunho Kim (Pipe group) Thrust III - selected

    6. Working Group Report: Sensors

      SciTech Connect (OSTI)

      Artuso, M.; et al.,

      2013-10-18

      Sensors play a key role in detecting both charged particles and photons for all three frontiers in Particle Physics. The signals from an individual sensor that can be used include ionization deposited, phonons created, or light emitted from excitations of the material. The individual sensors are then typically arrayed for detection of individual particles or groups of particles. Mounting of new, ever higher performance experiments, often depend on advances in sensors in a range of performance characteristics. These performance metrics can include position resolution for passing particles, time resolution on particles impacting the sensor, and overall rate capabilities. In addition the feasible detector area and cost frequently provides a limit to what can be built and therefore is often another area where improvements are important. Finally, radiation tolerance is becoming a requirement in a broad array of devices. We present a status report on a broad category of sensors, including challenges for the future and work in progress to solve those challenges.

    7. Rowan Group | Open Energy Information

      Open Energy Info (EERE)

      Rowan Group Place: United Kingdom Product: ( Private family-controlled ) References: Rowan Group1 This article is a stub. You can help OpenEI by expanding it. Rowan Group is a...

    8. Tecate Group | Open Energy Information

      Open Energy Info (EERE)

      Tecate Group Jump to: navigation, search Name: Tecate Group Place: San Diego, California Zip: 92108-4400 Product: The Tecate Group is a global supplier of electronic components and...

    9. USJ Group | Open Energy Information

      Open Energy Info (EERE)

      USJ Group Jump to: navigation, search Name: USJ Group Place: São Paulo, São Paulo, Brazil Zip: 04534 000 Product: São Paulo-based ethanol producer. References: USJ Group1 This...

    10. ERIC Group | Open Energy Information

      Open Energy Info (EERE)

      ERIC Group Jump to: navigation, search Name: ERIC Group Place: Italy Product: Italian project developer of PV power plants. References: ERIC Group1 This article is a stub. You...

    11. About Industrial Distributed Energy

      Broader source: Energy.gov [DOE]

      The Advanced Manufacturing Office's (AMO's) Industrial Distributed Energy activities build on the success of predecessor DOE programs on distributed energy and combined heat and power (CHP) while...

    12. Distribution Grid Integration

      Broader source: Energy.gov [DOE]

      The DOE Systems Integration team funds distribution grid integration research and development (R&D) activities to address the technical issues that surround distribution grid planning,...

    13. Coal Distribution Database, 2006

      U.S. Energy Information Administration (EIA) Indexed Site

      Domestic Distribution of U.S. Coal by Origin State, Consumer, Destination and Method of Transportation, 2009 Final February 2011 2 Overview of 2009 Coal Distribution Tables...

    14. Computers in Commercial Buildings

      U.S. Energy Information Administration (EIA) Indexed Site

      Government-owned buildings of all types had, on average, more than one computer per person (1,104 computers per thousand employees). They also had a fairly high ratio of...

    15. Computers for Learning

      Broader source: Energy.gov [DOE]

      Through Executive Order 12999, the Computers for Learning Program was established to provide Federal agencies a quick and easy system for donating excess and surplus computer equipment to schools...

    16. Cognitive Computing for Security.

      SciTech Connect (OSTI)

      Debenedictis, Erik; Rothganger, Fredrick; Aimone, James Bradley; Marinella, Matthew; Evans, Brian Robert; Warrender, Christina E.; Mickel, Patrick

      2015-12-01

      Final report for Cognitive Computing for Security LDRD 165613. It reports on the development of hybrid of general purpose/ne uromorphic computer architecture, with an emphasis on potential implementation with memristors.

    17. Getting Computer Accounts

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Computer Accounts When you first arrive at the lab, you will be presented with lots of forms that must be read and signed in order to get an ID and computer access. You must ensure...

    18. Staff Directory | Argonne Leadership Computing Facility

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      About Overview History Staff Directory Our Teams User Advisory Council Careers Margaret Butler Fellowship Visiting Us Contact Us Staff Directory Yury Alekseev Yuri Alexeev Assistant Computational Scientist Catalyst 630-252-0157 yuri@alcf.anl.gov Bill Allcock Bill Allcock Manager, Advanced Integration Group Leadership, AIG 630-252-7573 allcock@anl.gov Ben Allen HPC Systems Administration Specialist Systems 630-252-0554 allen@alcf.anl.gov Ramesh Balakrishnan Computational Scientist Catalyst

    19. Advanced Scientific Computing Research

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Advanced Scientific Computing Research Advanced Scientific Computing Research Discovering, developing, and deploying computational and networking capabilities to analyze, model, simulate, and predict complex phenomena important to the Department of Energy. Get Expertise Pieter Swart (505) 665 9437 Email Pat McCormick (505) 665-0201 Email Dave Higdon (505) 667-2091 Email Fulfilling the potential of emerging computing systems and architectures beyond today's tools and techniques to deliver

    20. Computational Structural Mechanics

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      load-2 TRACC RESEARCH Computational Fluid Dynamics Computational Structural Mechanics Transportation Systems Modeling Computational Structural Mechanics Overview of CSM Computational structural mechanics is a well-established methodology for the design and analysis of many components and structures found in the transportation field. Modern finite-element models (FEMs) play a major role in these evaluations, and sophisticated software, such as the commercially available LS-DYNA® code, is

    1. Agenda for the Derived Liquids to Hydrogen Distributed Reforming Working

      Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

      Group (BILIWG) Hydrogen Production Technical Team Research Review | Department of Energy Derived Liquids to Hydrogen Distributed Reforming Working Group (BILIWG) Hydrogen Production Technical Team Research Review Agenda for the Derived Liquids to Hydrogen Distributed Reforming Working Group (BILIWG) Hydrogen Production Technical Team Research Review This is the agenda for the working group sessions held in Laurel, Maryland on November 6, 2007. PDF icon biliwg_agenda.pdf More Documents &

    2. Rioglass Group | Open Energy Information

      Open Energy Info (EERE)

      Group Jump to: navigation, search Name: Rioglass Group Place: Spain Product: A Spanish glass company supplying the automotive sector, who has recently announced to launch...

    3. Humus Group | Open Energy Information

      Open Energy Info (EERE)

      search Name: Humus Group Place: Brazil Product: Stakeholder in the Vertente ethanol mill in Brazil. References: Humus Group1 This article is a stub. You can help...

    4. Bumlai Group | Open Energy Information

      Open Energy Info (EERE)

      Jump to: navigation, search Name: Bumlai Group Place: Brazil Product: Investor in ethanol plant São Fernando Açúcar e Álcool. References: Bumlai Group1 This...

    5. Paro group | Open Energy Information

      Open Energy Info (EERE)

      Paro group Jump to: navigation, search Name: Paro group Place: Brazil Product: Ethanol producer that plans to jointly own an ethanol plant in Minas Gerais. References: Paro...

    6. Mouratoglou Group | Open Energy Information

      Open Energy Info (EERE)

      Mouratoglou Group Jump to: navigation, search Name: Mouratoglou Group Place: France Sector: Renewable Energy Product: Investment parent-company of EDF Energies Nouvelles, involved...

    7. Electrocell Group | Open Energy Information

      Open Energy Info (EERE)

      Group Jump to: navigation, search Name: Electrocell Group Place: São Paulo, Brazil Zip: 05508-000 Product: Producer of fuel cells, accessories and controls. The company...

    8. Copisa Group | Open Energy Information

      Open Energy Info (EERE)

      Copisa Group Jump to: navigation, search Name: Copisa Group Place: Barcelona, Spain Zip: 8029 Product: Barcelona-based, construction company. Copisa is involved in building three...

    9. Emte Group | Open Energy Information

      Open Energy Info (EERE)

      Group Jump to: navigation, search Name: Emte Group Place: Spain Sector: Renewable Energy, Services Product: "EMTE is the ben ... ctor companies."...

    10. Poyry Group | Open Energy Information

      Open Energy Info (EERE)

      Poyry Group Jump to: navigation, search Name: Poyry Group Place: Vantaa, Finland Zip: 1621 Product: Vantaa-based consulting and engineering firm, specialising in issues regarding...

    11. Anel Group | Open Energy Information

      Open Energy Info (EERE)

      Anel Group Jump to: navigation, search Name: Anel Group Place: ISTANBUL, Turkey Zip: 34768 Sector: Solar, Wind energy Product: Istanbul-based technological and engineering...

    12. Aksa Group | Open Energy Information

      Open Energy Info (EERE)

      Aksa Group Jump to: navigation, search Name: Aksa Group Place: Istanbul, Turkey Zip: 34212 Sector: Wind energy Product: Turkey-based international company recently involved in the...

    13. GEA Group | Open Energy Information

      Open Energy Info (EERE)

      Jump to: navigation, search Name: GEA Group Place: Bochum, Germany Zip: 44809 Sector: Biofuels, Solar Product: Bochum-based, engineering group specialising in process engineering...

    14. Daesung Group | Open Energy Information

      Open Energy Info (EERE)

      Daesung Group Place: Jongno-Gu Seoul, Korea (Republic) Zip: 110-300 Sector: Hydro, Hydrogen Product: Daesung Group, a Korea-based energy provider and electric machinery...

    15. Westly Group | Open Energy Information

      Open Energy Info (EERE)

      Westly Group Jump to: navigation, search Name: Westly Group Place: Menlo Park, California Zip: 94025 Product: Clean technology-oriented venture capital firm. References: Westly...

    16. Enerbio Group | Open Energy Information

      Open Energy Info (EERE)

      Enerbio Group Jump to: navigation, search Name: Enerbio Group Place: Porto Alegre, Rio Grande do Sul, Brazil Zip: 90480-003 Sector: Renewable Energy, Services Product: Brazilian...

    17. Jinglong Group | Open Energy Information

      Open Energy Info (EERE)

      Jinglong Group Jump to: navigation, search Name: Jinglong Group Place: Ningjin, Hebei Province, China Product: Chinese manufacturer and supplier of monocrystalline silicon and...

    18. Verdeo Group | Open Energy Information

      Open Energy Info (EERE)

      Verdeo Group Jump to: navigation, search Name: Verdeo Group Place: Washington, DC Zip: 20006 Sector: Carbon Product: Washington based integrated carbon solutions company....

    19. Bazan Group | Open Energy Information

      Open Energy Info (EERE)

      Bazan Group Jump to: navigation, search Name: Bazan Group Place: Pontal, Brazil Zip: 14180-000 Product: Bioethanol production company Coordinates: -21.023149, -48.037099 Show...

    20. Delaney Group | Open Energy Information

      Open Energy Info (EERE)

      Delaney Group Jump to: navigation, search Name: Delaney Group Place: Gloversville, New York Zip: 12078 Sector: Services, Wind energy Product: Services company focused on...

    1. Ramky Group | Open Energy Information

      Open Energy Info (EERE)

      Ramky Group Jump to: navigation, search Name: Ramky Group Place: Andhra Pradesh, India Zip: 500082 Product: Focussed on construction, infrastructure development and waste...

    2. Samaras Group | Open Energy Information

      Open Energy Info (EERE)

      Samaras Group Jump to: navigation, search Name: Samaras Group Place: Greece Sector: Renewable Energy, Services Product: Greek consultancy services provider with specialization in...

    3. Altira Group | Open Energy Information

      Open Energy Info (EERE)

      Altira Group Jump to: navigation, search Name: Altira Group Address: 1675 Broadway, Suite 2400 Place: Denver, Colorado Zip: 80202 Region: Rockies Area Product: Venture Capital...

    4. Sunvim Group | Open Energy Information

      Open Energy Info (EERE)

      Group Jump to: navigation, search Name: Sunvim Group Place: Gaomi, Shandong Province, China Zip: 261500 Product: Sunvim, a Chinese home textile maker, is also engaged in the...

    5. Balta Group | Open Energy Information

      Open Energy Info (EERE)

      Balta Group Jump to: navigation, search Name: Balta Group Place: Sint Baafs Vijve, Belgium Zip: 8710 Product: Belgium-based manufacturer of broadloom carpets, rugs and laminate...

    6. Noribachi Group | Open Energy Information

      Open Energy Info (EERE)

      Noribachi Group Jump to: navigation, search Name: Noribachi Group Place: Albuquerque, New Mexico Zip: 87104 Product: New Mexico-based private equity firm focused on investing in...

    7. Lucas Group | Open Energy Information

      Open Energy Info (EERE)

      Group Jump to: navigation, search Name: Lucas Group Place: Chicago, Illinois Sector: Services Product: Renewable Energy Recruiters Year Founded: 1970 Coordinates: 41.850033,...

    8. Pohlen Group | Open Energy Information

      Open Energy Info (EERE)

      Pohlen Group Jump to: navigation, search Name: Pohlen Group Place: Geilenkirchen, Germany Product: Specialises in roof engineering, including installing and maintaining PV systems...

    9. Vaillant Group | Open Energy Information

      Open Energy Info (EERE)

      Group Jump to: navigation, search Name: Vaillant Group Place: Remscheid, Germany Zip: 42859 Product: For nearly 130 years Vaillant has been at the forefront of heating technology....

    10. Ostwind Group | Open Energy Information

      Open Energy Info (EERE)

      Ostwind Group Jump to: navigation, search Name: Ostwind Group Place: Regensburg, Germany Zip: D-93047 Sector: Biomass, Hydro, Wind energy Product: Develops wind projects, and also...

    11. Schaffner Group | Open Energy Information

      Open Energy Info (EERE)

      Schaffner Group Jump to: navigation, search Name: Schaffner Group Place: Switzerland Zip: 4542 Product: Switzerland-based company supplier of components that support the efficient...

    12. Schulthess Group | Open Energy Information

      Open Energy Info (EERE)

      Group Jump to: navigation, search Name: Schulthess Group Place: Wolfhausen, Switzerland Zip: CH-8633 Product: A company with activities in regenerative energy production,...

    13. TRITEC Group | Open Energy Information

      Open Energy Info (EERE)

      TRITEC Group Jump to: navigation, search Name: TRITEC Group Place: Basel, Switzerland Zip: CH-4123 Product: Basel-based installer and distributor for PV products. Coordinates:...

    14. Swatch Group | Open Energy Information

      Open Energy Info (EERE)

      Swatch Group Jump to: navigation, search Name: Swatch Group Place: Switzerland Product: "The Swatch Grou ... ther industries" References: Swatch...

    15. Shenergy Group | Open Energy Information

      Open Energy Info (EERE)

      Shenergy Group Place: Shanghai Municipality, China Product: Gas and power project investor and developer based in Shanghai. References: Shenergy Group1 This article is a stub....

    16. Ralos Group | Open Energy Information

      Open Energy Info (EERE)

      Ralos Group Jump to: navigation, search Name: Ralos Group Place: Michelstadt, Germany Zip: D-64720 Sector: Solar Product: Germany-based solar project developer that specialises in...

    17. Enovos Group | Open Energy Information

      Open Energy Info (EERE)

      Enovos Group Jump to: navigation, search Name: Enovos Group Place: Germany Sector: Solar Product: Germany-based utility. The utility has interests in solar energy. References:...

    18. Richway Group | Open Energy Information

      Open Energy Info (EERE)

      by expanding it. Richway Group is a company based in Richmond, British Columbia. FROM WASTE TO ENERGY, YOUR WISE CHOICE Vision and Objectives Richway Group (Richway) is located...

    19. Computing and Computational Sciences Directorate - Information Technology

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Computational Sciences and Engineering The Computational Sciences and Engineering Division (CSED) is ORNL's premier source of basic and applied research in the field of data sciences and knowledge discovery. CSED's science agenda is focused on research and development related to knowledge discovery enabled by the explosive growth in the availability, size, and variability of dynamic and disparate data sources. This science agenda encompasses data sciences as well as advanced modeling and

    20. BNL ATLAS Grid Computing

      ScienceCinema (OSTI)

      Michael Ernst

      2010-01-08

      As the sole Tier-1 computing facility for ATLAS in the United States and the largest ATLAS computing center worldwide Brookhaven provides a large portion of the overall computing resources for U.S. collaborators and serves as the central hub for storing,

    1. Computing environment logbook

      DOE Patents [OSTI]

      Osbourn, Gordon C; Bouchard, Ann M

      2012-09-18

      A computing environment logbook logs events occurring within a computing environment. The events are displayed as a history of past events within the logbook of the computing environment. The logbook provides search functionality to search through the history of past events to find one or more selected past events, and further, enables an undo of the one or more selected past events.
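      The abstract above names three capabilities: logging events as a history, searching that history, and undoing selected past events. The following is a minimal sketch of how such a logbook could be organized; the Logbook class and its undo-callback convention are assumptions made for illustration, not the patented design.

          # Minimal sketch of an event logbook with search and undo; the
          # undo-callback design is an illustrative assumption.
          class Logbook:
              def __init__(self):
                  self.history = []                    # list of (description, undo_fn)

              def log(self, description, undo_fn=None):
                  self.history.append((description, undo_fn))

              def search(self, keyword):
                  return [d for d, _ in self.history if keyword in d]

              def undo(self, description):
                  for i, (d, undo_fn) in enumerate(self.history):
                      if d == description and undo_fn is not None:
                          undo_fn()                    # revert the selected event
                          del self.history[i]
                          return True
                  return False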

    2. Mathematical and Computational Epidemiology

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Mathematical and Computational Epidemiology Search Site submit Contacts | Sponsors Mathematical and Computational Epidemiology Los Alamos National Laboratory change this image and alt text Menu About Contact Sponsors Research Agent-based Modeling Mixing Patterns, Social Networks Mathematical Epidemiology Social Internet Research Uncertainty Quantification Publications People Mathematical and Computational Epidemiology (MCEpi) Quantifying model uncertainty in agent-based simulations for

    3. TEC Working Group Topic Groups Archives Protocols | Department of Energy

      Office of Environmental Management (EM)

      Protocols TEC Working Group Topic Groups Archives Protocols The Transportation Protocols Topic Group serves as an important vehicle for DOE senior managers to assess and incorporate stakeholder input into the protocols process. The Topic Group was formed to review a series of transportation protocols developed in response to a request for DOE to be more consistent in its approach to transportation.

    4. Distributed optimization system and method

      DOE Patents [OSTI]

      Hurtado, John E.; Dohrmann, Clark R.; Robinett, III, Rush D.

      2003-06-10

      A search system and method for controlling multiple agents to optimize an objective using distributed sensing and cooperative control. The search agent can be one or more physical agents, such as a robot, and can be software agents for searching cyberspace. The objective can be: chemical sources, temperature sources, radiation sources, light sources, evaders, trespassers, explosive sources, time dependent sources, time independent sources, function surfaces, maximization points, minimization points, and optimal control of a system such as a communication system, an economy, a crane, and a multi-processor computer.
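      The abstract above describes multiple agents that use distributed sensing and cooperative control to home in on a source such as a chemical, temperature, or light maximum. Below is a minimal sketch of cooperative source seeking under a simple assumed rule, where each agent climbs the locally sensed gradient and is weakly attracted toward the swarm centroid; the field, gains, and update rule are illustrative assumptions, not the patented control law.

          # Minimal sketch of cooperative source seeking with a swarm of agents.
          import numpy as np

          def field(p):                      # stand-in "source" signal, peaked at (3, -2)
              return -np.sum((p - np.array([3.0, -2.0])) ** 2)

          def grad(p, eps=1e-3):             # finite-difference estimate of the local gradient
              e = np.eye(2) * eps
              return np.array([(field(p + e[i]) - field(p - e[i])) / (2 * eps)
                               for i in range(2)])

          agents = np.random.uniform(-10, 10, size=(5, 2))
          for _ in range(200):
              centroid = agents.mean(axis=0)
              agents += 0.1 * np.array([grad(p) for p in agents]) \
                      + 0.05 * (centroid - agents)
          print(agents.mean(axis=0))         # the swarm ends up near the source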

    5. TEC Working Group Topic Groups Tribal | Department of Energy

      Office of Environmental Management (EM)

      Tribal TEC Working Group Topic Groups Tribal The Tribal Topic Group was established in January 1998 to address government-to-government consultation between DOE and Indian Tribes affected by its transportation activities. The group focuses on transportation planning, funding, and training. Members convene at the semiannual TEC meetings and hold frequent conference calls between TEC sessions. The group has addressed issues such as a consolidated transportation funding grant, DOE's revised Indian

    6. Focus Group Training Work Group Meeting | Department of Energy

      Energy Savers [EERE]

      Date: September 13, 2012 In conjunction with the HAMMER Steering Committee meeting the HSS Focus Group Training Working Group Meeting was conducted from 2:00 PM to 4:30 PM at the HAMMER Training Facility in Richland, WA. Documents Available for Download PDF icon Meeting Agenda PDF icon Meeting Summary More Documents & Publications Focus Group Training Work Group Meeting DOE Training Reciprocity Program Training Work Group Charter

    7. Scalable optical quantum computer

      SciTech Connect (OSTI)

      Manykin, E A; Mel'nichenko, E V [Institute for Superconductivity and Solid-State Physics, Russian Research Centre 'Kurchatov Institute', Moscow (Russian Federation)

      2014-12-31

      A way of designing a scalable optical quantum computer based on the photon echo effect is proposed. Individual rare earth ions Pr{sup 3+}, regularly located in the lattice of the orthosilicate (Y{sub 2}SiO{sub 5}) crystal, are suggested to be used as optical qubits. Operations with qubits are performed using coherent and incoherent laser pulses. The operation protocol includes both the method of measurement-based quantum computations and the technique of optical computations. Modern hybrid photon echo protocols, which provide a sufficient quantum efficiency when reading recorded states, are considered as most promising for quantum computations and communications. (quantum computer)

    8. COMPUTATIONAL SCIENCE CENTER

      SciTech Connect (OSTI)

      DAVENPORT, J.

      2005-11-01

      The Brookhaven Computational Science Center brings together researchers in biology, chemistry, physics, and medicine with applied mathematicians and computer scientists to exploit the remarkable opportunities for scientific discovery which have been enabled by modern computers. These opportunities are especially great in computational biology and nanoscience, but extend throughout science and technology and include, for example, nuclear and high energy physics, astrophysics, materials and chemical science, sustainable energy, environment, and homeland security. To achieve our goals we have established a close alliance with applied mathematicians and computer scientists at Stony Brook and Columbia Universities.

    9. Annual Coal Distribution Report

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Annual Coal Distribution Report Release Date: April 16, 2015 | Next Release Date: March 2016 | full report | Revision/Correction Revision to the Annual Coal Distribution Report 2013 data The 2013 Annual Coal Distribution Report has been republished to include final 2013 electric power sector data as well as domestic and foreign distribution data. Contact:

    10. September 8, 2011, HSS/Union Focus Group Work Group Telecom - Work Group Charter

      Energy Savers [EERE]

      TEMPLATE Office of Health, Safety and Security Focus Group [Name of Work Group] Work Group Charter (Date) I. PURPOSE The HSS Focus Group [Name of Work Group] is one of several HSS Work Groups, established to address worker health, safety and security programs improvements across the U.S. Department of Energy Complex. The [Name of Work Group] has been established to (state specific purpose). II. OBJECTIVES (State the desired impact(s) and major outcome(s) for, the Work Group) 1. Establish

    11. Human perceptual deficits as factors in computer interface test and evaluation

      SciTech Connect (OSTI)

      Bowser, S.E.

      1992-06-01

      Issues related to testing and evaluating human-computer interfaces are usually based on the machine rather than on the human portion of the computer interface. Perceptual characteristics of the expected user are rarely investigated, and interface designers ignore known population perceptual limitations. For these reasons, environmental impacts on the equipment will more likely be defined than will user perceptual characteristics. The investigation of user population characteristics is most often directed toward intellectual abilities and anthropometry. This problem is compounded by the fact that some perceptual deficits tend to occur at higher-than-overall-population rates in some user groups. The test and evaluation community can address the issue from two primary aspects. First, assessing user characteristics should be extended to include tests of perceptual capability. Secondly, interface designs should use multimode information coding.

    12. High Temperature Membrane Working Group

      Broader source: Energy.gov [DOE]

      This presentation provides an overview of the High Temperature Membrane Working Group Meeting in May 2007.

    13. Energy optimization of water distribution system

      SciTech Connect (OSTI)

      Not Available

      1993-02-01

      In order to analyze pump operating scenarios for the system with the computer model, information on existing pumping equipment and the distribution system was collected. The information includes the following: component description and design criteria for line booster stations, booster stations with reservoirs, and high lift pumps at the water treatment plants; daily operations data for 1988; annual reports from fiscal year 1987/1988 to fiscal year 1991/1992; and a 1985 calibrated KYPIPE computer model of DWSD's water distribution system which included input data for the maximum hour and average day demands on the system for that year. This information has been used to produce the inventory database of the system and will be used to develop the computer program to analyze the system.

    14. Science Education Group | Princeton Plasma Physics Lab

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Science Education Group

    15. Deformations of polyhedra and polygons by the unitary group

      SciTech Connect (OSTI)

      Livine, Etera R.

      2013-12-15

      We introduce the set of framed (convex) polyhedra with N faces as the symplectic quotient C{sup 2N}//SU(2). A framed polyhedron is then parametrized by N spinors living in C{sup 2} satisfying suitable closure constraints and defines a usual convex polyhedron plus extra U(1) phases attached to each face. We show that there is a natural action of the unitary group U(N) on this phase space, which changes the shape of faces and allows one to map any (framed) polyhedron onto any other with the same total (boundary) area. This identifies the space of framed polyhedra to the Grassmannian space U(N)/(SU(2) × U(N-2)). We show how to write averages of geometrical observables (polynomials in the faces' area and the angles between them) over the ensemble of polyhedra (distributed uniformly with respect to the Haar measure on U(N)) as polynomial integrals over the unitary group and we provide a few methods to compute these integrals systematically. We also use the Itzykson-Zuber formula from matrix models as the generating function for these averages and correlations. In the quantum case, a canonical quantization of the framed polyhedron phase space leads to the Hilbert space of SU(2) intertwiners (or, in other words, SU(2)-invariant states in tensor products of irreducible representations). The total boundary area as well as the individual face areas are quantized as half-integers (spins), and the Hilbert spaces for fixed total area form irreducible representations of U(N). We define semi-classical coherent intertwiner states peaked on classical framed polyhedra and transforming consistently under U(N) transformations. And we show how the U(N) character formula for unitary transformations is to be considered as an extension of the Itzykson-Zuber formula to the quantum level and generates the traces of all polynomial observables over the Hilbert space of intertwiners. We finally apply the same formalism to two dimensions and show that classical (convex) polygons can be described in a similar fashion trading the unitary group for the orthogonal group. We conclude with a discussion of the possible (deformation) dynamics that one can define on the space of polygons or polyhedra. This work is a priori useful in the context of discrete geometry but it should hopefully also be relevant to (loop) quantum gravity in 2+1 and 3+1 dimensions when the quantum geometry is defined in terms of gluing of (quantized) polygons and polyhedra.
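      The "suitable closure constraints" mentioned in the abstract above can be written out explicitly. A minimal statement in LaTeX, with normalization factors assumed for illustration rather than taken from the paper:

          \sum_{i=1}^{N} \vec{A}_i = \vec{0},
          \qquad
          \vec{A}_i = \tfrac{1}{2}\,\langle z_i|\vec{\sigma}|z_i\rangle,
          \qquad
          |z_i\rangle \in \mathbb{C}^2 .

      That is, the face area vectors built from the N spinors must sum to zero (the Minkowski closure of a convex polyhedron). Using the identity $|z\rangle\langle z| = \tfrac{1}{2}\big(\langle z|z\rangle\,\mathbb{1} + \langle z|\vec{\sigma}|z\rangle\cdot\vec{\sigma}\big)$, this is equivalent to requiring $\sum_i |z_i\rangle\langle z_i| \propto \mathbb{1}$.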

    16. Sandia Energy - Distribution Grid Integration

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Distribution Grid Integration Home Stationary Power Energy Conversion Efficiency Solar Energy Photovoltaics Grid Integration Distribution Grid Integration Distribution Grid...

    17. Sandia Energy - High Performance Computing

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      High Performance Computing Home Energy Research Advanced Scientific Computing Research (ASCR) High Performance Computing High Performance Computingcwdd2015-03-18T21:41:24+00:00...

    18. Lisa Gerhardt NERSC User Services Group

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Gerhardt NERSC User Services Group NUG Training August 11, 2015 Data Management at NERSC Where Do I Put My Data? * Overview of NERSC file systems - Local vs. Global - Permanent vs. Purged - Personal vs. Shared * HPSS Archive System - What is it and how to use it NERSC File Systems The compute and storage systems 2015 Production Clusters Carver, PDSF, JGI, MatComp, Planck /global/scratch 4 PB /project 5 PB /home 250 TB 70 PB stored, 240 PB capacity, 40 years of community data HPSS

    19. Method and structure for skewed block-cyclic distribution of...

      Office of Scientific and Technical Information (OSTI)

      A method and structure of distributing elements of an array of data in a computer memory to a specific processor of a multi-dimensional mesh of parallel processors includes ...

    20. Method and structure for skewed block-cyclic distribution of...

      Office of Scientific and Technical Information (OSTI)

      A method and structure of distributing elements of an array of data in a computer memory to a specific processor of a multi-dimensional mesh of parallel processors includes...

    1. TEC Working Group Topic Groups Security Key Documents | Department...

      Office of Environmental Management (EM)

      Key Documents TEC Working Group Topic Groups Security Key Documents Key Documents PDF icon Security TG Work Plan August 7, 2006 PDF icon Security Lessons Learned Document August 2,...

    2. COMPUTATIONAL SCIENCE CENTER

      SciTech Connect (OSTI)

      DAVENPORT, J.

      2006-11-01

      Computational Science is an integral component of Brookhaven's multi-science mission, and is a reflection of the increased role of computation across all of science. Brookhaven currently has major efforts in data storage and analysis for the Relativistic Heavy Ion Collider (RHIC) and the ATLAS detector at CERN, and in quantum chromodynamics. The Laboratory is host for the QCDOC machines (quantum chromodynamics on a chip), 10 teraflop/s computers which boast 12,288 processors each. There are two here, one for the Riken/BNL Research Center and the other supported by DOE for the US Lattice Gauge Community and other scientific users. A 100 teraflop/s supercomputer will be installed at Brookhaven in the coming year, managed jointly by Brookhaven and Stony Brook, and funded by a grant from New York State. This machine will be used for computational science across Brookhaven's entire research program, and also by researchers at Stony Brook and across New York State. With Stony Brook, Brookhaven has formed the New York Center for Computational Science (NYCCS) as a focal point for interdisciplinary computational science, which is closely linked to Brookhaven's Computational Science Center (CSC). The CSC has established a strong program in computational science, with an emphasis on nanoscale electronic structure and molecular dynamics, accelerator design, computational fluid dynamics, medical imaging, parallel computing and numerical algorithms. We have been an active participant in DOE's SciDAC program (Scientific Discovery through Advanced Computing). We are also planning a major expansion in computational biology in keeping with Laboratory initiatives. Additional laboratory initiatives with a dependence on a high level of computation include the development of hydrodynamics models for the interpretation of RHIC data, computational models for the atmospheric transport of aerosols, and models for combustion and for energy utilization. The CSC was formed to bring together researchers in these areas and to provide a focal point for the development of computational expertise at the Laboratory. These efforts will connect to and support the Department of Energy's long range plans to provide Leadership class computing to researchers throughout the Nation. Recruitment for six new positions at Stony Brook to strengthen its computational science programs is underway. We expect some of these to be held jointly with BNL.

    3. Nicholas J. Wright! Advanced Technologies Group Lead NERSC Initiative:

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Nicholas J. Wright! Advanced Technologies Group Lead NERSC Initiative: Preparing Applications for Exascale NERSC User Group Meeting February 12, 2013 * Technology disruption is underway at the processor and memory level. Computing challenges include: - Energy efficiency - Concurrency - Data movement - Programmability - Resilience * We can only meet these challenges through both hardware and software innovation - Rewrite application codes - Try to influence computer industry

    4. TEC Working Group Topic Groups Rail | Department of Energy

      Office of Environmental Management (EM)

      Rail TEC Working Group Topic Groups Rail The Rail Topic Group has the responsibility to identify and discuss current issues and concerns regarding rail transportation of radioactive materials by the Department of Energy (DOE). The group's current task is to examine different aspects of rail transportation including inspections, tracking and radiation monitoring, planning and process, and review of lessons learned. Ultimately, the main goal for members will be to assist in the identification of

    5. TEC Working Group Topic Groups Routing | Department of Energy

      Office of Environmental Management (EM)

      Routing TEC Working Group Topic Groups Routing ROUTING The Routing Topic Group has been established to examine topics of interest and relevance concerning routing of shipments of spent nuclear fuel (SNF) and high-level radioactive waste (HLW) to a national repository at Yucca Mountain, Nevada by highway, rail, and intermodal operations that could involve use of barges. Ultimately, the main goal for the topic group members will be to provide stakeholder perspectives and input to the Office of

    6. TEC Working Group Topic Groups Archives Communications | Department of

      Office of Environmental Management (EM)

      Energy Communications TEC Working Group Topic Groups Archives Communications The Communications Topic Group was convened in April 1998 to improve internal and external strategic level communications regarding DOE shipments of radioactive and other hazardous materials. Major issues under consideration by this Topic Group include: - Examination of DOE external and internal communications processes; - Roles and responsibilities when communicating with a diverse range of stakeholders; and -

    7. TEC Working Group Topic Groups Archives Training - Medical Training |

      Office of Environmental Management (EM)

      Department of Energy Training - Medical Training TEC Working Group Topic Groups Archives Training - Medical Training The TEC Training and Medical Training Issues Topic Group was formed to address the training issues for emergency responders in the event of a radioactive material transportation incident. The Topic Group first met in 1996 to assist DOE in developing an approach to address radiological emergency response training needs and to avoid redundancy of existing training materials. The

    8. Edison Electrifies Scientific Computing

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Edison Electrifies Scientific Computing Edison Electrifies Scientific Computing NERSC Flips Switch on New Flagship Supercomputer January 31, 2014 Contact: Margie Wylie, mwylie@lbl.gov, +1 510 486 7421 The National Energy Research Scientific Computing (NERSC) Center recently accepted "Edison," a new flagship supercomputer designed for scientific productivity. Named in honor of American inventor Thomas Alva Edison, the Cray XC30 will be dedicated in a ceremony held at the Department of

    9. Energy Aware Computing

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Partnerships Shifter: User Defined Images Archive APEX Home » R & D » Energy Aware Computing Energy Aware Computing Dynamic Frequency Scaling One means to lower the energy required to compute is to reduce the power usage on a node. One way to accomplish this is by lowering the frequency at which the CPU operates. However, reducing the clock speed increases the time to solution, creating a potential tradeoff. NERSC continues to examine how such methods impact its operations and its
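      The page above notes the central tradeoff of dynamic frequency scaling: lowering the clock frequency reduces power draw but stretches the time to solution. Below is a minimal sketch of that tradeoff under a textbook approximation in which dynamic power scales roughly as the cube of frequency and runtime roughly as its inverse, on top of a fixed baseline power; the coefficients are illustrative assumptions, not NERSC measurements.

          # Minimal sketch of the frequency vs. energy-to-solution tradeoff.
          def energy_to_solution(freq_ghz, work=1.0, base_power_w=50.0, k=30.0):
              runtime_s = work / freq_ghz            # time to solution grows as f drops
              dynamic_w = k * freq_ghz ** 3          # dynamic power falls much faster
              return (dynamic_w + base_power_w) * runtime_s

          for f in (1.2, 1.6, 2.0, 2.4):
              print(f"{f:.1f} GHz -> {energy_to_solution(f):7.1f} J")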

    10. Method for distributed agent-based non-expert simulation of manufacturing process behavior

      DOE Patents [OSTI]

      Ivezic, Nenad; Potok, Thomas E.

      2004-11-30

      A method for distributed agent based non-expert simulation of manufacturing process behavior on a single-processor computer comprises the steps of: object modeling a manufacturing technique having a plurality of processes; associating a distributed agent with each the process; and, programming each the agent to respond to discrete events corresponding to the manufacturing technique, wherein each discrete event triggers a programmed response. The method can further comprise the step of transmitting the discrete events to each agent in a message loop. In addition, the programming step comprises the step of conditioning each agent to respond to a discrete event selected from the group consisting of a clock tick message, a resources received message, and a request for output production message.
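      The abstract above associates one agent with each manufacturing process and drives the agents with discrete events (a clock tick message, a resources received message, and a request for output production message) delivered in a message loop. Below is a minimal sketch of that event-dispatch pattern; the event names follow the abstract, while the ProcessAgent class and its bookkeeping are assumptions made for illustration.

          # Minimal sketch of the agent/message-loop pattern described above.
          class ProcessAgent:
              def __init__(self, name):
                  self.name, self.stock, self.output = name, 0, 0

              def handle(self, event):
                  if event == "clock_tick":
                      pass                            # advance internal state each tick
                  elif event == "resources_received":
                      self.stock += 1
                  elif event == "request_for_output" and self.stock > 0:
                      self.stock -= 1
                      self.output += 1

          agents = [ProcessAgent(n) for n in ("cutting", "welding", "assembly")]
          message_loop = ["resources_received", "clock_tick", "request_for_output"]
          for event in message_loop:                  # broadcast each event to every agent
              for agent in agents:
                  agent.handle(event)
          print([(a.name, a.output) for a in agents])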

    11. Personal Computer Inventory System

      Energy Science and Technology Software Center (OSTI)

      1993-10-04

      PCIS is a database software system that is used to maintain a personal computer hardware and software inventory, track transfers of hardware and software, and provide reports.

    12. Applied Computer Science

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Results from a climate simulation computed using the Model for Prediction Across Scales (MPAS) code. This visualization shows the temperature of ocean currents using a green and ...

    13. Announcement of Computer Software

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      All Other Editions Are Obsolete UNITED STATES DEPARTMENT OF ENERGY ANNOUNCEMENT OF COMPUTER SOFTWARE OMB Control Number 1910-1400 (OMB Burden Disclosure Statement is on last...

    14. Research Group Websites - Links - Cyclotron Institute

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Research Group Websites Dr. Sherry J. Yennello's Research Group Nuclear Theory Group Dr. Dan Melconian's Research Group Dr. Cody Folden's Research Group...

    15. Secure computing for the 'Everyman' goes to market

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Secure computing for the 'Everyman' goes to market Secure computing for the 'Everyman' goes to market Quantum key distribution technology could ensure truly secure commerce, banking, communications and data transfer December 22, 2014 Secure computing for the 'Everyman' goes to market This small device developed at Los Alamos National Laboratory uses the truly random spin of light particles as defined by laws of quantum mechanics to generate a random number for use in a cryptographic key that can

    16. Performance Modeling for 3D Visualization in a Heterogeneous Computing

      Office of Scientific and Technical Information (OSTI)

      Environment (Technical Report) | SciTech Connect Performance Modeling for 3D Visualization in a Heterogeneous Computing Environment Citation Details In-Document Search Title: Performance Modeling for 3D Visualization in a Heterogeneous Computing Environment The visualization of large, remotely located data sets necessitates the development of a distributed computing pipeline in order to reduce the data, in stages, to a manageable size. The required baseline infrastructure for launching such

    17. Scientific computations section monthly report, November 1993

      SciTech Connect (OSTI)

      Buckner, M.R.

      1993-12-30

      This progress report from the Savannah River Technology Center contains abstracts from papers from the computational modeling, applied statistics, applied physics, experimental thermal hydraulics, and packaging and transportation groups. Specific topics covered include: engineering modeling and process simulation, criticality methods and analysis, plutonium disposition.

    18. 60 Years of Computing | Department of Energy

      Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

      60 Years of Computing 60 Years of Computing

    19. Weighted Running Jobs by Group

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Weighted Running Jobs by Group Weighted Running Jobs by Group Daily Graph: Weekly Graph: Monthly Graph: Yearly Graph: 2 Year Graph: Last edited: 2016-02-01 08:06:59

    20. Klebl Group | Open Energy Information

      Open Energy Info (EERE)

      Zip: 6388 Product: Construction and engineering group with some experience building PV plants. References: Klebl Group1 This article is a stub. You can help OpenEI by expanding...

    1. Sova Group | Open Energy Information

      Open Energy Info (EERE)

      Sova Group Jump to: navigation, search Name: Sova Group Place: Kolkata, West Bengal, India Zip: 700012 Product: Kolkatta-based iron and steel major. The firm plans to foray into PV...

    2. Minoan Group | Open Energy Information

      Open Energy Info (EERE)

      Minoan Group Jump to: navigation, search Name: Minoan Group Place: Kent, England, United Kingdom Zip: BR5 1XB Sector: Solar Product: UK-based developer of resorts in Greece that...

    3. ESV Group | Open Energy Information

      Open Energy Info (EERE)

      ESV Group Jump to: navigation, search Name: ESV Group Place: London, England, United Kingdom Zip: W1K 4QH Sector: Biofuels Product: UK-based investment agri-business involved in...

    4. Ensus Group | Open Energy Information

      Open Energy Info (EERE)

      Ensus Group Jump to: navigation, search Name: Ensus Group Place: Stockton-on-Tees, England, United Kingdom Zip: TS15 9BW Product: North Yorkshire-based developer & operator of...

    5. Camco Group | Open Energy Information

      Open Energy Info (EERE)

      Group Jump to: navigation, search Name: Camco Group Place: Jersey, United Kingdom Zip: JE2 4UH Sector: Carbon, Renewable Energy, Services Product: UK-based firm that provides...

    6. Expanded Pending Jobs by Group

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Expanded Pending Jobs by Group Expanded Pending Jobs by Group Daily Graph: Weekly Graph: Monthly Graph: Yearly Graph: 2 Year Graph: Last edited: 2016-02-01 08:07:29

    7. Doubly Distributed Transactions

      Energy Science and Technology Software Center (OSTI)

      2014-08-25

      Doubly Distributed Transactions (D2T) offers a technique for managing operations from a set of parallel clients with a collection of distributed services. It detects and manages faults. Example code with a test harness is also provided

    8. Distributed Wind 2015

      Broader source: Energy.gov [DOE]

      Distributed Wind 2015 is committed to the advancement of both distributed and community wind energy. This two day event includes a Business Conference with sessions focused on advancing the...

    9. Information Science, Computing, Applied Math

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Capabilities Information Science, Computing, Applied Math National security ...

    10. What is Distributed Wind?

      Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

      Distributed Wind? Distributed wind energy systems are commonly installed on residential, agricultural, commercial, institutional, and industrial sites connected either physically or virtually on the customer side of the meter (to serve on-site load) or directly to the local distribution or micro grid (to support local grid operations or offset nearby loads). Because the definition is based on a wind project's location relative to end-use and power-distribution infrastructure, rather than on

    11. Citizenre Group | Open Energy Information

      Open Energy Info (EERE)

      19809 Product: A company planning to set up an integrated wafer, cell and module manufacturing plant, and then take part in the distribution and installation, and even asset...

    12. Integrated Transmission and Distribution Control

      SciTech Connect (OSTI)

      Kalsi, Karanjit; Fuller, Jason C.; Tuffner, Francis K.; Lian, Jianming; Zhang, Wei; Marinovici, Laurentiu D.; Fisher, Andrew R.; Chassin, Forrest S.; Hauer, Matthew L.

      2013-01-16

      Distributed, generation, demand response, distributed storage, smart appliances, electric vehicles and renewable energy resources are expected to play a key part in the transformation of the American power system. Control, coordination and compensation of these smart grid assets are inherently interlinked. Advanced control strategies to warrant large-scale penetration of distributed smart grid assets do not currently exist. While many of the smart grid technologies proposed involve assets being deployed at the distribution level, most of the significant benefits accrue at the transmission level. The development of advanced smart grid simulation tools, such as GridLAB-D, has led to a dramatic improvement in the models of smart grid assets available for design and evaluation of smart grid technology. However, one of the main challenges to quantifying the benefits of smart grid assets at the transmission level is the lack of tools and framework for integrating transmission and distribution technologies into a single simulation environment. Furthermore, given the size and complexity of the distribution system, it is crucial to be able to represent the behavior of distributed smart grid assets using reduced-order controllable models and to analyze their impacts on the bulk power system in terms of stability and reliability. The objectives of the project were to: • Develop a simulation environment for integrating transmission and distribution control, • Construct reduced-order controllable models for smart grid assets at the distribution level, • Design and validate closed-loop control strategies for distributed smart grid assets, and • Demonstrate impact of integrating thousands of smart grid assets under closed-loop control demand response strategies on the transmission system. More specifically, GridLAB-D, a distribution system tool, and PowerWorld, a transmission planning tool, are integrated into a single simulation environment. The integrated environment allows the load flow interactions between the bulk power system and end-use loads to be explicitly modeled. Power system interactions are modeled down to time intervals as short as 1-second. Another practical issue is that the size and complexity of typical distribution systems makes direct integration with transmission models computationally intractable. Hence, the focus of the next main task is to develop reduced-order controllable models for some of the smart grid assets. In particular, HVAC units, which are a type of Thermostatically Controlled Loads (TCLs), are considered. The reduced-order modeling approach can be extended to other smart grid assets, like water heaters, PVs and PHEVs. Closed-loop control strategies are designed for a population of HVAC units under realistic conditions. The proposed load controller is fully responsive and achieves the control objective without sacrificing the end-use performance. Finally, using the T&D simulation platform, the benefits to the bulk power system are demonstrated by controlling smart grid assets under different demand response closed-loop control strategies.
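      The report above leans on reduced-order controllable models of thermostatically controlled loads such as HVAC units. Below is a minimal sketch of the usual starting point for such models, a first-order equivalent-thermal-parameter model with a thermostat deadband; the parameter values are illustrative assumptions, not those used in the project.

          # Minimal sketch of a first-order thermostatically controlled load (HVAC).
          def simulate_tcl(hours=24, dt=1/60, t_out=32.0, t_set=22.0, band=1.0,
                           C=2.0, R=2.0, p_elec=4.0, cop=3.0):
              t_in, on, energy = t_set, False, 0.0
              for _ in range(int(hours / dt)):
                  q = -p_elec * cop if on else 0.0             # thermal power when cooling (kW)
                  t_in += dt / C * ((t_out - t_in) / R + q)    # equivalent-thermal-parameter dynamics
                  if t_in > t_set + band / 2:
                      on = True                                # thermostat deadband switching
                  elif t_in < t_set - band / 2:
                      on = False
                  energy += p_elec * dt if on else 0.0         # electric energy (kWh)
              return energy

          print(f"daily HVAC electric energy ~ {simulate_tcl():.1f} kWh")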

    13. Overview of the Distributed Generation Interconnection Collaborative

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      December 17, 2013 Overview presentation for group call, 1:00-2:30EST 2 October 21,2013 NREL and EPRI facilitated workshop of electric utilities, PV developers, PUCs, and other stakeholders to discuss the formulation of a collaborative effort focused on distributed PV interconnection: - Data and informational gaps/needs - Persistent challenges - Replicable innovation - Informed decision making and planning for anticipated rise in distributed PV interconnection Based on stakeholder input and

    14. NREL: Distributed Grid Integration - Research Staff

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Research Staff NREL's distributed grid integration research staff work to strengthen and diversify the electric power system through NREL's Power Systems Engineering Center. Photo of James Cale James Cale, Distributed Energy Systems Integration Group Manager Ph.D., Electrical Engineering, Purdue University M.S., Electrical Engineering, Purdue University B.S., Electrical Engineering, MS&T Dr. James Cale is an expert in the field of power electronics and electrical machine modeling and

    15. Executing a gather operation on a parallel computer

      DOE Patents [OSTI]

      Archer, Charles J. (Rochester, MN); Ratterman, Joseph D. (Rochester, MN)

      2012-03-20

      Methods, apparatus, and computer program products are disclosed for executing a gather operation on a parallel computer according to embodiments of the present invention. Embodiments include configuring, by the logical root, a result buffer of the logical root, the result buffer having positions, each position corresponding to a ranked node in the operational group and for storing contribution data gathered from that ranked node. Embodiments also include repeatedly for each position in the result buffer: determining, by each compute node of an operational group, whether the current position in the result buffer corresponds with the rank of the compute node, if the current position in the result buffer corresponds with the rank of the compute node, contributing, by that compute node, the compute node's contribution data, if the current position in the result buffer does not correspond with the rank of the compute node, contributing, by that compute node, a value of zero for the contribution data, and storing, by the logical root in the current position in the result buffer, results of a bitwise OR operation of all the contribution data by all compute nodes of the operational group for the current position, the results received through the global combining network.
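      The abstract above spells out a concrete algorithm: for each position in the root's result buffer, the compute node whose rank matches that position contributes its data, every other node contributes zero, and the logical root stores the bitwise OR of all contributions arriving over the combining network. Below is a minimal sketch of that contribute-or-zero / OR-reduce pattern; an ordinary Python loop stands in for the global combining network.

          # Minimal sketch of the gather-via-bitwise-OR scheme from the abstract.
          ranks = 4
          contribution = {rank: 0x10 + rank for rank in range(ranks)}   # per-node payloads

          result_buffer = []
          for position in range(ranks):                # one buffer slot per ranked node
              combined = 0
              for rank in range(ranks):
                  value = contribution[rank] if rank == position else 0
                  combined |= value                    # the OR reduction done by the network
              result_buffer.append(combined)

          print([hex(v) for v in result_buffer])       # the root ends up with every payload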

    16. THE CENTER FOR DATA INTENSIVE COMPUTING

      SciTech Connect (OSTI)

      GLIMM,J.

      2002-11-01

      CDIC will provide state-of-the-art computational and computer science for the Laboratory and for the broader DOE and scientific community. We achieve this goal by performing advanced scientific computing research in the Laboratory's mission areas of High Energy and Nuclear Physics, Biological and Environmental Research, and Basic Energy Sciences. We also assist other groups at the Laboratory to reach new levels of achievement in computing. We are ''data intensive'' because the production and manipulation of large quantities of data are hallmarks of scientific research in the 21st century and are intrinsic features of major programs at Brookhaven. An integral part of our activity to accomplish this mission will be a close collaboration with the University at Stony Brook.

    17. THE CENTER FOR DATA INTENSIVE COMPUTING

      SciTech Connect (OSTI)

      GLIMM,J.

      2001-11-01

      CDIC will provide state-of-the-art computational and computer science for the Laboratory and for the broader DOE and scientific community. We achieve this goal by performing advanced scientific computing research in the Laboratory's mission areas of High Energy and Nuclear Physics, Biological and Environmental Research, and Basic Energy Sciences. We also assist other groups at the Laboratory to reach new levels of achievement in computing. We are ''data intensive'' because the production and manipulation of large quantities of data are hallmarks of scientific research in the 21st century and are intrinsic features of major programs at Brookhaven. An integral part of our activity to accomplish this mission will be a close collaboration with the University at Stony Brook.

    18. THE CENTER FOR DATA INTENSIVE COMPUTING

      SciTech Connect (OSTI)

      GLIMM,J.

      2003-11-01

      CDIC will provide state-of-the-art computational and computer science for the Laboratory and for the broader DOE and scientific community. We achieve this goal by performing advanced scientific computing research in the Laboratory's mission areas of High Energy and Nuclear Physics, Biological and Environmental Research, and Basic Energy Sciences. We also assist other groups at the Laboratory to reach new levels of achievement in computing. We are ''data intensive'' because the production and manipulation of large quantities of data are hallmarks of scientific research in the 21st century and are intrinsic features of major programs at Brookhaven. An integral part of our activity to accomplish this mission will be a close collaboration with the University at Stony Brook.

    19. SC e-journals, Computer Science

      Office of Scientific and Technical Information (OSTI)

      & Mathematical Organization Theory Computational Complexity Computational Economics Computational Management ... Technology EURASIP Journal on Information Security ...

    20. Jefferson Lab Groups Encourage Digital Literacy Through Worldwide 'Hour

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      of Code' Campaign | Jefferson Lab Groups Encourage Digital Literacy Through Worldwide 'Hour of Code' Campaign Jefferson Lab Groups Encourage Digital Literacy Through Worldwide 'Hour of Code' Campaign Dana Cochran, Jefferson Lab staff member, helps students as they participate in a coding activity. To raise awareness of the need for digital literacy and a basic understanding of computer science, Jefferson Lab's Information Technology Division and Science Education staff are encouraging

    1. THE ABUNDANCE OF BULLET GROUPS IN ΛCDM

      SciTech Connect (OSTI)

      Fernández-Trincado, J. G.; Forero-Romero, J. E.; Foëx, G.; Motta, V.; Verdugo, T. E-mail: je.forero@uniandes.edu.co

      2014-06-01

      We estimate the expected distribution of displacements between the two dominant dark matter (DM) peaks (DM-DM displacements) and between the DM and gaseous baryon peak (DM-gas displacements) in DM halos with masses larger than 10{sup 13} h{sup -1} M{sub ⊙}. As a benchmark, we use the observation of SL2S J08544-0121, which is the lowest mass system (1.0 × 10{sup 14} h{sup -1} M{sub ⊙}) observed so far, featuring a bi-modal DM distribution with a dislocated gas component. We find that (50 ± 10)% of the DM halos with circular velocities in the range 300-700 km s{sup -1} (groups) show DM-DM displacements equal to or larger than 186 ± 30 h{sup -1} kpc as observed in SL2S J08544-0121. For DM halos with circular velocities larger than 700 km s{sup -1} (clusters) this fraction rises to (70 ± 10)%. Using the same simulation, we estimate the DM-gas displacements and find that 0.1%-1.0% of the groups should present separations equal to or larger than 87 ± 14 h{sup -1} kpc, corresponding to our observational benchmark; for clusters, this fraction rises to (7 ± 3)%, consistent with previous studies of DM to baryon separations. Considering both constraints on the DM-DM and DM-gas displacements, we find that the number density of groups similar to SL2S J08544-0121 is ≈6.0 × 10{sup -7} Mpc{sup -3}, three times larger than the estimated value for clusters. These results open up the possibility for a new statistical test of ΛCDM by looking for DM-gas displacements in low mass clusters and groups.

    2. TEC Working Group Topic Groups Security | Department of Energy

      Office of Environmental Management (EM)

      Security TEC Working Group Topic Groups Security The Security Topic group is comprised of regulators, law enforcement officials, labor and industry representatives and other subject matter experts concerned with secure transport of spent nuclear fuel (SNF) and high level waste (HLW) to Yucca Mountain. Current activities include updating the security portion of DOE's Transportation Practices Manual, identifying key State, Tribal and local security officials and organizations, and examining

    3. Natural Gas Transmission and Distribution Module

      Gasoline and Diesel Fuel Update (EIA)

      www.eia.gov Joe Benneche July 31, 2012, Washington, DC Major assumption changes for AEO2013 Oil and Gas Working Group Natural Gas Transmission and Distribution Module DRAFT WORKING GROUP PRESENTATION DO NOT QUOTE OR CITE Overview 2 Joe Benneche, Washington, DC, July 31, 2012 * Replace regional natural gas wellhead price projections with regional spot price projections * Pricing of natural gas vehicles fuels (CNG and LNG) * Methodology for modeling exports of LNG * Assumptions on charges related

    4. Computers as tools

      SciTech Connect (OSTI)

      Eriksson, I.V.

      1994-12-31

      The following message was recently posted on a bulletin board and clearly shows the relevance of the conference theme: {open_quotes}The computer and digital networks seem poised to change whole regions of human activity -- how we record knowledge, communicate, learn, work, understand ourselves and the world. What's the best framework for understanding this digitalization, or virtualization, of seemingly everything? ... Clearly, symbolic tools like the alphabet, book, and mechanical clock have changed some of our most fundamental notions -- self, identity, mind, nature, time, space. Can we say what the computer, a purely symbolic {open_quotes}machine,{close_quotes} is doing to our thinking in these areas? Or is it too early to say, given how much more powerful and less expensive the technology seems destined to become in the next few decades?{close_quotes} (Verity, 1994) Computers certainly affect our lives and way of thinking, but what have computers to do with ethics? A narrow approach would be that on the one hand people can and do abuse computer systems and on the other hand people can be abused by them. Well known examples of the former are computer crimes such as the theft of money, services and information. The latter can be exemplified by violation of privacy, health hazards and computer monitoring. Broadening the concept from computers to information systems (ISs) and information technology (IT) gives a wider perspective. Computers are just the hardware part of information systems, which also include software, people and data. Information technology is the concept preferred today. It extends to communication, which is an essential part of information processing. Now let us repeat the question: What has IT to do with ethics? Verity mentioned changes in {open_quotes}how we record knowledge, communicate, learn, work, understand ourselves and the world{close_quotes}.

    5. Distributed Reforming of Biomass Pyrolysis Oils (Presentation) | Department

      Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

      of Energy Biomass Pyrolysis Oils (Presentation) Distributed Reforming of Biomass Pyrolysis Oils (Presentation) Presented at the 2007 Bio-Derived Liquids to Hydrogen Distributed Reforming Working Group held November 6, 2007 in Laurel, Maryland. PDF icon 06_nrel_distributed_reforming_biomass_pyrolysis_oils.pdf More Documents & Publications Distributed Bio-Oil Reforming Bioenergy Technologies Office R&D Pathways: In-Situ Catalytic Fast Pyrolysis Bioenergy Technologies Office R&D

    6. Applications of Parallel Computers

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Computers Applications of Parallel Computers UCB CS267 Spring 2015 Tuesday & Thursday, 9:30-11:00 Pacific Time Applications of Parallel Computers, CS267, is a graduate-level course offered at the University of California, Berkeley. The course is being taught by UC Berkeley professor and LBNL Faculty Scientist Jim Demmel. CS267 is broadcast live over the internet and all NERSC users are invited to monitor the broadcast course, but course credit is available only to students registered for the

    7. Groups

      Open Energy Info (EERE)


      Big Data Concentrated Solar Power DataAnalysis energy efficiency energy storage expert systems machine learning...

    8. Absolute nuclear material assay using count distribution (LAMBDA) space

      DOE Patents [OSTI]

      Prasad, Manoj K.; Snyderman, Neal J.; Rowland, Mark S.

      2012-06-05

      A method of absolute nuclear material assay of an unknown source comprising counting neutrons from the unknown source and providing an absolute nuclear material assay utilizing a model to optimally compare to the measured count distributions. In one embodiment, the step of providing an absolute nuclear material assay comprises utilizing a random sampling of analytically computed fission chain distributions to generate a continuous time-evolving sequence of event-counts by spreading the fission chain distribution in time.
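      As a rough illustration of the idea (not the patented model), the sketch below draws chain multiplicities from a placeholder fission-chain count distribution and spreads the resulting detections in time as a time-evolving event sequence; the distribution, chain rate, and die-away constant are all assumed values chosen only for the example.

      # Schematic sketch: sample multiplicities from a placeholder fission-chain count
      # distribution, then spread the resulting events in time.
      import random

      CHAIN_COUNT_DIST = {0: 0.60, 1: 0.25, 2: 0.10, 3: 0.05}   # assumed P(n detected neutrons per chain)

      def sample_chain_counts(num_chains):
          population, weights = zip(*CHAIN_COUNT_DIST.items())
          return random.choices(population, weights=weights, k=num_chains)

      def spread_in_time(counts, chain_rate_hz=1000.0, die_away_s=50e-6):
          """Give each chain an exponential inter-arrival start time and spread its
          counts over an exponential die-away, returning sorted event timestamps."""
          events, t = [], 0.0
          for n in counts:
              t += random.expovariate(chain_rate_hz)
              events.extend(t + random.expovariate(1.0 / die_away_s) for _ in range(n))
          return sorted(events)

      if __name__ == "__main__":
          timestamps = spread_in_time(sample_chain_counts(10000))
          print(len(timestamps), "simulated neutron detection times")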

    9. Absolute nuclear material assay using count distribution (LAMBDA) space

      DOE Patents [OSTI]

      Prasad, Manoj K.; Snyderman, Neal J.; Rowland, Mark S.

      2015-12-01

      A method of absolute nuclear material assay of an unknown source comprising counting neutrons from the unknown source and providing an absolute nuclear material assay utilizing a model to optimally compare to the measured count distributions. In one embodiment, the step of providing an absolute nuclear material assay comprises utilizing a random sampling of analytically computed fission chain distributions to generate a continuous time-evolving sequence of event-counts by spreading the fission chain distribution in time.

    10. TEC Working Group Topic Groups Archives Route Identification Process |

      Office of Environmental Management (EM)

      Department of Energy Route Identification Process TEC Working Group Topic Groups Archives Route Identification Process Route Identification Process Items Available for Download PDF icon Routing Discussion Paper (April 1998) More Documents & Publications TEC Meeting Summaries - January 1997 TEC Meeting Summaries - July 1997 TEC Meeting Summaries - January 1998

    11. Computing contingency statistics in parallel.

      SciTech Connect (OSTI)

      Bennett, Janine Camille; Thompson, David; Pebay, Philippe Pierre

      2010-09-01

      Statistical analysis is typically used to reduce the dimensionality of and infer meaning from data. A key challenge of any statistical analysis package aimed at large-scale, distributed data is to address the orthogonal issues of parallel scalability and numerical stability. Many statistical techniques, e.g., descriptive statistics or principal component analysis, are based on moments and co-moments and, using robust online update formulas, can be computed in an embarrassingly parallel manner, amenable to a map-reduce style implementation. In this paper we focus on contingency tables, through which numerous derived statistics such as joint and marginal probability, point-wise mutual information, information entropy, and {chi}{sup 2} independence statistics can be directly obtained. However, contingency tables can become large as data size increases, requiring a correspondingly large amount of communication between processors. This potential increase in communication prevents optimal parallel speedup and is the main difference with moment-based statistics, where the amount of inter-processor communication is independent of data size. Here we present the design trade-offs which we made to implement the computation of contingency tables in parallel. We also study the parallel speedup and scalability properties of our open source implementation. In particular, we observe optimal speed-up and scalability when the contingency statistics are used in their appropriate context, namely, when the data input is not quasi-diffuse.
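      The map-reduce structure described above can be sketched in a few lines: each processor builds a partial contingency table over its own chunk of the data, the partial tables are merged by addition, and derived statistics such as the chi-square independence statistic follow from the merged table. The Python below is an illustrative serial stand-in, not the paper's open source implementation.

      # Map step: count (x, y) pairs per chunk. Reduce step: add partial tables.
      # Derived statistic: chi-square computed from the merged table's marginals.
      from collections import Counter
      from itertools import product

      def local_table(chunk):
          return Counter(chunk)                   # chunk is a list of (x, y) pairs

      def merge_tables(tables):
          total = Counter()
          for t in tables:
              total.update(t)                     # the communication step: sum of partial counts
          return total

      def chi_square(table):
          n = sum(table.values())
          xs = {x for x, _ in table}
          ys = {y for _, y in table}
          row = {x: sum(v for (a, _), v in table.items() if a == x) for x in xs}
          col = {y: sum(v for (_, b), v in table.items() if b == y) for y in ys}
          return sum((table.get((x, y), 0) - row[x] * col[y] / n) ** 2 / (row[x] * col[y] / n)
                     for x, y in product(xs, ys))

      if __name__ == "__main__":
          chunks = [[("a", 1), ("a", 1), ("b", 2)], [("b", 2), ("a", 2), ("b", 1)]]
          merged = merge_tables(local_table(c) for c in chunks)
          print(dict(merged), "chi2 =", round(chi_square(merged), 3))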

    12. HASQARD Focus Group - Hanford Site

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Contracting Wastren Advantage, Inc. HASQARD Focus Group Contracting ORP Contracts and Procurements RL Contracts and Procurements CH2M HILL Plateau Remediation Company Mission Support Alliance Washington Closure Hanford HPM Corporation (HPMC) Wastren Advantage, Inc. Analytical Services HASQARD Focus Group Bechtel National, Inc. Washington River Protection Solutions HASQARD Focus Group HASQARD Document HASQARD

    13. Creating Los Alamos Women's Group

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Raeanna Sharp-Geiger-Creating a cleaner, greener environment March 28, 2014 Creating Los Alamos Women's Group Inspired by their informal dinner discussions, Raeanna Sharp-Geiger and a few of her female colleagues decided to create a new resource a few years ago, the Los Alamos Women's Group. They wanted to create a comfortable environment where women from all across the diverse Lab could network, collaborate, share ideas and gain a broader perspective of the Lab's mission. The Women's Group has

    14. Copelouzos Group | Open Energy Information

      Open Energy Info (EERE)

      navigation, search Name: Copelouzos Group Place: Athens, Greece Product: Fully integrated business development organisation, servicing key industrial and technological sectors such...

    15. XSD Groups | Advanced Photon Source

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Imaging (IMG) Primary Contact: Francesco De Carlo Research Disciplines: Materials Science, Biology, Physics, Life Sciences The IMG group designs, supports, and operates...

    16. Arakaki Group | Open Energy Information

      Open Energy Info (EERE)

      Name: Arakaki Group Place: Fernandopolis, Sao Paulo, Brazil Product: Brazil based agriculture company, which owns 50% of an ethanol plant. Coordinates: -20.284244,...

    17. Royalstar Group | Open Energy Information

      Open Energy Info (EERE)

      search Name: Royalstar Group Place: Hefei, Anhui Province, China Sector: Solar Product: Chinese manufacturer of washing machines, solar water heaters, and as of June 2006,...

    18. Groupe Valeco | Open Energy Information

      Open Energy Info (EERE)

      Name: Groupe Valeco Place: Montpellier, France Zip: 34070 Sector: Biomass, Solar, Wind energy Product: Develops wind, solar, biomass and cogeneration projects in France....

    19. Airvoice Group | Open Energy Information

      Open Energy Info (EERE)

      Airvoice Group Place: Gurgaon, Haryana, India Zip: 122001 Sector: Services, Solar, Wind energy Product: Holding company with interest in tele-solutions, petrochemicals and...

    20. Kedco Group | Open Energy Information

      Open Energy Info (EERE)

      Co. Cork, Ireland Product: Cork-based project developer of biogas and gasification plants; also active in the residential heating sector. References: Kedco Group1 This...

    1. High Temperature Membrane Working Group

      Broader source: Energy.gov [DOE]

      The High Temperature Membrane Working Group consists of government, industry, and university researchers interested in developing high temperature membranes for fuel cells.

    2. Martifer Group | Open Energy Information

      Open Energy Info (EERE)

      search Name: Martifer Group Place: Oliveira de Frades, Portugal Zip: 3684-001 Sector: Biofuels, Solar, Wind energy Product: Portugal-based company divided across four core business...

    3. Traction Drive Systems Breakout Group

      Broader source: Energy.gov (indexed) [DOE]

      TRACTION DRIVE SYSTEM BREAKOUT GROUP EV Everywhere Workshop July 24, 2012 Breakout Session 1 - Discussion of Performance Targets and Barriers Comments on the Achievability of the...

    4. Groups | OpenEI Community

      Open Energy Info (EERE)

      technologies. Groups Home Title Posts Members Subgroups Description Created sort icon Big Clean Data 2 We aim to bring together professionals who want to share ideas, knowledge...

    5. DAQO Group | Open Energy Information

      Open Energy Info (EERE)

      An enterprise group whose industry field involves electric, environmental protection, science and technology and hotels, and is also setting up a polysilicon factory. References:...

    6. Tim Kuneli, Electronics Maintenance Group

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Tim Kuneli, Electronics Maintenance Group Print The recent ALS power supply failure was one of the most challenging projects that Electronics Engineer Technical Superintendent Tim...

    7. Acterra Group | Open Energy Information

      Open Energy Info (EERE)

      Product: Acterra Group provides consulting, project financing, services and support to energy, natural resource, and sustainability companies. Coordinates: 44.671312,...

    8. Marseglia Group | Open Energy Information

      Open Energy Info (EERE)

      diversified infrastructure developer. The firm is active in the fields of energy, tourism and hotels and real estate. References: Marseglia Group1 This article is a stub....

    9. TUNL Nuclear Data Evaluation Group

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      TUNL Nuclear Data Evaluation Group As a part of the United States Nuclear Data Network and the international Nuclear Structure and Decay Data Evaluators' Network, the Nuclear Data...

    10. Schaeffler Group | Open Energy Information

      Open Energy Info (EERE)

      rolling bearings and linear products worldwide as well as a renowned supplier to the automotive industry. References: Schaeffler Group1 This article is a stub. You can help...

    11. Quantum steady computation

      SciTech Connect (OSTI)

      Castagnoli, G. )

      1991-08-10

      This paper reports that current conceptions of quantum mechanical computers inherit from conventional digital machines two apparently interacting features, machine imperfection and temporal development of the computational process. On account of machine imperfection, the process would become ideally reversible only in the limiting case of zero speed. Therefore the process is irreversible in practice and cannot be considered to be a fundamental quantum one. By giving up classical features and using a linear, reversible and non-sequential representation of the computational process - not realizable in classical machines - the process can be identified with the mathematical form of a quantum steady state. This form of steady quantum computation would seem to have an important bearing on the notion of cognition.

    12. Cloud computing security.

      SciTech Connect (OSTI)

      Shin, Dongwan; Claycomb, William R.; Urias, Vincent E.

      2010-10-01

      Cloud computing is a paradigm rapidly being embraced by government and industry as a solution for cost-savings, scalability, and collaboration. While a multitude of applications and services are available commercially for cloud-based solutions, research in this area has yet to fully embrace the full spectrum of potential challenges facing cloud computing. This tutorial aims to provide researchers with a fundamental understanding of cloud computing, with the goals of identifying a broad range of potential research topics, and inspiring a new surge in research to address current issues. We will also discuss real implementations of research-oriented cloud computing systems for both academia and government, including configuration options, hardware issues, challenges, and solutions.

    13. Theory, Modeling and Computation

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      modeling and simulation will be enhanced not only by the wealth of data available from MaRIE but by the increased computational capacity made possible by the advent of extreme...

    14. Argonne Leadership Computing Facility

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Argonne Leadership Computing Facility Annual Report 2012. Contents: Director's Message; About ALCF; Introducing Mira

    15. Interagency Sustainability Working Group | Department of Energy

      Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

      Facilities Sustainable Buildings & Campuses Interagency Sustainability Working Group Interagency Sustainability Working Group The Interagency Sustainability Working Group ...

    16. Computational fluid dynamics improves liner cementing operation

      SciTech Connect (OSTI)

      Barton, N.A.; Archer, G.L. ); Seymour, D.A. )

      1994-09-26

      The use of computational fluid dynamics (CFD), an analytical tool for studying fluid mechanics, helped plan the successful cementing of a critical liner in a North Sea extended reach well. The results from CFD analysis increased the confidence in the primary cementing of the liner. CFD modeling was used to quantify the effects of increasing the displacement rate and of rotating the liner on the mud flow distribution in the annulus around the liner.

    17. Computational Physics and Methods

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Computational Physics and Methods Performing innovative simulations of physics phenomena on tomorrow's scientific computing platforms. Growth and emissivity of young galaxy hosting a supermassive black hole as calculated in cosmological code ENZO and post-processed with radiative transfer code AURORA. Rayleigh-Taylor turbulence imaging: the largest turbulence simulations to date. Advanced multi-scale modeling. Turbulence datasets. Density iso-surfaces

    18. Compute Reservation Request Form

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Compute Reservation Request Form Compute Reservation Request Form Users can request a scheduled reservation of machine resources if their jobs have special needs that cannot be accommodated through the regular batch system. A reservation brings some portion of the machine to a specific user or project for an agreed upon duration. Typically this is used for interactive debugging at scale or real time processing linked to some experiment or event. It is not intended to be used to guarantee fast

    19. New TRACC Cluster Computer

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      TRACC Cluster Computer With the addition of a new cluster called Zephyr that was made operational in September of this year (2012), TRACC now offers two clusters to choose from: Zephyr and our original cluster that has now been named Phoenix. Zephyr was acquired from Atipa technologies, and it is a 92-node system with each node having two AMD 16 core, 2.3 GHz, 32 GB processors. See also Computing Resources.

    20. Advanced Simulation and Computing

      National Nuclear Security Administration (NNSA)

      NA-ASC-117R-09-Vol.1-Rev.0 Advanced Simulation and Computing Program Plan FY09, October 2008. ASC Focal Point: Robert Meisner, Director, DOE/NNSA NA-121.2, 202-586-0908. Program Plan Focal Point for NA-121.2: Njema Frazier, DOE/NNSA NA-121.2, 202-586-5789. A Publication of the Office of Advanced Simulation & Computing, NNSA Defense Programs. Contents: Executive Summary; I. Introduction

    1. Computing | Department of Energy

      Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

      Computing Computing Fun fact: Most systems require air conditioning or chilled water to cool super powerful supercomputers, but the Olympus supercomputer at Pacific Northwest National Laboratory is cooled by the location's 65 degree groundwater. Traditional cooling systems could cost up to $61,000 in electricity each year, but this more efficient setup uses 70 percent less energy. | Photo courtesy of PNNL. Fun fact: Most systems require air conditioning or chilled water to cool super powerful

    2. David Turner to Retire from NERSC User Services Group

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      David Turner to Retire from NERSC User Services Group David Turner to Retire from NERSC User Services Group June 17, 2015 davidturnernow2 David Turner in the NERSC machine room, in front of Carver, circa 2015 Long-time User Services Group consultant David Turner is hanging up his headset after 17 years at NERSC. His love of math, science and computers began when he was still in high school, and it has not waned over the years. Here Turner, whose last official day is June 26, talks about how he

    3. Coal Distribution Database, 2006

      U.S. Energy Information Administration (EIA) Indexed Site

      It is also noted that the Destination State code of "X Export" indicates movements to foreign destinations. Domestic Coal Distribution...

    4. Distribution of Correspondence

      Broader source: Directives, Delegations, and Requirements [Office of Management (MA)]

      1996-08-30

      Defines correct procedures for distribution of correspondence to the Naval Reactors laboratories. Does not cancel another directive. Expired 8-30-97.

    5. Cooling water distribution system

      DOE Patents [OSTI]

      Orr, Richard

      1994-01-01

      A passive containment cooling system for a nuclear reactor containment vessel. Disclosed is a cooling water distribution system for introducing cooling water by gravity uniformly over the outer surface of a steel containment vessel using an interconnected series of radial guide elements, a plurality of circumferential collector elements and collector boxes to collect and feed the cooling water into distribution channels extending along the curved surface of the steel containment vessel. The cooling water is uniformly distributed over the curved surface by a plurality of weirs in the distribution channels.

    6. Annual Coal Distribution Tables

      U.S. Energy Information Administration (EIA) Indexed Site

      Domestic Distribution of U.S. Coal by Destination State, Consumer, Destination and Method of Transportation, 2001 (Thousand Short Tons) DESTINATION: Alabama State of Origin by...

    7. Coal Distribution Database, 2006

      U.S. Energy Information Administration (EIA) Indexed Site

      Report - Annual provides detailed information on domestic coal distribution by origin state, destination state, consumer category, and method of transportation. Also provided is...

    8. Distribution Grid Integration

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      ... Carbide Thyristors ECIS-Princeton Power Systems, Inc.: Demand Response Inverter DETL, Distribution Grid Integration, Energy, Energy Surety, Facilities, ...

    9. Federal Utility Partnership Working Group

      Broader source: Energy.gov [DOE]

      The Federal Utility Partnership Working Group (FUPWG) establishes partnerships and facilitates communications among Federal agencies, utilities, and energy service companies. The group develops strategies to implement cost-effective energy efficiency and water conservation projects through utility incentive programs at Federal sites.

    10. Focus Group Training Work Group Meeting | Department of Energy

      Energy Savers [EERE]

      Dates: July 10 - 11 The Focus Group Training Work Group met at the DOE National Training Center (NTC) in Albuquerque, NM on Tuesday, July 10 and Wednesday, July 11, 2012. The meeting was chaired by the Work Group co-chairs, Karen Boardman, Pete Stafford (AFL-CIO BCTD/CPWR), and Julie Johnston (EFCOG). Attachment 1 is the Meeting Agenda; Attachment 2 is a list of meeting attendees; and Attachment 3 is the proposed Radworker Training Reciprocity Program. Documents Available for Download PDF icon

    11. Can Cloud Computing Address the Scientific Computing Requirements for DOE

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Researchers? Well, Yes, No and Maybe Can Cloud Computing Address the Scientific Computing Requirements for DOE Researchers? Well, Yes, No and Maybe Can Cloud Computing Address the Scientific Computing Requirements for DOE Researchers? Well, Yes, No and Maybe January 30, 2012 Jon Bashor, Jbashor@lbl.gov, +1 510-486-5849 Magellan1.jpg Magellan at NERSC After a two-year study of the feasibility of cloud computing systems for meeting the ever-increasing computational needs of scientists,

    12. Computing and Computational Sciences Directorate - Joint Institute for

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Computational Sciences Joint Institute for Computational Sciences To help realize the full potential of new-generation computers for advancing scientific discovery, the University of Tennessee (UT) and Oak Ridge National Laboratory (ORNL) have created the Joint Institute for Computational Sciences (JICS). JICS combines the experience and expertise in theoretical and computational science and engineering, computer science, and mathematics in these two institutions and focuses these skills on

    13. Computing and Computational Sciences Directorate - National Center for

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Computational Sciences Home National Center for Computational Sciences The National Center for Computational Sciences (NCCS), formed in 1992, is home to two of Oak Ridge National Laboratory's (ORNL's) high-performance computing projects-the Oak Ridge Leadership Computing Facility (OLCF) and the National Climate-Computing Research Center (NCRC). The OLCF (www.olcf.ornl.gov) was established at ORNL in 2004 with the mission of standing up a supercomputer 100 times more powerful than the leading

    14. September 8, 2011, HSS/Union Focus Group Work Group Telecom - Work Group Guidance

      Energy Savers [EERE]

      -29-11 Draft Collaboration provides an opportunity to serve as an entity that is greater than the sum of its parts. HSS FOCUS GROUP DRAFT PROPOSED WORK GROUP GUIDANCE BACKGROUND: The HSS Focus Group provides a forum for communication and collaboration related to worker health, safety and security among HSS management and staff, labor unions, DOE Programs and stakeholders. Based on the foundation that labor union representatives are an essential source of frontline perspective in identifying,

    15. in High Performance Computing Computer System, Cluster, and Networking...

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      iSSH v. Auditd: Intrusion Detection in High Performance Computing Computer System, Cluster, and Networking Summer Institute David Karns, New Mexico State University Katy Protin,...

    16. GPU COMPUTING FOR PARTICLE TRACKING

      SciTech Connect (OSTI)

      Nishimura, Hiroshi; Song, Kai; Muriki, Krishna; Sun, Changchun; James, Susan; Qin, Yong

      2011-03-25

      This is a feasibility study of using a modern Graphics Processing Unit (GPU) to parallelize the accelerator particle tracking code. To demonstrate the massive parallelization features provided by GPU computing, a simplified TracyGPU program is developed for dynamic aperture calculation. Performances, issues, and challenges from introducing GPU are also discussed. General purpose Computation on Graphics Processing Units (GPGPU) brings massive parallel computing capabilities to numerical calculation. However, the unique architecture of GPU requires a comprehensive understanding of the hardware and programming model to be able to optimize existing applications well. In the field of accelerator physics, the dynamic aperture calculation of a storage ring, which is often the most time consuming part of the accelerator modeling and simulation, can benefit from GPU due to its embarrassingly parallel feature, which fits well with the GPU programming model. In this paper, we use the Tesla C2050 GPU, which consists of 14 multi-processors (MP) with 32 cores on each MP, therefore a total of 448 cores, to host thousands of threads dynamically. A thread is a logical execution unit of the program on the GPU. In the GPU programming model, threads are grouped into a collection of blocks. Within each block, multiple threads share the same code, and up to 48 KB of shared memory. Multiple thread blocks form a grid, which is executed as a GPU kernel. A simplified code that is a subset of Tracy++ [2] is developed to demonstrate the possibility of using GPU to speed up the dynamic aperture calculation by having each thread track a particle.
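      The thread-to-particle mapping is the central idea. The plain-Python sketch below (not CUDA code and not the TracyGPU program; the one-turn map and loss criterion are placeholders) emulates a one-dimensional kernel launch on the CPU: each logical thread computes its global index from its block and thread indices, guards against out-of-range indices, and tracks exactly one particle.

      # CPU-side emulation of a 1-D GPU kernel launch: global id = block * blockDim + thread,
      # one particle per logical thread, survival flag per particle (dynamic-aperture style).
      THREADS_PER_BLOCK = 32

      def track_particle(x, xp, turns=1000):
          for _ in range(turns):
              x, xp = x + xp, xp - 0.01 * x       # placeholder linear one-turn map
              if abs(x) > 1.0:                    # particle lost -> outside the aperture
                  return False
          return True

      def launch(particles):
          num_blocks = (len(particles) + THREADS_PER_BLOCK - 1) // THREADS_PER_BLOCK
          survived = [False] * len(particles)
          for block in range(num_blocks):
              for thread in range(THREADS_PER_BLOCK):
                  gid = block * THREADS_PER_BLOCK + thread
                  if gid < len(particles):        # bounds guard, exactly as in a real kernel
                      survived[gid] = track_particle(*particles[gid])
          return survived

      if __name__ == "__main__":
          seeds = [(0.001 * i, 0.0) for i in range(64)]
          print(sum(launch(seeds)), "of", len(seeds), "particles survive")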

    17. Extensible Computational Chemistry Environment

      Energy Science and Technology Software Center (OSTI)

      2012-08-09

      ECCE provides a sophisticated graphical user interface, scientific visualization tools, and the underlying data management framework enabling scientists to efficiently set up calculations and store, retrieve, and analyze the rapidly growing volumes of data produced by computational chemistry studies. ECCE was conceived as part of the Environmental Molecular Sciences Laboratory construction to solve the problem of researchers being able to effectively utilize complex computational chemistry codes and massively parallel high performance compute resources. Bringing the power of these codes and resources to the desktops of researchers, and thus enabling world class research without users needing a detailed understanding of the inner workings of either the theoretical codes or the supercomputers needed to run them, was a grand challenge problem in the original version of the EMSL. ECCE allows collaboration among researchers using a web-based data repository where the inputs and results for all calculations done within ECCE are organized. ECCE is a first-of-its-kind end-to-end problem solving environment for all phases of computational chemistry research: setting up calculations with sophisticated GUI and direct manipulation visualization tools, submitting and monitoring calculations on remote high performance supercomputers without having to be familiar with the details of using these compute resources, and performing results visualization and analysis including creating publication quality images. ECCE is a suite of tightly integrated applications that are employed as the user moves through the modeling process.

    18. Information Science, Computing, Applied Math

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Capabilities » Information Science, Computing, Applied Math Information Science, Computing, Applied Math National security depends on science and technology. The United States relies on Los Alamos National Laboratory for the best of both. No place on Earth pursues a broader array of world-class scientific endeavors. Computer, Computational, and Statistical Sciences (CCS)» High Performance Computing (HPC)» Extreme Scale Computing, Co-design»

    19. Super recycled water: quenching computers

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Super recycled water: quenching computers Super recycled water: quenching computers New facility and methods support conserving water and creating recycled products. Using reverse...

    20. Computer simulation | Open Energy Information

      Open Energy Info (EERE)

      Computer simulation Jump to: navigation, search OpenEI Reference LibraryAdd to library Web Site: Computer simulation Author wikipedia Published wikipedia, 2013 DOI Not Provided...

    1. NREL: Computational Science Home Page

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      high-performance computing, computational science, applied mathematics, scientific data management, visualization, and informatics. NREL is home to the largest high performance...

    2. SCC: The Strategic Computing Complex

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      computer room, which is an open room about three-fourths the size of a football field. The Strategic Computing Complex (SCC) at the Los Alamos National Laboratory...

    3. Method and structure for skewed block-cyclic distribution of lower-dimensional data arrays in higher-dimensional processor grids

      DOE Patents [OSTI]

      Chatterjee, Siddhartha (Yorktown Heights, NY); Gunnels, John A. (Brewster, NY)

      2011-11-08

      A method and structure of distributing elements of an array of data in a computer memory to a specific processor of a multi-dimensional mesh of parallel processors includes designating a distribution of elements of at least a portion of the array to be executed by specific processors in the multi-dimensional mesh of parallel processors. The pattern of the designating includes a cyclical repetitive pattern of the parallel processor mesh, as modified to have a skew in at least one dimension so that both a row of data in the array and a column of data in the array map to respective contiguous groupings of the processors such that a dimension of the contiguous groupings is greater than one.
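      A toy version of the skew can be written as an owner-computes rule: a standard two-dimensional block-cyclic assignment whose column coordinate is shifted by the row-block index, so that a single array row or column no longer lands on a single row or column of the processor grid. The block sizes, grid shape, and function below are illustrative assumptions, not the patented mapping itself.

      # Owner-computes rule for a skewed block-cyclic distribution (illustrative parameters).
      MB, NB = 2, 2          # block sizes along array rows and columns
      P, Q = 3, 4            # processor grid dimensions

      def owner(i, j):
          bi, bj = i // MB, j // NB
          p = bi % P
          q = (bj + bi) % Q                # the skew: column placement shifts with the row block
          return p, q

      if __name__ == "__main__":
          row_procs = sorted({owner(0, j) for j in range(8 * NB)})
          col_procs = sorted({owner(i, 0) for i in range(8 * MB)})
          print("array row 0 maps to processors:", row_procs)
          print("array col 0 maps to processors:", col_procs)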

    4. Human-computer interface

      DOE Patents [OSTI]

      Anderson, Thomas G.

      2004-12-21

      The present invention provides a method of human-computer interfacing. Force feedback allows intuitive navigation and control near a boundary between regions in a computer-represented space. For example, the method allows a user to interact with a virtual craft, then push through the windshield of the craft to interact with the virtual world surrounding the craft. As another example, the method allows a user to feel transitions between different control domains of a computer representation of a space. The method can provide for force feedback that increases as a user's locus of interaction moves near a boundary, then perceptibly changes (e.g., abruptly drops or changes direction) when the boundary is traversed.
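      The boundary behavior can be illustrated with a simple force profile: no feedback far from the boundary, a force that ramps up as the locus of interaction approaches it, and a perceptible drop once the boundary is traversed. The constants and function below are invented for illustration and are only a schematic of the described interaction, not the patented interface.

      # Schematic boundary-feedback profile: ramp up on approach, drop after traversal.
      def boundary_force(distance_to_boundary, ramp_range=0.2, max_force=5.0):
          """distance > 0: approaching the boundary; distance <= 0: boundary traversed."""
          if distance_to_boundary <= 0.0:
              return 0.2 * max_force             # perceptible drop after crossing
          if distance_to_boundary >= ramp_range:
              return 0.0                         # far from the boundary: no feedback
          return max_force * (1.0 - distance_to_boundary / ramp_range)

      if __name__ == "__main__":
          for d in (0.30, 0.15, 0.05, 0.00, -0.05):
              print("d = %+.2f -> force = %.2f" % (d, boundary_force(d)))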

    5. THREE DISCRETE GROUPS WITH HOMOGENEOUS CHEMISTRY ALONG THE RED GIANT BRANCH IN THE GLOBULAR CLUSTER NGC2808

      SciTech Connect (OSTI)

      Carretta, E.

      2014-11-10

      We present the homogeneous reanalysis of Mg and Al abundances from high resolution UVES/FLAMES spectra for 31 red giants in the globular cluster NGC2808. We found a well defined Mg-Al anticorrelation reaching a regime of subsolar Mg abundance ratios, with a spread of about 1.4 dex in [Al/Fe]. The main result from the improved statistics of our sample is that the distribution of stars is not continuous along the anticorrelation because they are neatly clustered into three distinct clumps, each with different chemical compositions. One group (P) shows a primordial composition of field stars of similar metallicity, and the other two (I and E) have increasing abundances of Al and decreasing abundances of Mg. The fraction of stars we found in the three components (P: 68%, I: 19%, E: 13%) is in excellent agreement with the ratios computed for the three distinct main sequences in NGC2808: for the first time there is a clear correspondence between discrete photometric sequences of dwarfs and distinct groups of giants with homogeneous chemistry. The composition of the I group cannot be reproduced by mixing of matter with extreme processing in hot H-burning and gas with pristine, unprocessed composition, as also found in the recent analysis of three discrete groups in NGC6752. This finding suggests that different classes of polluters were probably at work in NGC2808 as well.

    6. Distribution Workshop | Department of Energy

      Office of Environmental Management (EM)

      Variable distributed generation Dispatchable distributed generation Electric vehicle charging and electrolyzers Energy storage Building and industrial loads and demand response ...

    7. Tinna Group | Open Energy Information

      Open Energy Info (EERE)

      New Delhi, Delhi (NCT), India Zip: 110030 Product: The India-based Tinna Group is a biodiesel producer, an oil seed processor, but also a transport company which has formed two...

    8. Heolo Group | Open Energy Information

      Open Energy Info (EERE)

      Product: Yunnan province based thermostable LiMn2O4 cathode material producer for Lithium secondary batteries. References: Heolo Group1 This article is a stub. You can help...

    9. Tonon Group | Open Energy Information

      Open Energy Info (EERE)

      Tonon Group Place: Bocaina, Sao Paulo, Brazil Zip: 17240-000 Product: Brazil-based ethanol producer, which owns two ethanol plants located in Bocaina, Sao Paulo, and Maracaju,...

    10. Noble Group | Open Energy Information

      Open Energy Info (EERE)

      Wealth Fund 2 Noble purchased 5.1% of USEC, a US company which enriches uranium for nuclear power reactors, in June 2010 2 References "Noble Group (HK)" 2.0 2.1 "New...

    11. Midwest Hydro Users Group Meeting

      Broader source: Energy.gov [DOE]

      The Midwest Hydro Users Group will be holding their annual Fall meeting on November 12th and 13th in Wausau, Wisconsin.  An Owners-only meeting on the afternoon of the 12th followed by a full...

    12. Junqueira Group | Open Energy Information

      Open Energy Info (EERE)

      Brazil Product: Brazilian sugar and ethanol company planning to build a mill in Paraguay. References: Junqueira Group1 This article is a stub. You can help OpenEI by...

    13. Zeppini Group | Open Energy Information

      Open Energy Info (EERE)

      Brazil Product: Brazilian firm that sells PV applications for homes, industry and business. References: Zeppini Group1 This article is a stub. You can help OpenEI by...

    14. AEO2016 Electricity Working Group

      Gasoline and Diesel Fuel Update (EIA)

      Office of Electricity, Coal, Nuclear, and Renewables Analysis December 8, 2015 | Washington, DC AEO2016 Electricity Working Group WORKING GROUP PRESENTATION FOR DISCUSSION PURPOSES DO NOT QUOTE OR CITE AS RESULTS ARE SUBJECT TO CHANGE What to look for: Electricity sector in AEO2016 * Inclusion of EPA final Clean Power Plan in Reference Case * Updated cost estimates for new generating technologies * Major data update on existing coal plant status: MATS- compliant technology or retirement

    15. Communications and Media Relations Group

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Communications and Media Relations Group Public Affairs Communications Community Public Affairs Org Chart Education Creative Services Berkeley Lab's Communications and Media Relations Group is responsible for gathering, reporting, and disseminating news about the Lab to both internal and external audiences, including employees, the media, and the community. The latest news can be

    16. LLNL Chemical Kinetics Modeling Group

      SciTech Connect (OSTI)

      Pitz, W J; Westbrook, C K; Mehl, M; Herbinet, O; Curran, H J; Silke, E J

      2008-09-24

      The LLNL chemical kinetics modeling group has been responsible for much progress in the development of chemical kinetic models for practical fuels. The group began its work in the early 1970s, developing chemical kinetic models for methane, ethane, ethanol and halogenated inhibitors. Most recently, it has been developing chemical kinetic models for large n-alkanes, cycloalkanes, hexenes, and large methyl esters. These component models are needed to represent gasoline, diesel, jet, and oil-sand-derived fuels.

    17. Orchestrating Distributed Resource Ensembles for Petascale Science

      SciTech Connect (OSTI)

      Baldin, Ilya; Mandal, Anirban; Ruth, Paul; Yufeng, Xin

      2014-04-24

      Distributed, data-intensive computational science applications of interest to DOE scientific communities move large amounts of data for experiment data management, distributed analysis steps, remote visualization, and accessing scientific instruments. These applications need to orchestrate ensembles of resources from multiple resource pools and interconnect them with high-capacity multi-layered networks across multiple domains. It is highly desirable that mechanisms are designed that provide this type of resource provisioning capability to a broad class of applications. It is also important to have coherent monitoring capabilities for such complex distributed environments. In this project, we addressed these problems by designing an abstract API, enabled by novel semantic resource descriptions, for provisioning complex and heterogeneous resources from multiple providers using their native provisioning mechanisms and control planes: computational, storage, and multi-layered high-speed network domains. We used an extensible resource representation based on semantic web technologies to afford maximum flexibility to applications in specifying their needs. We evaluated the effectiveness of provisioning using representative data-intensive applications. We also developed mechanisms for providing feedback about resource performance to the application, to enable closed-loop feedback control and dynamic adjustments to resource allocations (elasticity). This was enabled through development of a novel persistent query framework that consumes disparate sources of monitoring data, including perfSONAR, and provides scalable distribution of asynchronous notifications.

    18. Katherine Riley | Argonne Leadership Computing Facility

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Katherine Riley Director of Science Katherine Riley Argonne National Laboratory 9700 South Cass Avenue Building 240 - Rm. 2125 Argonne, IL 60439 630-252-5786 riley@alcf.anl.gov Katherine Riley is the Director of Science for the Scientific Applications (Catalyst) Group at the ALCF. Trained at the University of Chicago Flash Center, Riley helped develop a community code designed to solve a wide variety of scientific problems on the largest available computers. At Argonne, she has worked closely

    19. WINDExchange: Distributed Wind

      Wind Powering America (EERE)

      Distributed Wind Photo of a small wind turbine next to a farm house with a colorful sunset in the background. The distributed wind market includes wind turbines and projects of many sizes, from small wind turbines less than 1 kilowatt (kW) to multi-megawatt wind farms. The term "distributed wind" describes off-grid or grid-connected wind turbines at homes, farms and ranches, businesses, public and industrial facilities, and other sites. The turbines can provide all of the power used at

    20. Differences Between Distributed and Parallel Systems

      SciTech Connect (OSTI)

      Brightwell, R.; Maccabe, A.B.; Rissen, R.

      1998-10-01

      Distributed systems have been studied for twenty years and are now coming into wider use as fast networks and powerful workstations become more readily available. In many respects a massively parallel computer resembles a network of workstations and it is tempting to port a distributed operating system to such a machine. However, there are significant differences between these two environments and a parallel operating system is needed to get the best performance out of a massively parallel system. This report characterizes the differences between distributed systems, networks of workstations, and massively parallel systems and analyzes the impact of these differences on operating system design. In the second part of the report, we introduce Puma, an operating system specifically developed for massively parallel systems. We describe Puma portals, the basic building blocks for message passing paradigms implemented on top of Puma, and show how the differences observed in the first part of the report have influenced the design and implementation of Puma.