National Library of Energy BETA

Sample records for distributed computing group

  1. Computing Frontier: Distributed Computing

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Computing Frontier: Distributed Computing and Facility Infrastructures Conveners: Kenneth Bloom 1 , Richard Gerber 2 1 Department of Physics and Astronomy, University of Nebraska-Lincoln 2 National Energy Research Scientific Computing Center (NERSC), Lawrence Berkeley National Laboratory 1.1 Introduction The field of particle physics has become increasingly reliant on large-scale computing resources to address the challenges of analyzing large datasets, completing specialized computations and

  2. NERSC seeks Computational Systems Group Lead

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    seeks Computational Systems Group Lead NERSC seeks Computational Systems Group Lead January 6, 2011 by Katie Antypas Note: This position is now closed. The Computational Systems Group provides production support and advanced development for the supercomputer systems at NERSC. Manage the Computational Systems Group (CSG) which provides production support and advanced development for the supercomputer systems at NERSC (National Energy Research Scientific Computing Center). These systems, which

  3. Computer Networking Group | Stanford Synchrotron Radiation Lightsource

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Computer Networking Group Do you need help? For assistance please submit a CNG Help Request ticket. CNG Logo Chris Ramirez SSRL Computer and Networking Group (650) 926-2901 | email ...

  4. Distributed Energy Financial Group | Open Energy Information

    Open Energy Info (EERE)

    Name: Distributed Energy Financial Group. Place: Washington, DC. Zip: 20016-2512. Sector: Services. Product: The...

  5. Distributed Energy Systems Integration Group (Fact Sheet)

    SciTech Connect (OSTI)

    Not Available

    2009-10-01

    Factsheet developed to describe the activities of the Distributed Energy Systems Integration Group within NREL's Electricity, Resources, and Buildings Systems Integration center.

  6. Jay Srinivasan, Group Lead, Computational Systems

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Group Lead, Computational Systems. Slide content from the NERSC Users Group (NUG) Computational Systems Update, February 2015: sponsored compute systems (Carver, PDSF, JGI, KBASE, HEP) on 8x FDR InfiniBand; file systems of 4 PB /global/scratch, 5 PB /project, and 250 TB /home; HPSS archive (48 GB/s) with 45 PB stored, 240 PB capacity, and 40 years of community data; local scratch of 2.2 PB at 70 GB/s and 6.4 PB at 140 GB/s; 80 GB/s, 2x10 Gb, and 1x100 Gb links to the Science Data Network; and visualization and analytics, data transfer nodes, advanced architecture, and science gateway services.

  7. Distributed Real-Time Computing with Harness

    SciTech Connect (OSTI)

    Di Saverio, Emanuele; Cesati, Marco; Di Biagio, Christian; Pennella, Guido; Engelmann, Christian

    2007-01-01

    Modern parallel and distributed computing solutions are often built onto a ''middleware'' software layer providing a higher and common level of service between computational nodes. Harness is an adaptable, plugin-based middleware framework for parallel and distributed computing. This paper reports recent research and development results of using Harness for real-time distributed computing applications in the context of an industrial environment with the needs to perform several safety critical tasks. The presented work exploits the modular architecture of Harness in conjunction with a lightweight threaded implementation to resolve several real-time issues by adding three new Harness plug-ins to provide a prioritized lightweight execution environment, low latency communication facilities, and local timestamped event logging.

  8. Distributions of methyl group rotational barriers in polycrystalline organic solids

    SciTech Connect (OSTI)

    Beckmann, Peter A. E-mail: wangxianlong@uestc.edu.cn; Conn, Kathleen G.; Division of Education and Human Services, Neumann University, One Neumann Drive, Aston, Pennsylvania 19014-1298 ; Mallory, Clelia W.; Department of Chemistry, Bryn Mawr College, 101 North Merion Ave., Bryn Mawr, Pennsylvania 19010-2899 ; Mallory, Frank B.; Rheingold, Arnold L.; Rotkina, Lolita; Wang, Xianlong E-mail: wangxianlong@uestc.edu.cn

    2013-11-28

    We bring together solid state {sup 1}H spin-lattice relaxation rate measurements, scanning electron microscopy, single crystal X-ray diffraction, and electronic structure calculations for two methyl substituted organic compounds to investigate methyl group (CH{sub 3}) rotational dynamics in the solid state. Methyl group rotational barrier heights are computed using electronic structure calculations, both in isolated molecules and in molecular clusters mimicking a perfect single crystal environment. The calculations are performed on suitable clusters built from the X-ray diffraction studies. These calculations allow for an estimate of the intramolecular and the intermolecular contributions to the barrier heights. The {sup 1}H relaxation measurements, on the other hand, are performed with polycrystalline samples which have been investigated with scanning electron microscopy. The {sup 1}H relaxation measurements are best fitted with a distribution of activation energies for methyl group rotation and we propose, based on the scanning electron microscopy images, that this distribution arises from molecules near crystallite surfaces or near other crystal imperfections (vacancies, dislocations, etc.). An activation energy characterizing this distribution is compared with a barrier height determined from the electronic structure calculations and a consistent model for methyl group rotation is developed. The compounds are 1,6-dimethylphenanthrene and 1,8-dimethylphenanthrene and the methyl group barriers being discussed and compared are in the 2-12 kJ mol{sup -1} range.

  9. Computational social dynamic modeling of group recruitment.

    SciTech Connect (OSTI)

    Berry, Nina M.; Lee, Marinna; Pickett, Marc; Turnley, Jessica Glicken; Smrcka, Julianne D.; Ko, Teresa H.; Moy, Timothy David; Wu, Benjamin C.

    2004-01-01

    The Seldon software toolkit combines concepts from agent-based modeling and social science to create a computational social dynamic model for group recruitment. The underlying recruitment model is based on a unique three-level hybrid agent-based architecture that contains simple agents (level one), abstract agents (level two), and cognitive agents (level three). The uniqueness of this architecture begins with abstract agents that permit the model to include social concepts (gang) or institutional concepts (school) into a typical software simulation environment. The future addition of cognitive agents to the recruitment model will provide a unique entity that does not exist in any agent-based modeling toolkits to date. We use social networks to provide an integrated mesh within and between the different levels. This Java based toolkit is used to analyze different social concepts based on initialization input from the user. The input alters a set of parameters used to influence the values associated with the simple agents, abstract agents, and the interactions (simple agent-simple agent or simple agent-abstract agent) between these entities. The results of the phase-1 Seldon toolkit provide insight into how certain social concepts apply to different scenario development for inner city gang recruitment.
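
    To make the three-level idea concrete, here is a minimal illustrative sketch in Python (the Seldon toolkit itself is Java; all class and variable names below are hypothetical, not the toolkit's API). It shows simple agents (people), one abstract agent (a gang), and a social-network mesh influencing recruitment.

```python
# Illustrative sketch (not the Seldon toolkit's API): a three-level hybrid
# agent model where an abstract agent (e.g. a gang or school) aggregates
# simple agents, and a social network meshes the levels together.
import random

class SimpleAgent:                      # level one: an individual person
    def __init__(self, name, susceptibility):
        self.name = name
        self.susceptibility = susceptibility   # 0..1, tendency to be recruited
        self.member_of = None

class AbstractAgent:                    # level two: a social/institutional concept
    def __init__(self, name, pull):
        self.name = name
        self.pull = pull                # recruiting pressure exerted per step
        self.members = []

    def recruit(self, person, ties):
        """Probabilistic recruitment driven by institutional pull and peer ties."""
        peer_boost = 0.1 * sum(1 for friend in ties.get(person.name, [])
                               if friend.member_of is self)
        if random.random() < person.susceptibility * self.pull + peer_boost:
            person.member_of = self
            self.members.append(person)

# A toy run: one gang, a handful of people, a small friendship network.
people = [SimpleAgent(f"p{i}", random.uniform(0.1, 0.6)) for i in range(20)]
gang = AbstractAgent("gang", pull=0.3)
ties = {p.name: random.sample(people, 3) for p in people}   # social-network mesh
for _ in range(10):                     # simulation steps
    for p in people:
        if p.member_of is None:
            gang.recruit(p, ties)
print(f"recruited {len(gang.members)} of {len(people)}")
```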

  10. NERSC seeks Computational Systems Group Lead

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    and advanced development for the supercomputer systems at NERSC (National Energy Research Scientific Computing ... workload demands within hiring and budget constraints. ...

  11. Interoperable PKI Data Distribution in Computational Grids

    SciTech Connect (OSTI)

    Pala, Massimiliano; Cholia, Shreyas; Rea, Scott A.; Smith, Sean W.

    2008-07-25

    One of the most successful working examples of virtual organizations, computational grids need authentication mechanisms that inter-operate across domain boundaries. Public Key Infrastructures(PKIs) provide sufficient flexibility to allow resource managers to securely grant access to their systems in such distributed environments. However, as PKIs grow and services are added to enhance both security and usability, users and applications must struggle to discover available resources-particularly when the Certification Authority (CA) is alien to the relying party. This article presents how to overcome these limitations of the current grid authentication model by integrating the PKI Resource Query Protocol (PRQP) into the Grid Security Infrastructure (GSI).

  12. Bio-Derived Liquids to Hydrogen Distributed Reforming Working Group

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Department of Energy Bio-Derived Liquids to Hydrogen Distributed Reforming Working Group The Bio-Derived Liquids to Hydrogen Distributed Reforming Working Group (BILIWG), launched in October 2006, provides a forum for effective communication and collaboration among participants in DOE Fuel Cell Technologies Office (FCT) cost-shared research directed at distributed bio-liquid reforming. The Working Group includes individuals from DOE, the national laboratories, industry, and academia.

  13. Bio-Derived Liquids to Hydrogen Distributed Reforming Working Group

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Bio-Derived Liquids to Hydrogen Distributed Reforming Working Group (BILIWG), Hydrogen Separation and Purification Working Group (PURIWG) & Hydrogen Production Technical Team. 2007 Annual and Merit Review Reports compiled for the

  14. High Performance Computational Biology: A Distributed computing Perspective (2010 JGI/ANL HPC Workshop)

    ScienceCinema (OSTI)

    Konerding, David [Google, Inc.]

    2011-06-08

    David Konerding from Google, Inc. gives a presentation on "High Performance Computational Biology: A Distributed Computing Perspective" at the JGI/Argonne HPC Workshop on January 26, 2010.

  15. Perspectives on distributed computing : thirty people, four user types, and the distributed computing user experience.

    SciTech Connect (OSTI)

    Childers, L.; Liming, L.; Foster, I.; Mathematics and Computer Science; Univ. of Chicago

    2008-10-15

    This report summarizes the methodology and results of a user perspectives study conducted by the Community Driven Improvement of Globus Software (CDIGS) project. The purpose of the study was to document the work-related goals and challenges facing today's scientific technology users, to record their perspectives on Globus software and the distributed-computing ecosystem, and to provide recommendations to the Globus community based on the observations. Globus is a set of open source software components intended to provide a framework for collaborative computational science activities. Rather than attempting to characterize all users or potential users of Globus software, our strategy has been to speak in detail with a small group of individuals in the scientific community whose work appears to be the kind that could benefit from Globus software, learn as much as possible about their work goals and the challenges they face, and describe what we found. The result is a set of statements about specific individuals' experiences. We do not claim that these are representative of a potential user community, but we do claim to have found commonalities and differences among the interviewees that may be reflected in the user community as a whole. We present these as a series of hypotheses that can be tested by subsequent studies, and we offer recommendations to Globus developers based on the assumption that these hypotheses are representative. Specifically, we conducted interviews with thirty technology users in the scientific community. We included both people who have used Globus software and those who have not. We made a point of including individuals who represent a variety of roles in scientific projects, for example, scientists, software developers, engineers, and infrastructure providers. The following material is included in this report: (1) A summary of the reported work-related goals, significant issues, and points of satisfaction with the use of Globus software; (2

  16. Distributed Design and Analysis of Computer Experiments

    Energy Science and Technology Software Center (OSTI)

    2002-11-11

    DDACE is a C++ object-oriented software library for the design and analysis of computer experiments. DDACE can be used to generate samples from a variety of sampling techniques. These samples may be used as input to an application code. DDACE also contains statistical tools such as response surface models and correlation coefficients to analyze input/output relationships between variables in an application code. DDACE can generate input values for uncertain variables within a user's application. For example, a user might like to vary a temperature variable as well as some material variables in a series of simulations. Through the series of simulations the user might be looking for optimal settings of parameters based on some user criteria. Or the user may be interested in the sensitivity to input variability shown by an output variable. In either case, the user may provide information about the suspected ranges and distributions of a set of input variables, along with a sampling scheme, and DDACE will generate input points based on these specifications. The input values generated by DDACE and the one or more outputs computed through the user's application code can be analyzed with a variety of statistical methods. This can lead to a wealth of information about the relationships between the variables in the problem. While statistical and mathematical packages may be employed to carry out the analysis on the input/output relationships, DDACE also contains some tools for analyzing the simulation data. DDACE incorporates a software package called MARS (Multivariate Adaptive Regression Splines), developed by Jerome Friedman. MARS is used for generating a spline surface fit of the data. With MARS, a model simplification may be calculated using the input and corresponding output values for the user's application problem. The MARS grid data may be used for generating 3-dimensional response surface plots of the simulation data. DDACE also contains an implementation
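
    The general sample-run-analyze workflow described above can be sketched in a few lines of Python. This is a generic illustration under stated assumptions (simple Latin-hypercube-style sampling, a stand-in application, and a crude correlation check); it is not DDACE's actual C++ API.

```python
# Generic sketch of the workflow DDACE automates (not DDACE's C++ API):
# sample uncertain inputs, run the application on each sample, then study
# input/output relationships.
import random

def latin_hypercube(n_samples, bounds):
    """One stratified sample per interval slice, per variable (simple LHS)."""
    columns = []
    for lo, hi in bounds:
        cuts = [lo + (hi - lo) * (i + random.random()) / n_samples
                for i in range(n_samples)]
        random.shuffle(cuts)
        columns.append(cuts)
    return list(zip(*columns))          # rows are (temperature, thickness)

def application_code(temperature, thickness):
    # Stand-in for the user's simulation code.
    return 2.0 * temperature - 15.0 * thickness + random.gauss(0, 0.5)

bounds = [(300.0, 400.0), (0.01, 0.05)]       # temperature [K], thickness [m]
samples = latin_hypercube(50, bounds)
outputs = [application_code(t, d) for t, d in samples]

def corr(xs, ys):
    """Crude sensitivity check: Pearson correlation of one input with the output."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy)

for i, name in enumerate(["temperature", "thickness"]):
    print(name, round(corr([s[i] for s in samples], outputs), 2))
```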

  17. Establishing a group of endpoints in a parallel computer

    DOE Patents [OSTI]

    Archer, Charles J.; Blocksome, Michael A.; Ratterman, Joseph D.; Smith, Brian E.; Xue, Hanhong

    2016-02-02

    A parallel computer executes a number of tasks, each task includes a number of endpoints and the endpoints are configured to support collective operations. In such a parallel computer, establishing a group of endpoints includes receiving a user specification of a set of endpoints included in a global collection of endpoints, where the user specification defines the set in accordance with a predefined virtual representation of the endpoints, the predefined virtual representation is a data structure setting forth an organization of tasks and endpoints included in the global collection of endpoints and the user specification defines the set of endpoints without a user specification of a particular endpoint; and defining a group of endpoints in dependence upon the predefined virtual representation of the endpoints and the user specification.
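
    The key idea is that the user names a group by a rule over a virtual task/endpoint layout rather than by enumerating endpoints. The sketch below is a hypothetical Python illustration of that idea only; the data structure and function names are assumptions, not the patented implementation.

```python
# Hypothetical sketch of the idea in the abstract: a virtual representation of
# tasks and endpoints lets a user define a group by rule, never naming any
# individual endpoint.
from dataclasses import dataclass

@dataclass(frozen=True)
class Endpoint:
    task: int        # owning task rank
    index: int       # endpoint index within the task

def virtual_representation(num_tasks, endpoints_per_task):
    """Predefined virtual representation: tasks x endpoints laid out as a grid."""
    return {t: [Endpoint(t, i) for i in range(endpoints_per_task)]
            for t in range(num_tasks)}

def establish_group(representation, user_spec):
    """user_spec is a rule over (task, index), e.g. 'endpoint 0 of every even task'."""
    return [ep for eps in representation.values() for ep in eps
            if user_spec(ep.task, ep.index)]

rep = virtual_representation(num_tasks=8, endpoints_per_task=4)
group = establish_group(rep, lambda task, idx: task % 2 == 0 and idx == 0)
print(group)     # endpoints selected without naming any one explicitly
```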

  18. Working Group Report: Computing for the Intensity Frontier

    SciTech Connect (OSTI)

    Rebel, B.; Sanchez, M.C.; Wolbers, S.

    2013-10-25

    This is the report of the Computing Frontier working group on Lattice Field Theory prepared for the proceedings of the 2013 Community Summer Study ("Snowmass"). We present the future computing needs and plans of the U.S. lattice gauge theory community and argue that continued support of the U.S. (and worldwide) lattice-QCD effort is essential to fully capitalize on the enormous investment in the high-energy physics experimental program. We first summarize the dramatic progress of numerical lattice-QCD simulations in the past decade, with some emphasis on calculations carried out under the auspices of the U.S. Lattice-QCD Collaboration, and describe a broad program of lattice-QCD calculations that will be relevant for future experiments at the intensity and energy frontiers. We then present details of the computational hardware and software resources needed to undertake these calculations.

  19. Parallel Computing Environments and Methods for Power Distribution System Simulation

    SciTech Connect (OSTI)

    Lu, Ning; Taylor, Zachary T.; Chassin, David P.; Guttromson, Ross T.; Studham, Scott S.

    2005-11-10

    The development of cost-effective high-performance parallel computing on multi-processor supercomputers makes it attractive to port excessively time-consuming simulation software from personal computers (PCs) to supercomputers. The power distribution system simulator (PDSS) takes a bottom-up approach and simulates load at the appliance level, where detailed thermal models for appliances are used. This approach works well for a small power distribution system consisting of a few thousand appliances. When the number of appliances increases, the simulation uses up the PC memory and its run time increases to a point where the approach is no longer feasible for modeling a practical large power distribution system. This paper presents an effort made to port a PC-based power distribution system simulator (PDSS) to a 128-processor shared-memory supercomputer. The paper offers an overview of the parallel computing environment and a description of the modifications made to the PDSS model. The performance of the PDSS running on a standalone PC and on the supercomputer is compared. Future research directions for utilizing parallel computing in power distribution system simulation are also addressed.

  20. Clock distribution system for digital computers

    DOE Patents [OSTI]

    Wyman, Robert H.; Loomis, Jr., Herschel H.

    1981-01-01

    Apparatus for eliminating, in each clock distribution amplifier of a clock distribution system, sequential pulse catch-up error due to one pulse "overtaking" a prior clock pulse. The apparatus includes timing means to produce a periodic electromagnetic signal with a fundamental frequency having a fundamental frequency component V'{sub 01}(t); an array of N signal characteristic detector means, with detector means No. 1 receiving the timing means signal and producing a change-of-state signal V{sub 1}(t) in response to receipt of a signal above a predetermined threshold; N substantially identical filter means, one filter means being operatively associated with each detector means, for receiving the change-of-state signal V{sub n}(t) and producing a modified change-of-state signal V'{sub n}(t) (n = 1, . . . , N) having a fundamental frequency component that is substantially proportional to V'{sub 01}(t - θ{sub n}(t)) with a cumulative phase shift θ{sub n}(t) having a time derivative that may be made uniformly and arbitrarily small; and with the detector means n+1 (1 ≤ n

  1. First Experiences with LHC Grid Computing and Distributed Analysis

    SciTech Connect (OSTI)

    Fisk, Ian

    2010-12-01

    In this presentation, the experiences of the LHC experiments using grid computing were presented, with a focus on distributed analysis. After many years of development, preparation, exercises, and validation, the LHC (Large Hadron Collider) experiments are in operation. The computing infrastructure has been heavily utilized in the first 6 months of data collection. The general experience of exploiting the grid infrastructure for organized processing and preparation is described, as well as the successes in employing the infrastructure for distributed analysis. At the end, the expected evolution and future plans are outlined.

  2. Computation of glint, glare, and solar irradiance distribution

    SciTech Connect (OSTI)

    Ho, Clifford Kuofei; Khalsa, Siri Sahib Singh

    2015-08-11

    Described herein are technologies pertaining to computing the solar irradiance distribution on a surface of a receiver in a concentrating solar power system or glint/glare emitted from a reflective entity. At least one camera captures images of the Sun and the entity of interest, wherein the images have pluralities of pixels having respective pluralities of intensity values. Based upon the intensity values of the pixels in the respective images, the solar irradiance distribution on the surface of the entity or glint/glare corresponding to the entity is computed.
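
    A simplified numeric illustration of the pixel-intensity idea follows. It assumes a linear scaling of receiver-image intensity against a Sun-image reference; this is a sketch of the general concept, not the patented algorithm or its calibration procedure.

```python
# Simplified sketch of the idea described above (not the patented algorithm):
# scale pixel intensities on the receiver image by a reference intensity taken
# from an image of the Sun itself.
import numpy as np

def irradiance_map(receiver_image, sun_image, dni_w_per_m2=1000.0):
    """Map pixel intensity to irradiance using the Sun image as a reference."""
    sun_mean = sun_image.mean()                  # intensity corresponding to ~1 sun
    if sun_mean == 0:
        raise ValueError("sun image is dark; cannot calibrate")
    return receiver_image.astype(float) / sun_mean * dni_w_per_m2

# Toy 8-bit images standing in for camera frames.
rng = np.random.default_rng(0)
sun = rng.integers(200, 255, size=(64, 64))
receiver = rng.integers(0, 255, size=(480, 640))
flux = irradiance_map(receiver, sun)
print(f"peak flux ~ {flux.max():.0f} W/m^2, mean ~ {flux.mean():.0f} W/m^2")
```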

  3. A directory service for configuring high-performance distributed computations

    SciTech Connect (OSTI)

    Fitzgerald, S.; Kesselman, C.; Foster, I.

    1997-08-01

    High-performance execution in distributed computing environments often requires careful selection and configuration not only of computers, networks, and other resources but also of the protocols and algorithms used by applications. Selection and configuration in turn require access to accurate, up-to-date information on the structure and state of available resources. Unfortunately, no standard mechanism exists for organizing or accessing such information. Consequently, different tools and applications adopt ad hoc mechanisms, or they compromise their portability and performance by using default configurations. We propose a Metacomputing Directory Service that provides efficient and scalable access to diverse, dynamic, and distributed information about resource structure and state. We define an extensible data model to represent required information and present a scalable, high-performance, distributed implementation. The data representation and application programming interface are adopted from the Lightweight Directory Access Protocol; the data model and implementation are new. We use the Globus distributed computing toolkit to illustrate how this directory service enables the development of more flexible and efficient distributed computing services and applications.

  4. Institutional Computing Executive Group Review of Multi-programmatic & Institutional Computing, Fiscal Year 2005 and 2006

    SciTech Connect (OSTI)

    Langer, S; Rotman, D; Schwegler, E; Folta, P; Gee, R; White, D

    2006-12-18

    The Institutional Computing Executive Group (ICEG) review of FY05-06 Multiprogrammatic and Institutional Computing (M and IC) activities is presented in the attached report. In summary, we find that the M and IC staff does an outstanding job of acquiring and supporting a wide range of institutional computing resources to meet the programmatic and scientific goals of LLNL. The responsiveness and high quality of support given to users and the programs investing in M and IC reflects the dedication and skill of the M and IC staff. M and IC has successfully managed serial capacity, parallel capacity, and capability computing resources. Serial capacity computing supports a wide range of scientific projects which require access to a few high performance processors within a shared memory computer. Parallel capacity computing supports scientific projects that require a moderate number of processors (up to roughly 1000) on a parallel computer. Capability computing supports parallel jobs that push the limits of simulation science. M and IC has worked closely with Stockpile Stewardship, and together they have made LLNL a premier institution for computational and simulation science. Such a standing is vital to the continued success of laboratory science programs and to the recruitment and retention of top scientists. This report provides recommendations to build on M and IC's accomplishments and improve simulation capabilities at LLNL. We recommend that the institution fully fund (1) operation of the atlas cluster purchased in FY06 to support a few large projects; (2) operation of the thunder and zeus clusters to enable 'mid-range' parallel capacity simulations during normal operation and a limited number of large simulations during dedicated application time; (3) operation of the new yana cluster to support a wide range of serial capacity simulations; (4) improvements to the reliability and performance of the Lustre parallel file system; (5) support for the new GDO petabyte

  5. Gaussian distributions, Jacobi group, and Siegel-Jacobi space

    SciTech Connect (OSTI)

    Molitor, Mathieu

    2014-12-15

    Let N be the space of Gaussian distribution functions over ℝ, regarded as a 2-dimensional statistical manifold parameterized by the mean μ and the deviation σ. In this paper, we show that the tangent bundle of N, endowed with its natural Kähler structure, is the Siegel-Jacobi space appearing in the context of Number Theory and Jacobi forms. Geometrical aspects of the Siegel-Jacobi space are discussed in detail (completeness, curvature, group of holomorphic isometries, space of Kähler functions, and relationship to the Jacobi group), and are related to the quantum formalism in its geometrical form, i.e., based on the Kähler structure of the complex projective space. This paper is a continuation of our previous work [M. Molitor, “Remarks on the statistical origin of the geometrical formulation of quantum mechanics,” Int. J. Geom. Methods Mod. Phys. 9(3), 1220001, 9 (2012); M. Molitor, “Information geometry and the hydrodynamical formulation of quantum mechanics,” e-print arXiv (2012); M. Molitor, “Exponential families, Kähler geometry and quantum mechanics,” J. Geom. Phys. 70, 54–80 (2013)], where we studied the quantum formalism from a geometric and information-theoretical point of view.
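
    For context, the Riemannian structure on N referred to above is the Fisher information metric of the Gaussian family. This is a standard fact about this statistical manifold, not a quotation from the record:

```latex
% Fisher information metric on the Gaussian family N(\mu, \sigma^2),
% parameterized by the mean \mu and deviation \sigma:
g_{\mu\mu} = \frac{1}{\sigma^{2}}, \qquad
g_{\sigma\sigma} = \frac{2}{\sigma^{2}}, \qquad
ds^{2} = \frac{d\mu^{2} + 2\,d\sigma^{2}}{\sigma^{2}} .
```

    Up to the factor of 2 on dσ², this is the Poincaré half-plane metric, so N carries constant negative curvature; the Kähler structure on its tangent bundle discussed in the record builds on this geometry.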

  6. Scalable error correction in distributed ion trap computers

    SciTech Connect (OSTI)

    Oi, Daniel K. L.; Devitt, Simon J.; Hollenberg, Lloyd C. L.

    2006-11-15

    A major challenge for quantum computation in ion trap systems is scalable integration of error correction and fault tolerance. We analyze a distributed architecture with rapid high-fidelity local control within nodes and entangled links between nodes alleviating long-distance transport. We demonstrate fault-tolerant operator measurements which are used for error correction and nonlocal gates. This scheme is readily applied to linear ion traps which cannot be scaled up beyond a few ions per individual trap but which have access to a probabilistic entanglement mechanism. A proof-of-concept system is presented which is within the reach of current experiment.

  7. GAiN: Distributed Array Computation with Python

    SciTech Connect (OSTI)

    Daily, Jeffrey A.

    2009-04-24

    Scientific computing makes use of very large, multidimensional numerical arrays - typically, gigabytes to terabytes in size - much larger than can fit on even the largest single compute node. Such arrays must be distributed across a "cluster" of nodes. Global Arrays is a cluster-based software system from Battelle Pacific Northwest National Laboratory that enables an efficient, portable, and parallel shared-memory programming interface to manipulate these arrays. Written in and for the C and FORTRAN programming languages, it takes advantage of high-performance cluster interconnections to allow any node in the cluster to access data on any other node very rapidly. The "numpy" module is the de facto standard for numerical calculation in the Python programming language, a language whose use is growing rapidly in the scientific and engineering communities. numpy provides a powerful N-dimensional array class as well as other scientific computing capabilities. However, like the majority of the core Python modules, numpy is inherently serial. Our system, GAiN (Global Arrays in NumPy), is a parallel extension to Python that accesses Global Arrays through numpy. This allows parallel processing and/or larger problem sizes to be harnessed almost transparently within new or existing numpy programs.
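
    The core pattern (a logically global array held as per-node chunks, with collectives combining local results) can be illustrated generically with mpi4py and numpy. This is a sketch of the concept only; it does not use GAiN's or Global Arrays' actual API.

```python
# Generic illustration of the distributed-array idea (not GAiN's actual API):
# each MPI rank owns a chunk of a logically global array, and a collective
# combines per-chunk results.  Run with: mpiexec -n 4 python this_file.py
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

global_length = 1_000_000
chunk = global_length // size                     # assume it divides evenly
local = np.arange(rank * chunk, (rank + 1) * chunk, dtype=np.float64)

local_sum = local.sum()                           # purely local work
total = comm.allreduce(local_sum, op=MPI.SUM)     # one collective call

if rank == 0:
    print(f"sum over {global_length} distributed elements = {total:.3e}")
```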

  8. Inferring Group Processes from Computer-Mediated Affective Text Analysis

    SciTech Connect (OSTI)

    Schryver, Jack C; Begoli, Edmon; Jose, Ajith; Griffin, Christopher

    2011-02-01

    Political communications in the form of unstructured text convey rich connotative meaning that can reveal underlying group social processes. Previous research has focused on sentiment analysis at the document level, but we extend this analysis to sub-document levels through a detailed analysis of affective relationships between entities extracted from a document. Instead of pure sentiment analysis, which is just positive or negative, we explore nuances of affective meaning in 22 affect categories. Our affect propagation algorithm automatically calculates and displays extracted affective relationships among entities in graphical form in our prototype (TEAMSTER), starting with seed lists of affect terms. Several useful metrics are defined to infer underlying group processes by aggregating affective relationships discovered in a text. Our approach has been validated with annotated documents from the MPQA corpus, achieving a performance gain of 74% over comparable random guessers.

  9. Reviews of computing technology: Fiber distributed data interface

    SciTech Connect (OSTI)

    Johnson, A.J.

    1991-12-01

    Fiber Distributed Data Interface, more commonly known as FDDI, is the name of the standard that describes a new local area network (LAN) technology for the '90s. This technology is based on fiber optics communications and, at a data transmission rate of 100 million bits per second (Mbps), provides a full order of magnitude improvement over previous LAN standards such as Ethernet and Token Ring. FDDI as a standard has been accepted by all major computer manufacturers and is a national standard as defined by the American National Standards Institute (ANSI). FDDI will become part of the US Government Open Systems Interconnection Profile (GOSIP) under Version 3 GOSIP and will become an international standard promoted by the International Standards Organization (ISO). It is important to note that there are no competing standards for high performance LANs, so FDDI acceptance is nearly universal. This technology report describes FDDI as a technology, looks at the applications of this technology, examines the current economics of using it, and describes activities and plans by the Information Resource Management (IRM) department to implement this technology at the Savannah River Site.

  10. Providing nearest neighbor point-to-point communications among compute nodes of an operational group in a global combining network of a parallel computer

    DOE Patents [OSTI]

    Archer, Charles J.; Faraj, Ahmad A.; Inglett, Todd A.; Ratterman, Joseph D.

    2012-10-23

    Methods, apparatus, and products are disclosed for providing nearest neighbor point-to-point communications among compute nodes of an operational group in a global combining network of a parallel computer, each compute node connected to each adjacent compute node in the global combining network through a link, that include: identifying each link in the global combining network for each compute node of the operational group; designating one of a plurality of point-to-point class routing identifiers for each link such that no compute node in the operational group is connected to two adjacent compute nodes in the operational group with links designated for the same class routing identifiers; and configuring each compute node of the operational group for point-to-point communications with each adjacent compute node in the global combining network through the link between that compute node and that adjacent compute node using that link's designated class routing identifier.
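
    The constraint that no compute node sees the same class routing identifier on two of its links is a proper edge coloring of the network graph. The sketch below illustrates that constraint with a greedy pass over a small tree (the global combining network is tree-shaped); it is an illustrative assumption-laden sketch, not the patented code.

```python
# Sketch of the link-labelling constraint described above: assign each link a
# class routing identifier so that no node sees the same identifier on two of
# its links, i.e. a proper edge colouring.  On a tree-shaped combining network
# a greedy pass over the links suffices.
def assign_class_routes(links, num_ids):
    """links: list of (node_a, node_b) pairs; returns {link: identifier}."""
    used_at = {}                 # node -> identifiers already used on its links
    assignment = {}
    for a, b in links:
        taken = used_at.setdefault(a, set()) | used_at.setdefault(b, set())
        for ident in range(num_ids):
            if ident not in taken:
                assignment[(a, b)] = ident
                used_at[a].add(ident)
                used_at[b].add(ident)
                break
        else:
            raise RuntimeError("not enough class routing identifiers")
    return assignment

# A small binary-tree combining network: node 0 is the root.
tree_links = [(0, 1), (0, 2), (1, 3), (1, 4), (2, 5), (2, 6)]
print(assign_class_routes(tree_links, num_ids=3))
```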

  11. Bio-Derived Liquids to Hydrogen Distributed Reforming Working Group Background Paper

    Broader source: Energy.gov [DOE]

    Paper by Arlene Anderson and Tracy Carole presented at the Bio-Derived Liquids to Hydrogen Distributed Reforming Working Group, with a focus on key drivers, purpose, and scope.

  12. Bio-Derived Liquids to Hydrogen Distributed Reforming Working Group Meeting

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Bio-Derived Liquids to Hydrogen Distributed Reforming Working Group Meeting - November 2007. The Bio-Derived Liquids to Hydrogen Distributed Reforming Working Group participated in a Hydrogen Production Technical Team Research Review on November 6, 2007. The meeting provided the opportunity for researchers to share their experiences in converting bio-derived liquids to hydrogen with members of the Department of Energy Hydrogen

  13. High-performance, distributed computing software libraries and services

    Energy Science and Technology Software Center (OSTI)

    2002-01-24

    The Globus toolkit provides basic Grid software infrastructure (i.e. middleware), to facilitate the development of applications which securely integrate geographically separated resources, including computers, storage systems, instruments, immersive environments, etc.

  14. NERSC User's Group Meeting 2.4.14 Computational Facilities: NERSC

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    NERSC User's Group Meeting, 2.4.14, Computational Facilities: NERSC. Talk: "Conformational change in biology: from amino acids to enzymes and molecular motors," Victor Ovchinnikov. Collaborators: Martin Karplus, Eric Vanden-Eijnden, Kwangho Nam, Anne Houdusse, Robert Sauer. Financial support: NIH. Introduction: conformational motions in biomolecules define all living things - transport across membranes; enzyme reactions (from proton transfer to DNA replication and repair); ...

  15. Reviews of computing technology: Fiber distributed data interface

    SciTech Connect (OSTI)

    Johnson, A.J.

    1992-04-01

    This technology report describes Fiber Distributed Data Interface (FDDI) as a technology, looks at the applications of this technology, examines the current economics of using it, and describes activities and plans by the Information Resource Management Department to implement this technology at the Savannah River Site.

  16. Bio-Derived Liquids to Hydrogen Distributed Reforming Working Group Kick-Off Meeting

    Broader source: Energy.gov [DOE]

    The U.S. Department of Energy held a kick-off meeting for the Bio-Derived Liquids to Hydrogen Distributed Reforming Working Group (BILIWG) on October 24, 2006, in Baltimore, Maryland. The Working Group is addressing technical challenges to distributed reforming of biomass-derived, renewable liquid fuels to hydrogen, including the reforming, water-gas shift, and hydrogen recovery and purification steps. The meeting provided the opportunity for researchers to share their experiences in converting bio-derived liquids to hydrogen with each other and with members of the DOE Hydrogen Production Technical Team.

  17. Data-aware distributed scientific computing for big-data problems...

    Office of Scientific and Technical Information (OSTI)


  18. A secure communications infrastructure for high-performance distributed computing

    SciTech Connect (OSTI)

    Foster, I.; Koenig, G.; Tuecke, S.

    1997-08-01

    Applications that use high-speed networks to connect geographically distributed supercomputers, databases, and scientific instruments may operate over open networks and access valuable resources. Hence, they can require mechanisms for ensuring integrity and confidentiality of communications and for authenticating both users and resources. Security solutions developed for traditional client-server applications do not provide direct support for the program structures, programming tools, and performance requirements encountered in these applications. The authors address these requirements via a security-enhanced version of the Nexus communication library, which they use to provide secure versions of parallel libraries and languages, including the Message Passing Interface. These tools permit a fine degree of control over what, where, and when security mechanisms are applied. In particular, a single application can mix secure and nonsecure communication, allowing the programmer to make fine-grained security/performance tradeoffs. The authors present performance results that quantify the performance of their infrastructure.

  19. Providing full point-to-point communications among compute nodes of an operational group in a global combining network of a parallel computer

    DOE Patents [OSTI]

    Archer, Charles J; Faraj, Ahmad A; Inglett, Todd A; Ratterman, Joseph D

    2013-04-16

    Methods, apparatus, and products are disclosed for providing full point-to-point communications among compute nodes of an operational group in a global combining network of a parallel computer, each compute node connected to each adjacent compute node in the global combining network through a link, that include: receiving a network packet in a compute node, the network packet specifying a destination compute node; selecting, in dependence upon the destination compute node, at least one of the links for the compute node along which to forward the network packet toward the destination compute node; and forwarding the network packet along the selected link to the adjacent compute node connected to the compute node through the selected link.
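
    The forwarding step described above (pick the one link that leads toward the destination) is straightforward on a tree, where the next hop is unique. The helper below is a hypothetical illustration of that selection, not the patented implementation.

```python
# Sketch of next-hop selection on a tree-shaped combining network: forward the
# packet along the unique link toward the destination.  Hypothetical helper,
# not the patented code.
from collections import defaultdict, deque

def build_parents(links, root=0):
    """Orient the tree away from the root: child -> parent, plus adjacency."""
    adj = defaultdict(list)
    for a, b in links:
        adj[a].append(b)
        adj[b].append(a)
    parent, seen, queue = {root: None}, {root}, deque([root])
    while queue:
        node = queue.popleft()
        for nbr in adj[node]:
            if nbr not in seen:
                seen.add(nbr)
                parent[nbr] = node
                queue.append(nbr)
    return parent, adj

def next_hop(current, destination, parent, adj):
    """Pick the adjacent node on the unique path current -> destination."""
    if current == destination:
        return current
    # Collect the destination's ancestors (destination up to the root).
    path, node = {destination}, destination
    while parent[node] is not None:
        node = parent[node]
        path.add(node)
    # Forward down to a child that leads toward the destination, else go up.
    for nbr in adj[current]:
        if nbr in path and parent.get(nbr) == current:
            return nbr
    return parent[current]

links = [(0, 1), (0, 2), (1, 3), (1, 4), (2, 5), (2, 6)]
parent, adj = build_parents(links)
print(next_hop(3, 6, parent, adj))   # 1: node 3 forwards upward toward node 6
```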

  20. Configuring compute nodes of a parallel computer in an operational group into a plurality of independent non-overlapping collective networks

    DOE Patents [OSTI]

    Archer, Charles J.; Inglett, Todd A.; Ratterman, Joseph D.; Smith, Brian E.

    2010-03-02

    Methods, apparatus, and products are disclosed for configuring compute nodes of a parallel computer in an operational group into a plurality of independent non-overlapping collective networks, the compute nodes in the operational group connected together for data communications through a global combining network, that include: partitioning the compute nodes in the operational group into a plurality of non-overlapping subgroups; designating one compute node from each of the non-overlapping subgroups as a master node; and assigning, to the compute nodes in each of the non-overlapping subgroups, class routing instructions that organize the compute nodes in that non-overlapping subgroup as a collective network such that the master node is a physical root.
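
    The partitioning step reads naturally as: split the operational group into non-overlapping subgroups, pick one master per subgroup, and give each subgroup its own routing label. A minimal hypothetical sketch of that bookkeeping, not the patented code:

```python
# Sketch of the partitioning described above: non-overlapping subgroups, one
# designated master (physical root) each, and a per-subgroup routing label.
def partition_into_collectives(node_ranks, subgroup_size):
    subgroups = [node_ranks[i:i + subgroup_size]
                 for i in range(0, len(node_ranks), subgroup_size)]
    plan = []
    for label, members in enumerate(subgroups):
        plan.append({
            "class_route": label,       # routing instructions shared by this subgroup
            "master": members[0],       # designated physical root of the collective
            "members": members,
        })
    return plan

for sub in partition_into_collectives(list(range(12)), subgroup_size=4):
    print(sub)
```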

  1. Computational study of ion distributions at the air/liquid methanol interface

    SciTech Connect (OSTI)

    Sun, Xiuquan; Wick, Collin D.; Dang, Liem X.

    2011-06-16

    Molecular dynamics simulations with polarizable potentials were performed to systematically investigate the distribution of NaCl, NaBr, NaI, and SrCl2 at the air/liquid methanol interface. The density profiles indicated that there is no substantial enhancement of anions at the interface for the NaX systems in contrast to what was observed at the air/aqueous interface. The surfactant-like shape of the larger more polarizable halide anions is compensated by the surfactant nature of methanol itself. As a result, methanol hydroxy groups strongly interacted with the side of polarizable anions toward which their induced dipoles point, and methanol methyl groups were more likely to be found near the positive pole of anion induced dipoles. Furthermore, salts were found to disrupt the surface structure of methanol, reducing the observed enhancement of methyl groups at the outer edge of the air/liquid methanol interface. With the addition of salts to methanol, the computed surface potentials increased, which is in contrast to what is observed in corresponding aqueous systems, where the surface potential decreases with the addition of salts. Both of these trends have been indirectly observed with experiments. This was found to be due to the propensity of anions for the air/water interface that is not present at the air/liquid methanol interface. This work was supported by the US Department of Energy Basic Energy Sciences' Chemical Sciences, Geosciences & Biosciences Division. Pacific Northwest National Laboratory is operated by Battelle for the US Department of Energy.

  2. Probing the structure of complex solids using a distributed computing approach-Applications in zeolite science

    SciTech Connect (OSTI)

    French, Samuel A.; Coates, Rosie; Lewis, Dewi W.; Catlow, C. Richard A.

    2011-06-15

    We demonstrate the viability of distributed computing techniques employing idle desktop computers in investigating complex structural problems in solids. Through the use of a combined Monte Carlo and energy minimisation method, we show how a large parameter space can be effectively scanned. By controlling the generation and running of different configurations through a database engine, we are able to not only analyse the data 'on the fly' but also direct the running of jobs and the algorithms for generating further structures. As an exemplar case, we probe the distribution of Al and extra-framework cations in the structure of the zeolite Mordenite. We compare our computed unit cells with experiment and find that whilst there is excellent correlation between computed and experimentally derived unit cell volumes, cation positioning and short-range Al ordering (i.e. near neighbour environment), there remains some discrepancy in the distribution of Al throughout the framework. We also show that stability-structure correlations only become apparent once a sufficiently large sample is used. - Graphical Abstract: Aluminium distributions in zeolites are determined using e-science methods. Highlights: > Use of e-science methods to search configurational space. > Automated control of space searching. > Identify key structural features conveying stability. > Improved correlation of computed structures with experimental data.
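
    The database-controlled Monte Carlo scan described above can be sketched as a loop that proposes random Al placements, scores each configuration, and logs results to a database that can be queried while the study runs. The energy function below is a deliberate placeholder, not the authors' force field or minimisation protocol.

```python
# Minimal sketch of a database-driven Monte Carlo scan: propose random Al
# placements over framework T-sites, score each configuration with a
# placeholder energy function, and log results to SQLite so the running study
# can be analysed (and steered) on the fly.
import itertools, random, sqlite3

N_TSITES, N_AL, N_TRIALS = 48, 8, 200

def placeholder_energy(al_sites):
    """Stand-in for the real energy minimisation: penalise Al-Al near neighbours."""
    return sum(1.0 for a, b in itertools.combinations(sorted(al_sites), 2)
               if abs(a - b) in (1, N_TSITES - 1))

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE configs (id INTEGER PRIMARY KEY, sites TEXT, energy REAL)")

for _ in range(N_TRIALS):
    placement = random.sample(range(N_TSITES), N_AL)     # one Monte Carlo proposal
    db.execute("INSERT INTO configs (sites, energy) VALUES (?, ?)",
               (",".join(map(str, placement)), placeholder_energy(placement)))
db.commit()

# 'On the fly' analysis: pull the five most stable configurations so far.
for row in db.execute("SELECT id, energy FROM configs ORDER BY energy LIMIT 5"):
    print(row)
```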

  3. Efficient computation of stress and load distribution for external cylindrical gears

    SciTech Connect (OSTI)

    Zhang, J.J.; Esat, I.I.; Shi, Y.H.

    1996-12-31

    It has been widely recognized that tooth flank correction is an effective technique to improve the load carrying capacity and running behavior of gears. However, the existing analytical methods of load distribution are not very satisfactory. They are either too simplified to produce accurate results or computationally too expensive. In this paper, we propose a new approach which computes the load and stress distribution of external involute gears efficiently and accurately. It adopts the "thin-slice" model and 2D FEA technique and takes into account the varying meshing stiffness.

  4. Acidity of the amidoxime functional group in aqueous solution. A combined experimental and computational study

    SciTech Connect (OSTI)

    Mehio, Nada; Lashely, Mark A.; Nugent, Joseph W.; Tucker, Lyndsay; Correia, Bruna; Do-Thanh, Chi-Linh; Dai, Sheng; Hancock, Robert D.; Bryantsev, Vyacheslav S.

    2015-01-26

    Poly(acrylamidoxime) adsorbents are often invoked in discussions of mining uranium from seawater. It has been demonstrated repeatedly in the literature that the success of these materials is due to the amidoxime functional group. While the amidoxime-uranyl chelation mode has been established, a number of essential binding constants remain unclear. This is largely due to the wide range of conflicting pKa values that have been reported for the amidoxime functional group in the literature. To resolve this existing controversy we investigated the pKa values of the amidoxime functional group using a combination of experimental and computational methods. Experimentally, we used spectroscopic titrations to measure the pKa values of representative amidoximes, acetamidoxime and benzamidoxime. Computationally, we report on the performance of several protocols for predicting the pKa values of aqueous oxoacids. Calculations carried out at the MP2 or M06-2X levels of theory combined with solvent effects calculated using the SMD model provide the best overall performance with a mean absolute error of 0.33 pKa units and 0.35 pKa units, respectively, and a root mean square deviation of 0.46 pKa units and 0.45 pKa units, respectively. Finally, we employ our two best methods to predict the pKa values of promising, uncharacterized amidoxime ligands. Hence, our study provides a convenient means for screening suitable amidoxime monomers for future generations of poly(acrylamidoxime) adsorbents used to mine uranium from seawater.
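
    For context, the quantity these protocols predict is tied to the computed aqueous deprotonation free energy by the standard thermodynamic relation below; the record does not spell out the specific thermodynamic cycle or proton reference the authors used.

```latex
% Standard relation between an aqueous deprotonation free energy and pKa:
\mathrm{HA_{(aq)}} \;\rightleftharpoons\; \mathrm{A^{-}_{(aq)}} + \mathrm{H^{+}_{(aq)}},
\qquad
\mathrm{p}K_a = \frac{\Delta G^{\circ}_{\mathrm{aq}}}{RT\,\ln 10}
\approx \frac{\Delta G^{\circ}_{\mathrm{aq}}\ [\mathrm{kcal\,mol^{-1}}]}{1.364}
\quad (T = 298.15\ \mathrm{K}).
```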

  5. Acidity of the amidoxime functional group in aqueous solution. A combined experimental and computational study

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Mehio, Nada; Lashely, Mark A.; Nugent, Joseph W.; Tucker, Lyndsay; Correia, Bruna; Do-Thanh, Chi-Linh; Dai, Sheng; Hancock, Robert D.; Bryantsev, Vyacheslav S.

    2015-01-01

    Poly(acrylamidoxime) adsorbents are often invoked in discussions of mining uranium from seawater. It has been demonstrated repeatedly in the literature that the success of these materials is due to the amidoxime functional group. While the amidoxime-uranyl chelation mode has been established, a number of essential binding constants remain unclear. This is largely due to the wide range of conflicting pKa values that have been reported for the amidoxime functional group in the literature. To resolve this existing controversy we investigated the pKa values of the amidoxime functional group using a combination of experimental and computational methods. Experimentally, we used spectroscopic titrations to measure the pKa values of representative amidoximes, acetamidoxime and benzamidoxime. Computationally, we report on the performance of several protocols for predicting the pKa values of aqueous oxoacids. Calculations carried out at the MP2 or M06-2X levels of theory combined with solvent effects calculated using the SMD model provide the best overall performance with a mean absolute error of 0.33 pKa units and 0.35 pKa units, respectively, and a root mean square deviation of 0.46 pKa units and 0.45 pKa units, respectively. Finally, we employ our two best methods to predict the pKa values of promising, uncharacterized amidoxime ligands. Hence, our study provides a convenient means for screening suitable amidoxime monomers for future generations of poly(acrylamidoxime) adsorbents used to mine uranium from seawater.

  6. Methods and apparatuses for information analysis on shared and distributed computing systems

    DOE Patents [OSTI]

    Bohn, Shawn J [Richland, WA; Krishnan, Manoj Kumar [Richland, WA; Cowley, Wendy E [Richland, WA; Nieplocha, Jarek [Richland, WA

    2011-02-22

    Apparatuses and computer-implemented methods for analyzing, on shared and distributed computing systems, information comprising one or more documents are disclosed according to some aspects. In one embodiment, information analysis can comprise distributing one or more distinct sets of documents among each of a plurality of processes, wherein each process performs operations on a distinct set of documents substantially in parallel with other processes. Operations by each process can further comprise computing term statistics for terms contained in each distinct set of documents, thereby generating a local set of term statistics for each distinct set of documents. Still further, operations by each process can comprise contributing the local sets of term statistics to a global set of term statistics, and participating in generating a major term set from an assigned portion of a global vocabulary.
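
    The local-then-global term-statistics pattern described above can be illustrated with Python's standard library: each worker counts terms over its own distinct set of documents, then the local counts are merged into a global set. This is an illustration of the pattern only, not the patented implementation.

```python
# Sketch of the local-then-global term-statistics pattern: per-process local
# counts, merged into a global set, from which major terms are drawn.
from collections import Counter
from multiprocessing import Pool

def local_term_statistics(documents):
    """Per-process work: term counts over one distinct set of documents."""
    counts = Counter()
    for doc in documents:
        counts.update(doc.lower().split())
    return counts

if __name__ == "__main__":
    corpus = [
        ["the grid connects distributed resources",
         "documents are distributed among processes"],
        ["each process computes local term statistics",
         "local statistics are merged into a global set"],
    ]                                   # two distinct document sets, one per process
    with Pool(processes=len(corpus)) as pool:
        local_sets = pool.map(local_term_statistics, corpus)

    global_stats = Counter()
    for local in local_sets:            # contribute local sets to the global set
        global_stats.update(local)

    major_terms = [term for term, n in global_stats.most_common(5)]
    print(major_terms)
```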

  7. The Essential Role of New Network Services for High Performance Distributed Computing (PARENG, Civil-Comp 2011)

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Second International Conference on Parallel, Distributed, Grid and Cloud Computing for Engineering, 12-15 April 2011, Ajaccio, Corsica, France. In "Trends in Parallel, Distributed, Grid and Cloud Computing for Engineering," edited by P. Iványi and B.H.V. Topping, Civil-Comp Press. "Network Services for High Performance Distributed Computing and Data Management," W. E. Johnston, C. Guok, J. Metzger, and B. Tierney, ESnet and Lawrence Berkeley National Laboratory, Berkeley, California, U.S.A.

  8. Assigning unique identification numbers to new user accounts and groups in a computing environment with multiple registries

    DOE Patents [OSTI]

    DeRobertis, Christopher V.; Lu, Yantian T.

    2010-02-23

    A method, system, and program storage device for creating a new user account or user group with a unique identification number in a computing environment having multiple user registries is provided. In response to receiving a command to create a new user account or user group, an operating system of a clustered computing environment automatically checks multiple registries configured for the operating system to determine whether a candidate identification number for the new user account or user group has been assigned already to one or more existing user accounts or groups, respectively. The operating system automatically assigns the candidate identification number to the new user account or user group created in a target user registry if the checking indicates that the candidate identification number has not been assigned already to any of the existing user accounts or user groups, respectively.
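
    The check-then-assign logic described above amounts to: gather every identification number already used in any configured registry, then take the first free candidate. A small sketch with registries modeled as plain dictionaries (hypothetical structures, not the patented code):

```python
# Sketch of the check-then-assign idea: a candidate uid is assigned only if it
# is absent from every configured registry.
def assign_unique_uid(username, registries, target, start=1000):
    """Pick the first candidate uid not present in ANY configured registry."""
    taken = {uid for reg in registries.values() for uid in reg.values()}
    candidate = start
    while candidate in taken:
        candidate += 1                  # candidate already assigned somewhere; try next
    registries[target][username] = candidate
    return candidate

registries = {
    "local_files": {"alice": 1000, "bob": 1001},
    "ldap":        {"carol": 1002},
    "nis":         {},
}
print(assign_unique_uid("dave", registries, target="ldap"))   # -> 1003
```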

  9. Targeting Atmospheric Simulation Algorithms for Large Distributed Memory GPU Accelerated Computers

    SciTech Connect (OSTI)

    Norman, Matthew R

    2013-01-01

    Computing platforms are increasingly moving to accelerated architectures, and here we deal particularly with GPUs. In [15], a method was developed for atmospheric simulation to improve efficiency on large distributed memory machines by reducing communication demand and increasing the time step. Here, we improve upon this method to further target GPU accelerated platforms by reducing GPU memory accesses, removing a synchronization point, and better clustering computations. The modification ran over two times faster in some cases even though more computations were required, demonstrating the merit of improving memory handling on the GPU. Furthermore, we discover that the modification also has a near 100% hit rate in fast on-chip L1 cache and discuss the reasons for this. In concluding, we remark on further potential improvements to GPU efficiency.

  10. System design and algorithmic development for computational steering in distributed environments

    SciTech Connect (OSTI)

    Wu, Qishi; Zhu, Mengxia; Gu, Yi; Rao, Nageswara S

    2010-03-01

    Supporting visualization pipelines over wide-area networks is critical to enabling large-scale scientific applications that require visual feedback to interactively steer online computations. We propose a remote computational steering system that employs analytical models to estimate the cost of computing and communication components and optimizes the overall system performance in distributed environments with heterogeneous resources. We formulate and categorize the visualization pipeline configuration problems for maximum frame rate into three classes according to the constraints on node reuse or resource sharing, namely no, contiguous, and arbitrary reuse. We prove all three problems to be NP-complete and present heuristic approaches based on a dynamic programming strategy. The superior performance of the proposed solution is demonstrated with extensive simulation results in comparison with existing algorithms and is further evidenced by experimental results collected on a prototype implementation deployed over the Internet.
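
    In a stripped-down form, the "contiguous reuse" case reduces to splitting a chain of pipeline modules into contiguous groups so that the slowest group (the bottleneck limiting frame rate) is as fast as possible. The dynamic-programming sketch below solves only this simplified, compute-only version; the paper's actual models also account for communication costs and heterogeneous resources.

```python
# Simplified DP sketch: partition a chain of pipeline modules into at most
# n_nodes contiguous groups, minimising the bottleneck group time (which is
# what caps the frame rate of a pipelined computation).
def best_contiguous_mapping(costs, n_nodes):
    m = len(costs)
    prefix = [0.0]
    for c in costs:
        prefix.append(prefix[-1] + c)
    seg = lambda i, j: prefix[j] - prefix[i]          # cost of modules i..j-1

    INF = float("inf")
    dp = [[INF] * (n_nodes + 1) for _ in range(m + 1)]
    dp[0][0] = 0.0                                    # no modules, no nodes used
    for i in range(1, m + 1):
        for k in range(1, n_nodes + 1):
            for j in range(k - 1, i):                 # last group is modules j..i-1
                dp[i][k] = min(dp[i][k], max(dp[j][k - 1], seg(j, i)))
    bottleneck = min(dp[m][k] for k in range(1, n_nodes + 1))
    return bottleneck, 1.0 / bottleneck               # bottleneck time, frame rate

module_times = [0.8, 1.2, 0.5, 2.0, 0.7, 1.1]         # seconds per frame per module
print(best_contiguous_mapping(module_times, n_nodes=3))
```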

  11. Next Generation Workload Management System For Big Data on Heterogeneous Distributed Computing

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Klimentov, A.; Buncic, P.; De, K.; Jha, S.; Maeno, T.; Mount, R.; Nilsson, P.; Oleynik, D.; Panitkin, S.; Petrosyan, A.; et al

    2015-05-22

    The Large Hadron Collider (LHC), operating at the international CERN Laboratory in Geneva, Switzerland, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe, and were recently credited for the discovery of a Higgs boson. ATLAS and ALICE are the largest collaborations ever assembled in the sciences and are at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, both experiments rely on a heterogeneous distributed computational infrastructure. The ATLAS experiment uses PanDA (Production and Data Analysis) Workload Management System (WMS) for managing the workflow for all data processing on hundreds of data centers. Through PanDA, ATLAS physicists see a single computing facility that enables rapid scientific breakthroughs for the experiment, even though the data centers are physically scattered all over the world. The scale is demonstrated by the following numbers: PanDA manages O(10{sup 2}) sites, O(10{sup 5}) cores, O(10{sup 8}) jobs per year, O(10{sup 3}) users, and ATLAS data volume is O(10{sup 17}) bytes. In 2013 we started an ambitious program to expand PanDA to all available computing resources, including opportunistic use of commercial and academic clouds and Leadership Computing Facilities (LCF). The project titled 'Next Generation Workload Management and Analysis System for Big Data' (BigPanDA) is funded by DOE ASCR and HEP. Extending PanDA to clouds and LCF presents new challenges in managing heterogeneity and supporting workflow. The BigPanDA project is underway to set up and tailor PanDA at the Oak Ridge Leadership Computing Facility (OLCF) and at the National Research Center "Kurchatov Institute" together with ALICE distributed computing and ORNL computing professionals. Our approach to integration of HPC platforms at the OLCF and elsewhere is to reuse, as much as possible, existing components of the PanDA system

  12. Models the Electromagnetic Response of a 3D Distribution using MP COMPUTERS

    Energy Science and Technology Software Center (OSTI)

    1999-05-01

    EM3D models the electromagnetic response of a 3D distribution of conductivity, dielectric permittivity and magnetic permeability within the earth for geophysical applications using massively parallel computers. The simulations are carried out in the frequency domain for either electric or magnetic sources for either scattered or total field formulations of Maxwell's equations. The solution is based on the method of finite differences and includes absorbing boundary conditions so that responses can be modeled up into the radar range where wave propagation is dominant. Recent upgrades in the software include the incorporation of finite-size sources in addition to dipolar source fields, and a low induction number preconditioner that can significantly reduce computational run times. A graphical user interface (GUI) is bundled with the software so that complicated 3D models can be easily constructed and simulated with the software. The GUI also allows for plotting of the output.

  13. High-Performance Computation of Distributed-Memory Parallel 3D Voronoi and Delaunay Tessellation

    SciTech Connect (OSTI)

    Peterka, Tom; Morozov, Dmitriy; Phillips, Carolyn

    2014-11-14

    Computing a Voronoi or Delaunay tessellation from a set of points is a core part of the analysis of many simulated and measured datasets: N-body simulations, molecular dynamics codes, and LIDAR point clouds are just a few examples. Such computational geometry methods are common in data analysis and visualization; but as the scale of simulations and observations surpasses billions of particles, the existing serial and shared-memory algorithms no longer suffice. A distributed-memory scalable parallel algorithm is the only feasible approach. The primary contribution of this paper is a new parallel Delaunay and Voronoi tessellation algorithm that automatically determines which neighbor points need to be exchanged among the subdomains of a spatial decomposition. Other contributions include periodic and wall boundary conditions, comparison of our method using two popular serial libraries, and application to numerous science datasets.

  14. 16th Department of Energy Computer Security Group Training Conference: Proceedings

    SciTech Connect (OSTI)

    Not Available

    1994-04-01

    Various topics on computer security are presented. Integrity standards, smartcard systems, network firewalls, encryption systems, cryptography, computer security programs, multilevel security guards, electronic mail privacy, the Central Intelligence Agency, internet security, and high-speed ATM networking are typical examples of discussed topics. Individual papers are indexed separately.

  15. Bio-Derived Liquids to Hydrogen Distributed Reforming Working Group (BILIWG) Kick-Off Meeting Proceedings Hilton Garden Inn-BWI,Baltimore, MD October 24, 2006

    Broader source: Energy.gov [DOE]

    Proceedings from the October 24, 2006 Bio-Derived Liquids to Hydrogen Distributed Reforming Working Group Kick-Off Meeting.

  16. Integrated Computing, Communication, and Distributed Control of Deregulated Electric Power Systems

    SciTech Connect (OSTI)

    Bajura, Richard; Feliachi, Ali

    2008-09-24

    Restructuring of the electricity market has affected all aspects of the power industry from generation to transmission, distribution, and consumption. Transmission circuits, in particular, are often stressed beyond their stability limits because of the difficulty of building new transmission lines due to environmental concerns and financial risk. Deregulation has resulted in the need for tighter control strategies to maintain reliability even in the event of considerable structural changes, such as loss of a large generating unit or a transmission line, and changes in loading conditions due to the continuously varying power consumption. Our research efforts under the DOE EPSCoR Grant focused on Integrated Computing, Communication and Distributed Control of Deregulated Electric Power Systems. This research is applicable to operating and controlling modern electric energy systems. The controls developed by APERC provide for a more efficient, economical, reliable, and secure operation of these systems. Under this program, we developed distributed control algorithms suitable for large-scale geographically dispersed power systems and also economic tools to evaluate their effectiveness and impact on power markets. Progress was made in the development of distributed intelligent control agents for reliable and automated operation of integrated electric power systems. The methodologies employed combine information technology, control and communication, agent technology, and power systems engineering in the development of intelligent control agents for reliable and automated operation of integrated electric power systems. In the event of scheduled load changes or unforeseen disturbances, the power system is expected to minimize the effects and costs of disturbances and to keep critical infrastructure operational.

  17. A Distributed OpenCL Framework using Redundant Computation and Data Replication

    SciTech Connect (OSTI)

    Kim, Junghyun; Gangwon, Jo; Jaehoon, Jung; Lee, Jaejin

    2016-01-01

    Applications written solely in OpenCL or CUDA cannot execute on a cluster as a whole. Most previous approaches that extend these programming models to clusters are based on a common idea: designating a centralized host node and coordinating the other nodes with the host for computation. However, the centralized host node is a serious performance bottleneck when the number of nodes is large. In this paper, we propose a scalable and distributed OpenCL framework called SnuCL-D for large-scale clusters. SnuCL-D's remote device virtualization provides an OpenCL application with an illusion that all compute devices in a cluster are confined in a single node. To reduce the amount of control-message and data communication between nodes, SnuCL-D replicates the OpenCL host program execution and data in each node. We also propose a new OpenCL host API function and a queueing optimization technique that significantly reduce the overhead incurred by the previous centralized approaches. To show the effectiveness of SnuCL-D, we evaluate SnuCL-D with a microbenchmark and eleven benchmark applications on a large-scale CPU cluster and a medium-scale GPU cluster.
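    SnuCL-D's contribution is to make every device in a cluster look local to an unmodified OpenCL host program. For readers unfamiliar with what such a host program looks like, here is a minimal single-node example written with PyOpenCL (a choice made for brevity here; the paper itself targets C/OpenCL applications). Nothing in it is SnuCL-D-specific.

```python
import numpy as np
import pyopencl as cl

# Ordinary single-node OpenCL host code; SnuCL-D's remote device virtualization
# aims to let a host program like this see every device in a cluster as if local.
ctx = cl.create_some_context()
queue = cl.CommandQueue(ctx)

src = """
__kernel void vadd(__global const float *a, __global const float *b, __global float *c) {
    int i = get_global_id(0);
    c[i] = a[i] + b[i];
}
"""
prg = cl.Program(ctx, src).build()

n = 1 << 20
a = np.random.rand(n).astype(np.float32)
b = np.random.rand(n).astype(np.float32)
c = np.empty_like(a)

mf = cl.mem_flags
a_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
b_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=b)
c_buf = cl.Buffer(ctx, mf.WRITE_ONLY, c.nbytes)

prg.vadd(queue, (n,), None, a_buf, b_buf, c_buf)   # enqueue the kernel
cl.enqueue_copy(queue, c, c_buf)                   # blocking read-back
assert np.allclose(c, a + b)
```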

  18. Efficient implementation of multidimensional fast fourier transform on a distributed-memory parallel multi-node computer

    DOE Patents [OSTI]

    Bhanot, Gyan V.; Chen, Dong; Gara, Alan G.; Giampapa, Mark E.; Heidelberger, Philip; Steinmacher-Burow, Burkhard D.; Vranas, Pavlos M.

    2012-01-10

    The present invention is directed to a method, system and program storage device for efficiently implementing a multidimensional Fast Fourier Transform (FFT) of a multidimensional array comprising a plurality of elements initially distributed in a multi-node computer system comprising a plurality of nodes in communication over a network, comprising: distributing the plurality of elements of the array in a first dimension across the plurality of nodes of the computer system over the network to facilitate a first one-dimensional FFT; performing the first one-dimensional FFT on the elements of the array distributed at each node in the first dimension; re-distributing the one-dimensional FFT-transformed elements at each node in a second dimension via "all-to-all" distribution in random order across other nodes of the computer system over the network; and performing a second one-dimensional FFT on elements of the array re-distributed at each node in the second dimension, wherein the random order facilitates efficient utilization of the network thereby efficiently implementing the multidimensional FFT. The "all-to-all" re-distribution of array elements is further efficiently implemented in applications other than the multidimensional FFT on the distributed-memory parallel supercomputer.

  19. Efficient implementation of a multidimensional fast fourier transform on a distributed-memory parallel multi-node computer

    DOE Patents [OSTI]

    Bhanot, Gyan V.; Chen, Dong; Gara, Alan G.; Giampapa, Mark E.; Heidelberger, Philip; Steinmacher-Burow, Burkhard D.; Vranas, Pavlos M.

    2008-01-01

    The present invention is directed to a method, system and program storage device for efficiently implementing a multidimensional Fast Fourier Transform (FFT) of a multidimensional array comprising a plurality of elements initially distributed in a multi-node computer system comprising a plurality of nodes in communication over a network, comprising: distributing the plurality of elements of the array in a first dimension across the plurality of nodes of the computer system over the network to facilitate a first one-dimensional FFT; performing the first one-dimensional FFT on the elements of the array distributed at each node in the first dimension; re-distributing the one-dimensional FFT-transformed elements at each node in a second dimension via "all-to-all" distribution in random order across other nodes of the computer system over the network; and performing a second one-dimensional FFT on elements of the array re-distributed at each node in the second dimension, wherein the random order facilitates efficient utilization of the network thereby efficiently implementing the multidimensional FFT. The "all-to-all" re-distribution of array elements is further efficiently implemented in applications other than the multidimensional FFT on the distributed-memory parallel supercomputer.
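    The two patent records above describe a transpose-based distributed FFT: a local 1D FFT, an "all-to-all" redistribution, and a second local 1D FFT. The sketch below is a minimal 2D version of that pattern using mpi4py and NumPy; the array size, the row-slab decomposition, and the use of MPI's ordinary (in-order) Alltoall in place of the patent's randomized ordering are all simplifying assumptions.

```python
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, P = comm.Get_rank(), comm.Get_size()

N = 8 * P                          # global N x N array, N divisible by P (assumption)
rows = N // P                      # each rank owns a contiguous slab of rows

rng = np.random.default_rng(rank)
local = rng.random((rows, N)) + 1j * rng.random((rows, N))

# 1) 1D FFT along the dimension that is local to this rank
local = np.fft.fft(local, axis=1)

# 2) global transpose via all-to-all: carve the slab into P column blocks and
#    send block p to rank p (the patent randomizes this order; MPI's plain
#    Alltoall is used here for simplicity)
send = np.ascontiguousarray(local.reshape(rows, P, rows).transpose(1, 0, 2))
recv = np.empty_like(send)
comm.Alltoall(send, recv)
local_t = recv.transpose(2, 0, 1).reshape(rows, N)   # slab of the transposed array

# 3) second 1D FFT, now along what was originally the first dimension;
#    the result is the 2D FFT, stored transposed and row-distributed
result = np.fft.fft(local_t, axis=1)
```

    Run under mpirun with any number of ranks; each rank ends up holding a row slab of the transposed 2D FFT of the globally assembled input.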

  20. computers

    National Nuclear Security Administration (NNSA)

    Each successive generation of computing system has provided greater computing power and energy efficiency.

    CTS-1 clusters will support NNSA's Life Extension Program and...

  1. Privacy and Security Research Group workshop on network and distributed system security: Proceedings

    SciTech Connect (OSTI)

    Not Available

    1993-05-01

    This report contains papers on the following topics: NREN Security Issues: Policies and Technologies; Layer Wars: Protect the Internet with Network Layer Security; Electronic Commission Management; Workflow 2000 - Electronic Document Authorization in Practice; Security Issues of a UNIX PEM Implementation; Implementing Privacy Enhanced Mail on VMS; Distributed Public Key Certificate Management; Protecting the Integrity of Privacy-enhanced Electronic Mail; Practical Authorization in Large Heterogeneous Distributed Systems; Security Issues in the Truffles File System; Issues surrounding the use of Cryptographic Algorithms and Smart Card Applications; Smart Card Augmentation of Kerberos; and An Overview of the Advanced Smart Card Access Control System. Selected papers were processed separately for inclusion in the Energy Science and Technology Database.

  2. Data-aware distributed scientific computing for big-data problems...

    Office of Scientific and Technical Information (OSTI)


  3. Technologies and tools for high-performance distributed computing. Final report

    SciTech Connect (OSTI)

    Karonis, Nicholas T.

    2000-05-01

    In this project we studied the practical use of the MPI message-passing interface in advanced distributed computing environments. We built on the existing software infrastructure provided by the Globus Toolkit, the MPICH portable implementation of MPI, and the MPICH-G integration of MPICH with Globus. As a result of this project we have replaced MPICH-G with its successor MPICH-G2, which is also an integration of MPICH with Globus. MPICH-G2 delivers significant improvements in message passing performance when compared to its predecessor MPICH-G and was based on superior software design principles, resulting in a software base in which it was much easier to make the functional extensions and improvements we did. Using Globus services we replaced the default implementation of MPI's collective operations in MPICH-G2 with more efficient multilevel topology-aware collective operations which, in turn, led to the development of a new timing methodology for broadcasts [8]. MPICH-G2 was extended to include client/server functionality from the MPI-2 standard [23] to facilitate remote visualization applications and, through the use of MPI idioms, MPICH-G2 provided application-level control of quality-of-service parameters as well as application-level discovery of underlying Grid-topology information. Finally, MPICH-G2 was successfully used in a number of applications including an award-winning record-setting computation in numerical relativity. In the sections that follow we describe in detail the accomplishments of this project, we present experimental results quantifying the performance improvements, and conclude with a discussion of our application experiences. This project resulted in a significant increase in the utility of MPICH-G2.
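    As a small illustration of what a "multilevel topology-aware" collective means (this is a generic sketch, not MPICH-G2 code), the following mpi4py fragment broadcasts in two levels: once across one leader per shared-memory node, standing in for the slow wide-area links, then again within each node over the fast local links.

```python
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

# level 1: group ranks that share a node (a stand-in for "same site/cluster")
local = comm.Split_type(MPI.COMM_TYPE_SHARED)
local_rank = local.Get_rank()

# level 2: one leader per node forms the wide-area communicator
leaders = comm.Split(0 if local_rank == 0 else MPI.UNDEFINED, rank)

msg = {"payload": list(range(4))} if rank == 0 else None

# broadcast across the slow (wide-area) links only among leaders...
if leaders != MPI.COMM_NULL:
    msg = leaders.bcast(msg, root=0)
# ...then fan out over the fast local links
msg = local.bcast(msg, root=0)

print(f"rank {rank} got {msg}")
```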

  4. WITNESSING GAS MIXING IN THE METAL DISTRIBUTION OF THE HICKSON COMPACT GROUP HCG 31

    SciTech Connect (OSTI)

    Torres-Flores, S.; Alfaro-Cuello, M.; De Oliveira, C. Mendes; Amram, P.; Carrasco, E. R.

    2015-01-01

    We present for the first time direct evidence that in a merger of disk galaxies, the pre-existing central metallicities will mix as a result of gas being transported in the merger interface region along the line that joins the two coalescing nuclei. This is shown using detailed two-dimensional kinematics as well as metallicity measurements for the nearby ongoing merger in the center of the compact group HCG 31. We focus on the emission line gas, which is extensive in the system. The two coalescing cores display similar oxygen abundances. While in between the two nuclei, the metallicity changes smoothly from one nucleus to the other indicating a mix of metals in this region, which is confirmed by the high-resolution Hα kinematics (R = 45,900). This nearby system is especially important because it involves the merging of two fairly low-mass and clumpy galaxies (LMC-like galaxies), making it an important system for comparison with high-redshift galaxies.

  5. Fault-tolerant quantum computation and communication on a distributed 2D array of small local systems

    SciTech Connect (OSTI)

    Fujii, K.; Yamamoto, T.; Imoto, N.; Koashi, M.

    2014-12-04

    We propose a scheme for distributed quantum computation with small local systems connected via noisy quantum channels. We show that the proposed scheme tolerates errors with probabilities ∼30% and ∼ 0.1% in quantum channels and local operations, respectively, both of which are improved substantially compared to the previous works.

  6. Computing

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Computing and Storage Requirements for FES. J. Candy, General Atomics, San Diego, CA. Presented at the DOE Technical Program Review, Hilton Washington DC/Rockville, Rockville, MD, 19-20 March 2013. Drift waves and tokamak plasma turbulence: role in the context of fusion research. * Plasma performance: In tokamak plasmas, performance is limited by turbulent radial transport of both energy and particles. * Gradient-driven: This turbulent

  7. computers

    National Nuclear Security Administration (NNSA)

    California.

    Retired computers used for cybersecurity research at Sandia National...

  8. Computer Simulation of Equilibrium Electron Beam Distribution in the Proximity of 4th Order Single Nonlinear Resonance

    SciTech Connect (OSTI)

    Kuo, C.-C.; Tsai, H.-J.; Ueng, T.-S.; Chao, A.; /SLAC

    2005-05-09

    The beam distribution of particles in a storage ring can be distorted in the presence of nonlinear resonances. Computer simulation is used to study the equilibrium distribution of an electron beam in the presence of a single 4th-order nonlinear resonance in a storage ring. The result is compared with that obtained using an analytical approach, solving the Fokker-Planck equation to first order in the resonance strength. The effect of the resonance on the quantum lifetime of the electron beam is also investigated.

  9. Parallel, distributed and GPU computing technologies in single-particle electron microscopy

    SciTech Connect (OSTI)

    Schmeisser, Martin; Heisen, Burkhard C.; Luettich, Mario; Busche, Boris; Hauer, Florian; Koske, Tobias; Knauber, Karl-Heinz; Stark, Holger

    2009-07-01

    An introduction to the current paradigm shift towards concurrency in software. Most known methods for the determination of the structure of macromolecular complexes are limited or at least restricted at some point by their computational demands. Recent developments in information technology such as multicore, parallel and GPU processing can be used to overcome these limitations. In particular, graphics processing units (GPUs), which were originally developed for rendering real-time effects in computer games, are now ubiquitous and provide unprecedented computational power for scientific applications. Each parallel-processing paradigm alone can improve overall performance; the increased computational performance obtained by combining all paradigms, unleashing the full power of today's technology, makes certain applications feasible that were previously virtually impossible. In this article, state-of-the-art paradigms are introduced, the tools and infrastructure needed to apply these paradigms are presented and a state-of-the-art infrastructure and solution strategy for moving scientific applications to the next generation of computer hardware is outlined.

  10. Computations

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)


  11. Computing

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Office of Advanced Scientific Computing Research in the Department of Energy Office of Science under contract number DE-AC02-05CH11231. Application and System Memory Use, Configuration, and Problems on Bassi. Richard Gerber, Lawrence Berkeley National Laboratory, NERSC User Services. ScicomP 13, Garching bei München, Germany, July 17, 2007. Overview: About Bassi; Memory on Bassi; Large Page Memory (It's Great!); System Configuration; Large Page

  12. Proceedings of the sixth Berkeley workshop on distributed data management and computer networks

    SciTech Connect (OSTI)

    Various Authors

    1982-01-01

    A distributed data base management system allows data to be stored at multiple locations and to be accessed as a single unified data base. In this workshop, seventeen papers were presented which have been prepared separately for the energy data base. These items deal with data transfer, protocols and management. (GHT)

  13. System and method for secure group transactions

    DOE Patents [OSTI]

    Goldsmith, Steven Y.

    2006-04-25

    A method and a secure system, processing on one or more computers, provides a way to control a group transaction. The invention uses group consensus access control and multiple distributed secure agents in a network environment. Each secure agent can organize with the other secure agents to form a secure distributed agent collective.

  14. Parallel paving: An algorithm for generating distributed, adaptive, all-quadrilateral meshes on parallel computers

    SciTech Connect (OSTI)

    Lober, R.R.; Tautges, T.J.; Vaughan, C.T.

    1997-03-01

    Paving is an automated mesh generation algorithm which produces all-quadrilateral elements. It can additionally generate these elements in varying sizes such that the resulting mesh adapts to a function distribution, such as an error function. While powerful, conventional paving is a very serial algorithm in its operation. Parallel paving is the extension of serial paving into parallel environments to perform the same meshing functions as conventional paving, only on distributed, discretized models. This extension allows large, adaptive, parallel finite element simulations to take advantage of paving's meshing capabilities for h-remap remeshing. A significantly modified version of the CUBIT mesh generation code has been developed to host the parallel paving algorithm and demonstrate its capabilities on both two-dimensional and three-dimensional surface geometries, and to compare the resulting parallel-produced meshes to conventionally paved meshes for mesh quality and algorithm performance. Sandia's "tiling" dynamic load balancing code has also been extended to work with the paving algorithm to retain parallel efficiency as subdomains undergo iterative mesh refinement.

  15. DualTrust: A Distributed Trust Model for Swarm-Based Autonomic Computing Systems

    SciTech Connect (OSTI)

    Maiden, Wendy M.; Dionysiou, Ioanna; Frincke, Deborah A.; Fink, Glenn A.; Bakken, David E.

    2011-02-01

    For autonomic computing systems that utilize mobile agents and ant colony algorithms for their sensor layer, trust management is important for the acceptance of the mobile agent sensors and to protect the system from malicious behavior by insiders and entities that have penetrated network defenses. This paper examines the trust relationships, evidence, and decisions in a representative system and finds that by monitoring the trustworthiness of the autonomic managers rather than the swarming sensors, the trust management problem becomes much more scalable and still serves to protect the swarm. We then propose the DualTrust conceptual trust model. By addressing the autonomic manager’s bi-directional primary relationships in the ACS architecture, DualTrust is able to monitor the trustworthiness of the autonomic managers, protect the sensor swarm in a scalable manner, and provide global trust awareness for the orchestrating autonomic manager.

  16. Distribution:

    Office of Legacy Management (LM)

    UNITED STATES ATOMIC ENERGY COMMISSION SPECIAL NUCLEAR MATERIAL LICENSE. Pursuant to the Atomic Energy Act of 1954 and Title 10, Code of Federal Regulations, Chapter 1, Part 70, "Special Nuclear Material Regulations," a license is hereby issued authorizing the licensee to receive

  17. Nonequilibrium critical relaxation of structurally disordered systems in the short-time regime: Renormalization group description and computer simulation

    SciTech Connect (OSTI)

    Prudnikov, V. V. Prudnikov, P. V.; Kalashnikov, I. A.; Rychkov, M. V.

    2010-02-15

    The influence of nonequilibrium initial states on the evolution of anisotropic systems with quenched uncorrelated structural defects at the critical point is studied. The field-theoretical description of the nonequilibrium critical behavior of 3D systems is obtained for the first time, and the dynamic critical exponent of the short-time evolution in the two-loop approximation without the use of ε expansion is calculated. The values of dynamic critical exponents calculated using the series resummation methods are compared with the results of computer simulation of nonequilibrium critical behavior of the 3D disordered Ising model in the short-time regime. It is demonstrated that the values of the critical exponents calculated in this paper are in better agreement with the results of computer simulation than the results of application of ε expansion.

  18. SMALL-SCALE AND GLOBAL DYNAMOS AND THE AREA AND FLUX DISTRIBUTIONS OF ACTIVE REGIONS, SUNSPOT GROUPS, AND SUNSPOTS: A MULTI-DATABASE STUDY

    SciTech Connect (OSTI)

    Muñoz-Jaramillo, Andrés; Windmueller, John C.; Amouzou, Ernest C.; Longcope, Dana W.; Senkpeil, Ryan R.; Tlatov, Andrey G.; Nagovitsyn, Yury A.; Pevtsov, Alexei A.; Chapman, Gary A.; Cookson, Angela M.; Yeates, Anthony R.; Watson, Fraser T.; Balmaceda, Laura A.; DeLuca, Edward E.; Martens, Petrus C. H.

    2015-02-10

    In this work, we take advantage of 11 different sunspot group, sunspot, and active region databases to characterize the area and flux distributions of photospheric magnetic structures. We find that, when taken separately, different databases are better fitted by different distributions (as has been reported previously in the literature). However, we find that all our databases can be reconciled by the simple application of a proportionality constant, and that, in reality, different databases are sampling different parts of a composite distribution. This composite distribution is made up by a linear combination of Weibull and log-normal distributions—where a pure Weibull (log-normal) characterizes the distribution of structures with fluxes below (above) 10^21 Mx (10^22 Mx). Additionally, we demonstrate that the Weibull distribution shows the expected linear behavior of a power-law distribution (when extended to smaller fluxes), making our results compatible with the results of Parnell et al. We propose that this is evidence of two separate mechanisms giving rise to visible structures on the photosphere: one directly connected to the global component of the dynamo (and the generation of bipolar active regions), and the other with the small-scale component of the dynamo (and the fragmentation of magnetic structures due to their interaction with turbulent convection).

  19. NERSC Enhances PDSF, Genepool Computing Capabilities

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Enhances PDSF, Genepool Computing Capabilities NERSC Enhances PDSF, Genepool Computing Capabilities Linux cluster expansion speeds data access and analysis January 3, 2014 Christmas came early for users of the Parallel Distributed Systems Facility (PDSF) and Genepool systems at the Department of Energy's National Energy Research Scientific Computing Center (NERSC). Throughout November members of NERSC's Computational Systems Group were busy expanding the Linux computing resources that support PDSF's

  20. Distributed computing strategies for processing of FT-ICR MS imaging datasets for continuous mode data visualization

    SciTech Connect (OSTI)

    Smith, Donald F.; Schulz, Carl; Konijnenburg, Marco; Kilic, Mehmet; Heeren, Ronald M.

    2015-03-01

    High-resolution Fourier transform ion cyclotron resonance (FT-ICR) mass spectrometry imaging enables the spatial mapping and identification of biomolecules from complex surfaces. The need for long time-domain transients, and thus large raw file sizes, results in a large amount of raw data (“big data”) that must be processed efficiently and rapidly. This can be compounded by large-area imaging and/or high spatial resolution imaging. For FT-ICR, data processing and data reduction must not compromise the high mass resolution afforded by the mass spectrometer. The continuous mode “Mosaic Datacube” approach allows high mass resolution visualization (0.001 Da) of mass spectrometry imaging data, but requires additional processing as compared to feature-based processing. We describe the use of distributed computing for processing of FT-ICR MS imaging datasets with generation of continuous mode Mosaic Datacubes for high mass resolution visualization. An eight-fold improvement in processing time is demonstrated using a Dutch nationally available cloud service.
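    As a toy stand-in for the distributed processing described above (the real pipeline runs on a cloud service and produces Mosaic Datacubes), the sketch below reduces one synthetic time-domain transient per pixel to a magnitude spectrum in parallel with Python's multiprocessing; the reduction function and image size are invented for illustration.

```python
import numpy as np
from multiprocessing import Pool

def transient_to_spectrum(transient):
    """Toy reduction of one time-domain transient to a magnitude spectrum.
    (A real FT-ICR pipeline would apodize, zero-fill, and calibrate m/z.)"""
    return np.abs(np.fft.rfft(transient * np.hanning(transient.size)))

if __name__ == "__main__":
    # one synthetic transient per pixel of a tiny 8 x 8 image
    rng = np.random.default_rng(1)
    transients = [rng.standard_normal(4096) for _ in range(64)]

    with Pool() as pool:                       # distribute pixels over local cores
        spectra = pool.map(transient_to_spectrum, transients)

    datacube = np.stack(spectra).reshape(8, 8, -1)   # crude stand-in for a datacube
    print(datacube.shape)
```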

    1. Metal distributions out to 0.5 r_180 in the intracluster medium of four galaxy groups observed with Suzaku

      SciTech Connect (OSTI)

      Sasaki, Toru; Matsushita, Kyoko; Sato, Kosuke E-mail: matusita@rs.kagu.tus.ac.jp

      2014-01-20

      We studied the distributions of metal abundances and metal-mass-to-light ratios in the intracluster medium (ICM) of four galaxy groups, MKW 4, HCG 62, the NGC 1550 group, and the NGC 5044 group, out to ∼0.5 r_180 observed with Suzaku. The iron abundance decreases with radius and is about 0.2-0.4 solar beyond 0.1 r_180. At a given radius in units of r_180, the iron abundance in the ICM of the four galaxy groups was consistent with or smaller than those of clusters of galaxies. The Mg/Fe and Si/Fe ratios in the ICM are nearly constant at the solar ratio out to 0.5 r_180. We also studied systematic uncertainties in the derived metal abundances, comparing the results from two versions of atomic data for astrophysicists (ATOMDB) and single- and two-temperature model fits. Since the metals have been synthesized in galaxies, we collected K-band luminosities of galaxies from the Two Micron All Sky Survey catalog and calculated the integrated iron-mass-to-light-ratios (IMLR), or the ratios of the iron mass in the ICM to light from stars in galaxies. The groups with smaller gas-mass-to-light ratios have smaller IMLR values and the IMLR is inversely correlated with the entropy excess. Based on these abundance features, we discussed the past history of metal enrichment processes in groups of galaxies.

    2. Computing Videos

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Computing Videos

    3. Software/Computing | Argonne National Laboratory

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Software/Computing Software/Computing Argonne is the central site for work on database and data management. The group has key responsibilities in the design and implementation of the I/O model which must provide distributed access to many petabytes of data for both event reconstruction and physics analysis. The group deployed a number of HEP packages on the BlueGene/Q supercomputer of the Argonne Leadership Computing Facility, and currently generates CPU-intensive Monte Carlo event samples for

    4. Final report and documentation for the security enabled programmable switch for protection of distributed internetworked computers LDRD.

      SciTech Connect (OSTI)

      Van Randwyk, Jamie A.; Robertson, Perry J.; Durgin, Nancy Ann; Toole, Timothy J.; Kucera, Brent D.; Campbell, Philip LaRoche; Pierson, Lyndon George

      2010-02-01

      An increasing number of corporate security policies make it desirable to push security closer to the desktop. It is not practical or feasible to place security and monitoring software on all computing devices (e.g. printers, personal digital assistants, copy machines, legacy hardware). We have begun to prototype a hardware and software architecture that will enforce security policies by pushing security functions closer to the end user, whether in the office or home, without interfering with users' desktop environments. We are developing a specialized programmable Ethernet network switch to achieve this. Embodied in this device is the ability to detect and mitigate network attacks that would otherwise disable or compromise the end user's computing nodes. We call this device a 'Secure Programmable Switch' (SPS). The SPS is designed with the ability to be securely reprogrammed in real time to counter rapidly evolving threats such as fast moving worms, etc. This ability to remotely update the functionality of the SPS protection device is cryptographically protected from subversion. With this concept, the user cannot turn off or fail to update virus scanning and personal firewall filtering in the SPS device as he/she could if implemented on the end host. The SPS concept also provides protection to simple/dumb devices such as printers, scanners, legacy hardware, etc. This report also describes the development of a cryptographically protected processor and its internal architecture in which the SPS device is implemented. This processor executes code correctly even if an adversary holds the processor. The processor guarantees both the integrity and the confidentiality of the code: the adversary cannot determine the sequence of instructions, nor can the adversary change the instruction sequence in a goal-oriented way.

    5. Distributed computing for signal processing: modeling of asynchronous parallel computation. Appendix C. Fault-tolerant interconnection networks and image-processing applications for the PASM parallel processing systems. Final report

      SciTech Connect (OSTI)

      Adams, G.B.

      1984-12-01

      The demand for very-high-speed data processing coupled with falling hardware costs has made large-scale parallel and distributed computer systems both desirable and feasible. Two modes of parallel processing are single-instruction stream-multiple data stream (SIMD) and multiple instruction stream - multiple data stream (MIMD). PASM, a partitionable SIMD/MIMD system, is a reconfigurable multimicroprocessor system being designed for image processing and pattern recognition. An important component of these systems is the interconnection network, the mechanism for communication among the computation nodes and memories. Assuring high reliability for such complex systems is a significant task. Thus, a crucial practical aspect of an interconnection network is fault tolerance. In answer to this need, the Extra Stage Cube (ESC), a fault-tolerant, multistage cube-type interconnection network, is defined. The fault tolerance of the ESC is explored for both single and multiple faults, routing tags are defined, and consideration is given to permuting data and partitioning the ESC in the presence of faults. The ESC is compared with other fault-tolerant multistage networks. Finally, reliability of the ESC and an enhanced version of it are investigated.
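      For readers unfamiliar with routing tags in multistage cube-type networks: in the standard scheme, a tag is formed by XORing the source and destination addresses, and bit i tells the switch in the stage that controls address bit i whether to pass straight or exchange; the ESC's extra stage provides an alternative path when a stage is faulty. The sketch below computes only the basic XOR tag; it is an illustration of the general scheme, not the report's exact fault-tolerant tag algorithm.

```python
def cube_routing_tag(src: int, dst: int, n_stages: int) -> list[int]:
    """Standard XOR routing tag for an n-stage cube-type network:
    bit i says whether the stage controlling address bit i must
    'exchange' (1) or pass 'straight' (0)."""
    tag = src ^ dst
    return [(tag >> i) & 1 for i in range(n_stages)]

# 8-node (3-stage) example: route from node 3 to node 6
print(cube_routing_tag(3, 6, 3))   # [1, 0, 1] -> exchange, straight, exchange
```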

    6. BOC Group | Open Energy Information

      Open Energy Info (EERE)

      Group Jump to: navigation, search Name: BOC Group Place: United Kingdom Zip: GU20 6HJ Sector: Services Product: UK-based industrial gases, vacuum technologies and distribution...

    7. Development of an Extensible Computational Framework for Centralized Storage and Distributed Curation and Analysis of Genomic Data Genome-scale Metabolic Models

      SciTech Connect (OSTI)

      Stevens, Rick

      2010-08-01

      The DOE funded KBase project of the Stevens group at the University of Chicago was focused on four high-level goals: (i) improve extensibility, accessibility, and scalability of the SEED framework for genome annotation, curation, and analysis; (ii) extend the SEED infrastructure to support transcription regulatory network reconstructions (2.1), metabolic model reconstruction and analysis (2.2), assertions linked to data (2.3), eukaryotic annotation (2.4), and growth phenotype prediction (2.5); (iii) develop a web-API for programmatic remote access to SEED data and services; and (iv) application of all tools to bioenergy-related genomes and organisms. In response to these goals, we enhanced and improved the ModelSEED resource within the SEED to enable new modeling analyses, including improved model reconstruction and phenotype simulation. We also constructed a new website and web-API for the ModelSEED. Further, we constructed a comprehensive web-API for the SEED as a whole. We also made significant strides in building infrastructure in the SEED to support the reconstruction of transcriptional regulatory networks by developing a pipeline to identify sets of consistently expressed genes based on gene expression data. We applied this pipeline to 29 organisms, computing regulons which were subsequently stored in the SEED database and made available on the SEED website (http://pubseed.theseed.org). We developed a new pipeline and database for the use of kmers, or short 8-residue oligomer sequences, to annotate genomes at high speed. Finally, we developed the PlantSEED, or a new pipeline for annotating primary metabolism in plant genomes. All of the work performed within this project formed the early building blocks for the current DOE Knowledgebase system, and the kmer annotation pipeline, plant annotation pipeline, and modeling tools are all still in use in KBase today.
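      The kmer annotation pipeline mentioned above matches short 8-residue oligomers against precomputed signatures. As a minimal illustration of the matching step (the signature table, role names, and protein sequence below are entirely hypothetical, and the real SEED pipeline is far more elaborate):

```python
from collections import Counter

def kmers(sequence: str, k: int = 8):
    """Yield all overlapping k-residue oligomers in a protein sequence."""
    for i in range(len(sequence) - k + 1):
        yield sequence[i:i + k]

# toy "signature" table mapping 8-mers to functional roles (entirely hypothetical)
signature_roles = {"MKTAYIAK": "Example role A", "GLSDGEWQ": "Example role B"}

protein = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"
hits = Counter(role for kmer in kmers(protein)
               if (role := signature_roles.get(kmer)))
print(hits)   # Counter({'Example role A': 1})
```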

    8. HASQARD Focus Group

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      been distributed to the Focus Group prior to the meeting. The comments that required editorial changes to the document were made in the working electronic version. b. At the June...

    9. Important role of the non-uniform Fe distribution for the ferromagnetism in group-IV-based ferromagnetic semiconductor GeFe

      SciTech Connect (OSTI)

      Wakabayashi, Yuki K.; Ohya, Shinobu; Ban, Yoshisuke; Tanaka, Masaaki [Department of Electrical Engineering and Information Systems, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8656 (Japan)

      2014-11-07

      We investigate the growth-temperature dependence of the properties of the group-IV-based ferromagnetic semiconductor Ge1-xFex films (x = 6.5% and 10.5%), and reveal the correlation of the magnetic properties with the lattice constant, Curie temperature (T_C), non-uniformity of Fe atoms, stacking-fault defects, and Fe-atom locations. While T_C strongly depends on the growth temperature, we find a universal relationship between T_C and the lattice constant, which does not depend on the Fe content x. By using the spatially resolved transmission-electron diffractions combined with the energy-dispersive X-ray spectroscopy, we find that the density of the stacking-fault defects and the non-uniformity of the Fe concentration are correlated with T_C. Meanwhile, by using the channeling Rutherford backscattering and particle-induced X-ray emission measurements, we clarify that about 15% of the Fe atoms exist on the tetrahedral interstitial sites in the Ge0.935Fe0.065 lattice and that the substitutional Fe concentration is not correlated with T_C. Considering these results, we conclude that the non-uniformity of the Fe concentration plays an important role in determining the ferromagnetic properties of GeFe.

    10. Applied Computer Science

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      7 Applied Computer Science Innovative co-design of applications, algorithms, and architectures in order to enable scientific simulations at extreme scale Leadership Group Leader ...

    11. Group X

      SciTech Connect (OSTI)

      Fields, Susannah

      2007-08-16

      This project is currently under contract for research through the Department of Homeland Security until 2011. The group I was responsible for studying has to remain confidential so as not to affect the current project. All dates, reference links and authors, and other distinguishing characteristics of the original group have been removed from this report. All references to the name of this group or the individual splinter groups have been changed to 'Group X'. I have been collecting texts from a variety of sources intended for the use of recruiting and radicalizing members for Group X splinter groups for the purpose of researching the motivation and intent of leaders of those groups and their influence over the likelihood of group radicalization. This work included visiting many Group X websites to find information on splinter group leaders and finding their statements to new and old members. This proved difficult because the splinter groups of Group X are united in beliefs, but differ in public opinion. They are eager to tear each other down, prove their superiority, and yet remain anonymous. After a few weeks of intense searching, a list of eight recruiting texts and eight radicalizing texts from a variety of Group X leaders was compiled.

    12. Galaxy groups

      SciTech Connect (OSTI)

      Brent Tully, R.

      2015-02-01

      Galaxy groups can be characterized by the radius of decoupling from cosmic expansion, the radius of the caustic of second turnaround, and the velocity dispersion of galaxies within this latter radius. These parameters can be a challenge to measure, especially for small groups with few members. In this study, results are gathered pertaining to particularly well-studied groups over four decades in group mass. Scaling relations anticipated from theory are demonstrated and coefficients of the relationships are specified. There is an update of the relationship between light and mass for groups, confirming that groups with mass of a few times 10^12 M_⊙ are the most lit up while groups with more and less mass are darker. It is demonstrated that there is an interesting one-to-one correlation between the number of dwarf satellites in a group and the group mass. There is the suggestion that small variations in the slope of the luminosity function in groups are caused by the degree of depletion of intermediate luminosity systems rather than variations in the number per unit mass of dwarfs. Finally, returning to the characteristic radii of groups, the ratio of first to second turnaround depends on the dark matter and dark energy content of the universe and a crude estimate can be made from the current observations of Ω_matter ∼ 0.15 in a flat topology, with a 68% probability of being less than 0.44.

    13. Specific Group Hardware

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Specific Group Hardware Specific Group Hardware ALICE palicevo1 The Virtual Organization (VO) server. Serves as gatekeeper for ALICE jobs. Its duties include getting assignments from the ALICE file catalog (at CERN), submitting jobs to pdsfgrid (via condor) which submits jobs to the compute nodes, monitoring the cluster workload, and uploading job information to the ALICE file catalog. It is monitored with MonALISA (the monitoring page is here). It is made up of 2 Intel Xeon E5520 processors each with

    14. Computer, Computational, and Statistical Sciences

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      CCS Computer, Computational, and Statistical Sciences Computational physics, computer science, applied mathematics, statistics and the integration of large data streams are central ...

    15. Computer Processor Allocator

      Energy Science and Technology Software Center (OSTI)

      2004-03-01

      The Compute Processor Allocator (CPA) provides an efficient and reliable mechanism for managing and allotting processors in a massively parallel (MP) computer. It maintains information in a database on the health, configuration, and allocation of each processor. This persistent information is factored into each allocation decision. The CPA runs in a distributed fashion to avoid a single point of failure.
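      A minimal in-memory sketch of the bookkeeping the abstract describes: per-processor health and allocation records that are consulted on every allocation decision. The real CPA persists this state in a database and runs distributed; the class and method names below are illustrative only.

```python
class ProcessorAllocator:
    """Toy allocator: tracks health and ownership of each processor and
    hands out only healthy, free processors (a sketch of the CPA idea)."""

    def __init__(self, n_procs: int):
        self.state = {p: {"healthy": True, "job": None} for p in range(n_procs)}

    def mark_unhealthy(self, proc: int):
        self.state[proc]["healthy"] = False

    def allocate(self, job_id: str, count: int) -> list[int]:
        free = [p for p, s in self.state.items() if s["healthy"] and s["job"] is None]
        if len(free) < count:
            raise RuntimeError("not enough healthy free processors")
        chosen = free[:count]
        for p in chosen:
            self.state[p]["job"] = job_id
        return chosen

    def release(self, job_id: str):
        for s in self.state.values():
            if s["job"] == job_id:
                s["job"] = None

cpa = ProcessorAllocator(8)
cpa.mark_unhealthy(3)
print(cpa.allocate("job-42", 4))   # e.g. [0, 1, 2, 4]
```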

    16. Link failure detection in a parallel computer

      DOE Patents [OSTI]

      Archer, Charles J.; Blocksome, Michael A.; Megerian, Mark G.; Smith, Brian E.

      2010-11-09

      Methods, apparatus, and products are disclosed for link failure detection in a parallel computer including compute nodes connected in a rectangular mesh network, each pair of adjacent compute nodes in the rectangular mesh network connected together using a pair of links, that includes: assigning each compute node to either a first group or a second group such that adjacent compute nodes in the rectangular mesh network are assigned to different groups; sending, by each of the compute nodes assigned to the first group, a first test message to each adjacent compute node assigned to the second group; determining, by each of the compute nodes assigned to the second group, whether the first test message was received from each adjacent compute node assigned to the first group; and notifying a user, by each of the compute nodes assigned to the second group, whether the first test message was received.
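      A hedged sketch of the patent's checkerboard scheme using mpi4py on a Cartesian communicator: nodes are split into two groups by the parity of their mesh coordinates, the first group sends one test message per incident link, and the second group reports links on which no test message arrived. The fixed settling time stands in for whatever timeout logic a production implementation would use.

```python
import time
from mpi4py import MPI

comm = MPI.COMM_WORLD
dims = MPI.Compute_dims(comm.Get_size(), [0, 0])      # rectangular mesh shape
cart = comm.Create_cart(dims, periods=[False, False])
x, y = cart.Get_coords(cart.Get_rank())
group = (x + y) % 2                                    # checkerboard assignment

# adjacent ranks in the mesh (PROC_NULL where there is no neighbor)
neighbors = []
for dim in (0, 1):
    lo, hi = cart.Shift(dim, 1)
    neighbors += [r for r in (lo, hi) if r != MPI.PROC_NULL]

if group == 0:
    # first group: send one test message over each incident link
    reqs = [cart.isend("ping", dest=n, tag=7) for n in neighbors]
    for r in reqs:
        r.wait()
else:
    # second group: check that a test message arrived on every incident link
    pending = {n: cart.irecv(source=n, tag=7) for n in neighbors}
    time.sleep(0.5)                # crude settling time for this sketch
    for n, req in pending.items():
        ok, _ = req.test()
        if not ok:
            print(f"rank {cart.Get_rank()}: no test message from {n}; "
                  f"link to {n} may have failed")
            req.Cancel()
```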

    17. Applied Computer Science

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      7 Applied Computer Science Innovative co-design of applications, algorithms, and architectures in order to enable scientific simulations at extreme scale Leadership Group Leader Linn Collins Email Deputy Group Leader (Acting) Bryan Lally Email Climate modeling visualization Results from a climate simulation computed using the Model for Prediction Across Scales (MPAS) code. This visualization shows the temperature of ocean currents using a green and blue color scale. These colors were

    18. Computational Earth Science

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      6 Computational Earth Science We develop and apply a range of high-performance computational methods and software tools to Earth science projects in support of environmental health, cleaner energy, and national security. Contact Us Group Leader Carl Gable Deputy Group Leader Gilles Bussod Email Profile pages header Search our Profile pages Hari Viswanathan inspects a microfluidic cell used to study the extraction of hydrocarbon fuels from a complex fracture network. EES-16's Subsurface Flow

    19. Welcome - Modeling and Simulation Group

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      CCS Directorate ORNL Modeling and Simulation Group Computational Sciences & Engineering Division Home Organization Chart Staff Research Areas Major Projects Fact Sheets Publications M&S News Awards Contacts Internship Programs ORNL has lots of opportunities for students to conduct research in scientific fields. Check out our Fellowship and Internship programs Fellowships Internships RAMS Program Modeling and Simulation Group The ORNL Modeling and Simulation Group (MSG) develops

    20. Compute nodes

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Compute nodes Compute nodes Click here to see a more detailed hierarchical map of the topology of a compute node.

    1. Research Groups - Cyclotron Institute

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Research Groups Research Group Homepages: Nuclear Theory Group Dr. Sherry Yennello's Research Group Dr. Dan Melconian's Research Group Dr. Cody Folden's Group...

    2. Computer System,

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      undergraduate summer institute http:isti.lanl.gov (Educational Prog) 2016 Computer System, Cluster, and Networking Summer Institute Purpose The Computer System,...

    3. Human collective dynamics: Two groups in adversarial encounter. [melete code

      SciTech Connect (OSTI)

      Sandoval, D.L.; Harlow, F.H.; Genin, K.E.

      1988-04-01

      The behavior of a group of people depends strongly on the interaction of personal (individual) traits with the collective moods of the group as a whole. We have developed a computer program to model circumstances of this nature with recognition of the crucial role played by such psychological properties as fear, excitement, peer pressure, moral outrage, and anger, together with the distribution among participants of intrinsic susceptibilities to these emotions. This report extends previous work to consider two groups of people in adversarial encounter, for example, two platoons in battle, a SWAT team against rioting prisoners, or opposing mobs of different ethnic backgrounds. Closely related applications of the modeling include prowling groups of predatory animals interacting with herds of prey, and even the 'slow-mob' behavior of social or political units in their response to legislative or judicial activities. Examples in the present study emphasize battlefield encounters, with each group characterized by its susceptibilities, skills, and other manifestations of both intentional and accidental circumstances. Specifically, we investigate the relative importance of leadership, camaraderie, training level (i.e. skill in firing weapons), bravery, excitability, and dedication in the battle performance of personnel with random or specified distributions of capabilities and susceptibilities in these various regards. The goal is to exhibit the probable outcome of these encounters in circumstances involving specified battle goals and distributions of terrain impediments. A collateral goal is to provide a real-time hands-on battle simulator into which a leadership trainee can insert his own interactive command.

    4. Snowmass Computing Frontier I2: Distributed

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Panelists from different parts of the grid world: operations, technology, security, big thinking. The Snowmass report will summarize the discussion. Listened carefully to...

    5. Computing Information

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Information From here you can find information relating to: Obtaining the right computer accounts. Using NIC terminals. Using BooNE's Computing Resources, including: Choosing your desktop. Kerberos. AFS. Printing. Recommended applications for various common tasks. Running CPU- or IO-intensive programs (batch jobs) Commonly encountered problems Computing support within BooNE Bringing a computer to FNAL, or purchasing a new one. Laptops. The Computer Security Program Plan for MiniBooNE The

    6. Prabhat Steps In as DAS Group Lead

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Prabhat Steps In as DAS Group Lead Prabhat Steps In as DAS Group Lead September 1, 2014 prabhat Prabhat has been named Group Lead of the Data and Analytics Services (DAS) Group at the Department of Energy's National Energy Research Scientific Computing Center (NERSC). The DAS group helps NERSC's users address data and analytics challenges arising from the increasing size and complexity of data from simulations and experiments. As the DAS Group Lead, Prabhat will play a key role in developing and

    7. Measurements and computations of room airflow with displacement ventilation

      SciTech Connect (OSTI)

      Yuan, X.; Chen, Q.; Glicksman, L.R.; Hu, Y.; Yang, X.

      1999-07-01

      This paper presents a set of detailed experimental data of room airflow with displacement ventilation. These data were obtained from a new environmental test facility. The measurements were conducted for three typical room configurations: a small office, a large office with partitions, and a classroom. The distributions of air velocity, air velocity fluctuation, and air temperature were measured by omnidirectional hot-sphere anemometers, and contaminant concentrations were measured by tracer gas at 54 points in the rooms. Smoke was used to observe airflow. The data also include the wall surface temperature distribution, air supply parameters, and the age of air at several locations in the rooms. A computational fluid dynamics (CFD) program with the Re-Normalization Group (RNG) κ-ε model was also used to predict the indoor airflow. The agreement between the computed results and measured data of air temperature and velocity is good. However, some discrepancies exist in the computed and measured concentrations and velocity fluctuation.

    8. Security and Policy for Group Collaboration

      SciTech Connect (OSTI)

      Ian Foster; Carl Kesselman

      2006-07-31

      “Security and Policy for Group Collaboration” was a Collaboratory Middleware research project aimed at providing the fundamental security and policy infrastructure required to support the creation and operation of distributed, computationally enabled collaborations. The project developed infrastructure that exploits innovative new techniques to address challenging issues of scale, dynamics, distribution, and role. To reduce greatly the cost of adding new members to a collaboration, we developed and evaluated new techniques for creating and managing credentials based on public key certificates, including support for online certificate generation, online certificate repositories, and support for multiple certificate authorities. To facilitate the integration of new resources into a collaboration, we improved significantly the integration of local security environments. To make it easy to create and change the role and associated privileges of both resources and participants of collaboration, we developed community wide authorization services that provide distributed, scalable means for specifying policy. These services make it possible for the delegation of capability from the community to a specific user, class of user or resource. Finally, we instantiated our research results into a framework that makes it useable to a wide range of collaborative tools. The resulting mechanisms and software have been widely adopted within DOE projects and in many other scientific projects. The widespread adoption of our Globus Toolkit technology has provided, and continues to provide, a natural dissemination and technology transfer vehicle for our results.

    9. EIS Distribution

      Broader source: Energy.gov [DOE]

      This DOE guidance presents a series of recommendations related to the EIS distribution process, which includes creating and updating a distribution list, distributing an EIS, and filing an EIS with the EPA.

    10. # Energy Measurements Group

      Office of Legacy Management (LM)

      # Energy Measurements Group SUMMARY REPORT. AERIAL RADIOLOGICAL SURVEY - NIAGARA FALLS AREA, NIAGARA FALLS, NEW YORK. DATE OF SURVEY: SEPTEMBER 1979. APPROVED FOR DISTRIBUTION: P. Stuart, EG&G, Inc.; Herbert F. Hahn, Department of Energy. PERFORMED BY EG&G, INC. UNDER CONTRACT NO. DE-AHO&76NV01163 WITH THE UNITED STATES DEPARTMENT OF ENERGY. November 30, 1979 - The Aerial Measurements System (AMS), operated by EG&G, Inc. for the United States Department of

    11. Computing Sciences

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Division The Computational Research Division conducts research and development in mathematical modeling and simulation, algorithm design, data storage, management and...

    12. Computing Resources

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Cluster-Image TRACC RESEARCH Computational Fluid Dynamics Computational Structural Mechanics Transportation Systems Modeling Computing Resources The TRACC Computational Clusters With the addition of a new cluster called Zephyr that was made operational in September of this year (2012), TRACC now offers two clusters to choose from: Zephyr and our original cluster that has now been named Phoenix. Zephyr was acquired from Atipa technologies, and it is a 92-node system with each node having two AMD

    13. A Component Architecture for High-Performance Scientific Computing

      SciTech Connect (OSTI)

      Bernholdt, D E; Allan, B A; Armstrong, R; Bertrand, F; Chiu, K; Dahlgren, T L; Damevski, K; Elwasif, W R; Epperly, T W; Govindaraju, M; Katz, D S; Kohl, J A; Krishnan, M; Kumfert, G; Larson, J W; Lefantzi, S; Lewis, M J; Malony, A D; McInnes, L C; Nieplocha, J; Norris, B; Parker, S G; Ray, J; Shende, S; Windus, T L; Zhou, S

      2004-12-14

      The Common Component Architecture (CCA) provides a means for software developers to manage the complexity of large-scale scientific simulations and to move toward a plug-and-play environment for high-performance computing. In the scientific computing context, component models also promote collaboration using independently developed software, thereby allowing particular individuals or groups to focus on the aspects of greatest interest to them. The CCA supports parallel and distributed computing as well as local high-performance connections between components in a language-independent manner. The design places minimal requirements on components and thus facilitates the integration of existing code into the CCA environment. The CCA model imposes minimal overhead to minimize the impact on application performance. The focus on high performance distinguishes the CCA from most other component models. The CCA is being applied within an increasing range of disciplines, including combustion research, global climate simulation, and computational chemistry.

    14. Computer Security

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      computer security Computer Security All JLF participants must fully comply with all LLNL computer security regulations and procedures. A laptop entering or leaving B-174 for the sole use by a US citizen and so configured, and requiring no IP address, need not be registered for use in the JLF. By September 2009, it is expected that computers for use by Foreign National Investigators will have no special provisions. Notify maricle1@llnl.gov of all other computers entering, leaving, or being moved

    15. Compute Nodes

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Compute Nodes Compute Nodes Quad-core AMD Opteron processor Compute Node Configuration: 9,572 nodes; 1 quad-core AMD 'Budapest' 2.3 GHz processor per node; 4 cores per node (38,288 total cores); 8 GB DDR3 800 MHz memory per node. Peak Gflop rate: 9.2 Gflops/core, 36.8 Gflops/node, 352 Tflops for the entire machine. Each core has its own L1 and L2 caches, of 64 KB and 512 KB respectively; a 2 MB L3 cache is shared among the 4 cores. Compute Node Software: By default the compute nodes run a restricted low-overhead

    16. TEC Working Group Topic Groups Archives Consolidated Grant Topic Group |

      Office of Environmental Management (EM)

      Department of Energy Consolidated Grant Topic Group TEC Working Group Topic Groups Archives Consolidated Grant Topic Group The Consolidated Grant Topic Group arose from recommendations provided by the TEC and other external parties to the DOE Senior Executive Transportation Forum in July 1998. It was proposed that the consolidation of multiple funding streams from numerous DOE sources into a single grant would provide a more equitable and efficient means of assistance to States and Tribes

    17. Manufacturing Energy and Carbon Footprint - Sector: Computer...

      Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

      Computers, Electronics and Electrical Equipment (NAICS 334, 335): manufacturing energy and carbon footprint diagram covering process energy, electricity and steam generation losses, process losses, nonprocess losses, and steam distribution ...

    18. Exascale Computing

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Exascale Computing: Moving forward into the exascale era, NERSC users will place increased demands on NERSC computational facilities. Users will be facing increased complexity in the memory subsystem and node architecture. System designs and programming models will have to evolve to face these new challenges. NERSC staff are active in current initiatives addressing

    19. Computer Accounts | Stanford Synchrotron Radiation Lightsource

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Computer Accounts Each user group must have a computer account. Additionally, all persons using these accounts are responsible for understanding and complying with the terms outlined in the "Use of SLAC Information Resources". Links are provided below for computer account forms and the computer security agreement which must be completed and sent to the appropriate contact person. SSRL does not charge for use of its computer systems. Forms X-ray/VUV Computer Account Request Form

    20. Computer Science

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Cite Seer Department of Energy provided open access science research citations in chemistry, physics, materials, engineering, and computer science IEEE Xplore Full text...

    1. Compute Nodes

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      low-overhead operating system optimized for high performance computing called "Cray Linux Environment" (CLE). This OS supports only a limited number of system calls and UNIX...

    2. Computational Science

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      ... Advanced Materials Laboratory Center for Integrated Nanotechnologies Combustion Research Facility Computational Science Research Institute Joint BioEnergy Institute About EC News ...

    3. Broadcasting a message in a parallel computer

      DOE Patents [OSTI]

      Berg, Jeremy E.; Faraj, Ahmad A.

      2011-08-02

      Methods, systems, and products are disclosed for broadcasting a message in a parallel computer. The parallel computer includes a plurality of compute nodes connected together using a data communications network. The data communications network is optimized for point-to-point data communications and is characterized by at least two dimensions. The compute nodes are organized into at least one operational group of compute nodes for collective parallel operations of the parallel computer. One compute node of the operational group is assigned to be a logical root. Broadcasting a message in a parallel computer includes: establishing a Hamiltonian path along all of the compute nodes in at least one plane of the data communications network and in the operational group; and broadcasting, by the logical root to the remaining compute nodes, the logical root's message along the established Hamiltonian path.
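
      As an illustration of the idea (not the patented implementation), the sketch below builds a serpentine Hamiltonian path over a hypothetical 2-D plane of compute nodes and forwards a message hop by hop from the logical root; the grid dimensions, node naming, and forwarding helper are assumptions made for the example.

        # Minimal sketch, assuming a rows x cols plane of nodes addressed as (x, y)
        # and a logical root sitting at (0, 0); names and helpers are illustrative only.
        def hamiltonian_path(rows, cols):
            """Serpentine (boustrophedon) walk that visits every node of the plane exactly once."""
            path = []
            for x in range(rows):
                ys = range(cols) if x % 2 == 0 else range(cols - 1, -1, -1)
                path.extend((x, y) for y in ys)
            return path

        def broadcast(message, rows, cols):
            """Forward the root's message along the Hamiltonian path, one hop at a time."""
            path = hamiltonian_path(rows, cols)          # path[0] is the logical root
            received = {path[0]: message}
            for sender, receiver in zip(path, path[1:]):
                received[receiver] = received[sender]    # stands in for a point-to-point send
            return received

        if __name__ == "__main__":
            result = broadcast("hello", rows=3, cols=4)
            assert all(v == "hello" for v in result.values())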

    4. Computing and Computational Sciences Directorate - Contacts

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Home About Us Contacts Jeff Nichols Associate Laboratory Director Computing and Computational Sciences Becky Verastegui Directorate Operations Manager Computing and...

    5. Computing and Computational Sciences Directorate - Divisions

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      CCSD Divisions Computational Sciences and Engineering Computer Sciences and Mathematics Information Technology Services Joint Institute for Computational Sciences National Center for Computational Sciences

    6. World-wide distribution automation systems

      SciTech Connect (OSTI)

      Devaney, T.M.

      1994-12-31

      A worldwide power distribution automation system is outlined. Distribution automation is defined and the status of utility automation is discussed. Other topics discussed include a distribution management system; substation, feeder, and customer functions; potential benefits; automation costs; planning and engineering considerations; automation trends; databases; system operation; and computer modeling of the system.

    7. Computing Sciences Staff Help East Bay High Schoolers Upgrade...

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      from underrepresented groups learn about careers in a variety of IT fields, the Laney College Computer Information Systems Department offered its Upgrade: Computer Science Program. ...

    8. NERSC Hosts 50 Enthusiastic Computer Science Students from Dougherty...

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Hosts 50 Enthusiastic Computer Science Students from Dougherty Valley High NERSC Hosts 50 Enthusiastic Computer Science Students from Dougherty Valley High May 31, 2016 A group of ...

    9. Computing at SSRL Home Page

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      contents you are looking for have moved. You will be redirected to the new location automatically in 5 seconds. Please bookmark the correct page at http://www-ssrl.slac.stanford.edu/content/staff-resources/computer-networking-group

    10. Compute Nodes

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Nodes Quad CoreAMDOpteronprocessor Compute Node Configuration 9,572 nodes 1 quad-core AMD 'Budapest' 2.3 GHz processor per node 4 cores per node (38,288 total cores) 8 GB...

    11. Distribution Workshop

      Broader source: Energy.gov [DOE]

      On September 24-26, 2012, the GTT presented a workshop on grid integration on the distribution system at the Sheraton Crystal City near Washington, DC.

    12. Distributed Generation

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      and regulations such as IEEE (Institute of Electrical and Electronics Engineers) 1547 have come a long way in addressing interconnection standards for distributed generation, ...

    13. Exascale Computing

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Computing Exascale Computing CoDEx Project: A Hardware/Software Codesign Environment for the Exascale Era The next decade will see a rapid evolution of HPC node architectures as power and cooling constraints are limiting increases in microprocessor clock speeds and constraining data movement. Applications and algorithms will need to change and adapt as node architectures evolve. A key element of the strategy as we move forward is the co-design of applications, architectures and programming

    14. LHC Computing

      SciTech Connect (OSTI)

      Lincoln, Don

      2015-07-28

      The LHC is the world’s highest energy particle accelerator and scientists use it to record an unprecedented amount of data. This data is recorded in electronic format and it requires an enormous computational infrastructure to convert the raw data into conclusions about the fundamental rules that govern matter. In this video, Fermilab’s Dr. Don Lincoln gives us a sense of just how much data is involved and the incredible computer resources that makes it all possible.

    15. Jay Srinivasan! NERSC Systems Group!

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      NERSC Systems Group, NUG 2014, Feb 6, 2014. Computational Systems Group Update (CSG). What CSG does: * Manage the systems that run your jobs: the large MPP systems (Hopper & Edison), the Linux clusters (Carver, Genepool, Mendel, PDSF), and testbeds (Dirac, Jesup, Intel SB/MIC) * Help improve the user experience (batch system, login environment, system performance) * Deploy and maintain storage (local, NERSC-Global) on compute platforms * Participate on system

    16. Interagency mechanical operations group numerical systems group

      SciTech Connect (OSTI)

      1997-09-01

      This report consists of the minutes of the May 20-21, 1971 meeting of the Interagency Mechanical Operations Group (IMOG) Numerical Systems Group. This group looks at issues related to numerical control in the machining industry. Items discussed related to the use of CAD and CAM, EIA standards, data links, and numerical control.

    17. Computational mechanics

      SciTech Connect (OSTI)

      Goudreau, G.L.

      1993-03-01

      The Computational Mechanics thrust area sponsors research into the underlying solid, structural and fluid mechanics and heat transfer necessary for the development of state-of-the-art general purpose computational software. The scale of computational capability spans office workstations, departmental computer servers, and Cray-class supercomputers. The DYNA, NIKE, and TOPAZ codes have achieved world fame through our broad collaborators program, in addition to their strong support of on-going Lawrence Livermore National Laboratory (LLNL) programs. Several technology transfer initiatives have been based on these established codes, teaming LLNL analysts and researchers with counterparts in industry, extending code capability to specific industrial interests of casting, metalforming, and automobile crash dynamics. The next-generation solid/structural mechanics code, ParaDyn, is targeted toward massively parallel computers, which will extend performance from gigaflop to teraflop power. Our work for FY-92 is described in the following eight articles: (1) Solution Strategies: New Approaches for Strongly Nonlinear Quasistatic Problems Using DYNA3D; (2) Enhanced Enforcement of Mechanical Contact: The Method of Augmented Lagrangians; (3) ParaDyn: New Generation Solid/Structural Mechanics Codes for Massively Parallel Processors; (4) Composite Damage Modeling; (5) HYDRA: A Parallel/Vector Flow Solver for Three-Dimensional, Transient, Incompressible Viscous Flow; (6) Development and Testing of the TRIM3D Radiation Heat Transfer Code; (7) A Methodology for Calculating the Seismic Response of Critical Structures; and (8) Reinforced Concrete Damage Modeling.

    18. Computational mechanics

      SciTech Connect (OSTI)

      Raboin, P J

      1998-01-01

      The Computational Mechanics thrust area is a vital and growing facet of the Mechanical Engineering Department at Lawrence Livermore National Laboratory (LLNL). This work supports the development of computational analysis tools in the areas of structural mechanics and heat transfer. Over 75 analysts depend on thrust area-supported software running on a variety of computing platforms to meet the demands of LLNL programs. Interactions with the Department of Defense (DOD) High Performance Computing and Modernization Program and the Defense Special Weapons Agency are of special importance as they support our ParaDyn project in its development of new parallel capabilities for DYNA3D. Working with DOD customers has been invaluable to driving this technology in directions mutually beneficial to the Department of Energy. Other projects associated with the Computational Mechanics thrust area include work with the Partnership for a New Generation Vehicle (PNGV) for ''Springback Predictability'' and with the Federal Aviation Administration (FAA) for the ''Development of Methodologies for Evaluating Containment and Mitigation of Uncontained Engine Debris.'' In this report for FY-97, there are five articles detailing three code development activities and two projects that synthesized new code capabilities with new analytic research in damage/failure and biomechanics. The articles this year are: (1) Energy- and Momentum-Conserving Rigid-Body Contact for NIKE3D and DYNA3D; (2) Computational Modeling of Prosthetics: A New Approach to Implant Design; (3) Characterization of Laser-Induced Mechanical Failure Damage of Optical Components; (4) Parallel Algorithm Research for Solid Mechanics Applications Using Finite Element Analysis; and (5) An Accurate One-Step Elasto-Plasticity Algorithm for Shell Elements in DYNA3D.

    19. Computing Resources

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Resources This page is the repository for sundry items of information relevant to general computing on BooNE. If you have a question or problem that isn't answered here, or a suggestion for improving this page or the information on it, please mail boone-computing@fnal.gov and we'll do our best to address any issues. Note about this page Some links on this page point to www.everything2.com, and are meant to give an idea about a concept or thing without necessarily wading through a whole website

    20. ITP Industrial Distributed Energy: Distributed Energy Program...

      Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

      ITP Industrial Distributed Energy: Distributed Energy Program Project Profile: Verizon Central Office Building ITP Industrial Distributed Energy: Distributed Energy Program Project ...

    1. Automatic identification of abstract online groups

      DOE Patents [OSTI]

      Engel, David W; Gregory, Michelle L; Bell, Eric B; Cowell, Andrew J; Piatt, Andrew W

      2014-04-15

      Online abstract groups, in which members aren't explicitly connected, can be automatically identified by computer-implemented methods. The methods involve harvesting records from social media and extracting content-based and structure-based features from each record. Each record includes a social-media posting and is associated with one or more entities. Each feature is stored on a data storage device and includes a computer-readable representation of an attribute of one or more records. The methods further involve grouping records into record groups according to the features of each record. Further still the methods involve calculating an n-dimensional surface representing each record group and defining an outlier as a record having feature-based distances measured from every n-dimensional surface that exceed a threshold value. Each of the n-dimensional surfaces is described by a footprint that characterizes the respective record group as an online abstract group.
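
      The grouping-and-outlier step can be sketched with ordinary feature vectors; here records are assigned to the nearest group centroid and a record is flagged when its distance to every centroid exceeds a threshold, which is only a simple proxy for the patent's n-dimensional surfaces (the features and threshold below are invented for illustration):

        import math

        def centroid(vectors):
            dims = len(vectors[0])
            return [sum(v[d] for v in vectors) / len(vectors) for d in range(dims)]

        def group_and_flag(records, groups, threshold):
            """records: dict id -> feature vector; groups: dict group name -> list of member feature vectors."""
            centroids = {name: centroid(vectors) for name, vectors in groups.items()}
            assignments, outliers = {}, []
            for rid, features in records.items():
                dists = {name: math.dist(features, c) for name, c in centroids.items()}
                best = min(dists, key=dists.get)
                if dists[best] > threshold:
                    outliers.append(rid)          # far from every group's footprint
                else:
                    assignments[rid] = best
            return assignments, outliers

        if __name__ == "__main__":
            groups = {"gardening": [[0.9, 0.1], [0.8, 0.2]], "astronomy": [[0.1, 0.9], [0.2, 0.8]]}
            records = {"post1": [0.85, 0.15], "post2": [0.15, 0.85], "post3": [0.5, 3.0]}
            print(group_and_flag(records, groups, threshold=0.5))   # post3 is reported as an outlier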

    2. Computational trigonometry

      SciTech Connect (OSTI)

      Gustafson, K.

      1994-12-31

      By means of the author's earlier theory of antieigenvalues and antieigenvectors, a new computational approach to iterative methods is presented. This enables an explicit trigonometric understanding of iterative convergence and provides new insights into the sharpness of error bounds. Direct applications to Gradient descent, Conjugate gradient, GCR(k), Orthomin, CGN, GMRES, CGS, and other matrix iterative schemes will be given.
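
      For readers unfamiliar with the term, Gustafson's first antieigenvalue of a symmetric positive definite operator $A$ is the cosine of the largest angle through which $A$ can turn a vector; a standard statement of the definition (added here for context, not taken from the abstract) is

      \[
        \mu_1(A) \;=\; \min_{x \neq 0} \frac{\langle Ax, x\rangle}{\|Ax\|\,\|x\|},
        \qquad \phi(A) \;=\; \arccos \mu_1(A),
      \]

      and the "trigonometric understanding" of convergence expresses error-reduction bounds for methods such as steepest descent in terms of $\sin\phi(A)$.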

    3. Nick Wright Named Advanced Technologies Group Lead

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Nick Wright Named Advanced Technologies Group Lead, February 4, 2013. Nick Wright has been named head of the National Energy Research Scientific Computing Center's (NERSC) Advanced Technologies Group (ATG), which focuses on understanding the requirements of current and emerging applications to make choices in hardware design and programming models that best serve the science needs of NERSC users. ATG specializes in benchmarking, system

    4. The Computational Physics Program of the national MFE Computer Center

      SciTech Connect (OSTI)

      Mirin, A.A.

      1989-01-01

      Since June 1974, the MFE Computer Center has been engaged in a significant computational physics effort. The principal objective of the Computational Physics Group is to develop advanced numerical models for the investigation of plasma phenomena and the simulation of present and future magnetic confinement devices. Another major objective of the group is to develop efficient algorithms and programming techniques for current and future generations of supercomputers. The Computational Physics Group has been involved in several areas of fusion research. One main area is the application of Fokker-Planck/quasilinear codes to tokamaks. Another major area is the investigation of resistive magnetohydrodynamics in three dimensions, with applications to tokamaks and compact toroids. A third area is the investigation of kinetic instabilities using a 3-D particle code; this work is often coupled with the task of numerically generating equilibria which model experimental devices. Ways to apply statistical closure approximations to study tokamak-edge plasma turbulence have been under examination, with the hope of being able to explain anomalous transport. Also, we are collaborating in an international effort to evaluate fully three-dimensional linear stability of toroidal devices. In addition to these computational physics studies, the group has developed a number of linear systems solvers for general classes of physics problems and has been making a major effort at ascertaining how to efficiently utilize multiprocessor computers. A summary of these programs is included in this paper. 6 tabs.

    5. Advanced Scientific Computing Research

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Advanced Scientific Computing Research Advanced Scientific Computing Research Discovering, ... The DOE Office of Science's Advanced Scientific Computing Research (ASCR) program ...

    6. Theory, Simulation, and Computation

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Computer, Computational, and Statistical Sciences (CCS) Division is an international ... and statistics The deployment and integration of computational technology, ...

    7. Secure computing for the 'Everyman'

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Secure computing for the 'Everyman' Secure computing for the 'Everyman' If implemented on a wide scale, quantum key distribution technology could ensure truly secure commerce, banking, communications and data transfer. September 2, 2014 This small device developed at Los Alamos National Laboratory uses the truly random spin of light particles as defined by laws of quantum mechanics to generate a random number for use in a cryptographic key that can be used to securely transmit information

    8. JLF User Group

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      jlf user group JLF User Group 2015 NIF and JLF User Group Meeting Links: Send request to join the JLF User Group Join the NIF User Group Dr. Carolyn Kuranz - JLF User Group Dr. Carolyn Kuranz received her Ph.D. in Applied Physics from the University of Michigan in 2009. She is currently an Assistant Research Scientist at the Center for Laser Experimental Astrophysical Research and the Center for Radiative Shock Hydrodynamics at the University of Michigan. Her research involves hydrodynamic

    9. Computing at JLab

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      JLab --- Accelerator Controls CAD CDEV CODA Computer Center High Performance Computing Scientific Computing JLab Computer Silo maintained by webmaster@jlab.org...

    10. Bio-Derived Liquids to Hydrogen Distributed Reforming Working...

      Office of Environmental Management (EM)

      Meeting - November 2007 Bio-Derived Liquids to Hydrogen Distributed Reforming Working Group Meeting - November 2007 The Bio-Derived Liquids to Hydrogen Distributed Reforming ...

    11. Computational Combustion

      SciTech Connect (OSTI)

      Westbrook, C K; Mizobuchi, Y; Poinsot, T J; Smith, P J; Warnatz, J

      2004-08-26

      Progress in the field of computational combustion over the past 50 years is reviewed. Particular attention is given to those classes of models that are common to most system modeling efforts, including fluid dynamics, chemical kinetics, liquid sprays, and turbulent flame models. The developments in combustion modeling are placed into the time-dependent context of the accompanying exponential growth in computer capabilities and Moore's Law. Superimposed on this steady growth, the occasional sudden advances in modeling capabilities are identified and their impacts are discussed. Integration of submodels into system models for spark-ignition, diesel, and homogeneous-charge compression-ignition engines, surface and catalytic combustion, pulse combustion, and detonations is described. Finally, the current state of combustion modeling is illustrated by descriptions of a very large jet lifted 3D turbulent hydrogen flame with direct numerical simulation and 3D large eddy simulations of practical gas burner combustion devices.

    12. RATIO COMPUTER

      DOE Patents [OSTI]

      Post, R.F.

      1958-11-11

      An electronic computer circuit is described for producing an output voltage proportional to the product or quotient of the voltages of a pair of input signals. In essence, the disclosed invention provides a computer having two channels adapted to receive separate input signals, each having amplifiers with like fixed amplification factors and like negative feedback amplifiers. One of the channels receives a constant signal for comparison purposes, whereby a difference signal is produced to control the amplification factors of the variable feedback amplifiers. The output of the other channel is thereby proportional to the product or quotient of the input signals, depending upon the relation of input to fixed signals in the first mentioned channel.

    13. Debugging a high performance computing program

      DOE Patents [OSTI]

      Gooding, Thomas M.

      2014-08-19

      Methods, apparatus, and computer program products are disclosed for debugging a high performance computing program by gathering lists of addresses of calling instructions for a plurality of threads of execution of the program, assigning the threads to groups in dependence upon the addresses, and displaying the groups to identify defective threads.
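
      A toy version of the grouping step can be written in a few lines; the thread identifiers and address lists below are invented for illustration and are not the patented tooling:

        from collections import defaultdict

        def group_threads(call_addresses):
            """Group threads whose calling-instruction addresses are identical.

            call_addresses: dict mapping thread id -> tuple of instruction addresses.
            Returns a dict mapping each address tuple -> list of thread ids.
            """
            groups = defaultdict(list)
            for thread_id, addresses in call_addresses.items():
                groups[tuple(addresses)].append(thread_id)
            return dict(groups)

        if __name__ == "__main__":
            # Hypothetical snapshot: threads 0-2 wait at the same barrier, thread 3 is stuck elsewhere.
            snapshot = {0: (0x4007a0, 0x400c10), 1: (0x4007a0, 0x400c10),
                        2: (0x4007a0, 0x400c10), 3: (0x4003f4, 0x400b88)}
            for stack, threads in group_threads(snapshot).items():
                print([hex(a) for a in stack], "->", threads)   # small groups often flag defective threads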

    14. Debugging a high performance computing program

      DOE Patents [OSTI]

      Gooding, Thomas M.

      2013-08-20

      Methods, apparatus, and computer program products are disclosed for debugging a high performance computing program by gathering lists of addresses of calling instructions for a plurality of threads of execution of the program, assigning the threads to groups in dependence upon the addresses, and displaying the groups to identify defective threads.

    15. Computer System,

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      System, Cluster, and Networking Summer Institute New Mexico Consortium and Los Alamos National Laboratory HOW TO APPLY Applications will be accepted JANUARY 5 - FEBRUARY 13, 2016 Computing and Information Technology undergraduate students are encouraged to apply. Must be a U.S. citizen. * Submit a current resume; * Official University Transcript (with spring courses posted and/or a copy of spring 2016 schedule) 3.0 GPA minimum; * One Letter of Recommendation from a Faculty Member; and * Letter of

    16. Computing Events

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Events Computing Events Spotlighting the most advanced scientific and technical applications in the world! Featuring exhibits of the latest and greatest technologies from industry, academia and government research organizations; many of these technologies will be seen for the first time in Denver. Supercomputing Conference 13 Denver, Colorado November 17-22, 2013 Spotlighting the most advanced scientific and technical applications in the world, SC13 will bring together the international

    17. TEC Working Group Topic Groups Routing

      Broader source: Energy.gov [DOE]

      The Routing Topic Group has been established to examine topics of interest and relevance concerning routing of shipments of spent nuclear fuel (SNF) and high-level radioactive waste (HLW) to a...

    18. TEC Working Group Topic Groups Manual Review

      Broader source: Energy.gov [DOE]

      This group is responsible for the update of DOE Manual 460.2-1, Radioactive Material Transportation Practices Manual.  This manual was issued on September 23, 2002, and establishes a set of...

    19. Constructing the ASCI computational grid

      SciTech Connect (OSTI)

      BEIRIGER,JUDY I.; BIVENS,HUGH P.; HUMPHREYS,STEVEN L.; JOHNSON,WILBUR R.; RHEA,RONALD E.

      2000-06-01

      The Accelerated Strategic Computing Initiative (ASCI) computational grid is being constructed to interconnect the high performance computing resources of the nuclear weapons complex. The grid will simplify access to the diverse computing, storage, network, and visualization resources, and will enable the coordinated use of shared resources regardless of location. To match existing hardware platforms, required security services, and current simulation practices, the Globus MetaComputing Toolkit was selected to provide core grid services. The ASCI grid extends Globus functionality by operating as an independent grid, incorporating Kerberos-based security, interfacing to Sandia's Cplant (trademark), and extending job monitoring services. To fully meet ASCI's needs, the architecture layers distributed work management and criteria-driven resource selection services on top of Globus. These services simplify the grid interface by allowing users to simply request "run code X anywhere". This paper describes the initial design and prototype of the ASCI grid.
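
      The "run code X anywhere" idea boils down to matching a job's requirements against resource descriptions and picking one candidate automatically; the sketch below shows that selection step only (the resource attributes and scoring rule are invented for illustration, not taken from the ASCI grid design):

        # Minimal sketch of criteria-driven resource selection, under assumed attributes.
        RESOURCES = [
            {"name": "clusterA", "cpus_free": 512,  "mem_gb": 2048, "kerberos": True,  "load": 0.40},
            {"name": "clusterB", "cpus_free": 128,  "mem_gb": 1024, "kerberos": True,  "load": 0.10},
            {"name": "clusterC", "cpus_free": 4096, "mem_gb": 8192, "kerberos": False, "load": 0.75},
        ]

        def select_resource(job, resources=RESOURCES):
            """Filter resources that satisfy the job's criteria, then pick the least loaded one."""
            candidates = [r for r in resources
                          if r["cpus_free"] >= job["cpus"]
                          and r["mem_gb"] >= job["mem_gb"]
                          and (not job.get("needs_kerberos") or r["kerberos"])]
            if not candidates:
                raise RuntimeError("no resource satisfies the request")
            return min(candidates, key=lambda r: r["load"])

        if __name__ == "__main__":
            job = {"cpus": 256, "mem_gb": 512, "needs_kerberos": True}
            print(select_resource(job)["name"])   # -> clusterA (clusterB lacks CPUs, clusterC lacks Kerberos)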

    20. Spatial distribution of HTO activity in unsaturated soil depth in the vicinity of long-term release source

      SciTech Connect (OSTI)

      Golubev, A.; Golubeva, V.; Mavrin, S.

      2015-03-15

      Previous studies reported a correlation between the HTO activity distribution in the unsaturated soil layer and atmospheric long-term releases of HTO in the vicinity of the Savannah River Site. The Tritium Working Group of the BIOMASS Programme has performed a model-model intercomparison study of HTO transport from the atmosphere to unsaturated soil and has evaluated the HTO activity distribution in the unsaturated soil layer in the vicinity of permanent atmospheric sources. The Tritium Working Group has also reported such a correlation; however, it concluded that experimental data sets are needed to confirm this finding and to validate appropriate computer models. (authors)

    1. Venkatram Vishwanath | Argonne Leadership Computing Facility

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Venkatram Vishwanath Computer Scientist, Data Science Group Lead Venkatram Vishwanath Argonne National Laboratory 9700 S. Cass Avenue Building 240 - Rm. 4141 Argonne, IL 60439 630-252-4971 venkat@anl.gov Venkatram Vishwanath is a computer scientist at Argonne National Laboratory. He is the Data Science group lead at the Argonne leadership computing facility (ALCF). His current focus is on algorithms, system software, and workflows to facilitate data-centric applications on supercomputing

    2. JLab Users Group

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      JLab Users Group User Liaison Home Users Group Program Advisory Committee User/Researcher Information UG Resources Background & Purpose Users Group Wiki By Laws Board of Directors Board of Directors Minutes Directory of Members Events At-A-Glance Member Institutions News Users Group Mailing

    3. The Ren Group - Home

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      resolution (1-2 nm). We continue to develop this approach by optimizing through empirical and computational methods to achieve high-resolution structures of single...

    4. Jason Hick! Storage Systems Group! NERSC User Group Meeting!

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Jason Hick, Storage Systems Group, NERSC User Group Meeting, February 6, 2014. Storage Systems: 2014 and beyond. The compute and storage systems (2013): production clusters (Carver, PDSF, JGI, KBASE, HEP); global scratch, 3.6 PB (5 x SFA12KE); /project, 5 PB (DDN9900 & NexSAN); /home, 250 TB (NetApp 5460); HPSS archive, 50 PB stored, 240 PB capacity, 35 years of community data; 2.2 PB local scratch at 70 GB/s and 6.4 PB local scratch at 140 GB/s; QDR/FDR InfiniBand and Ethernet fabric; science-friendly security

    5. Moltech Power Systems Group MPS Group | Open Energy Information

      Open Energy Info (EERE)

      Name: Moltech Power Systems Group (MPS Group) Place: China Product: China-based subsidiary of Shanghai Huayi Group...

    6. Hanergy Holdings Group Company Ltd formerly Farsighted Group...

      Open Energy Info (EERE)

      Name: Hanergy Holdings Group Company Ltd (formerly Farsighted Group, aka...

    7. Computing and Computational Sciences Directorate - Computer Science and

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Mathematics Division Computer Science and Mathematics Division The Computer Science and Mathematics Division (CSMD) is ORNL's premier source of basic and applied research in high-performance computing, applied mathematics, and intelligent systems. Our mission includes basic research in computational sciences and application of advanced computing systems, computational, mathematical and analysis techniques to the solution of scientific problems of national importance. We seek to work

    8. Distributed Merge Trees

      SciTech Connect (OSTI)

      Morozov, Dmitriy; Weber, Gunther

      2013-01-08

      Improved simulations and sensors are producing datasets whose increasing complexity exhausts our ability to visualize and comprehend them directly. To cope with this problem, we can detect and extract significant features in the data and use them as the basis for subsequent analysis. Topological methods are valuable in this context because they provide robust and general feature definitions. As the growth of serial computational power has stalled, data analysis is becoming increasingly dependent on massively parallel machines. To satisfy the computational demand created by complex datasets, algorithms need to effectively utilize these computer architectures. The main strength of topological methods, their emphasis on global information, turns into an obstacle during parallelization. We present two approaches to alleviate this problem. We develop a distributed representation of the merge tree that avoids computing the global tree on a single processor and lets us parallelize subsequent queries. To account for the increasing number of cores per processor, we develop a new data structure that lets us take advantage of multiple shared-memory cores to parallelize the work on a single node. Finally, we present experiments that illustrate the strengths of our approach as well as help identify future challenges.
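
      For orientation, the serial computation that the paper's distributed representation avoids centralizing can be sketched as a union-find sweep over a scalar field on a graph; the code below is that sequential baseline (a join tree over assumed vertex values and edges), not the authors' distributed data structure:

        # Sequential join-tree sketch: sweep vertices from high to low value,
        # union components via their processed neighbors, and record merge events.
        def join_tree(values, edges):
            """values: dict vertex -> scalar; edges: iterable of (u, v) pairs.
            Returns merge-tree edges as (child_node, saddle_vertex) pairs."""
            from collections import defaultdict
            adjacency = defaultdict(list)
            for u, v in edges:
                adjacency[u].append(v)
                adjacency[v].append(u)

            parent, node, processed, tree_edges = {}, {}, set(), []

            def find(x):
                while parent[x] != x:
                    parent[x] = parent[parent[x]]
                    x = parent[x]
                return x

            for v in sorted(values, key=values.get, reverse=True):
                parent[v] = v
                roots = []
                for w in adjacency[v]:
                    if w in processed:
                        r = find(w)
                        if r not in roots:
                            roots.append(r)
                if not roots:
                    node[v] = v                     # local maximum: a new branch is born here
                else:
                    keep = roots[0]
                    parent[v] = keep
                    if len(roots) > 1:              # v is a saddle: branches merge here
                        tree_edges.append((node[keep], v))
                        for r in roots[1:]:
                            tree_edges.append((node[r], v))
                            parent[r] = keep
                        node[keep] = v
                processed.add(v)
            return tree_edges

        if __name__ == "__main__":
            vals = {"a": 5, "b": 1, "c": 4, "d": 0, "e": 3}
            print(join_tree(vals, [("a", "b"), ("b", "c"), ("c", "d"), ("d", "e")]))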

    9. NERSC Users Group Monthly Meeting

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      August 25, 2016 Agenda ● Cori Phase II Update ● Data Day debrief ● NESAP & resources for porting to KNL ● Edison Scratch Filesystem Updates ● AY 2017 ERCAP Allocation Requests Cori Phase II Update Tina Declerck Computational Systems Group August 25, 2016 ● Prep for Cori Phase 2 ● Cori Phase 2 Installation ● System Arrival & Installation ● Current Status ● Projected Timeline ● NERSC pre-merge testing ● Merge plan ● Post Merge ● Acceptance Testing

    10. MiniBooNE Pion Group

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Pion Group

    11. NERSC Intern Wins Award for Computing Achievement

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      NERSC Intern Wins Award for Computing Achievement, March 27, 2013. Linda Vu, lvu@lbl.gov, +1 510 495 2402. Stephanie Cabanela, a student intern in the National Energy Research Scientific Computing Center's (NERSC) Operation Technologies Group, was honored with the Bay Area Affiliate National Center for Women and Information Technology (NCWIT) Aspirations in Computing award on Saturday, March 16, 2013 in a ceremony in San Jose, CA. The award honors

    12. Distributed Optimization System

      DOE Patents [OSTI]

      Hurtado, John E.; Dohrmann, Clark R.; Robinett, III, Rush D.

      2004-11-30

      A search system and method for controlling multiple agents to optimize an objective using distributed sensing and cooperative control. The search agents can be physical agents, such as robots, or software agents for searching cyberspace. The objective can be: chemical sources, temperature sources, radiation sources, light sources, evaders, trespassers, explosive sources, time dependent sources, time independent sources, function surfaces, maximization points, minimization points, and optimal control of a system such as a communication system, an economy, a crane, and a multi-processor computer.
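
      A minimal flavor of cooperative, sensing-driven search (not the patented control law) is a swarm in which each agent samples the objective at its own position and drifts toward the best reading reported by the group; the sensed field, step size, and stopping rule below are assumptions made for the example:

        import random

        def field(x, y):
            """Hypothetical sensed intensity: strongest at the source (3, -2), unknown to the agents."""
            return -((x - 3.0) ** 2 + (y + 2.0) ** 2)

        def cooperative_search(n_agents=8, steps=300, step_size=0.2, seed=1):
            rng = random.Random(seed)
            agents = [(rng.uniform(-10, 10), rng.uniform(-10, 10)) for _ in range(n_agents)]
            for _ in range(steps):
                readings = [field(x, y) for x, y in agents]
                bx, by = agents[readings.index(max(readings))]      # shared best measurement
                agents = [(x + step_size * (bx - x) + rng.gauss(0, 0.3),
                           y + step_size * (by - y) + rng.gauss(0, 0.3)) for x, y in agents]
            return max(agents, key=lambda p: field(*p))

        if __name__ == "__main__":
            x, y = cooperative_search()
            print(round(x, 2), round(y, 2))   # typically lands near the source at (3, -2)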

    13. Distributed Generation

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Untapped Value of Backup Generation While new guidelines and regulations such as IEEE (Institute of Electrical and Electronics Engineers) 1547 have come a long way in addressing interconnection standards for distributed generation, utilities have largely overlooked the untapped potential of these resources. Under certain conditions, these units (primarily backup generators) represent a significant source of power that can deliver utility services at lower costs than traditional centralized

    14. Distribution Category:

      Office of Legacy Management (LM)

      - Distribution Category: Remedial Action and Decommissioning Program (UC-70A) DOE/EV-0005/48 ANL-OHS/HP-84-104 ARGONNE NATIONAL LABORATORY 9700 South Cass Avenue Argonne, Illinois 60439 FORMERLY UTILIZED MED/AEC SITES REMEDIAL ACTION PROGRAM RADIOLOGICAL SURVEY OF THE HARSHAW CHEMICAL COMPANY CLEVELAND, OHIO Prepared by R. A. Wynveen Associate Division Director, OHS W. H. Smith Senior Health Physicist C. M. Sholeen Health Physicist A. L. Justus Health Physicist K. F. Flynn Health Physicist

    15. Exploratory Experimentation and Computation

      SciTech Connect (OSTI)

      Bailey, David H.; Borwein, Jonathan M.

      2010-02-25

      We believe the mathematical research community is facing a great challenge to re-evaluate the role of proof in light of recent developments. On one hand, the growing power of current computer systems, of modern mathematical computing packages, and of the growing capacity to data-mine on the Internet, has provided marvelous resources to the research mathematician. On the other hand, the enormous complexity of many modern capstone results such as the Poincare conjecture, Fermat's last theorem, and the classification of finite simple groups has raised questions as to how we can better ensure the integrity of modern mathematics. Yet as the need and prospects for inductive mathematics blossom, the requirement to ensure the role of proof is properly founded remains undiminished.

    16. Running Jobs by Group

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Running Jobs by Group Running Jobs by Group Daily Graph: Weekly Graph: Monthly Graph: Yearly Graph: 2 Year Graph: Last edited: 2011-04-05 13:59:48...

    17. Pending Jobs by Group

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Pending Jobs by Group Pending Jobs by Group Daily Graph: Weekly Graph: Monthly Graph: Yearly Graph: 2 Year Graph: Last edited: 2011-04-05 14:00:14...

    18. UFD Working Group 2015

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      UFD Working Group 2015 - Sandia Energy ...

    19. Pending Jobs by Group

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Pending Jobs by Group Pending Jobs by Group Daily Graph: Weekly Graph: Monthly Graph: Yearly Graph: 2 Year Graph: Last edited: 2016-04-29 11:35:04

    20. Running Jobs by Group

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Running Jobs by Group Running Jobs by Group Daily Graph: Weekly Graph: Monthly Graph: Yearly Graph: 2 Year Graph: Last edited: 2016-04-29 11:34:43

    1. HEP Computing | Argonne National Laboratory

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      HEP Computing A number of computing resources are available for HEP employees and visitors. Problem Report or Service Request - Send email to the computing group and log it on the Problem Report Page. (Note: You need to be connected to the ANL network or to be running VPN to submit a problem report.) New Users or Visitors - Start here if you are new to Argonne HEP. Password Help Email Windows Desktops Laptops Linux Users HEP Division FAQs - Find answers for commonly requested information here.

    2. Introduction to computers: Reference guide

      SciTech Connect (OSTI)

      Ligon, F.V.

      1995-04-01

      The "Introduction to Computers" program establishes formal partnerships with local school districts and community-based organizations, introduces computer literacy to precollege students and their parents, and encourages students to pursue Scientific, Mathematical, Engineering, and Technical careers (SET). Hands-on assignments are given in each class, reinforcing the lesson taught. In addition, the program is designed to broaden the knowledge base of teachers in scientific/technical concepts, and Brookhaven National Laboratory continues to act as a liaison, offering educational outreach to diverse community organizations and groups. This manual contains the teacher's lesson plans and the student documentation for this introduction to computers course.

    3. HASQARD Focus Group

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      17, 2012 The meeting was called to order by Huei Meznarich, HASQARD Focus Group Chair at 2:05 PM on July 17, 2012 in Conference Room 308 at 2420 Stevens. Those attending were: Huei Meznarich (Focus Group Chair), Cliff Watkins (Focus Group Secretary), Glen Clark, Robert Elkins, Scot Fitzgerald, Larry Markel, Cindy Taylor, Sam Vega, Rich Weiss and Eric Wyse. I. Huei Meznarich requested comments on the minutes from the June 12, 2012 meeting. No HASQARD Focus Group members present stated any

    4. HASQARD Focus Group

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      8, 2013 The meeting was called to order by Huei Meznarich, HASQARD Focus Group Chair at 2:05 PM on June 18, 2013 in Conference Room 308 at 2420 Stevens. Those attending were: Huei Meznarich (Focus Group Chair), Cliff Watkins (Focus Group Secretary), Glen Clark, Scot Fitzgerald, Joan Kessner, Larry Markel, Karl Pool, Chris Sutton, Amanda Tuttle, Rich Weiss and Eric Wyse. I. Huei Meznarich requested comments on the minutes from the May 21, 2013 meeting. No HASQARD Focus Group members present

    5. NIF User Group

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      group NIF User Group The National Ignition Facility User Group provides an organized framework and independent vehicle for interaction between the scientists who use NIF for "Science Use of NIF" experiments and NIF management. Responsibility for NIF and the research programs carried out at NIF resides with the NIF Director. The NIF User Group advises the NIF Director on matters of concern to users, as well as providing a channel for communication for NIF users with funding agencies and

    6. TEC Communications Topic Group

      Office of Environmental Management (EM)

      procurement - Routing criteria/emergency preparedness Tribal Issues Topic Group * TEPP Navajo Nation (Tom Clawson) - 1404 - Needs Assessment * Identified strengths and...

    7. Tritium Focus Group- INEL

      Broader source: Energy.gov [DOE]

      Presentation from the 34th Tritium Focus Group Meeting held in Idaho Falls, Idaho on September 23-25, 2014.

    8. Interagency Sustainability Working Group

      Broader source: Energy.gov [DOE]

      The Interagency Sustainability Working Group (ISWG) is the coordinating body for sustainable buildings in the federal government.

    9. SSRL ETS Group

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      STANFORD SYNCHROTRON RADIATION LABORATORY Stanford Linear Accelerator Center Engineering & Technical Services Groups: Mechanical Services Group Mechanical Services Group Sharepoint ASD: Schedule Priorites Accelerator tech support - Call List Documentation: Engineering Notes, Drawings, and Accelerator Safety Documents Mechanical Systems: Accelerator Drawings Accelerator Pictures Accelerator Vacuum Systems (SSRL) LCW Vacuum Projects: Last Updated: February 8, 2007 Ben Scott

    10. Large Group Visits

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Large Group Visits: All tours of the Museum are self-guided, but please schedule in advance so we can best accommodate your group. Contact Us: 1350 Central Avenue, (505) 667-4444, Email. Let us know if you plan to bring a group of 10 or more. Parking for buses and RVs is available on Iris Street behind the Museum off of 15th St. See attached map (pdf). Contact

    11. Grouped exposed metal heaters

      DOE Patents [OSTI]

      Vinegar, Harold J.; Coit, William George; Griffin, Peter Terry; Hamilton, Paul Taylor; Hsu, Chia-Fu; Mason, Stanley Leroy; Samuel, Allan James; Watkins, Ronnie Wade

      2012-07-31

      A system for treating a hydrocarbon containing formation is described. The system includes two or more groups of elongated heaters. The group includes two or more heaters placed in two or more openings in the formation. The heaters in the group are electrically coupled below the surface of the formation. The openings include at least partially uncased wellbores in a hydrocarbon layer of the formation. The groups are electrically configured such that current flow through the formation between at least two groups is inhibited. The heaters are configured to provide heat to the formation.

    12. Grouped exposed metal heaters

      DOE Patents [OSTI]

      Vinegar, Harold J.; Coit, William George; Griffin, Peter Terry; Hamilton, Paul Taylor; Hsu, Chia-Fu; Mason, Stanley Leroy; Samuel, Allan James; Watkins, Ronnie Wade

      2010-11-09

      A system for treating a hydrocarbon containing formation is described. The system includes two or more groups of elongated heaters. The group includes two or more heaters placed in two or more openings in the formation. The heaters in the group are electrically coupled below the surface of the formation. The openings include at least partially uncased wellbores in a hydrocarbon layer of the formation. The groups are electrically configured such that current flow through the formation between at least two groups is inhibited. The heaters are configured to provide heat to the formation.

    13. Secure key storage and distribution

      SciTech Connect (OSTI)

      Agrawal, Punit

      2015-06-02

      This disclosure describes a distributed, fault-tolerant security system that enables the secure storage and distribution of private keys. In one implementation, the security system includes a plurality of computing resources that independently store private keys provided by publishers and encrypted using a single security system public key. To protect against malicious activity, the security system private key necessary to decrypt the publication private keys is not stored at any of the computing resources. Rather, portions, or shares, of the security system private key are stored at each of the computing resources within the security system, and multiple security systems must communicate and share partial decryptions in order to decrypt the stored private key.
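
      The splitting step can be illustrated with plain additive secret sharing, where every share is needed to reconstruct the key; this is only a toy stand-in for whatever threshold scheme the actual system uses, and the key size and share count are assumptions:

        import secrets

        PRIME = 2 ** 521 - 1   # a convenient large prime; the real system's parameters are not disclosed here

        def split_key(key_int, n_shares):
            """Split an integer-encoded private key into n additive shares mod PRIME."""
            shares = [secrets.randbelow(PRIME) for _ in range(n_shares - 1)]
            shares.append((key_int - sum(shares)) % PRIME)
            return shares

        def reconstruct(shares):
            """Recover the key only when *all* shares are combined."""
            return sum(shares) % PRIME

        if __name__ == "__main__":
            key = secrets.randbelow(PRIME)
            shares = split_key(key, n_shares=5)       # e.g. one share per computing resource
            assert reconstruct(shares) == key
            assert reconstruct(shares[:4]) != key     # any missing share leaves the key hidden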

    14. Computing Resources | Argonne Leadership Computing Facility

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Theory and Computing Sciences Building: Argonne's Theory and Computing Sciences (TCS) building houses a wide variety of computing systems including some of the most powerful supercomputers in the world. The facility has 25,000 square feet of raised computer floor space and a pair of redundant 20 megavolt amperes electrical feeds from a 90 megawatt substation. The building also

    15. Avanced Large-scale Integrated Computational Environment

      Energy Science and Technology Software Center (OSTI)

      1998-10-27

      The ALICE Memory Snooper is a software applications programming interface (API) and library for use in implementing computational steering systems. It allows distributed memory parallel programs to publish variables in the computation that may be accessed over the Internet. In this way, users can examine and even change the variables in their running application remotely. The API and library ensure the consistency of the variables across the distributed memory system.
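
      The publish-and-poke pattern the library provides can be mimicked in a few lines; the registry below is a purely illustrative stand-in and is not the real AMS API:

        # Toy "publish a variable for remote steering" registry (illustrative only).
        class SteeringRegistry:
            def __init__(self):
                self._published = {}

            def publish(self, name, container, key):
                """Expose container[key] under a public name; the simulation keeps using the container."""
                self._published[name] = (container, key)

            def read(self, name):
                container, key = self._published[name]
                return container[key]

            def write(self, name, value):
                container, key = self._published[name]
                container[key] = value          # a steering client changes the running computation

        if __name__ == "__main__":
            registry = SteeringRegistry()
            state = {"timestep": 0, "viscosity": 0.01}
            registry.publish("viscosity", state, "viscosity")
            registry.write("viscosity", 0.05)   # "remote" user adjusts a parameter mid-run
            assert state["viscosity"] == 0.05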

    16. An integrated distributed processing interface for supercomputers and workstations

      SciTech Connect (OSTI)

      Campbell, J.; McGavran, L.

      1989-01-01

      Access to documentation, communication between multiple processes running on heterogeneous computers, and animation of simulations of engineering problems are typically weak in most supercomputer environments. This presentation will describe how we are improving this situation in the Computer Research and Applications group at Los Alamos National Laboratory. We have developed a tool using UNIX filters and a SunView interface that allows users simple access to documentation via mouse driven menus. We have also developed a distributed application that integrated a two point boundary value problem on one of our Cray Supercomputers. It is controlled and displayed graphically by a window interface running on a workstation screen. Our motivation for this research has been to improve the usual typewriter/static interface using language independent controls to show capabilities of the workstation/supercomputer combination. 8 refs.

    17. Specific Group Hardware

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      jobs to pdsfgrid (via condor) which submits jobs to the compute nodes, monitoring the cluster work load, and uploading job information to ALICE file catalog. It is monitored with...

    18. Fermilab Steering Group Report

      SciTech Connect (OSTI)

      Steering Group, Fermilab; /Fermilab

      2007-12-01

      The Fermilab Steering Group has developed a plan to keep U.S. accelerator-based particle physics on the pathway to discovery, both at the Terascale with the LHC and the ILC and in the domain of neutrinos and precision physics with a high-intensity accelerator. The plan puts discovering Terascale physics with the LHC and the ILC as Fermilab's highest priority. While supporting ILC development, the plan creates opportunities for exciting science at the intensity frontier. If the ILC remains near the Global Design Effort's technically driven timeline, Fermilab would continue neutrino science with the NOvA experiment, using the NuMI (Neutrinos at the Main Injector) proton plan, scheduled to begin operating in 2011. If ILC construction must wait somewhat longer, Fermilab's plan proposes SNuMI, an upgrade of NuMI to create a more powerful neutrino beam. If the ILC start is postponed significantly, a central feature of the proposed Fermilab plan calls for building an intense proton facility, Project X, consisting of a linear accelerator with the currently planned characteristics of the ILC combined with Fermilab's existing Recycler Ring and the Main Injector accelerator. The major component of Project X is the linac. Cryomodules, radio-frequency distribution, cryogenics and instrumentation for the linac are the same as or similar to those used in the ILC at a scale of about one percent of a full ILC linac. Project X's intense proton beams would open a path to discovery in neutrino science and in precision physics with charged leptons and quarks. World-leading experiments would allow physicists to address key questions of the Quantum Universe: How did the universe come to be? Are there undiscovered principles of nature: new symmetries, new physical laws? Do all the particles and forces become one? What happened to the antimatter? Building Project X's ILC-like linac would offer substantial support for ILC development by accelerating the industrialization of ILC components

    19. Fermilab Steering Group Report

      SciTech Connect (OSTI)

      Beier, Eugene; Butler, Joel; Dawson, Sally; Edwards, Helen; Himel, Thomas; Holmes, Stephen; Kim, Young-Kee; Lankford, Andrew; McGinnis, David; Nagaitsev, Sergei; Raubenheimer, Tor; /SLAC /Fermilab

      2007-01-01

      The Fermilab Steering Group has developed a plan to keep U.S. accelerator-based particle physics on the pathway to discovery, both at the Terascale with the LHC and the ILC and in the domain of neutrinos and precision physics with a high-intensity accelerator. The plan puts discovering Terascale physics with the LHC and the ILC as Fermilab's highest priority. While supporting ILC development, the plan creates opportunities for exciting science at the intensity frontier. If the ILC remains near the Global Design Effort's technically driven timeline, Fermilab would continue neutrino science with the NOVA experiment, using the NuMI (Neutrinos at the Main Injector) proton plan, scheduled to begin operating in 2011. If ILC construction must wait somewhat longer, Fermilab's plan proposes SNuMI, an upgrade of NuMI to create a more powerful neutrino beam. If the ILC start is postponed significantly, a central feature of the proposed Fermilab plan calls for building an intense proton facility, Project X, consisting of a linear accelerator with the currently planned characteristics of the ILC combined with Fermilab's existing Recycler Ring and the Main Injector accelerator. The major component of Project X is the linac. Cryomodules, radio-frequency distribution, cryogenics and instrumentation for the linac are the same as or similar to those used in the ILC at a scale of about one percent of a full ILC linac. Project X's intense proton beams would open a path to discovery in neutrino science and in precision physics with charged leptons and quarks. World-leading experiments would allow physicists to address key questions of the Quantum Universe: How did the universe come to be? Are there undiscovered principles of nature: new symmetries, new physical laws? Do all the particles and forces become one? What happened to the antimatter? Building Project X's ILC-like linac would offer substantial support for ILC development by accelerating the industrialization of ILC components

    20. Proceedings of the April 2011 Computational Needs for the Next...

      Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

      computational challenges associated with the operation and planning of the electric power system. ... Final Report and Other Materials from 2014 Resilient Electric Distribution Grid ...

    1. Supercomputing on a Shoestring: Cluster Computers at JLab | Jefferson...

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      which describe the distribution of electric charge and current inside the nucleon. To calculate the solution to a science problem, a cluster computer slices space up...

    2. High Performance Computing

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      High-Performance Computing: INL's high-performance computing center provides general use scientific computing capabilities to support the lab's efforts in advanced...

    3. The computational physics program of the National MFE Computer Center

      SciTech Connect (OSTI)

      Mirin, A.A.

      1988-01-01

      The principal objective of the Computational Physics Group is to develop advanced numerical models for the investigation of plasma phenomena and the simulation of present and future magnetic confinement devices. Another major objective of the group is to develop efficient algorithms and programming techniques for current and future generations of supercomputers. The computational physics group is involved in several areas of fusion research. One main area is the application of Fokker-Planck/quasilinear codes to tokamaks. Another major area is the investigation of resistive magnetohydrodynamics in three dimensions, with applications to compact toroids. Another major area is the investigation of kinetic instabilities using a 3-D particle code. This work is often coupled with the task of numerically generating equilibria which model experimental devices. Ways to apply statistical closure approximations to study tokamak-edge plasma turbulence are being examined. In addition to these computational physics studies, the group has developed a number of linear systems solvers for general classes of physics problems and has been making a major effort at ascertaining how to efficiently utilize multiprocessor computers.

    4. Logistical Multicast for Data Distribution

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Logistical Multicast for Data Distribution. Jason Zurawski, Martin Swany (Department of Computer and Information Sciences, University of Delaware, Newark, DE 19716; {zurawski, swany}@cis.udel.edu) and Micah Beck, Ying Ding (Department of Computer Science, University of Tennessee, Knoxville, TN 37996; {mbeck, ying}@cs.utk.edu). Abstract: This paper describes a simple scheduling procedure for use in multicast data distribution within a logistical networking infrastructure. The goal of our scheduler is to
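
      One common store-and-forward flavor of such scheduling, shown purely as an illustration (it is not necessarily the scheduler described in the paper), is a round-based dissemination in which every depot that already holds the data forwards it to one depot that still lacks it, so the number of holders roughly doubles each round:

        def multicast_rounds(source, depots):
            """Return a per-round transfer schedule: a list of [(sender, receiver), ...] for each round."""
            have = [source]
            need = [d for d in depots if d != source]
            schedule = []
            while need:
                round_transfers = []
                for sender in list(have):        # snapshot: new receivers start sending next round
                    if not need:
                        break
                    receiver = need.pop(0)
                    round_transfers.append((sender, receiver))
                    have.append(receiver)
                schedule.append(round_transfers)
            return schedule

        if __name__ == "__main__":
            depots = ["depotA", "depotB", "depotC", "depotD", "depotE"]
            for i, transfers in enumerate(multicast_rounds("depotA", depots)):
                print("round", i + 1, transfers)   # 5 depots are covered in ceil(log2(5)) = 3 rounds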

    5. TEC Working Group Topic Groups Rail Key Documents Intermodal Subgroup |

      Office of Environmental Management (EM)

      Department of Energy Intermodal Subgroup TEC Working Group Topic Groups Rail Key Documents Intermodal Subgroup Intermodal Subgroup Draft Work Plan (206.83 KB) More Documents & Publications TEC Working Group Topic Groups Rail Key Documents Radiation Monitoring Subgroup TEC Working Group Topic Groups Rail Conference Call Summaries Intermodal Subgroup TEC Working Group Topic Groups Rail Conference Call Summaries Rail Topic Group

    6. and Control of Power Systems Using Distributed Synchrophasors

      Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

      ... be offered through the Electrical & Computer Engineering ... program focused on distribution systems, substation ... of Synchrophasors in transmission-level power systems, and ...

    7. HASQARD Focus Group

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      January 15, 2013 The meeting was called to order by Huei Meznarich, HASQARD Focus Group Chair at 2:02 PM on January 15, 2013 in Conference Room 308 at 2420 Stevens. Those attending were: Huei Meznarich (Focus Group Chair), Cliff Watkins (Focus Group Secretary), Glen Clark, Scot Fitzgerald, Larry Markel, Karl Pool, Dave St. John, Chris Sutton, Chris Thompson, Steve Trent, Amanda Tuttle and Eric Wyse. I. Huei Meznarich requested comments on the minutes from the December 18, 2012 meeting. One issue

    8. HASQARD Focus Group

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      The meeting was called to order by Huei Meznarich, HASQARD Focus Group Chair at 2:09 PM on December 17, 2013 in Conference Room 308 at 2420 Stevens. Those attending were: Huei Meznarich (Focus Group Chair), Cliff Watkins (Focus Group Secretary), Taffy Almeida, Joe Archuleta, Jeff Cheadle, Glen Clark, Robert Elkins, Scot Fitzgerald, Joan Kessner, Karl Pool, Chris Sutton, Amanda Tuttle, Rich Weiss and Eric Wyse. I. Huei Meznarich asked if there were any comments on the minutes from the

    9. HASQARD Focus Group

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      The meeting was called to order by Cliff Watkins, HASQARD Focus Group Secretary at 2:05 PM on October 22, 2015 in Conference Room 328 at 2420 Stevens. Those attending were: Jonathan Sanwald (Mission Support Alliance (MSA), Focus Group Chair), Cliff Watkins (Corporate Allocation Services, DOE-RL Support Contractor, Focus Group Secretary), Glen Clark (Washington River Protection Solution (WRPS)), Fred Dunhour (DOE-ORP), Joan Kessner (Washington Closure Hanford (WCH)), Karl Pool (Pacific

    10. HASQARD Focus Group

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      The meeting was called to order by Jonathan Sanwald, HASQARD Focus Group Chair at 2:05 PM on January 26, 2016 in Conference Room 308 at 2420 Stevens. Those attending were: Jonathan Sanwald (Mission Support Alliance (MSA), Focus Group Chair), Cliff Watkins (Corporate Allocation Services, DOE-RL Support Contractor, Focus Group Secretary), Taffy Almeida (Pacific Northwest National Laboratory (PNNL)), Jeff Cheadle (DOE-ORP), Glen Clark (Washington River Protection Solution (WRPS)), Fred

    11. HASQARD Focus Group

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      The meeting was called to order by Jonathan Sanwald, HASQARD Focus Group Chair at 2:10 PM on April 19, 2016 in Conference Room 308 at 2420 Stevens. Those attending were: Jonathan Sanwald (Mission Support Alliance (MSA), Focus Group Chair), Cliff Watkins (Corporate Allocation Services, DOE-RL Support Contractor, Focus Group Secretary), Marcus Aranda (Wastren Advantage Inc. Wastren Hanford Laboratory (WHL)), Joe Archuleta (CH2M HILL Plateau Remediation Company

    12. TEC Communications Topic Group

      Office of Environmental Management (EM)

      Tribal Issues Topic Group Judith Holm, Chair April 21, 2004 Albuquerque, NM Tribal Issues Topic Group * February Tribal Summit with Secretary of Energy (Kristen Ellis, CI) - Held in conjunction with NCAI mid-year conference - First Summit held in response to DOE Indian Policy - Addressed barriers to communication and developing framework for interaction Tribal Issues Topic Group * Summit (continued) - Federal Register Notice published in March soliciting input on how to improve summit process

    13. ALS Communications Group

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      ALS Communications Group From left: Ashley White, Lori Tamura, Keri Troutman, and Carina Braun. The ALS Communications staff maintain the ALS Web site; write and edit all...

    14. HASQARD Focus Group

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      The meeting was called to order by Huei Meznarich, HASQARD Focus Group Chair at 2:04 PM on October 16, 2012 in Conference Room 308 at 2420 Stevens. Those attending were: Huei Meznarich (Focus Group Chair), Cliff Watkins (Focus Group Secretary), Jeff Cheadle, Glen Clark, Robert Elkins, Larry Markel, Mary McCormick-Barger, Karl Pool, Noe'l Smith-Jackson, Chris Sutton, Steve Trent, Amanda Tuttle, Sam Vega, Rich Weiss and Eric Wyse. New personnel have joined the Focus Group since the last

    15. HASQARD Focus Group

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      The meeting was called to order by Huei Meznarich, HASQARD Focus Group Chair at 2:09 PM on November 27, 2012 in Conference Room 308 at 2420 Stevens. Those attending were: Huei Meznarich (Focus Group Chair), Cliff Watkins (Focus Group Secretary), Glen Clark, Robert Elkins, Joan Kessner, Larry Markel, Mary McCormick-Barger, Steve Trent, and Rich Weiss. I. Huei Meznarich requested comments on the minutes from the October 16, 2012 meeting. No HASQARD Focus Group members present stated any

    16. HASQARD Focus Group

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      The meeting was called to order by Huei Meznarich, HASQARD Focus Group Chair at 2:05 PM on August 20, 2013 in Conference Room 308 at 2420 Stevens. Those attending were: Huei Meznarich (Focus Group Chair), Cliff Watkins (Focus Group Secretary), Taffy Almeida, Glen Clark, Robert Elkins, Scot Fitzgerald, Joan Kessner, Steve Smith, Rich Weiss and Eric Wyse. I. Huei Meznarich asked if there were any comments on the minutes from the July 23, 2013 meeting. No Focus Group members stated they had

    17. HASQARD Focus Group

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      The meeting was called to order by Huei Meznarich, HASQARD Focus Group Chair at 2:10 PM on April 15, 2014 in Conference Room 308 at 2420 Stevens. Those attending were: Huei Meznarich (Focus Group Chair), Cliff Watkins (Focus Group Secretary), Glen Clark, Robert Elkins, Scot Fitzgerald, Mary McCormick-Barger, Karl Pool, Noe'l Smith-Jackson, and Eric Wyse. I. Huei Meznarich asked if there were any comments on the minutes from the March 18, 2014 meeting. No Focus Group members stated they

    18. Hydrogen Technologies Group

      SciTech Connect (OSTI)

      Not Available

      2008-03-01

      The Hydrogen Technologies Group at the National Renewable Energy Laboratory advances the Hydrogen Technologies and Systems Center's mission by researching a variety of hydrogen technologies.

    19. The Chaninik Wind Group

      Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

      Energy Chaninik Wind Group villages: Kongiganak (pop. 359), Kwigillingok (pop. 388), Kipnuk (pop. 644), Tuntutuliak (pop. 370). On average, 24% of families are below the poverty line. ...

    20. Buildings Sector Working Group

      U.S. Energy Information Administration (EIA) Indexed Site

      Group, Forrestal 2E-069, July 22, 2013. Residential projects: RECS update, lighting model, equipment and shell subsidies, ENERGY STAR benchmarking, housing stock formation ...

    1. Tritium Focus Group Meeting

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Meeting Information Tritium Focus Group Charter (pdf) Hotel Information Classified Session Information Los Alamos Restaurants (pdf) LANL Information Visiting Los Alamos Area Map ...

    2. SCM Working Group

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Modeling Working Group Translator Update Shaocheng Xie Lawrence Livermore National Laboratory Outline 1. Data development in support of CMWG * Climate modeling best estimate data * ...

    3. Mobile computing device configured to compute irradiance, glint, and glare of the sun

      DOE Patents [OSTI]

      Gupta, Vipin P; Ho, Clifford K; Khalsa, Siri Sahib

      2014-03-11

      Described herein are technologies pertaining to computing the solar irradiance distribution on a surface of a receiver in a concentrating solar power system or glint/glare emitted from a reflective entity. A mobile computing device includes at least one camera that captures images of the Sun and the entity of interest, wherein the images have pluralities of pixels having respective pluralities of intensity values. Based upon the intensity values of the pixels in the respective images, the solar irradiance distribution on the surface of the entity or glint/glare corresponding to the entity is computed by the mobile computing device.
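
      The record above maps camera pixel intensities to a solar irradiance or glint/glare distribution. As a rough illustration of that idea only (the linear calibration factor, the glare threshold, and the synthetic frame below are assumptions for the sketch, not the patented calibration), a per-pixel estimate might look like:

      ```python
      import numpy as np

      def irradiance_map(gray_image, w_per_m2_per_count=2.0, glare_count=250):
          """Toy per-pixel irradiance estimate from an 8-bit grayscale image.

          w_per_m2_per_count -- assumed linear calibration factor (illustrative only)
          glare_count        -- assumed intensity above which a pixel counts as glint/glare
          """
          counts = gray_image.astype(float)
          irradiance = counts * w_per_m2_per_count   # W/m^2 under a linear-response assumption
          glare_mask = counts >= glare_count         # pixels treated as glint/glare
          return irradiance, glare_mask

      # Synthetic frame standing in for a camera image of the receiver surface.
      frame = np.random.randint(0, 256, size=(480, 640), dtype=np.uint8)
      irr, glare = irradiance_map(frame)
      print(irr.mean(), int(glare.sum()))
      ```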

    4. Unix File Groups at NERSC

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      A user's default group is the same as their username. NERSC users usually belong to ... Useful Unix group commands include groups username (list group membership) and id ...
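
      The groups and id commands mentioned in the snippet can also be reproduced programmatically. A small POSIX-only sketch using the Python standard library (the username below is a placeholder, not a real NERSC account):

      ```python
      import grp
      import pwd

      def unix_groups(username):
          """Return the Unix groups a user belongs to, roughly like `groups username`."""
          primary_gid = pwd.getpwnam(username).pw_gid
          names = {grp.getgrgid(primary_gid).gr_name}        # default (primary) group
          names.update(g.gr_name for g in grp.getgrall()     # supplementary groups
                       if username in g.gr_mem)
          return sorted(names)

      print(unix_groups("alice"))  # "alice" is a placeholder username
      ```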

    5. Exascale Hardware Architectures Working Group

      SciTech Connect (OSTI)

      Hemmert, S; Ang, J; Chiang, P; Carnes, B; Doerfler, D; Leininger, M; Dosanjh, S; Fields, P; Koch, K; Laros, J; Noe, J; Quinn, T; Torrellas, J; Vetter, J; Wampler, C; White, A

      2011-03-15

      The ASC Exascale Hardware Architecture working group is challenged to provide input on the following areas impacting the future use and usability of potential exascale computer systems: processor, memory, and interconnect architectures, as well as the power and resilience of these systems. Going forward, there are many challenging issues that will need to be addressed. First, power constraints in processor technologies will lead to steady increases in parallelism within a socket. Additionally, all cores may not be fully independent nor fully general purpose. Second, there is a clear trend toward less balanced machines, in terms of compute capability compared to memory and interconnect performance. In order to mitigate the memory issues, memory technologies will introduce 3D stacking, eventually moving on-socket and likely on-die, providing greatly increased bandwidth but unfortunately also likely providing smaller memory capacity per core. Off-socket memory, possibly in the form of non-volatile memory, will create a complex memory hierarchy. Third, communication energy will dominate the energy required to compute, such that interconnect power and bandwidth will have a significant impact. All of the above changes are driven by the need for greatly increased energy efficiency, as current technology will prove unsuitable for exascale, due to unsustainable power requirements of such a system. These changes will have the most significant impact on programming models and algorithms, but they will be felt across all layers of the machine. There is clear need to engage all ASC working groups in planning for how to deal with technological changes of this magnitude. The primary function of the Hardware Architecture Working Group is to facilitate codesign with hardware vendors to ensure future exascale platforms are capable of efficiently supporting the ASC applications, which in turn need to meet the mission needs of the NNSA Stockpile Stewardship Program. This issue is

    6. Jason Hick! Storage Systems Group NERSC User Group Storage Update

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      NERSC User Group Storage Update, Feb 26, 2014. The compute and storage systems in 2014: sponsored compute systems (Carver, PDSF, JGI, KBASE, HEP) on 8 x FDR IB; /global/scratch (4 PB), /project (5 PB), and /home (250 TB) file systems; HPSS archive with 45 PB stored, 240 PB capacity, and 40 years of community data at 48 GB/s; local scratch of 2.2 PB (70 GB/s) and 6.4 PB (140 GB/s); an 80 GB/s Ethernet and IB fabric; science-friendly security; production monitoring; power efficiency; and WAN links of 2 x 10 Gb and 1 x 100 Gb to the Science Data Network.

    7. TEC Working Group Topic Groups Routing Meeting Summaries | Department of

      Office of Environmental Management (EM)

      Energy Meeting Summaries TEC Working Group Topic Groups Routing Meeting Summaries MEETING SUMMARIES Atlanta TEC Meeting, Routing Topic Group Summary (101.72 KB) More Documents & Publications TEC Meeting Summaries - January - February 2007 TEC Working Group Topic Groups Rail Meeting Summaries TEC Working Group Topic Groups Rail Conference Call Summaries Rail Topic Group

    8. TEC Working Group Topic Groups Rail Conference Call Summaries...

      Office of Environmental Management (EM)

      Summaries Rail Topic Group TEC Working Group Topic Groups Rail Conference Call Summaries Rail Topic Group May 17, 2007, January 16, 2007, ...

    9. HASQARD Focus Group

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      The meeting was called to order by Huei Meznarich, HASQARD Focus Group Chair at 2:06 PM on June 12, 2012 in Conference Room 308 at 2420 Stevens. Those attending were: Huei Meznarich (Focus Group Chair), Cliff Watkins (Focus Group Secretary), Jeff Cheadle, Glen Clark, Shannan Johnson, Joan Kessner, Larry Markel, Karl Pool, Steve Smith, Noe'l Smith-Jackson, Chris Sutton, Cindy Taylor, Chris Thomson, Amanda Tuttle, Sam Vega, Rick Warriner and Eric Wyse. I. Huei Meznarich requested comments on the

    10. HASQARD Focus Group

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      The meeting was called to order by Huei Meznarich, HASQARD Focus Group Chair at 2:10 PM on August 21, 2012 in an alternate Conference Room in 2420 Stevens. Those attending were: Huei Meznarich (Focus Group Chair), Cliff Watkins (Focus Group Secretary), Lynn Albin, Glen Clark, Robert Elkins, Scot Fitzgerald, Joan Kessner, Larry Markel, Steve Smith, Chris Sutton, Chris Thompson, Amanda Tuttle, and Rich Weiss. I. Because the meeting was scheduled to take place in Room 308 and a glitch in

    11. HASQARD Focus Group

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      The beginning of the meeting was delayed due to an unannounced loss of the conference room scheduled for the meeting. After securing another meeting location, the meeting was called to order by Huei Meznarich, HASQARD Focus Group Chair at 2:18 PM on April 16, 2013 in Conference Room 156 at 2420 Stevens. Those attending were: Huei Meznarich (Focus Group Chair), Cliff Watkins (Focus Group Secretary), Jeff Cheadle, Glen Clark, Joan Kessner, Larry Markel, Mary McCormick-Barger, Karl Pool,

    12. HASQARD Focus Group

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      The meeting was called to order by Huei Meznarich, HASQARD Focus Group Chair at 2:05 PM on November 19, 2013 in Conference Room 308 at 2420 Stevens. Those attending were: Huei Meznarich (Focus Group Chair), Cliff Watkins (Focus Group Secretary), Taffy Almeida, Joe Archuleta, Mike Barnes, Jeff Cheadle, Glen Clark, Robert Elkins, Scot Fitzgerald, Joan Kessner, Mary McCormick-Barger, Noe'l Smith-Jackson, Chris Sutton, Amanda Tuttle, Rich Weiss and Eric Wyse. I. Huei Meznarich asked if

    13. HASQARD Focus Group

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      January 28, 2014 The meeting was called to order by Huei Meznarich, HASQARD Focus Group Chair at 2:04 PM on January 28, 2014 in Conference Room 308 at 2420 Stevens. Those attending were: Huei Meznarich (Focus Group Chair), Cliff Watkins (Focus Group Secretary), Joe Archuleta, Glen Clark, Robert Elkins, Scot Fitzgerald, Joan Kessner, Mary McCormick-Barger, Karl Pool, Noe'l Smith-Jackson, Chris Sutton, Chris Thompson, Rich Weiss and Eric Wyse. I. Huei Meznarich asked if there were any comments on

    14. HASQARD Focus Group

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      The meeting was called to order by Huei Meznarich, HASQARD Focus Group Chair at 2:07 PM on February 25, 2014 in Conference Room 308 at 2420 Stevens. Those attending were: Huei Meznarich (Focus Group Chair), Cliff Watkins (Focus Group Secretary), Lynn Albin, Taffy Almeida, Joe Archuleta, Glen Clark, Robert Elkins, Scot Fitzgerald, Joan Kessner, Mary McCormick-Barger, Karl Pool, Noe'l Smith-Jackson, Chris Sutton, Chris Thompson, and Eric Wyse. I. Huei Meznarich asked if there were any

    15. HASQARD Focus Group

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      The meeting was called to order by Huei Meznarich, HASQARD Focus Group Chair at 2:05 PM on March 18, 2014 in Conference Room 308 at 2420 Stevens. Those attending were: Huei Meznarich (Focus Group Chair), Cliff Watkins (Focus Group Secretary), Joe Archuleta, Glen Clark, Robert Elkins, Scot Fitzgerald, Joan Kessner, Mary McCormick-Barger, Karl Pool, Noe'l Smith-Jackson, Rich Weiss, and Eric Wyse. I. Huei Meznarich asked if there were any comments on the minutes from the February 25, 2014

    16. HASQARD Focus Group

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      The meeting was called to order by Huei Meznarich, HASQARD Focus Group Chair at 2:05 PM on May 20, 2014 in Conference Room 308 at 2420 Stevens. Those attending were: Huei Meznarich (Focus Group Chair), Cliff Watkins (Focus Group Secretary), Lynn Albin, Taffy Almeida, Joe Archuleta, Glen Clark, Robert Elkins, Scot Fitzgerald, Shannan Johnson, Joan Kessner, Mary McCormick-Barger, Craig Perkins, Karl Pool, Noe'l Smith-Jackson, Chris Sutton, Chris Thompson and Eric Wyse. I. Acknowledging the

    17. HASQARD Focus Group

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      The meeting was called to order by Huei Meznarich, HASQARD Focus Group Chair at 2:07 PM on June 12, 2014 in Conference Room 308 at 2420 Stevens. Those attending were: Huei Meznarich (Focus Group Chair), Cliff Watkins (Focus Group Secretary), Joe Archuleta, Sara Champoux, Glen Clark, Jim Douglas, Robert Elkins, Scot Fitzgerald, Joan Kessner, Jan McCallum, Mary McCormick-Barger, Karl Pool, Noe'l Smith-Jackson, Rich Weiss and Eric Wyse. I. Acknowledging the presence of new and/or infrequent

    18. HASQARD Focus Group

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      The meeting was called to order by Huei Meznarich, HASQARD Focus Group Chair at 2:10 PM on June 17, 2014 in Conference Room 308 at 2420 Stevens. Those attending were: Huei Meznarich (Focus Group Chair), Cliff Watkins (Focus Group Secretary), Robert Elkins, Shannan Johnson, Joan Kessner, Jan McCallum, Craig Perkins, Karl Pool, Chris Sutton and Rich Weiss. I. Because of the short time since the last meeting, Huei Meznarich stated that the minutes from the June 12, 2014 meeting have not yet

    19. Trails Working Group

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Working Group Trails Working Group Our mission is to inventory, map, and prepare historical reports on the many trails used at LANL. Contact Environmental Communication & Public Involvement P.O. Box 1663 MS M996 Los Alamos, NM 87545 (505) 667-0216 Email The LANL Trails Working Group inventories, maps, and prepares historical reports on the many trails used at LANL. Some of these trails are ancient pueblo footpaths that continue to be used for recreational hiking today. Some serve as quiet

    20. Group key management

      SciTech Connect (OSTI)

      Dunigan, T.; Cao, C.

      1997-08-01

      This report describes an architecture and implementation for doing group key management over a data communications network. The architecture describes a protocol for establishing a shared encryption key among an authenticated and authorized collection of network entities. Group access requires one or more authorization certificates. The implementation includes a simple public key and certificate infrastructure. Multicast is used for some of the key management messages. An application programming interface multiplexes key management and user application messages. An implementation using the new IP security protocols is postulated. The architecture is compared with other group key management proposals, and the performance and the limitations of the implementation are described.
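
      The report's protocol establishes a shared encryption key among authenticated, authorized members, with certificates and multicast handled by its own infrastructure. As a loose sketch of the shared-group-key idea only (using symmetric Fernet key wrapping from the third-party cryptography package in place of the report's certificate-based exchange and multicast messaging):

      ```python
      from cryptography.fernet import Fernet

      # Each authorized member already holds an individual key (a stand-in for the
      # report's authenticated, certificate-backed identities).
      members = {name: Fernet.generate_key() for name in ("alice", "bob", "carol")}

      # The key manager creates one shared group key ...
      group_key = Fernet.generate_key()

      # ... and distributes it wrapped under each member's individual key.
      wrapped = {name: Fernet(k).encrypt(group_key) for name, k in members.items()}

      # A member unwraps the group key and can then decrypt group traffic.
      bobs_group_key = Fernet(members["bob"]).decrypt(wrapped["bob"])
      assert bobs_group_key == group_key
      ciphertext = Fernet(group_key).encrypt(b"group message")
      print(Fernet(bobs_group_key).decrypt(ciphertext))
      ```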

    1. Beyond Moore computing research challenge workshop report.

      SciTech Connect (OSTI)

      Huey, Mark C.; Aidun, John Bahram

      2013-10-01

      We summarize the presentations and breakout session discussions from the in-house workshop that was held on 11 July 2013 to acquaint a wider group of Sandians with the Beyond Moore Computing research challenge.

    2. Distributed processor allocation for launching applications in a massively connected processors complex

      DOE Patents [OSTI]

      Pedretti, Kevin

      2008-11-18

      A compute processor allocator architecture for allocating compute processors to run applications in a multiple processor computing apparatus is distributed among a subset of processors within the computing apparatus. Each processor of the subset includes a compute processor allocator. The compute processor allocators can share a common database of information pertinent to compute processor allocation. A communication path permits retrieval of information from the database independently of the compute processor allocators.
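
      The patent describes compute processor allocators, hosted on a subset of nodes, that consult a common database of allocation state. A minimal single-process sketch of that bookkeeping only (the in-memory table and lock below stand in for the shared database and are not the patented mechanism):

      ```python
      import threading

      class ProcessorAllocator:
          """Toy allocator: claims free compute processors for an application."""

          def __init__(self, shared_db, lock):
              self.db = shared_db      # shared table: processor id -> owning app (or None)
              self.lock = lock         # stands in for database-level coordination

          def allocate(self, app, count):
              with self.lock:
                  free = [p for p, owner in self.db.items() if owner is None]
                  if len(free) < count:
                      return []                     # not enough free processors
                  chosen = free[:count]
                  for p in chosen:
                      self.db[p] = app
                  return chosen

          def release(self, app):
              with self.lock:
                  for p, owner in self.db.items():
                      if owner == app:
                          self.db[p] = None

      # One shared table, consulted by allocators on two different "nodes".
      table = {pid: None for pid in range(8)}
      lock = threading.Lock()
      alloc_a, alloc_b = ProcessorAllocator(table, lock), ProcessorAllocator(table, lock)
      print(alloc_a.allocate("app1", 3), alloc_b.allocate("app2", 4))
      ```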

    3. System-wide power management control via clock distribution network

      DOE Patents [OSTI]

      Coteus, Paul W.; Gara, Alan; Gooding, Thomas M.; Haring, Rudolf A.; Kopcsay, Gerard V.; Liebsch, Thomas A.; Reed, Don D.

      2015-05-19

      An apparatus, method and computer program product for automatically controlling power dissipation of a parallel computing system that includes a plurality of processors. A computing device issues a command to the parallel computing system. A clock pulse-width modulator encodes the command in a system clock signal to be distributed to the plurality of processors. The plurality of processors in the parallel computing system receive the system clock signal including the encoded command, and adjusts power dissipation according to the encoded command.
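
      The patent encodes a power-management command in the system clock by pulse-width modulation so that every processor receiving the clock can decode and act on it. A toy bit-level simulation of that encode/decode idea (the specific duty-cycle mapping below is an assumption for illustration, not the patented encoding):

      ```python
      def encode_in_clock(bits, period=10, duty_zero=0.5, duty_one=0.8):
          """Encode command bits as per-cycle duty cycles of a square clock."""
          samples = []
          for b in bits:
              high = int(period * (duty_one if b else duty_zero))
              samples.extend([1] * high + [0] * (period - high))
          return samples

      def decode_from_clock(samples, period=10, threshold=0.65):
          """Recover the command bits by measuring each cycle's duty cycle."""
          bits = []
          for i in range(0, len(samples), period):
              duty = sum(samples[i:i + period]) / period
              bits.append(1 if duty > threshold else 0)
          return bits

      command = [1, 0, 1, 1, 0]       # e.g. a "reduce power" opcode, purely illustrative
      clock = encode_in_clock(command)
      assert decode_from_clock(clock) == command
      print(decode_from_clock(clock))
      ```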

    4. Bio-Derived Liquids to Hydrogen Distributed Reforming Targets (Presentation)

      Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

      Distributed Reforming Targets Arlene F. Anderson Technology Development Manager, U.S. DOE Office of Energy Efficiency and Renewable Energy Hydrogen, Fuel Cells and Infrastructure Technologies Program Bio-Derived Liquids to Hydrogen Distributed Reforming Working Group and Hydrogen Production Technical Team Review November 6, 2007 Bio-Derived Liquids to Hydrogen Distributed Reforming Working Group (BILIWG) The Bio-Derived Liquids to Hydrogen Distributed Reforming Working Group (BILIWG), launched

    5. Tritium Focus Group

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      matters related to tritium. Contacts Mike Rogers (505) 665-2513 Email Chandra Savage Marsden (505) 664-0183 Email The Tritium Focus Group consists of participants from member...

    6. Strategic Initiatives Work Group

      Broader source: Energy.gov [DOE]

      The Work Group, comprising DOE, contractor, and worker representatives, provides a forum for information sharing, data collection and analysis, and identification of best practices and initiatives to enhance safety performance and safety culture across the Complex.

    7. InterGroup Protocols

      Energy Science and Technology Software Center (OSTI)

      2003-04-02

      Existing reliable ordered group communication protocols have been developed for local-area networks and do not in general scale well to a large number of nodes and wide-area networks. The InterGroup suite of protocols is a scalable group communication system that introduces an unusual approach to handling group membership, and supports a receiver-oriented selection of service. The protocols are intended for a wide-area network, with a large number of nodes, that has highly variable delays and a high message loss rate, such as the Internet. The levels of the message delivery service range from unreliable unordered to reliable timestamp ordered.
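
      At the strongest service level named above, messages are delivered reliably in timestamp order. A toy receiver-side reorder buffer illustrating just that delivery discipline (group-wide agreement on the stable timestamp is assumed away and supplied directly by the caller):

      ```python
      import heapq

      class TimestampOrderedDelivery:
          """Toy receiver-side buffer that delivers messages in timestamp order."""

          def __init__(self):
              self.heap = []                 # (timestamp, sender, payload)
              self.delivered_up_to = -1.0    # highest timestamp already delivered

          def receive(self, timestamp, sender, payload):
              heapq.heappush(self.heap, (timestamp, sender, payload))

          def deliver_ready(self, stable_time):
              """Deliver everything with a timestamp at or below stable_time.

              In a real protocol stable_time would come from group-wide agreement;
              here the caller supplies it directly.
              """
              out = []
              while self.heap and self.heap[0][0] <= stable_time:
                  ts, sender, payload = heapq.heappop(self.heap)
                  self.delivered_up_to = ts
                  out.append((ts, sender, payload))
              return out

      q = TimestampOrderedDelivery()
      q.receive(2.0, "nodeB", "second")
      q.receive(1.0, "nodeA", "first")
      print(q.deliver_ready(stable_time=2.0))   # delivered in timestamp order
      ```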

    8. Date Times Group Speakers

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Group Research Meeting Toms Arias Mon, 3-10 2:30-3:30pm Faculty Meeting Richard Robinson Fri, 3-14 12:30-1:30pm Student & Postdoc Mtg Michael Zachman (Kourkoutis) & Deniz...

    9. HASQARD Focus Group

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Markel, Huei Meznarich, Karl Pool, Noe'l Smith-Jackson, Andrew Stevens, Genesis Thomas, ... the radar of the DOE- HQ QA group. Noe'l Smith-Jackson commented that Ecology was always ...

    10. HASQARD Focus Group

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Elkins, Mary McCormick-Barger, Noe'l Smith-Jackson, Chris Sutton, Amanda Tuttle, Rick ... Noe'l Smith-Jackson stated that the HASQARD document is the work of the Focus Group not ...

    11. HASQARD Focus Group

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Markel, Mary McCormick-Barger, Dave St. John, Steve Smith, Steve Trent and Eric Wyse. ... On January 31, the Secretary received a call from the QA Sub-Group Chair, Steve Smith. ...

    12. HASQARD Focus Group

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      The meeting was called to order by Dave Crawford, Focus Group Chairman at 2:03 PM on November 16, 2010 in Conference Room 208 at 2425 Stevens. Those attending were: Dave Crawford (Chair), Cliff Watkins (Secretary), Lynn Albin, Heather Anastos, Paula Ciszak, Glen Clark, Doug Duvon, Kathi Dunbar, Robert Elkins, Scot Fitzgerald, Joan Kessner, Larry Markel, Huei Meznarich, Steve Smith, Chris Sutton, Noe'l Smith-Jackson, Chris Thompson, Eric Wyse. New members to the Focus Group were

    13. ALS Communications Group

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Communications Group From left: Ashley White, Lori Tamura, and Keri Troutman. The ALS Communications staff maintain the ALS Web site; write and edit all print and electronic publications for the ALS, including Science Highlights, Science Briefs, brochures, handouts, and the monthly newsletter ALSNews; and create educational and scientific outreach materials. In addition, members of the group organize bi-monthly Science Cafés, create conference and workshop Web sites and publicity, and

    14. DOE STGWG Group

      Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

      STGWG Group The State and Tribal Government Working Group (STGWG) is one of the intergovernmental organizations with which the DOE EM office works. Formed in 1989, the group meets twice yearly for updates on EM projects. It comprises state legislators and tribal staff and leadership from states in proximity to DOE's environmental cleanup sites, including New York, South Carolina, Ohio, Washington, New Mexico, Idaho, California, Colorado, Georgia,

    16. Introduction to High Performance Computing Using GPUs

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Introduction to High Performance Computing Using GPUs, July 11, 2013. NERSC, NVIDIA, and The Portland Group presented a one-day workshop "Introduction to High Performance Computing Using GPUs" on July 11, 2013 in Room 250 of Sutardja Dai Hall on the University of California, Berkeley, campus. Registration was free and open to all NERSC users; Berkeley Lab Researchers; UC students, faculty, and staff; and users of the Oak Ridge Leadership Computing Facility. This workshop

    17. Computational Electronics and Electromagnetics

      SciTech Connect (OSTI)

      DeFord, J.F.

      1993-03-01

      The Computational Electronics and Electromagnetics thrust area is a focal point for computer modeling activities in electronics and electromagnetics in the Electronics Engineering Department of Lawrence Livermore National Laboratory (LLNL). Traditionally, they have focused their efforts in technical areas of importance to existing and developing LLNL programs, and this continues to form the basis for much of their research. A relatively new and increasingly important emphasis for the thrust area is the formation of partnerships with industry and the application of their simulation technology and expertise to the solution of problems faced by industry. The activities of the thrust area fall into three broad categories: (1) the development of theoretical and computational models of electronic and electromagnetic phenomena, (2) the development of useful and robust software tools based on these models, and (3) the application of these tools to programmatic and industrial problems. In FY-92, they worked on projects in all of the areas outlined above. The object of their work on numerical electromagnetic algorithms continues to be the improvement of time-domain algorithms for electromagnetic simulation on unstructured conforming grids. The thrust area is also investigating various technologies for conforming-grid mesh generation to simplify the application of their advanced field solvers to design problems involving complicated geometries. They are developing a major code suite based on the three-dimensional (3-D), conforming-grid, time-domain code DSI3D. They continue to maintain and distribute the 3-D, finite-difference time-domain (FDTD) code TSAR, which is installed at several dozen university, government, and industry sites.
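
      TSAR, mentioned above, is a finite-difference time-domain (FDTD) electromagnetics code. As a generic, textbook-style illustration of the FDTD update idea only (a 1-D Yee leapfrog in normalized units, in no way a stand-in for the 3-D conforming-grid LLNL codes):

      ```python
      import numpy as np

      # 1-D FDTD (Yee leapfrog) in normalized units with the Courant number set to 0.5.
      nx, nt, courant = 200, 400, 0.5
      ez = np.zeros(nx)          # electric field on the integer grid
      hy = np.zeros(nx - 1)      # magnetic field on the staggered half-grid

      for step in range(nt):
          hy += courant * (ez[1:] - ez[:-1])                 # update H from the curl of E
          ez[1:-1] += courant * (hy[1:] - hy[:-1])           # update E from the curl of H
          ez[nx // 4] += np.exp(-((step - 30) / 10.0) ** 2)  # soft Gaussian source

      print("peak |Ez| after", nt, "steps:", np.abs(ez).max())
      ```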

    18. TEC Working Group Topic Groups Section 180(c) Meeting Summaries |

      Office of Environmental Management (EM)

      Department of Energy Section 180(c) Meeting Summaries TEC Working Group Topic Groups Section 180(c) Meeting Summaries Meeting Summaries Washington, DC TEC Meeting - 180(c) Group Summary - March 15, 2006 (29.33 KB) More Documents & Publications TEC Working Group Topic Groups Tribal Meeting Summaries TEC Meeting Summaries - July 2007 TEC Working Group Topic Groups Tribal Conference Call Summaries

    19. WLCG and IPv6 - The HEPiX IPv6 working group

      DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

      Campana, S.; K. Chadwick; Chen, G.; Chudoba, J.; Clarke, P.; Elias, M.; Elwell, A.; Fayer, S.; Finnern, T.; Goossens, L.; et al

      2014-01-01

      The HEPiX (http://www.hepix.org) IPv6 Working Group has been investigating the many issues which feed into the decision on the timetable for the use of IPv6 (http://www.ietf.org/rfc/rfc2460.txt) networking protocols in High Energy Physics (HEP) Computing, in particular in the Worldwide Large Hadron Collider (LHC) Computing Grid (WLCG). RIPE NCC, the European Regional Internet Registry (RIR), ran out of IPv4 addresses in September 2012. The North and South America RIRs are expected to run out soon. In recent months it has become more clear that some WLCG sites, including CERN, are running short of IPv4 address space, now without the possibility of applying for more. This has increased the urgency for the switch-on of dual-stack IPv4/IPv6 on all outward facing WLCG services to allow for the eventual support of IPv6-only clients. The activities of the group include the analysis and testing of the readiness for IPv6 and the performance of many required components, including the applications, middleware, management and monitoring tools essential for HEP computing. Many WLCG Tier 1/2 sites are participants in the group's distributed IPv6 testbed and the major LHC experiment collaborations are engaged in the testing. We are constructing a group web/wiki which will contain useful information on the IPv6 readiness of the various software components and a knowledge base (http://hepix-ipv6.web.cern.ch/knowledge-base). Furthermore, this paper describes the work done by the working group and its future plans.
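
      The group's central recommendation is enabling dual-stack IPv4/IPv6 on outward-facing services. A minimal sketch of a dual-stack listener (the port is arbitrary, and the IPV6_V6ONLY option is available on most but not all platforms; this illustrates the socket option, not any WLCG middleware configuration):

      ```python
      import socket

      def dual_stack_listener(port=8080):
          """Open one IPv6 socket that also accepts IPv4-mapped clients where the OS allows it."""
          sock = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
          sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
          if hasattr(socket, "IPPROTO_IPV6") and hasattr(socket, "IPV6_V6ONLY"):
              # 0 = accept both native IPv6 and IPv4-mapped (::ffff:a.b.c.d) connections.
              sock.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_V6ONLY, 0)
          sock.bind(("::", port))
          sock.listen(5)
          return sock

      listener = dual_stack_listener()
      print("listening on", listener.getsockname())
      listener.close()
      ```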

    20. EIA - Coal Distribution

      Gasoline and Diesel Fuel Update (EIA)

      Annual Coal Distribution Report > Annual Coal Distribution Archives Annual Coal Distribution Archive Release Date: February 17, 2011 Next Release Date: December 2011 Domestic coal ...

    1. Facilities removal working group

      SciTech Connect (OSTI)

      1997-03-01

      This working group's first objective is to identify major economic, technical, and regulatory constraints on operator practices and decisions relevant to offshore facilities removal. Then, the group will try to make recommendations as to regulatory and policy adjustments, additional research, or process improvements and/or technological advances that may be needed to improve the efficiency and effectiveness of the removal process. The working group will focus primarily on issues dealing with Gulf of Mexico platform abandonments. In order to make the working group sessions as productive as possible, the Facilities Removal Working Group will focus on three topics that address a majority of the concerns and/or constraints relevant to facilities removal. The three areas are: (1) Explosive Severing and its Impact on Marine Life, (2) Pile and Conductor Severing, and (3) Deep Water Abandonments. This paper will outline the current state of practice in the offshore industry, identifying current regulations and specific issues encountered when addressing each of the three main topics above. The intent of the paper is to highlight potential issues for panel discussion, not to provide a detailed review of all data relevant to the topic. Before each panel discussion, key speakers will review data and information to facilitate development and discussion of the main issues of each topic. Please refer to the attached agenda for the workshop format, key speakers, presentation topics, and panel participants. The goal of the panel discussions is to identify key issues for each of the three topics above. The working group will also make recommendations on how to proceed on these key issues.

    2. Presentation of the MERC work-flow for the computation of a 2D radial reflector in a PWR

      SciTech Connect (OSTI)

      Clerc, T.; Hebert, A.; Leroyer, H.; Argaud, J. P.; Poncot, A.; Bouriquet, B.

      2013-07-01

      This paper presents a work-flow for computing an equivalent 2D radial reflector in a pressurized water reactor (PWR) core, consistent with a reference power distribution computed with the method of characteristics (MOC) of the lattice code APOLLO2. The Multi-modelling Equivalent Reflector Computation (MERC) work-flow is a coherent association of the lattice code APOLLO2 and the core code COCAGNE, structured around the ADAO (Assimilation de Donnees et Aide a l'Optimisation) module of the SALOME platform and based on data assimilation theory. The study leads to the computation of equivalent few-group reflectors, which can be spatially heterogeneous and which have been compared, as a first validation step, to those obtained with the similar OPTEX methodology developed with the core code DONJON. Subsequently, the MERC work-flow is used to compute the most accurate reflector consistent with the R and D choices made at Electricite de France (EDF) for core modelling, in terms of the number of energy groups and simplified transport solvers. We observe important reductions in the power discrepancy distribution over the core when using equivalent reflectors obtained with the MERC work-flow. (authors)

    3. TEC Working Group Topic Groups Rail Meeting Summaries | Department...

      Office of Environmental Management (EM)

      TEC Working Group Topic Groups Rail Meeting Summaries MEETING SUMMARIES Kansas City TEC Meeting, Rail Topic Group Summary - July 25, 2007; Atlanta TEC...

    4. TEC Working Group Topic Groups Security | Department of Energy

      Office of Environmental Management (EM)

      TEC Working Group Topic Groups Security The Security Topic group is comprised of regulators, law enforcement officials, labor and industry representatives and other subject matter ...

    5. Good Energy Group Plc previously Monkton Group Plc | Open Energy...

      Open Energy Info (EERE)

      Name: Good Energy Group Plc (previously Monkton Group Plc) Place: Chippenham, Wiltshire, United Kingdom Zip: SN15 1EE...

    6. Illinois Wind Workers Group

      SciTech Connect (OSTI)

      David G. Loomis

      2012-05-28

      The Illinois Wind Working Group (IWWG) was founded in 2006 with about 15 members. It has grown to over 200 members today, representing all aspects of the wind industry across the State of Illinois. In 2008, the IWWG developed a strategic plan to give direction to the group and its activities. The strategic plan identifies ways to address critical market barriers to the further penetration of wind, and the key to addressing these barriers is public education and outreach. Since Illinois has a restructured electricity market, utilities no longer have strong control over the addition of new capacity within the state. Instead, market acceptance depends on landowners willing to lease land and county officials willing to site wind farms. These groups are often uninformed about the benefits of wind energy and unfamiliar with the process. Therefore, many of the project objectives focus on conferences, forums, databases, and research that will allow these stakeholders to make well-educated decisions.

    7. Computing for Finance

      ScienceCinema (OSTI)

      None

      2011-10-06

      The finance sector is one of the driving forces for the use of distributed or Grid computing for business purposes. The speakers will review the state-of-the-art of high performance computing in the financial sector, and provide insight into how different types of Grid computing – from local clusters to global networks - are being applied to financial applications. They will also describe the use of software and techniques from physics, such as Monte Carlo simulations, in the financial world. There will be four talks of 20 min each. The talk abstracts and speaker bios are listed below. This will be followed by a Q&A panel session with the speakers. From 19:00 onwards there will be a networking cocktail for audience and speakers. This is an EGEE / CERN openlab event organized in collaboration with the regional business network rezonance.ch. A webcast of the event will be made available for subsequent viewing, along with powerpoint material presented by the speakers. Attendance is free and open to all. Registration is mandatory via www.rezonance.ch, including for CERN staff. 1. Overview of High Performance Computing in the Financial Industry Michael Yoo, Managing Director, Head of the Technical Council, UBS Presentation will describe the key business challenges driving the need for HPC solutions, describe the means in which those challenges are being addressed within UBS (such as GRID) as well as the limitations of some of these solutions, and assess some of the newer HPC technologies which may also play a role in the Financial Industry in the future. Speaker Bio: Michael originally joined the former Swiss Bank Corporation in 1994 in New York as a developer on a large data warehouse project. In 1996 he left SBC and took a role with Fidelity Investments in Boston. Unable to stay away for long, he returned to SBC in 1997 while working for Perot Systems in Singapore. Finally, in 1998 he formally returned to UBS in Stamford following the merger with SBC and has remained

    9. Computing for Finance

      SciTech Connect (OSTI)

      2010-03-24

      The finance sector is one of the driving forces for the use of distributed or Grid computing for business purposes. The speakers will review the state-of-the-art of high performance computing in the financial sector, and provide insight into how different types of Grid computing – from local clusters to global networks - are being applied to financial applications. They will also describe the use of software and techniques from physics, such as Monte Carlo simulations, in the financial world. There will be four talks of 20 min each. The talk abstracts and speaker bios are listed below. This will be followed by a Q&A panel session with the speakers. From 19:00 onwards there will be a networking cocktail for audience and speakers. This is an EGEE / CERN openlab event organized in collaboration with the regional business network rezonance.ch. A webcast of the event will be made available for subsequent viewing, along with powerpoint material presented by the speakers. Attendance is free and open to all. Registration is mandatory via www.rezonance.ch, including for CERN staff. 1. Overview of High Performance Computing in the Financial Industry Michael Yoo, Managing Director, Head of the Technical Council, UBS Presentation will describe the key business challenges driving the need for HPC solutions, describe the means in which those challenges are being addressed within UBS (such as GRID) as well as the limitations of some of these solutions, and assess some of the newer HPC technologies which may also play a role in the Financial Industry in the future. Speaker Bio: Michael originally joined the former Swiss Bank Corporation in 1994 in New York as a developer on a large data warehouse project. In 1996 he left SBC and took a role with Fidelity Investments in Boston. Unable to stay away for long, he returned to SBC in 1997 while working for Perot Systems in Singapore. Finally, in 1998 he formally returned to UBS in Stamford following the merger with SBC and has

    10. HASQARD Focus Group

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      The meeting was called to order by Huei Meznarich, HASQARD Focus Group Chair at 2:05 PM on March 20, 2012 in Conference Room 308 at 2420 Stevens. Those attending were: Huei Meznarich (Chair), Cliff Watkins (Secretary), Jeff Cheadle, Glen Clark, Scot Fitzgerald, Larry Markel, Noe'l Smith-Jackson, Chris Sutton, Amanda Tuttle, Sam Vega, Rick Warriner and Eric Wyse. I. Huei Meznarich requested comments on the minutes from the February 21, 2012 meeting. No HASQARD Focus Group members present

    11. SUB ZERO GROUP, INC.

      Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

      SUB ZERO GROUP, INC. 4717 Hammersley Road. Madison, WI 53711 P: 800.532.7820 P: 608.271.2233 F: 608.270.3362 Memorandum To: David Foster, Senior Advisor, Office of the Secretary of Energy CQ Michael Lafave, Director of Production Workers, SMART Union Workers Marc Norberg, Assistant to the General President, SMART Union Workers From: Christopher Jessup, Corporate Compliance Manager, Sub-Zero Group, Inc. Date: June 21, 2016 Re: June 15, 2016 Meeting at Department of Energy Forrestal Building in

    12. Upgraded Coal Interest Group

      SciTech Connect (OSTI)

      Evan Hughes

      2009-01-08

      The Upgraded Coal Interest Group (UCIG) is an EPRI 'users group' that focuses on clean, low-cost options for coal-based power generation. The UCIG covers topics that involve (1) pre-combustion processes, (2) co-firing systems and fuels, and (3) reburn using coal-derived or biomass-derived fuels. The UCIG mission is to preserve and expand the economic use of coal for energy. By reducing the fuel costs and environmental impacts of coal-fired power generation, existing units become more cost effective and thus new units utilizing advanced combustion technologies are more likely to be coal-fired.

    13. Bell, group and tangle

      SciTech Connect (OSTI)

      Solomon, A. I.

      2010-03-15

      The 'Bell' of the title refers to bipartite Bell states, and their extensions to, for example, tripartite systems. The 'Group' of the title is the Braid Group in its various representations; while 'Tangle' refers to the property of entanglement which is present in both of these scenarios. The objective of this note is to explore the relation between Quantum Entanglement and Topological Links, and to show that the use of the language of entanglement in both cases is more than one of linguistic analogy.

    14. Shane Canon! Group Leader for Technology Integration Biosciences

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Canon, Group Leader for Technology Integration. Biosciences Computing and Storage for JGI, February 12, 2013. Why Biology in DOE: * Biofuels - engineering better plants ...

    15. ENN Group aka XinAo Group | Open Energy Information

      Open Energy Info (EERE)

      Name: ENN Group (aka XinAo Group) Place: Langfang, Hebei Province, China Zip: 65001 Product: Chinese private industrial...

    16. Applications of Parallel Computers

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Applications of Parallel Computers, UCB CS267, Spring 2015, Tuesday & Thursday, 9:30-11:00 Pacific Time. Applications of Parallel Computers, CS267, is a graduate-level course...

    17. Greenko Group | Open Energy Information

      Open Energy Info (EERE)

      Name: Greenko Group Place: Hyderabad, India Zip: 500 033 Product: Focused on clean energy projects in Asia. References: Greenko Group1...

    18. Sinocome Group | Open Energy Information

      Open Energy Info (EERE)

      Name: Sinocome Group Place: Beijing Municipality, China Sector: Solar Product: A Chinese high tech group with business in solar PV sector...

    19. Valesul Group | Open Energy Information

      Open Energy Info (EERE)

      Name: Valesul Group Place: Brazil Product: Brazilian ethanol producer. References: Valesul Group1 ...

    20. Angeleno Group | Open Energy Information

      Open Energy Info (EERE)

      Name: Angeleno Group Address: 2029 Century Park East, Suite 2980 Place: Los Angeles, California Zip: 90067 Region:...

    1. MTorres Group | Open Energy Information

      Open Energy Info (EERE)

      Name: MTorres Group Place: Murcia, Spain Zip: 30320 Sector: Wind energy Product: Wind turbine manufacturer References: MTorres Group1 ...

    2. Ferrari Group | Open Energy Information

      Open Energy Info (EERE)

      Name: Ferrari Group Place: Sao Paulo, Brazil Product: Sao Paulo-based ethanol producer. References: Ferrari Group1 ...

    3. TEC Working Group Topic Groups Archives Communications Meeting Summaries |

      Office of Environmental Management (EM)

      Department of Energy Archives Communications Meeting Summaries TEC Working Group Topic Groups Archives Communications Meeting Summaries Meeting Summaries Milwaukee TEC Meeting, Communications Topic Group Summary - July 1998 (58.3 KB) Inaugural Group Meeting - April 1998 (83.34 KB) More Documents & Publications TEC Working Group Topic Groups Archives Communications Conference Call Summaries TEC Meeting Summaries - January 1997 TEC Working Group Topic Groups Tribal Conference Call

    4. TEC Working Group Topic Groups Rail Conference Call Summaries Inspections

      Office of Environmental Management (EM)

      Subgroup | Department of Energy Summaries Inspections Subgroup TEC Working Group Topic Groups Rail Conference Call Summaries Inspections Subgroup Inspections Subgroup April 6, 2006 (14.05 KB) February 23, 2006 Draft (20.29 KB) January 24, 2006 (27.44 KB) More Documents & Publications TEC Working Group Topic Groups Rail Conference Call Summaries Planning Subgroup TEC Working Group Topic Groups Rail Conference Call Summaries Tracking Subgroup TEC Working Group Topic Groups Rail Conference

    5. TEC Working Group Topic Groups Rail Key Documents Radiation Monitoring

      Office of Environmental Management (EM)

      Subgroup | Department of Energy Radiation Monitoring Subgroup TEC Working Group Topic Groups Rail Key Documents Radiation Monitoring Subgroup Radiation Monitoring Subgroup Draft Work Plan - February 4, 2008 (114.02 KB) More Documents & Publications TEC Working Group Topic Groups Rail Meeting Summaries TEC Working Group Topic Groups Rail Conference Call Summaries Radiation Monitoring Subgroup TEC Working Group Topic Groups Rail Key Documents Intermodal Subgroup

    6. Theory, Modeling and Computation

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Theory, Modeling and Computation The sophistication of modeling and simulation will be enhanced not only by the wealth of data available from MaRIE but by the increased computational capacity made possible by the advent of extreme computing. CONTACT Jack Shlachter (505) 665-1888 Email Extreme Computing to Power Accurate Atomistic Simulations Advances in high-performance computing and theory allow longer and larger atomistic simulations than currently possible.

    7. MEA BREAKOUT GROUP

      Office of Environmental Management (EM)

      MEA BREAKOUT GROUP TOPICS FOCUSED ON CCMs * IONOMER * CATALYST LAYER * PERFORMANCE * DEGRADATION * FUNDAMENTAL STUDIES IONOMER * DEVELOP IMPROVED IONOMERS: PERFLUORINATED IONOMERS (O2 SOLUBILITY) HYDROCARBON IONOMERS * ANODE FLOODING ISSUES, CATHODE DRYOUT ISSUES: - DEVELOP SEPARATE IONOMERS FOR ANODE/CATHODE - IONOMER CHEMISTRY * IONOMER/CATALYST INTERACTION * CL / MEMBRANE INTERACTION * IMPROVED CL/M INTERFACES - IONOMER CROSSLINKING CATALYST LAYER * CATALYST CHALLENGES IN ANODE SIDE * FOCUS

    8. Helms Research Group - Home

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Helms Group - Physical Organic Materials Chemistry. Our research is devoted to understanding transport phenomena in mesostructured systems assembled from organic, organometallic, polymeric and nanocrystalline components. Enhanced capabilities relevant to energy, health, water, and food quality are enabled by our unique approaches to the modular design of their architectures and interfaces.

    9. Abandoning wells working group

      SciTech Connect (OSTI)

      1997-03-01

      The primary objective of this working group is to identify major technical, regulatory, and environmental issues that are relevant to the abandonment of offshore wellbores. Once the issues have been identified, the working group also has the objective of making recommendations or providing potential solutions for consideration. Areas for process improvement will be identified and "best practices" will be discussed and compared to "minimum standards." The working group will primarily focus on wellbore abandonment in the Gulf of Mexico. However, workshop participants are encouraged to discuss international issues which may be relevant to wellbore abandonment practices in the Gulf of Mexico. The Abandoning Wells Group has identified several major areas for discussion that have concerns related to both operators and service companies performing wellbore abandonments in the Gulf of Mexico. The following broad topics were selected for the agenda: (1) MMS minimum requirements and state regulations. (2) Co-existence of best practices, new technology, and P & A economics. (3) Liability and environmental issues relating to wellbore abandonment.

    10. advanced simulation and computing

      National Nuclear Security Administration (NNSA)

      Each successive generation of computing system has provided greater computing power and energy efficiency.

      CTS-1 clusters will support NNSA's Life Extension Program and...

    11. Computational Physics and Methods

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      2 Computational Physics and Methods Performing innovative simulations of physics phenomena on tomorrow's scientific computing platforms Growth and emissivity of young galaxy ...

    12. Computer hardware fault administration

      DOE Patents [OSTI]

      Archer, Charles J.; Megerian, Mark G.; Ratterman, Joseph D.; Smith, Brian E.

      2010-09-14

      Computer hardware fault administration carried out in a parallel computer, where the parallel computer includes a plurality of compute nodes. The compute nodes are coupled for data communications by at least two independent data communications networks, where each data communications network includes data communications links connected to the compute nodes. Typical embodiments carry out hardware fault administration by identifying a location of a defective link in the first data communications network of the parallel computer and routing communications data around the defective link through the second data communications network of the parallel computer.
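
      The patent locates a defective link in the first network and routes the affected traffic through the second, independent network. A small graph-based sketch of that fallback only (the four-node topology and the BFS routing below are illustrative assumptions, not the patented method):

      ```python
      from collections import deque

      def bfs_path(adj, src, dst):
          """Shortest hop path in an adjacency-dict network, or None if unreachable."""
          prev, seen, queue = {}, {src}, deque([src])
          while queue:
              node = queue.popleft()
              if node == dst:
                  path = [dst]
                  while path[-1] != src:
                      path.append(prev[path[-1]])
                  return path[::-1]
              for nbr in adj.get(node, ()):
                  if nbr not in seen:
                      seen.add(nbr)
                      prev[nbr] = node
                      queue.append(nbr)
          return None

      def route(primary, secondary, src, dst, defective_link):
          """Route on the primary network, falling back to the secondary to avoid a bad link."""
          a, b = defective_link
          pruned = {n: [m for m in nbrs if (n, m) not in ((a, b), (b, a))]
                    for n, nbrs in primary.items()}
          return bfs_path(pruned, src, dst) or bfs_path(secondary, src, dst)

      primary   = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}    # a line of four compute nodes
      secondary = {0: [2], 2: [0, 3], 3: [2], 1: []}        # independent second network
      print(route(primary, secondary, 0, 3, defective_link=(1, 2)))
      ```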

    13. Applied & Computational Math

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      & Computational Math - Sandia Energy Energy Search Icon Sandia Home Locations Contact Us ... Twitter Google + Vimeo GovDelivery SlideShare Applied & Computational Math HomeEnergy ...

    14. Molecular Science Computing | EMSL

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      computational and state-of-the-art experimental tools, providing a cross-disciplinary environment to further research. Additional Information Computing user policies Partners...

    15. Computational Earth Science

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      6 Computational Earth Science We develop and apply a range of high-performance computational methods and software tools to Earth science projects in support of environmental ...

    16. Electrostatic Cooperativity of Hydroxyl Groups at Metal Oxide Surfaces

      SciTech Connect (OSTI)

      Boily, Jean F.; Lins, Roberto D.

      2009-09-24

      The O-H bond distribution of hydroxyl groups at the {110} goethite (α-FeOOH) surface was investigated by molecular dynamics. This distribution was strongly affected by electrostatic interactions with neighboring oxo and hydroxo groups. The effects of proton surface loading, simulated by emplacing two protons at different distances of separation, were diverse and generated several sets of O-H bond distributions. DFT calculations of a representative molecular cluster were also carried out to demonstrate the impact of these effects on the orientation of oxygen lone pairs in neighboring oxo groups. These effects should have strong repercussions on O-H stretching vibrations of metal oxide surfaces.

    17. Climate Modeling using High-Performance Computing

      SciTech Connect (OSTI)

      Mirin, A A

      2007-02-05

      The Center for Applied Scientific Computing (CASC) and the LLNL Climate and Carbon Science Group of Energy and Environment (E and E) are working together to improve predictions of future climate by applying the best available computational methods and computer resources to this problem. Over the last decade, researchers at the Lawrence Livermore National Laboratory (LLNL) have developed a number of climate models that provide state-of-the-art simulations on a wide variety of massively parallel computers. We are now developing and applying a second generation of high-performance climate models. Through the addition of relevant physical processes, we are developing an earth systems modeling capability as well.

    18. Working Group Report: Lattice Field Theory

      SciTech Connect (OSTI)

      Blum, T.; et al.,

      2013-10-22

      This is the report of the Computing Frontier working group on Lattice Field Theory prepared for the proceedings of the 2013 Community Summer Study ("Snowmass"). We present the future computing needs and plans of the U.S. lattice gauge theory community and argue that continued support of the U.S. (and worldwide) lattice-QCD effort is essential to fully capitalize on the enormous investment in the high-energy physics experimental program. We first summarize the dramatic progress of numerical lattice-QCD simulations in the past decade, with some emphasis on calculations carried out under the auspices of the U.S. Lattice-QCD Collaboration, and describe a broad program of lattice-QCD calculations that will be relevant for future experiments at the intensity and energy frontiers. We then present details of the computational hardware and software resources needed to undertake these calculations.

    19. Focus Group | Department of Energy

      Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

      Outreach Forums » Focus Group and Work Group Activities » Focus Group Focus Group The Focus Group was formed in March 2007 to initiate dialogue and interface with labor unions, DOE Program Secretarial Offices, and stakeholders in areas of mutual interest and concern related to health, safety, security, and the environment. Meeting Documents Available for Download November 13, 2012 Work Group Leadership Meetings: Transition Elements This Focus Group Work Group telecom was held with the Work

    20. TRIDAC host computer functional specification

      SciTech Connect (OSTI)

      Hilbert, S.M.; Hunter, S.L.

      1983-08-23

      The purpose of this document is to outline the baseline functional requirements for the Triton Data Acquisition and Control (TRIDAC) Host Computer Subsystem. The requirements presented in this document are based upon systems that currently support both the SIS and the Uranium Separator Technology Groups in the AVLIS Program at the Lawrence Livermore National Laboratory and upon the specific demands associated with the extended safe operation of the SIS Triton Facility.

    1. Parallel Computing Summer Research Internship

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Mentors Parallel Computing Summer Research Internship Creates next-generation leaders in HPC research and applications development Contacts Program Co-Lead Robert (Bob) Robey Email Program Co-Lead Gabriel Rockefeller Email Program Co-Lead Hai Ah Nam Email Professional Staff Assistant Nickole Aguilar Garcia (505) 665-3048 Email 2016: Mentors Bob Robey Bob Robey XCP-2: EULERIAN CODES Bob Robey is a Research Scientist in the Eulerian Applications group at Los Alamos National Laboratory. He is the

    2. HASQARD Focus Group

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      8, 2011 The meeting was called to order by Dave Crawford, Focus Group Chairman at 2:08 PM on January 18, 2011 in Conference Room 208 at 2425 Stevens. Those attending were: Dave Crawford (Chair), Cliff Watkins (Secretary), Heather Anastos, Paula Ciszak, Jim Conca, Scott Conley, Glen Clark, Scott Conley, Jim Douglas, Scot Fitzgerald, Stewart Huggins, Jim Jewett, Joan Kessner, Larry Markel, Huei Meznarich, Karl Pool, Dave Shea, Steve Smith, Chris Sutton, Amanda Tuttle, Rich Weiss, Eric Wyse. Dave

    3. HASQARD Focus Group

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      1 The meeting was called to order by Huei Meznarich who was acting for the absent Dave Crawford, Focus Group Chairman at 2:04 PM on April 19, 2011 in Conference Room 208 at 2425 Stevens. Those attending were: Huei Meznarich (Acting Chair), Cliff Watkins (Secretary), Taffy Almeida, Heather Anastos, Courtney Blanchard, Jeff Cheadle, Glen Clark, Kathie Dunbar, Robert Elkins, Scot Fitzgerald, Greg Holte, Joan Kessner, Noe'l Smith- Jackson, Chris Sutton, Cindy Taylor, Chris Thompson, Amanda Tuttle,

    4. HASQARD Focus Group

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      7, 2011 The meeting was called to order by Dave Crawford, Focus Group Chairman at 2:03 PM on May 17, 2011 in Conference Room 208 at 2425 Stevens. Those attending were: Dave Crawford (Chair), Cliff Watkins (Secretary), Taffy Almeida, Courtney Blanchard, Jeff Cheadle, Glen Clark, Robert Elkins, Scot Fitzgerald, Al Hawkins, Greg Holte, Kris Kuhl-Klinger, Larry Markel, Huei Meznarich, Noe'l Smith-Jackson, Chris Sutton, Cindy Taylor, Chris Thompson, Amanda Tuttle, Eric Wyse. I. Dave Crawford

    5. HASQARD Focus Group

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      8, 2011 The meeting was called to order by Huei Meznarich, HASQARD Focus Group Chair at 2:04 PM on November 8, 2011 in Conference Room 126 at 2420 Stevens. Those attending were: Huei Meznarich (Chair), Cliff Watkins (Secretary), Lynn Albin, Heather Anastos, Courtney Blanchard, Jeff Cheadle, Scot Fitzgerald, Jim Jewett, Shannan Johnson, Kris Kuhl-Klinger, Joan Kessner, Larry Markel, Karl Pool, Noe'l Smith-Jackson, Steve Smith, Chris Sutton, Cindy Taylor, Chris Thompson, Amanda Tuttle and Eric

    6. HASQARD Focus Group

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      7, 2012 The meeting was called to order by Huei Meznarich, HASQARD Focus Group Chair at 2:04 PM on January 17, 2012 in Conference Room 308 at 2420 Stevens. Those attending were: Huei Meznarich (Chair), Cliff Watkins (Secretary), Mike Barnes, Jeff Cheadle, Glen Clark, Scot Fitzgerald, Shannan Johnson, Joan Kessner, Larry Markel, Cindy Taylor, Chris Thompson, Amanda Tuttle, Sam Vega, Rich Weiss and Eric Wyse. I. Huei Meznarich requested comments on the minutes from the December 13, 2011 meeting.

    7. HASQARD Focus Group

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      1, 2012 The meeting was called to order by Huei Meznarich, HASQARD Focus Group Chair at 2:02 PM on February 21, 2012 in Conference Room 308 at 2420 Stevens. Those attending were: Huei Meznarich (Chair), Cliff Watkins (Secretary), Lynn Albin, Taffy Almeida, Courtney Blanchard, Glen Clark, Scot Fitzgerald, Shannan Johnson, Kris Kuhl-Klinger, Larry Markel, Karl Pool, Steve Smith, Cindy Taylor, Amanda Tuttle, Sam Vega, Rick Warriner, Rich Weiss and Eric Wyse. I. Huei Meznarich requested comments on

    8. Working Group Reports

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      5 Working Group Reports Special Working Session on the Role of Buoy Observations in the Tropical Western Pacific Measurement Scheme J. Downing Marine Sciences Laboratory Sequim, Washington R. M. Reynolds Brookhaven National Laboratory Upton, New York Attending W. Clements (TWPPO) F. Barnes (TWPPO) T. Ackerman (TWP Site Scientist) M. Ivey (ARCS Manager) H. Church J. Curry J. del Corral B. DeRoos S. Kinne J. Mather J. Michalsky M. Miller P. Minnett B. Porch J. Sheaffer P. Webster M. Wesely K.

    9. Yennello Group Home Page

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Cyclotron Chemistry Dept. Physics Dept. College of Science Texas A&M University The Group Activities Publications Articles Talks and Posters Detectors Links Pictures Women in Nuclear Science Internal Documents Contacts run photos people photos equipment photos Copyright © 2009 Texas A&M University Cyclotron Institute MS #3366 College Station TX 77843-3366 Phone: 979-845-1411 Fax: 979-845-1899

    10. Tritium Focus Group Meeting:

      Office of Environmental Management (EM)

      32 nd Tritium Focus Group Meeting: Tritium research activities in Safety and Tritium Applied Research (STAR) facility, Idaho National Laboratory Masashi Shimada Fusion Safety Program, Idaho National Laboratory April 25 th 2013, Germantown, MD STI #: INL/MIS-13-28975 Outlines 1. Motivation of tritium research activity in STAR facility 2. Unique capabilities in STAR facility 3. Research highlights from tritium retention in HFIR neutron- irradiated tungsten April 25th 2013 Germantown, MD STAR

    11. Environmental/Interest Groups

      Office of Legacy Management (LM)

      Environmental/Interest Groups Miamisburg Mound Community Improvement Corporation (MMCIC) Mike J. Grauwelman President P.O. Box 232 Miamisburg, OH 45343-0232 (937) 865-4462 Email: mikeg@mound.com Mound Reuse Committee See MMCIC Mound Environmental Safety and Health Sharon Cowdrey President 5491 Weidner Road Springboro, OH 45066 (937) 748-4757 No email address available Mound Museum Association Dr. Don Sullenger President Mound Advanced Technology Center 720 Mound Road Miamisburg, OH 45342-6714

    12. TEC Working Group Topic Groups | Department of Energy

      Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

      Topic Groups TEC Working Group Topic Groups TEC Topic Groups were formed in 1991 following an evaluation of the TEC program. Interested members, DOE and other federal agency staff meet to examine specific issues related to radioactive materials transportation. TEC Topic Groups enable a small number of participants to focus intensively on key issues at a level of detail that is unattainable during the TEC semiannual meetings due to time and group size constraints. Topic Groups meet individually

    13. BILIWG Meeting: DOE Hydrogen Quality Working Group Update and Recent

      Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

      Progress (Presentation) | Department of Energy DOE Hydrogen Quality Working Group Update and Recent Progress (Presentation) BILIWG Meeting: DOE Hydrogen Quality Working Group Update and Recent Progress (Presentation) Presented at the 2007 Bio-Derived Liquids to Hydrogen Distributed Reforming Working Group held November 6, 2007 in Laurel, Maryland. 12_anl_h2_quality_working_group_update.pdf (683.47 KB) More Documents & Publications Effects of Fuel and Air Impurities on PEM Fuel Cell

    14. Evaluation of distributed ANSYS for high performance computing...

      Office of Scientific and Technical Information (OSTI)

      DOE Contract Number: AC04-94AL85000 Resource Type: Conference Resource Relation: Conference: Proposed for presentation at the Seventh Biennial Tri-Laboratory Engineering Conference ...

    15. Distributed Reforming of Biomass Pyrolysis Oils (Presentation)

      Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

      Working Group Meeting Presentation Guidance at a Glance Distributed Reforming of Biomass Pyrolysis Oils DOE Bio-Derived Liquids to Hydrogen Distributed Reforming Working Group Meeting November 6 and 7, 2007 R. J. Evans, NREL D. M. Steward, NREL Innovation / Overview Biomass pyrolysis produces a liquid product, bio-oil, which contains a wide spectrum of components that can be efficiently stored and shipped to a site for renewable hydrogen production and converted to H2 at moderate severity

    16. Cosmic Reionization On Computers | Argonne Leadership Computing...

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      its Cosmic Reionization On Computers (CROC) project, using the Adaptive Refinement Tree (ART) code as its main simulation tool. An important objective of this research is to make...

    17. Computational Science and Engineering

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Computational Science and Engineering NETL's Computational Science and Engineering competency consists of conducting applied scientific research and developing physics-based simulation models, methods, and tools to support the development and deployment of novel process and equipment designs. Research includes advanced computations to generate information beyond the reach of experiments alone by integrating experimental and computational sciences across different length and time scales. Specific

    18. Sandia National Laboratories: Careers: Computer Science

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Computer Science Red Storm photo Sandia's supercomputing research is reaching for tomorrow's exascale performance while solving real-world problems today. Computer scientists and engineers at Sandia work on a variety of projects that range from research to full life-cycle product development and support. For example, their research activities cover both "bits and bytes" operating systems-level research and leading-edge information technology research in areas such as distributed

    19. Parallel computing works

      SciTech Connect (OSTI)

      Not Available

      1991-10-23

      An account of the Caltech Concurrent Computation Program (C³P), a five year project that focused on answering the question: Can parallel computers be used to do large-scale scientific computations? As the title indicates, the question is answered in the affirmative, by implementing numerous scientific applications on real parallel computers and doing computations that produced new scientific results. In the process of doing so, C³P helped design and build several new computers, designed and implemented basic system software, developed algorithms for frequently used mathematical computations on massively parallel machines, devised performance models and measured the performance of many computers, and created a high performance computing facility based exclusively on parallel computers. While the initial focus of C³P was the hypercube architecture developed by C. Seitz, many of the methods developed and lessons learned have been applied successfully on other massively parallel architectures.

    20. TEC Working Group Topic Groups Archives | Department of Energy

      Office of Environmental Management (EM)

      Archives TEC Working Group Topic Groups Archives The following Topic Groups are no longer active; however, related documents and notes for these archived Topic Groups are available through the following links: Communications Consolidated Grant Topic Group Training - Medical Training Protocols Route Identification Process Mechanics of Funding and Technical Assistance

    1. Computational Fluid Dynamics

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      scour-tracc-cfd TRACC RESEARCH Computational Fluid Dynamics Computational Structural Mechanics Transportation Systems Modeling Computational Fluid Dynamics Overview of CFD: Video Clip with Audio Computational fluid dynamics (CFD) research uses mathematical and computational models of flowing fluids to describe and predict fluid response in problems of interest, such as the flow of air around a moving vehicle or the flow of water and sediment in a river. Coupled with appropriate and prototypical

    2. Computing for Finance

      ScienceCinema (OSTI)

      None

      2011-10-06

      The finance sector is one of the driving forces for the use of distributed or Grid computing for business purposes. The speakers will review the state-of-the-art of high performance computing in the financial sector, and provide insight into how different types of Grid computing - from local clusters to global networks - are being applied to financial applications. They will also describe the use of software and techniques from physics, such as Monte Carlo simulations, in the financial world. There will be four talks of 20min each. The talk abstracts and speaker bios are listed below. This will be followed by a Q&A panel session with the speakers. From 19:00 onwards there will be a networking cocktail for audience and speakers. This is an EGEE / CERN openlab event organized in collaboration with the regional business network rezonance.ch. A webcast of the event will be made available for subsequent viewing, along with powerpoint material presented by the speakers. Attendance is free and open to all. Registration is mandatory via www.rezonance.ch, including for CERN staff. 1. Overview of High Performance Computing in the Financial Industry Michael Yoo, Managing Director, Head of the Technical Council, UBS Presentation will describe the key business challenges driving the need for HPC solutions, describe the means in which those challenges are being addressed within UBS (such as GRID) as well as the limitations of some of these solutions, and assess some of the newer HPC technologies which may also play a role in the Financial Industry in the future. Speaker Bio: Michael originally joined the former Swiss Bank Corporation in 1994 in New York as a developer on a large data warehouse project. In 1996 he left SBC and took a role with Fidelity Investments in Boston. Unable to stay away for long, he returned to SBC in 1997 while working for Perot Systems in Singapore. Finally, in 1998 he formally returned to UBS in Stamford following the merger with SBC and has remained
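
      As a concrete illustration of the physics-style Monte Carlo techniques mentioned in the abstract, the short sketch below prices a European call option by averaging simulated terminal asset prices under geometric Brownian motion. The model, the parameter values, and the single-step simulation are illustrative assumptions, not material from the talks; a production grid deployment would distribute the path loop across workers.

      import math
      import random

      # Minimal Monte Carlo sketch (assumed model and parameters): price a
      # European call under geometric Brownian motion by averaging payoffs.
      def mc_call_price(s0, strike, rate, vol, maturity, n_paths=100_000, seed=42):
          rng = random.Random(seed)
          drift = (rate - 0.5 * vol * vol) * maturity
          diffusion = vol * math.sqrt(maturity)
          payoff_sum = 0.0
          for _ in range(n_paths):
              z = rng.gauss(0.0, 1.0)                     # one standard normal draw per path
              s_t = s0 * math.exp(drift + diffusion * z)  # simulated terminal asset price
              payoff_sum += max(s_t - strike, 0.0)        # call payoff
          return math.exp(-rate * maturity) * payoff_sum / n_paths  # discounted mean

      print(round(mc_call_price(s0=100, strike=105, rate=0.02, vol=0.2, maturity=1.0), 2))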

    3. Effects of local information on group behavior

      SciTech Connect (OSTI)

      Roychowdhury, S.; Arora, N.; Sen, S.

      1996-12-31

      Researchers in the field of Distributed Artificial Intelligence have studied the effects of local decision-making on overall system performance in both cooperative and self-interested agent groups. The performance of individual agents depends critically on the quality of information available to them about local and global goals and resources. Whereas in general it is assumed that the more accurate and up-to-date the available information, the better the expected performance of the individual and the group, this conclusion can be challenged in a number of scenarios.
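
      A toy simulation can make the abstract's closing point concrete: when every agent reacts to the same accurate, up-to-date load report, the group can herd onto one resource and perform worse than agents acting with no shared information at all. The two-server scenario and the waiting-cost measure below are illustrative assumptions, not the scenarios studied in the paper.

      import random

      # Toy illustration (assumed scenario): agents choose between two servers.
      # With a shared, accurate load report everyone picks the same server and
      # herds; without shared information the random choices stay balanced.
      def simulate(n_agents=100, rounds=50, informed=True, seed=1):
          rng = random.Random(seed)
          loads = [n_agents // 2, n_agents - n_agents // 2]
          total_wait = 0
          for _ in range(rounds):
              if informed:
                  # Everyone reacts to the same accurate report from last round.
                  target = 0 if loads[0] <= loads[1] else 1
                  choices = [target] * n_agents
              else:
                  # No shared information: agents pick independently at random.
                  choices = [rng.randrange(2) for _ in range(n_agents)]
              loads = [choices.count(0), choices.count(1)]
              total_wait += max(loads)   # waiting cost grows with the busier server
          return total_wait / rounds

      print("shared global info :", simulate(informed=True))    # ~100 (herding)
      print("no shared info     :", simulate(informed=False))   # ~55-60 (balanced)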

    4. HASQARD Focus Group

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      18, 2010 The meeting was called to order by Don Hart, Focus Group Chairman, at 2:00 PM on February 18, 2010 in Conference Room 199 at 2430 Stevens. Those attending were: Lynn Albin, Taffy Almeida, Heather Anastos, Glen Clark, Doug Duvon, Kathi Dunbar, Robert Elkins, Cindy English, Kris Kuhl-Klinger, Joan Kessner, Larry Markel, Huei Meznarich, Karl Pool, Steve Smith, Noe'l Smith-Jackson, Andrew Stevens, Chris Sutton, Chris Thompson, Wendy Thompson, Rich Weis, and Cliff Watkins. I. Because new

    5. HASQARD Focus Group

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      0 The meeting was called to order by Dave Crawford, Focus Group Chairman at 2:10 PM on December 13, 2010 in Conference Room 199 at 2430 Stevens. Those attending were: Dave Crawford (Chair), Cliff Watkins (Secretary), Jeff Cheadle, Glen Clark, Robert Elkins, Scot Fitzgerald, Kris Kuhl-Klinger, Larry Markel, Huei Meznarich, Noe'l Smith-Jackson, Dave Shea, Chris Sutton, Cindy Taylor, Chris Thompson, Rich Weiss, Eric Wyse. I. Dave Crawford requested approval of the minutes from the November 16

    6. HASQARD Focus Group

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      16, 2011 The meeting was called to order by Dave Crawford, HASQARD Focus Group Chairman at 2:07 PM on August 16, 2011 in Conference Room 208 at 2425 Stevens. Those attending were: Dave Crawford (Chair), Cliff Watkins (Secretary), Lynn Albin, Heather Anastos, Jeff Cheadle, Kathi Dunbar, Robert Elkins, Scot Fitzgerald, Jim Jewett, Kris Kuhl-Klinger, Joan Kessner, Larry Markel, Huei Meznarich, Noe'l Smith-Jackson, Cindy Taylor, Amanda Tuttle, Rich Weiss and Eric Wyse. I. Dave Crawford requested comments on the

    7. HASQARD Focus Group

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      4, 2011 The meeting was called to order by Huei Meznarich, HASQARD Focus Group Chair at 2:04 PM on October 4, 2011 in Conference Room 208 at 2425 Stevens. Those attending were: Huei Meznarich (Chair), Cliff Watkins (Secretary), Lynn Albin, Heather Anastos, Jeff Cheadle, Glen Clark, Scot Fitzgerald, Shannan Johnson, Kris Kuhl-Klinger, Joan Kessner, Larry Markel, Karl Pool, Noe'l Smith-Jackson, Dave Shea, Cindy Taylor, Amanda Tuttle, Mary Ryan, Rich Weiss and Eric Wyse. I. Huei Meznarich requested

    8. HASQARD Focus Group

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      1 The meeting was called to order by Huei Meznarich, HASQARD Focus Group Chair at 2:04 PM on December 13, 2011 in Conference Room 126 at 2420 Stevens. Those attending were: Huei Meznarich (Chair), Cliff Watkins (Secretary), Lynn Albin, Heather Anastos, Jeff Cheadle, Glen Clark, Scot Fitzgerald, Shannan Johnson, Kris Kuhl-Klinger, Joan Kessner, Karl Pool, Dave St. John, Noe'l Smith-Jackson, Chris Sutton, Cindy Taylor, Amanda Tuttle, Rich Weiss and Eric Wyse. I. Huei Meznarich requested comments

    9. HASQARD Focus Group

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      7, 2012 The meeting was called to order by Huei Meznarich, HASQARD Focus Group Chair at 2:06 PM on April 17, 2012 in Conference Room 308 at 2420 Stevens. Those attending were: Huei Meznarich (Chair), Cliff Watkins (Secretary), Lynn Albin, Taffy Almeida, Jeff Cheadle, Glen Clark, Scot Fitzgerald, Kris Kuhl-Klinger, Joan Kessner, Larry Markel, Noe'l Smith-Jackson, Cindy Taylor, Amanda Tuttle, Rich Weiss and Eric Wyse. I. Huei Meznarich requested comments on the minutes from the March 20, 2012

    10. TEC Working Group Topic Groups Rail Conference Call Summaries | Department

      Office of Environmental Management (EM)

      of Energy Rail Conference Call Summaries TEC Working Group Topic Groups Rail Conference Call Summaries CONFERENCE CALL SUMMARIES Rail Topic Group Inspections Subgroup Planning Subgroup Tracking Subgroup TRAGIS Subgroup Radiation Monitoring Subgroup Intermodal Subgroup

    11. Distributed resource management: garbage collection

      SciTech Connect (OSTI)

      Bagherzadeh, N.

      1987-01-01

      In recent years, there has been a great interest in designing high-performance distributed symbolic-processing computers. These architectures have special needs for resource management and dynamic reclamation of unused memory cells and objects. The memory management or garbage-collection aspects of these architectures are studied. Also introduced is a synchronous distributed algorithm for garbage collection. A special data structure is defined to handle the distributed nature of the problem. The author formally expresses the algorithm and shows the results of a synchronous garbage-collection simulation and its effect on interconnection-network message traffic. He presents an asynchronous distributed garbage-collection algorithm to handle resource management for a system that does not require a global synchronization mechanism. The distributed data structure is modified to include the asynchronous aspects of the algorithm. This method is extended to a multiple-mutator scheme, and the problem of having several processors share a portion of a cyclic graph is discussed. Two models for the analytical study of the garbage-collection algorithms discussed are provided.
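
      The sketch below illustrates the general flavor of a synchronous distributed mark phase, in which nodes mark their local objects in rounds and exchange the remote references discovered in each round until no node has new work. The heap layout, the "node:object" reference encoding, and the round structure are assumptions for illustration; they are not the dissertation's data structure or algorithm.

      # Assumed toy heap: node -> {object: [references]}, where "node1:x" marks
      # a remote reference owned by another node.
      HEAP = {
          "node0": {"root": ["a", "node1:x"], "a": [], "dead0": []},
          "node1": {"x": ["node0:a", "y"], "y": [], "dead1": ["dead1"]},
      }
      ROOTS = {"node0": ["root"], "node1": []}

      def mark_round(marked, pending):
          """One synchronous round: mark local work, collect newly found refs."""
          outgoing = {node: [] for node in HEAP}
          for node, work in pending.items():
              for obj in work:
                  if obj in marked[node]:
                      continue
                  marked[node].add(obj)
                  for ref in HEAP[node][obj]:
                      if ":" in ref:                     # remote reference
                          owner, name = ref.split(":")
                          outgoing[owner].append(name)
                      else:                              # local reference
                          outgoing[node].append(ref)
          return outgoing

      marked = {node: set() for node in HEAP}
      pending = {node: list(roots) for node, roots in ROOTS.items()}
      while any(pending.values()):                       # stop when no new marks anywhere
          pending = mark_round(marked, pending)

      garbage = {node: set(HEAP[node]) - marked[node] for node in HEAP}
      print(garbage)    # {'node0': {'dead0'}, 'node1': {'dead1'}}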

    12. TEC Working Group Topic Groups Archives Communications Conference Call

      Office of Environmental Management (EM)

      Summaries | Department of Energy Communications Conference Call Summaries TEC Working Group Topic Groups Archives Communications Conference Call Summaries Conference Call Summaries Conference Call Summary April 2000 (91.86 KB) Conference Call Summary February 1999 (11.81 KB) Conference Call Summary November 1998 (54.77 KB) More Documents & Publications TEC Working Group Topic Groups Archives Communications Meeting Summaries TEC Working Group Topic Groups Tribal Conference Call Summaries

    13. TEC Working Group Topic Groups Archives Protocols Meeting Summaries |

      Office of Environmental Management (EM)

      Department of Energy Protocols Meeting Summaries TEC Working Group Topic Groups Archives Protocols Meeting Summaries Meeting Summaries Philadelphia TEC Meeting, Protocols Topic Group Summary - July 1999 (110.63 KB) Jacksonville TEC Meeting, Protocols Topic Group Summary - January 1999 (102.04 KB) More Documents & Publications TEC Working Group Topic Groups Archives Protocols Conference Call Summaries TEC Meeting Summaries - July 1997 TEC Meeting Summaries - January 1997

    14. TEC Working Group Topic Groups Rail Archived Documents | Department of

      Office of Environmental Management (EM)

      Energy Archived Documents TEC Working Group Topic Groups Rail Archived Documents ARCHIVED DOCUMENTS Inspections Summary Matrix (49.36 KB) TEC Transportation Safety WIPP-PIG Rail Comparison (130.46 KB) Regulatory Summary Matrix (62.08 KB) More Documents & Publications TEC Working Group Topic Groups Rail Key Documents TEC Working Group Topic Groups Rail Meeting Summaries TEC Meeting Summaries - September 2005 Presentations

    15. TEC Working Group Topic Groups Security Key Documents | Department of

      Office of Environmental Management (EM)

      Energy Key Documents TEC Working Group Topic Groups Security Key Documents Key Documents Security TG Work Plan August 7, 2006 (24.31 KB) Security Lessons Learned Document August 2, 2006 (40.77 KB) Security Module (635.1 KB) STG Terms and Definitions from DOE 470.4 (18.54 KB) More Documents & Publications TEC Working Group Topic Groups Security Meeting Summaries TEC Meeting Summaries - April 2005 Presentations TEC Working Group Topic Groups Security Conference Call Summaries

    16. TEC Working Group Topic Groups Security Meeting Summaries | Department of

      Office of Environmental Management (EM)

      Energy Meeting Summaries TEC Working Group Topic Groups Security Meeting Summaries Meeting Summaries Green Bay STG Meeting Summary- September 14, 2006 (28.22 KB) Washington STG Meeting Summary - March 14, 2006 (25.61 KB) Pueblo STG Meeting Summary - September 22, 2005 (18.7 KB) More Documents & Publications TEC Working Group Topic Groups Security Conference Call Summaries TEC Meeting Summaries - September 2006 TEC Working Group Topic Groups Security Key Documents

    17. Traffic information computing platform for big data

      SciTech Connect (OSTI)

      Duan, Zongtao Li, Ying Zheng, Xibin Liu, Yan Dai, Jiting Kang, Jun

      2014-10-06

      A big data environment creates the data conditions needed to improve the quality of traffic information services. The goal of this article is to construct a traffic information computing platform for the big data environment. Through an in-depth analysis of the characteristics of big data and of traffic information services, a distributed traffic atomic information computing platform architecture is proposed. In the big data environment, this type of traffic atomic information computing architecture helps guarantee safe and efficient traffic operation, and enables more intelligent and personalized traffic information services for users.
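
      A minimal sketch of the "atomic information" idea described above: each observation is a small self-contained record, records are sharded across workers by road segment, and each worker derives a per-segment summary. The field layout, the hash-based sharding, and the mean-speed aggregation are assumptions for illustration, not the platform's actual design.

      from collections import defaultdict
      from statistics import mean

      records = [   # assumed atomic records: (segment_id, timestamp, speed_kmh, source)
          ("S1", 1000, 42.0, "gps"),
          ("S1", 1005, 38.5, "loop"),
          ("S2", 1002, 80.0, "gps"),
          ("S2", 1010, 76.0, "gps"),
      ]

      def partition(recs, n_workers):
          """Shard atomic records across workers by segment id (a toy 'distributed' step)."""
          shards = defaultdict(list)
          for rec in recs:
              shards[hash(rec[0]) % n_workers].append(rec)
          return shards

      def aggregate(shard):
          """Each worker derives a per-segment mean speed from its atomic records."""
          by_segment = defaultdict(list)
          for seg, _, speed, _ in shard:
              by_segment[seg].append(speed)
          return {seg: round(mean(v), 1) for seg, v in by_segment.items()}

      merged = {}
      for shard in partition(records, n_workers=2).values():
          merged.update(aggregate(shard))
      print(merged)    # e.g. {'S1': 40.2, 'S2': 78.0}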

    18. TEC Working Group Topic Groups Rail Key Documents | Department...

      Office of Environmental Management (EM)

      Rail Key Documents TEC Working Group Topic Groups Rail Key Documents KEY DOCUMENTS Radiation Monitoring Subgroup Intermodal Subgroup Planning Subgroup Current FRA State Rail Safety ...

    19. TEC Working Group Topic Groups Rail Conference Call Summaries...

      Office of Environmental Management (EM)

      Summaries Inspections Subgroup TEC Working Group Topic Groups Rail Conference Call Summaries Inspections Subgroup Inspections Subgroup PDF icon April 6, 2006 PDF icon February 23,...

    20. TEC Working Group Topic Groups Archives Mechanics of Funding...

      Office of Environmental Management (EM)

      Mechanics of Funding and Technical Assistance TEC Working Group Topic Groups Archives Mechanics of Funding and Technical Assistance Mechanics of Funding and Technical Assistance Items...

    1. TEC Working Group Topic Groups Tribal Conference Call Summaries...

      Office of Environmental Management (EM)

      Conference Call Summaries TEC Working Group Topic Groups Tribal Conference Call Summaries Conference Call Summaries PDF icon March 12, 2008 PDF icon October 3, 2007 PDF icon...

    2. TEC Working Group Topic Groups Archives Communications Conference...

      Office of Environmental Management (EM)

      Communications Conference Call Summaries TEC Working Group Topic Groups Archives Communications Conference Call Summaries Conference Call Summaries PDF icon Conference Call Summary...

    3. TEC Working Group Topic Groups Archives Communications Meeting...

      Office of Environmental Management (EM)

      Archives Communications Meeting Summaries TEC Working Group Topic Groups Archives Communications Meeting Summaries Meeting Summaries PDF icon Milwaukee TEC Meeting, Communications...

    4. TEC Working Group Topic Groups Section 180(c) Key Documents ...

      Office of Environmental Management (EM)

      Key Documents TEC Working Group Topic Groups Section 180(c) Key Documents Key Documents Briefing Package for Section 180(c) Implementation - July 2005 PDF icon Executive Summary...

    5. TEC Working Group Topic Groups Rail Key Documents Radiation Monitoring...

      Office of Environmental Management (EM)

      Radiation Monitoring Subgroup TEC Working Group Topic Groups Rail Key Documents Radiation Monitoring Subgroup Radiation Monitoring Subgroup PDF icon Draft Work Plan - February 4,...

    6. September 2012, HSS Focus Group Strategic Initiatives Work Group...

      Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

      Strategic Initiatives Work Group Status Overview Accomplishments: 1. June 26. Telecom with ... reporting improvements are planned for the next Strategic Initiatives Work Group meeting. ...

    7. CORRELATION BETWEEN GROUP LOCAL DENSITY AND GROUP LUMINOSITY

      SciTech Connect (OSTI)

      Deng Xinfa; Yu Guisheng

      2012-11-10

      In this study, we investigate the correlation between group local number density and the total luminosity of groups. From four volume-limited group catalogs, we conclude that groups with high luminosity exist preferentially in high-density regions, while groups with low luminosity are located preferentially in low-density regions, and that in a volume-limited group sample with absolute magnitude limit M_r = -18, the correlation between group local number density and total luminosity of groups is the weakest. These results are basically consistent with the environmental dependence of galaxy luminosity.
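
      For readers who want to reproduce the flavor of the analysis, the sketch below estimates a rank correlation between local number density and total luminosity on mock data. The lognormal mock sample and the strength of the imposed trend are assumptions for illustration only; they are not the paper's volume-limited catalogs.

      import random

      def spearman(x, y):
          """Spearman rank correlation via Pearson correlation of the ranks (no ties expected)."""
          def ranks(v):
              order = sorted(range(len(v)), key=lambda i: v[i])
              r = [0.0] * len(v)
              for rank, i in enumerate(order):
                  r[i] = float(rank)
              return r
          rx, ry = ranks(x), ranks(y)
          mx, my = sum(rx) / len(rx), sum(ry) / len(ry)
          cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
          vx = sum((a - mx) ** 2 for a in rx) ** 0.5
          vy = sum((b - my) ** 2 for b in ry) ** 0.5
          return cov / (vx * vy)

      rng = random.Random(0)
      density = [rng.lognormvariate(0, 1) for _ in range(500)]
      # Mock luminosities that rise with density plus scatter, mimicking the
      # reported trend of brighter groups sitting in denser regions.
      luminosity = [d ** 0.5 * rng.lognormvariate(0, 0.5) for d in density]
      print(round(spearman(density, luminosity), 2))   # positive, roughly 0.6-0.7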

    8. Bio-Derived Liquids to Hydrogen Distributed Reforming Targets (Presentation)

      Broader source: Energy.gov [DOE]

      Presented at the 2007 Bio-Derived Liquids to Hydrogen Distributed Reforming Working Group held November 6, 2007 in Laurel, Maryland.

    9. Distributed Hydrogen Fueling Station Based on GEGR SCPO Technology (Presentation)

      Broader source: Energy.gov [DOE]

      Presented at the 2007 Bio-Derived Liquids to Hydrogen Distributed Reforming Working Group held November 6, 2007 in Laurel, Maryland.

    10. Bio-Derived Liquids to Hydrogen Distributed Reforming Working...

      Broader source: Energy.gov (indexed) [DOE]

      The Working Group is addressing technical challenges to distributed reforming of biomass-derived, renewable liquid fuels to hydrogen, including the reforming, water-gas shift, and ...

    11. Bio-Derived Liquids to Hydrogen Distributed Reforming Targets

      Broader source: Energy.gov [DOE]

      Presentation by Arlene Anderson at the October 24, 2006 Bio-Derived Liquids to Hydrogen Distributed Reforming Working Group Kick-Off Meeting.

    12. Thermal Hydraulic Computer Code System.

      Energy Science and Technology Software Center (OSTI)

      1999-07-16

      Version 00 RELAP5 was developed to describe the behavior of a light water reactor (LWR) subjected to postulated transients such as loss of coolant from large or small pipe breaks, pump failures, etc. RELAP5 calculates fluid conditions such as velocities, pressures, densities, qualities, temperatures; thermal conditions such as surface temperatures, temperature distributions, heat fluxes; pump conditions; trip conditions; reactor power and reactivity from point reactor kinetics; and control system variables. In addition to reactor applications, the program can be applied to transient analysis of other thermal-hydraulic systems with water as the fluid. This package contains RELAP5/MOD1/029 for CDC computers and RELAP5/MOD1/025 for VAX or IBM mainframe computers.
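
      The abstract notes that reactor power and reactivity come from point reactor kinetics. As a hedged illustration of that underlying model (not RELAP5 itself), the sketch below integrates the one-delayed-group point kinetics equations with explicit Euler steps; the constants are generic textbook-style values, not RELAP5 inputs.

      # One-delayed-group point kinetics (assumed constants, illustrative only):
      #   dn/dt = ((rho - beta) / Lambda) * n + lambda * c
      #   dc/dt = (beta / Lambda) * n - lambda * c
      def point_kinetics(rho, beta=0.0065, lam=0.08, gen_time=1e-4,
                         dt=1e-5, t_end=0.5):
          n = 1.0                          # normalized neutron population (power)
          c = beta * n / (lam * gen_time)  # equilibrium precursor concentration
          for _ in range(int(t_end / dt)):
              dn = ((rho - beta) / gen_time) * n + lam * c
              dc = (beta / gen_time) * n - lam * c
              n += dt * dn
              c += dt * dc
          return n

      # A +0.1$ reactivity step: prompt jump of roughly 10% in power, then slow
      # delayed growth; prints roughly 1.12 after half a second.
      print(round(point_kinetics(rho=0.1 * 0.0065), 3))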

    13. Fall 2012 Working Groups

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      2012 CSTEC Working Group Schedule. Thrust I - selected Thursdays; MSE Conference Room (3062 HH Dow): October 11, Dylan Bayerl (Kioupakis group), 3:00-4:00pm; November 1, Andy Martin (Millunchick group), 2:00-3:00pm; December 13, Brian Roberts (Ku group), 2:00-3:00pm. Thrust II - selected Thursdays, 3:30-4:30pm; MSE Conference Room (3062 HH Dow): September 27, Hang Chi (Uher group); October 18, Reddy group; November 29, Gunho Kim (Pipe group). Thrust III - selected

    14. Working Group Report: Sensors

      SciTech Connect (OSTI)

      Artuso, M.; et al.,

      2013-10-18

      Sensors play a key role in detecting both charged particles and photons for all three frontiers in Particle Physics. The signals from an individual sensor that can be used include ionization deposited, phonons created, or light emitted from excitations of the material. The individual sensors are then typically arrayed for detection of individual particles or groups of particles. Mounting new, ever higher performance experiments often depends on advances in sensors across a range of performance characteristics. These performance metrics can include position resolution for passing particles, time resolution on particles impacting the sensor, and overall rate capabilities. In addition, the feasible detector area and cost frequently provide a limit to what can be built and therefore are often another area where improvements are important. Finally, radiation tolerance is becoming a requirement in a broad array of devices. We present a status report on a broad category of sensors, including challenges for the future and work in progress to solve those challenges.

    15. Former NERSC Consultant Mentors Math, Computer Science Students

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Former NERSC Consultant Mentors Math, Computer Science Students Former NERSC Consultant Mentors Math, Computer Science Students March 10, 2015 Frank Hale, a former consultant in NERSC's User Services Group (USG) who currently tutors math at Diablo Valley College (DVC) in Pleasant Hill, CA, recently brought a group of computer science enthusiasts from the college to NERSC for a tour. Hale, the first person hired into the USG when NERSC relocated from Lawrence Livermore National Laboratory to

    16. Information Technology Advisory Group (iTAG) | The Ames Laboratory

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Committees Information Technology Advisory Group (iTAG) The Information Technology Advisory Group (iTAG) is a standing Ames Laboratory committee consisting of Ames Lab scientists and IT professionals working together to look at and advise the computing needs for researchers. iTAG Charter The committee consists of: Diane Den Adel (Information Services Representative) Terry Herrman (Engineering Services Group Representative) Linlin Wang (Science and Technology Representative) Cynthia Jenks

    17. Polymorphous computing fabric

      DOE Patents [OSTI]

      Wolinski, Christophe Czeslaw; Gokhale, Maya B.; McCabe, Kevin Peter

      2011-01-18

      Fabric-based computing systems and methods are disclosed. A fabric-based computing system can include a polymorphous computing fabric that can be customized on a per application basis and a host processor in communication with said polymorphous computing fabric. The polymorphous computing fabric includes a cellular architecture that can be highly parameterized to enable a customized synthesis of fabric instances for a variety of enhanced application performances thereof. A global memory concept can also be included that provides the host processor random access to all variables and instructions associated with the polymorphous computing fabric.

    18. The Magellan Final Report on Cloud Computing

      SciTech Connect (OSTI)

      Coghlan, Susan; Yelick, Katherine

      2011-12-21

      The goal of Magellan, a project funded through the U.S. Department of Energy (DOE) Office of Advanced Scientific Computing Research (ASCR), was to investigate the potential role of cloud computing in addressing the computing needs for the DOE Office of Science (SC), particularly related to serving the needs of mid- range computing and future data-intensive computing workloads. A set of research questions was formed to probe various aspects of cloud computing from performance, usability, and cost. To address these questions, a distributed testbed infrastructure was deployed at the Argonne Leadership Computing Facility (ALCF) and the National Energy Research Scientific Computing Center (NERSC). The testbed was designed to be flexible and capable enough to explore a variety of computing models and hardware design points in order to understand the impact for various scientific applications. During the project, the testbed also served as a valuable resource to application scientists. Applications from a diverse set of projects such as MG-RAST (a metagenomics analysis server), the Joint Genome Institute, the STAR experiment at the Relativistic Heavy Ion Collider, and the Laser Interferometer Gravitational Wave Observatory (LIGO), were used by the Magellan project for benchmarking within the cloud, but the project teams were also able to accomplish important production science utilizing the Magellan cloud resources.

    19. Distribution Grid Integration

      Broader source: Energy.gov [DOE]

      The DOE Systems Integration team funds distribution grid integration research and development (R&D) activities to address the technical issues that surround distribution grid planning,...

    20. Annual Coal Distribution Report

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Annual Coal Distribution Report Release Date: April 16, 2015 | Next Release Date: March 2016 | full report | RevisionCorrection Revision to the Annual Coal Distribution Report ...

    1. About Industrial Distributed Energy

      Broader source: Energy.gov [DOE]

      The Advanced Manufacturing Office's (AMO's) Industrial Distributed Energy activities build on the success of predecessor DOE programs on distributed energy and combined heat and power (CHP) while...

    2. Coal Distribution Database, 2006

      U.S. Energy Information Administration (EIA) Indexed Site

      Domestic Distribution of U.S. Coal by Origin State, Consumer, Destination and Method of Transportation, 2009 Final February 2011 2 Overview of 2009 Coal Distribution Tables...

    3. Tecate Group | Open Energy Information

      Open Energy Info (EERE)

      Tecate Group Jump to: navigation, search Name: Tecate Group Place: San Diego, California Zip: 92108-4400 Product: The Tecate Group is a global supplier of electronic components and...

    4. USJ Group | Open Energy Information

      Open Energy Info (EERE)

      USJ Group Jump to: navigation, search Name: USJ Group Place: São Paulo, Sao Paulo, Brazil Zip: 04534 000 Product: Sao Paulo based ethanol producer. References: USJ Group1 This...

    5. Rowan Group | Open Energy Information

      Open Energy Info (EERE)

      Rowan Group Place: United Kingdom Product: ( Private family-controlled ) References: Rowan Group1 This article is a stub. You can help OpenEI by expanding it. Rowan Group is a...

    6. ERIC Group | Open Energy Information

      Open Energy Info (EERE)

      ERIC Group Jump to: navigation, search Name: ERIC Group Place: Italy Product: Italian project developer of PV power plants. References: ERIC Group1 This article is a stub. You...

    7. Westly Group | Open Energy Information

      Open Energy Info (EERE)

      Westly Group Jump to: navigation, search Name: Westly Group Place: Menlo Park, California Zip: 94025 Product: Clean technology-oriented venture capital firm. References: Westly...

    8. Enerbio Group | Open Energy Information

      Open Energy Info (EERE)

      Enerbio Group Jump to: navigation, search Name: Enerbio Group Place: Porto Alegre, Rio Grande do Sul, Brazil Zip: 90480-003 Sector: Renewable Energy, Services Product: Brazilian...

    9. Jinglong Group | Open Energy Information

      Open Energy Info (EERE)

      Jinglong Group Jump to: navigation, search Name: Jinglong Group Place: Ningjin, Hebei Province, China Product: Chinese manufacturer and supplier of monocrystalline silicon and...

    10. Verdeo Group | Open Energy Information

      Open Energy Info (EERE)

      Verdeo Group Jump to: navigation, search Name: Verdeo Group Place: Washington, DC Zip: 20006 Sector: Carbon Product: Washington based integrated carbon solutions company....

    11. Bazan Group | Open Energy Information

      Open Energy Info (EERE)

      Bazan Group Jump to: navigation, search Name: Bazan Group Place: Pontal, Brazil Zip: 14180-000 Product: Bioethanol production company Coordinates: -21.023149, -48.037099 Show...

    12. Delaney Group | Open Energy Information

      Open Energy Info (EERE)

      Delaney Group Jump to: navigation, search Name: Delaney Group Place: Gloversville, New York Zip: 12078 Sector: Services, Wind energy Product: Services company focused on...

    13. Ramky Group | Open Energy Information

      Open Energy Info (EERE)

      Ramky Group Jump to: navigation, search Name: Ramky Group Place: Andhra Pradesh, India Zip: 500082 Product: Focussed on construction, infrastructure development and waste...

    14. Samaras Group | Open Energy Information

      Open Energy Info (EERE)

      Samaras Group Jump to: navigation, search Name: Samaras Group Place: Greece Sector: Renewable Energy, Services Product: Greek consultancy services provider with specialization in...

    15. Altira Group | Open Energy Information

      Open Energy Info (EERE)

      Altira Group Jump to: navigation, search Name: Altira Group Address: 1675 Broadway, Suite 2400 Place: Denver, Colorado Zip: 80202 Region: Rockies Area Product: Venture Capital...

    16. Sunvim Group | Open Energy Information

      Open Energy Info (EERE)

      Group Jump to: navigation, search Name: Sunvim Group Place: Gaomi, Shandong Province, China Zip: 261500 Product: Sunvim, a Chinese home textile maker, is also engaged in the...

    17. Balta Group | Open Energy Information

      Open Energy Info (EERE)

      Balta Group Jump to: navigation, search Name: Balta Group Place: Sint Baafs Vijve, Belgium Zip: 8710 Product: Belgium-based manufacturer of broadloom carpets, rugs and laminate...

    18. Noribachi Group | Open Energy Information

      Open Energy Info (EERE)

      Noribachi Group Jump to: navigation, search Name: Noribachi Group Place: Albuquerque, New Mexico Zip: 87104 Product: New Mexico-based private equity firm focused on investing in...

    19. Lucas Group | Open Energy Information

      Open Energy Info (EERE)

      Group Jump to: navigation, search Name: Lucas Group Place: Chicago, Illinois Sector: Services Product: Renewable Energy Recruiters Year Founded: 1970 Coordinates: 41.850033,...

    20. Humus Group | Open Energy Information

      Open Energy Info (EERE)

      search Name: Humus Group Place: Brazil Product: Stakeholder in the Vertente ethanol mill in Brazil. References: Humus Group1 This article is a stub. You can help...

    1. Bumlai Group | Open Energy Information

      Open Energy Info (EERE)

      Jump to: navigation, search Name: Bumlai Group Place: Brazil Product: Investor in ethanol plant São Fernando Açúcar e Álcool. References: Bumlai Group1 This...

    2. Paro group | Open Energy Information

      Open Energy Info (EERE)

      Paro group Jump to: navigation, search Name: Paro group Place: Brazil Product: Ethanol producer that plans to jointly own an ethanol plant in Minas Gerais. References: Paro...

    3. Reservoir Modeling Working Group Meeting

      Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

      Reservoir Modeling Working Group Meeting 2012 GEOTHERMAL TECHNOLOGIES PROGRAM PEER REVIEW ... History Past Meetings: March 2010 IPGT Modeling Working Group Meeting May 2010 GTP Peer ...

    4. Mouratoglou Group | Open Energy Information

      Open Energy Info (EERE)

      Mouratoglou Group Jump to: navigation, search Name: Mouratoglou Group Place: France Sector: Renewable Energy Product: Investment parent-company of EDF Energies Nouvelles, involved...

    5. Poyry Group | Open Energy Information

      Open Energy Info (EERE)

      Poyry Group Jump to: navigation, search Name: Poyry Group Place: Vantaa, Finland Zip: 1621 Product: Vantaa-based consulting and engineering firm, specialising in issues regarding...

    6. Richway Group | Open Energy Information

      Open Energy Info (EERE)

      by expanding it. Richway Group is a company based in Richmond, British Columbia. FROM WASTE TO ENERGY, YOUR WISE CHOICE Vision and Objectives Richway Group (Richway) is located...

    7. Copisa Group | Open Energy Information

      Open Energy Info (EERE)

      Copisa Group Jump to: navigation, search Name: Copisa Group Place: Barcelona, Spain Zip: 8029 Product: Barcelona-based, construction company. Copisa is involved in building three...

    8. Emte Group | Open Energy Information

      Open Energy Info (EERE)

      Group Jump to: navigation, search Name: Emte Group Place: Spain Sector: Renewable Energy, Services Product: EMTE is the ben ... ctor companies....

    9. Schaffner Group | Open Energy Information

      Open Energy Info (EERE)

      Schaffner Group Jump to: navigation, search Name: Schaffner Group Place: Switzerland Zip: 4542 Product: Switzerland-based company supplier of components that support the efficient...

    10. Schulthess Group | Open Energy Information

      Open Energy Info (EERE)

      Group Jump to: navigation, search Name: Schulthess Group Place: Wolfhausen, Switzerland Zip: CH-8633 Product: A company with activities in regenerative energy production,...

    11. TRITEC Group | Open Energy Information

      Open Energy Info (EERE)

      TRITEC Group Jump to: navigation, search Name: TRITEC Group Place: Basel, Switzerland Zip: CH-4123 Product: Basel-based installer and distributor for PV products. Coordinates:...

    12. Swatch Group | Open Energy Information

      Open Energy Info (EERE)

      Swatch Group Jump to: navigation, search Name: Swatch Group Place: Switzerland Product: The Swatch Grou ... ther industries. References: Swatch...

    13. Anel Group | Open Energy Information

      Open Energy Info (EERE)

      Anel Group Jump to: navigation, search Name: Anel Group Place: ISTANBUL, Turkey Zip: 34768 Sector: Solar, Wind energy Product: Istanbul-based technological and engineering...

    14. Aksa Group | Open Energy Information

      Open Energy Info (EERE)

      Aksa Group Jump to: navigation, search Name: Aksa Group Place: Istanbul, Turkey Zip: 34212 Sector: Wind energy Product: Turkey-based international company recently involved in the...

    15. Daesung Group | Open Energy Information

      Open Energy Info (EERE)

      Daesung Group Place: Jongno-Gu Seoul, Korea (Republic) Zip: 110-300 Sector: Hydro, Hydrogen Product: Daesung Group, a Korea-based energy provider and electric machinery...

    16. Electrocell Group | Open Energy Information

      Open Energy Info (EERE)

      Group Jump to: navigation, search Name: Electrocell Group Place: Sao Paolo, Brazil Zip: 05508-000 Product: Producer of fuel cells, accessories and controls. The company...

    17. Pohlen Group | Open Energy Information

      Open Energy Info (EERE)

      Pohlen Group Jump to: navigation, search Name: Pohlen Group Place: Geilenkirchen, Germany Product: Specialises in roof engineering, including installing and maintaining PV systems...

    18. Vaillant Group | Open Energy Information

      Open Energy Info (EERE)

      Group Jump to: navigation, search Name: Vaillant Group Place: Remscheid, Germany Zip: 42859 Product: For nearly 130 years Vaillant has been at the forefront of heating technology....

    19. Ostwind Group | Open Energy Information

      Open Energy Info (EERE)

      Ostwind Group Jump to: navigation, search Name: Ostwind Group Place: Regensburg, Germany Zip: D-93047 Sector: Biomass, Hydro, Wind energy Product: Develops wind projects, and also...

    20. Shenergy Group | Open Energy Information

      Open Energy Info (EERE)

      Shenergy Group Place: Shanghai Municipality, China Product: Gas and power project investor and developer based in Shanghai. References: Shenergy Group1 This article is a stub....

    1. GEA Group | Open Energy Information

      Open Energy Info (EERE)

      Jump to: navigation, search Name: GEA Group Place: Bochum, Germany Zip: 44809 Sector: Biofuels, Solar Product: Bochum-based, engineering group specialising in process engineering...

    2. Ralos Group | Open Energy Information

      Open Energy Info (EERE)

      Ralos Group Jump to: navigation, search Name: Ralos Group Place: Michelstadt, Germany Zip: D-64720 Sector: Solar Product: Germany-based solar project developer that specialises in...

    3. Enovos Group | Open Energy Information

      Open Energy Info (EERE)

      Enovos Group Jump to: navigation, search Name: Enovos Group Place: Germany Sector: Solar Product: Germany-based utility. The utility has interests in solar energy. References:...

    4. Rioglass Group | Open Energy Information

      Open Energy Info (EERE)

      Group Jump to: navigation, search Name: Rioglass Group Place: Spain Product: A Spanish glass company supplying the automotive sector, who has recently announced to launch...

    5. Training Work Group | Department of Energy

      Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

      Outreach Forums Focus Group and Work Group Activities Focus Group Training Work Group 10 CFR 851 Implementation Work Group Workforce Retention Work Group Strategic Initiatives Work ...

    6. TEC Working Group Topic Groups Archives Protocols | Department of Energy

      Office of Environmental Management (EM)

      Protocols TEC Working Group Topic Groups Archives Protocols The Transportation Protocols Topic Group serves as an important vehicle for DOE senior managers to assess and incorporate stakeholder input into the protocols process. The Topic Group was formed to review a series of transportation protocols developed in response to a request for DOE to be more consistent in its approach to transportation.

    7. Agenda for the Derived Liquids to Hydrogen Distributed Reforming Working

      Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

      Group (BILIWG) Hydrogen Production Technical Team Research Review | Department of Energy Derived Liquids to Hydrogen Distributed Reforming Working Group (BILIWG) Hydrogen Production Technical Team Research Review Agenda for the Derived Liquids to Hydrogen Distributed Reforming Working Group (BILIWG) Hydrogen Production Technical Team Research Review This is the agenda for the working group sessions held in Laurel, Maryland on November 6, 2007. biliwg_agenda.pdf (145.59 KB) More Documents

    8. TEC Working Group Topic Groups Tribal | Department of Energy

      Office of Environmental Management (EM)

      Energy Meeting Summaries TEC Working Group Topic Groups Tribal Meeting Summaries Meeting Summaries Kansas City TEC Meeting - Tribal Group Summary - July 25, 2007 (29.33 KB) Atlanta TEC Meeting - Tribal Group Summary - March 6, 2007 (27.82 KB) Green Bay TEC Meeting -- Tribal Group Summary - October 26, 2006 (31.56 KB) Washington TEC Meeting - Tribal Topic Group Summary - March 14, 2006 (39.76 KB) Pueblo TEC Meeting - Tribal Topic Group Summary, September 22, 2005 (40.34 KB) Phoenix TEC

    9. TEC Working Group Topic Groups Tribal Meeting Summaries | Department of

      Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

      Energy Tribal Meeting Summaries TEC Working Group Topic Groups Tribal Meeting Summaries Meeting Summaries Kansas City TEC Meeting - Tribal Group Summary - July 25, 2007 (29.33 KB) Atlanta TEC Meeting - Tribal Group Summary - March 6, 2007 (27.82 KB) Green Bay TEC Meeting -- Tribal Group Summary - October 26, 2006 (31.56 KB) Washington TEC Meeting - Tribal Topic Group Summary - March 14, 2006 (39.76 KB) Pueblo TEC Meeting - Tribal Topic Group Summary, September 22, 2005 (40.34 KB) Phoenix TEC

    10. Cognitive Computing for Security.

      SciTech Connect (OSTI)

      Debenedictis, Erik; Rothganger, Fredrick; Aimone, James Bradley; Marinella, Matthew; Evans, Brian Robert; Warrender, Christina E.; Mickel, Patrick

      2015-12-01

      Final report for Cognitive Computing for Security LDRD 165613. It reports on the development of a hybrid general-purpose/neuromorphic computer architecture, with an emphasis on potential implementation with memristors.

    11. Computers in Commercial Buildings

      U.S. Energy Information Administration (EIA) Indexed Site

      Government-owned buildings of all types had, on average, more than one computer per person (1,104 computers per thousand employees). They also had a fairly high ratio of...

    12. Computers for Learning

      Broader source: Energy.gov [DOE]

      Through Executive Order 12999, the Computers for Learning Program was established to provide Federal agencies a quick and easy system for donating excess and surplus computer equipment to schools...

    13. developing-compute-efficient

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Developing Compute-efficient, Quality Models with LS-PrePost 3 on the TRACC Cluster Oct. ... with an emphasis on applying these capabilities to build computationally efficient models. ...

    14. Computational Studies of Nucleosome Stability | Argonne Leadership

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Computing Facility nucleosome 1KX5 Image of the nucleosome 1KX5 from the Protein Data Bank (from X. Zhu, TACC). This DNA/protein complex will serve as the primary target of simulation studies to be performed by the Schatz group as part of the INCITE program. Computational Studies of Nucleosome Stability PI Name: George Schatz PI Email: schatz@chem.northwestern.edu Institution: Northwestern University Allocation Program: INCITE Allocation Hours at ALCF: 20 Million Year: 2013 Research Domain:

    15. Focus Group Training Work Group Meeting | Department of Energy

      Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

      Date: September 13, 2012 In conjunction with the HAMMER Steering Committee meeting the HSS Focus Group Training Working Group Meeting was conducted from 2:00 PM to 4:30 PM at the HAMMER Training Facility in Richland, WA. Documents Available for Download Meeting Agenda (43.92 KB) Meeting Summary (1.22 MB) More Documents & Publications Focus Group Training Work Group Meeting DOE Training Reciprocity Program Training Work Group Charter

    16. Fermilab | Science at Fermilab | Computing | Grid Computing

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      which would collect more data than any computing center in existence could process. ... consortium grid called Open Science Grid, so they initiated a project known as FermiGrid. ...

    17. Advanced Scientific Computing Research

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Advanced Scientific Computing Research Advanced Scientific Computing Research Discovering, developing, and deploying computational and networking capabilities to analyze, model, simulate, and predict complex phenomena important to the Department of Energy. Get Expertise Pieter Swart (505) 665 9437 Email Pat McCormick (505) 665-0201 Email Dave Higdon (505) 667-2091 Email Fulfilling the potential of emerging computing systems and architectures beyond today's tools and techniques to deliver

    18. Computational Structural Mechanics

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      TRACC RESEARCH Computational Fluid Dynamics Computational Structural Mechanics Transportation Systems Modeling Computational Structural Mechanics Overview of CSM Computational structural mechanics is a well-established methodology for the design and analysis of many components and structures found in the transportation field. Modern finite-element models (FEMs) play a major role in these evaluations, and sophisticated software, such as the commercially available LS-DYNA® code, is

    19. Computers-BSA.ppt

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Energy Computers, Electronics and Electrical Equipment (2010 MECS) Computers, Electronics and Electrical Equipment (2010 MECS) Manufacturing Energy and Carbon Footprint for Computers, Electronics and Electrical Equipment Sector (NAICS 334, 335) Energy use data source: 2010 EIA MECS (with adjustments) Footprint Last Revised: February 2014 View footprints for other sectors here. Manufacturing Energy and Carbon Footprint Computers, Electronics and Electrical Equipment (123.71 KB) More Documents

    20. NERSC Hosts 50 Enthusiastic Computer Science Students from Dougherty Valley

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      High Hosts 50 Enthusiastic Computer Science Students from Dougherty Valley High NERSC Hosts 50 Enthusiastic Computer Science Students from Dougherty Valley High May 31, 2016 A group of 50 enthusiastic computer science students from Dougherty Valley High School in San Ramon, CA visited NERSC May 26, where they toured the computer room and participated in lively discussions about the facility and how supercomputers work. They asked great questions, such as "In the future, will there be

    1. Distributed optimization system and method

      DOE Patents [OSTI]

      Hurtado, John E.; Dohrmann, Clark R.; Robinett, III, Rush D.

      2003-06-10

      A search system and method for controlling multiple agents to optimize an objective using distributed sensing and cooperative control. The search agents can be one or more physical agents, such as robots, or software agents for searching cyberspace. The objective can be: chemical sources, temperature sources, radiation sources, light sources, evaders, trespassers, explosive sources, time dependent sources, time independent sources, function surfaces, maximization points, minimization points, and optimal control of a system such as a communication system, an economy, a crane, or a multi-processor computer.
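      As a toy illustration of cooperative source seeking in the spirit of this abstract (it is not the patented control law), the sketch below has a handful of simulated agents sense a scalar field and drift toward the best reading found by the group; the field, gains, and noise levels are invented for the example.

      ```python
      # Toy cooperative source-seeking sketch (illustrative only, not the
      # patented method): agents sense a scalar field at their positions and
      # move partly toward the best reading found so far by the group.
      import random

      def field(x, y):                      # assumed objective: peak at (3, -2)
          return -((x - 3.0) ** 2 + (y + 2.0) ** 2)

      agents = [[random.uniform(-10, 10), random.uniform(-10, 10)] for _ in range(5)]
      best_pos, best_val = None, float("-inf")

      for step in range(200):
          for pos in agents:
              val = field(pos[0], pos[1])          # distributed sensing
              if val > best_val:                   # shared "best so far"
                  best_val, best_pos = val, list(pos)
          for pos in agents:                       # cooperative move plus exploration noise
              pos[0] += 0.2 * (best_pos[0] - pos[0]) + random.gauss(0, 0.1)
              pos[1] += 0.2 * (best_pos[1] - pos[1]) + random.gauss(0, 0.1)

      print("estimated source location:", [round(c, 2) for c in best_pos])
      ```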

    2. Computing and Computational Sciences Directorate - Information Technology

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Computational Sciences and Engineering The Computational Sciences and Engineering Division (CSED) is ORNL's premier source of basic and applied research in the field of data sciences and knowledge discovery. CSED's science agenda is focused on research and development related to knowledge discovery enabled by the explosive growth in the availability, size, and variability of dynamic and disparate data sources. This science agenda encompasses data sciences as well as advanced modeling and

    3. Computing and Computational Sciences Directorate - Information Technology

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Information Technology Information Technology (IT) at ORNL serves a diverse community of stakeholders and interests. From everyday operations like email and telecommunications to institutional cluster computing and high bandwidth networking, IT at ORNL is responsible for planning and executing a coordinated strategy that ensures cost-effective, state-of-the-art computing capabilities for research and development. ORNL IT delivers leading-edge products to users in a risk-managed portfolio of

    4. Buildings Working Group Meeting AEO2016 Preliminary Results

      U.S. Energy Information Administration (EIA) Indexed Site

      Buildings Working Group Meeting Office of Energy Consumption and Efficiency Analysis February 18, 2016 | Washington, DC By Buildings Energy Analysis Team AEO2016 Preliminary Results Discussion purposes only - do not cite or circulate Overview * Key policies - Clean Power Plan - Federal standards and ENERGY STAR specifications * Sector drivers - Fuel prices - Weather - Commercial floorspace * Distributed generation * Residential and commercial consumption AEO2016 Buildings Working Group,

    5. Mathematical and Computational Epidemiology

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Mathematical and Computational Epidemiology (MCEpi), Los Alamos National Laboratory. Research areas: agent-based modeling; mixing patterns and social networks; mathematical epidemiology; social internet research; uncertainty quantification; publications; people. Quantifying model uncertainty in agent-based simulations for

    6. BNL ATLAS Grid Computing

      ScienceCinema (OSTI)

      Michael Ernst

      2010-01-08

      As the sole Tier-1 computing facility for ATLAS in the United States and the largest ATLAS computing center worldwide Brookhaven provides a large portion of the overall computing resources for U.S. collaborators and serves as the central hub for storing,

    7. Computing environment logbook

      DOE Patents [OSTI]

      Osbourn, Gordon C; Bouchard, Ann M

      2012-09-18

      A computing environment logbook logs events occurring within a computing environment. The events are displayed as a history of past events within the logbook of the computing environment. The logbook provides search functionality to search through the history of past events to find one or more selected past events, and further, enables an undo of the one or more selected past events.
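      A minimal sketch of the logbook behavior the abstract describes (logging events, searching the history, undoing selected past events) follows; the undo-callback mechanism and all names are assumptions, since the patent record does not specify an implementation.

      ```python
      # Hypothetical logbook sketch: record events, search history, undo
      # selected past events via caller-supplied undo callbacks.
      import datetime, os, tempfile

      class Logbook:
          def __init__(self):
              self.history = []

          def log(self, description, undo_action=None):
              self.history.append({"time": datetime.datetime.now(),
                                   "description": description,
                                   "undo": undo_action,
                                   "undone": False})

          def search(self, text):
              return [e for e in self.history if text.lower() in e["description"].lower()]

          def undo(self, event):
              if not event["undone"] and event["undo"] is not None:
                  event["undo"]()
                  event["undone"] = True

      # Usage: log a file creation whose undo removes the file again.
      book = Logbook()
      path = os.path.join(tempfile.gettempdir(), "logbook_demo.txt")
      open(path, "w").close()
      book.log(f"created {path}", undo_action=lambda: os.remove(path))
      for event in book.search("created"):
          book.undo(event)
      ```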

    8. Bio-Derived Liquid Distributed Reforming Outcomes Map | Department of

      Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

      Energy Liquid Distributed Reforming Outcomes Map Bio-Derived Liquid Distributed Reforming Outcomes Map This is a "pre-decisional draft of the Bio-Derived Liquid Distributed Reforming Outcomes Map. biliwg06_schlasner.pdf (36.88 KB) More Documents & Publications Agenda for the Derived Liquids to Hydrogen Distributed Reforming Working Group (BILIWG) Hydrogen Production Technical Team Research Review Distributed Reforming of Biomass Pyrolysis Oils (Presentation) Bio-Derived Liquids to

    9. Annual Coal Distribution

      Reports and Publications (EIA)

      2016-01-01

      The Annual Coal Distribution Report (ACDR) provides detailed information on domestic coal distribution by origin state, destination state, consumer category, and method of transportation. Also provided is a summary of foreign coal distribution by coal-producing state. All data for the report year are final and this report supersedes all data in the quarterly distribution reports.

    10. Annual Coal Distribution

      Reports and Publications (EIA)

      2015-01-01

      The Annual Coal Distribution Report (ACDR) provides detailed information on domestic coal distribution by origin state, destination state, consumer category, and method of transportation. Also provided is a summary of foreign coal distribution by coal-producing state. All data for the report year are final and this report supersedes all data in the quarterly distribution reports.

    11. Parallel, Distributed Scripting with Python

      SciTech Connect (OSTI)

      Miller, P J

      2002-05-24

      Parallel computers used to be, for the most part, one-of-a-kind systems which were extremely difficult to program portably. With SMP architectures, the advent of the POSIX thread API and OpenMP gave developers ways to portably exploit on-the-box shared memory parallelism. Since these architectures didn't scale cost-effectively, distributed memory clusters were developed. The associated MPI message passing libraries gave these systems a portable paradigm too. Having programmers effectively use this paradigm is a somewhat different question. Distributed data has to be explicitly transported via the messaging system in order for it to be useful. In high level languages, the MPI library gives access to data distribution routines in C, C++, and FORTRAN. But we need more than that. Many reasonable and common tasks are best done in (or as extensions to) scripting languages. Consider sysadmin tools such as password crackers, file purgers, etc. These are simple to write in a scripting language such as Python (an open source, portable, and freely available interpreter). But these tasks beg to be done in parallel. Consider a password checker that checks an encrypted password against a 25,000-word dictionary. This can take around 10 seconds in Python (6 seconds in C). It is trivial to parallelize if you can distribute the information and coordinate the work.
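      As a rough illustration of the kind of task the abstract describes (not the report's own code, which used a different Python/MPI layer), the sketch below splits a word list across MPI ranks with mpi4py and checks an encrypted password in parallel. The word-list file name, the target hash, and the use of the Unix-only crypt module are all assumptions made for the example.

      ```python
      # Hypothetical parallel password check with mpi4py. Assumes a plain
      # word list and a crypt()-style hash; both are illustrative.
      from mpi4py import MPI
      import crypt

      comm = MPI.COMM_WORLD
      rank, size = comm.Get_rank(), comm.Get_size()

      target_hash = "aaXvZstY1Vtqs"          # assumed: crypt() hash to test
      salt = target_hash[:2]

      # Rank 0 reads the dictionary and scatters roughly equal slices.
      if rank == 0:
          with open("words.txt") as f:        # assumed word list file
              words = [w.strip() for w in f]
          chunks = [words[i::size] for i in range(size)]
      else:
          chunks = None
      my_words = comm.scatter(chunks, root=0)

      # Each rank checks its slice independently.
      local_hit = next((w for w in my_words if crypt.crypt(w, salt) == target_hash), None)

      # Gather results; rank 0 reports any match.
      hits = comm.gather(local_hit, root=0)
      if rank == 0:
          found = [h for h in hits if h is not None]
          print("match:", found[0] if found else "none")
      ```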

    12. Distributed Wind Ordinances: Slides

      Wind Powering America (EERE)

      an introduction to distributed wind projects and a brief overview of topics to consider when developing a distributed wind energy ordinance. Distributed Wind Ordinances (photo from Byers and Renier Construction, NREL 18820). The U.S. Department of Energy defines distributed wind projects as: (a) the use of wind turbines, on- or off-grid, at homes, farms and ranches, businesses, public and industrial facilities, or other sites to offset all or a portion of the local

    13. EIA -Quarterly Coal Distribution

      U.S. Energy Information Administration (EIA) Indexed Site

      Quarterly Coal Distribution Archives. Release Date: August 17, 2016. Next Release Date: December 22, 2016. The Quarterly Coal Distribution Report (QCDR) provides detailed quarterly data on U.S. domestic coal distribution by coal origin, coal destination, mode of transportation, and consuming sector. All data are preliminary and superseded by the final Coal Distribution Annual Report. Report and data files are available by origin state and by destination state for each year/quarter back to 2009.

    14. COMPUTATIONAL SCIENCE CENTER

      SciTech Connect (OSTI)

      DAVENPORT, J.

      2005-11-01

      The Brookhaven Computational Science Center brings together researchers in biology, chemistry, physics, and medicine with applied mathematicians and computer scientists to exploit the remarkable opportunities for scientific discovery which have been enabled by modern computers. These opportunities are especially great in computational biology and nanoscience, but extend throughout science and technology and include, for example, nuclear and high energy physics, astrophysics, materials and chemical science, sustainable energy, environment, and homeland security. To achieve our goals we have established a close alliance with applied mathematicians and computer scientists at Stony Brook and Columbia Universities.

    15. Scalable optical quantum computer

      SciTech Connect (OSTI)

      Manykin, E A; Mel'nichenko, E V [Institute for Superconductivity and Solid-State Physics, Russian Research Centre 'Kurchatov Institute', Moscow (Russian Federation)

      2014-12-31

      A way of designing a scalable optical quantum computer based on the photon echo effect is proposed. Individual rare earth ions Pr³⁺, regularly located in the lattice of the orthosilicate (Y₂SiO₅) crystal, are suggested to be used as optical qubits. Operations with qubits are performed using coherent and incoherent laser pulses. The operation protocol includes both the method of measurement-based quantum computations and the technique of optical computations. Modern hybrid photon echo protocols, which provide a sufficient quantum efficiency when reading recorded states, are considered as most promising for quantum computations and communications. (quantum computer)

    16. Human perceptual deficits as factors in computer interface test and evaluation

      SciTech Connect (OSTI)

      Bowser, S.E.

      1992-06-01

      Issues related to testing and evaluating human computer interfaces are usually based on the machine rather than on the human portion of the computer interface. Perceptual characteristics of the expected user are rarely investigated, and interface designers ignore known population perceptual limitations. For these reasons, environmental impacts on the equipment will more likely be defined than will user perceptual characteristics. The investigation of user population characteristics is most often directed toward intellectual abilities and anthropometry. This problem is compounded by the fact that some perceptual deficits occur at higher-than-overall rates in certain user groups. The test and evaluation community can address the issue from two primary aspects. First, assessing user characteristics should be extended to include tests of perceptual capability. Second, interface designs should use multimode information coding.

    17. Energy optimization of water distribution system

      SciTech Connect (OSTI)

      Not Available

      1993-02-01

      In order to analyze pump operating scenarios for the system with the computer model, information on existing pumping equipment and the distribution system was collected. The information includes the following: component description and design criteria for line booster stations, booster stations with reservoirs, and high lift pumps at the water treatment plants; daily operations data for 1988; annual reports from fiscal year 1987/1988 to fiscal year 1991/1992; and a 1985 calibrated KYPIPE computer model of DWSD's water distribution system which included input data for the maximum hour and average day demands on the system for that year. This information has been used to produce the inventory database of the system and will be used to develop the computer program to analyze the system.

    18. Science Education Group | Princeton Plasma Physics Lab

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Science Education Group

    19. Sandia Energy - Distribution Grid Integration

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Distribution Grid Integration

    20. Deformations of polyhedra and polygons by the unitary group

      SciTech Connect (OSTI)

      Livine, Etera R.

      2013-12-15

      We introduce the set of framed (convex) polyhedra with N faces as the symplectic quotient C^{2N}//SU(2). A framed polyhedron is then parametrized by N spinors living in C^2 satisfying suitable closure constraints and defines a usual convex polyhedron plus extra U(1) phases attached to each face. We show that there is a natural action of the unitary group U(N) on this phase space, which changes the shape of faces and allows one to map any (framed) polyhedron onto any other with the same total (boundary) area. This identifies the space of framed polyhedra to the Grassmannian space U(N)/(SU(2)×U(N−2)). We show how to write averages of geometrical observables (polynomials in the faces' area and the angles between them) over the ensemble of polyhedra (distributed uniformly with respect to the Haar measure on U(N)) as polynomial integrals over the unitary group and we provide a few methods to compute these integrals systematically. We also use the Itzykson-Zuber formula from matrix models as the generating function for these averages and correlations. In the quantum case, a canonical quantization of the framed polyhedron phase space leads to the Hilbert space of SU(2) intertwiners (or, in other words, SU(2)-invariant states in tensor products of irreducible representations). The total boundary area as well as the individual face areas are quantized as half-integers (spins), and the Hilbert spaces for fixed total area form irreducible representations of U(N). We define semi-classical coherent intertwiner states peaked on classical framed polyhedra and transforming consistently under U(N) transformations. And we show how the U(N) character formula for unitary transformations is to be considered as an extension of the Itzykson-Zuber formula to the quantum level and generates the traces of all polynomial observables over the Hilbert space of intertwiners. We finally apply the same formalism to two dimensions and show that classical (convex) polygons can be described in a

    1. Lisa Gerhardt NERSC User Services Group

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Gerhardt, NERSC User Services Group. NUG Training, August 11, 2015: Data Management at NERSC. Where Do I Put My Data? Overview of NERSC file systems (local vs. global, permanent vs. purged, personal vs. shared); HPSS archive system: what it is and how to use it. NERSC file systems and the 2015 production clusters (Carver, PDSF, JGI, MatComp, Planck): /global/scratch 4 PB, /project 5 PB, /home 250 TB; HPSS: 70 PB stored, 240 PB capacity, 40 years of community data...

    2. SIAM Conference on Computational Science and Engineering

      SciTech Connect (OSTI)

      2003-01-01

      &E Education; Meshing and Adaptivity; Multiscale and Multiphysics Problems; Numerical Algorithms for CS&E; Discrete and Combinatorial Algorithms for CS&E; Inverse Problems; Optimal Design, Optimal Control, and Inverse Problems; Parallel and Distributed Computing; Problem-Solving Environments; Software and Middleware Systems; Uncertainty Estimation and Sensitivity Analysis; and Visualization and Computer Graphics.

    3. September 13, 2012, HSS Focus Group Training Working Group (TWG...

      Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

      3 082912 HSS Focus Group Training Working Group (TWG) Meeting September 13, 2012 Room 67 HAMMER 2:00 PM - 4:30 PM Time Topic Lead 2:00 p.m. Safety Minute Welcome and ...

    4. Method and structure for skewed block-cyclic distribution of...

      Office of Scientific and Technical Information (OSTI)

      A method and structure of distributing elements of an array of data in a computer memory to a specific processor of a multi-dimensional mesh of parallel processors includes ...
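      The snippet above is truncated, so for background only: the sketch below shows the standard (un-skewed) block-cyclic mapping of a 2-D array onto a process mesh. The patented "skewed" variant modifies this mapping and is not reproduced here; block sizes and mesh shape are illustrative.

      ```python
      # Standard block-cyclic owner mapping (background illustration only;
      # the patented "skewed" scheme is not reproduced).
      def block_cyclic_owner(i, j, mb, nb, P, Q):
          """Return the (p, q) mesh coordinates that own element (i, j)
          for row/column block sizes mb, nb on a P x Q process mesh."""
          return ((i // mb) % P, (j // nb) % Q)

      # Example: 8x8 matrix, 2x2 blocks, 2x2 process mesh.
      owners = [[block_cyclic_owner(i, j, 2, 2, 2, 2) for j in range(8)] for i in range(8)]
      for row in owners[:4]:
          print(row)
      ```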

    5. TEC Working Group Topic Groups Archives Communications | Department of

      Office of Environmental Management (EM)

      Energy Communications TEC Working Group Topic Groups Archives Communications The Communications Topic Group was convened in April 1998 to improve internal and external strategic level communications regarding DOE shipments of radioactive and other hazardous materials. Major issues under consideration by this Topic Group include: - Examination of DOE external and internal communications processes; - Roles and responsibilities when communicating with a diverse range of stakeholders; and -

    6. TEC Working Group Topic Groups Archives Training - Medical Training |

      Office of Environmental Management (EM)

      Department of Energy Training - Medical Training TEC Working Group Topic Groups Archives Training - Medical Training The TEC Training and Medical Training Issues Topic Group was formed to address the training issues for emergency responders in the event of a radioactive material transportation incident. The Topic Group first met in 1996 to assist DOE in developing an approach to address radiological emergency response training needs and to avoid redundancy of existing training materials. The

    7. Sandia Energy - High Performance Computing

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      High Performance Computing (Advanced Scientific Computing Research, ASCR), Sandia Energy.

    8. Computing and Computational Sciences Directorate - Computer Science and

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Mathematics Division - Meetings and Workshops Awards Awards Night 2012 R&D LEADERSHIP, DIRECTOR LEVEL Winner: Brian Worley Organization: Computational Sciences & Engineering Division Citation: For exemplary program leadership of a successful and growing collaboration with the Department of Defense and for successfully initiating and providing oversight of a new data program with the Centers for Medicare and Medicaid Services. TECHNICAL SUPPORT Winner: Michael Matheson Organization:

    9. COMPUTATIONAL SCIENCE CENTER

      SciTech Connect (OSTI)

      DAVENPORT, J.

      2006-11-01

      Computational Science is an integral component of Brookhaven's multi-science mission, and is a reflection of the increased role of computation across all of science. Brookhaven currently has major efforts in data storage and analysis for the Relativistic Heavy Ion Collider (RHIC) and the ATLAS detector at CERN, and in quantum chromodynamics. The Laboratory is host for the QCDOC machines (quantum chromodynamics on a chip), 10 teraflop/s computers which boast 12,288 processors each. There are two here, one for the Riken/BNL Research Center and the other supported by DOE for the US Lattice Gauge Community and other scientific users. A 100 teraflop/s supercomputer will be installed at Brookhaven in the coming year, managed jointly by Brookhaven and Stony Brook, and funded by a grant from New York State. This machine will be used for computational science across Brookhaven's entire research program, and also by researchers at Stony Brook and across New York State. With Stony Brook, Brookhaven has formed the New York Center for Computational Science (NYCCS) as a focal point for interdisciplinary computational science, which is closely linked to Brookhaven's Computational Science Center (CSC). The CSC has established a strong program in computational science, with an emphasis on nanoscale electronic structure and molecular dynamics, accelerator design, computational fluid dynamics, medical imaging, parallel computing and numerical algorithms. We have been an active participant in DOE's SciDAC program (Scientific Discovery through Advanced Computing). We are also planning a major expansion in computational biology in keeping with Laboratory initiatives. Additional laboratory initiatives with a dependence on a high level of computation include the development of hydrodynamics models for the interpretation of RHIC data, computational models for the atmospheric transport of aerosols, and models for combustion and for energy utilization. The CSC was formed to bring together

    10. Research Group Websites - Links - Cyclotron Institute

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Research Group Websites: Dr. Sherry J. Yennello's Research Group; Nuclear Theory Group; Dr. Dan Melconian's Research Group; Dr. Cody Folden's Research Group...

    11. Method for distributed agent-based non-expert simulation of manufacturing process behavior

      DOE Patents [OSTI]

      Ivezic, Nenad; Potok, Thomas E.

      2004-11-30

      A method for distributed agent-based non-expert simulation of manufacturing process behavior on a single-processor computer comprises the steps of: object modeling a manufacturing technique having a plurality of processes; associating a distributed agent with each process; and programming each agent to respond to discrete events corresponding to the manufacturing technique, wherein each discrete event triggers a programmed response. The method can further comprise the step of transmitting the discrete events to each agent in a message loop. In addition, the programming step comprises the step of conditioning each agent to respond to a discrete event selected from the group consisting of a clock tick message, a resources received message, and a request for output production message.
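      The claim language maps naturally onto a small event-dispatch loop. The sketch below is a hypothetical illustration of that pattern, not the patented implementation: the three event names come from the claim text, while the class names, the inventory logic, and the sample event sequence are invented for the example.

      ```python
      # Hypothetical single-processor message loop for process agents.
      from collections import deque

      class ProcessAgent:
          def __init__(self, name, cycle_time):
              self.name, self.cycle_time = name, cycle_time
              self.inventory, self.clock = 0, 0

          def handle(self, event, payload=None):
              # Programmed responses to the three discrete events in the claim.
              if event == "clock_tick":
                  self.clock += 1
              elif event == "resources_received":
                  self.inventory += payload
              elif event == "request_output":
                  made = min(self.inventory, payload)
                  self.inventory -= made
                  return made
              return None

      # Events are queued and dispatched to every agent in turn.
      agents = [ProcessAgent("machining", 3), ProcessAgent("assembly", 5)]
      events = deque([("resources_received", 10), ("clock_tick", None),
                      ("request_output", 4)])

      while events:
          event, payload = events.popleft()
          for agent in agents:
              result = agent.handle(event, payload)
              if result is not None:
                  print(f"{agent.name} produced {result} units at t={agent.clock}")
      ```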

    12. Computational Model of Magnesium Deposition and Dissolution for Property

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Determination via Cyclic Voltammetry - Joint Center for Energy Storage Research June 23, 2016, Research Highlights Computational Model of Magnesium Deposition and Dissolution for Property Determination via Cyclic Voltammetry Top: Example distributions of the charge transfer coefficient and standard heterogeneous rate constant, obtained from fitting Bottom: Comparison between experimental and simulated voltammograms, demonstrating good agreement Scientific Achievement A computationally

    13. Sova Group | Open Energy Information

      Open Energy Info (EERE)

      Sova Group Jump to: navigation, search Name: Sova Group Place: Kolkata, West Bengal, India Zip: 700012 Product: Kolkatta-based iron and steel major. The firm plans to foray into PV...

    14. Minoan Group | Open Energy Information

      Open Energy Info (EERE)

      Minoan Group Jump to: navigation, search Name: Minoan Group Place: Kent, England, United Kingdom Zip: BR5 1XB Sector: Solar Product: UK-based developer of resorts in Greece that...

    15. ESV Group | Open Energy Information

      Open Energy Info (EERE)

      ESV Group Jump to: navigation, search Name: ESV Group Place: London, England, United Kingdom Zip: W1K 4QH Sector: Biofuels Product: UK-based investment agri-business involved in...

    16. Ensus Group | Open Energy Information

      Open Energy Info (EERE)

      Ensus Group Jump to: navigation, search Name: Ensus Group Place: Stockton-on-Tees, England, United Kingdom Zip: TS15 9BW Product: North Yorkshire-based developer & operator of...

    17. Camco Group | Open Energy Information

      Open Energy Info (EERE)

      Group Jump to: navigation, search Name: Camco Group Place: Jersey, United Kingdom Zip: JE2 4UH Sector: Carbon, Renewable Energy, Services Product: UK-based firm that provides...

    18. Weighted Running Jobs by Group

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Weighted Running Jobs by Group: daily, weekly, monthly, yearly, and 2-year graphs. Last edited: 2016-04-29 11:34:54

    19. Klebl Group | Open Energy Information

      Open Energy Info (EERE)

      Zip: 6388 Product: Construction and engineering group with some experience building PV plants. References: Klebl Group1 This article is a stub. You can help OpenEI by expanding...

    20. Scientific Cloud Computing Misconceptions

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Scientific Cloud Computing Misconceptions Scientific Cloud Computing Misconceptions July 1, 2011 Part of the Magellan project was to understand both the possibilities and the limitations of cloud computing in the pursuit of science. At a recent conference, Magellan investigator Shane Canon outlined some persistent misconceptions about doing science in the cloud - and what Magellan has taught us about them. » Read the ISGTW story. » Download the slides (PDF, 4.1MB

    1. Edison Electrifies Scientific Computing

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Edison Electrifies Scientific Computing Edison Electrifies Scientific Computing NERSC Flips Switch on New Flagship Supercomputer January 31, 2014 Contact: Margie Wylie, mwylie@lbl.gov, +1 510 486 7421 The National Energy Research Scientific Computing (NERSC) Center recently accepted "Edison," a new flagship supercomputer designed for scientific productivity. Named in honor of American inventor Thomas Alva Edison, the Cray XC30 will be dedicated in a ceremony held at the Department of

    2. Energy Aware Computing

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Energy Aware Computing: Dynamic Frequency Scaling. One means to lower the energy required to compute is to reduce the power usage on a node. One way to accomplish this is by lowering the frequency at which the CPU operates. However, reducing the clock speed increases the time to solution, creating a potential tradeoff. NERSC continues to examine how such methods impact its operations and its

    3. NERSC Computer Security

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Security NERSC Computer Security NERSC computer security efforts are aimed at protecting NERSC systems and its users' intellectual property from unauthorized access or modification. Among NERSC's security goals are: 1. To protect NERSC systems from unauthorized access. 2. To prevent the interruption of services to its users. 3. To prevent misuse or abuse of NERSC resources. Security Incidents: If you think there has been a computer security incident you should contact NERSC Security as soon as

    4. Computer Architecture Lab

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Computer Architecture Lab: The goal of the Computer Architecture Laboratory (CAL) is to engage in research and development into energy efficient and effective processor and memory architectures for DOE's Exascale program. CAL coordinates hardware architecture R&D activities across the DOE. CAL is a joint NNSA/SC activity involving Sandia National Laboratories (CAL-Sandia) and

    5. Doubly Distributed Transactions

      Energy Science and Technology Software Center (OSTI)

      2014-08-25

      Doubly Distributed Transactions (D2T) offers a technique for managing operations from a set of parallel clients with a collection of distributed services. It detects and manages faults. Example code with a test harness is also provided

    6. Distributed Wind 2015

      Broader source: Energy.gov [DOE]

      Distributed Wind 2015 is committed to the advancement of both distributed and community wind energy. This two day event includes a Business Conference with sessions focused on advancing the...

    7. Citizenre Group | Open Energy Information

      Open Energy Info (EERE)

      19809 Product: A company planning to set up an integrated wafer, cell and module manufacturing plant, and then take part in the distribution and installation, and even asset...

    8. Breakout Group 3: Water Management

      Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

      3: Water Management Participants Name Organization Tom Benjamin Argonne National ... National Laboratory Breakout Group 3: Water Management GAPS/BARRIERS The Water ...

    9. Secure computing for the 'Everyman' goes to market

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Secure computing for the 'Everyman' goes to market Secure computing for the 'Everyman' goes to market Quantum key distribution technology could ensure truly secure commerce, banking, communications and data transfer December 22, 2014 Secure computing for the 'Everyman' goes to market This small device developed at Los Alamos National Laboratory uses the truly random spin of light particles as defined by laws of quantum mechanics to generate a random number for use in a cryptographic key that can

    10. Personal Computer Inventory System

      Energy Science and Technology Software Center (OSTI)

      1993-10-04

      PCIS is a database software system that is used to maintain a personal computer hardware and software inventory, track transfers of hardware and software, and provide reports.

    11. Scientific computations section monthly report, November 1993

      SciTech Connect (OSTI)

      Buckner, M.R.

      1993-12-30

      This progress report from the Savannah River Technology Center contains abstracts from papers from the computational modeling, applied statistics, applied physics, experimental thermal hydraulics, and packaging and transportation groups. Specific topics covered include: engineering modeling and process simulation, criticality methods and analysis, plutonium disposition.

    12. Integrated Transmission and Distribution Control

      SciTech Connect (OSTI)

      Kalsi, Karanjit; Fuller, Jason C.; Tuffner, Francis K.; Lian, Jianming; Zhang, Wei; Marinovici, Laurentiu D.; Fisher, Andrew R.; Chassin, Forrest S.; Hauer, Matthew L.

      2013-01-16

      allows the load flow interactions between the bulk power system and end-use loads to be explicitly modeled. Power system interactions are modeled down to time intervals as short as 1-second. Another practical issue is that the size and complexity of typical distribution systems makes direct integration with transmission models computationally intractable. Hence, the focus of the next main task is to develop reduced-order controllable models for some of the smart grid assets. In particular, HVAC units, which are a type of Thermostatically Controlled Loads (TCLs), are considered. The reduced-order modeling approach can be extended to other smart grid assets, like water heaters, PVs and PHEVs. Closed-loop control strategies are designed for a population of HVAC units under realistic conditions. The proposed load controller is fully responsive and achieves the control objective without sacrificing the end-use performance. Finally, using the T&D simulation platform, the benefits to the bulk power system are demonstrated by controlling smart grid assets under different demand response closed-loop control strategies.
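      As a hedged sketch of the kind of reduced-order thermostatically controlled load (TCL) model the report aggregates, the example below steps a population of first-order HVAC models with deadband thermostats and reports their aggregate demand; all parameters are generic textbook-style values, not the report's.

      ```python
      # Illustrative first-order TCL (cooling HVAC) population model; not the
      # report's reduced-order model, and all parameters are assumed.
      import math, random

      class TCL:
          def __init__(self):
              self.R, self.C, self.P = 2.0, 2.0, 5.6     # degC/kW, kWh/degC, kW (assumed)
              self.setpoint, self.deadband = 20.0, 1.0   # degC
              self.theta = random.uniform(19.5, 20.5)    # indoor temperature, degC
              self.on = random.random() < 0.5

          def step(self, theta_out, dt_hours):
              # Discrete-time update of the standard first-order TCL model.
              a = math.exp(-dt_hours / (self.R * self.C))
              target = theta_out - (self.P * self.R if self.on else 0.0)
              self.theta = a * self.theta + (1.0 - a) * target
              # Deadband thermostat switching.
              if self.theta > self.setpoint + self.deadband / 2:
                  self.on = True
              elif self.theta < self.setpoint - self.deadband / 2:
                  self.on = False
              return self.P if self.on else 0.0   # power draw, taken as P here

      # Aggregate demand of a small population over one hour in 1-second steps.
      population = [TCL() for _ in range(100)]
      dt = 1.0 / 3600.0
      total_kw = 0.0
      for _ in range(3600):
          total_kw = sum(unit.step(theta_out=32.0, dt_hours=dt) for unit in population)
      print(f"aggregate HVAC demand at end of hour: {total_kw:.1f} kW")
      ```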

    13. 60 Years of Computing | Department of Energy

      Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

      60 Years of Computing

    14. Overview of the Distributed Generation Interconnection Collaborative

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      December 17, 2013: Overview presentation for group call, 1:00-2:30 EST. October 21, 2013: NREL and EPRI facilitated a workshop of electric utilities, PV developers, PUCs, and other stakeholders to discuss the formulation of a collaborative effort focused on distributed PV interconnection: data and informational gaps/needs; persistent challenges; replicable innovation; informed decision making and planning for the anticipated rise in distributed PV interconnection. Based on stakeholder input and

    15. Software and High Performance Computing

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Computational physics, computer science, applied mathematics, statistics and the ... a fully operational supercomputing environment Providing Current Capability Scientific ...

    16. FRIB cryogenic distribution system

      SciTech Connect (OSTI)

      Ganni, Venkatarao; Dixon, Kelly D.; Laverdure, Nathaniel A.; Knudsen, Peter N.; Arenius, Dana M.; Barrios, Matthew N.; Jones, S.; Johnson, M.; Casagrande, Fabio

      2014-01-01

      The Michigan State University Facility for Rare Isotope Beams (MSU-FRIB) helium distribution system has been revised to include bayonet/warm valve type disconnects between each cryomodule and the transfer line distribution system, similar to the Thomas Jefferson National Accelerator Facility (JLab) and the Spallation Neutron Source (SNS) cryogenic distribution systems. The heat loads at various temperature levels and some of the features in the design of the distribution system are outlined. The present status, the plans for fabrication, and the procurement approach for the helium distribution system are also included.

    17. THE ABUNDANCE OF BULLET GROUPS IN ΛCDM

      SciTech Connect (OSTI)

      Fernández-Trincado, J. G.; Forero-Romero, J. E.; Foex, G.; Motta, V.; Verdugo, T. E-mail: je.forero@uniandes.edu.co

      2014-06-01

      We estimate the expected distribution of displacements between the two dominant dark matter (DM) peaks (DM-DM displacements) and between the DM and gaseous baryon peak (DM-gas displacements) in DM halos with masses larger than 10^13 h^-1 M_⊙. As a benchmark, we use the observation of SL2S J08544-0121, which is the lowest mass system (1.0 × 10^14 h^-1 M_⊙) observed so far, featuring a bi-modal DM distribution with a dislocated gas component. We find that (50 ± 10)% of the DM halos with circular velocities in the range 300-700 km s^-1 (groups) show DM-DM displacements equal to or larger than 186 ± 30 h^-1 kpc as observed in SL2S J08544-0121. For DM halos with circular velocities larger than 700 km s^-1 (clusters) this fraction rises to (70 ± 10)%. Using the same simulation, we estimate the DM-gas displacements and find that 0.1%-1.0% of the groups should present separations equal to or larger than 87 ± 14 h^-1 kpc, corresponding to our observational benchmark; for clusters, this fraction rises to (7 ± 3)%, consistent with previous studies of DM to baryon separations. Considering both constraints on the DM-DM and DM-gas displacements, we find that the number density of groups similar to SL2S J08544-0121 is ≈6.0 × 10^-7 Mpc^-3, three times larger than the estimated value for clusters. These results open up the possibility for a new statistical test of ΛCDM by looking for DM-gas displacements in low mass clusters and groups.

    18. ELECTRONIC DIGITAL COMPUTER

      DOE Patents [OSTI]

      Stone, J.J. Jr.; Bettis, E.S.; Mann, E.R.

      1957-10-01

      The electronic digital computer is designed to solve systems involving a plurality of simultaneous linear equations. The computer can solve a system which converges rather rapidly when using Von Seidel's method of approximation and performs the summations required for solving for the unknown terms by a method of successive approximations.
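      The "Von Seidel" successive-approximation scheme the patent describes is what is now usually called Gauss-Seidel iteration. The sketch below is a modern software analogue of that iteration, not the patented circuit; the example system, tolerance, and iteration cap are illustrative.

      ```python
      # Gauss-Seidel successive approximation for a linear system A x = b
      # (software analogue of the scheme described in the patent abstract).
      def gauss_seidel(A, b, x0=None, tol=1e-10, max_iter=1000):
          n = len(b)
          x = list(x0) if x0 is not None else [0.0] * n
          for _ in range(max_iter):
              max_change = 0.0
              for i in range(n):
                  # Use the newest available estimates of the other unknowns.
                  s = sum(A[i][j] * x[j] for j in range(n) if j != i)
                  new_xi = (b[i] - s) / A[i][i]
                  max_change = max(max_change, abs(new_xi - x[i]))
                  x[i] = new_xi
              if max_change < tol:           # converged
                  break
          return x

      # Diagonally dominant example system, for which the iteration converges rapidly.
      A = [[4.0, 1.0, 1.0],
           [1.0, 5.0, 2.0],
           [1.0, 2.0, 6.0]]
      b = [6.0, 8.0, 9.0]
      print(gauss_seidel(A, b))
      ```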

    19. Seismology Group Leader, Lawrence Livermore National Laboratory | National

      National Nuclear Security Administration (NNSA)

      Nuclear Security Administration | (NNSA) Seismology Group Leader, Lawrence Livermore National Laboratory Artie Rodgers demonstrating seismology modeling. Artie Rodgers August 2009 Fulbright Scholarship Artie Rodgers, Seismology Group Leader at Lawrence Livermore National Laboratory, was recently awarded a Fulbright Scholarship. In January he will be heading to Grenoble, France to study the relationship between topography and seismology with computer modeling at Laboratoire de Géophysique

    20. Jefferson Lab Groups Encourage Digital Literacy Through Worldwide 'Hour

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      of Code' Campaign | Jefferson Lab Groups Encourage Digital Literacy Through Worldwide 'Hour of Code' Campaign Dana Cochran, Jefferson Lab staff member, helps students as they participate in a coding activity. To raise awareness of the need for digital literacy and a basic understanding of computer science,

    1. TEC Working Group Topic Groups Rail | Department of Energy

      Office of Environmental Management (EM)

      The group's current task is to examine different aspects of rail transportation including inspections, tracking and radiation monitoring, planning and process, and review of ...

    2. TEC Working Group Topic Groups Routing Conference Call Summaries |

      Office of Environmental Management (EM)

      Department of Energy Routing Conference Call Summaries TEC Working Group Topic Groups Routing Conference Call Summaries CONFERENCE CALL SUMMARIES January 31, 2008 (11.6 KB) December 6, 2007 (11.96 KB) October 4, 2007 (16.46 KB) August 23, 2007 (26.38 KB) June 21, 2007 (41.02 KB) May 31, 2007 (31.04 KB) January 18, 2007 (93.16 KB) December 19, 2006 (28.83 KB) November 9, 2006 (19.84 KB) More Documents & Publications TEC Working Group Topic Groups Rail Conference Call Summaries Rail Topic

    3. TEC Working Group Topic Groups Security Conference Call Summaries |

      Office of Environmental Management (EM)

      Department of Energy Conference Call Summaries TEC Working Group Topic Groups Security Conference Call Summaries Conference Call Summaries August 17, 2006 (Draft) (17.12 KB) July 18, 2006 (Draft) (14.08 KB) June 20, 2006 (Draft) (16.18 KB) April 18, 2006 (27.83 KB) February 21, 2006 (32.98 KB) January 24, 2006 (19.36 KB) December 20, 2005 (13.79 KB) November 17, 2005 (17.52 KB) October 18, 2005 (18.51 KB) May 8, 2005 (29.42 KB) More Documents & Publications TEC Working Group Topic Groups

    4. MAGNIFICATION BY GALAXY GROUP DARK MATTER HALOS

      SciTech Connect (OSTI)

      Ford, Jes; Hildebrandt, Hendrik; Van Waerbeke, Ludovic; Leauthaud, Alexie; Tanaka, Masayuki; Capak, Peter; Finoguenov, Alexis; George, Matthew R.; Rhodes, Jason

      2012-08-01

      We report on the detection of gravitational lensing magnification by a population of galaxy groups, at a significance level of 4.9σ. Using X-ray-selected groups in the COSMOS 1.64 deg² field, and high-redshift Lyman break galaxies as sources, we measure a lensing-induced angular cross-correlation between the samples. After satisfying consistency checks that demonstrate we have indeed detected a magnification signal, and are not suffering from contamination by physical overlap of samples, we proceed to implement an optimally weighted cross-correlation function to further boost the signal to noise of the measurement. Interpreting this optimally weighted measurement allows us to study properties of the lensing groups. We model the full distribution of group masses using a composite-halo approach, considering both the singular isothermal sphere and Navarro-Frenk-White profiles, and find our best-fit values to be consistent with those recovered using the weak-lensing shear technique. We argue that future weak-lensing studies will need to incorporate magnification along with shear, both to reduce residual systematics and to make full use of all available source information, in an effort to maximize scientific yield of the observations.

    5. Executing a gather operation on a parallel computer

      DOE Patents [OSTI]

      Archer, Charles J.; Ratterman, Joseph D.

      2012-03-20

      Methods, apparatus, and computer program products are disclosed for executing a gather operation on a parallel computer according to embodiments of the present invention. Embodiments include configuring, by the logical root, a result buffer of the logical root, the result buffer having positions, each position corresponding to a ranked node in the operational group and for storing contribution data gathered from that ranked node. Embodiments also include, repeatedly for each position in the result buffer: determining, by each compute node of an operational group, whether the current position in the result buffer corresponds with the rank of the compute node; if the current position in the result buffer corresponds with the rank of the compute node, contributing, by that compute node, the compute node's contribution data; if the current position in the result buffer does not correspond with the rank of the compute node, contributing, by that compute node, a value of zero for the contribution data; and storing, by the logical root in the current position in the result buffer, results of a bitwise OR operation of all the contribution data by all compute nodes of the operational group for the current position, the results received through the global combining network.
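      The patented scheme can be mimicked in ordinary MPI: for each buffer position, every rank contributes either its data or zero, and a bitwise-OR reduction leaves the owning rank's value at the root. The mpi4py sketch below is a hedged emulation of that idea, not the patented combining-network implementation; the payload values are arbitrary.

      ```python
      # Gather emulated as a sequence of bitwise-OR reductions (illustrative).
      from mpi4py import MPI
      import numpy as np

      comm = MPI.COMM_WORLD
      rank, size = comm.Get_rank(), comm.Get_size()

      contribution = np.array([rank + 100], dtype=np.uint32)  # illustrative payload
      zero = np.zeros(1, dtype=np.uint32)
      result = np.zeros(1, dtype=np.uint32)
      gathered = []

      for position in range(size):
          send = contribution if position == rank else zero
          # OR of one real value and (size-1) zeros reproduces that value at the root.
          comm.Reduce(send, result, op=MPI.BOR, root=0)
          if rank == 0:
              gathered.append(int(result[0]))

      if rank == 0:
          print("gathered:", gathered)   # [100, 101, ..., 100 + size - 1]
      ```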

    6. Indirection and computer security.

      SciTech Connect (OSTI)

      Berg, Michael J.

      2011-09-01

      The discipline of computer science is built on indirection. David Wheeler famously said, 'All problems in computer science can be solved by another layer of indirection. But that usually will create another problem'. We propose that every computer security vulnerability is yet another problem created by the indirections in system designs and that focusing on the indirections involved is a better way to design, evaluate, and compare security solutions. We are not proposing that indirection be avoided when solving problems, but that understanding the relationships between indirections and vulnerabilities is key to securing computer systems. Using this perspective, we analyze common vulnerabilities that plague our computer systems, consider the effectiveness of currently available security solutions, and propose several new security solutions.

    7. THE CENTER FOR DATA INTENSIVE COMPUTING

      SciTech Connect (OSTI)

      GLIMM,J.

      2001-11-01

      CDIC will provide state-of-the-art computational and computer science for the Laboratory and for the broader DOE and scientific community. We achieve this goal by performing advanced scientific computing research in the Laboratory's mission areas of High Energy and Nuclear Physics, Biological and Environmental Research, and Basic Energy Sciences. We also assist other groups at the Laboratory to reach new levels of achievement in computing. We are ''data intensive'' because the production and manipulation of large quantities of data are hallmarks of scientific research in the 21st century and are intrinsic features of major programs at Brookhaven. An integral part of our activity to accomplish this mission will be a close collaboration with the University at Stony Brook.

    8. THE CENTER FOR DATA INTENSIVE COMPUTING

      SciTech Connect (OSTI)

      GLIMM,J.

      2003-11-01

      CDIC will provide state-of-the-art computational and computer science for the Laboratory and for the broader DOE and scientific community. We achieve this goal by performing advanced scientific computing research in the Laboratory's mission areas of High Energy and Nuclear Physics, Biological and Environmental Research, and Basic Energy Sciences. We also assist other groups at the Laboratory to reach new levels of achievement in computing. We are ''data intensive'' because the production and manipulation of large quantities of data are hallmarks of scientific research in the 21st century and are intrinsic features of major programs at Brookhaven. An integral part of our activity to accomplish this mission will be a close collaboration with the University at Stony Brook.

    9. THE CENTER FOR DATA INTENSIVE COMPUTING

      SciTech Connect (OSTI)

      GLIMM,J.

      2002-11-01

      CDIC will provide state-of-the-art computational and computer science for the Laboratory and for the broader DOE and scientific community. We achieve this goal by performing advanced scientific computing research in the Laboratory's mission areas of High Energy and Nuclear Physics, Biological and Environmental Research, and Basic Energy Sciences. We also assist other groups at the Laboratory to reach new levels of achievement in computing. We are ''data intensive'' because the production and manipulation of large quantities of data are hallmarks of scientific research in the 21st century and are intrinsic features of major programs at Brookhaven. An integral part of our activity to accomplish this mission will be a close collaboration with the University at Stony Brook.

    10. Computing and Computational Sciences Directorate - Information Technology

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Oak Ridge Climate Change Science Institute Jim Hack Oak Ridge National Laboratory (ORNL) has formed the Oak Ridge Climate Change Science Institute (ORCCSI) that will develop and execute programs for the multi-agency, multi-disciplinary climate change research partnerships at ORNL. Led by Director Jim Hack and Deputy Director Dave Bader, the Institute will integrate scientific projects in modeling, observations, and experimentation with ORNL's powerful computational and informatics capabilities

    11. Computational Nuclear Structure | Argonne Leadership Computing Facility

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Excellent scaling is achieved by the production Automatic Dynamic Load Balancing (ADLB) library on the BG/P. Computational Nuclear Structure PI Name: David Dean Hai Nam PI Email: namha@ornl.gov deandj@ornl.gov Institution: Oak Ridge National Laboratory Allocation Program: INCITE Allocation Hours at ALCF: 15 Million Year: 2010 Research Domain: Physics Researchers from Oak Ridge and Argonne national laboratories are using complementary techniques, including Green's Function Monte Carlo, the No

    12. Groups

      Open Energy Info (EERE)

      Big Data; Concentrated Solar Power; Data Analysis; energy efficiency; energy storage; expert systems; machine learning...

    13. Natural Gas Transmission and Distribution Module

      U.S. Energy Information Administration (EIA) Indexed Site

      www.eia.gov. Joe Benneche, July 31, 2012, Washington, DC. Major assumption changes for AEO2013, Oil and Gas Working Group: Natural Gas Transmission and Distribution Module (draft working group presentation; do not quote or cite). Overview: replace regional natural gas wellhead price projections with regional spot price projections; pricing of natural gas vehicle fuels (CNG and LNG); methodology for modeling exports of LNG; assumptions on charges related

    14. Distributed Reforming of Biomass Pyrolysis Oils (Presentation) | Department

      Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

      of Energy Biomass Pyrolysis Oils (Presentation) Distributed Reforming of Biomass Pyrolysis Oils (Presentation) Presented at the 2007 Bio-Derived Liquids to Hydrogen Distributed Reforming Working Group held November 6, 2007 in Laurel, Maryland. 06_nrel_distributed_reforming_biomass_pyrolysis_oils.pdf (301.5 KB) More Documents & Publications Distributed Bio-Oil Reforming Bioenergy Technologies Office R&D Pathways: In-Situ Catalytic Fast Pyrolysis Bioenergy Technologies Office R&D

    15. Proceedings of the IMOG (Interagency Manufacturing Operations Group) Numerical Systems Group. 62nd Meeting

      SciTech Connect (OSTI)

      Maes, G.J.

      1993-10-01

      This document contains the proceedings of the 62nd Interagency Manufacturing Operations Group (IMOG) Numerical Systems Group. Included are the minutes of the 61st meeting and the agenda for the 62nd meeting. Presentations at the meeting are provided in the appendices to this document. Presentations were: 1992 NSG Annual Report to IMOG Steering Committee; Charter for the IMOG Numerical Systems Group; Y-12 Coordinate Measuring Machine Training Project; IBH NC Controller; Automatically Programmed Metrology Update; Certification of Anvil-5000 for Production Use at the Y-12 Plant; Accord Project; Sandia National Laboratories "Accord"; Demo/Anvil Tool Path Generation 5-Axis; Demo/Video Machine/Robot Animation Dynamics; Demo/Certification of Anvil Tool Path Generation; Tour of the M-60 Inspection Machine; Distributed Numerical Control Certification; Spline Usage Method; Y-12 NC Engineering Status; and Y-12 Manufacturing CAD Systems.

    16. TEC Working Group Topic Groups Archives Route Identification Process |

      Office of Environmental Management (EM)

      Department of Energy Route Identification Process TEC Working Group Topic Groups Archives Route Identification Process Route Identification Process Items Available for Download Routing Discussion Paper (April 1998) (71.87 KB) More Documents & Publications TEC Meeting Summaries - January 1997 TEC Meeting Summaries - July 1997 TEC Meeting Summaries - January 1998

    17. Absolute nuclear material assay using count distribution (LAMBDA) space

      DOE Patents [OSTI]

      Prasad, Mano K.; Snyderman, Neal J.; Rowland, Mark S.

      2015-12-01

      A method of absolute nuclear material assay of an unknown source comprising counting neutrons from the unknown source and providing an absolute nuclear material assay utilizing a model to optimally compare to the measured count distributions. In one embodiment, the step of providing an absolute nuclear material assay comprises utilizing a random sampling of analytically computed fission chain distributions to generate a continuous time-evolving sequence of event-counts by spreading the fission chain distribution in time.
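      As a loose illustration of turning a sampled fission-chain distribution into a time-evolving event stream (not the patented assay algorithm), the sketch below draws chain sizes from an assumed multiplicity distribution, starts chains at Poisson-distributed times, and spreads each chain's counts with an exponential die-away; every rate and probability in it is made up for the example.

      ```python
      # Illustrative event-stream generator from a sampled chain-size
      # distribution; all numbers are assumptions for the example.
      import random

      chain_sizes = [1, 2, 3, 5, 8]             # assumed chain multiplicities
      chain_probs = [0.55, 0.25, 0.10, 0.07, 0.03]
      source_rate = 50.0                         # chains per second (assumed)
      die_away = 1e-4                            # die-away time constant, seconds (assumed)
      duration = 1.0

      events = []
      t = 0.0
      while t < duration:
          t += random.expovariate(source_rate)           # next chain start time
          size = random.choices(chain_sizes, chain_probs)[0]
          for _ in range(size):
              events.append(t + random.expovariate(1.0 / die_away))
      events.sort()
      print(f"{len(events)} neutron events generated over {duration} s")
      ```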

    18. Absolute nuclear material assay using count distribution (LAMBDA) space

      DOE Patents [OSTI]

      Prasad, Manoj K.; Snyderman, Neal J.; Rowland, Mark S.

      2012-06-05

      A method of absolute nuclear material assay of an unknown source comprising counting neutrons from the unknown source and providing an absolute nuclear material assay utilizing a model to optimally compare to the measured count distributions. In one embodiment, the step of providing an absolute nuclear material assay comprises utilizing a random sampling of analytically computed fission chain distributions to generate a continuous time-evolving sequence of event-counts by spreading the fission chain distribution in time.

    19. HASQARD Focus Group - Hanford Site

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Contracting Wastren Advantage, Inc. HASQARD Focus Group Contracting ORP Contracts and Procurements RL Contracts and Procurements CH2M HILL Plateau Remediation Company Mission Support Alliance Washington Closure Hanford HPM Corporation (HPMC) Wastren Advantage, Inc. Analytical Services HASQARD Focus Group Bechtel National, Inc. Washington River Protection Solutions HASQARD Focus Group HASQARD Document HASQARD

    20. Creating Los Alamos Women's Group

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Raeanna Sharp-Geiger-Creating a cleaner, greener environment March 28, 2014 Creating Los Alamos Women's Group Inspired by their informal dinner discussions, Raeanna Sharp-Geiger and a few of her female colleagues decided to create a new resource a few years ago, the Los Alamos Women's Group. They wanted to create a comfortable environment where women from all across the diverse Lab could network, collaborate, share ideas and gain a broader perspective of the Lab's mission. The Women's Group has