National Library of Energy BETA

Sample records for distributed computing group

  1. Computing Frontier: Distributed Computing

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Computing Frontier: Distributed Computing and Facility Infrastructures. Conveners: Kenneth Bloom (1), Richard Gerber (2). (1) Department of Physics and Astronomy, University of Nebraska-Lincoln; (2) National Energy Research Scientific Computing Center (NERSC), Lawrence Berkeley National Laboratory. 1.1 Introduction: The field of particle physics has become increasingly reliant on large-scale computing resources to address the challenges of analyzing large datasets, completing specialized computations, and ...

  2. NERSC seeks Computational Systems Group Lead

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    NERSC seeks Computational Systems Group Lead. January 6, 2011, by Katie Antypas. Note: This position is now closed. The role: manage the Computational Systems Group (CSG), which provides production support and advanced development for the supercomputer systems at NERSC (National Energy Research Scientific Computing Center). These systems, which ...

  3. Computer Networking Group | Stanford Synchrotron Radiation Lightsource

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Computer Networking Group. Do you need help? For assistance, please submit a CNG Help Request ticket. Contact: Chris Ramirez, SSRL Computer and Networking Group, (650) 926-2901 | email ...

  4. Distributed Energy Financial Group | Open Energy Information

    Open Energy Info (EERE)

    Name: Distributed Energy Financial Group. Place: Washington, DC. Zip: 20016-2512. Sector: Services. Product: The ...

  5. Distributed Energy Systems Integration Group (Fact Sheet)

    SciTech Connect (OSTI)

    Not Available

    2009-10-01

    Fact sheet developed to describe the activities of the Distributed Energy Systems Integration Group within NREL's Electricity, Resources, and Buildings Systems Integration Center.

  6. Jay Srinivasan, Group Lead, Computational Systems

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Jay Srinivasan, Group Lead, Computational Systems. NUG, Feb 2015: Computational Systems Update (NERSC, 2014). Sponsored compute systems: Carver, PDSF, JGI, KBASE, HEP (8x FDR IB). File systems: /global/scratch 4 PB, /project 5 PB, /home 250 TB. HPSS archive: 45 PB stored, 240 PB capacity, 40 years of community data, 48 GB/s. Local scratch: 2.2 PB at 70 GB/s and 6.4 PB at 140 GB/s; 80 GB/s. Network: 2 x 10 Gb and 1 x 100 Gb to the Science Data Network; vis & analytics, data transfer nodes, advanced architectures, science gateways ...

  7. Distributed Real-Time Computing with Harness

    SciTech Connect (OSTI)

    Di Saverio, Emanuele; Cesati, Marco; Di Biagio, Christian; Pennella, Guido; Engelmann, Christian

    2007-01-01

    Modern parallel and distributed computing solutions are often built on a "middleware" software layer providing a higher and common level of service between computational nodes. Harness is an adaptable, plugin-based middleware framework for parallel and distributed computing. This paper reports recent research and development results on using Harness for real-time distributed computing applications in the context of an industrial environment with the need to perform several safety-critical tasks. The presented work exploits the modular architecture of Harness in conjunction with a lightweight threaded implementation to resolve several real-time issues by adding three new Harness plug-ins to provide a prioritized lightweight execution environment, low-latency communication facilities, and local timestamped event logging.

  8. Distributions of methyl group rotational barriers in polycrystalline organic solids

    SciTech Connect (OSTI)

    Beckmann, Peter A.; Conn, Kathleen G. (Division of Education and Human Services, Neumann University, One Neumann Drive, Aston, Pennsylvania 19014-1298); Mallory, Clelia W. (Department of Chemistry, Bryn Mawr College, 101 North Merion Ave., Bryn Mawr, Pennsylvania 19010-2899); Mallory, Frank B.; Rheingold, Arnold L.; Rotkina, Lolita; Wang, Xianlong (E-mail: wangxianlong@uestc.edu.cn)

    2013-11-28

    We bring together solid state ¹H spin-lattice relaxation rate measurements, scanning electron microscopy, single crystal X-ray diffraction, and electronic structure calculations for two methyl substituted organic compounds to investigate methyl group (CH₃) rotational dynamics in the solid state. Methyl group rotational barrier heights are computed using electronic structure calculations, both in isolated molecules and in molecular clusters mimicking a perfect single crystal environment. The calculations are performed on suitable clusters built from the X-ray diffraction studies. These calculations allow for an estimate of the intramolecular and the intermolecular contributions to the barrier heights. The ¹H relaxation measurements, on the other hand, are performed with polycrystalline samples which have been investigated with scanning electron microscopy. The ¹H relaxation measurements are best fitted with a distribution of activation energies for methyl group rotation and we propose, based on the scanning electron microscopy images, that this distribution arises from molecules near crystallite surfaces or near other crystal imperfections (vacancies, dislocations, etc.). An activation energy characterizing this distribution is compared with a barrier height determined from the electronic structure calculations and a consistent model for methyl group rotation is developed. The compounds are 1,6-dimethylphenanthrene and 1,8-dimethylphenanthrene, and the methyl group barriers being discussed and compared are in the 2-12 kJ mol⁻¹ range.
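
    The "distribution of activation energies" fit has a standard generic form: the observed relaxation rate is an average of a BPP-type rate over the barrier distribution, with an Arrhenius correlation time. The equation below is that generic model with placeholder symbols (C, g(E_a), τ∞); it is not quoted from the paper.

```latex
R_1(\omega_0, T) = \int_0^{\infty} g(E_a)\, C\left[
    \frac{\tau_c}{1+\omega_0^{2}\tau_c^{2}}
  + \frac{4\tau_c}{1+4\omega_0^{2}\tau_c^{2}}
\right] \mathrm{d}E_a,
\qquad
\tau_c = \tau_{\infty}\, e^{E_a / RT}
```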

  9. Computational social dynamic modeling of group recruitment.

    SciTech Connect (OSTI)

    Berry, Nina M.; Lee, Marinna; Pickett, Marc; Turnley, Jessica Glicken; Smrcka, Julianne D.; Ko, Teresa H.; Moy, Timothy David; Wu, Benjamin C.

    2004-01-01

    The Seldon software toolkit combines concepts from agent-based modeling and social science to create a computational social dynamic model for group recruitment. The underlying recruitment model is based on a unique three-level hybrid agent-based architecture that contains simple agents (level one), abstract agents (level two), and cognitive agents (level three). The uniqueness of this architecture begins with abstract agents that permit the model to include social concepts (gang) or institutional concepts (school) in a typical software simulation environment. The future addition of cognitive agents to the recruitment model will provide a unique entity that does not exist in any agent-based modeling toolkits to date. We use social networks to provide an integrated mesh within and between the different levels. This Java-based toolkit is used to analyze different social concepts based on initialization input from the user. The input alters a set of parameters used to influence the values associated with the simple agents, abstract agents, and the interactions (simple agent-simple agent or simple agent-abstract agent) between these entities. The results of the phase-1 Seldon toolkit provide insight into how certain social concepts apply to different scenario development for inner-city gang recruitment.

  10. NERSC seeks Computational Systems Group Lead

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    and advanced development for the supercomputer systems at NERSC (National Energy Research Scientific Computing ... workload demands within hiring and budget constraints. ...

  11. Interoperable PKI Data Distribution in Computational Grids

    SciTech Connect (OSTI)

    Pala, Massimiliano; Cholia, Shreyas; Rea, Scott A.; Smith, Sean W.

    2008-07-25

    One of the most successful working examples of virtual organizations, computational grids need authentication mechanisms that interoperate across domain boundaries. Public Key Infrastructures (PKIs) provide sufficient flexibility to allow resource managers to securely grant access to their systems in such distributed environments. However, as PKIs grow and services are added to enhance both security and usability, users and applications must struggle to discover available resources, particularly when the Certification Authority (CA) is alien to the relying party. This article presents how to overcome these limitations of the current grid authentication model by integrating the PKI Resource Query Protocol (PRQP) into the Grid Security Infrastructure (GSI).

  12. Bio-Derived Liquids to Hydrogen Distributed Reforming Working Group

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    The Bio-Derived Liquids to Hydrogen Distributed Reforming Working Group (BILIWG), launched in October 2006, provides a forum for effective communication and collaboration among participants in DOE Fuel Cell Technologies Office (FCT) cost-shared research directed at distributed bio-liquid reforming. The Working Group includes individuals from DOE, the national laboratories, industry, and academia.

  13. Bio-Derived Liquids to Hydrogen Distributed Reforming Working Group

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Bio-Derived Liquids to Hydrogen Distributed Reforming Working Group (BILIWG), Hydrogen Separation and Purification Working Group (PURIWG) & Hydrogen Production Technical Team: 2007 Annual and Merit Review Reports compiled for the ...

  14. High Performance Computational Biology: A Distributed computing Perspective (2010 JGI/ANL HPC Workshop)

    ScienceCinema (OSTI)

    Konerding, David [Google, Inc.]

    2011-06-08

    David Konerding from Google, Inc. gives a presentation on "High Performance Computational Biology: A Distributed Computing Perspective" at the JGI/Argonne HPC Workshop on January 26, 2010.

  15. Perspectives on distributed computing : thirty people, four user types, and the distributed computing user experience.

    SciTech Connect (OSTI)

    Childers, L.; Liming, L.; Foster, I.; Mathematics and Computer Science; Univ. of Chicago

    2008-10-15

    This report summarizes the methodology and results of a user perspectives study conducted by the Community Driven Improvement of Globus Software (CDIGS) project. The purpose of the study was to document the work-related goals and challenges facing today's scientific technology users, to record their perspectives on Globus software and the distributed-computing ecosystem, and to provide recommendations to the Globus community based on the observations. Globus is a set of open source software components intended to provide a framework for collaborative computational science activities. Rather than attempting to characterize all users or potential users of Globus software, our strategy has been to speak in detail with a small group of individuals in the scientific community whose work appears to be the kind that could benefit from Globus software, learn as much as possible about their work goals and the challenges they face, and describe what we found. The result is a set of statements about specific individuals' experiences. We do not claim that these are representative of a potential user community, but we do claim to have found commonalities and differences among the interviewees that may be reflected in the user community as a whole. We present these as a series of hypotheses that can be tested by subsequent studies, and we offer recommendations to Globus developers based on the assumption that these hypotheses are representative. Specifically, we conducted interviews with thirty technology users in the scientific community. We included both people who have used Globus software and those who have not. We made a point of including individuals who represent a variety of roles in scientific projects, for example, scientists, software developers, engineers, and infrastructure providers. The following material is included in this report: (1) A summary of the reported work-related goals, significant issues, and points of satisfaction with the use of Globus software; (2) ...

  16. Distributed Design and Analysis of Computer Experiments

    Energy Science and Technology Software Center (OSTI)

    2002-11-11

    DDACE is a C++ object-oriented software library for the design and analysis of computer experiments. DDACE can be used to generate samples from a variety of sampling techniques. These samples may be used as input to an application code. DDACE also contains statistical tools such as response surface models and correlation coefficients to analyze input/output relationships between variables in an application code. DDACE can generate input values for uncertain variables within a user's application. For example, a user might like to vary a temperature variable as well as some material variables in a series of simulations. Through the series of simulations the user might be looking for optimal settings of parameters based on some user criteria, or the user may be interested in the sensitivity to input variability shown by an output variable. In either case, the user may provide information about the suspected ranges and distributions of a set of input variables, along with a sampling scheme, and DDACE will generate input points based on these specifications. The input values generated by DDACE and the one or more outputs computed through the user's application code can be analyzed with a variety of statistical methods. This can lead to a wealth of information about the relationships between the variables in the problem. While statistical and mathematical packages may be employed to carry out the analysis on the input/output relationships, DDACE also contains some tools for analyzing the simulation data. DDACE incorporates a software package called MARS (Multivariate Adaptive Regression Splines), developed by Jerome Friedman. MARS is used for generating a spline surface fit of the data. With MARS, a model simplification may be calculated using the input and corresponding output values for the user's application problem. The MARS grid data may be used for generating 3-dimensional response surface plots of the simulation data. DDACE also contains an implementation ...
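
    The workflow described above is easy to picture in a few lines. The sketch below is a minimal stand-in for that loop in plain Python/numpy, not DDACE's actual C++ API: the variable names, the toy_application function, and the two-variable input specification are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical input specification: name -> (distribution, params), standing
# in for the user-supplied ranges and distributions DDACE accepts.
inputs = {
    "temperature": ("uniform", (300.0, 900.0)),   # K
    "conductivity": ("normal", (40.0, 5.0)),      # W/m/K
}

def sample(spec, n):
    """Draw n samples for every uncertain input variable."""
    return {name: (rng.uniform(*p, n) if dist == "uniform" else rng.normal(*p, n))
            for name, (dist, p) in spec.items()}

def toy_application(temperature, conductivity):
    """Stand-in for the user's simulation code."""
    return conductivity * np.sqrt(temperature) + rng.normal(0.0, 1.0, temperature.shape)

x = sample(inputs, 200)
y = toy_application(x["temperature"], x["conductivity"])

# Analyze input/output relationships, as DDACE's statistical tools do.
for name, col in x.items():
    print(name, "correlation with output:", round(np.corrcoef(col, y)[0, 1], 3))
```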

  17. Establishing a group of endpoints in a parallel computer

    DOE Patents [OSTI]

    Archer, Charles J.; Blocksome, Michael A.; Ratterman, Joseph D.; Smith, Brian E.; Xue, Hanhong

    2016-02-02

    A parallel computer executes a number of tasks; each task includes a number of endpoints, and the endpoints are configured to support collective operations. In such a parallel computer, establishing a group of endpoints includes: receiving a user specification of a set of endpoints included in a global collection of endpoints, where the user specification defines the set in accordance with a predefined virtual representation of the endpoints, the predefined virtual representation is a data structure setting forth an organization of tasks and endpoints included in the global collection of endpoints, and the user specification defines the set of endpoints without a user specification of a particular endpoint; and defining a group of endpoints in dependence upon the predefined virtual representation of the endpoints and the user specification.

  18. Working Group Report: Computing for the Intensity Frontier

    SciTech Connect (OSTI)

    Rebel, B.; Sanchez, M.C.; Wolbers, S.

    2013-10-25

    This is the report of the Computing Frontier working group on Lattice Field Theory prepared for the proceedings of the 2013 Community Summer Study ("Snowmass"). We present the future computing needs and plans of the U.S. lattice gauge theory community and argue that continued support of the U.S. (and worldwide) lattice-QCD effort is essential to fully capitalize on the enormous investment in the high-energy physics experimental program. We first summarize the dramatic progress of numerical lattice-QCD simulations in the past decade, with some emphasis on calculations carried out under the auspices of the U.S. Lattice-QCD Collaboration, and describe a broad program of lattice-QCD calculations that will be relevant for future experiments at the intensity and energy frontiers. We then present details of the computational hardware and software resources needed to undertake these calculations.

  19. Parallel Computing Environments and Methods for Power Distribution System Simulation

    SciTech Connect (OSTI)

    Lu, Ning; Taylor, Zachary T.; Chassin, David P.; Guttromson, Ross T.; Studham, Scott S.

    2005-11-10

    The development of cost-effective high-performance parallel computing on multi-processor supercomputers makes it attractive to port excessively time-consuming simulation software from personal computers (PCs) to supercomputers. The power distribution system simulator (PDSS) takes a bottom-up approach and simulates load at the appliance level, where detailed thermal models for appliances are used. This approach works well for a small power distribution system consisting of a few thousand appliances. When the number of appliances increases, the simulation uses up the PC memory and its run time increases to a point where the approach is no longer feasible for modeling a practical large power distribution system. This paper presents an effort made to port the PC-based power distribution system simulator (PDSS) to a 128-processor shared-memory supercomputer. The paper offers an overview of the parallel computing environment and a description of the modifications made to the PDSS model. The performance of the PDSS running on a standalone PC and on the supercomputer is compared. Future research directions for utilizing parallel computing in power distribution system simulation are also addressed.

  20. Clock distribution system for digital computers

    DOE Patents [OSTI]

    Wyman, Robert H.; Loomis, Jr., Herschel H.

    1981-01-01

    Apparatus for eliminating, in each clock distribution amplifier of a clock distribution system, sequential pulse catch-up error due to one pulse "overtaking" a prior clock pulse. The apparatus includes timing means to produce a periodic electromagnetic signal with a fundamental frequency having a fundamental frequency component V'01(t); an array of N signal characteristic detector means, with detector means No. 1 receiving the timing means signal and producing a change-of-state signal V1(t) in response to receipt of a signal above a predetermined threshold; N substantially identical filter means, one filter means being operatively associated with each detector means, for receiving the change-of-state signal Vn(t) and producing a modified change-of-state signal V'n(t) (n = 1, ..., N) having a fundamental frequency component that is substantially proportional to V'01(t - θn(t)) with a cumulative phase shift θn(t) having a time derivative that may be made uniformly and arbitrarily small; and with the detector means n+1 (1 ≤ n ...

  1. First Experiences with LHC Grid Computing and Distributed Analysis

    SciTech Connect (OSTI)

    Fisk, Ian

    2010-12-01

    This presentation covered the experiences of the LHC experiments using grid computing, with a focus on distributed analysis. After many years of development, preparation, exercises, and validation, the LHC (Large Hadron Collider) experiments are in operation. The computing infrastructure has been heavily utilized in the first 6 months of data collection. The general experience of exploiting the grid infrastructure for organized processing and preparation is described, as well as the successes in employing the infrastructure for distributed analysis. Finally, the expected evolution and future plans are outlined.

  2. Computation of glint, glare, and solar irradiance distribution

    SciTech Connect (OSTI)

    Ho, Clifford Kuofei; Khalsa, Siri Sahib Singh

    2015-08-11

    Described herein are technologies pertaining to computing the solar irradiance distribution on a surface of a receiver in a concentrating solar power system or glint/glare emitted from a reflective entity. At least one camera captures images of the Sun and the entity of interest, wherein the images have pluralities of pixels having respective pluralities of intensity values. Based upon the intensity values of the pixels in the respective images, the solar irradiance distribution on the surface of the entity or glint/glare corresponding to the entity is computed.
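
    As a rough illustration of the pixel-intensity idea, the sketch below scales receiver-image intensities by a calibration factor derived from the Sun's image. Everything here (the assumed direct normal irradiance, the synthetic images, the simple mean-based calibration) is invented for illustration; the patented method is more involved.

```python
import numpy as np

rng = np.random.default_rng(1)

DNI = 1000.0  # assumed direct normal irradiance of the Sun, W/m^2 (illustrative)

# Synthetic stand-ins for camera images (intensity counts from an HDR camera).
sun_image = rng.uniform(200.0, 255.0, (32, 32))        # pixels on the solar disk
receiver_image = rng.uniform(0.0, 5000.0, (480, 640))  # pixels on the receiver

# Calibration: W/m^2 per intensity count, from the mean solar-disk intensity.
scale = DNI / sun_image.mean()

irradiance = receiver_image * scale  # irradiance distribution on the receiver
print("peak flux:", round(irradiance.max(), 1), "W/m^2")
```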

  3. A directory service for configuring high-performance distributed computations

    SciTech Connect (OSTI)

    Fitzgerald, S.; Kesselman, C.; Foster, I.

    1997-08-01

    High-performance execution in distributed computing environments often requires careful selection and configuration not only of computers, networks, and other resources but also of the protocols and algorithms used by applications. Selection and configuration in turn require access to accurate, up-to-date information on the structure and state of available resources. Unfortunately, no standard mechanism exists for organizing or accessing such information. Consequently, different tools and applications adopt ad hoc mechanisms, or they compromise their portability and performance by using default configurations. We propose a Metacomputing Directory Service that provides efficient and scalable access to diverse, dynamic, and distributed information about resource structure and state. We define an extensible data model to represent required information and present a scalable, high-performance, distributed implementation. The data representation and application programming interface are adopted from the Lightweight Directory Access Protocol; the data model and implementation are new. We use the Globus distributed computing toolkit to illustrate how this directory service enables the development of more flexible and efficient distributed computing services and applications.

  4. Institutional Computing Executive Group Review of Multi-programmatic & Institutional Computing, Fiscal Year 2005 and 2006

    SciTech Connect (OSTI)

    Langer, S; Rotman, D; Schwegler, E; Folta, P; Gee, R; White, D

    2006-12-18

    The Institutional Computing Executive Group (ICEG) review of FY05-06 Multiprogrammatic and Institutional Computing (M&IC) activities is presented in the attached report. In summary, we find that the M&IC staff does an outstanding job of acquiring and supporting a wide range of institutional computing resources to meet the programmatic and scientific goals of LLNL. The responsiveness and high quality of support given to users and the programs investing in M&IC reflect the dedication and skill of the M&IC staff. M&IC has successfully managed serial capacity, parallel capacity, and capability computing resources. Serial capacity computing supports a wide range of scientific projects which require access to a few high performance processors within a shared memory computer. Parallel capacity computing supports scientific projects that require a moderate number of processors (up to roughly 1000) on a parallel computer. Capability computing supports parallel jobs that push the limits of simulation science. M&IC has worked closely with Stockpile Stewardship, and together they have made LLNL a premier institution for computational and simulation science. Such a standing is vital to the continued success of laboratory science programs and to the recruitment and retention of top scientists. This report provides recommendations to build on M&IC's accomplishments and improve simulation capabilities at LLNL. We recommend that the institution fully fund (1) operation of the atlas cluster purchased in FY06 to support a few large projects; (2) operation of the thunder and zeus clusters to enable 'mid-range' parallel capacity simulations during normal operation and a limited number of large simulations during dedicated application time; (3) operation of the new yana cluster to support a wide range of serial capacity simulations; (4) improvements to the reliability and performance of the Lustre parallel file system; and (5) support for the new GDO petabyte ...

  5. Gaussian distributions, Jacobi group, and Siegel-Jacobi space

    SciTech Connect (OSTI)

    Molitor, Mathieu

    2014-12-15

    Let N be the space of Gaussian distribution functions over ℝ, regarded as a 2-dimensional statistical manifold parameterized by the mean μ and the deviation σ. In this paper, we show that the tangent bundle of N, endowed with its natural Kähler structure, is the Siegel-Jacobi space appearing in the context of Number Theory and Jacobi forms. Geometrical aspects of the Siegel-Jacobi space are discussed in detail (completeness, curvature, group of holomorphic isometries, space of Kähler functions, and relationship to the Jacobi group), and are related to the quantum formalism in its geometrical form, i.e., based on the Kähler structure of the complex projective space. This paper is a continuation of our previous work [M. Molitor, “Remarks on the statistical origin of the geometrical formulation of quantum mechanics,” Int. J. Geom. Methods Mod. Phys. 9(3), 1220001, 9 (2012); M. Molitor, “Information geometry and the hydrodynamical formulation of quantum mechanics,” e-print arXiv (2012); M. Molitor, “Exponential families, Kähler geometry and quantum mechanics,” J. Geom. Phys. 70, 54–80 (2013)], where we studied the quantum formalism from a geometric and information-theoretical point of view.

  6. Scalable error correction in distributed ion trap computers

    SciTech Connect (OSTI)

    Oi, Daniel K. L.; Devitt, Simon J.; Hollenberg, Lloyd C. L.

    2006-11-15

    A major challenge for quantum computation in ion trap systems is scalable integration of error correction and fault tolerance. We analyze a distributed architecture with rapid high-fidelity local control within nodes and entangled links between nodes alleviating long-distance transport. We demonstrate fault-tolerant operator measurements which are used for error correction and nonlocal gates. This scheme is readily applied to linear ion traps which cannot be scaled up beyond a few ions per individual trap but which have access to a probabilistic entanglement mechanism. A proof-of-concept system is presented which is within the reach of current experiment.

  7. GAiN: Distributed Array Computation with Python

    SciTech Connect (OSTI)

    Daily, Jeffrey A.

    2009-04-24

    Scientific computing makes use of very large, multidimensional numerical arrays - typically, gigabytes to terabytes in size - much larger than can fit on even the largest single compute node. Such arrays must be distributed across a "cluster" of nodes. Global Arrays is a cluster-based software system from Battelle Pacific Northwest National Laboratory that enables an efficient, portable, and parallel shared-memory programming interface to manipulate these arrays. Written in and for the C and FORTRAN programming languages, it takes advantage of high-performance cluster interconnections to allow any node in the cluster to access data on any other node very rapidly. The "numpy" module is the de facto standard for numerical calculation in the Python programming language, a language whose use is growing rapidly in the scientific and engineering communities. numpy provides a powerful N-dimensional array class as well as other scientific computing capabilities. However, like the majority of the core Python modules, numpy is inherently serial. Our system, GAiN (Global Arrays in NumPy), is a parallel extension to Python that accesses Global Arrays through numpy. This allows parallel processing and/or larger problem sizes to be harnessed almost transparently within new or existing numpy programs.
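
    The point of GAiN is that existing numpy programs keep their shape while the arrays become distributed. Below is a minimal sketch of that drop-in idea; the `gain` import path is assumed rather than taken from the abstract, and the fallback keeps the sketch runnable on a single node.

```python
# Drop-in idea behind GAiN: a numpy-compatible module whose arrays are backed
# by Global Arrays and partitioned across cluster nodes.
try:
    import gain as np   # hypothetical distributed numpy (module path assumed)
except ImportError:
    import numpy as np  # serial fallback so the sketch runs on a single node

a = np.zeros((1000, 1000))  # under GAiN, partitioned across cluster nodes
a += 1.0                    # each process would update only its local block
print(a.sum())              # under GAiN, a collective reduction
```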

  8. Inferring Group Processes from Computer-Mediated Affective Text Analysis

    SciTech Connect (OSTI)

    Schryver, Jack C; Begoli, Edmon; Jose, Ajith; Griffin, Christopher

    2011-02-01

    Political communications in the form of unstructured text convey rich connotative meaning that can reveal underlying group social processes. Previous research has focused on sentiment analysis at the document level, but we extend this analysis to sub-document levels through a detailed analysis of affective relationships between entities extracted from a document. Instead of pure sentiment analysis, which is just positive or negative, we explore nuances of affective meaning in 22 affect categories. Our affect propagation algorithm automatically calculates and displays extracted affective relationships among entities in graphical form in our prototype (TEAMSTER), starting with seed lists of affect terms. Several useful metrics are defined to infer underlying group processes by aggregating affective relationships discovered in a text. Our approach has been validated with annotated documents from the MPQA corpus, achieving a performance gain of 74% over comparable random guessers.

  9. Reviews of computing technology: Fiber distributed data interface

    SciTech Connect (OSTI)

    Johnson, A.J.

    1991-12-01

    Fiber Distributed Data Interface, more commonly known as FDDI, is the name of the standard that describes a new local area network (LAN) technology for the '90s. This technology is based on fiber optic communications and, at a data transmission rate of 100 million bits per second (Mbps), provides a full order of magnitude improvement over previous LAN standards such as Ethernet and Token Ring. FDDI as a standard has been accepted by all major computer manufacturers and is a national standard as defined by the American National Standards Institute (ANSI). FDDI will become part of the US Government Open Systems Interconnection Profile (GOSIP) under Version 3 GOSIP and will become an international standard promoted by the International Standards Organization (ISO). It is important to note that there are no competing standards for high-performance LANs, so FDDI acceptance is nearly universal. This technology report describes FDDI as a technology, looks at the applications of this technology, examines the current economics of using it, and describes activities and plans by the Information Resource Management (IRM) department to implement this technology at the Savannah River Site.

  10. Providing nearest neighbor point-to-point communications among compute nodes of an operational group in a global combining network of a parallel computer

    DOE Patents [OSTI]

    Archer, Charles J.; Faraj, Ahmad A.; Inglett, Todd A.; Ratterman, Joseph D.

    2012-10-23

    Methods, apparatus, and products are disclosed for providing nearest neighbor point-to-point communications among compute nodes of an operational group in a global combining network of a parallel computer, each compute node connected to each adjacent compute node in the global combining network through a link, that include: identifying each link in the global combining network for each compute node of the operational group; designating one of a plurality of point-to-point class routing identifiers for each link such that no compute node in the operational group is connected to two adjacent compute nodes in the operational group with links designated for the same class routing identifiers; and configuring each compute node of the operational group for point-to-point communications with each adjacent compute node in the global combining network through the link between that compute node and that adjacent compute node using that link's designated class routing identifier.
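
    The designation rule above (no node may have two of its links carrying the same class routing identifier) is a proper edge coloring of the network graph; on a tree such as a global combining network, a greedy pass suffices. The sketch below illustrates that constraint only and is not the patented method itself.

```python
def designate_class_routes(links):
    """Greedy proper edge coloring: assign each link an identifier such that
    no node ends up with two of its links sharing the same identifier."""
    used = {}          # node -> identifiers already designated on its links
    designation = {}
    for a, b in links:
        taken = used.setdefault(a, set()) | used.setdefault(b, set())
        ident = next(i for i in range(len(links)) if i not in taken)
        designation[(a, b)] = ident
        used[a].add(ident)
        used[b].add(ident)
    return designation

# A tiny binary-tree combining network: node 0 is the root.
links = [(0, 1), (0, 2), (1, 3), (1, 4), (2, 5), (2, 6)]
print(designate_class_routes(links))
```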

  11. Bio-Derived Liquids to Hydrogen Distributed Reforming Working Group Background Paper

    Broader source: Energy.gov [DOE]

    Paper by Arlene Anderson and Tracy Carole presented at the Bio-Derived Liquids to Hydrogen Distributed Reforming Working Group, with a focus on key drivers, purpose, and scope.

  12. Bio-Derived Liquids to Hydrogen Distributed Reforming Working Group Meeting

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Bio-Derived Liquids to Hydrogen Distributed Reforming Working Group Meeting - November 2007. The Bio-Derived Liquids to Hydrogen Distributed Reforming Working Group participated in a Hydrogen Production Technical Team Research Review on November 6, 2007. The meeting provided the opportunity for researchers to share their experiences in converting bio-derived liquids to hydrogen with members of the Department of Energy Hydrogen ...

  13. High-performance, distributed computing software libraries and services

    Energy Science and Technology Software Center (OSTI)

    2002-01-24

    The Globus toolkit provides basic Grid software infrastructure (i.e. middleware), to facilitate the development of applications which securely integrate geographically separated resources, including computers, storage systems, instruments, immersive environments, etc.

  14. NERSC User's Group Meeting 2.4.14 Computational Facilities: NERSC

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    NERSC User's Group Meeting, 2.4.14. Computational Facilities: NERSC. "Conformational change in biology: from amino acids to enzymes and molecular motors," Victor Ovchinnikov. Collaborators: Martin Karplus, Eric Vanden-Eijnden, Kwangho Nam, Anne Houdusse, Robert Sauer. Financial support: NIH. Introduction: conformational motions in biomolecules define all living things - transport across membranes, enzyme reactions (from proton transfer to DNA replication and repair) ...

  15. Reviews of computing technology: Fiber distributed data interface

    SciTech Connect (OSTI)

    Johnson, A.J.

    1992-04-01

    This technology report describes Fiber Distributed Data Interface (FDDI) as a technology, looks at the applications of this technology, examines the current economics of using it, and describes activities and plans by the Information Resource Management Department to implement this technology at the Savannah River Site.

  16. Bio-Derived Liquids to Hydrogen Distributed Reforming Working Group Kick-Off Meeting

    Broader source: Energy.gov [DOE]

    The U.S. Department of Energy held a kick-off meeting for the Bio-Derived Liquids to Hydrogen Distributed Reforming Working Group (BILIWG) on October 24, 2006, in Baltimore, Maryland. The Working Group is addressing technical challenges to distributed reforming of biomass-derived, renewable liquid fuels to hydrogen, including the reforming, water-gas shift, and hydrogen recovery and purification steps. The meeting provided the opportunity for researchers to share their experiences in converting bio-derived liquids to hydrogen with each other and with members of the DOE Hydrogen Production Technical Team.

  17. Data-aware distributed scientific computing for big-data problems...

    Office of Scientific and Technical Information (OSTI)

    Title: Data-aware distributed scientific computing for big-data problems in bio-surveillance ...

  18. A secure communications infrastructure for high-performance distributed computing

    SciTech Connect (OSTI)

    Foster, I.; Koenig, G.; Tuecke, S.

    1997-08-01

    Applications that use high-speed networks to connect geographically distributed supercomputers, databases, and scientific instruments may operate over open networks and access valuable resources. Hence, they can require mechanisms for ensuring integrity and confidentiality of communications and for authenticating both users and resources. Security solutions developed for traditional client-server applications do not provide direct support for the program structures, programming tools, and performance requirements encountered in these applications. The authors address these requirements via a security-enhanced version of the Nexus communication library, which they use to provide secure versions of parallel libraries and languages, including the Message Passing Interface. These tools permit a fine degree of control over what, where, and when security mechanisms are applied. In particular, a single application can mix secure and nonsecure communication, allowing the programmer to make fine-grained security/performance tradeoffs. The authors present performance results that quantify the performance of their infrastructure.
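
    The per-communication choice described above (secure for some channels, plain for others) can be pictured with Python's standard ssl module. This is an illustration of the trade-off only and is unrelated to Nexus's real interface; the hostnames and ports are placeholders.

```python
import socket
import ssl

def open_channel(host, port, secure):
    """Open either a TLS-protected or a plain TCP channel to a peer,
    mirroring a per-channel security/performance trade-off."""
    sock = socket.create_connection((host, port))
    if secure:
        ctx = ssl.create_default_context()
        return ctx.wrap_socket(sock, server_hostname=host)
    return sock

# One application mixing both kinds of channel (placeholder endpoints):
# control = open_channel("gridnode.example.org", 9443, secure=True)   # authenticated
# bulk    = open_channel("gridnode.example.org", 9000, secure=False)  # fast path
```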

  19. Providing full point-to-point communications among compute nodes of an operational group in a global combining network of a parallel computer

    DOE Patents [OSTI]

    Archer, Charles J; Faraj, Ahmad A; Inglett, Todd A; Ratterman, Joseph D

    2013-04-16

    Methods, apparatus, and products are disclosed for providing full point-to-point communications among compute nodes of an operational group in a global combining network of a parallel computer, each compute node connected to each adjacent compute node in the global combining network through a link, that include: receiving a network packet in a compute node, the network packet specifying a destination compute node; selecting, in dependence upon the destination compute node, at least one of the links for the compute node along which to forward the network packet toward the destination compute node; and forwarding the network packet along the selected link to the adjacent compute node connected to the compute node through the selected link.

  20. Configuring compute nodes of a parallel computer in an operational group into a plurality of independent non-overlapping collective networks

    DOE Patents [OSTI]

    Archer, Charles J.; Inglett, Todd A.; Ratterman, Joseph D.; Smith, Brian E.

    2010-03-02

    Methods, apparatus, and products are disclosed for configuring compute nodes of a parallel computer in an operational group into a plurality of independent non-overlapping collective networks, the compute nodes in the operational group connected together for data communications through a global combining network, that include: partitioning the compute nodes in the operational group into a plurality of non-overlapping subgroups; designating one compute node from each of the non-overlapping subgroups as a master node; and assigning, to the compute nodes in each of the non-overlapping subgroups, class routing instructions that organize the compute nodes in that non-overlapping subgroup as a collective network such that the master node is a physical root.
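
    A toy version of the partitioning step follows; the round-robin split and the first-member-as-master convention are arbitrary choices for illustration, not the patent's procedure.

```python
def partition(ranks, n_subgroups):
    """Split node ranks into non-overlapping subgroups; the first member of
    each subgroup is designated its master (physical root)."""
    subgroups = [ranks[i::n_subgroups] for i in range(n_subgroups)]
    return [{"master": sg[0], "members": sg} for sg in subgroups if sg]

for sg in partition(list(range(16)), 4):
    print(sg)
```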

  1. Computational study of ion distributions at the air/liquid methanol interface

    SciTech Connect (OSTI)

    Sun, Xiuquan; Wick, Collin D.; Dang, Liem X.

    2011-06-16

    Molecular dynamics simulations with polarizable potentials were performed to systematically investigate the distribution of NaCl, NaBr, NaI, and SrCl2 at the air/liquid methanol interface. The density profiles indicated that there is no substantial enhancement of anions at the interface for the NaX systems, in contrast to what was observed at the air/aqueous interface. The surfactant-like shape of the larger, more polarizable halide anions is compensated by the surfactant nature of methanol itself. As a result, methanol hydroxyl groups interacted strongly with the side of the polarizable anions toward which their induced dipoles point, and methanol methyl groups were more likely to be found near the positive pole of the anion induced dipoles. Furthermore, salts were found to disrupt the surface structure of methanol, reducing the observed enhancement of methyl groups at the outer edge of the air/liquid methanol interface. With the addition of salts to methanol, the computed surface potentials increased, which is in contrast to what is observed in corresponding aqueous systems, where the surface potential decreases with the addition of salts. Both of these trends have been indirectly observed with experiments. This was found to be due to the propensity of anions for the air/water interface that is not present at the air/liquid methanol interface. This work was supported by the US Department of Energy Basic Energy Sciences' Chemical Sciences, Geosciences & Biosciences Division. Pacific Northwest National Laboratory is operated by Battelle for the US Department of Energy.

  2. Probing the structure of complex solids using a distributed computing approach-Applications in zeolite science

    SciTech Connect (OSTI)

    French, Samuel A.; Coates, Rosie; Lewis, Dewi W.; Catlow, C. Richard A.

    2011-06-15

    We demonstrate the viability of distributed computing techniques employing idle desktop computers in investigating complex structural problems in solids. Through the use of a combined Monte Carlo and energy minimisation method, we show how a large parameter space can be effectively scanned. By controlling the generation and running of different configurations through a database engine, we are able not only to analyse the data 'on the fly' but also to direct the running of jobs and the algorithms for generating further structures. As an exemplar case, we probe the distribution of Al and extra-framework cations in the structure of the zeolite Mordenite. We compare our computed unit cells with experiment and find that whilst there is excellent correlation between computed and experimentally derived unit cell volumes, cation positioning and short-range Al ordering (i.e. near neighbour environment), there remains some discrepancy in the distribution of Al throughout the framework. We also show that stability-structure correlations only become apparent once a sufficiently large sample is used. Graphical Abstract: Aluminium distributions in zeolites are determined using e-science methods. Highlights: use of e-science methods to search configurational space; automated control of space searching; identification of key structural features conveying stability; improved correlation of computed structures with experimental data.

  3. Efficient computation of stress and load distribution for external cylindrical gears

    SciTech Connect (OSTI)

    Zhang, J.J.; Esat, I.I.; Shi, Y.H.

    1996-12-31

    It has been extensively realized that tooth flank correction is an effective technique to improve load carrying capacity and running behavior of gears. However, the existing analytical methods of load distribution are not very satisfactory. They are either too simplified to produce accurate results or computationally too expensive. In this paper, we propose a new approach which computes the load and stress distribution of external involute gears efficiently and accurately. It adopts the "thin-slice" model and 2D FEA technique and takes into account the varying meshing stiffness.

  4. Acidity of the amidoxime functional group in aqueous solution. A combined experimental and computational study

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Mehio, Nada; Lashely, Mark A.; Nugent, Joseph W.; Tucker, Lyndsay; Correia, Bruna; Do-Thanh, Chi-Linh; Dai, Sheng; Hancock, Robert D.; Bryantsev, Vyacheslav S.

    2015-01-01

    Poly(acrylamidoxime) adsorbents are often invoked in discussions of mining uranium from seawater. It has been demonstrated repeatedly in the literature that the success of these materials is due to the amidoxime functional group. While the amidoxime-uranyl chelation mode has been established, a number of essential binding constants remain unclear. This is largely due to the wide range of conflicting pKa values that have been reported for the amidoxime functional group in the literature. To resolve this existing controversy we investigated the pKa values of the amidoxime functional group using a combination of experimental and computational methods. Experimentally, we used spectroscopic titrations to measure the pKa values of representative amidoximes, acetamidoxime and benzamidoxime. Computationally, we report on the performance of several protocols for predicting the pKa values of aqueous oxoacids. Calculations carried out at the MP2 or M06-2X levels of theory combined with solvent effects calculated using the SMD model provide the best overall performance with a mean absolute error of 0.33 pKa units and 0.35 pKa units, respectively, and a root mean square deviation of 0.46 pKa units and 0.45 pKa units, respectively. Finally, we employ our two best methods to predict the pKa values of promising, uncharacterized amidoxime ligands. Hence, our study provides a convenient means for screening suitable amidoxime monomers for future generations of poly(acrylamidoxime) adsorbents used to mine uranium from seawater.

  5. Acidity of the amidoxime functional group in aqueous solution. A combined experimental and computational study

    SciTech Connect (OSTI)

    Mehio, Nada; Lashely, Mark A.; Nugent, Joseph W.; Tucker, Lyndsay; Correia, Bruna; Do-Thanh, Chi-Linh; Dai, Sheng; Hancock, Robert D.; Bryantsev, Vyacheslav S.

    2015-01-26

    Poly(acrylamidoxime) adsorbents are often invoked in discussions of mining uranium from seawater. It has been demonstrated repeatedly in the literature that the success of these materials is due to the amidoxime functional group. While the amidoxime-uranyl chelation mode has been established, a number of essential binding constants remain unclear. This is largely due to the wide range of conflicting pKa values that have been reported for the amidoxime functional group in the literature. To resolve this existing controversy we investigated the pKa values of the amidoxime functional group using a combination of experimental and computational methods. Experimentally, we used spectroscopic titrations to measure the pKa values of representative amidoximes, acetamidoxime and benzamidoxime. Computationally, we report on the performance of several protocols for predicting the pKa values of aqueous oxoacids. Calculations carried out at the MP2 or M06-2X levels of theory combined with solvent effects calculated using the SMD model provide the best overall performance with a mean absolute error of 0.33 pKa units and 0.35 pKa units, respectively, and a root mean square deviation of 0.46 pKa units and 0.45 pKa units, respectively. Finally, we employ our two best methods to predict the pKa values of promising, uncharacterized amidoxime ligands. Hence, our study provides a convenient means for screening suitable amidoxime monomers for future generations of poly(acrylamidoxime) adsorbents used to mine uranium from seawater.
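
    For readers wanting to reproduce the kind of error statistics quoted above, the definitions are simple; the experimental/computed pKa pairs below are invented placeholders, not data from the paper.

```python
import math

# Invented (experimental, computed) pKa pairs, only to illustrate the metrics.
pairs = [(5.78, 6.10), (4.88, 4.60), (5.20, 5.55), (6.00, 5.70)]

errors = [calc - exp for exp, calc in pairs]
mae = sum(abs(e) for e in errors) / len(errors)
rmsd = math.sqrt(sum(e * e for e in errors) / len(errors))
print(f"MAE = {mae:.2f} pKa units, RMSD = {rmsd:.2f} pKa units")
```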

  6. Methods and apparatuses for information analysis on shared and distributed computing systems

    DOE Patents [OSTI]

    Bohn, Shawn J [Richland, WA; Krishnan, Manoj Kumar [Richland, WA; Cowley, Wendy E [Richland, WA; Nieplocha, Jarek [Richland, WA

    2011-02-22

    Apparatuses and computer-implemented methods for analyzing, on shared and distributed computing systems, information comprising one or more documents are disclosed according to some aspects. In one embodiment, information analysis can comprise distributing one or more distinct sets of documents among each of a plurality of processes, wherein each process performs operations on a distinct set of documents substantially in parallel with other processes. Operations by each process can further comprise computing term statistics for terms contained in each distinct set of documents, thereby generating a local set of term statistics for each distinct set of documents. Still further, operations by each process can comprise contributing the local sets of term statistics to a global set of term statistics, and participating in generating a major term set from an assigned portion of a global vocabulary.
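
    The local-then-global pattern in this patent has the familiar map-reduce shape. A minimal sketch with Python's multiprocessing follows; the document sets, the whitespace tokenizer, and the Counter-based merge are illustrative choices, not the patented implementation.

```python
from collections import Counter
from multiprocessing import Pool

def local_term_stats(documents):
    """One process's pass: term statistics for its distinct set of documents."""
    stats = Counter()
    for doc in documents:
        stats.update(doc.lower().split())
    return stats

if __name__ == "__main__":
    # Each distinct document set is processed in parallel; the local counts
    # are then contributed to a global set of term statistics.
    doc_sets = [["the cat sat", "the dog ran"], ["a cat ran", "the cat slept"]]
    with Pool(2) as pool:
        local_stats = pool.map(local_term_stats, doc_sets)
    global_stats = sum(local_stats, Counter())
    print(global_stats.most_common(3))
```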

  7. The Essential Role of New Network Services for High Performance Distributed Computing - PARENG.CivilComp.2011

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Second International Conference on Parallel, Distributed, Grid and Cloud Computing for Engineering, 12-15 April 2011, Ajaccio, Corsica, France. In "Trends in Parallel, Distributed, Grid and Cloud Computing for Engineering," edited by P. Iványi and B.H.V. Topping, Civil-Comp Press. "Network Services for High Performance Distributed Computing and Data Management," W. E. Johnston, C. Guok, J. Metzger, and B. Tierney, ESnet and Lawrence Berkeley National Laboratory, Berkeley, California, U.S.A.

  8. Assigning unique identification numbers to new user accounts and groups in a computing environment with multiple registries

    DOE Patents [OSTI]

    DeRobertis, Christopher V.; Lu, Yantian T.

    2010-02-23

    A method, system, and program storage device for creating a new user account or user group with a unique identification number in a computing environment having multiple user registries is provided. In response to receiving a command to create a new user account or user group, an operating system of a clustered computing environment automatically checks multiple registries configured for the operating system to determine whether a candidate identification number for the new user account or user group has been assigned already to one or more existing user accounts or groups, respectively. The operating system automatically assigns the candidate identification number to the new user account or user group created in a target user registry if the checking indicates that the candidate identification number has not been assigned already to any of the existing user accounts or user groups, respectively.
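
    The uniqueness check reduces to scanning every configured registry before assignment. A toy sketch follows; the dictionary registries and the linear probe are illustrative stand-ins for the patented operating-system mechanism.

```python
# Each registry maps numeric IDs to existing accounts or groups.
registries = {
    "local": {1001: "alice", 1002: "bob"},
    "ldap":  {1002: "builds", 2001: "carol"},
}

def next_free_id(candidate):
    """Advance past any ID already assigned in any configured registry."""
    while any(candidate in reg for reg in registries.values()):
        candidate += 1
    return candidate

new_uid = next_free_id(1001)
registries["local"][new_uid] = "newuser"  # create in the target registry
print("assigned", new_uid)
```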

  9. Targeting Atmospheric Simulation Algorithms for Large Distributed Memory GPU Accelerated Computers

    SciTech Connect (OSTI)

    Norman, Matthew R

    2013-01-01

    Computing platforms are increasingly moving to accelerated architectures, and here we deal particularly with GPUs. In [15], a method was developed for atmospheric simulation to improve efficiency on large distributed memory machines by reducing communication demand and increasing the time step. Here, we improve upon this method to further target GPU-accelerated platforms by reducing GPU memory accesses, removing a synchronization point, and better clustering computations. The modification ran over two times faster in some cases even though more computations were required, demonstrating the merit of improving memory handling on the GPU. Furthermore, we discover that the modification also has a near 100% hit rate in fast on-chip L1 cache and discuss the reasons for this. In conclusion, we remark on further potential improvements to GPU efficiency.

  10. System design and algorithmic development for computational steering in distributed environments

    SciTech Connect (OSTI)

    Wu, Qishi; Zhu, Mengxia; Gu, Yi; Rao, Nageswara S

    2010-03-01

    Supporting visualization pipelines over wide-area networks is critical to enabling large-scale scientific applications that require visual feedback to interactively steer online computations. We propose a remote computational steering system that employs analytical models to estimate the cost of computing and communication components and optimizes the overall system performance in distributed environments with heterogeneous resources. We formulate and categorize the visualization pipeline configuration problems for maximum frame rate into three classes according to the constraints on node reuse or resource sharing, namely no, contiguous, and arbitrary reuse. We prove all three problems to be NP-complete and present heuristic approaches based on a dynamic programming strategy. The superior performance of the proposed solution is demonstrated with extensive simulation results in comparison with existing algorithms and is further evidenced by experimental results collected on a prototype implementation deployed over the Internet.
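
    To make the optimization concrete, here is a small dynamic program for one flavor of the problem: mapping a linear pipeline onto a chain of nodes with contiguous assignment so that the bottleneck stage time (the inverse of the frame rate) is minimized. The cost tables are invented placeholders, and the paper's actual formulations and heuristics are richer than this sketch.

```python
from functools import lru_cache

# compute[i][j]: time of pipeline module i on node j; transfer[j]: time to move
# a frame over the link between nodes j and j+1 (all values are placeholders).
compute = [[3, 2, 5], [4, 1, 2], [2, 2, 1], [6, 3, 2]]
transfer = [1, 2]
m, n = len(compute), 3

@lru_cache(None)
def dp(i, j):
    """Min bottleneck assigning modules i..m-1 contiguously to nodes j..n-1."""
    if j == n - 1:                      # remaining modules share the last node
        return sum(compute[k][j] for k in range(i, m))
    best, group = float("inf"), 0
    for k in range(i, m):               # node j runs modules i..k
        group += compute[k][j]
        rest = max(transfer[j], dp(k + 1, j + 1)) if k + 1 < m else 0
        best = min(best, max(group, rest))
    return best

print("min bottleneck stage time:", dp(0, 0))   # frame rate = 1 / bottleneck
```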

  11. Next Generation Workload Management System For Big Data on Heterogeneous Distributed Computing

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Klimentov, A.; Buncic, P.; De, K.; Jha, S.; Maeno, T.; Mount, R.; Nilsson, P.; Oleynik, D.; Panitkin, S.; Petrosyan, A.; et al

    2015-05-22

    The Large Hadron Collider (LHC), operating at the international CERN Laboratory in Geneva, Switzerland, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe, and were recently credited for the discovery of a Higgs boson. ATLAS and ALICE are the largest collaborations ever assembled in the sciences and are at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, both experiments rely on a heterogeneous distributed computational infrastructure. The ATLAS experiment uses the PanDA (Production and Data Analysis) Workload Management System (WMS) for managing the workflow for all data processing on hundreds of data centers. Through PanDA, ATLAS physicists see a single computing facility that enables rapid scientific breakthroughs for the experiment, even though the data centers are physically scattered all over the world. The scale is demonstrated by the following numbers: PanDA manages O(10^2) sites, O(10^5) cores, O(10^8) jobs per year, and O(10^3) users, and the ATLAS data volume is O(10^17) bytes. In 2013 we started an ambitious program to expand PanDA to all available computing resources, including opportunistic use of commercial and academic clouds and Leadership Computing Facilities (LCF). The project titled 'Next Generation Workload Management and Analysis System for Big Data' (BigPanDA) is funded by DOE ASCR and HEP. Extending PanDA to clouds and LCF presents new challenges in managing heterogeneity and supporting workflow. The BigPanDA project is underway to set up and tailor PanDA at the Oak Ridge Leadership Computing Facility (OLCF) and at the National Research Center "Kurchatov Institute" together with ALICE distributed computing and ORNL computing professionals. Our approach to integration of HPC platforms at the OLCF and elsewhere is to reuse, as much as possible, existing components of the PanDA system.

  12. Models the Electromagnetic Response of a 3D Distribution using MP COMPUTERS

    Energy Science and Technology Software Center (OSTI)

    1999-05-01

    EM3D models the electromagnetic response of a 3D distribution of conductivity, dielectric permittivity, and magnetic permeability within the earth for geophysical applications using massively parallel computers. The simulations are carried out in the frequency domain for either electric or magnetic sources, for either scattered or total field formulations of Maxwell's equations. The solution is based on the method of finite differences and includes absorbing boundary conditions so that responses can be modeled up into the radar range where wave propagation is dominant. Recent upgrades in the software include the incorporation of finite size sources, in addition to dipolar source fields, and a low induction number preconditioner that can significantly reduce computational run times. A graphical user interface (GUI) is bundled with the software so that complicated 3D models can be easily constructed and simulated with the software. The GUI also allows for plotting of the output.

  13. High-Performance Computation of Distributed-Memory Parallel 3D Voronoi and Delaunay Tessellation

    SciTech Connect (OSTI)

    Peterka, Tom; Morozov, Dmitriy; Phillips, Carolyn

    2014-11-14

    Computing a Voronoi or Delaunay tessellation from a set of points is a core part of the analysis of many simulated and measured datasets: N-body simulations, molecular dynamics codes, and LIDAR point clouds are just a few examples. Such computational geometry methods are common in data analysis and visualization; but as the scale of simulations and observations surpasses billions of particles, the existing serial and shared-memory algorithms no longer suffice. A distributed-memory scalable parallel algorithm is the only feasible approach. The primary contribution of this paper is a new parallel Delaunay and Voronoi tessellation algorithm that automatically determines which neighbor points need to be exchanged among the subdomains of a spatial decomposition. Other contributions include periodic and wall boundary conditions, comparison of our method using two popular serial libraries, and application to numerous science datasets.
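
    The heart of such an algorithm is that each subdomain tessellates its own points plus "ghost" copies of nearby points owned by neighboring subdomains. A toy two-subdomain version using scipy follows; the fixed ghost width is a stand-in for the paper's automatic determination of which neighbor points to exchange.

```python
import numpy as np
from scipy.spatial import Delaunay

rng = np.random.default_rng(2)
points = rng.random((1000, 2))  # synthetic particle positions in the unit square

# Two subdomains split at x = 0.5. Each side also receives ghost points from a
# band on the other side of the cut (a fixed width, for illustration only).
ghost = 0.05
left  = points[points[:, 0] <  0.5 + ghost]
right = points[points[:, 0] >= 0.5 - ghost]

# Each subdomain tessellates locally; cells near the cut come out consistent
# because of the exchanged ghost points.
tri_left, tri_right = Delaunay(left), Delaunay(right)
print(len(tri_left.simplices), "+", len(tri_right.simplices), "local cells")
```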

  14. 16th Department of Energy Computer Security Group Training Conference: Proceedings

    SciTech Connect (OSTI)

    Not Available

    1994-04-01

    Various topics on computer security are presented. Integrity standards, smartcard systems, network firewalls, encryption systems, cryptography, computer security programs, multilevel security guards, electronic mail privacy, the Central Intelligence Agency, internet security, and high-speed ATM networking are typical examples of discussed topics. Individual papers are indexed separately.

  15. Bio-Derived Liquids to Hydrogen Distributed Reforming Working Group (BILIWG) Kick-Off Meeting Proceedings Hilton Garden Inn-BWI,Baltimore, MD October 24, 2006

    Broader source: Energy.gov [DOE]

    Proceedings from the October 24, 2006 Bio-Derived Liquids to Hydrogen Distributed Reforming Working Group Kick-Off Meeting.

  16. Integrated Computing, Communication, and Distributed Control of Deregulated Electric Power Systems

    SciTech Connect (OSTI)

    Bajura, Richard; Feliachi, Ali

    2008-09-24

    Restructuring of the electricity market has affected all aspects of the power industry from generation to transmission, distribution, and consumption. Transmission circuits, in particular, are stressed, often pushed toward their stability limits, because environmental concerns and financial risk make new transmission lines difficult to build. Deregulation has resulted in the need for tighter control strategies to maintain reliability even in the event of considerable structural changes, such as loss of a large generating unit or a transmission line, and changes in loading conditions due to the continuously varying power consumption. Our research efforts under the DOE EPSCoR Grant focused on Integrated Computing, Communication and Distributed Control of Deregulated Electric Power Systems. This research is applicable to operating and controlling modern electric energy systems. The controls developed by APERC provide for a more efficient, economical, reliable, and secure operation of these systems. Under this program, we developed distributed control algorithms suitable for large-scale geographically dispersed power systems and also economic tools to evaluate their effectiveness and impact on power markets. Progress was made in the development of distributed intelligent control agents for reliable and automated operation of integrated electric power systems. The methodologies employed combine information technology, control and communication, agent technology, and power systems engineering in the development of intelligent control agents for reliable and automated operation of integrated electric power systems. In the event of scheduled load changes or unforeseen disturbances, the power system is expected to minimize the effects and costs of disturbances and to maintain critical infrastructure operational.

  17. A Distributed OpenCL Framework using Redundant Computation and Data Replication

    SciTech Connect (OSTI)

    Kim, Junghyun; Gangwon, Jo; Jaehoon, Jung; Lee, Jaejin

    2016-01-01

    Applications written solely in OpenCL or CUDA cannot execute on a cluster as a whole. Most previous approaches that extend these programming models to clusters are based on a common idea: designating a centralized host node and coordinating the other nodes with the host for computation. However, the centralized host node becomes a serious performance bottleneck when the number of nodes is large. In this paper, we propose a scalable and distributed OpenCL framework called SnuCL-D for large-scale clusters. SnuCL-D's remote device virtualization provides an OpenCL application with the illusion that all compute devices in a cluster reside in a single node. To reduce the amount of control-message and data communication between nodes, SnuCL-D replicates the OpenCL host program execution and data in each node. We also propose a new OpenCL host API function and a queueing optimization technique that significantly reduce the overhead incurred by previous centralized approaches. To show the effectiveness of SnuCL-D, we evaluate it with a microbenchmark and eleven benchmark applications on a large-scale CPU cluster and a medium-scale GPU cluster.
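
    The abstract's key idea, replicating the host program so that remote commands need no control messages, can be pictured with a toy model: every node replays the same deterministic command stream and executes only the commands whose target device it hosts. The sketch below is conceptual only, not the SnuCL-D API; the device count, command list, and placement function are assumptions.

        # Conceptual sketch of replicated host-program execution.
        from mpi4py import MPI

        comm = MPI.COMM_WORLD
        rank, size = comm.Get_rank(), comm.Get_size()

        NUM_DEVICES = 8                              # assumed cluster-wide total
        commands = [("kernel", dev, f"task{dev}")    # identical command stream on
                    for dev in range(NUM_DEVICES)]   # every node (replicated host)

        def owner(dev):                              # assumed static placement
            return dev % size

        for kind, dev, payload in commands:
            if owner(dev) == rank:
                print(f"node {rank} runs {payload} on device {dev}")
            # commands targeting remote devices need no control message:
            # every node derived the same plan from the replicated program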

  18. Efficient implementation of multidimensional fast fourier transform on a distributed-memory parallel multi-node computer

    DOE Patents [OSTI]

    Bhanot, Gyan V.; Chen, Dong; Gara, Alan G.; Giampapa, Mark E.; Heidelberger, Philip; Steinmacher-Burow, Burkhard D.; Vranas, Pavlos M.

    2012-01-10

    The present invention is directed to a method, system and program storage device for efficiently implementing a multidimensional Fast Fourier Transform (FFT) of a multidimensional array comprising a plurality of elements initially distributed in a multi-node computer system comprising a plurality of nodes in communication over a network, comprising: distributing the plurality of elements of the array in a first dimension across the plurality of nodes of the computer system over the network to facilitate a first one-dimensional FFT; performing the first one-dimensional FFT on the elements of the array distributed at each node in the first dimension; re-distributing the one-dimensional FFT-transformed elements at each node in a second dimension via "all-to-all" distribution in random order across other nodes of the computer system over the network; and performing a second one-dimensional FFT on elements of the array re-distributed at each node in the second dimension, wherein the random order facilitates efficient utilization of the network thereby efficiently implementing the multidimensional FFT. The "all-to-all" re-distribution of array elements is further efficiently implemented in applications other than the multidimensional FFT on the distributed-memory parallel supercomputer.
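
    As a hedged illustration of the transpose-based distributed FFT this patent builds on, the sketch below performs the first 1-D FFT on locally held rows, redistributes column chunks with an all-to-all exchange, and finishes with the second 1-D FFT. The patent's specific contribution, issuing the all-to-all transfers in random order to spread network load, is not reproduced; the array shape and the mpi4py/numpy choices are assumptions.

        # Transpose-based distributed 2-D FFT sketch (run under mpirun).
        import numpy as np
        from mpi4py import MPI

        comm = MPI.COMM_WORLD
        rank, size = comm.Get_rank(), comm.Get_size()

        rows = 4                              # rows per rank; N = size * rows
        N = size * rows
        local = np.random.rand(rows, N) + 1j * np.random.rand(rows, N)

        # Stage 1: 1-D FFT along the dimension that is entirely local.
        stage1 = np.fft.fft(local, axis=1)

        # All-to-all: carve the row block into `size` column chunks, exchange
        # them, and reassemble so each rank now owns a block of columns.
        send = np.ascontiguousarray(stage1.reshape(rows, size, rows)
                                          .transpose(1, 0, 2))
        recv = np.empty_like(send)
        comm.Alltoall(send, recv)
        transposed = recv.transpose(2, 0, 1).reshape(rows, N)

        # Stage 2: 1-D FFT along the redistributed dimension completes the
        # 2-D FFT (the result lives transposed across ranks).
        stage2 = np.fft.fft(transposed, axis=1)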

  19. Efficient implementation of a multidimensional fast fourier transform on a distributed-memory parallel multi-node computer

    DOE Patents [OSTI]

    Bhanot, Gyan V.; Chen, Dong; Gara, Alan G.; Giampapa, Mark E.; Heidelberger, Philip; Steinmacher-Burow, Burkhard D.; Vranas, Pavlos M.

    2008-01-01

    The present invention is directed to a method, system and program storage device for efficiently implementing a multidimensional Fast Fourier Transform (FFT) of a multidimensional array comprising a plurality of elements initially distributed in a multi-node computer system comprising a plurality of nodes in communication over a network, comprising: distributing the plurality of elements of the array in a first dimension across the plurality of nodes of the computer system over the network to facilitate a first one-dimensional FFT; performing the first one-dimensional FFT on the elements of the array distributed at each node in the first dimension; re-distributing the one-dimensional FFT-transformed elements at each node in a second dimension via "all-to-all" distribution in random order across other nodes of the computer system over the network; and performing a second one-dimensional FFT on elements of the array re-distributed at each node in the second dimension, wherein the random order facilitates efficient utilization of the network thereby efficiently implementing the multidimensional FFT. The "all-to-all" re-distribution of array elements is further efficiently implemented in applications other than the multidimensional FFT on the distributed-memory parallel supercomputer.

  20. computers

    National Nuclear Security Administration (NNSA)

    Each successive generation of computing system has provided greater computing power and energy efficiency.

    CTS-1 clusters will support NNSA's Life Extension Program and...

  1. Privacy and Security Research Group workshop on network and distributed system security: Proceedings

    SciTech Connect (OSTI)

    Not Available

    1993-05-01

    This report contains papers on the following topics: NREN Security Issues: Policies and Technologies; Layer Wars: Protect the Internet with Network Layer Security; Electronic Commission Management; Workflow 2000 - Electronic Document Authorization in Practice; Security Issues of a UNIX PEM Implementation; Implementing Privacy Enhanced Mail on VMS; Distributed Public Key Certificate Management; Protecting the Integrity of Privacy-enhanced Electronic Mail; Practical Authorization in Large Heterogeneous Distributed Systems; Security Issues in the Truffles File System; Issues surrounding the use of Cryptographic Algorithms and Smart Card Applications; Smart Card Augmentation of Kerberos; and An Overview of the Advanced Smart Card Access Control System. Selected papers were processed separately for inclusion in the Energy Science and Technology Database.

  2. Data-aware distributed scientific computing for big-data problems...

    Office of Scientific and Technical Information (OSTI)


  3. Technologies and tools for high-performance distributed computing. Final report

    SciTech Connect (OSTI)

    Karonis, Nicholas T.

    2000-05-01

    In this project we studied the practical use of the MPI message-passing interface in advanced distributed computing environments. We built on the existing software infrastructure provided by the Globus Toolkit(TM), the MPICH portable implementation of MPI, and the MPICH-G integration of MPICH with Globus. As a result of this project we have replaced MPICH-G with its successor MPICH-G2, which is also an integration of MPICH with Globus. MPICH-G2 delivers significant improvements in message-passing performance when compared to its predecessor MPICH-G and is based on improved software design principles, resulting in a software base in which the functional extensions and improvements we made were much easier to implement. Using Globus services we replaced the default implementation of MPI's collective operations in MPICH-G2 with more efficient multilevel topology-aware collective operations which, in turn, led to the development of a new timing methodology for broadcasts [8]. MPICH-G2 was extended to include client/server functionality from the MPI-2 standard [23] to facilitate remote visualization applications and, through the use of MPI idioms, MPICH-G2 provided application-level control of quality-of-service parameters as well as application-level discovery of underlying Grid-topology information. Finally, MPICH-G2 was successfully used in a number of applications including an award-winning record-setting computation in numerical relativity. In the sections that follow we describe in detail the accomplishments of this project, we present experimental results quantifying the performance improvements, and conclude with a discussion of our applications experiences. This project resulted in a significant increase in the utility of MPICH-G2.
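
    One way to picture a multilevel topology-aware collective of the kind described above is a broadcast that crosses the wide-area network once and then fans out inside each site. The sketch below is a simplification and not MPICH-G2 code: the `site_of` mapping fakes the topology information (MPICH-G2 obtains it from Globus), and the two levels are built with ordinary communicator splits.

        # Two-level topology-aware broadcast sketch with mpi4py.
        from mpi4py import MPI

        comm = MPI.COMM_WORLD
        rank = comm.Get_rank()

        def site_of(r):                # illustrative: 4 ranks per "site"
            return r // 4

        local = comm.Split(color=site_of(rank), key=rank)   # intra-site comm
        leaders = comm.Split(color=0 if local.Get_rank() == 0
                             else MPI.UNDEFINED, key=rank)  # site leaders only

        msg = "payload" if rank == 0 else None
        if leaders != MPI.COMM_NULL:       # one wide-area hop among leaders
            msg = leaders.bcast(msg, root=0)
        msg = local.bcast(msg, root=0)     # cheap fan-out inside each site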

  4. Fault-tolerant quantum computation and communication on a distributed 2D array of small local systems

    SciTech Connect (OSTI)

    Fujii, K.; Yamamoto, T.; Imoto, N.; Koashi, M.

    2014-12-04

    We propose a scheme for distributed quantum computation with small local systems connected via noisy quantum channels. We show that the proposed scheme tolerates error probabilities of ∼30% in quantum channels and ∼0.1% in local operations, both of which are improved substantially compared to previous works.

  5. WITNESSING GAS MIXING IN THE METAL DISTRIBUTION OF THE HICKSON COMPACT GROUP HCG 31

    SciTech Connect (OSTI)

    Torres-Flores, S.; Alfaro-Cuello, M.; De Oliveira, C. Mendes; Amram, P.; Carrasco, E. R.

    2015-01-01

    We present for the first time direct evidence that in a merger of disk galaxies, the pre-existing central metallicities will mix as a result of gas being transported in the merger interface region along the line that joins the two coalescing nuclei. This is shown using detailed two-dimensional kinematics as well as metallicity measurements for the nearby ongoing merger in the center of the compact group HCG 31. We focus on the emission line gas, which is extensive in the system. The two coalescing cores display similar oxygen abundances. While in between the two nuclei, the metallicity changes smoothly from one nucleus to the other indicating a mix of metals in this region, which is confirmed by the high-resolution Hα kinematics (R = 45,900). This nearby system is especially important because it involves the merging of two fairly low-mass and clumpy galaxies (LMC-like galaxies), making it an important system for comparison with high-redshift galaxies.

  6. Computing

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Computing and Storage Requirements for FES. J. Candy, General Atomics, San Diego, CA. Presented at the DOE Technical Program Review, Hilton Washington DC/Rockville, Rockville, MD, 19-20 March 2013. Covers drift waves and tokamak plasma turbulence and their role in fusion research: plasma performance in tokamaks is limited by turbulent radial transport of both energy and particles, and this turbulence is gradient-driven ...

  7. computers

    National Nuclear Security Administration (NNSA)


    Retired computers used for cybersecurity research at Sandia National...

  8. Computer Simulation of Equilibrium Electron Beam Distribution in the Proximity of 4th Order Single Nonlinear Resonance

    SciTech Connect (OSTI)

    Kuo, C.-C.; Tsai, H.-J.; Ueng, T.-S.; Chao, A. (SLAC)

    2005-05-09

    The beam distribution of particles in a storage ring can be distorted in the presence of nonlinear resonances. Computer simulation is used to study the equilibrium distribution of an electron beam in the presence of a single 4th order nonlinear resonance in a storage ring. Its result is compared with that obtained using an analytical approach by solving the Fokker-Planck equation to first order in the resonance strength. The effect of resonance on quantum lifetime of electron beam is also compared and investigated.

  9. Parallel, distributed and GPU computing technologies in single-particle electron microscopy

    SciTech Connect (OSTI)

    Schmeisser, Martin; Heisen, Burkhard C.; Luettich, Mario; Busche, Boris; Hauer, Florian; Koske, Tobias; Knauber, Karl-Heinz; Stark, Holger

    2009-07-01

    An introduction to the current paradigm shift towards concurrency in software. Most known methods for the determination of the structure of macromolecular complexes are limited, or at least restricted at some point, by their computational demands. Recent developments in information technology such as multicore, parallel, and GPU processing can be used to overcome these limitations. In particular, graphics processing units (GPUs), which were originally developed for rendering real-time effects in computer games, are now ubiquitous and provide unprecedented computational power for scientific applications. Each parallel-processing paradigm alone can improve overall performance; the increased computational performance obtained by combining all paradigms, unleashing the full power of today's technology, makes certain applications feasible that were previously virtually impossible. In this article, state-of-the-art paradigms are introduced, the tools and infrastructure needed to apply these paradigms are presented, and a state-of-the-art infrastructure and solution strategy for moving scientific applications to the next generation of computer hardware is outlined.

  10. Computations

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)


  11. Computing

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Application and System Memory Use, Configuration, and Problems on Bassi. Richard Gerber, Lawrence Berkeley National Laboratory, NERSC User Services. ScicomP 13, Garching bei München, Germany, July 17, 2007. Supported by the Office of Advanced Scientific Computing Research in the Department of Energy Office of Science under contract number DE-AC02-05CH11231. Overview: About Bassi; Memory on Bassi; Large Page Memory; System Configuration; Large Page ...

  12. Proceedings of the sixth Berkeley workshop on distributed data management and computer networks

    SciTech Connect (OSTI)

    Various Authors

    1982-01-01

    A distributed database management system allows data to be stored at multiple locations and to be accessed as a single unified database. In this workshop, seventeen papers were presented, which have been prepared separately for the energy database. These items deal with data transfer, protocols, and management.

  13. System and method for secure group transactions

    DOE Patents [OSTI]

    Goldsmith, Steven Y.

    2006-04-25

    A method and a secure system, processing on one or more computers, provides a way to control a group transaction. The invention uses group consensus access control and multiple distributed secure agents in a network environment. Each secure agent can organize with the other secure agents to form a secure distributed agent collective.
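
    A minimal sketch of the group-consensus idea, not the patented protocol: a transaction is allowed to proceed only when a quorum of agents approves it. The `Agent.vote` policy check and the quorum size are placeholders; a real secure agent would verify credentials, signatures, and local policy before approving.

        # Quorum-based group consensus sketch.
        from dataclasses import dataclass

        @dataclass
        class Agent:
            name: str
            def vote(self, txn: str) -> bool:
                # stand-in policy check for illustration only
                return not txn.startswith("forbidden")

        def group_consensus(agents, txn, quorum):
            approvals = sum(a.vote(txn) for a in agents)
            return approvals >= quorum   # transaction proceeds only on quorum

        agents = [Agent(f"agent{i}") for i in range(5)]
        print(group_consensus(agents, "transfer:records", quorum=4))  # True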

  14. Parallel paving: An algorithm for generating distributed, adaptive, all-quadrilateral meshes on parallel computers

    SciTech Connect (OSTI)

    Lober, R.R.; Tautges, T.J.; Vaughan, C.T.

    1997-03-01

    Paving is an automated mesh generation algorithm which produces all-quadrilateral elements. It can additionally generate these elements in varying sizes such that the resulting mesh adapts to a function distribution, such as an error function. While powerful, conventional paving is a very serial algorithm in its operation. Parallel paving is the extension of serial paving into parallel environments to perform the same meshing functions as conventional paving, only on distributed, discretized models. This extension allows large, adaptive, parallel finite element simulations to take advantage of paving's meshing capabilities for h-remap remeshing. A significantly modified version of the CUBIT mesh generation code has been developed to host the parallel paving algorithm, demonstrate its capabilities on both two-dimensional and three-dimensional surface geometries, and compare the resulting parallel-produced meshes to conventionally paved meshes for mesh quality and algorithm performance. Sandia's "tiling" dynamic load balancing code has also been extended to work with the paving algorithm to retain parallel efficiency as subdomains undergo iterative mesh refinement.

  15. DualTrust: A Distributed Trust Model for Swarm-Based Autonomic Computing Systems

    SciTech Connect (OSTI)

    Maiden, Wendy M.; Dionysiou, Ioanna; Frincke, Deborah A.; Fink, Glenn A.; Bakken, David E.

    2011-02-01

    For autonomic computing systems that utilize mobile agents and ant colony algorithms for their sensor layer, trust management is important for the acceptance of the mobile agent sensors and to protect the system from malicious behavior by insiders and entities that have penetrated network defenses. This paper examines the trust relationships, evidence, and decisions in a representative system and finds that by monitoring the trustworthiness of the autonomic managers rather than the swarming sensors, the trust management problem becomes much more scalable and still serves to protect the swarm. We then propose the DualTrust conceptual trust model. By addressing the autonomic manager’s bi-directional primary relationships in the ACS architecture, DualTrust is able to monitor the trustworthiness of the autonomic managers, protect the sensor swarm in a scalable manner, and provide global trust awareness for the orchestrating autonomic manager.

  16. Distribution:

    Office of Legacy Management (LM)

    Scanned distribution list and license text (OCR largely unrecoverable): a United States Atomic Energy Commission special nuclear material license, issued pursuant to the Atomic Energy Act of 1954 and Title 10, Code of Federal Regulations, Chapter 1, Part 70, "Special Nuclear Material Regulations," authorizing the licensee to receive

  17. Nonequilibrium critical relaxation of structurally disordered systems in the short-time regime: Renormalization group description and computer simulation

    SciTech Connect (OSTI)

    Prudnikov, V. V. Prudnikov, P. V.; Kalashnikov, I. A.; Rychkov, M. V.

    2010-02-15

    The influence of nonequilibrium initial states on the evolution of anisotropic systems with quenched uncorrelated structural defects at the critical point is studied. The field-theoretical description of the nonequilibrium critical behavior of 3D systems is obtained for the first time, and the dynamic critical exponent of the short-time evolution is calculated in the two-loop approximation without the use of the ε expansion. The values of dynamic critical exponents calculated using series resummation methods are compared with the results of computer simulation of the nonequilibrium critical behavior of the 3D disordered Ising model in the short-time regime. It is demonstrated that the values of the critical exponents calculated in this paper are in better agreement with the results of computer simulation than the results of application of the ε expansion.

  18. SMALL-SCALE AND GLOBAL DYNAMOS AND THE AREA AND FLUX DISTRIBUTIONS OF ACTIVE REGIONS, SUNSPOT GROUPS, AND SUNSPOTS: A MULTI-DATABASE STUDY

    SciTech Connect (OSTI)

    Muñoz-Jaramillo, Andrés; Windmueller, John C.; Amouzou, Ernest C.; Longcope, Dana W.; Senkpeil, Ryan R.; Tlatov, Andrey G.; Nagovitsyn, Yury A.; Pevtsov, Alexei A.; Chapman, Gary A.; Cookson, Angela M.; Yeates, Anthony R.; Watson, Fraser T.; Balmaceda, Laura A.; DeLuca, Edward E.; Martens, Petrus C. H.

    2015-02-10

    In this work, we take advantage of 11 different sunspot group, sunspot, and active region databases to characterize the area and flux distributions of photospheric magnetic structures. We find that, when taken separately, different databases are better fitted by different distributions (as has been reported previously in the literature). However, we find that all our databases can be reconciled by the simple application of a proportionality constant, and that, in reality, different databases are sampling different parts of a composite distribution. This composite distribution is made up of a linear combination of Weibull and log-normal distributions, where a pure Weibull (log-normal) characterizes the distribution of structures with fluxes below (above) 10^21 Mx (10^22 Mx). Additionally, we demonstrate that the Weibull distribution shows the expected linear behavior of a power-law distribution (when extended to smaller fluxes), making our results compatible with the results of Parnell et al. We propose that this is evidence of two separate mechanisms giving rise to visible structures on the photosphere: one directly connected to the global component of the dynamo (and the generation of bipolar active regions), and the other with the small-scale component of the dynamo (and the fragmentation of magnetic structures due to their interaction with turbulent convection).
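
    To make the composite model concrete, the sketch below evaluates a linear combination of a Weibull and a log-normal over a flux grid; the mixing weight and shape parameters are purely illustrative, not the paper's fitted values.

        # Weibull + log-normal mixture sketch with illustrative parameters.
        import numpy as np
        from scipy.stats import weibull_min, lognorm

        flux = np.logspace(19, 23, 200)   # magnetic flux in Mx (illustrative)
        w = 0.7                           # assumed mixing weight
        pdf = (w * weibull_min.pdf(flux, c=0.5, scale=1e20)
               + (1 - w) * lognorm.pdf(flux, s=1.0, scale=1e22))
        # below ~1e21 Mx the Weibull term dominates; above ~1e22 Mx the
        # log-normal term takes over, mirroring the composite described above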

  19. NERSC Enhances PDSF, Genepool Computing Capabilities

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    NERSC Enhances PDSF, Genepool Computing Capabilities. Linux cluster expansion speeds data access and analysis. January 3, 2014. Christmas came early for users of the Parallel Distributed Systems Facility (PDSF) and Genepool systems at the Department of Energy's National Energy Research Scientific Computing Center (NERSC). Throughout November, members of NERSC's Computational Systems Group were busy expanding the Linux computing resources that support PDSF's

  20. Distributed computing strategies for processing of FT-ICR MS imaging datasets for continuous mode data visualization

    SciTech Connect (OSTI)

    Smith, Donald F.; Schulz, Carl; Konijnenburg, Marco; Kilic, Mehmet; Heeren, Ronald M.

    2015-03-01

    High-resolution Fourier transform ion cyclotron resonance (FT-ICR) mass spectrometry imaging enables the spatial mapping and identification of biomolecules from complex surfaces. The need for long time-domain transients, and thus large raw file sizes, results in a large amount of raw data ("big data") that must be processed efficiently and rapidly. This can be compounded by large-area imaging and/or high-spatial-resolution imaging. For FT-ICR, data processing and data reduction must not compromise the high mass resolution afforded by the mass spectrometer. The continuous mode "Mosaic Datacube" approach allows high mass resolution visualization (0.001 Da) of mass spectrometry imaging data, but requires additional processing as compared to feature-based processing. We describe the use of distributed computing for processing of FT-ICR MS imaging datasets with generation of continuous mode Mosaic Datacubes for high mass resolution visualization. An eight-fold improvement in processing time is demonstrated using a Dutch nationally available cloud service.

    1. Metal distributions out to 0.5 r {sub 180} in the intracluster medium of four galaxy groups observed with Suzaku

      SciTech Connect (OSTI)

      Sasaki, Toru; Matsushita, Kyoko; Sato, Kosuke E-mail: matusita@rs.kagu.tus.ac.jp

      2014-01-20

    We studied the distributions of metal abundances and metal-mass-to-light ratios in the intracluster medium (ICM) of four galaxy groups, MKW 4, HCG 62, the NGC 1550 group, and the NGC 5044 group, out to ~0.5 r_180 observed with Suzaku. The iron abundance decreases with radius and is about 0.2-0.4 solar beyond 0.1 r_180. At a given radius in units of r_180, the iron abundance in the ICM of the four galaxy groups was consistent with or smaller than those of clusters of galaxies. The Mg/Fe and Si/Fe ratios in the ICM are nearly constant at the solar ratio out to 0.5 r_180. We also studied systematic uncertainties in the derived metal abundances, comparing the results from two versions of atomic data for astrophysicists (ATOMDB) and single- and two-temperature model fits. Since the metals have been synthesized in galaxies, we collected K-band luminosities of galaxies from the Two Micron All Sky Survey catalog and calculated the integrated iron-mass-to-light ratios (IMLR), or the ratios of the iron mass in the ICM to light from stars in galaxies. The groups with smaller gas-mass-to-light ratios have smaller IMLR values and the IMLR is inversely correlated with the entropy excess. Based on these abundance features, we discussed the past history of metal enrichment processes in groups of galaxies.

    2. Computing Videos

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Computing Videos

    3. Software/Computing | Argonne National Laboratory

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Argonne is the central site for work on database and data management. The group has key responsibilities in the design and implementation of the I/O model, which must provide distributed access to many petabytes of data for both event reconstruction and physics analysis. The group deployed a number of HEP packages on the BlueGene/Q supercomputer of the Argonne Leadership Computing Facility, and currently generates CPU-intensive Monte Carlo event samples for

    4. Final report and documentation for the security enabled programmable switch for protection of distributed internetworked computers LDRD.

      SciTech Connect (OSTI)

      Van Randwyk, Jamie A.; Robertson, Perry J.; Durgin, Nancy Ann; Toole, Timothy J.; Kucera, Brent D.; Campbell, Philip LaRoche; Pierson, Lyndon George

      2010-02-01

      An increasing number of corporate security policies make it desirable to push security closer to the desktop. It is not practical or feasible to place security and monitoring software on all computing devices (e.g. printers, personal digital assistants, copy machines, legacy hardware). We have begun to prototype a hardware and software architecture that will enforce security policies by pushing security functions closer to the end user, whether in the office or home, without interfering with users' desktop environments. We are developing a specialized programmable Ethernet network switch to achieve this. Embodied in this device is the ability to detect and mitigate network attacks that would otherwise disable or compromise the end user's computing nodes. We call this device a 'Secure Programmable Switch' (SPS). The SPS is designed with the ability to be securely reprogrammed in real time to counter rapidly evolving threats such as fast moving worms, etc. This ability to remotely update the functionality of the SPS protection device is cryptographically protected from subversion. With this concept, the user cannot turn off or fail to update virus scanning and personal firewall filtering in the SPS device as he/she could if implemented on the end host. The SPS concept also provides protection to simple/dumb devices such as printers, scanners, legacy hardware, etc. This report also describes the development of a cryptographically protected processor and its internal architecture in which the SPS device is implemented. This processor executes code correctly even if an adversary holds the processor. The processor guarantees both the integrity and the confidentiality of the code: the adversary cannot determine the sequence of instructions, nor can the adversary change the instruction sequence in a goal-oriented way.

    5. Distributed computing for signal processing: modeling of asynchronous parallel computation. Appendix C. Fault-tolerant interconnection networks and image-processing applications for the PASM parallel processing systems. Final report

      SciTech Connect (OSTI)

      Adams, G.B.

      1984-12-01

    The demand for very-high-speed data processing coupled with falling hardware costs has made large-scale parallel and distributed computer systems both desirable and feasible. Two modes of parallel processing are single instruction stream, multiple data stream (SIMD) and multiple instruction stream, multiple data stream (MIMD). PASM, a partitionable SIMD/MIMD system, is a reconfigurable multimicroprocessor system being designed for image processing and pattern recognition. An important component of these systems is the interconnection network, the mechanism for communication among the computation nodes and memories. Assuring high reliability for such complex systems is a significant task; thus, a crucial practical aspect of an interconnection network is fault tolerance. In answer to this need, the Extra Stage Cube (ESC), a fault-tolerant, multistage cube-type interconnection network, is defined. The fault tolerance of the ESC is explored for both single and multiple faults, routing tags are defined, and consideration is given to permuting data and partitioning the ESC in the presence of faults. The ESC is compared with other fault-tolerant multistage networks. Finally, the reliability of the ESC and an enhanced version of it are investigated.
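
    For background on the cube-type networks the ESC extends, the sketch below implements the classic routing-tag scheme for a generalized-cube multistage network: the tag is src XOR dst, and stage i routes "exchange" when bit i of the tag is set. The ESC's extra stage and fault-bypass logic are not reproduced here.

        # Generalized-cube routing-tag sketch.
        def routing_tag(src: int, dst: int) -> int:
            return src ^ dst

        def route(src: int, dst: int, stages: int) -> int:
            tag, node = routing_tag(src, dst), src
            for i in range(stages):          # one tag bit examined per stage
                if (tag >> i) & 1:
                    node ^= 1 << i           # "exchange" on the stage-i link
                # else: route straight through this stage
            assert node == dst
            return node

        route(src=0b0110, dst=0b0011, stages=4)   # reaches 0b0011 in 4 stages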

    6. BOC Group | Open Energy Information

      Open Energy Info (EERE)

    Name: BOC Group. Place: United Kingdom. Zip: GU20 6HJ. Sector: Services. Product: UK-based industrial gases, vacuum technologies and distribution...

    7. Development of an Extensible Computational Framework for Centralized Storage and Distributed Curation and Analysis of Genomic Data Genome-scale Metabolic Models

      SciTech Connect (OSTI)

      Stevens, Rick

      2010-08-01

      The DOE funded KBase project of the Stevens group at the University of Chicago was focused on four high-level goals: (i) improve extensibility, accessibility, and scalability of the SEED framework for genome annotation, curation, and analysis; (ii) extend the SEED infrastructure to support transcription regulatory network reconstructions (2.1), metabolic model reconstruction and analysis (2.2), assertions linked to data (2.3), eukaryotic annotation (2.4), and growth phenotype prediction (2.5); (iii) develop a web-API for programmatic remote access to SEED data and services; and (iv) application of all tools to bioenergy-related genomes and organisms. In response to these goals, we enhanced and improved the ModelSEED resource within the SEED to enable new modeling analyses, including improved model reconstruction and phenotype simulation. We also constructed a new website and web-API for the ModelSEED. Further, we constructed a comprehensive web-API for the SEED as a whole. We also made significant strides in building infrastructure in the SEED to support the reconstruction of transcriptional regulatory networks by developing a pipeline to identify sets of consistently expressed genes based on gene expression data. We applied this pipeline to 29 organisms, computing regulons which were subsequently stored in the SEED database and made available on the SEED website (http://pubseed.theseed.org). We developed a new pipeline and database for the use of kmers, or short 8-residue oligomer sequences, to annotate genomes at high speed. Finally, we developed the PlantSEED, or a new pipeline for annotating primary metabolism in plant genomes. All of the work performed within this project formed the early building blocks for the current DOE Knowledgebase system, and the kmer annotation pipeline, plant annotation pipeline, and modeling tools are all still in use in KBase today.

    8. HASQARD Focus Group

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      been distributed to the Focus Group prior to the meeting. The comments that required editorial changes to the document were made in the working electronic version. b. At the June...

    9. Important role of the non-uniform Fe distribution for the ferromagnetism in group-IV-based ferromagnetic semiconductor GeFe

      SciTech Connect (OSTI)

      Wakabayashi, Yuki K.; Ohya, Shinobu; Ban, Yoshisuke; Tanaka, Masaaki [Department of Electrical Engineering and Information Systems, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8656 (Japan)

      2014-11-07

    We investigate the growth-temperature dependence of the properties of the group-IV-based ferromagnetic semiconductor Ge1-xFex films (x = 6.5% and 10.5%), and reveal the correlation of the magnetic properties with the lattice constant, Curie temperature (T_C), non-uniformity of Fe atoms, stacking-fault defects, and Fe-atom locations. While T_C strongly depends on the growth temperature, we find a universal relationship between T_C and the lattice constant, which does not depend on the Fe content x. By using spatially resolved transmission-electron diffraction combined with energy-dispersive X-ray spectroscopy, we find that the density of the stacking-fault defects and the non-uniformity of the Fe concentration are correlated with T_C. Meanwhile, by using channeling Rutherford backscattering and particle-induced X-ray emission measurements, we clarify that about 15% of the Fe atoms exist on the tetrahedral interstitial sites in the Ge0.935Fe0.065 lattice and that the substitutional Fe concentration is not correlated with T_C. Considering these results, we conclude that the non-uniformity of the Fe concentration plays an important role in determining the ferromagnetic properties of GeFe.

    10. Applied Computer Science

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Applied Computer Science: innovative co-design of applications, algorithms, and architectures in order to enable scientific simulations at extreme scale. Leadership: Group Leader ...

    11. Group X

      SciTech Connect (OSTI)

      Fields, Susannah

      2007-08-16

    This project is currently under contract for research through the Department of Homeland Security until 2011. The group I was responsible for studying has to remain confidential so as not to affect the current project. All dates, reference links and authors, and other distinguishing characteristics of the original group have been removed from this report. All references to the name of this group or the individual splinter groups have been changed to 'Group X'. I have been collecting texts from a variety of sources intended for the use of recruiting and radicalizing members for Group X splinter groups, for the purpose of researching the motivation and intent of leaders of those groups and their influence over the likelihood of group radicalization. This work included visiting many Group X websites to find information on splinter group leaders and finding their statements to new and old members. This proved difficult because the splinter groups of Group X are united in beliefs but differ in public opinion. They are eager to tear each other down and prove their superiority, and yet remain anonymous. After a few weeks of intense searching, a list of eight recruiting texts and eight radicalizing texts from a variety of Group X leaders was compiled.

    12. Galaxy groups

      SciTech Connect (OSTI)

      Brent Tully, R.

      2015-02-01

    Galaxy groups can be characterized by the radius of decoupling from cosmic expansion, the radius of the caustic of second turnaround, and the velocity dispersion of galaxies within this latter radius. These parameters can be a challenge to measure, especially for small groups with few members. In this study, results are gathered pertaining to particularly well-studied groups over four decades in group mass. Scaling relations anticipated from theory are demonstrated and coefficients of the relationships are specified. There is an update of the relationship between light and mass for groups, confirming that groups with mass of a few times 10^12 solar masses are the most lit up while groups with more and less mass are darker. It is demonstrated that there is an interesting one-to-one correlation between the number of dwarf satellites in a group and the group mass. There is the suggestion that small variations in the slope of the luminosity function in groups are caused by the degree of depletion of intermediate-luminosity systems rather than variations in the number per unit mass of dwarfs. Finally, returning to the characteristic radii of groups, the ratio of first to second turnaround depends on the dark matter and dark energy content of the universe, and a crude estimate can be made from the current observations of Ω_matter ≈ 0.15 in a flat topology, with a 68% probability of being less than 0.44.

    13. Specific Group Hardware

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Specific Group Hardware. ALICE: palicevo1, the Virtual Organization (VO) server, serves as gatekeeper for ALICE jobs. Its duties include getting assignments from the ALICE file catalog (at CERN), submitting jobs to pdsfgrid (via Condor), which submits jobs to the compute nodes, monitoring the cluster workload, and uploading job information to the ALICE file catalog. It is monitored with MonALISA. It is made up of 2 Intel Xeon E5520 processors each with

    14. Computer, Computational, and Statistical Sciences

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Computer, Computational, and Statistical Sciences (CCS): computational physics, computer science, applied mathematics, statistics, and the integration of large data streams are central ...

    15. Computer Processor Allocator

      Energy Science and Technology Software Center (OSTI)

      2004-03-01

    The Compute Processor Allocator (CPA) provides an efficient and reliable mechanism for managing and allotting processors in a massively parallel (MP) computer. It maintains information in a database on the health, configuration, and allocation of each processor. This persistent information is factored into each allocation decision. The CPA runs in a distributed fashion to avoid a single point of failure.
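
    A minimal sketch of the bookkeeping this abstract describes, assuming a JSON file as the persistent store and invented field names: each allocation request consults per-processor health and allocation state that survives allocator restarts. The distributed, fault-tolerant operation of the real CPA is not modeled.

        # Persistent processor-allocation bookkeeping sketch.
        import json

        class ProcessorAllocator:
            def __init__(self, path="cpa_state.json", nprocs=16):
                self.path = path
                try:
                    with open(path) as f:          # persistent state survives
                        self.state = json.load(f)  # allocator restarts
                except FileNotFoundError:
                    self.state = {str(p): {"healthy": True, "job": None}
                                  for p in range(nprocs)}

            def allocate(self, job, count):
                free = [p for p, s in self.state.items()
                        if s["healthy"] and s["job"] is None]
                if len(free) < count:
                    return None            # not enough healthy free processors
                for p in free[:count]:
                    self.state[p]["job"] = job
                self._save()
                return free[:count]

            def _save(self):
                with open(self.path, "w") as f:
                    json.dump(self.state, f)

        procs = ProcessorAllocator().allocate("job-42", 4)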

    16. Link failure detection in a parallel computer

      DOE Patents [OSTI]

      Archer, Charles J.; Blocksome, Michael A.; Megerian, Mark G.; Smith, Brian E.

      2010-11-09

      Methods, apparatus, and products are disclosed for link failure detection in a parallel computer including compute nodes connected in a rectangular mesh network, each pair of adjacent compute nodes in the rectangular mesh network connected together using a pair of links, that includes: assigning each compute node to either a first group or a second group such that adjacent compute nodes in the rectangular mesh network are assigned to different groups; sending, by each of the compute nodes assigned to the first group, a first test message to each adjacent compute node assigned to the second group; determining, by each of the compute nodes assigned to the second group, whether the first test message was received from each adjacent compute node assigned to the first group; and notifying a user, by each of the compute nodes assigned to the second group, whether the first test message was received.
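
    The claimed scheme maps naturally onto a checkerboard: adjacent mesh nodes always differ in coordinate parity, so parity defines the two groups. The sketch below is an illustration rather than the patented implementation; it uses an MPI Cartesian communicator as a stand-in for the rectangular mesh and omits the pair-of-links detail and timeout handling.

        # Checkerboard link-test sketch on a 2-D process mesh (run under mpirun).
        from mpi4py import MPI

        comm = MPI.COMM_WORLD
        dims = MPI.Compute_dims(comm.Get_size(), 2)
        cart = comm.Create_cart(dims, periods=[False, False])
        x, y = cart.Get_coords(cart.Get_rank())
        first_group = (x + y) % 2 == 0        # checkerboard group assignment

        for direction in (0, 1):
            for disp in (-1, 1):
                src, dest = cart.Shift(direction, disp)
                if first_group and dest != MPI.PROC_NULL:
                    cart.send("test", dest=dest)
                if not first_group and src != MPI.PROC_NULL:
                    ok = cart.recv(source=src)
                    # a real implementation would time out on a missing test
                    # message and notify the user instead of blocking forever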

    17. Applied Computer Science

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Applied Computer Science: innovative co-design of applications, algorithms, and architectures in order to enable scientific simulations at extreme scale. Group Leader: Linn Collins. Deputy Group Leader (Acting): Bryan Lally. Climate modeling visualization: results from a climate simulation computed using the Model for Prediction Across Scales (MPAS) code; the visualization shows the temperature of ocean currents using a green and blue color scale. These colors were

    18. Computational Earth Science

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Computational Earth Science: we develop and apply a range of high-performance computational methods and software tools to Earth science projects in support of environmental health, cleaner energy, and national security. Group Leader: Carl Gable. Deputy Group Leader: Gilles Bussod. Hari Viswanathan inspects a microfluidic cell used to study the extraction of hydrocarbon fuels from a complex fracture network. EES-16's Subsurface Flow

    19. Welcome - Modeling and Simulation Group

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    ORNL Modeling and Simulation Group, Computational Sciences & Engineering Division, CCS Directorate. ORNL has many opportunities for students to conduct research in scientific fields through its Fellowship and Internship programs, including the RAMS program. The ORNL Modeling and Simulation Group (MSG) develops

    20. Compute nodes

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Compute nodes. A more detailed hierarchical map of the topology of a compute node is available.

    1. Computer System,

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Undergraduate summer institute, http://isti.lanl.gov (Educational Prog). 2016 Computer System, Cluster, and Networking Summer Institute. Purpose: The Computer System,...

    2. Research Groups - Cyclotron Institute

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Research group homepages: Nuclear Theory Group; Dr. Sherry Yennello's research group; Dr. Dan Melconian's research group; Dr. Cody Folden's group...

    3. Human collective dynamics: Two groups in adversarial encounter. [melete code

      SciTech Connect (OSTI)

      Sandoval, D.L.; Harlow, F.H.; Genin, K.E.

      1988-04-01

    The behavior of a group of people depends strongly on the interaction of personal (individual) traits with the collective moods of the group as a whole. We have developed a computer program to model circumstances of this nature with recognition of the crucial role played by such psychological properties as fear, excitement, peer pressure, moral outrage, and anger, together with the distribution among participants of intrinsic susceptibilities to these emotions. This report extends previous work to consider two groups of people in adversarial encounter, for example, two platoons in battle, a SWAT team against rioting prisoners, or opposing mobs of different ethnic backgrounds. Closely related applications of the modeling include prowling groups of predatory animals interacting with herds of prey, and even the 'slow-mob' behavior of social or political units in their response to legislative or judicial activities. Examples in this present study emphasize battlefield encounters, with each group characterized by its susceptibilities, skills, and other manifestations of both intentional and accidental circumstances. Specifically, we investigate the relative importance of leadership, camaraderie, training level (i.e., skill in firing weapons), bravery, excitability, and dedication in the battle performance of personnel with random or specified distributions of capabilities and susceptibilities in these various regards. The goal is to exhibit the probable outcome of these encounters in circumstances involving specified battle goals and distributions of terrain impediments. A collateral goal is to provide a real-time hands-on battle simulator into which a leadership trainee can insert his own interactive command.

    4. Snowmass Computing Frontier I2: Distributed

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Panelists from different parts of the grid world: operations, technology, security, and big-picture thinking. The Snowmass report will summarize the discussion. Listened carefully to...

    5. Computing Information

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    From here you can find information relating to: obtaining the right computer accounts; using NIC terminals; using BooNE's computing resources, including choosing your desktop, Kerberos, AFS, printing, recommended applications for various common tasks, and running CPU- or I/O-intensive programs (batch jobs); commonly encountered problems; computing support within BooNE; bringing a computer to FNAL, or purchasing a new one; laptops; and the Computer Security Program Plan for MiniBooNE. The

    6. Prabhat Steps In as DAS Group Lead

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Prabhat Steps In as DAS Group Lead. September 1, 2014. Prabhat has been named Group Lead of the Data and Analytics Services (DAS) Group at the Department of Energy's National Energy Research Scientific Computing Center (NERSC). The DAS group helps NERSC's users address data and analytics challenges arising from the increasing size and complexity of data from simulations and experiments. As the DAS Group Lead, Prabhat will play a key role in developing and

    7. Measurements and computations of room airflow with displacement ventilation

      SciTech Connect (OSTI)

      Yuan, X.; Chen, Q.; Glicksman, L.R.; Hu, Y.; Yang, X.

      1999-07-01

    This paper presents a set of detailed experimental data of room airflow with displacement ventilation. These data were obtained from a new environmental test facility. The measurements were conducted for three typical room configurations: a small office, a large office with partitions, and a classroom. The distributions of air velocity, air velocity fluctuation, and air temperature were measured by omnidirectional hot-sphere anemometers, and contaminant concentrations were measured by tracer gas at 54 points in the rooms. Smoke was used to observe airflow. The data also include the wall surface temperature distribution, air supply parameters, and the age of air at several locations in the rooms. A computational fluid dynamics (CFD) program with the Re-Normalization Group (RNG) κ-ε model was also used to predict the indoor airflow. The agreement between the computed results and measured data of air temperature and velocity is good. However, some discrepancies exist in the computed and measured concentrations and velocity fluctuation.

    8. Security and Policy for Group Collaboration

      SciTech Connect (OSTI)

      Ian Foster; Carl Kesselman

      2006-07-31

      “Security and Policy for Group Collaboration” was a Collaboratory Middleware research project aimed at providing the fundamental security and policy infrastructure required to support the creation and operation of distributed, computationally enabled collaborations. The project developed infrastructure that exploits innovative new techniques to address challenging issues of scale, dynamics, distribution, and role. To reduce greatly the cost of adding new members to a collaboration, we developed and evaluated new techniques for creating and managing credentials based on public key certificates, including support for online certificate generation, online certificate repositories, and support for multiple certificate authorities. To facilitate the integration of new resources into a collaboration, we improved significantly the integration of local security environments. To make it easy to create and change the role and associated privileges of both resources and participants of collaboration, we developed community wide authorization services that provide distributed, scalable means for specifying policy. These services make it possible for the delegation of capability from the community to a specific user, class of user or resource. Finally, we instantiated our research results into a framework that makes it useable to a wide range of collaborative tools. The resulting mechanisms and software have been widely adopted within DOE projects and in many other scientific projects. The widespread adoption of our Globus Toolkit technology has provided, and continues to provide, a natural dissemination and technology transfer vehicle for our results.

    9. EIS Distribution

      Broader source: Energy.gov [DOE]

      This DOE guidance presents a series of recommendations related to the EIS distribution process, which includes creating and updating a distribution list, distributing an EIS, and filing an EIS with the EPA.

    10. # Energy Measuremenfs Group

      Office of Legacy Management (LM)

    Energy Measurements Group. Summary Report: Aerial Radiological Survey, Niagara Falls Area, Niagara Falls, New York. Date of survey: September 1979. Approved for distribution by P. Stuart, EG&G, Inc., and Herbert F. Hahn, Department of Energy. Performed by EG&G, Inc. under contract no. DE-AHO&76NV01163 with the United States Department of Energy. November 30, 1979. The Aerial Measurements System (AMS), operated by EG&G, Inc. for the United States Department of

    11. Computing Sciences

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    The Computational Research Division conducts research and development in mathematical modeling and simulation, algorithm design, data storage, management and...

    12. Computing Resources

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    The TRACC Computational Clusters. With the addition of a new cluster called Zephyr that was made operational in September of this year (2012), TRACC now offers two clusters to choose from: Zephyr and the original cluster, which has now been named Phoenix. Zephyr was acquired from Atipa Technologies, and it is a 92-node system with each node having two AMD

    13. A Component Architecture for High-Performance Scientific Computing

      SciTech Connect (OSTI)

      Bernholdt, D E; Allan, B A; Armstrong, R; Bertrand, F; Chiu, K; Dahlgren, T L; Damevski, K; Elwasif, W R; Epperly, T W; Govindaraju, M; Katz, D S; Kohl, J A; Krishnan, M; Kumfert, G; Larson, J W; Lefantzi, S; Lewis, M J; Malony, A D; McInnes, L C; Nieplocha, J; Norris, B; Parker, S G; Ray, J; Shende, S; Windus, T L; Zhou, S

      2004-12-14

      The Common Component Architecture (CCA) provides a means for software developers to manage the complexity of large-scale scientific simulations and to move toward a plug-and-play environment for high-performance computing. In the scientific computing context, component models also promote collaboration using independently developed software, thereby allowing particular individuals or groups to focus on the aspects of greatest interest to them. The CCA supports parallel and distributed computing as well as local high-performance connections between components in a language-independent manner. The design places minimal requirements on components and thus facilitates the integration of existing code into the CCA environment. The CCA model imposes minimal overhead to minimize the impact on application performance. The focus on high performance distinguishes the CCA from most other component models. The CCA is being applied within an increasing range of disciplines, including combustion research, global climate simulation, and computational chemistry.

    14. Compute Nodes

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Compute Node Configuration: 9,572 nodes; 1 quad-core AMD 'Budapest' 2.3 GHz processor per node; 4 cores per node (38,288 total cores); 8 GB DDR3 800 MHz memory per node. Peak Gflop rate: 9.2 Gflops/core, 36.8 Gflops/node, 352 Tflops for the entire machine. Each core has its own L1 and L2 caches (64 KB and 512 KB, respectively); a 2 MB L3 cache is shared among the 4 cores. Compute Node Software: by default the compute nodes run a restricted low-overhead

    15. Computer Security

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    All JLF participants must fully comply with all LLNL computer security regulations and procedures. A laptop entering or leaving B-174 for the sole use by a US citizen and so configured, and requiring no IP address, need not be registered for use in the JLF. By September 2009, it is expected that computers for use by Foreign National Investigators will have no special provisions. Notify maricle1@llnl.gov of all other computers entering, leaving, or being moved

    16. TEC Working Group Topic Groups Archives Consolidated Grant Topic Group |

      Office of Environmental Management (EM)

    The Consolidated Grant Topic Group arose from recommendations provided by the TEC and other external parties to the DOE Senior Executive Transportation Forum in July 1998. It was proposed that the consolidation of multiple funding streams from numerous DOE sources into a single grant would provide a more equitable and efficient means of assistance to States and Tribes.

    17. Manufacturing Energy and Carbon Footprint - Sector: Computer...

      Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

      Computers, Electronics and Electrical Equipment (NAICS 334, 335) Process Energy Electricity and Steam Generation Losses Process Losses Nonprocess Losses Steam Distribution ...

    18. Exascale Computing

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Exascale Computing Moving forward into the exascale era, NERSC users will place increased demands on NERSC computational facilities. Users will be facing increased complexity in the memory subsystem and node architecture. System designs and programming models will have to evolve to face these new challenges. NERSC staff are active in current initiatives addressing

    19. Computer Accounts | Stanford Synchrotron Radiation Lightsource

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Computer Accounts Each user group must have a computer account. Additionally, all persons using these accounts are responsible for understanding and complying with the terms outlined in the "Use of SLAC Information Resources". Links are provided below for computer account forms and the computer security agreement which must be completed and sent to the appropriate contact person. SSRL does not charge for use of its computer systems. Forms X-ray/VUV Computer Account Request Form

    20. Computational Science

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      ... Advanced Materials Laboratory Center for Integrated Nanotechnologies Combustion Research Facility Computational Science Research Institute Joint BioEnergy Institute About EC News ...

    1. Computer Science

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      CiteSeer Department of Energy provided open access science research citations in chemistry, physics, materials, engineering, and computer science IEEE Xplore Full text...

    2. Compute Nodes

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      low-overhead operating system optimized for high performance computing called "Cray Linux Environment" (CLE). This OS supports only a limited number of system calls and UNIX...

    3. Broadcasting a message in a parallel computer

      DOE Patents [OSTI]

      Berg, Jeremy E.; Faraj, Ahmad A.

      2011-08-02

      Methods, systems, and products are disclosed for broadcasting a message in a parallel computer. The parallel computer includes a plurality of compute nodes connected together using a data communications network. The data communications network is optimized for point-to-point data communications and is characterized by at least two dimensions. The compute nodes are organized into at least one operational group of compute nodes for collective parallel operations of the parallel computer. One compute node of the operational group is assigned to be a logical root. Broadcasting a message in a parallel computer includes: establishing a Hamiltonian path along all of the compute nodes in at least one plane of the data communications network and in the operational group; and broadcasting, by the logical root to the remaining compute nodes, the logical root's message along the established Hamiltonian path.
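
      The mechanics of the patented broadcast can be sketched in a few lines: pick a Hamiltonian path (a walk that visits every node exactly once), start it at the logical root, and forward the message hop by hop. The serpentine walk below is one valid Hamiltonian path for a 2-D mesh; everything here is a simulation for illustration, not the patented implementation.

        # Broadcast along a Hamiltonian path in a rows x cols mesh (sketch).

        def hamiltonian_path(rows, cols):
            """Serpentine walk: left-to-right on even rows, reversed on odd."""
            path = []
            for r in range(rows):
                cols_order = range(cols) if r % 2 == 0 else range(cols - 1, -1, -1)
                path.extend((r, c) for c in cols_order)
            return path

        def broadcast(message, rows, cols):
            """Logical root = first node on the path; forward hop by hop."""
            path = hamiltonian_path(rows, cols)
            delivered = {path[0]: message}                # root already holds it
            for sender, receiver in zip(path, path[1:]):
                delivered[receiver] = delivered[sender]   # one point-to-point send
            return delivered

        out = broadcast("hello", 3, 4)
        assert len(out) == 12 and all(v == "hello" for v in out.values())

      Each link on the path carries the message exactly once, which is why such paths are attractive on networks optimized for point-to-point traffic.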

    4. Computing and Computational Sciences Directorate - Contacts

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Home About Us Contacts Jeff Nichols Associate Laboratory Director Computing and Computational Sciences Becky Verastegui Directorate Operations Manager Computing and...

    5. Computing and Computational Sciences Directorate - Divisions

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      CCSD Divisions Computational Sciences and Engineering Computer Sciences and Mathematics Information Technology Services Joint Institute for Computational Sciences National Center for Computational Sciences

    6. World-wide distribution automation systems

      SciTech Connect (OSTI)

      Devaney, T.M.

      1994-12-31

      A worldwide power distribution automation system is outlined. Distribution automation is defined and the status of utility automation is discussed. Other topics discussed include a distribution management system; substation, feeder, and customer functions; potential benefits; automation costs; planning and engineering considerations; automation trends; databases; system operation; computer modeling of the system; and distribution management systems.

    7. Computing Sciences Staff Help East Bay High Schoolers Upgrade...

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      from underrepresented groups learn about careers in a variety of IT fields, the Laney College Computer Information Systems Department offered its Upgrade: Computer Science Program. ...

    8. NERSC Hosts 50 Enthusiastic Computer Science Students from Dougherty...

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Hosts 50 Enthusiastic Computer Science Students from Dougherty Valley High NERSC Hosts 50 Enthusiastic Computer Science Students from Dougherty Valley High May 31, 2016 A group of ...

    9. Computing at SSRL Home Page

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      The contents you are looking for have moved. You will be redirected to the new location automatically in 5 seconds. Please bookmark the correct page at http://www-ssrl.slac.stanford.edu/content/staff-resources/computer-networking-group

    10. Distributed Generation

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      and regulations such as IEEE (Institute of Electrical and Electronics Engineers) 1547 have come a long way in addressing interconnection standards for distributed generation, ...

    11. Distribution Workshop

      Broader source: Energy.gov [DOE]

      On September 24-26, 2012, the GTT presented a workshop on grid integration on the distribution system at the Sheraton Crystal City near Washington, DC.

    12. Compute Nodes

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Nodes Quad-Core AMD Opteron processor Compute Node Configuration 9,572 nodes 1 quad-core AMD 'Budapest' 2.3 GHz processor per node 4 cores per node (38,288 total cores) 8 GB...

    13. LHC Computing

      SciTech Connect (OSTI)

      Lincoln, Don

      2015-07-28

      The LHC is the world’s highest energy particle accelerator and scientists use it to record an unprecedented amount of data. This data is recorded in electronic format and it requires an enormous computational infrastructure to convert the raw data into conclusions about the fundamental rules that govern matter. In this video, Fermilab’s Dr. Don Lincoln gives us a sense of just how much data is involved and the incredible computer resources that makes it all possible.

    14. Exascale Computing

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Computing Exascale Computing CoDEx Project: A Hardware/Software Codesign Environment for the Exascale Era The next decade will see a rapid evolution of HPC node architectures as power and cooling constraints are limiting increases in microprocessor clock speeds and constraining data movement. Applications and algorithms will need to change and adapt as node architectures evolve. A key element of the strategy as we move forward is the co-design of applications, architectures and programming

    15. Jay Srinivasan! NERSC Systems Group!

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      NERSC Systems Group! NUG 2014! Feb 6, 2014 Computational Systems Group Update (CSG) What CSG Does: * Manage the systems that run your jobs: - The Large MPP systems (Hopper & Edison) - The Linux Clusters (Carver, Genepool, Mendel, PDSF) - Testbeds (Dirac, Jesup, Intel SB/MIC) * Help improve the user experience (batch system, login environment, system performance) * Deploy and maintain storage (local, NERSC-Global) on compute platforms * Participate on System

    16. Interagency mechanical operations group numerical systems group

      SciTech Connect (OSTI)

      1997-09-01

      This report consists of the minutes of the May 20-21, 1971 meeting of the Interagency Mechanical Operations Group (IMOG) Numerical Systems Group. This group looks at issues related to numerical control in the machining industry. Items discussed related to the use of CAD and CAM, EIA standards, data links, and numerical control.

    17. Computational mechanics

      SciTech Connect (OSTI)

      Goudreau, G.L.

      1993-03-01

      The Computational Mechanics thrust area sponsors research into the underlying solid, structural and fluid mechanics and heat transfer necessary for the development of state-of-the-art general purpose computational software. The scale of computational capability spans office workstations, departmental computer servers, and Cray-class supercomputers. The DYNA, NIKE, and TOPAZ codes have achieved world fame through our broad collaborators program, in addition to their strong support of on-going Lawrence Livermore National Laboratory (LLNL) programs. Several technology transfer initiatives have been based on these established codes, teaming LLNL analysts and researchers with counterparts in industry, extending code capability to specific industrial interests of casting, metalforming, and automobile crash dynamics. The next-generation solid/structural mechanics code, ParaDyn, is targeted toward massively parallel computers, which will extend performance from gigaflop to teraflop power. Our work for FY-92 is described in the following eight articles: (1) Solution Strategies: New Approaches for Strongly Nonlinear Quasistatic Problems Using DYNA3D; (2) Enhanced Enforcement of Mechanical Contact: The Method of Augmented Lagrangians; (3) ParaDyn: New Generation Solid/Structural Mechanics Codes for Massively Parallel Processors; (4) Composite Damage Modeling; (5) HYDRA: A Parallel/Vector Flow Solver for Three-Dimensional, Transient, Incompressible Viscous Flow; (6) Development and Testing of the TRIM3D Radiation Heat Transfer Code; (7) A Methodology for Calculating the Seismic Response of Critical Structures; and (8) Reinforced Concrete Damage Modeling.

    18. Computational mechanics

      SciTech Connect (OSTI)

      Raboin, P J

      1998-01-01

      The Computational Mechanics thrust area is a vital and growing facet of the Mechanical Engineering Department at Lawrence Livermore National Laboratory (LLNL). This work supports the development of computational analysis tools in the areas of structural mechanics and heat transfer. Over 75 analysts depend on thrust area-supported software running on a variety of computing platforms to meet the demands of LLNL programs. Interactions with the Department of Defense (DOD) High Performance Computing and Modernization Program and the Defense Special Weapons Agency are of special importance as they support our ParaDyn project in its development of new parallel capabilities for DYNA3D. Working with DOD customers has been invaluable to driving this technology in directions mutually beneficial to the Department of Energy. Other projects associated with the Computational Mechanics thrust area include work with the Partnership for a New Generation Vehicle (PNGV) for ''Springback Predictability'' and with the Federal Aviation Administration (FAA) for the ''Development of Methodologies for Evaluating Containment and Mitigation of Uncontained Engine Debris.'' In this report for FY-97, there are five articles detailing three code development activities and two projects that synthesized new code capabilities with new analytic research in damage/failure and biomechanics. The articles this year are: (1) Energy- and Momentum-Conserving Rigid-Body Contact for NIKE3D and DYNA3D; (2) Computational Modeling of Prosthetics: A New Approach to Implant Design; (3) Characterization of Laser-Induced Mechanical Failure Damage of Optical Components; (4) Parallel Algorithm Research for Solid Mechanics Applications Using Finite Element Analysis; and (5) An Accurate One-Step Elasto-Plasticity Algorithm for Shell Elements in DYNA3D.

    19. Computing Resources

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Resources This page is the repository for sundry items of information relevant to general computing on BooNE. If you have a question or problem that isn't answered here, or a suggestion for improving this page or the information on it, please mail boone-computing@fnal.gov and we'll do our best to address any issues. Note about this page Some links on this page point to www.everything2.com, and are meant to give an idea about a concept or thing without necessarily wading through a whole website

    20. ITP Industrial Distributed Energy: Distributed Energy Program...

      Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

      ITP Industrial Distributed Energy: Distributed Energy Program Project Profile: Verizon Central Office Building ITP Industrial Distributed Energy: Distributed Energy Program Project ...

    1. Automatic identification of abstract online groups

      DOE Patents [OSTI]

      Engel, David W; Gregory, Michelle L; Bell, Eric B; Cowell, Andrew J; Piatt, Andrew W

      2014-04-15

      Online abstract groups, in which members aren't explicitly connected, can be automatically identified by computer-implemented methods. The methods involve harvesting records from social media and extracting content-based and structure-based features from each record. Each record includes a social-media posting and is associated with one or more entities. Each feature is stored on a data storage device and includes a computer-readable representation of an attribute of one or more records. The methods further involve grouping records into record groups according to the features of each record. Further still, the methods involve calculating an n-dimensional surface representing each record group and defining an outlier as a record having feature-based distances measured from every n-dimensional surface that exceed a threshold value. Each of the n-dimensional surfaces is described by a footprint that characterizes the respective record group as an online abstract group.
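
      The pipeline in the claim (features, then groups, then outliers) can be miniaturized as follows. The word-frequency features, greedy grouping, and distance threshold are stand-ins chosen for brevity; the patent's actual features, surfaces, and footprints are richer.

        # Toy version of record grouping and outlier flagging (illustrative only).

        def features(post):
            """Content-based features: normalized word frequencies."""
            words = post.lower().split()
            return {w: words.count(w) / len(words) for w in set(words)}

        def distance(f, g):
            keys = set(f) | set(g)
            return sum((f.get(k, 0.0) - g.get(k, 0.0)) ** 2 for k in keys) ** 0.5

        def group_records(posts, radius=0.5):
            """Greedy grouping: join the first group whose seed is close enough."""
            groups = []
            for f in map(features, posts):
                for g in groups:
                    if distance(f, g[0]) < radius:
                        g.append(f)
                        break
                else:
                    groups.append([f])      # far from every group: new group,
            return groups                   # or an outlier if it stays a singleton

        posts = ["solar grid update", "solar grid news", "cat videos are great"]
        print([len(g) for g in group_records(posts)])    # -> [2, 1]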

    2. Computational trigonometry

      SciTech Connect (OSTI)

      Gustafson, K.

      1994-12-31

      By means of the author's earlier theory of antieigenvalues and antieigenvectors, a new computational approach to iterative methods is presented. This enables an explicit trigonometric understanding of iterative convergence and provides new insights into the sharpness of error bounds. Direct applications to Gradient descent, Conjugate gradient, GCR(k), Orthomin, CGN, GMRES, CGS, and other matrix iterative schemes will be given.
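
      For a symmetric positive definite matrix A, the central quantities of the theory can be stated explicitly (a standard formulation supplied here for context; it is not quoted from this abstract):

        \[
        \cos A \;=\; \min_{x \neq 0} \frac{\langle Ax, x\rangle}{\lVert Ax\rVert\,\lVert x\rVert}
               \;=\; \frac{2\sqrt{\lambda_{\min}\lambda_{\max}}}{\lambda_{\min}+\lambda_{\max}},
        \qquad
        \sin A \;=\; \frac{\lambda_{\max}-\lambda_{\min}}{\lambda_{\max}+\lambda_{\min}},
        \]

      and the steepest-descent error contracts per iteration by at most sin A in the A-norm. This is the "explicit trigonometric understanding of iterative convergence" the abstract refers to: the operator's maximal turning angle controls the convergence rate.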

    3. Nick Wright Named Advanced Technologies Group Lead

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Nick Wright Named Advanced Technologies Group Lead Nick Wright Named Advanced Technologies Group Lead February 4, 2013 Nick Wright has been named head of the National Energy Research Scientific Computing Center's (NERSC) Advanced Technologies Group (ATG), which focuses on understanding the requirements of current and emerging applications to make choices in hardware design and programming models that best serve the science needs of NERSC users. ATG specializes in benchmarking, system

    4. The Computational Physics Program of the national MFE Computer Center

      SciTech Connect (OSTI)

      Mirin, A.A.

      1989-01-01

      Since June 1974, the MFE Computer Center has been engaged in a significant computational physics effort. The principal objective of the Computational Physics Group is to develop advanced numerical models for the investigation of plasma phenomena and the simulation of present and future magnetic confinement devices. Another major objective of the group is to develop efficient algorithms and programming techniques for current and future generations of supercomputers. The Computational Physics Group has been involved in several areas of fusion research. One main area is the application of Fokker-Planck/quasilinear codes to tokamaks. Another major area is the investigation of resistive magnetohydrodynamics in three dimensions, with applications to tokamaks and compact toroids. A third area is the investigation of kinetic instabilities using a 3-D particle code; this work is often coupled with the task of numerically generating equilibria which model experimental devices. Ways to apply statistical closure approximations to study tokamak-edge plasma turbulence have been under examination, with the hope of being able to explain anomalous transport. Also, we are collaborating in an international effort to evaluate fully three-dimensional linear stability of toroidal devices. In addition to these computational physics studies, the group has developed a number of linear systems solvers for general classes of physics problems and has been making a major effort at ascertaining how to efficiently utilize multiprocessor computers. A summary of these programs is included in this paper. 6 tabs.

    5. Advanced Scientific Computing Research

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Advanced Scientific Computing Research Advanced Scientific Computing Research Discovering, ... The DOE Office of Science's Advanced Scientific Computing Research (ASCR) program ...

    6. Theory, Simulation, and Computation

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Computer, Computational, and Statistical Sciences (CCS) Division is an international ... and statistics The deployment and integration of computational technology, ...

    7. Secure computing for the 'Everyman'

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Secure computing for the 'Everyman' Secure computing for the 'Everyman' If implemented on a wide scale, quantum key distribution technology could ensure truly secure commerce, banking, communications and data transfer. September 2, 2014 This small device developed at Los Alamos National Laboratory uses the truly random spin of light particles as defined by laws of quantum mechanics to generate a random number for use in a cryptographic key that can be used to securely transmit information

    8. JLF User Group

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      jlf user group JLF User Group 2015 NIF and JLF User Group Meeting Links: Send request to join the JLF User Group Join the NIF User Group Dr. Carolyn Kuranz - JLF User Group Dr. Carolyn Kuranz received her Ph.D. in Applied Physics from the University of Michigan in 2009. She is currently an Assistant Research Scientist at the Center for Laser Experimental Astrophysical Research and the Center for Radiative Shock Hydrodynamics at the University of Michigan. Her research involves hydrodynamic

    9. Computing at JLab

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      JLab --- Accelerator Controls CAD CDEV CODA Computer Center High Performance Computing Scientific Computing JLab Computer Silo maintained by webmaster@jlab.org...

    10. Bio-Derived Liquids to Hydrogen Distributed Reforming Working...

      Office of Environmental Management (EM)

      Meeting - November 2007 Bio-Derived Liquids to Hydrogen Distributed Reforming Working Group Meeting - November 2007 The Bio-Derived Liquids to Hydrogen Distributed Reforming ...

    11. Computational Combustion

      SciTech Connect (OSTI)

      Westbrook, C K; Mizobuchi, Y; Poinsot, T J; Smith, P J; Warnatz, J

      2004-08-26

      Progress in the field of computational combustion over the past 50 years is reviewed. Particular attention is given to those classes of models that are common to most system modeling efforts, including fluid dynamics, chemical kinetics, liquid sprays, and turbulent flame models. The developments in combustion modeling are placed into the time-dependent context of the accompanying exponential growth in computer capabilities and Moore's Law. Superimposed on this steady growth, the occasional sudden advances in modeling capabilities are identified and their impacts are discussed. Integration of submodels into system models for spark ignition, diesel and homogeneous charge, compression ignition engines, surface and catalytic combustion, pulse combustion, and detonations are described. Finally, the current state of combustion modeling is illustrated by descriptions of a very large jet lifted 3D turbulent hydrogen flame with direct numerical simulation and 3D large eddy simulations of practical gas burner combustion devices.

    12. RATIO COMPUTER

      DOE Patents [OSTI]

      Post, R.F.

      1958-11-11

      An electronic computer circuit is described for producing an output voltage proportional to the product or quotient of the voltages of a pair of input signals. In essence, the disclosed invention provides a computer having two channels adapted to receive separate input signals and each having amplifiers with like fixed amplification factors and like negative feedback amplifiers. One of the channels receives a constant signal for comparison purposes, whereby a difference signal is produced to control the amplification factors of the variable feedback amplifiers. The output of the other channel is thereby proportional to the product or quotient of input signals depending upon the relation of input to fixed signals in the first mentioned channel.
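
      Loosely, and with low confidence about the actual circuit, the feedback idea can be mimicked digitally: servo a shared gain until the reference channel's output matches the constant comparison signal, after which the other channel's output is proportional to the quotient of the inputs. All values below are illustrative.

        # Digital caricature of the ratio circuit's negative-feedback servo.

        def ratio(v1, v2, v_ref=1.0, rate=0.1, steps=2000):
            g = 1.0
            for _ in range(steps):
                error = v_ref - g * v1    # difference signal from the reference channel
                g += rate * error         # negative feedback adjusts the shared gain
            return g * v2                 # other channel's output ~ v_ref * v2 / v1

        print(ratio(2.0, 6.0))            # ~3.0, i.e. v2 / v1 when v_ref = 1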

    13. Debugging a high performance computing program

      DOE Patents [OSTI]

      Gooding, Thomas M.

      2014-08-19

      Methods, apparatus, and computer program products are disclosed for debugging a high performance computing program by gathering lists of addresses of calling instructions for a plurality of threads of execution of the program, assigning the threads to groups in dependence upon the addresses, and displaying the groups to identify defective threads.
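
      The grouping step the patent describes is simple to picture: collect each thread's calling-instruction address, bucket threads by address, and display the buckets so a stray thread stands out. A toy version with synthetic data:

        # Group threads by the address of their calling instruction (sketch).
        from collections import defaultdict

        def group_threads(call_addresses):
            """Map {thread_id: caller_address} -> {caller_address: [thread_ids]}."""
            groups = defaultdict(list)
            for tid, addr in sorted(call_addresses.items()):
                groups[addr].append(tid)
            return dict(groups)

        # Seven threads wait at one call site; thread 5 is somewhere else --
        # exactly the kind of defective thread the display is meant to expose.
        addrs = {t: 0x4007F0 for t in range(8)}
        addrs[5] = 0x400B22
        for addr, tids in group_threads(addrs).items():
            print(hex(addr), "->", tids)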

    14. Debugging a high performance computing program

      DOE Patents [OSTI]

      Gooding, Thomas M.

      2013-08-20

      Methods, apparatus, and computer program products are disclosed for debugging a high performance computing program by gathering lists of addresses of calling instructions for a plurality of threads of execution of the program, assigning the threads to groups in dependence upon the addresses, and displaying the groups to identify defective threads.

    15. Computer System,

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      System, Cluster, and Networking Summer Institute New Mexico Consortium and Los Alamos National Laboratory HOW TO APPLY Applications will be accepted JANUARY 5 - FEBRUARY 13, 2016 Computing and Information Technology undergraduate students are encouraged to apply. Must be a U.S. citizen. * Submit a current resume; * Official University Transcript (with spring courses posted and/or a copy of spring 2016 schedule) 3.0 GPA minimum; * One Letter of Recommendation from a Faculty Member; and * Letter of

    16. Computing Events

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Events Computing Events Spotlighting the most advanced scientific and technical applications in the world! Featuring exhibits of the latest and greatest technologies from industry, academia and government research organizations; many of these technologies will be seen for the first time in Denver. Supercomputing Conference 13 Denver, Colorado November 17-22, 2013 Spotlighting the most advanced scientific and technical applications in the world, SC13 will bring together the international

    17. TEC Working Group Topic Groups Manual Review

      Broader source: Energy.gov [DOE]

      This group is responsible for the update of DOE Manual 460.2-1, Radioactive Material Transportation Practices Manual.  This manual was issued on September 23, 2002, and establishes a set of...

    18. TEC Working Group Topic Groups Routing

      Broader source: Energy.gov [DOE]

      The Routing Topic Group has been established to examine topics of interest and relevance concerning routing of shipments of spent nuclear fuel (SNF) and high-level radioactive waste (HLW) to a...

    19. Spatial distribution of HTO activity in unsaturated soil depth in the vicinity of long-term release source

      SciTech Connect (OSTI)

      Golubev, A.; Golubeva, V.; Mavrin, S.

      2015-03-15

      Previous studies reported a correlation between the HTO activity distribution in the unsaturated soil layer and atmospheric long-term releases of HTO in the vicinity of the Savannah River Site. The Tritium Working Group of the BIOMASS Programme has performed a model-model intercomparison study of HTO transport from atmosphere to unsaturated soil and has evaluated the HTO activity distribution in the unsaturated soil layer in the vicinity of permanent atmospheric sources. The Tritium Working Group also reported such a correlation; however, the conclusion was that experimental data sets are needed to confirm it and to validate the appropriate computer models. (authors)

    20. Constructing the ASCI computational grid

      SciTech Connect (OSTI)

      BEIRIGER,JUDY I.; BIVENS,HUGH P.; HUMPHREYS,STEVEN L.; JOHNSON,WILBUR R.; RHEA,RONALD E.

      2000-06-01

      The Accelerated Strategic Computing Initiative (ASCI) computational grid is being constructed to interconnect the high performance computing resources of the nuclear weapons complex. The grid will simplify access to the diverse computing, storage, network, and visualization resources, and will enable the coordinated use of shared resources regardless of location. To match existing hardware platforms, required security services, and current simulation practices, the Globus MetaComputing Toolkit was selected to provide core grid services. The ASCI grid extends Globus functionality by operating as an independent grid, incorporating Kerberos-based security, interfacing to Sandia's Cplant(TM), and extending job monitoring services. To fully meet ASCI's needs, the architecture layers distributed work management and criteria-driven resource selection services on top of Globus. These services simplify the grid interface by allowing users to simply request "run code X anywhere". This paper describes the initial design and prototype of the ASCI grid.

    1. Venkatram Vishwanath | Argonne Leadership Computing Facility

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Venkatram Vishwanath Computer Scientist, Data Science Group Lead Venkatram Vishwanath Argonne National Laboratory 9700 S. Cass Avenue Building 240 - Rm. 4141 Argonne, IL 60439 630-252-4971 venkat@anl.gov Venkatram Vishwanath is a computer scientist at Argonne National Laboratory. He is the Data Science group lead at the Argonne leadership computing facility (ALCF). His current focus is on algorithms, system software, and workflows to facilitate data-centric applications on supercomputing

    2. JLab Users Group

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      JLab Users Group JLab Users Group User Liaison Home Users Group Program Advisory Committee User/Researcher Information UG Resources Background & Purpose Users Group Wiki By Laws Board of Directors Board of Directors Minutes Directory of Members Events At-A-Glance Member Institutions News Users Group Mailing

    3. The Ren Group - Home

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      resolution (1-2 nm). We continue to develop this approach by optimizing through empirical and computational methods to achieve high-resolution structures of single...

    4. Jason Hick! Storage Systems Group! NERSC User Group Meeting!

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Group! NERSC User Group Meeting! February 6, 2014 Storage Systems: 2014 and beyond The compute and storage systems: 2013 Production Clusters (Carver, PDSF, JGI, KBASE, HEP) 14x QDR; Global Scratch 3.6 PB, 5 x SFA12KE; /project 5 PB, DDN9900 & NexSAN; /home 250 TB, NetApp 5460; 50 PB stored, 240 PB capacity, 35 years of community data, HPSS; 16 x QDR IB; 2.2 PB Local Scratch 70 GB/s; 6.4 PB Local Scratch 140 GB/s; 16 x FDR IB; Ethernet & IB Fabric; Science Friendly Security

    5. Distributed Merge Trees

      SciTech Connect (OSTI)

      Morozov, Dmitriy; Weber, Gunther

      2013-01-08

      Improved simulations and sensors are producing datasets whose increasing complexity exhausts our ability to visualize and comprehend them directly. To cope with this problem, we can detect and extract significant features in the data and use them as the basis for subsequent analysis. Topological methods are valuable in this context because they provide robust and general feature definitions. As the growth of serial computational power has stalled, data analysis is becoming increasingly dependent on massively parallel machines. To satisfy the computational demand created by complex datasets, algorithms need to effectively utilize these computer architectures. The main strength of topological methods, their emphasis on global information, turns into an obstacle during parallelization. We present two approaches to alleviate this problem. We develop a distributed representation of the merge tree that avoids computing the global tree on a single processor and lets us parallelize subsequent queries. To account for the increasing number of cores per processor, we develop a new data structure that lets us take advantage of multiple shared-memory cores to parallelize the work on a single node. Finally, we present experiments that illustrate the strengths of our approach as well as help identify future challenges.
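
      To make "merge tree" concrete, here is a toy serial join tree on a 1-D scalar field: sweep vertices from high to low value and record the saddles where two previously separate components merge. The paper's actual contribution, the distributed representation and the shared-memory data structure, is deliberately not reproduced in this sketch.

        # Serial join-tree merge events on a 1-D field via union-find (sketch).

        def join_tree_events(values):
            """Sweep high-to-low; a vertex touching two components is a saddle."""
            parent = {}

            def find(i):
                while parent[i] != i:
                    parent[i] = parent[parent[i]]   # path halving
                    i = parent[i]
                return i

            events = []
            for i in sorted(range(len(values)), key=lambda i: -values[i]):
                parent[i] = i
                roots = {find(j) for j in (i - 1, i + 1) if j in parent}
                if len(roots) > 1:                  # two components meet here
                    events.append((values[i], i))
                for r in roots:
                    parent[r] = i
            return events

        print(join_tree_events([0, 3, 1, 4, 2]))    # -> [(1, 2)]: saddle at vertex 2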

    6. Computing and Computational Sciences Directorate - Computer Science and

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Mathematics Division Computer Science and Mathematics Division The Computer Science and Mathematics Division (CSMD) is ORNL's premier source of basic and applied research in high-performance computing, applied mathematics, and intelligent systems. Our mission includes basic research in computational sciences and application of advanced computing systems, computational, mathematical and analysis techniques to the solution of scientific problems of national importance. We seek to work

    7. Moltech Power Systems Group MPS Group | Open Energy Information

      Open Energy Info (EERE)

      Moltech Power Systems Group MPS Group Name: Moltech Power Systems Group (MPS Group) Place: China Product: China-based subsidiary of Shanghai Huayi Group...

    8. Hanergy Holdings Group Company Ltd formerly Farsighted Group...

      Open Energy Info (EERE)

      Hanergy Holdings Group Company Ltd formerly Farsighted Group aka Huarui Group Name: Hanergy Holdings Group Company Ltd (formerly Farsighted Group, aka...

    9. NERSC Users Group Monthly Meeting

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      August 25, 2016 Agenda ● Cori Phase II Update ● Data Day debrief ● NESAP & resources for porting to KNL ● Edison Scratch Filesystem Updates ● AY 2017 ERCAP Allocation Requests Cori Phase II Update Tina Declerck Computational Systems Group August 25, 2016 ● Prep for Cori Phase 2 ● Cori Phase 2 Installation ● System Arrival & Installation ● Current Status ● Projected Timeline ● NERSC pre-merge testing ● Merge plan ● Post Merge ● Acceptance Testing

    10. MiniBooNE Pion Group

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Pion Group

    11. NERSC Intern Wins Award for Computing Achievement

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Intern Wins Award for Computing Achievement NERSC Intern Wins Award for Computing Achievement March 27, 2013 Linda Vu, lvu@lbl.gov, +1 510 495 2402 Stephanie Cabanela, a student intern in the National Energy Research Scientific Computing Center's (NERSC) Operation Technologies Group was honored with the Bay Area Affiliate National Center for Women and Information Technology (NCWIT) Aspirations in Computing award on Saturday, March 16, 2013 in a ceremony in San Jose, CA. The award honors

    12. Distributed Optimization System

      DOE Patents [OSTI]

      Hurtado, John E.; Dohrmann, Clark R.; Robinett, III, Rush D.

      2004-11-30

      A search system and method for controlling multiple agents to optimize an objective using distributed sensing and cooperative control. The search agents can be physical agents, such as robots, or software agents for searching cyberspace. The objective can be: chemical sources, temperature sources, radiation sources, light sources, evaders, trespassers, explosive sources, time dependent sources, time independent sources, function surfaces, maximization points, minimization points, and optimal control of a system such as a communication system, an economy, a crane, and a multi-processor computer.
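
      In the spirit of the patent (though not its actual control law), cooperative source seeking can be sketched as agents that sense a field locally and contract toward the best reading reported by the group, with a little noise for exploration. The field and update rule below are illustrative.

        # Multi-agent cooperative search for a scalar source (illustrative).
        import random

        def field(x, y):                       # "chemical source" at (3, -1)
            return -((x - 3.0) ** 2 + (y + 1.0) ** 2)

        def search(n_agents=8, steps=500, pull=0.2, noise=0.1):
            agents = [(random.uniform(-5, 5), random.uniform(-5, 5))
                      for _ in range(n_agents)]
            for _ in range(steps):
                best = max(agents, key=lambda p: field(*p))   # shared by comms
                agents = [(x + pull * (best[0] - x) + random.gauss(0, noise),
                           y + pull * (best[1] - y) + random.gauss(0, noise))
                          for x, y in agents]
            return max(agents, key=lambda p: field(*p))

        random.seed(0)
        print(search())                        # should land near (3, -1)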

    13. Distributed Generation

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Untapped Value of Backup Generation While new guidelines and regulations such as IEEE (Institute of Electrical and Electronics Engineers) 1547 have come a long way in addressing interconnection standards for distributed generation, utilities have largely overlooked the untapped potential of these resources. Under certain conditions, these units (primarily backup generators) represent a significant source of power that can deliver utility services at lower costs than traditional centralized

    14. Distribution Category:

      Office of Legacy Management (LM)

      Distribution Category: Remedial Action and Decommissioning Program (UC-70A) DOE/EV-0005/48 ANL-OHS/HP-84-104 ARGONNE NATIONAL LABORATORY 9700 South Cass Avenue Argonne, Illinois 60439 FORMERLY UTILIZED MED/AEC SITES REMEDIAL ACTION PROGRAM RADIOLOGICAL SURVEY OF THE HARSHAW CHEMICAL COMPANY CLEVELAND, OHIO Prepared by R. A. Wynveen Associate Division Director, OHS W. H. Smith Senior Health Physicist C. M. Sholeen Health Physicist A. L. Justus Health Physicist K. F. Flynn Health Physicist

    15. Exploratory Experimentation and Computation

      SciTech Connect (OSTI)

      Bailey, David H.; Borwein, Jonathan M.

      2010-02-25

      We believe the mathematical research community is facing a great challenge to re-evaluate the role of proof in light of recent developments. On one hand, the growing power of current computer systems, of modern mathematical computing packages, and of the growing capacity to data-mine on the Internet, has provided marvelous resources to the research mathematician. On the other hand, the enormous complexity of many modern capstone results such as the Poincaré conjecture, Fermat's last theorem, and the classification of finite simple groups has raised questions as to how we can better ensure the integrity of modern mathematics. Yet as the need and prospects for inductive mathematics blossom, the requirement to ensure the role of proof is properly founded remains undiminished.

    16. Introduction to computers: Reference guide

      SciTech Connect (OSTI)

      Ligon, F.V.

      1995-04-01

      The "Introduction to Computers" program establishes formal partnerships with local school districts and community-based organizations, introduces computer literacy to precollege students and their parents, and encourages students to pursue Scientific, Mathematical, Engineering, and Technical (SET) careers. Hands-on assignments are given in each class, reinforcing the lesson taught. In addition, the program is designed to broaden the knowledge base of teachers in scientific/technical concepts, and Brookhaven National Laboratory continues to act as a liaison, offering educational outreach to diverse community organizations and groups. This manual contains the teacher's lesson plans and the student documentation for this introduction to computers course.

    17. HEP Computing | Argonne National Laboratory

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      HEP Computing A number of computing resources are available for HEP employees and visitors. Problem Report or Service Request - Send email to the computing group and log it on the Problem Report Page. (Note: You need to be connected to the ANL network or to be running VPN to submit a problem report.) New Users or Visitors - Start here if you are new to Argonne HEP. Password Help Email Windows Desktops Laptops Linux Users HEP Division FAQs - Find answers for commonly requested information here.

    18. Running Jobs by Group

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Running Jobs by Group Running Jobs by Group Daily Graph: Weekly Graph: Monthly Graph: Yearly Graph: 2 Year Graph: Last edited: 2016-04-29 11:34:43

    19. Running Jobs by Group

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Running Jobs by Group Running Jobs by Group Daily Graph: Weekly Graph: Monthly Graph: Yearly Graph: 2 Year Graph: Last edited: 2011-04-05 13:59:48...

    20. Pending Jobs by Group

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Pending Jobs by Group Pending Jobs by Group Daily Graph: Weekly Graph: Monthly Graph: Yearly Graph: 2 Year Graph: Last edited: 2011-04-05 14:00:14...

    1. UFD Working Group 2015

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      UFD Working Group 2015 - Sandia Energy

    2. Pending Jobs by Group

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Pending Jobs by Group Pending Jobs by Group Daily Graph: Weekly Graph: Monthly Graph: Yearly Graph: 2 Year Graph: Last edited: 2016-04-29 11:35:04

    3. HASQARD Focus Group

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      July 17, 2012 The meeting was called to order by Huei Meznarich, HASQARD Focus Group Chair at 2:05 PM on July 17, 2012 in Conference Room 308 at 2420 Stevens. Those attending were: Huei Meznarich (Focus Group Chair), Cliff Watkins (Focus Group Secretary), Glen Clark, Robert Elkins, Scot Fitzgerald, Larry Markel, Cindy Taylor, Sam Vega, Rich Weiss and Eric Wyse. I. Huei Meznarich requested comments on the minutes from the June 12, 2012 meeting. No HASQARD Focus Group members present stated any

    4. HASQARD Focus Group

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      June 18, 2013 The meeting was called to order by Huei Meznarich, HASQARD Focus Group Chair at 2:05 PM on June 18, 2013 in Conference Room 308 at 2420 Stevens. Those attending were: Huei Meznarich (Focus Group Chair), Cliff Watkins (Focus Group Secretary), Glen Clark, Scot Fitzgerald, Joan Kessner, Larry Markel, Karl Pool, Chris Sutton, Amanda Tuttle, Rich Weiss and Eric Wyse. I. Huei Meznarich requested comments on the minutes from the May 21, 2013 meeting. No HASQARD Focus Group members present

    5. NIF User Group

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      group NIF User Group The National Ignition Facility User Group provides an organized framework and independent vehicle for interaction between the scientists who use NIF for "Science Use of NIF" experiments and NIF management. Responsibility for NIF and the research programs carried out at NIF resides with the NIF Director. The NIF User Group advises the NIF Director on matters of concern to users, as well as providing a channel for communication for NIF users with funding agencies and

    6. TEC Communications Topic Group

      Office of Environmental Management (EM)

      procurement - Routing criteria / emergency preparedness Tribal Issues Topic Group * TEPP Navajo Nation (Tom Clawson) - 1404 - Needs Assessment * Identified strengths and...

    7. Interagency Sustainability Working Group

      Broader source: Energy.gov [DOE]

      The Interagency Sustainability Working Group (ISWG) is the coordinating body for sustainable buildings in the federal government.

    8. Tritium Focus Group- INEL

      Broader source: Energy.gov [DOE]

      Presentation from the 34th Tritium Focus Group Meeting held in Idaho Falls, Idaho on September 23-25, 2014.

    9. SSRL ETS Group

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      STANFORD SYNCHROTRON RADIATION LABORATORY Stanford Linear Accelerator Center Engineering & Technical Services Groups: Mechanical Services Group Mechanical Services Group Sharepoint ASD: Schedule Priorites Accelerator tech support - Call List Documentation: Engineering Notes, Drawings, and Accelerator Safety Documents Mechanical Systems: Accelerator Drawings Accelerator Pictures Accelerator Vacuum Systems (SSRL) LCW Vacuum Projects: Last Updated: February 8, 2007 Ben Scott

    10. Secure key storage and distribution

      SciTech Connect (OSTI)

      Agrawal, Punit

      2015-06-02

      This disclosure describes a distributed, fault-tolerant security system that enables the secure storage and distribution of private keys. In one implementation, the security system includes a plurality of computing resources that independently store private keys provided by publishers and encrypted using a single security system public key. To protect against malicious activity, the security system private key necessary to decrypt the publication private keys is not stored at any of the computing resources. Rather portions, or shares of the security system private key are stored at each of the computing resources within the security system and multiple security systems must communicate and share partial decryptions in order to decrypt the stored private key.
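
      The abstract does not name the sharing scheme, but the "shares of the security system private key" idea is standard threshold secret sharing, e.g. Shamir's scheme, sketched below over a prime field: any k of the n shares reconstruct the secret, while fewer reveal nothing.

        # Shamir-style (k, n) secret sharing over GF(P) -- illustrative of the
        # record's share-and-reconstruct idea, not the system's actual protocol.
        import random

        P = 2 ** 127 - 1                       # prime field modulus

        def split(secret, n, k):
            """Evaluate a random degree-(k-1) polynomial at x = 1..n."""
            coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
            f = lambda x: sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
            return [(x, f(x)) for x in range(1, n + 1)]

        def reconstruct(shares):
            """Lagrange interpolation at x = 0 recovers the secret."""
            secret = 0
            for i, (xi, yi) in enumerate(shares):
                num, den = 1, 1
                for j, (xj, _) in enumerate(shares):
                    if i != j:
                        num = num * (-xj) % P
                        den = den * (xi - xj) % P
                secret = (secret + yi * num * pow(den, P - 2, P)) % P
            return secret

        shares = split(123456789, n=5, k=3)
        assert reconstruct(shares[:3]) == 123456789
        assert reconstruct(shares[2:5]) == 123456789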

    11. An integrated distributed processing interface for supercomputers and workstations

      SciTech Connect (OSTI)

      Campbell, J.; McGavran, L.

      1989-01-01

      Access to documentation, communication between multiple processes running on heterogeneous computers, and animation of simulations of engineering problems are typically weak in most supercomputer environments. This presentation will describe how we are improving this situation in the Computer Research and Applications group at Los Alamos National Laboratory. We have developed a tool using UNIX filters and a SunView interface that allows users simple access to documentation via mouse-driven menus. We have also developed a distributed application that integrates a two-point boundary value problem on one of our Cray supercomputers. It is controlled and displayed graphically by a window interface running on a workstation screen. Our motivation for this research has been to improve the usual typewriter/static interface using language-independent controls to show the capabilities of the workstation/supercomputer combination. 8 refs.

    12. Computing Resources | Argonne Leadership Computing Facility

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Computing Resources Mira Cetus and Vesta Visualization Cluster Data and Networking Software JLSE Computing Resources Theory and Computing Sciences Building Argonne's Theory and Computing Sciences (TCS) building houses a wide variety of computing systems including some of the most powerful supercomputers in the world. The facility has 25,000 square feet of raised computer floor space and a pair of redundant 20 megavolt amperes electrical feeds from a 90 megawatt substation. The building also

    13. Large Group Visits

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Large Group Visits Large Group Visits All tours of the Museum are self-guided, but please schedule in advance so we can best accommodate your group. Contact Us: 1350 Central Avenue, (505) 667-4444. Let us know if you plan to bring a group of 10 or more. Parking for buses and RVs is available on Iris Street behind the Museum off of 15th St. See attached map (pdf).

    14. Grouped exposed metal heaters

      DOE Patents [OSTI]

      Vinegar, Harold J.; Coit, William George; Griffin, Peter Terry; Hamilton, Paul Taylor; Hsu, Chia-Fu; Mason, Stanley Leroy; Samuel, Allan James; Watkins, Ronnie Wade

      2012-07-31

      A system for treating a hydrocarbon containing formation is described. The system includes two or more groups of elongated heaters. The group includes two or more heaters placed in two or more openings in the formation. The heaters in the group are electrically coupled below the surface of the formation. The openings include at least partially uncased wellbores in a hydrocarbon layer of the formation. The groups are electrically configured such that current flow through the formation between at least two groups is inhibited. The heaters are configured to provide heat to the formation.

    15. Grouped exposed metal heaters

      DOE Patents [OSTI]

      Vinegar, Harold J.; Coit, William George; Griffin, Peter Terry; Hamilton, Paul Taylor; Hsu, Chia-Fu; Mason, Stanley Leroy; Samuel, Allan James; Watkins, Ronnie Wade

      2010-11-09

      A system for treating a hydrocarbon containing formation is described. The system includes two or more groups of elongated heaters. The group includes two or more heaters placed in two or more openings in the formation. The heaters in the group are electrically coupled below the surface of the formation. The openings include at least partially uncased wellbores in a hydrocarbon layer of the formation. The groups are electrically configured such that current flow through the formation between at least two groups is inhibited. The heaters are configured to provide heat to the formation.

    16. Advanced Large-scale Integrated Computational Environment

      Energy Science and Technology Software Center (OSTI)

      1998-10-27

      The ALICE Memory Snooper is a software application programming interface (API) and library for use in implementing computational steering systems. It allows distributed memory parallel programs to publish variables in the computation that may be accessed over the Internet. In this way, users can examine and even change the variables in their running application remotely. The API and library ensure the consistency of the variables across the distributed memory system.
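
      The publish-and-inspect pattern the abstract describes can be mimicked with nothing but the Python standard library; this is a conceptual stand-in, not the actual AMS API. A running loop exposes named variables over XML-RPC so a remote client can read them, or steer the run by changing one.

        # Conceptual sketch of computational steering (not the real AMS API).
        import threading
        from xmlrpc.server import SimpleXMLRPCServer

        published = {"timestep": 0, "dt": 0.01}    # variables the app exposes

        def get_var(name):
            return published[name]

        def set_var(name, value):                  # remote steering entry point
            published[name] = value
            return True

        server = SimpleXMLRPCServer(("localhost", 8765), allow_none=True,
                                    logRequests=False)
        server.register_function(get_var)
        server.register_function(set_var)
        threading.Thread(target=server.serve_forever, daemon=True).start()

        # The simulation loop re-reads possibly-updated values each iteration.
        for _ in range(3):
            published["timestep"] += 1
            dt = published["dt"]                   # a client may have changed it
        print(published)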

    17. Proceedings of the April 2011 Computational Needs for the Next...

      Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

      computational challenges associated with the operation and planning of the electric power system. ... Final Report and Other Materials from 2014 Resilient Electric Distribution Grid ...

    18. Supercomputing on a Shoestring: Cluster Computers at JLab | Jefferson...

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      which describe the distribution of electric charge and current inside the nucleon. To calculate the solution to a science problem, a cluster computer slices space up...

    19. Specific Group Hardware

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      jobs to pdsfgrid (via condor) which submits jobs to the compute nodes, monitoring the cluster work load, and uploading job information to ALICE file catalog. It is monitored with...

    20. High Performance Computing

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      HPC Home High-Performance Computing INL's high-performance computing center provides general use scientific computing capabilities to support the lab's efforts in advanced...

    1. Fermilab Steering Group Report

      SciTech Connect (OSTI)

      Steering Group, Fermilab; /Fermilab

      2007-12-01

      The Fermilab Steering Group has developed a plan to keep U.S. accelerator-based particle physics on the pathway to discovery, both at the Terascale with the LHC and the ILC and in the domain of neutrinos and precision physics with a high-intensity accelerator. The plan puts discovering Terascale physics with the LHC and the ILC as Fermilab's highest priority. While supporting ILC development, the plan creates opportunities for exciting science at the intensity frontier. If the ILC remains near the Global Design Effort's technically driven timeline, Fermilab would continue neutrino science with the NOvA experiment, using the NuMI (Neutrinos at the Main Injector) proton plan, scheduled to begin operating in 2011. If ILC construction must wait somewhat longer, Fermilab's plan proposes SNuMI, an upgrade of NuMI to create a more powerful neutrino beam. If the ILC start is postponed significantly, a central feature of the proposed Fermilab plan calls for building an intense proton facility, Project X, consisting of a linear accelerator with the currently planned characteristics of the ILC combined with Fermilab's existing Recycler Ring and the Main Injector accelerator. The major component of Project X is the linac. Cryomodules, radio-frequency distribution, cryogenics and instrumentation for the linac are the same as or similar to those used in the ILC at a scale of about one percent of a full ILC linac. Project X's intense proton beams would open a path to discovery in neutrino science and in precision physics with charged leptons and quarks. World-leading experiments would allow physicists to address key questions of the Quantum Universe: How did the universe come to be? Are there undiscovered principles of nature: new symmetries, new physical laws? Do all the particles and forces become one? What happened to the antimatter? Building Project X's ILC-like linac would offer substantial support for ILC development by accelerating the industrialization of ILC components

    2. Fermilab Steering Group Report

      SciTech Connect (OSTI)

      Beier, Eugene; Butler, Joel; Dawson, Sally; Edwards, Helen; Himel, Thomas; Holmes, Stephen; Kim, Young-Kee; Lankford, Andrew; McGinnis, David; Nagaitsev, Sergei; Raubenheimer, Tor; /SLAC /Fermilab

      2007-01-01

      The Fermilab Steering Group has developed a plan to keep U.S. accelerator-based particle physics on the pathway to discovery, both at the Terascale with the LHC and the ILC and in the domain of neutrinos and precision physics with a high-intensity accelerator. The plan puts discovering Terascale physics with the LHC and the ILC as Fermilab's highest priority. While supporting ILC development, the plan creates opportunities for exciting science at the intensity frontier. If the ILC remains near the Global Design Effort's technically driven timeline, Fermilab would continue neutrino science with the NOvA experiment, using the NuMI (Neutrinos at the Main Injector) proton plan, scheduled to begin operating in 2011. If ILC construction must wait somewhat longer, Fermilab's plan proposes SNuMI, an upgrade of NuMI to create a more powerful neutrino beam. If the ILC start is postponed significantly, a central feature of the proposed Fermilab plan calls for building an intense proton facility, Project X, consisting of a linear accelerator with the currently planned characteristics of the ILC combined with Fermilab's existing Recycler Ring and the Main Injector accelerator. The major component of Project X is the linac. Cryomodules, radio-frequency distribution, cryogenics and instrumentation for the linac are the same as or similar to those used in the ILC at a scale of about one percent of a full ILC linac. Project X's intense proton beams would open a path to discovery in neutrino science and in precision physics with charged leptons and quarks. World-leading experiments would allow physicists to address key questions of the Quantum Universe: How did the universe come to be? Are there undiscovered principles of nature: new symmetries, new physical laws? Do all the particles and forces become one? What happened to the antimatter? Building Project X's ILC-like linac would offer substantial support for ILC development by accelerating the industrialization of ILC components

    3. The computational physics program of the National MFE Computer Center

      SciTech Connect (OSTI)

      Mirin, A.A.

      1988-01-01

      The principal objective of the Computational Physics Group is to develop advanced numerical models for the investigation of plasma phenomena and the simulation of present and future magnetic confinement devices. Another major objective of the group is to develop efficient algorithms and programming techniques for current and future generations of supercomputers. The computational physics group is involved in several areas of fusion research. One main area is the application of Fokker-Planck/quasilinear codes to tokamaks. Another major area is the investigation of resistive magnetohydrodynamics in three dimensions, with applications to compact toroids. Another major area is the investigation of kinetic instabilities using a 3-D particle code. This work is often coupled with the task of numerically generating equilibria which model experimental devices. Ways to apply statistical closure approximations to study tokamak-edge plasma turbulence are being examined. In addition to these computational physics studies, the group has developed a number of linear systems solvers for general classes of physics problems and has been making a major effort at ascertaining how to efficiently utilize multiprocessor computers.

    4. Logistical Multicast for Data Distribution

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Logistical Multicast for Data Distribution Jason Zurawski, Martin Swany Micah Beck, Ying Ding Department of Computer and Information Sciences Department of Computer Science University of Delaware, Newark, DE 19716 University of Tennessee, Knoxville, TN 37996 {zurawski, swany}@cis.udel.edu {mbeck, ying}@cs.utk.edu Abstract This paper describes a simple scheduling procedure for use in multicast data distribution within a logistical networking infrastructure. The goal of our scheduler is to

    5. and Control of Power Systems Using Distributed Synchrophasors

      Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

      ... be offered through the Electrical & Computer Engineering ... program focused on distribution systems, substation ... of Synchrophasors in transmission-level power systems, and ...

    6. TEC Working Group Topic Groups Rail Key Documents Intermodal Subgroup |

      Office of Environmental Management (EM)

      Department of Energy Intermodal Subgroup TEC Working Group Topic Groups Rail Key Documents Intermodal Subgroup Intermodal Subgroup Draft Work Plan (206.83 KB) More Documents & Publications TEC Working Group Topic Groups Rail Key Documents Radiation Monitoring Subgroup TEC Working Group Topic Groups Rail Conference Call Summaries Intermodal Subgroup TEC Working Group Topic Groups Rail Conference Call Summaries Rail Topic Group

    7. HASQARD Focus Group

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      January 15, 2013 The meeting was called to order by Huei Meznarich, HASQARD Focus Group Chair at 2:02 PM on January 15, 2013 in Conference Room 308 at 2420 Stevens. Those attending were: Huei Meznarich (Focus Group Chair), Cliff Watkins (Focus Group Secretary), Glen Clark, Scot Fitzgerald, Larry Markel, Karl Pool, Dave St. John, Chris Sutton, Chris Thompson, Steve Trent, Amanda Tuttle and Eric Wyse. I. Huei Meznarich requested comments on the minutes from the December 18, 2012 meeting. One issue

    8. HASQARD Focus Group

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      December 17, 2013 The meeting was called to order by Huei Meznarich, HASQARD Focus Group Chair at 2:09 PM on December 17, 2013 in Conference Room 308 at 2420 Stevens. Those attending were: Huei Meznarich (Focus Group Chair), Cliff Watkins (Focus Group Secretary), Taffy Almeida, Joe Archuleta, Jeff Cheadle, Glen Clark, Robert Elkins, Scot Fitzgerald, Joan Kessner, Karl Pool, Chris Sutton, Amanda Tuttle, Rich Weiss and Eric Wyse. I. Huei Meznarich asked if there were any comments on the minutes from the

    9. HASQARD Focus Group

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      October 22, 2015 The meeting was called to order by Cliff Watkins, HASQARD Focus Group Secretary at 2:05 PM on October 22, 2015 in Conference Room 328 at 2420 Stevens. Those attending were: Jonathan Sanwald (Mission Support Alliance (MSA), Focus Group Chair), Cliff Watkins (Corporate Allocation Services, DOE-RL Support Contractor, Focus Group Secretary), Glen Clark (Washington River Protection Solution (WRPS)), Fred Dunhour (DOE-ORP), Joan Kessner (Washington Closure Hanford (WCH)), Karl Pool (Pacific

    10. HASQARD Focus Group

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      January 26, 2016 The meeting was called to order by Jonathan Sanwald, HASQARD Focus Group Chair at 2:05 PM on January 26, 2016 in Conference Room 308 at 2420 Stevens. Those attending were: Jonathan Sanwald (Mission Support Alliance (MSA), Focus Group Chair), Cliff Watkins (Corporate Allocation Services, DOE-RL Support Contractor, Focus Group Secretary), Taffy Almeida (Pacific Northwest National Laboratory (PNNL)), Jeff Cheadle (DOE-ORP), Glen Clark (Washington River Protection Solution (WRPS)), Fred

    11. HASQARD Focus Group

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      April 19, 2016 The meeting was called to order by Jonathan Sanwald, HASQARD Focus Group Chair at 2:10 PM on April 19, 2016 in Conference Room 308 at 2420 Stevens. Those attending were: Jonathan Sanwald (Mission Support Alliance (MSA), Focus Group Chair), Cliff Watkins (Corporate Allocation Services, DOE-RL Support Contractor, Focus Group Secretary), Marcus Aranda (Wastren Advantage Inc. Wastren Hanford Laboratory (WHL)), Joe Archuleta (CH2M HILL Plateau Remediation Company

    12. TEC Communications Topic Group

      Office of Environmental Management (EM)

      Tribal Issues Topic Group Judith Holm, Chair April 21, 2004 Albuquerque, NM Tribal Issues Topic Group * February Tribal Summit with Secretary of Energy (Kristen Ellis, CI) - Held in conjunction with NCAI mid-year conference - First Summit held in response to DOE Indian Policy - Addressed barriers to communication and developing framework for interaction Tribal Issues Topic Group * Summit (continued) - Federal Register Notice published in March soliciting input on how to improve summit process

    13. Tritium Focus Group Meeting

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Meeting Information Tritium Focus Group Charter (pdf) Hotel Information Classified Session Information Los Alamos Restaurants (pdf) LANL Information Visiting Los Alamos Area Map ...

    14. ALS Communications Group

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      ALS Communications Group From left: Ashley White, Lori Tamura, Keri Troutman, and Carina Braun. The ALS Communications staff maintain the ALS Web site; write and edit all...

    15. HASQARD Focus Group

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      October 16, 2012 The meeting was called to order by Huei Meznarich, HASQARD Focus Group Chair at 2:04 PM on October 16, 2012 in Conference Room 308 at 2420 Stevens. Those attending were: Huei Meznarich (Focus Group Chair), Cliff Watkins (Focus Group Secretary), Jeff Cheadle, Glen Clark, Robert Elkins, Larry Markel, Mary McCormick-Barger, Karl Pool, Noe'l Smith-Jackson, Chris Sutton, Steve Trent, Amanda Tuttle, Sam Vega, Rich Weiss and Eric Wyse. New personnel have joined the Focus Group since the last

    16. HASQARD Focus Group

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      November 27, 2012 The meeting was called to order by Huei Meznarich, HASQARD Focus Group Chair at 2:09 PM on November 27, 2012 in Conference Room 308 at 2420 Stevens. Those attending were: Huei Meznarich (Focus Group Chair), Cliff Watkins (Focus Group Secretary), Glen Clark, Robert Elkins, Joan Kessner, Larry Markel, Mary McCormick-Barger, Steve Trent, and Rich Weiss. I. Huei Meznarich requested comments on the minutes from the October 16, 2012 meeting. No HASQARD Focus Group members present stated any

    17. HASQARD Focus Group

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      August 20, 2013 The meeting was called to order by Huei Meznarich, HASQARD Focus Group Chair at 2:05 PM on August 20, 2013 in Conference Room 308 at 2420 Stevens. Those attending were: Huei Meznarich (Focus Group Chair), Cliff Watkins (Focus Group Secretary), Taffy Almeida, Glen Clark, Robert Elkins, Scot Fitzgerald, Joan Kessner, Steve Smith, Rich Weiss and Eric Wyse. I. Huei Meznarich asked if there were any comments on the minutes from the July 23, 2013 meeting. No Focus Group members stated they had

    18. HASQARD Focus Group

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      April 15, 2014 The meeting was called to order by Huei Meznarich, HASQARD Focus Group Chair at 2:10 PM on April 15, 2014 in Conference Room 308 at 2420 Stevens. Those attending were: Huei Meznarich (Focus Group Chair), Cliff Watkins (Focus Group Secretary), Glen Clark, Robert Elkins, Scot Fitzgerald, Mary McCormick-Barger, Karl Pool, Noe'l Smith-Jackson, and Eric Wyse. I. Huei Meznarich asked if there were any comments on the minutes from the March 18, 2014 meeting. No Focus Group members stated they

    19. Hydrogen Technologies Group

      SciTech Connect (OSTI)

      Not Available

      2008-03-01

      The Hydrogen Technologies Group at the National Renewable Energy Laboratory advances the Hydrogen Technologies and Systems Center's mission by researching a variety of hydrogen technologies.

    20. The Chaninik Wind Group

      Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

      Chaninik Wind Group Villages: Kongiganak pop. 359, Kwigillingok pop. 388, Kipnuk pop. 644, Tuntutuliak pop. 370. On average, 24% of families are below the poverty line. ...

    1. SCM Working Group

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Modeling Working Group Translator Update Shaocheng Xie Lawrence Livermore National Laboratory Outline 1. Data development in support of CMWG * Climate modeling best estimate data * ...

    2. Buildings Sector Working Group

      U.S. Energy Information Administration (EIA) Indexed Site

      Buildings Sector Working Group, Forrestal 2E-069, July 22, 2013. * Residential projects - RECS update - Lighting model - Equipment, shell subsidies - ENERGY STAR benchmarking - Housing stock formation ...

    3. Mobile computing device configured to compute irradiance, glint, and glare of the sun

      DOE Patents [OSTI]

      Gupta, Vipin P; Ho, Clifford K; Khalsa, Siri Sahib

      2014-03-11

      Described herein are technologies pertaining to computing the solar irradiance distribution on a surface of a receiver in a concentrating solar power system or glint/glare emitted from a reflective entity. A mobile computing device includes at least one camera that captures images of the Sun and the entity of interest, wherein the images have pluralities of pixels having respective pluralities of intensity values. Based upon the intensity values of the pixels in the respective images, the solar irradiance distribution on the surface of the entity or glint/glare corresponding to the entity is computed by the mobile computing device.
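
      The abstract gives the computation only at a high level. A minimal sketch of the general idea, scaling the entity's pixel intensities against the imaged solar disk as a radiometric reference, might look like the following; the function name, the exposure-matching assumption, and the scaling model are all illustrative, not the patented method.

        # Illustrative irradiance estimate from pixel intensities. Assumes both
        # images come from the same camera at the same exposure (a simplification).
        import numpy as np

        def relative_irradiance(entity_img, sun_img, dni=1000.0):
            """Estimate an irradiance map (W/m^2) for a reflective entity.

            dni : direct normal irradiance assigned to the Sun's mean
                  pixel intensity, used as the radiometric reference.
            """
            sun_pixels = sun_img[sun_img > 0.9 * sun_img.max()]  # bright solar disk
            reference = sun_pixels.mean()
            return dni * entity_img / reference

        # Hypothetical usage with synthetic images:
        sun = np.zeros((100, 100)); sun[40:60, 40:60] = 250.0
        receiver = np.full((100, 100), 50.0)
        irr = relative_irradiance(receiver, sun)
        print(irr.max())   # 200.0 W/m^2 under these assumed intensities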

    4. Unix File Groups at NERSC

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      A user's default group is the same as their username. NERSC users usually belong to ... Useful Unix Group Commands: groups username (list group membership), id ...
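
      The snippet above is truncated, but the group-membership lookups it refers to are easy to reproduce programmatically. A small illustrative sketch using Python's standard grp and pwd modules (Unix-only; this is not NERSC tooling):

        # Report the current user's primary and secondary Unix groups,
        # mirroring the `groups` and `id` commands mentioned above.
        import grp, os, pwd

        entry = pwd.getpwuid(os.getuid())
        username = entry.pw_name
        primary = grp.getgrgid(entry.pw_gid).gr_name
        secondary = [g.gr_name for g in grp.getgrall() if username in g.gr_mem]
        print(f"{username}: groups = {sorted(set([primary] + secondary))}")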

    5. Exascale Hardware Architectures Working Group

      SciTech Connect (OSTI)

      Hemmert, S; Ang, J; Chiang, P; Carnes, B; Doerfler, D; Leininger, M; Dosanjh, S; Fields, P; Koch, K; Laros, J; Noe, J; Quinn, T; Torrellas, J; Vetter, J; Wampler, C; White, A

      2011-03-15

      The ASC Exascale Hardware Architecture working group is challenged to provide input on the following areas impacting the future use and usability of potential exascale computer systems: processor, memory, and interconnect architectures, as well as the power and resilience of these systems. Going forward, there are many challenging issues that will need to be addressed. First, power constraints in processor technologies will lead to steady increases in parallelism within a socket. Additionally, not all cores may be fully independent or fully general purpose. Second, there is a clear trend toward less balanced machines, in terms of compute capability compared to memory and interconnect performance. To mitigate the memory issues, memory technologies will introduce 3D stacking, eventually moving on-socket and likely on-die, providing greatly increased bandwidth but likely also smaller memory capacity per core. Off-socket memory, possibly in the form of non-volatile memory, will create a complex memory hierarchy. Third, communication energy will dominate the energy required to compute, so interconnect power and bandwidth will have a significant impact. All of the above changes are driven by the need for greatly increased energy efficiency, as current technology would prove unsuitable for exascale due to the unsustainable power requirements of such a system. These changes will have the most significant impact on programming models and algorithms, but they will be felt across all layers of the machine. There is a clear need to engage all ASC working groups in planning for how to deal with technological changes of this magnitude. The primary function of the Hardware Architecture Working Group is to facilitate codesign with hardware vendors to ensure future exascale platforms are capable of efficiently supporting the ASC applications, which in turn need to meet the mission needs of the NNSA Stockpile Stewardship Program. This issue is
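
      A back-of-the-envelope calculation shows why the working group expects communication energy to dominate. The per-operation energies below are assumed values in the range commonly quoted in exascale studies, not figures taken from this report:

        # Rough energy budget for one stencil update per grid point:
        # a handful of flops versus a few off-chip memory accesses.
        PJ_PER_FLOP = 10.0       # assumed ~10 pJ per double-precision flop
        PJ_PER_BYTE = 60.0       # assumed ~60 pJ per byte moved off-socket

        flops_per_point = 8
        bytes_per_point = 5 * 8   # five 8-byte operands fetched

        compute_pj = flops_per_point * PJ_PER_FLOP     # 80 pJ of arithmetic
        movement_pj = bytes_per_point * PJ_PER_BYTE    # 2400 pJ of data movement
        print(f"data movement costs {movement_pj / compute_pj:.0f}x the arithmetic")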

    6. Jason Hick, Storage Systems Group: NERSC User Group Storage Update

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      NERSC User Group Storage Update, Feb 26, 2014. The compute and storage systems, 2014: Sponsored Compute Systems (Carver, PDSF, JGI, KBASE, HEP); 8 x FDR IB; /global/scratch 4 PB; /project 5 PB; /home 250 TB; HPSS 45 PB stored, 240 PB capacity, 40 years of community data, 48 GB/s; 2.2 PB Local Scratch, 70 GB/s; 6.4 PB Local Scratch, 140 GB/s; 80 GB/s; Ethernet & IB Fabric; Science Friendly Security; Production Monitoring; Power Efficiency; WAN 2 x 10 Gb, 1 x 100 Gb Science Data Network

    7. TEC Working Group Topic Groups Routing Meeting Summaries

      Office of Environmental Management (EM)

      Meeting Summaries TEC Working Group Topic Groups Routing Meeting Summaries MEETING SUMMARIES Atlanta TEC Meeting, Routing Topic Group Summary (101.72 KB) More Documents & Publications TEC Meeting Summaries - January - February 2007 TEC Working Group Topic Groups Rail Meeting Summaries TEC Working Group Topic Groups Rail Conference Call Summaries Rail Topic Group

    8. TEC Working Group Topic Groups Rail Conference Call Summaries...

      Office of Environmental Management (EM)

      Summaries Rail Topic Group TEC Working Group Topic Groups Rail Conference Call Summaries Rail Topic Group Rail Topic Group: May 17, 2007; January 16, 2007; ...

    9. Beyond Moore computing research challenge workshop report.

      SciTech Connect (OSTI)

      Huey, Mark C.; Aidun, John Bahram

      2013-10-01

      We summarize the presentations and break out session discussions from the in-house workshop that was held on 11 July 2013 to acquaint a wider group of Sandians with the Beyond Moore Computing research challenge.

    10. Distributed processor allocation for launching applications in a massively connected processors complex

      DOE Patents [OSTI]

      Pedretti, Kevin

      2008-11-18

      A compute processor allocator architecture for allocating compute processors to run applications in a multiple processor computing apparatus is distributed among a subset of processors within the computing apparatus. Each processor of the subset includes a compute processor allocator. The compute processor allocators can share a common database of information pertinent to compute processor allocation. A communication path permits retrieval of information from the database independently of the compute processor allocators.
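
      As a toy model of the architecture the abstract describes, the sketch below places an allocator object on several nodes and lets them share a common table of free processors; the class names and locking scheme are illustrative assumptions, not the patented design.

        # Toy model: several allocators sharing one database of free processors.
        import threading

        class SharedAllocationDB:
            """Common database of compute-processor state shared by allocators."""
            def __init__(self, n_processors):
                self.free = set(range(n_processors))
                self.lock = threading.Lock()

        class ComputeProcessorAllocator:
            """One allocator instance; a subset of nodes each run one."""
            def __init__(self, db):
                self.db = db

            def allocate(self, n):
                with self.db.lock:
                    if len(self.db.free) < n:
                        return None                          # not enough processors
                    return [self.db.free.pop() for _ in range(n)]

            def release(self, procs):
                with self.db.lock:
                    self.db.free.update(procs)

        db = SharedAllocationDB(1024)
        allocator_a = ComputeProcessorAllocator(db)
        allocator_b = ComputeProcessorAllocator(db)
        job = allocator_a.allocate(16)    # any allocator may serve a request
        allocator_b.release(job)          # and any allocator may free it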

    11. Trails Working Group

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Trails Working Group Our mission is to inventory, map, and prepare historical reports on the many trails used at LANL. Contact: Environmental Communication & Public Involvement, P.O. Box 1663, MS M996, Los Alamos, NM 87545, (505) 667-0216, Email. The LANL Trails Working Group inventories, maps, and prepares historical reports on the many trails used at LANL. Some of these trails are ancient pueblo footpaths that continue to be used for recreational hiking today. Some serve as quiet

    12. HASQARD Focus Group

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      June 12, 2012 The meeting was called to order by Huei Meznarich, HASQARD Focus Group Chair at 2:06 PM on June 12, 2012 in Conference Room 308 at 2420 Stevens. Those attending were: Huei Meznarich (Focus Group Chair), Cliff Watkins (Focus Group Secretary), Jeff Cheadle, Glen Clark, Shannan Johnson, Joan Kessner, Larry Markel, Karl Pool, Steve Smith, Noe'l Smith-Jackson, Chris Sutton, Cindy Taylor, Chris Thompson, Amanda Tuttle, Sam Vega, Rick Warriner and Eric Wyse. I. Huei Meznarich requested comments on the

    13. HASQARD Focus Group

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      August 21, 2012 The meeting was called to order by Huei Meznarich, HASQARD Focus Group Chair at 2:10 PM on August 21, 2012 in an alternate Conference Room in 2420 Stevens. Those attending were: Huei Meznarich (Focus Group Chair), Cliff Watkins (Focus Group Secretary), Lynn Albin, Glen Clark, Robert Elkins, Scot Fitzgerald, Joan Kessner, Larry Markel, Steve Smith, Chris Sutton, Chris Thompson, Amanda Tuttle, and Rich Weiss. I. Because the meeting was scheduled to take place in Room 308 and a glitch in

    14. HASQARD Focus Group

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      April 16, 2013 The beginning of the meeting was delayed due to an unannounced loss of the conference room scheduled for the meeting. After securing another meeting location, the meeting was called to order by Huei Meznarich, HASQARD Focus Group Chair at 2:18 PM on April 16, 2013 in Conference Room 156 at 2420 Stevens. Those attending were: Huei Meznarich (Focus Group Chair), Cliff Watkins (Focus Group Secretary), Jeff Cheadle, Glen Clark, Joan Kessner, Larry Markel, Mary McCormick-Barger, Karl Pool,

    15. HASQARD Focus Group

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      November 19, 2013 The meeting was called to order by Huei Meznarich, HASQARD Focus Group Chair at 2:05 PM on November 19, 2013 in Conference Room 308 at 2420 Stevens. Those attending were: Huei Meznarich (Focus Group Chair), Cliff Watkins (Focus Group Secretary), Taffy Almeida, Joe Archuleta, Mike Barnes, Jeff Cheadle, Glen Clark, Robert Elkins, Scot Fitzgerald, Joan Kessner, Mary McCormick-Barger, Noe'l Smith-Jackson, Chris Sutton, Amanda Tuttle, Rich Weiss and Eric Wyse. I. Huei Meznarich asked if

    16. HASQARD Focus Group

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      January 28, 2014 The meeting was called to order by Huei Meznarich, HASQARD Focus Group Chair at 2:04 PM on January 28, 2014 in Conference Room 308 at 2420 Stevens. Those attending were: Huei Meznarich (Focus Group Chair), Cliff Watkins (Focus Group Secretary), Joe Archuleta, Glen Clark, Robert Elkins, Scot Fitzgerald, Joan Kessner, Mary McCormick-Barger, Karl Pool, Noe'l Smith-Jackson, Chris Sutton, Chris Thompson, Rich Weiss and Eric Wyse. I. Huei Meznarich asked if there were any comments on

    17. HASQARD Focus Group

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      February 25, 2014 The meeting was called to order by Huei Meznarich, HASQARD Focus Group Chair at 2:07 PM on February 25, 2014 in Conference Room 308 at 2420 Stevens. Those attending were: Huei Meznarich (Focus Group Chair), Cliff Watkins (Focus Group Secretary), Lynn Albin, Taffy Almeida, Joe Archuleta, Glen Clark, Robert Elkins, Scot Fitzgerald, Joan Kessner, Mary McCormick-Barger, Karl Pool, Noe'l Smith-Jackson, Chris Sutton, Chris Thompson, and Eric Wyse. I. Huei Meznarich asked if there were any

    18. HASQARD Focus Group

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      March 18, 2014 The meeting was called to order by Huei Meznarich, HASQARD Focus Group Chair at 2:05 PM on March 18, 2014 in Conference Room 308 at 2420 Stevens. Those attending were: Huei Meznarich (Focus Group Chair), Cliff Watkins (Focus Group Secretary), Joe Archuleta, Glen Clark, Robert Elkins, Scot Fitzgerald, Joan Kessner, Mary McCormick-Barger, Karl Pool, Noe'l Smith-Jackson, Rich Weiss, and Eric Wyse. I. Huei Meznarich asked if there were any comments on the minutes from the February 25, 2014

    19. HASQARD Focus Group

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      May 20, 2014 The meeting was called to order by Huei Meznarich, HASQARD Focus Group Chair at 2:05 PM on May 20, 2014 in Conference Room 308 at 2420 Stevens. Those attending were: Huei Meznarich (Focus Group Chair), Cliff Watkins (Focus Group Secretary), Lynn Albin, Taffy Almeida, Joe Archuleta, Glen Clark, Robert Elkins, Scot Fitzgerald, Shannan Johnson, Joan Kessner, Mary McCormick-Barger, Craig Perkins, Karl Pool, Noe'l Smith-Jackson, Chris Sutton, Chris Thompson and Eric Wyse. I. Acknowledging the

    20. HASQARD Focus Group

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      June 12, 2014 The meeting was called to order by Huei Meznarich, HASQARD Focus Group Chair at 2:07 PM on June 12, 2014 in Conference Room 308 at 2420 Stevens. Those attending were: Huei Meznarich (Focus Group Chair), Cliff Watkins (Focus Group Secretary), Joe Archuleta, Sara Champoux, Glen Clark, Jim Douglas, Robert Elkins, Scot Fitzgerald, Joan Kessner, Jan McCallum, Mary McCormick-Barger, Karl Pool, Noe'l Smith-Jackson, Rich Weiss and Eric Wyse. I. Acknowledging the presence of new and/or infrequent

    1. HASQARD Focus Group

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      June 17, 2014 The meeting was called to order by Huei Meznarich, HASQARD Focus Group Chair at 2:10 PM on June 17, 2014 in Conference Room 308 at 2420 Stevens. Those attending were: Huei Meznarich (Focus Group Chair), Cliff Watkins (Focus Group Secretary), Robert Elkins, Shannan Johnson, Joan Kessner, Jan McCallum, Craig Perkins, Karl Pool, Chris Sutton and Rich Weiss. I. Because of the short time since the last meeting, Huei Meznarich stated that the minutes from the June 12, 2014 meeting have not yet

    2. Group key management

      SciTech Connect (OSTI)

      Dunigan, T.; Cao, C.

      1997-08-01

      This report describes an architecture and implementation for doing group key management over a data communications network. The architecture describes a protocol for establishing a shared encryption key among an authenticated and authorized collection of network entities. Group access requires one or more authorization certificates. The implementation includes a simple public key and certificate infrastructure. Multicast is used for some of the key management messages. An application programming interface multiplexes key management and user application messages. An implementation using the new IP security protocols is postulated. The architecture is compared with other group key management proposals, and the performance and the limitations of the implementation are described.
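
      A minimal sketch of the central step, a key manager wrapping a fresh group key under each authorized member's public key, is shown below using the pyca/cryptography package; the certificate checks, multicast transport, and API multiplexing described in the report are omitted, and the member names are hypothetical.

        # Illustrative group-key distribution: wrap one shared symmetric key
        # under each member's RSA public key (a stand-in for certificates).
        import os
        from cryptography.hazmat.primitives import hashes
        from cryptography.hazmat.primitives.asymmetric import rsa, padding

        oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                            algorithm=hashes.SHA256(), label=None)

        members = {name: rsa.generate_private_key(public_exponent=65537, key_size=2048)
                   for name in ("alice", "bob", "carol")}

        group_key = os.urandom(32)   # fresh 256-bit shared encryption key

        # The key manager sends each member its wrapped copy; in the report
        # these would travel as multicast key-management messages.
        wrapped = {name: key.public_key().encrypt(group_key, oaep)
                   for name, key in members.items()}

        # Any member recovers the same group key with its private key.
        assert members["bob"].decrypt(wrapped["bob"], oaep) == group_key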

    3. Bio-Derived Liquids to Hydrogen Distributed Reforming Targets (Presentation)

      Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

      Bio-Derived Liquids to Hydrogen Distributed Reforming Targets. Arlene F. Anderson, Technology Development Manager, U.S. DOE Office of Energy Efficiency and Renewable Energy, Hydrogen, Fuel Cells and Infrastructure Technologies Program. Bio-Derived Liquids to Hydrogen Distributed Reforming Working Group and Hydrogen Production Technical Team Review, November 6, 2007. The Bio-Derived Liquids to Hydrogen Distributed Reforming Working Group (BILIWG), launched

    4. System-wide power management control via clock distribution network

      DOE Patents [OSTI]

      Coteus, Paul W.; Gara, Alan; Gooding, Thomas M.; Haring, Rudolf A.; Kopcsay, Gerard V.; Liebsch, Thomas A.; Reed, Don D.

      2015-05-19

      An apparatus, method and computer program product for automatically controlling power dissipation of a parallel computing system that includes a plurality of processors. A computing device issues a command to the parallel computing system. A clock pulse-width modulator encodes the command in a system clock signal to be distributed to the plurality of processors. The plurality of processors in the parallel computing system receive the system clock signal including the encoded command, and adjusts power dissipation according to the encoded command.
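
      The abstract's key idea, encoding a command in the clock waveform's pulse width, can be illustrated with a toy software model; the duty-cycle values and framing below are assumptions for illustration, not the patented encoder.

        # Toy model: encode one command bit per clock period as the duty cycle
        # (25% = 0, 75% = 1), then decode it back from the sampled waveform.
        SAMPLES_PER_PERIOD = 8

        def encode(bits):
            wave = []
            for b in bits:
                high = 6 if b else 2              # 75% vs 25% duty cycle
                wave += [1] * high + [0] * (SAMPLES_PER_PERIOD - high)
            return wave

        def decode(wave):
            bits = []
            for i in range(0, len(wave), SAMPLES_PER_PERIOD):
                period = wave[i:i + SAMPLES_PER_PERIOD]
                bits.append(1 if sum(period) > SAMPLES_PER_PERIOD // 2 else 0)
            return bits

        command = [1, 0, 1, 1]                    # hypothetical power-control opcode
        assert decode(encode(command)) == command # clock still carries its edges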

    5. Introduction to High Performance Computing Using GPUs

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      NERSC, NVIDIA, and The Portland Group presented a one-day workshop "Introduction to High Performance Computing Using GPUs" on July 11, 2013 in Room 250 of Sutardja Dai Hall on the University of California, Berkeley, campus. Registration was free and open to all NERSC users; Berkeley Lab Researchers; UC students, faculty, and staff; and users of the Oak Ridge Leadership Computing Facility. This workshop

    6. InterGroup Protocols

      Energy Science and Technology Software Center (OSTI)

      2003-04-02

      Existing reliable ordered group communication protocols have been developed for local-area networks and do not in general scale well to a large number of nodes and wide-area networks. The InterGroup suite of protocols is a scalable group communication system that introduces an unusual approach to handling group membership, and supports a receiver-oriented selection of service. The protocols are intended for a wide-area network, with a large number of nodes, that has highly variable delays and a high message loss rate, such as the Internet. The levels of the message delivery service range from unreliable unordered to reliable timestamp ordered.
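
      A minimal sketch of the strictest service level named above, reliable timestamp-ordered delivery, buffers out-of-order messages until every earlier timestamp has arrived. The sketch below is illustrative only and assumes gap-free integer timestamps; the InterGroup protocols themselves also handle membership and loss recovery.

        # Deliver messages strictly in timestamp order, holding back any
        # message whose predecessors have not yet been received.
        import heapq

        class TimestampOrderedDelivery:
            def __init__(self, next_ts=0):
                self.next_ts = next_ts
                self.heap = []                 # out-of-order buffer

            def receive(self, ts, payload):
                heapq.heappush(self.heap, (ts, payload))
                delivered = []
                while self.heap and self.heap[0][0] == self.next_ts:
                    delivered.append(heapq.heappop(self.heap)[1])
                    self.next_ts += 1
                return delivered               # messages now safe to deliver

        d = TimestampOrderedDelivery()
        print(d.receive(1, "b"))               # [] - still waiting for timestamp 0
        print(d.receive(0, "a"))               # ['a', 'b'] - gap filled, both deliver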

    7. Tritium Focus Group

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      matters related to tritium. Contacts Mike Rogers (505) 665-2513 Email Chandra Savage Marsden (505) 664-0183 Email The Tritium Focus Group consists of participants from member...

    8. Strategic Initiatives Work Group

      Broader source: Energy.gov [DOE]

      The Work Group, composed of members representing DOE, contractors, and workers, provides a forum for information sharing, data collection and analysis, and the identification of best practices and initiatives to enhance safety performance and safety culture across the Complex.

    9. HASQARD Focus Group

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Markel, Huei Meznarich, Karl Pool, Noe'l Smith-Jackson, Andrew Stevens, Genesis Thomas, ... the radar of the DOE-HQ QA group. Noe'l Smith-Jackson commented that Ecology was always ...

    10. HASQARD Focus Group

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Elkins, Mary McCormick-Barger, Noe'l Smith-Jackson, Chris Sutton, Amanda Tuttle, Rick ... Noe'l Smith-Jackson stated that the HASQARD document is the work of the Focus Group not ...