National Library of Energy BETA

Sample records for analysis including computer

  1. Quantitative Analysis of Biofuel Sustainability, Including Land...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    life cycle analysis of biofuels continues to improve: feedstock production; feedstock logistics, storage, and transportation; feedstock conversion; fuel transportation and...

  2. Human-computer interface including haptically controlled interactions

    DOE Patents [OSTI]

    Anderson, Thomas G.

    2005-10-11

    The present invention provides a method of human-computer interfacing that provides haptic feedback to control interface interactions such as scrolling or zooming within an application. Haptic feedback in the present method allows the user more intuitive control of the interface interactions, and allows the user's visual focus to remain on the application. The method comprises providing a control domain within which the user can control interactions. For example, a haptic boundary can be provided corresponding to scrollable or scalable portions of the application domain. The user can position a cursor near such a boundary, feeling its presence haptically (reducing the requirement for visual attention for control of scrolling of the display). The user can then apply force relative to the boundary, causing the interface to scroll the domain. The rate of scrolling can be related to the magnitude of applied force, providing the user with additional intuitive, non-visual control of scrolling.
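    The patent text above describes a proportional mapping from applied force to scroll rate. As a rough illustration (not the patent's actual implementation; the threshold, gain, and ceiling constants below are invented for the example), such a mapping might look like:

    ```python
    def scroll_rate(applied_force, threshold=0.5, gain=120.0, max_rate=600.0):
        """Map force pressed against a haptic boundary (N) to a scroll rate
        (pixels/s). Below the threshold the user merely feels the boundary;
        beyond it, the rate grows with the excess force up to a ceiling."""
        excess = applied_force - threshold
        if excess <= 0.0:
            return 0.0  # resting against the boundary: no scrolling yet
        return min(gain * excess, max_rate)
    ```

    A nonlinear (e.g., quadratic) mapping of the excess force would serve equally well; the essential point of the claim is only that the scroll rate is related to the magnitude of the applied force.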

  3. Quantitative Analysis of Biofuel Sustainability, Including Land Use Change

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Quantitative Analysis of Biofuel Sustainability, Including Land Use Change GHG Emissions. Plenary V: Biofuels and Sustainability: Acknowledging Challenges and Confronting Misconceptions. Jennifer B. Dunn, Energy Systems and Sustainability Analyst, Argonne National Laboratory. PDF

  4. Radiological Safety Analysis Computer Program

    Energy Science and Technology Software Center (OSTI)

    2001-08-28

    RSAC-6 is the latest version of the RSAC program. It calculates the consequences of a release of radionuclides to the atmosphere. Using a personal computer, a user can generate a fission product inventory; decay and in-grow the inventory during transport through processes, facilities, and the environment; model the downwind dispersion of the activity; and calculate doses to downwind individuals. Internal dose from the inhalation and ingestion pathways is calculated. External dose from the ground surface and plume gamma pathways is calculated. Updates to the program include the ability to evaluate a release to an enclosed room, resuspension of deposited activity, and evaluation of a release up to 1 meter from the release point. Enhanced tools are included for dry deposition, building wake, occupancy factors, respirable fraction, AMAD adjustment, an updated and enhanced radionuclide inventory, and inclusion of the dose-conversion factors from FGR 11 and 12.
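    The downwind dispersion step in codes of this kind is conventionally a Gaussian plume model. The sketch below implements the textbook ground-reflected Gaussian plume equation, not RSAC's actual source; the dispersion parameters sigma_y and sigma_z would come from stability-class correlations that are omitted here:

    ```python
    import math

    def plume_concentration(Q, u, sigma_y, sigma_z, y, z, H):
        """Textbook Gaussian plume with ground reflection: air concentration
        (per m^3 for release rate Q, e.g. Bq/s) at crosswind offset y (m) and
        height z (m), for effective release height H (m), wind speed u (m/s),
        and dispersion parameters evaluated at the downwind distance."""
        lateral = math.exp(-y**2 / (2 * sigma_y**2))
        vertical = (math.exp(-(z - H)**2 / (2 * sigma_z**2))
                    + math.exp(-(z + H)**2 / (2 * sigma_z**2)))  # image source
        return Q * lateral * vertical / (2 * math.pi * sigma_y * sigma_z * u)
    ```

    Multiplying the concentration by a breathing rate and an inhalation dose-conversion factor (e.g., from FGR 11) then yields the inhalation dose to a downwind individual.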

  5. Semiconductor Device Analysis on Personal Computers

    Energy Science and Technology Software Center (OSTI)

    1993-02-08

    PC-1D models the internal operation of bipolar semiconductor devices by solving for the concentrations and quasi-one-dimensional flow of electrons and holes resulting from either electrical or optical excitation. PC-1D uses the same detailed physical models incorporated in mainframe computer programs, yet runs efficiently on personal computers. PC-1D was originally developed with DOE funding to analyze solar cells. That continues to be its primary mode of usage, with registered copies in regular use at more than 100 locations worldwide. The program has been successfully applied to the analysis of silicon, gallium-arsenide, and indium-phosphide solar cells. The program is also suitable for modeling bipolar transistors and diodes, including heterojunction devices. Its easy-to-use graphical interface makes it useful as a teaching tool as well.
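    PC-1D couples Poisson's equation with the carrier transport equations; reproducing that solver is beyond a snippet, but the toy below shows the electrostatic ingredient: a finite-difference solve of the 1D Poisson equation for a prescribed charge profile. This is an illustrative sketch, not PC-1D's algorithm:

    ```python
    import numpy as np

    def poisson_1d(rho, dx, eps, phi_left=0.0, phi_right=0.0):
        """Solve d2(phi)/dx2 = -rho/eps on a uniform grid with Dirichlet
        boundary potentials, the electrostatic step of a 1D device solver."""
        n = len(rho)
        A = (np.diag(-2.0 * np.ones(n))
             + np.diag(np.ones(n - 1), 1)
             + np.diag(np.ones(n - 1), -1))
        b = -np.asarray(rho) * dx**2 / eps
        b[0] -= phi_left        # fold boundary values into the right-hand side
        b[-1] -= phi_right
        return np.linalg.solve(A, b)
    ```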

  6. Search for Earth-like planets includes LANL star analysis

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    their interiors. Consortium team members at Los Alamos include Joyce Ann Guzik, Paul Bradley, Arthur N. Cox, and Kim Simmons. They will help interpret the stellar oscillation...

  7. Impact analysis on a massively parallel computer

    SciTech Connect (OSTI)

    Zacharia, T.; Aramayo, G.A.

    1994-06-01

    Advanced mathematical techniques and computer simulation play a major role in evaluating and enhancing the design of beverage cans, industrial, and transportation containers for improved performance. Numerical models are used to evaluate the impact requirements of containers used by the Department of Energy (DOE) for transporting radioactive materials. Many of these models are highly compute-intensive. An analysis may require several hours of computational time on current supercomputers despite the simplicity of the models being studied. As computer simulations and materials databases grow in complexity, massively parallel computers have become important tools. Massively parallel computational research at the Oak Ridge National Laboratory (ORNL) and its application to the impact analysis of shipping containers is briefly described in this paper.

  8. Computer aided cogeneration feasibility analysis

    SciTech Connect (OSTI)

    Anaya, D.A.; Caltenco, E.J.L.; Robles, L.F.

    1996-12-31

    A successful cogeneration system design depends on several factors, and the optimal configuration can be found using steam and power simulation software. The key characteristics of one such software package are described, and its application to a cogeneration feasibility analysis for a process plant is shown. Finally, a case study is illustrated. 4 refs., 2 figs.

  9. Application of the Computer Program SASSI for Seismic SSI Analysis...

    Office of Environmental Management (EM)

    Application of the Computer Program SASSI for Seismic SSI Analysis of WTP Facilities...

  10. Quantitative Analysis of Biofuel Sustainability, Including Land Use Change GHG Emissions

    Broader source: Energy.gov [DOE]

    Plenary V: Biofuels and Sustainability: Acknowledging Challenges and Confronting MisconceptionsQuantitative Analysis of Biofuel Sustainability, Including Land Use Change GHG EmissionsJennifer B....

  11. Computation of Domain-Averaged Irradiance with a Simple Two-Stream Radiative Transfer Model Including Vertical Cloud Property Correlations

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    S. Kato, Center for Atmospheric Sciences, Hampton University, Hampton, Virginia. Introduction: Recent development of remote sensing instruments by the Atmospheric Radiation Measurement (ARM) Program provides information on the spatial and temporal variability of cloud structures. However, it is not clear what cloud properties are required to express complicated cloud...

  12. PArallel Reacting Multiphase FLOw Computational Fluid Dynamic Analysis

    Energy Science and Technology Software Center (OSTI)

    2002-06-01

    PARMFLO is a parallel multiphase reacting flow computational fluid dynamics (CFD) code. It can perform steady or unsteady simulations in three space dimensions. It is intended for use in engineering CFD analysis of industrial flow system components. Its parallel processing capabilities allow it to be applied to problems that use at least an order of magnitude more computational cells than the number that can be used on a typical single processor workstation (about 10^6 cells in parallel processing mode versus about 10^5 cells in serial processing mode). Alternately, by spreading the work of a CFD problem that could be run on a single workstation over a group of computers on a network, it can bring the runtime down by an order of magnitude or more (typically from many days to less than one day). The software was implemented using the industry standard Message-Passing Interface (MPI) and domain decomposition in one spatial direction. The phases of a flow problem may include an ideal gas mixture with an arbitrary number of chemical species, and dispersed droplet and particle phases. Regions of porous media may also be included within the domain. The porous media may be packed beds, foams, or monolith catalyst supports. With these features, the code is especially suited to analysis of mixing of reactants in the inlet chamber of catalytic reactors coupled to computation of product yields that result from the flow of the mixture through the catalyst-coated support structure.
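    The abstract states that PARMFLO parallelizes with MPI and domain decomposition in one spatial direction. The sketch below shows that communication pattern with mpi4py (illustrative only, not PARMFLO's source): each rank owns one slab of cells and swaps one-cell halos with its neighbors before each update:

    ```python
    # Run with, e.g.: mpiexec -n 4 python halo.py
    import numpy as np
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()

    n_local = 100                 # cells owned by this rank (a slab in x)
    u = np.zeros(n_local + 2)     # +1 ghost cell at each end
    u[1:-1] = rank                # stand-in for field data

    left = rank - 1 if rank > 0 else MPI.PROC_NULL
    right = rank + 1 if rank < size - 1 else MPI.PROC_NULL

    # Exchange halos: send my edge cells, receive neighbors' edges into ghosts.
    comm.Sendrecv(sendbuf=u[1:2], dest=left, recvbuf=u[-1:], source=right)
    comm.Sendrecv(sendbuf=u[-2:-1], dest=right, recvbuf=u[0:1], source=left)
    ```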

  13. A Research Roadmap for Computation-Based Human Reliability Analysis

    SciTech Connect (OSTI)

    Boring, Ronald; Mandelli, Diego; Joe, Jeffrey; Smith, Curtis; Groth, Katrina

    2015-08-01

    The United States (U.S.) Department of Energy (DOE) is sponsoring research through the Light Water Reactor Sustainability (LWRS) program to extend the life of the currently operating fleet of commercial nuclear power plants. The Risk Informed Safety Margin Characterization (RISMC) research pathway within LWRS looks at ways to maintain and improve the safety margins of these plants. The RISMC pathway includes significant developments in the area of thermalhydraulics code modeling and the development of tools to facilitate dynamic probabilistic risk assessment (PRA). PRA is primarily concerned with the risk of hardware systems at the plant; yet, hardware reliability is often secondary in overall risk significance to human errors that can trigger or compound undesirable events at the plant. This report highlights ongoing efforts to develop a computation-based approach to human reliability analysis (HRA). This computation-based approach differs from existing static and dynamic HRA approaches in that it: (i) interfaces with a dynamic computation engine that includes a full scope plant model, and (ii) interfaces with a PRA software toolset. The computation-based HRA approach presented in this report is called the Human Unimodels for Nuclear Technology to Enhance Reliability (HUNTER) and incorporates in a hybrid fashion elements of existing HRA methods to interface with new computational tools developed under the RISMC pathway. The goal of this research effort is to model human performance more accurately than existing approaches, thereby minimizing modeling uncertainty found in current plant risk models.

  14. Distributed Design and Analysis of Computer Experiments

    Energy Science and Technology Software Center (OSTI)

    2002-11-11

    DDACE is a C++ object-oriented software library for the design and analysis of computer experiments. DDACE can be used to generate samples from a variety of sampling techniques. These samples may be used as input to an application code. DDACE also contains statistical tools such as response surface models and correlation coefficients to analyze input/output relationships between variables in an application code. DDACE can generate input values for uncertain variables within a user's application. For example, a user might like to vary a temperature variable as well as some material variables in a series of simulations. Through the series of simulations the user might be looking for optimal settings of parameters based on some user criteria, or the user may be interested in the sensitivity to input variability shown by an output variable. In either case, the user may provide information about the suspected ranges and distributions of a set of input variables, along with a sampling scheme, and DDACE will generate input points based on these specifications. The input values generated by DDACE and the one or more outputs computed through the user's application code can be analyzed with a variety of statistical methods. This can lead to a wealth of information about the relationships between the variables in the problem. While statistical and mathematical packages may be employed to carry out the analysis on the input/output relationships, DDACE also contains some tools for analyzing the simulation data. DDACE incorporates a software package called MARS (Multivariate Adaptive Regression Splines), developed by Jerome Friedman. MARS is used for generating a spline surface fit of the data. With MARS, a model simplification may be calculated using the input and corresponding output values for the user's application problem. The MARS grid data may be used for generating 3-dimensional response surface plots of the simulation data. DDACE also contains an implementation of an algorithm by Michael McKay to compute variable correlations. DDACE can also be used to carry out a main-effects analysis to calculate the sensitivity of an output variable to each of the varied inputs taken individually.
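    DDACE itself is a C++ library, but the workflow the abstract describes (sample uncertain inputs, run the application, then screen input/output relationships) is easy to picture. The sketch below substitutes scipy's Latin hypercube sampler and a dummy application for DDACE's generators; all names and ranges are illustrative:

    ```python
    import numpy as np
    from scipy.stats import qmc

    # Latin hypercube design for two uncertain inputs over user-given ranges.
    sampler = qmc.LatinHypercube(d=2, seed=0)
    X = qmc.scale(sampler.random(n=50),
                  l_bounds=[300.0, 10.0], u_bounds=[400.0, 50.0])

    def application_code(temperature, conductivity):
        """Stand-in for the user's simulation code."""
        return 0.03 * temperature + 2.0 * np.sqrt(conductivity)

    y = np.array([application_code(t, k) for t, k in X])

    # Simple main-effects screen: correlate each input with the output.
    for j, name in enumerate(["temperature", "conductivity"]):
        print(name, "r = %+.2f" % np.corrcoef(X[:, j], y)[0, 1])
    ```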

  15. Computational Aerodynamic Analysis of Offshore Upwind and Downwind Turbines

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Zhao, Qiuying; Sheng, Chunhua; Afjeh, Abdollah

    2014-01-01

    Aerodynamic interactions of the model NREL 5 MW offshore horizontal axis wind turbines (HAWT) are investigated using a high-fidelity computational fluid dynamics (CFD) analysis. Four wind turbine configurations are considered: three-bladed upwind and downwind, and two-bladed upwind and downwind, operating at two different rotor speeds of 12.1 and 16 RPM. In the present study, both steady and unsteady aerodynamic loads, such as the rotor torque, blade hub bending moment, and tower base bending moment, are evaluated in detail to provide an overall assessment of the different wind turbine configurations. Aerodynamic interactions between the rotor and tower are analyzed, including the rotor wake development downstream. The computational analysis provides insight into the aerodynamic performance of the upwind and downwind, two- and three-bladed horizontal axis wind turbines.

  16. The Design and Analysis of Computer Experiments | Open Energy...

    Open Energy Info (EERE)

    Book: The Design and Analysis of Computer Experiments. Authors: Thomas J. Santner, Brian J. Williams, and William I. Notz. Published: Springer-Verlag, 2003. DOI Not...

  17. Scalable Computer Performance and Analysis (Hierarchical INTegration)

    Energy Science and Technology Software Center (OSTI)

    1999-09-02

    HINT is a program for measuring the performance of a wide variety of scalable computer systems. It is capable of demonstrating the benefits of using more memory or processing power, and of improving communications within the system. HINT can be used to measure an existing system, while the associated program ANALYTIC HINT can be used to explain the measurements or as a design tool for proposed systems.
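    The "hierarchical integration" in HINT's name refers to refining rigorous upper and lower bounds on an integral and scoring quality improvements per unit time. The toy below mimics that idea for a monotone integrand; it paraphrases the benchmark's concept rather than reproducing its kernel:

    ```python
    def f(x):
        return (1.0 - x) / (1.0 + x)  # decreasing on [0, 1]; integral = 2*ln(2) - 1

    def refine_bounds(splits=12):
        """Bisect intervals repeatedly; for a decreasing f, f(b) and f(a)
        bound the mean value on [a, b], so the sums bracket the integral."""
        intervals = [(0.0, 1.0)]
        for _ in range(splits):
            a, b = intervals.pop(0)
            m = 0.5 * (a + b)
            intervals += [(a, m), (m, b)]
            lower = sum(f(b) * (b - a) for a, b in intervals)
            upper = sum(f(a) * (b - a) for a, b in intervals)
            yield upper - lower       # the benchmark's "quality" grows as 1/gap

    for gap in refine_bounds():
        print(f"bound gap: {gap:.5f}")
    ```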

  18. Wind energy conversion system analysis model (WECSAM) computer program documentation

    SciTech Connect (OSTI)

    Downey, W.T.; Hendrick, P.L.

    1982-07-01

    Described is a computer-based wind energy conversion system analysis model (WECSAM) developed to predict the technical and economic performance of wind energy conversion systems (WECS). The model is written in CDC FORTRAN V. The version described accesses a data base containing wind resource data, application loads, WECS performance characteristics, utility rates, state taxes, and state subsidies for a six state region (Minnesota, Michigan, Wisconsin, Illinois, Ohio, and Indiana). The model is designed for analysis at the county level. The computer model includes a technical performance module and an economic evaluation module. The modules can be run separately or together. The model can be run for any single user-selected county within the region or looped automatically through all counties within the region. In addition, the model has a restart capability that allows the user to modify any data-base value written to a scratch file prior to the technical or economic evaluation. Thus, any user-supplied data for WECS performance, application load, utility rates, or wind resource may be entered into the scratch file to override the default data-base value. After the model and the inputs required from the user and derived from the data base are described, the model output and the various output options that can be exercised by the user are detailed. The general operation is set forth and suggestions are made for efficient modes of operation. Sample listings of various input, output, and data-base files are appended. (LEW)

  19. Comparative genome analysis of Pseudomonas genomes including Populus-associated isolates

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Jun, Se Ran; Wassenaar, Trudy; Nookaew, Intawat; Hauser, Loren John; Wanchai, Visanu; Land, Miriam L; Timm, Collin M; Lu, Tse-Yuan S; Schadt, Christopher Warren; Doktycz, Mitchel John; et al

    2016-01-01

    The Pseudomonas genus contains a metabolically versatile group of organisms that are known to occupy numerous ecological niches, including the rhizosphere and endosphere of many plants, influencing phylogenetic diversity and heterogeneity. In this study, comparative genome analysis was performed on over one thousand Pseudomonas genomes, including 21 Pseudomonas strains isolated from the roots of native Populus deltoides. Based on average amino acid identity, genomic clusters were identified within the Pseudomonas genus, in agreement with clades by NCBI and cliques by IMG. The P. fluorescens group was organized into 20 distinct genomic clusters, representing enormous diversity and heterogeneity. The species P. aeruginosa showed a clear distinction in genomic relatedness compared to other Pseudomonas species groups based on the pan and core genome analysis. Nineteen of our 21 Populus-associated isolates formed three distinct subgroups within the P. fluorescens major group, supported by pathway profile analysis, while two isolates were more closely related to P. chlororaphis and P. putida. Genes specific to each Populus-associated subgroup were identified: genes specific to subgroup 1 include several sensory systems, such as proteins acting in two-component signal transduction, a TonB-dependent receptor, and a phosphorelay sensor; genes specific to subgroup 2 are unique hypothetical genes; and genes specific to subgroup 3 confer a different hydrolase activity. IMPORTANCE: The comparative genome analysis of the genus Pseudomonas, including Populus-associated isolates, yielded novel insights into the high diversity of Pseudomonas. Consistent and robust genomic clusters with phylogenetic homogeneity were identified, resolving species clades that are not clearly defined by 16S rRNA gene sequence analysis alone. The genomic clusters may reflect distinct ecological niches to which the organisms have adapted, but this needs to be experimentally characterized with ecologically relevant phenotype properties. This study justifies the need to sequence multiple isolates, especially from the P. fluorescens group, in order to study functional capabilities from a pangenomic perspective. This information will prove useful when choosing Pseudomonas strains to promote growth and increase disease resistance in plants.

  20. A joint analysis of Planck and BICEP2 B modes including dust polarization uncertainty

    SciTech Connect (OSTI)

    Mortonson, Michael J.; Seljak, Uroš

    2014-10-01

    We analyze BICEP2 and Planck data using a model that includes CMB lensing, gravity waves, and polarized dust. Recently published Planck dust polarization maps have highlighted the difficulty of estimating the amount of dust polarization in low intensity regions, suggesting that the polarization fractions have considerable uncertainties and may be significantly higher than previous predictions. In this paper, we start by assuming nothing about the dust polarization except for the power spectrum shape, which we take to be C_l^{BB,dust} ∝ l^{-2.42}. The resulting joint BICEP2+Planck analysis favors solutions without gravity waves, and the upper limit on the tensor-to-scalar ratio is r < 0.11, a slight improvement relative to the Planck analysis alone, which gives r < 0.13 (95% c.l.). The estimated amplitude of the dust polarization power spectrum agrees with expectations for this field based on both HI column density and Planck polarization measurements at 353 GHz in the BICEP2 field. Including the latter constraint on the dust spectrum amplitude in our analysis improves the limit further to r < 0.09, placing strong constraints on theories of inflation (e.g., models with r > 0.14 are excluded with 99.5% confidence). We address the cross-correlation analysis of BICEP2 at 150 GHz with BICEP1 at 100 GHz as a test of foreground contamination. We find that the null hypothesis of dust and lensing with r = 0 gives Δχ² < 2 relative to the hypothesis of no dust, so the frequency analysis does not strongly favor either model over the other. We also discuss how more accurate dust polarization maps may improve our constraints. If the dust polarization is measured perfectly, the limit can reach r < 0.05 (or the corresponding detection significance if the observed dust signal plus the expected lensing signal is below the BICEP2 observations), but this degrades quickly to almost no improvement if the dust calibration error is 20% or larger, or if the dust maps are not processed through the BICEP2 pipeline, inducing sampling variance noise.

  1. Application of the Computer Program SASSI for Seismic SSI Analysis...

    Office of Environmental Management (EM)

    of the Computer Program SASSI for Seismic SSI Analysis of WTP Facilities Farhang Ostadan (BNI) & Raman Venkata (DOE-WTP-WED) Presented by Lisa Anderson (BNI) US DOE NPH Workshop...

  2. Process for computing geometric perturbations for probabilistic analysis

    DOE Patents [OSTI]

    Fitch, Simeon H. K. (Charlottesville, VA); Riha, David S. (San Antonio, TX); Thacker, Ben H. (San Antonio, TX)

    2012-04-10

    A method for computing geometric perturbations for probabilistic analysis. The probabilistic analysis is based on finite element modeling, in which uncertainties in the modeled system are represented by changes in the nominal geometry of the model, referred to as "perturbations". These changes are accomplished using displacement vectors, which are computed for each node of a region of interest and are based on mean-value coordinate calculations.
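    A minimal sketch of the perturbation step described above (the displacement vectors would come from the patent's mean-value coordinate calculation, which is not reproduced here; the geometry and standard deviation are invented for illustration):

    ```python
    import numpy as np

    def perturb_nodes(nodes, displacement, sigma, rng):
        """Return perturbed nodal coordinates: the nominal geometry plus the
        per-node displacement field scaled by one random realization of the
        uncertain geometric parameter (standard deviation sigma)."""
        scale = rng.normal(0.0, sigma)      # one draw per probabilistic run
        return nodes + scale * displacement

    rng = np.random.default_rng(1)
    nodes = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0]])  # region of interest
    disp = np.array([[0.0, 0.0], [0.0, 0.0], [0.0, 1.0]])   # unit perturbation mode
    print(perturb_nodes(nodes, disp, sigma=0.05, rng=rng))
    ```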

  3. Multiscale analysis of nonlinear systems using computational homology

    SciTech Connect (OSTI)

    Konstantin Mischaikow; Michael Schatz; William Kalies; Thomas Wanner

    2010-05-24

    This is a collaborative project between the principal investigators. However, as is to be expected, different PIs have greater focus on different aspects of the project. This report lists these major directions of research which were pursued during the funding period: (1) Computational Homology in Fluids - For the computational homology effort in thermal convection, the focus of the work during the first two years of the funding period included: (1) A clear demonstration that homology can sensitively detect the presence or absence of an important flow symmetry, (2) An investigation of homology as a probe for flow dynamics, and (3) The construction of a new convection apparatus for probing the effects of large-aspect-ratio. (2) Computational Homology in Cardiac Dynamics - We have initiated an effort to test the use of homology in characterizing data from both laboratory experiments and numerical simulations of arrhythmia in the heart. Recently, the use of high speed, high sensitivity digital imaging in conjunction with voltage sensitive fluorescent dyes has enabled researchers to visualize electrical activity on the surface of cardiac tissue, both in vitro and in vivo. (3) Magnetohydrodynamics - A new research direction is to use computational homology to analyze results of large scale simulations of 2D turbulence in the presence of magnetic fields. Such simulations are relevant to the dynamics of black hole accretion disks. The complex flow patterns from simulations exhibit strong qualitative changes as a function of magnetic field strength. Efforts to characterize the pattern changes using Fourier methods and wavelet analysis have been unsuccessful. (4) Granular Flow - two experts in the area of granular media are studying 2D model experiments of earthquake dynamics where the stress fields can be measured; these stress fields from complex patterns of 'force chains' that may be amenable to analysis using computational homology. (5) Microstructure Characterization - We extended our previous work on studying the time evolution of patterns associated with phase separation in conserved concentration fields. (6) Probabilistic Homology Validation - work on microstructure characterization is based on numerically studying the homology of certain sublevel sets of a function, whose evolution is described by deterministic or stochastic evolution equations. (7) Computational Homology and Dynamics - Topological methods can be used to rigorously describe the dynamics of nonlinear systems. We are approaching this problem from several perspectives and through a variety of systems. (8) Stress Networks in Polycrystals - we have characterized stress networks in polycrystals. This part of the project is aimed at developing homological metrics which can aid in distinguishing not only microstructures, but also derived mechanical response fields. (9) Microstructure-Controlled Drug Release - This part of the project is concerned with the development of topological metrics in the context of controlled drug delivery systems, such as drug-eluting stents. We are particularly interested in developing metrics which can be used to link the processing stage to the resulting microstructure, and ultimately to the achieved system response in terms of drug release profiles. (10) Microstructure of Fuel Cells - we have been using our computational homology software to analyze the topological structure of the void, metal and ceramic components of a Solid Oxide Fuel Cell.

  4. Multiscale analysis of nonlinear systems using computational homology

    SciTech Connect (OSTI)

    Konstantin Mischaikow, Rutgers University / Georgia Institute of Technology; Michael Schatz, Georgia Institute of Technology; William Kalies, Florida Atlantic University; Thomas Wanner, George Mason University

    2010-05-19


  5. Computer-aided visualization and analysis system for sequence evaluation

    DOE Patents [OSTI]

    Chee, M.S.

    1998-08-18

    A computer system for analyzing nucleic acid sequences is provided. The computer system is used to perform multiple methods for determining unknown bases by analyzing the fluorescence intensities of hybridized nucleic acid probes. The results of individual experiments are improved by processing nucleic acid sequences together. Comparative analysis of multiple experiments is also provided by displaying reference sequences in one area and sample sequences in another area on a display device. 27 figs.
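    A toy version of the base-calling idea these patents describe: at each position, compare the fluorescence intensities of the probes that differ at the interrogation base and call the brightest one. The patented system layers statistical processing across multiple experiments on top of this; the confidence measure below is our own simplification:

    ```python
    def call_base(intensity):
        """intensity: dict mapping each candidate base to the fluorescence of
        its probe. Returns (base, confidence), comparing winner to runner-up."""
        ranked = sorted(intensity.items(), key=lambda kv: kv[1], reverse=True)
        (best, i1), (_, i2) = ranked[0], ranked[1]
        confidence = i1 / (i1 + i2) if (i1 + i2) > 0 else 0.0
        return best, confidence

    print(call_base({"A": 120.0, "C": 35.0, "G": 18.0, "T": 22.0}))  # ('A', 0.77)
    ```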

  6. Computer-aided visualization and analysis system for sequence evaluation

    DOE Patents [OSTI]

    Chee, Mark S.

    2003-08-19

    A computer system for analyzing nucleic acid sequences is provided. The computer system is used to perform multiple methods for determining unknown bases by analyzing the fluorescence intensities of hybridized nucleic acid probes. The results of individual experiments may be improved by processing nucleic acid sequences together. Comparative analysis of multiple experiments is also provided by displaying reference sequences in one area and sample sequences in another area on a display device.

  7. Computer-aided visualization and analysis system for sequence evaluation

    DOE Patents [OSTI]

    Chee, Mark S.; Wang, Chunwei; Jevons, Luis C.; Bernhart, Derek H.; Lipshutz, Robert J.

    2004-05-11

    A computer system for analyzing nucleic acid sequences is provided. The computer system is used to perform multiple methods for determining unknown bases by analyzing the fluorescence intensities of hybridized nucleic acid probes. The results of individual experiments are improved by processing nucleic acid sequences together. Comparative analysis of multiple experiments is also provided by displaying reference sequences in one area and sample sequences in another area on a display device.

  8. Computer-aided visualization and analysis system for sequence evaluation

    DOE Patents [OSTI]

    Chee, Mark S. (Palo Alto, CA)

    2001-06-05

    A computer system (1) for analyzing nucleic acid sequences is provided. The computer system is used to perform multiple methods for determining unknown bases by analyzing the fluorescence intensities of hybridized nucleic acid probes. The results of individual experiments may be improved by processing nucleic acid sequences together. Comparative analysis of multiple experiments is also provided by displaying reference sequences in one area (814) and sample sequences in another area (816) on a display device (3).

  9. Computer-aided visualization and analysis system for sequence evaluation

    DOE Patents [OSTI]

    Chee, Mark S. (3199 Waverly St., Palo Alto, CA 94306)

    1998-08-18

    A computer system for analyzing nucleic acid sequences is provided. The computer system is used to perform multiple methods for determining unknown bases by analyzing the fluorescence intensities of hybridized nucleic acid probes. The results of individual experiments are improved by processing nucleic acid sequences together. Comparative analysis of multiple experiments is also provided by displaying reference sequences in one area and sample sequences in another area on a display device.

  10. Computer-aided visualization and analysis system for sequence evaluation

    DOE Patents [OSTI]

    Chee, Mark S. (Palo Alto, CA)

    1999-10-26

    A computer system (1) for analyzing nucleic acid sequences is provided. The computer system is used to perform multiple methods for determining unknown bases by analyzing the fluorescence intensities of hybridized nucleic acid probes. The results of individual experiments may be improved by processing nucleic acid sequences together. Comparative analysis of multiple experiments is also provided by displaying reference sequences in one area (814) and sample sequences in another area (816) on a display device (3).

  11. First Experiences with LHC Grid Computing and Distributed Analysis

    SciTech Connect (OSTI)

    Fisk, Ian

    2010-12-01

    This presentation reviewed the experiences of the LHC experiments using grid computing, with a focus on distributed analysis. After many years of development, preparation, exercises, and validation, the LHC (Large Hadron Collider) experiments are in operation. The computing infrastructure has been heavily utilized in the first 6 months of data collection. The general experience of exploiting the grid infrastructure for organized processing and preparation is described, as well as the successes in employing the infrastructure for distributed analysis. Finally, the expected evolution and future plans are outlined.

  12. Large-scale computations in analysis of structures

    SciTech Connect (OSTI)

    McCallen, D.B.; Goudreau, G.L.

    1993-09-01

    Computer hardware and numerical analysis algorithms have progressed to a point where many engineering organizations and universities can perform nonlinear analyses on a routine basis. Though much remains to be done in terms of advancement of nonlinear analysis techniques and characterization of nonlinear material constitutive behavior, the technology exists today to perform useful nonlinear analysis for many structural systems. In the current paper, a survey of nonlinear analysis technologies developed and employed for many years on programmatic defense work at the Lawrence Livermore National Laboratory is provided, and ongoing nonlinear numerical simulation projects relevant to the civil engineering field are described.

  13. Present and Future Computational Requirements General Plasma Physics Center for Integrated Computation and Analysis of Reconnection and Turbulence (CICART)

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Present and Future Computational Requirements, General Plasma Physics: Center for Integrated Computation and Analysis of Reconnection and Turbulence (CICART). Kai Germaschewski, Homa Karimabadi, Amitava Bhattacharjee, Fatima Ebrahimi, Will Fox, Liwei Lin. CICART, Space Science Center / Dept. of Physics, University of New Hampshire, March 18, 2013.

  14. Low-frequency computational electromagnetics for antenna analysis

    SciTech Connect (OSTI)

    Miller, E.K.; Burke, G.J.

    1991-01-01

    An overview of low-frequency computational methods for modeling the electromagnetic characteristics of antennas is presented. The article provides a brief analytical background and summarizes the essential ingredients of the method of moments for numerically solving low-frequency antenna problems. Some extensions to the basic models of perfectly conducting objects in free space are also summarized, followed by a consideration of some of the computational issues that affect model accuracy, efficiency, and utility. A variety of representative computations are then presented to illustrate various modeling aspects and capabilities that are currently available. A fairly extensive bibliography is included to suggest further reference material to the reader. 90 refs., 27 figs.
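    In the spirit of the survey, here is a compact method-of-moments example: pulse basis functions and point matching applied to a thin straight wire held at 1 V, solving for its charge distribution. This is the classic electrostatic demonstration of the technique, not a reproduction of the article's antenna formulations; the dimensions are arbitrary:

    ```python
    import numpy as np

    eps0 = 8.854e-12
    L, a, N = 1.0, 1e-3, 40          # wire length (m), wire radius (m), segments
    dz = L / N
    z = (np.arange(N) + 0.5) * dz    # match points at segment centers

    # P[m, n]: potential at match point m per unit line-charge density on segment n.
    P = np.empty((N, N))
    for m in range(N):
        for n in range(N):
            if m == n:   # exact self term for a uniform finite line charge
                P[m, n] = 2.0 * np.arcsinh(dz / (2.0 * a)) / (4.0 * np.pi * eps0)
            else:        # distant segment treated as a point charge
                P[m, n] = dz / (4.0 * np.pi * eps0 * abs(z[m] - z[n]))

    sigma = np.linalg.solve(P, np.ones(N))   # enforce 1 V at every match point
    print("total charge (C):", sigma.sum() * dz)
    print("charge density peaks at the ends:", sigma[0] > sigma[N // 2])
    ```

    The same discretize-test-solve pattern, with a frequency-domain kernel in place of the electrostatic 1/r one, is the core of the antenna codes the article surveys.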

  15. RDI's Wisdom Way Solar Village Final Report: Includes Utility Bill Analysis of Occupied Homes

    SciTech Connect (OSTI)

    Robb Aldrich, Steven Winter Associates

    2011-07-01

    In 2010, Rural Development, Inc. (RDI) completed construction of Wisdom Way Solar Village (WWSV), a community of ten duplexes (20 homes) in Greenfield, MA. RDI was committed to very low energy use from the beginning of the design process through construction. Key features include: 1. A careful site plan so that all homes have solar access (for active and passive); 2. Cellulose insulation providing R-40 walls, R-50 ceilings, and R-40 floors; 3. Triple-pane windows; 4. Airtight construction (~0.1 CFM50/ft² enclosure area); 5. Solar water heating systems with tankless gas auxiliary heaters; 6. PV systems (2.8 or 3.4 kW STC); 7. 2-4 bedrooms, 1,100-1,700 ft². The design heating loads in the homes were so small that each home is heated with a single, sealed-combustion, natural gas room heater. The cost savings from the simple HVAC systems made possible the substantial investments in the homes' envelopes. The Consortium for Advanced Residential Buildings (CARB) monitored temperatures and comfort in several homes during the winter of 2009-2010. In the spring of 2011, CARB obtained utility bill information from 13 occupied homes. Because of efficient lights, appliances, and conscientious home occupants, the energy generated by the solar electric systems exceeded the electric energy used in most homes. Most homes, in fact, had a net credit from the electric utility over the course of a year. On the natural gas side, total gas costs averaged $377 per year (for heating, water heating, cooking, and clothes drying). Total energy costs were even lower: $337 per year, including all utility fees. The highest annual energy bill for any home evaluated was $458; the lowest was $171.

  16. Analysis of energy conversion systems, including material and global warming aspects

    SciTech Connect (OSTI)

    Zhang, M.; Reistad, G.M.

    1998-12-31

    This paper addresses a method for the overall evaluation of energy conversion systems, including material and global environmental aspects. To limit the scope of the work reported here, the global environmental aspects have been limited to global warming aspects. A method is presented that uses exergy as an overall evaluation measure of energy conversion systems for their lifetime. The method takes the direct exergy consumption (fuel consumption) of the conventional exergy analyses and adds (1) the exergy of the energy conversion system equipment materials, (2) the fuel production exergy and material exergy, and (3) the exergy needed to recover the total global warming gases (equivalent) of the energy conversion system. This total, termed Total Equivalent Resource Exergy (TERE), provides a measure of the effectiveness of the energy conversion system in its use of natural resources. The results presented here for several example systems illustrate how the method can be used to screen candidate energy conversion systems and perhaps, as data become more available, to optimize systems. It appears that this concept may be particularly useful for comparing systems that have quite different direct energy and/or environmental impacts. This work should be viewed in the context of being primarily a concept paper in that the lack of detailed data available to the authors at this time limits the accuracy of the overall results. The authors are working on refinements to data used in the evaluation.
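    In equation form, the proposed figure of merit is simply the sum of the four contributions the abstract lists (the variable names below are ours, not the paper's):

    ```python
    def total_equivalent_resource_exergy(direct_fuel_exergy,
                                         equipment_material_exergy,
                                         fuel_production_and_material_exergy,
                                         gw_gas_recovery_exergy):
        """TERE over the system lifetime: a lower value indicates a more
        effective use of natural resources for the same conversion service."""
        return (direct_fuel_exergy + equipment_material_exergy
                + fuel_production_and_material_exergy + gw_gas_recovery_exergy)
    ```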

  17. Surface and grain boundary scattering in nanometric Cu thin films: A quantitative analysis including twin boundaries

    SciTech Connect (OSTI)

    Barmak, Katayun [Department of Applied Physics and Applied Mathematics, Columbia University, New York, New York 10027 and Department of Materials Science and Engineering and Materials Research Science and Engineering Center, Carnegie Mellon University, 5000 Forbes Avenue, Pittsburgh, Pennsylvania 15213 (United States); Darbal, Amith [Department of Materials Science and Engineering and Materials Research Science and Engineering Center, Carnegie Mellon University, 5000 Forbes Avenue, Pittsburgh, Pennsylvania 15213 (United States); Ganesh, Kameswaran J.; Ferreira, Paulo J. [Materials Science and Engineering, The University of Texas at Austin, 1 University Station, Austin, Texas 78712 (United States); Rickman, Jeffrey M. [Department of Materials Science and Engineering and Department of Physics, Lehigh University, Bethlehem, Pennsylvania 18015 (United States); Sun, Tik; Yao, Bo; Warren, Andrew P.; Coffey, Kevin R. [Department of Materials Science and Engineering, University of Central Florida, 4000 Central Florida Boulevard, Orlando, Florida 32816 (United States)

    2014-11-01

    The relative contributions of various defects to the measured resistivity in nanocrystalline Cu were investigated, including a quantitative account of twin-boundary scattering. It has been difficult to quantitatively assess the impact twin boundary scattering has on the classical size effect of electrical resistivity, due to limitations in characterizing twin boundaries in nanocrystalline Cu. In this study, crystal orientation maps of nanocrystalline Cu films were obtained via precession-assisted electron diffraction in the transmission electron microscope. These orientation images were used to characterize grain boundaries and to measure the average grain size of a microstructure, with and without considering twin boundaries. The results of these studies indicate that the contribution from grain-boundary scattering is the dominant factor (as compared to surface scattering) leading to enhanced resistivity. The resistivity data can be well-described by the combined Fuchs-Sondheimer surface scattering model and Mayadas-Shatzkes grain-boundary scattering model using Matthiessen's rule with a surface specularity coefficient of p = 0.48 and a grain-boundary reflection coefficient of R = 0.26.
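    Both named models have closed forms, so the abstract's combination is easy to sketch. Below, a thick-film Fuchs-Sondheimer surface term and the standard Mayadas-Shatzkes grain-boundary function are added via Matthiessen's rule; p and R echo the values quoted above, while the film thickness, grain size, and electron mean free path are illustrative assumptions:

    ```python
    import numpy as np

    def fs_excess(rho0, mfp, t, p):
        """Fuchs-Sondheimer surface scattering, thick-film limit (t >> mfp):
        excess resistivity ~ rho0 * (3/8) * (mfp / t) * (1 - p)."""
        return rho0 * 0.375 * (mfp / t) * (1.0 - p)

    def ms_resistivity(rho0, mfp, d, R):
        """Mayadas-Shatzkes grain-boundary resistivity for grain size d."""
        alpha = (mfp / d) * R / (1.0 - R)
        g = 3.0 * (1.0 / 3.0 - alpha / 2.0 + alpha**2
                   - alpha**3 * np.log(1.0 + 1.0 / alpha))
        return rho0 / g

    rho0, mfp = 17.0e-9, 39e-9   # bulk Cu resistivity (ohm*m) and mean free path
    t, d = 50e-9, 40e-9          # film thickness and grain size (assumed)
    rho = ms_resistivity(rho0, mfp, d, R=0.26) + fs_excess(rho0, mfp, t, p=0.48)
    print(f"modeled film resistivity: {rho * 1e9:.1f} nOhm*m (bulk: {rho0 * 1e9:.1f})")
    ```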

  18. A system analysis computer model for the High Flux Isotope Reactor (HFIRSYS Version 1)

    SciTech Connect (OSTI)

    Sozer, M.C.

    1992-04-01

    A system transient analysis computer model (HFIRSYS) has been developed for analysis of small break loss of coolant accidents (LOCA) and operational transients. The computer model is based on the Advanced Continuous Simulation Language (ACSL), which produces the FORTRAN code automatically and provides integration routines such as Gear's stiff algorithm, along with numerous practical tools for generating eigenvalues, producing debug output, graphics capabilities, etc. The HFIRSYS computer code is structured in the form of the Modular Modeling System (MMS) code. Component modules from MMS and in-house developed modules were both used to configure HFIRSYS. A description of the High Flux Isotope Reactor, theoretical bases for the modeled components of the system, and the verification and validation efforts are reported. The computer model performs satisfactorily, including in cases where the effect of structural elasticity on system pressure is significant; however, its capabilities are limited to single phase flow. Because of the modular structure, new component models from the Modular Modeling System can easily be added to HFIRSYS to analyze their effects on the system's behavior. The computer model is a versatile tool for studying various system transients. The intent of this report is not to be a user's manual, but to provide the theoretical bases and basic information about the computer model and the reactor.

  19. Initial Business Case Analysis of Two Integrated Heat Pump HVAC Systems for Near-Zero-Energy Homes - Update to Include Evaluation of Impact of Including a Humidifier Option

    SciTech Connect (OSTI)

    Baxter, Van D

    2007-02-01

    The long range strategic goal of the Department of Energy's Building Technologies (DOE/BT) Program is to create, by 2020, technologies and design approaches that enable the construction of net-zero energy homes at low incremental cost (DOE/BT 2005). A net zero energy home (NZEH) is a residential building with greatly reduced needs for energy through efficiency gains, with the balance of energy needs supplied by renewable technologies. While initially focused on new construction, these technologies and design approaches are intended to apply to buildings constructed before 2020 as well, resulting in substantial reductions in energy use for all building types and ages. DOE/BT's Emerging Technologies (ET) team is working to support this strategic goal by identifying and developing advanced heating, ventilating, air-conditioning, and water heating (HVAC/WH) technology options applicable to NZEHs. In FY05, ORNL conducted an initial Stage 1 (Applied Research) scoping assessment of HVAC/WH system options for future NZEHs to help DOE/BT identify and prioritize alternative approaches for further development. Eleven system concepts with central air distribution ducting and nine multi-zone systems were selected, and their annual and peak demand performance estimated for five locations: Atlanta (mixed-humid), Houston (hot-humid), Phoenix (hot-dry), San Francisco (marine), and Chicago (cold). Performance was estimated by simulating the systems using the TRNSYS simulation engine (Solar Energy Laboratory et al. 2006) in two 1800-ft² houses--a Building America (BA) benchmark house and a prototype NZEH taken from BEopt results at the take-off (or crossover) point (i.e., a house incorporating those design features such that further progress toward ZEH is through the addition of photovoltaic power sources, as determined by current BEopt analyses conducted by NREL). Results were summarized in a project report, HVAC Equipment Design Options for Near-Zero-Energy Homes--A Stage 2 Scoping Assessment, ORNL/TM-2005/194 (Baxter 2005). The 2005 study report describes the HVAC options considered, the ranking criteria used, and the system rankings by priority. In 2006, the two top-ranked options from the 2005 study, air-source and ground-source versions of a centrally ducted integrated heat pump (IHP) system, were subjected to an initial business case study. The IHPs were subjected to a more rigorous hourly-based assessment of their performance potential compared to a baseline suite of equipment of legally minimum efficiency that provided the same heating, cooling, water heating, demand dehumidification, and ventilation services as the IHPs. Results were summarized in a project report, Initial Business Case Analysis of Two Integrated Heat Pump HVAC Systems for Near-Zero-Energy Homes, ORNL/TM-2006/130 (Baxter 2006a). The present report updates that document by summarizing an analysis of the impact of adding a humidifier to the HVAC system to maintain minimum levels of space relative humidity (RH) in winter. The space RH in winter has a direct impact on occupant comfort and on control of dust mites, many types of disease bacteria, and 'dry air' electric shocks. Chapter 8 in ASHRAE's 2005 Handbook of Fundamentals suggests a 30% lower limit on RH for indoor temperatures in the range of ~68-69°F based on comfort (ASHRAE 2005). Table 3 in chapter 9 of the same reference suggests a 30-55% RH range for winter, as established by a Canadian study of exposure limits for residential indoor environments (EHD 1987). Harriman et al. (2001) note that at RH levels of 35% or higher, electrostatic shocks are minimized, and that dust mites cannot live at RH levels below 40%. They also indicate that the life spans of many disease bacteria are minimized when space RH is held within a 30-60% range. From the foregoing, it is reasonable to assume that a winter space RH range of 30-40% would be an acceptable compromise between comfort considerations and limitation of growth rates for dust mites and many bacteria. In addition, this report documents corrections made to the simulation models to fix errors in the TRNSYS building model for Atlanta and in the refrigerant pressure drop calculation in the water-to-refrigerant evaporator module of the ORNL Heat Pump Design Model (HPDM) used for the IHP analyses. These changes resulted in some minor differences between IHP performance as reported in Baxter (2006a) and in this report.

  20. Engineering Analysis of Intermediate Loop and Process Heat Exchanger Requirements to Include Configuration Analysis and Materials Needs

    SciTech Connect (OSTI)

    T.M. Lillo; R.L. Williamson; T.R. Reed; C.B. Davis; D.M. Ginosar

    2005-09-01

    The need to locate advanced hydrogen production facilities a finite distance away from a nuclear power source necessitates an intermediate heat transport loop (IHTL). This IHTL must not only transport energy efficiently over distances up to 500 meters but must also be capable of operating at high temperatures (>850°C) for many years. High temperature, long term operation raises concerns about material strength, creep resistance, and general material stability (corrosion resistance). IHTL design is currently in its initial stages, and many questions remain to be answered before intelligent design can begin. This report begins to examine some of the issues surrounding the main components of an IHTL. Specifically, a stress analysis of a compact heat exchanger design under expected operating conditions is reported. Also presented are the results of a thermal analysis performed on two IHTL pipe configurations for different heat transport fluids. The configurations consist of separate hot supply and cold return legs, as well as an annular design in which the hot fluid is carried in an inner pipe and the cold return fluid travels in the opposite direction in the annular space around the hot pipe. The effects of insulation configurations on pipe configuration performance are also reported. Finally, a simple analysis of two different process heat exchanger designs, one a tube-in-shell type and the other a compact or microchannel reactor, is presented in light of catalyst requirements. Important insights into the critical areas of research and development are gained from these analyses, guiding the direction of future research.

  1. computers

    National Nuclear Security Administration (NNSA)

    Each successive generation of computing systems has provided greater computing power and energy efficiency.

    CTS-1 clusters will support NNSA's Life Extension Program and...

  2. Computational analysis of azine-N-oxides as energetic materials

    SciTech Connect (OSTI)

    Ritchie, J.P.

    1994-05-01

    A BKW equation of state in a 1-dimensional hydrodynamic simulation of the cylinder test can be used to estimate the performance of explosives. Using this approach, the novel explosive 1,4-diamino-2,3,5,6-tetrazine-2,5-dioxide (TZX) was analyzed. Despite a high detonation velocity and a predicted CJ pressure comparable to that of RDX, TZX performs relatively poorly in the cylinder test. Theoretical and computational analysis shows this to be the result of a low heat of detonation. A conceptual strategy is proposed to remedy this problem. In order to predict the required heats of formation, new ab initio group equivalents were developed. Crystal structure calculations are also described that show hydrogen-bonding is important in determining the density of TZX and related compounds.

  3. Data analysis using the Gnu R system for statistical computation

    SciTech Connect (OSTI)

    Simone, James (Fermilab)

    2011-07-01

    R is a language and system for statistical computation. It is widely used in statistics, bioinformatics, machine learning, data mining, quantitative finance, and the analysis of clinical drug trials. Among the advantages of R are: it has become the standard language for developing statistical techniques; it is being actively developed by a large and growing global user community; it is open source software; it is highly portable (Linux, OS X, and Windows); it has a built-in documentation system; it produces high quality graphics; and it is easily extensible, with over four thousand extension packages available covering statistics and applications. This report gives a very brief introduction to R with some examples using lattice QCD simulation results. It then discusses the development of R packages designed for chi-square minimization fits of lattice n-point correlation functions.

  4. MHK technology developments include current

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    MHK technology developments include current energy conversion (CEC) devices, for example, hydrokinetic turbines that extract power from water currents (riverine, tidal, and ocean), and wave energy conversion (WEC) devices that extract power from wave motion. Sandia's MHK research leverages decades of experience in engineering, design, and analysis of wind power technologies, and its vast research complex, including high-performance computing (HPC), advanced materials and coatings, and nondestructive...

  5. Computer analysis of sodium cold trap design and performance [LMFBR]

    SciTech Connect (OSTI)

    McPheeters, C.C.; Raue, D.J.

    1983-11-01

    Normal steam-side corrosion of steam-generator tubes in Liquid Metal Fast Breeder Reactors (LMFBRs) results in liberation of hydrogen, and most of this hydrogen diffuses through the tubes into the heat-transfer sodium and must be removed by the purification system. Cold traps are normally used to purify sodium. They operate by cooling the sodium to temperatures near the melting point, where soluble impurities including hydrogen and oxygen precipitate as NaH and Na₂O, respectively. A computer model was developed to simulate the processes that occur in sodium cold traps. The Model for Analyzing Sodium Cold Traps (MASCOT) simulates any desired configuration of mesh arrangements and dimensions and calculates pressure drops and flow distributions, temperature profiles, impurity concentration profiles, and impurity mass distributions.

  6. Computing and Computational Sciences Directorate - Computer Science and

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    The Computer Science and Mathematics Division (CSMD) is ORNL's premier source of basic and applied research in high-performance computing, applied mathematics, and intelligent systems. Our mission includes basic research in the computational sciences and the application of advanced computing systems and computational, mathematical, and analysis techniques to the solution of scientific problems of national importance. We seek to work...

  7. Numerical power balance and free energy loss analysis for solar cells including optical, thermodynamic, and electrical aspects

    SciTech Connect (OSTI)

    Greulich, Johannes; Höffler, Hannes; Würfel, Uli; Rein, Stefan

    2013-11-28

    A method for analyzing the power losses of solar cells is presented, supplying a complete balance of the incident power, the optical, thermodynamic, and electrical power losses, and the electrical output power. The involved quantities all have the dimension of a power density (units: W/m{sup 2}), which permits their direct comparison. In order to avoid the over-representation of losses arising from the ultraviolet part of the solar spectrum, a method for the analysis of the electrical free energy losses is extended to include optical losses. This extended analysis does not take the incident solar power (e.g., 1000 W/m{sup 2}) as its reference and does not explicitly include the thermalization losses and losses due to the generation of entropy. Instead, the usable power, i.e., the free energy or electro-chemical potential of the electron-hole pairs, is set as the reference value, thereby overcoming the ambiguities of the power balance. Both methods, the power balance and the free energy loss analysis, are carried out by way of example for a monocrystalline p-type silicon metal wrap through solar cell with passivated emitter and rear (MWT-PERC), based on optical and electrical measurements and numerical modeling. The methods give interesting insights into photovoltaic (PV) energy conversion, provide quantitative analyses of all loss mechanisms, and supply the basis for the systematic technological improvement of the device.
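
    The core idea of the power balance is that every channel, optical, thermodynamic, and electrical, is expressed in W/m{sup 2} and must sum to the incident power. The sketch below shows only this bookkeeping; all channel values are invented for illustration and are not measured MWT-PERC results.

        # Bookkeeping sketch of the power-balance idea: every loss channel
        # and the output are in W/m^2 and must sum to the incident power.
        # All numbers below are illustrative, not measured device values.
        incident = 1000.0  # W/m^2
        channels = {
            "reflection + escape":       60.0,
            "parasitic absorption":      45.0,
            "transmission":              15.0,
            "thermalization":           310.0,
            "below-bandgap":            185.0,
            "entropy generation":        95.0,
            "recombination":             50.0,
            "resistive (series/shunt)":  25.0,
            "electrical output":        215.0,
        }
        assert abs(sum(channels.values()) - incident) < 1e-9
        for name, p in channels.items():
            print(f"{name:28s} {p:7.1f} W/m^2  ({100*p/incident:4.1f} %)")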

  8. computers

    National Nuclear Security Administration (NNSA)

    Retired computers used for cybersecurity research at Sandia National...

  9. Computer

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    I. INTRODUCTION This paper presents several computational tools required for processing images of a heavy ion beam and estimating the magnetic field within a plasma. The...

  10. Sodium fast reactor gaps analysis of computer codes and models for accident analysis and reactor safety.

    SciTech Connect (OSTI)

    Carbajo, Juan; Jeong, Hae-Yong; Wigeland, Roald; Corradini, Michael; Schmidt, Rodney Cannon; Thomas, Justin; Wei, Tom; Sofu, Tanju; Ludewig, Hans; Tobita, Yoshiharu; Ohshima, Hiroyuki; Serre, Frederic

    2011-06-01

    This report summarizes the results of an expert-opinion elicitation activity designed to qualitatively assess the status and capabilities of currently available computer codes and models for accident analysis and reactor safety calculations of advanced sodium fast reactors, and identify important gaps. The twelve-member panel consisted of representatives from five U.S. National Laboratories (SNL, ANL, INL, ORNL, and BNL), the University of Wisconsin, KAERI, JAEA, and CEA. The major portion of this elicitation activity occurred during a two-day meeting held on Aug. 10-11, 2010 at Argonne National Laboratory. There were two primary objectives of this work: (1) Identify computer codes currently available for SFR accident analysis and reactor safety calculations; and (2) Assess the status and capability of current US computer codes to adequately model the required accident scenarios and associated phenomena, and identify important gaps. During the review, panel members identified over 60 computer codes that are currently available in the international community to perform different aspects of SFR safety analysis for various event scenarios and accident categories. A brief description of each of these codes together with references (when available) is provided. An adaptation of the Predictive Capability Maturity Model (PCMM) for computational modeling and simulation is described for use in this work. The panel's assessment of the available US codes is presented in the form of nine tables, organized into groups of three for each of three risk categories considered: anticipated operational occurrences (AOOs), design basis accidents (DBAs), and beyond design basis accidents (BDBAs). A set of summary conclusions is drawn from the results obtained. At the highest level, the panel judged that current US code capabilities are adequate for licensing given reasonable margins, but expressed concern that US code development activities had stagnated and that the experienced user base and the experimental validation base were decaying away quickly.

  11. Analysis of magnetic probe signals including effect of cylindrical conducting wall for field-reversed configuration experiment

    SciTech Connect (OSTI)

    Ikeyama, Taeko; Hiroi, Masanori; Nemoto, Yuuichi; Nogi, Yasuyuki

    2008-06-15

    A confinement field is disturbed by magnetohydrodynamic (MHD) motions of a field-reversed configuration (FRC) plasma in a cylindrical conductor. The effect of the conductor should be included to obtain the spatial structure of the disturbed field with good precision. For this purpose, a toroidal current in the plasma and an eddy current on the conducting wall are replaced by magnetic dipole and image magnetic dipole moments, respectively. Typical spatial structures of the disturbed field are calculated by using the dipole moments for such MHD motions as radial shift, internal tilt, external tilt, and n=2 mode deformation. Then, analytic formulas for estimating the shift distance, tilt angle, and deformation rate of the MHD motions from magnetic probe signals are derived. Calculations using the dipole moments indicate that the analytic formulas include an error of approximately 40%. Two kinds of experiments are carried out to investigate the reliability of the calculations. First, a magnetic field produced by a circular current is measured in an aluminum pipe to confirm the replacement of the eddy current with the image magnetic dipole moments. The measured fields coincide well with the values calculated including the image magnetic dipole moments. Second, magnetic probe signals measured from the FRC plasma are substituted into the analytic formulas to obtain the shift distance and deformation rate. The experimental results are compared to the MHD motions measured by using radiation from the plasma. If the error included in the analytic formulas and the difference between the magnetic and optical structures in the plasma are considered, the results of the radiation measurement support well those of the magnetic analysis.
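
    The replacement described above can be sketched directly: the disturbed field at a probe is the superposition of the field of a magnetic dipole representing the plasma current and of an image dipole standing in for the eddy current on the conducting wall. The dipole placements and strengths below are illustrative only.

        # Sketch of the dipole/image-dipole replacement: B at a probe is the
        # field of a plasma dipole plus an image dipole for the wall eddy
        # current. Placements and moments below are illustrative.
        import numpy as np

        MU0_4PI = 1e-7  # mu_0 / 4 pi, T*m/A

        def dipole_field(r, r_dip, m):
            """B(r) of a point magnetic dipole m located at r_dip."""
            d = np.asarray(r, float) - np.asarray(r_dip, float)
            dist = np.linalg.norm(d)
            rhat = d / dist
            return MU0_4PI * (3.0 * rhat * np.dot(m, rhat) - m) / dist**3

        m_plasma = np.array([0.0, 0.0, 1.0e3])    # A*m^2
        m_image  = np.array([0.0, 0.0, -0.4e3])   # hypothetical image strength
        probe    = np.array([0.20, 0.0, 0.0])     # probe position, m

        B = dipole_field(probe, [0.05, 0, 0], m_plasma) \
          + dipole_field(probe, [0.45, 0, 0], m_image)
        print("disturbed field at probe [T]:", B)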

  12. Computational Fluid Dynamics Analysis of Flexible Duct Junction Box Design

    SciTech Connect (OSTI)

    Beach, Robert; Prahl, Duncan; Lange, Rich

    2013-12-01

    IBACOS explored the relationships between pressure and physical configurations of flexible duct junction boxes by using computational fluid dynamics (CFD) simulations to predict individual box parameters and total system pressure, thereby ensuring improved HVAC performance. Current Air Conditioning Contractors of America (ACCA) guidance (Group 11, Appendix 3, ACCA Manual D, Rutkowski 2009) allows for unconstrained variation in the number of takeoffs, box sizes, and takeoff locations. The only variables currently used in selecting an equivalent length (EL) are the velocity of air in the duct and the friction rate, provided the first takeoff is located at least twice its diameter away from the inlet. This condition does not account for other factors impacting pressure loss across these types of fittings. For each simulation, the IBACOS team converted the pressure loss within a box to an EL to compare the variation in ACCA Manual D guidance to the simulated variation. IBACOS chose cases to represent flows reasonably correlated with those typically encountered in the field and analyzed differences in total pressure due to changes in the number and location of takeoffs, box dimensions, and air velocity, and due to whether an entrance fitting is included. The team also calculated additional balancing losses for all cases due to discrepancies between intended outlet flows and the natural flow splits created by the fitting. In certain asymmetrical cases, the balancing losses were significantly higher than in symmetrical cases where the natural splits were close to the targets. Thus, IBACOS has identified additional design constraints that can ensure better system performance.
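
    The conversion step described above, turning a CFD-predicted pressure loss across a fitting into an equivalent length (EL) of straight duct at the design friction rate, is a one-line calculation. A sketch with illustrative numbers (not values from the study):

        # Sketch of the pressure-loss-to-EL conversion: the EL is the length
        # of straight duct producing the same loss at the design friction
        # rate. All values below are illustrative.
        def equivalent_length_ft(dp_iwc, friction_rate_iwc_per_100ft):
            """EL in feet of straight duct producing the same loss dp."""
            return 100.0 * dp_iwc / friction_rate_iwc_per_100ft

        dp = 0.024   # in. w.c. across the fitting, from CFD
        fr = 0.08    # design friction rate, in. w.c. per 100 ft
        print("EL = %.0f ft" % equivalent_length_ft(dp, fr))  # -> 30 ft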

  13. Analysis of gallium arsenide deposition in a horizontal chemical vapor deposition reactor using massively parallel computations

    SciTech Connect (OSTI)

    Salinger, A.G.; Shadid, J.N.; Hutchinson, S.A.

    1998-01-01

    A numerical analysis of the deposition of gallium from trimethylgallium (TMG) and arsine in a horizontal CVD reactor with a tilted susceptor and a three-inch-diameter rotating substrate is performed. The three-dimensional model includes complete coupling between fluid mechanics, heat transfer, and species transport, and is solved using an unstructured finite element discretization on a massively parallel computer. The effects of three operating parameters (the disk rotation rate, inlet TMG fraction, and inlet velocity) and two design parameters (the tilt angle of the reactor base and the reactor width) on the growth rate and uniformity are presented. The nonlinear dependence of the growth rate uniformity on the key operating parameters is discussed in detail. Efficient and robust algorithms for massively parallel reacting flow simulations, as incorporated into our analysis code MPSalsa, make detailed analysis of this complicated system feasible.

  14. Computational Challenges for Microbial Genome and Metagenome Analysis (2010 JGI/ANL HPC Workshop)

    ScienceCinema (OSTI)

    Mavrommatis, Kostas

    2011-06-08

    Kostas Mavrommatis of the DOE JGI gives a presentation on "Computational Challenges for Microbial Genome & Metagenome Analysis" at the JGI/Argonne HPC Workshop on January 26, 2010.

  15. Thermodynamic analysis of a possible CO{sub 2}-laser plant included in a heat engine cycle

    SciTech Connect (OSTI)

    Bisio, G.; Rubatto, G.

    1998-07-01

    In recent years, several plants have been realized in some industrialized countries to recover pressure exergy from various fluids. That has been done by means of suitable turbines, in particular for blast-furnace top gas and natural gas. Various papers have examined the topic, considering pros and cons. High-power CO{sub 2} lasers are being more and more widely used for welding, drilling, and cutting in machine shops. In the near future, different kinds of metal surface treatments will probably become routine practice with laser units. The industries benefiting most from high-power lasers will be the automotive industry, shipbuilding, the offshore industry, the aerospace industry, and the nuclear and chemical processing industries. Both degradation and cooling problems may be alleviated by allowing the gas to flow through the laser tube and by reducing its pressure outside this tube. Thus, a thermodynamic analysis of high-power CO{sub 2} lasers with particular reference to a possible energy recovery is justified. In previous papers, the critical examination of the concept of efficiency has led one of the present authors to the definition of an operational domain in which the process can be achieved. This domain is confined by regions of no entropy production (upper limit) and no useful effects (lower limit). On the basis of these concepts and of what has been done for pressure exergy recovery from other fluids, exergy investigations and an analysis of losses are performed for a cyclic process including a high-performance CO{sub 2} laser. Thermodynamic analysis of flow processes in a CO{sub 2}-laser plant shows that the inclusion of a turbine in this plant allows most of the exergy needed by the compressor to be recovered; in addition, the water consumption for refrigeration in the heat exchanger is reduced.
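
    The pressure exergy the proposed turbine would recover can be estimated from the specific flow exergy of the laser-gas stream relative to the dead state. A minimal sketch, assuming ideal-gas behavior with rough CO{sub 2} properties (not the property data used in the paper):

        # Sketch of the pressure-exergy idea behind the recovery turbine:
        # specific flow exergy of an ideal-gas stream relative to the dead
        # state (T0, p0). Gas properties below are rough CO2 values.
        import math

        def flow_exergy(T, p, T0=298.15, p0=101325.0, cp=846.0, R=188.9):
            """Specific flow exergy, J/kg (kinetic/potential terms neglected)."""
            return cp * (T - T0) - T0 * (cp * math.log(T / T0)
                                         - R * math.log(p / p0))

        # Laser-gas stream at 400 K and 5 bar: most of its exergy is pressure
        # exergy, which a turbine in the loop could return to the compressor.
        print("%.1f kJ/kg" % (flow_exergy(400.0, 5e5) / 1000.0))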

  16. Application of the Computer Program SASSI for Seismic SSI Analysis of WTP Facilities

    Office of Environmental Management (EM)

    of the Computer Program SASSI for Seismic SSI Analysis of WTP Facilities. Farhang Ostadan (BNI) & Raman Venkata (DOE-WTP-WED), presented by Lisa Anderson (BNI), US DOE NPH Workshop, October 25, 2011. Background: the SASSI computer code was developed in the early 1980s to solve Soil-Structure-Interaction (SSI) problems. The original version of SASSI was

  17. Computing

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Office of Advanced Scientific Computing Research in the Department of Energy Office of Science under contract number DE-AC02-05CH11231. Application and System Memory Use, Configuration, and Problems on Bassi. Richard Gerber, Lawrence Berkeley National Laboratory, NERSC User Services. ScicomP 13, Garching bei München, Germany, July 17, 2007. Overview: About Bassi; Memory on Bassi; Large Page Memory (It's Great!); System Configuration; Large Page

  18. Computations

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Computations - Sandia Energy

  19. NASTRAN-based computer program for structural dynamic analysis of horizontal axis wind turbines

    SciTech Connect (OSTI)

    Lobitz, D.W.

    1984-01-01

    This paper describes a computer program developed for structural dynamic analysis of horizontal axis wind turbines (HAWTs). It is based on the finite element method through its reliance on NASTRAN for the development of mass, stiffness, and damping matrices of the tower and rotor, which are treated in NASTRAN as separate structures. The tower is modeled in a stationary frame and the rotor in one rotating at a constant angular velocity. The two structures are subsequently joined together (external to NASTRAN) using a time-dependent transformation consistent with the hub configuration. Aerodynamic loads are computed with an established flow model based on strip theory. Aeroelastic effects are included by incorporating the local velocity and twisting deformation of the blade in the load computation. The turbulent nature of the wind, both in space and time, is modeled by adding in stochastic wind increments. The resulting equations of motion are solved in the time domain using the implicit Newmark-Beta integrator. Preliminary comparisons with data from the Boeing/NASA MOD2 HAWT indicate that the code is capable of accurately and efficiently predicting the response of HAWTs driven by turbulent winds.
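
    The implicit Newmark-Beta integrator named above is compact enough to sketch. Below is a minimal single-degree-of-freedom version with the average-acceleration parameters (beta = 1/4, gamma = 1/2); the oscillator and forcing are illustrative and unrelated to the MOD2 data.

        # Minimal SDOF sketch of an implicit Newmark-beta time integrator
        # (average acceleration: beta = 1/4, gamma = 1/2). Parameters and
        # forcing are illustrative only.
        import numpy as np

        def newmark(M, C, K, F, dt, x0=0.0, v0=0.0, beta=0.25, gamma=0.5):
            n = len(F)
            x = np.zeros(n); v = np.zeros(n); a = np.zeros(n)
            x[0], v[0] = x0, v0
            a[0] = (F[0] - C * v0 - K * x0) / M
            Keff = K + gamma / (beta * dt) * C + M / (beta * dt**2)
            for i in range(n - 1):
                rhs = (F[i + 1]
                       + M * (x[i] / (beta * dt**2) + v[i] / (beta * dt)
                              + (0.5 / beta - 1.0) * a[i])
                       + C * (gamma / (beta * dt) * x[i]
                              + (gamma / beta - 1.0) * v[i]
                              + dt * (gamma / (2 * beta) - 1.0) * a[i]))
                x[i + 1] = rhs / Keff
                a[i + 1] = ((x[i + 1] - x[i]) / (beta * dt**2)
                            - v[i] / (beta * dt) - (0.5 / beta - 1.0) * a[i])
                v[i + 1] = v[i] + dt * ((1 - gamma) * a[i] + gamma * a[i + 1])
            return x, v, a

        t = np.arange(0.0, 10.0, 0.01)
        x, v, a = newmark(M=1.0, C=0.1, K=4.0, F=np.sin(2.0 * t), dt=0.01)
        print("peak response:", x.max())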

  20. THE SAP3 COMPUTER PROGRAM FOR QUANTITATIVE MULTIELEMENT ANALYSIS BY ENERGY DISPERSIVE X-RAY FLUORESCENCE

    SciTech Connect (OSTI)

    Nielson, K. K.; Sanders, R. W.

    1982-04-01

    SAP3 is a dual-function FORTRAN computer program which performs peak analysis of energy-dispersive x-ray fluorescence spectra and then quantitatively interprets the results of the multielement analysis. It was written for mono- or bichromatic excitation, such as from an isotopic or secondary excitation source, and uses the separate incoherent and coherent backscatter intensities to define the bulk sample matrix composition. This composition is used in performing fundamental-parameter matrix corrections for self-absorption, enhancement, and particle-size effects, obviating the need for specific calibrations for a given sample matrix. The generalized calibration is based on a set of thin-film sensitivities, which are stored in a library disk file and used for all sample matrices and thicknesses. Peak overlap factors are also determined from the thin-film standards and are stored in the library for calculating peak overlap corrections. A detailed description is given of the algorithms and program logic, and the program listing and flow charts are also provided. An auxiliary program, SPCAL, is also given for use in calibrating the backscatter intensities. SAP3 provides numerous analysis options via seventeen control switches, which give flexibility in performing the calculations best suited to the sample and the user's needs. User input may be limited to the name of the library, the analysis livetime, and the spectrum filename and location. Output includes all peak analysis information, matrix correction factors, and element concentrations, uncertainties, and detection limits. Twenty-four elements are typically determined from a 1024-channel spectrum in one to two minutes using a PDP-11/34 computer operating under RSX-11M.
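
    The quantitative step SAP3 performs can be caricatured in a few lines: a net peak rate divided by a thin-film sensitivity and a fundamental-parameter self-absorption correction yields a concentration. The correction form below is the standard finite-thickness factor; all numbers are invented, not SAP3 library values.

        # Sketch of fundamental-parameter XRF quantification: net peak rate
        # over (thin-film sensitivity * self-absorption factor). Numbers
        # are illustrative, not SAP3 library values.
        import math

        def absorption_correction(mu_rho, rho_d):
            """Finite-thickness self-absorption factor,
            A = (1 - exp(-mu*rho*d)) / (mu*rho*d)."""
            x = mu_rho * rho_d
            return (1.0 - math.exp(-x)) / x

        def concentration(net_counts, livetime_s, sensitivity_cps_per_pct,
                          mu_rho, rho_d):
            rate = net_counts / livetime_s
            return rate / (sensitivity_cps_per_pct
                           * absorption_correction(mu_rho, rho_d))

        # e.g., Fe K-alpha: 12000 net counts in 200 s, sensitivity 55 cps/%,
        # effective mass attenuation 90 cm^2/g, 0.01 g/cm^2 sample loading
        print("%.2f %%" % concentration(12000, 200, 55.0, 90.0, 0.01))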

    1. Computational design and analysis of flatback airfoil wind tunnel experiment.

      SciTech Connect (OSTI)

      Mayda, Edward A.; van Dam, C.P.; Chao, David D.; Berg, Dale E.

      2008-03-01

      A computational fluid dynamics study of thick wind turbine section shapes in the test section of the UC Davis wind tunnel at a chord Reynolds number of one million is presented. The goals of this study are to validate standard wind tunnel wall corrections for high solid blockage conditions and to reaffirm the favorable effect of a blunt trailing edge or flatback on the performance characteristics of a representative thick airfoil shape prior to building the wind tunnel models and conducting the experiment. The numerical simulations prove the standard wind tunnel corrections to be largely valid for the proposed test of 40% maximum thickness to chord ratio airfoils at a solid blockage ratio of 10%. Comparison of the computed lift characteristics of a sharp trailing edge baseline airfoil and derived flatback airfoils reaffirms the earlier observed trend of reduced sensitivity to surface contamination with increasing trailing edge thickness.
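
      The wall corrections under scrutiny amount to rescaling measured tunnel conditions by a blockage factor. A generic sketch of that bookkeeping, with illustrative solid- and wake-blockage inputs rather than values from the study:

          # Sketch of generic blockage-correction bookkeeping: free-air
          # velocity and dynamic pressure recovered from tunnel values via a
          # total blockage factor. Inputs are illustrative assumptions.
          def corrected_conditions(V_tunnel, q_tunnel, eps_solid, eps_wake):
              eps = eps_solid + eps_wake           # total blockage factor
              V_free = V_tunnel * (1.0 + eps)      # effective free-air velocity
              q_free = q_tunnel * (1.0 + eps)**2   # dynamic pressure ~ V^2
              return V_free, q_free

          # A 40%-thick section at 10% solid blockage gives a large eps; the
          # study asks whether corrections this large remain valid.
          print(corrected_conditions(V_tunnel=50.0, q_tunnel=1531.0,
                                     eps_solid=0.05, eps_wake=0.01))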

    2. Modeling and Analysis of a Lunar Space Reactor with the Computer Code

      Office of Scientific and Technical Information (OSTI)

      Conference: Modeling and Analysis of a Lunar Space Reactor with the Computer Code RELAP5-3D/ATHENA. The transient analysis 3-dimensional (3-D) computer code RELAP5-3D/ATHENA has been employed to model and analyze a space reactor of 180 kW(thermal), 40 kW (net, electrical) with eight Stirling engines (SEs). Each SE

    3. Routing performance analysis and optimization within a massively parallel computer

      DOE Patents [OSTI]

      Archer, Charles Jens; Peters, Amanda; Pinnow, Kurt Walter; Swartz, Brent Allen

      2013-04-16

      An apparatus, program product and method optimize the operation of a massively parallel computer system by, in part, receiving actual performance data concerning an application executed by the plurality of interconnected nodes, and analyzing the actual performance data to identify an actual performance pattern. A desired performance pattern may be determined for the application, and an algorithm may be selected from among a plurality of algorithms stored within a memory, the algorithm being configured to achieve the desired performance pattern based on the actual performance data.
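
      The selection idea in the claims can be sketched as a lookup: classify the measured performance data into a pattern, then dispatch to the algorithm registered for that pattern. The patterns, detector, and algorithm names below are hypothetical.

          # Sketch of performance-pattern-based algorithm selection. The
          # pattern names, detector heuristic, and algorithm names are
          # hypothetical, not the patent's actual classifiers.
          from statistics import mean

          ALGORITHMS = {
              "uniform": "adaptive_routing",
              "hotspot": "deterministic_routing",
          }

          def classify(message_counts_per_node):
              """Crude detector: a node far above the mean marks a hotspot."""
              avg = mean(message_counts_per_node)
              return ("hotspot" if max(message_counts_per_node) > 2 * avg
                      else "uniform")

          def select_algorithm(message_counts_per_node):
              return ALGORITHMS[classify(message_counts_per_node)]

          print(select_algorithm([95, 102, 99, 310, 101]))  # deterministic
          print(select_algorithm([100, 98, 103, 99, 101]))  # adaptive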

    4. Computational Proteomics: High-throughput Analysis for Systems Biology

      SciTech Connect (OSTI)

      Cannon, William R.; Webb-Robertson, Bobbie-Jo M.

      2007-01-03

      High-throughput (HTP) proteomics is a rapidly developing field that offers the global profiling of proteins from a biological system. The HTP technological advances are fueling a revolution in biology, enabling analyses at the scales of entire systems (e.g., whole cells, tumors, or environmental communities). However, simply identifying the proteins in a cell is insufficient for understanding the underlying complexity and operating mechanisms of the overall system. Systems-level investigations are relying more and more on computational analyses, especially in the field of proteomics, which generates large-scale global data.

    5. High Performance Computing for Sequence Analysis (2010 JGI/ANL HPC Workshop)

      ScienceCinema (OSTI)

      Oehmen, Chris [PNNL]

      2011-06-08

      Chris Oehmen of the Pacific Northwest National Laboratory gives a presentation on "High Performance Computing for Sequence Analysis" at the JGI/Argonne HPC Workshop on January 25, 2010.

    6. High Performance Computing for Sequence Analysis (2010 JGI/ANL HPC Workshop)

      SciTech Connect (OSTI)

      Oehmen, Chris [PNNL]

      2010-01-25

      Chris Oehmen of the Pacific Northwest National Laboratory gives a presentation on "High Performance Computing for Sequence Analysis" at the JGI/Argonne HPC Workshop on January 25, 2010.

    7. Application of the Computer Program SASSI for Seismic SSI Analysis of WTP Facilities

      Broader source: Energy.gov [DOE]

      Application of the Computer Program SASSI for Seismic SSI Analysis of WTP Facilities. Farhang Ostadan (BNI) & Raman Venkata (DOE-WTP-WED), presented by Lisa Anderson (BNI), US DOE NPH Workshop, October 25, 2011

    8. Computer Modeling of Violent Intent: A Content Analysis Approach

      SciTech Connect (OSTI)

      Sanfilippo, Antonio P.; Mcgrath, Liam R.; Bell, Eric B.

      2014-01-03

      We present a computational approach to modeling the intent of a communication source representing a group or an individual to engage in violent behavior. Our aim is to identify and rank aspects of radical rhetoric that are endogenously related to violent intent to predict the potential for violence as encoded in written or spoken language. We use correlations between contentious rhetoric and the propensity for violent behavior found in documents from radical terrorist and non-terrorist groups and individuals to train and evaluate models of violent intent. We then apply these models to unseen instances of linguistic behavior to detect signs of contention that have a positive correlation with violent intent factors. Of particular interest is the application of violent intent models to social media, such as Twitter, that have proved to serve as effective channels in furthering sociopolitical change.
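
      The abstract does not disclose the learning machinery, so the sketch below shows one plausible realization: a bag-of-words text classifier trained on rhetoric labeled for violent intent and applied to unseen text. The toy corpus and labels are invented.

          # Sketch of one plausible realization: TF-IDF features plus a
          # logistic-regression classifier scoring text for violent intent.
          # The toy corpus and labels below are invented for illustration.
          from sklearn.feature_extraction.text import TfidfVectorizer
          from sklearn.linear_model import LogisticRegression
          from sklearn.pipeline import make_pipeline

          train_docs = [
              "we must strike our enemies without mercy",
              "the struggle will be won through armed action",
              "we call for peaceful protest and open dialogue",
              "community organizing and petitions are our path",
          ]
          train_labels = [1, 1, 0, 0]   # 1 = violent intent, 0 = non-violent

          model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                                LogisticRegression())
          model.fit(train_docs, train_labels)

          unseen = ["prepare for the coming battle against the oppressors"]
          print("P(violent intent) = %.2f" % model.predict_proba(unseen)[0][1])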

    9. Technical support document: Energy conservation standards for consumer products: Dishwashers, clothes washers, and clothes dryers including: Environmental impacts; regulatory impact analysis

      SciTech Connect (OSTI)

      Not Available

      1990-12-01

      The Energy Policy and Conservation Act as amended (P.L. 94-163) establishes energy conservation standards for 12 of the 13 types of consumer products specifically covered by the Act. The legislation requires the Department of Energy (DOE) to consider new or amended standards for these and other types of products at specified times. This Technical Support Document presents the methodology, data, and results from the analysis of the energy and economic impacts of standards on dishwashers, clothes washers, and clothes dryers. The economic impact analysis is performed in five major areas: an Engineering Analysis, which establishes technical feasibility and product attributes, including costs of design options to improve appliance efficiency; a Consumer Analysis at two levels, national aggregate impacts and impacts on individuals, where the national aggregate impacts include forecasts of appliance sales, efficiencies, energy use, and consumer expenditures, and the individual impacts are analyzed by Life-Cycle Cost (LCC), Payback Periods, and Cost of Conserved Energy (CCE), which evaluate the savings in operating expenses relative to increases in purchase price; a Manufacturer Analysis, which provides an estimate of manufacturers' response to the proposed standards, quantified by changes in several measures of financial performance for a firm; an Industry Impact Analysis, which shows financial and competitive impacts on the appliance industry; a Utility Analysis, which measures the impacts of the altered energy-consumption patterns on electric utilities; and an Environmental Effects analysis, which estimates changes in emissions of carbon dioxide, sulfur oxides, and nitrogen oxides due to reduced energy consumption in the home and at the power plant. A Regulatory Impact Analysis collects the results of all the analyses into the net benefits and costs from a national perspective. 47 figs., 171 tabs. (JF)
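
      The three consumer metrics named above, LCC, payback period, and CCE, reduce to short formulas. A sketch for a hypothetical efficiency design option (all prices, savings, and rates invented):

          # Sketch of the three consumer metrics for a hypothetical design
          # option: life-cycle cost, payback period, and cost of conserved
          # energy. All dollar and energy figures below are invented.
          def lcc(price, annual_op_cost, discount_rate, lifetime):
              pw = sum(annual_op_cost / (1 + discount_rate) ** t
                       for t in range(1, lifetime + 1))
              return price + pw

          def payback_years(d_price, d_annual_savings):
              return d_price / d_annual_savings

          def cce(d_price, d_annual_kwh, discount_rate, lifetime):
              """$/kWh: annualized extra purchase price (capital recovery
              factor) divided by the annual energy savings."""
              crf = discount_rate / (1 - (1 + discount_rate) ** -lifetime)
              return d_price * crf / d_annual_kwh

          base = lcc(400.0, 80.0, 0.07, 13)
          eff  = lcc(440.0, 65.0, 0.07, 13)
          print("delta LCC : $%.0f" % (eff - base))
          print("payback   : %.1f yr" % payback_years(40.0, 15.0))
          print("CCE       : $%.3f/kWh" % cce(40.0, 120.0, 0.07, 13))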

    10. INTELLIGENT COMPUTING SYSTEM FOR RESERVOIR ANALYSIS AND RISK ASSESSMENT OF THE RED RIVER FORMATION

      SciTech Connect (OSTI)

      Kenneth D. Luff

      2002-09-30

      Integrated software has been written that comprises the tool kit for the Intelligent Computing System (ICS). Luff Exploration Company is applying these tools for analysis of carbonate reservoirs in the southern Williston Basin. The integrated software programs are designed to be used by a small team consisting of an engineer, a geologist, and a geophysicist. The software tools are flexible and robust, allowing application in many environments for hydrocarbon reservoirs. Keystone elements of the software tools include clustering and neural-network techniques. The tools are used to transform seismic attribute data into reservoir characteristics such as storage (phi-h), probable oil-water contacts, structural depths, and structural growth history. When these reservoir characteristics are combined with neural network or fuzzy logic solvers, they can provide a more complete description of the reservoir. This leads to better estimates of hydrocarbons in place, areal limits, and potential for infill or step-out drilling. These tools were developed and tested using seismic, geologic, and well data from the Red River Play in Bowman County, North Dakota and Harding County, South Dakota. The geologic setting for the Red River Formation is shallow-shelf carbonate at a depth from 8000 to 10,000 ft.
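
      The two keystone elements, clustering and neural-network solvers, can be sketched as a pipeline that groups traces into seismic facies and learns a mapping from attributes to a reservoir characteristic such as phi-h. The data below are synthetic placeholders, not Red River results:

          # Sketch of the clustering + neural-solver pipeline: group traces
          # by seismic attributes, then learn attributes -> phi-h. Data are
          # synthetic placeholders, not Red River measurements.
          import numpy as np
          from sklearn.cluster import KMeans
          from sklearn.neural_network import MLPRegressor

          rng = np.random.default_rng(1)
          attributes = rng.normal(size=(200, 4))          # e.g., amplitude,
          phi_h = attributes @ [0.8, -0.3, 0.5, 0.1] + 3  # frequency, etc.

          facies = KMeans(n_clusters=3, n_init=10,
                          random_state=0).fit_predict(attributes)
          net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=3000,
                             random_state=0).fit(attributes, phi_h)

          print("facies counts:", np.bincount(facies))
          print("predicted phi-h at a new location:",
                net.predict(rng.normal(size=(1, 4)))[0])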

    11. Uncertainty Studies of Real Anode Surface Area in Computational Analysis for Molten Salt Electrorefining

      SciTech Connect (OSTI)

      Sungyeol Choi; Jaeyeong Park; Robert O. Hoover; Supathorn Phongikaroon; Michael F. Simpson; Kwang-Rag Kim; Il Soon Hwang

      2011-09-01

      This study examines how much the cell potential changes under five different assumptions for the real anode surface area. Determining the real anode surface area is a significant issue to be resolved for precisely modeling molten salt electrorefining. Based on a three-dimensional electrorefining model, calculated cell potentials are compared with an experimental cell potential variation over 80 hours of operation of the Mark-IV electrorefiner with driver fuel from the Experimental Breeder Reactor II. We succeeded in achieving good agreement with the overall trend of the experimental data through appropriate selection of a model for the real anode surface area, but there are still local inconsistencies between theoretical calculation and experimental observation. In addition, the results were validated and compared with two-dimensional results to identify possible uncertainty factors that had to be further considered in a computational electrorefining analysis. These uncertainty factors include material properties, heterogeneous material distribution, surface roughness, and current efficiency. Zirconium's abundance and complex behavior have more impact on uncertainty towards the latter period of electrorefining for a given batch of fuel. The benchmark results found that anode materials would be dissolved in both the axial and radial directions, at least for low burn-up metallic fuels, after the active liquid sodium bonding was dissolved.
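
      Why the assumed real anode surface area moves the cell potential can be seen from simple electrode kinetics: for a fixed cell current, a smaller area raises the current density and hence the anodic overpotential. The sketch below uses a plain Tafel law with invented kinetic constants, not the three-dimensional model of the study:

          # Sketch of the surface-area sensitivity: fixed cell current, five
          # assumed anode areas, anodic overpotential from a Tafel law.
          # Kinetic constants are invented, not Mark-IV model values.
          import math

          R, F = 8.314, 96485.0
          T, alpha, i0 = 773.0, 0.5, 50.0   # K, -, A/m^2 (hypothetical)
          I_cell = 100.0                    # A

          for area in (0.05, 0.10, 0.20, 0.40, 0.80):  # assumed areas, m^2
              i = I_cell / area
              eta = (R * T / (alpha * F)) * math.log(i / i0)
              print("A = %.2f m^2 -> anodic overpotential %.0f mV"
                    % (area, eta * 1e3))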

    12. INTELLIGENT COMPUTING SYSTEM FOR RESERVOIR ANALYSIS AND RISK ASSESSMENT OF THE RED RIVER FORMATION

      SciTech Connect (OSTI)

      Mark A. Sippel; William C. Carrigan; Kenneth D. Luff; Lyn Canter

      2003-11-12

      Integrated software has been written that comprises the tool kit for the Intelligent Computing System (ICS). The software tools in ICS have been developed for characterization of reservoir properties and evaluation of hydrocarbon potential using a combination of inter-disciplinary data sources such as geophysical, geologic and engineering variables. The ICS tools provide a means for logical and consistent reservoir characterization and oil reserve estimates. The tools can be broadly characterized as (1) clustering tools, (2) neural solvers, (3) multiple-linear regression, (4) entrapment-potential calculator and (5) file utility tools. ICS tools are extremely flexible in their approach and use, and applicable to most geologic settings. The tools are primarily designed to correlate relationships between seismic information and engineering and geologic data obtained from wells, and to convert or translate seismic information into engineering and geologic terms or units. It is also possible to apply ICS in a simple framework that may include reservoir characterization using only engineering, seismic, or geologic data in the analysis. ICS tools were developed and tested using geophysical, geologic and engineering data obtained from an exploitation and development project involving the Red River Formation in Bowman County, North Dakota and Harding County, South Dakota. Data obtained from 3D seismic surveys, and 2D seismic lines encompassing nine prospective field areas were used in the analysis. The geologic setting of the Red River Formation in Bowman and Harding counties is that of a shallow-shelf, carbonate system. Present-day depth of the Red River formation is approximately 8000 to 10,000 ft below ground surface. This report summarizes production results from well demonstration activity, results of reservoir characterization of the Red River Formation at demonstration sites, descriptions of ICS tools and strategies for their application.

    13. Computing Information

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      here you can find information relating to: Obtaining the right computer accounts. Using NIC terminals. Using BooNE's Computing Resources, including: Choosing your desktop....

    14. Computer code input for thermal hydraulic analysis of Multi-Function Waste Tank Facility Title II design

      SciTech Connect (OSTI)

      Cramer, E.R.

      1994-10-01

      The input files to the P/Thermal computer code are documented for the thermal hydraulic analysis performed in support of the Multi-Function Waste Tank Facility Title II design.

    15. Methods and apparatuses for information analysis on shared and distributed computing systems

      DOE Patents [OSTI]

      Bohn, Shawn J [Richland, WA]; Krishnan, Manoj Kumar [Richland, WA]; Cowley, Wendy E [Richland, WA]; Nieplocha, Jarek [Richland, WA]

      2011-02-22

      Apparatuses and computer-implemented methods for analyzing, on shared and distributed computing systems, information comprising one or more documents are disclosed according to some aspects. In one embodiment, information analysis can comprise distributing one or more distinct sets of documents among each of a plurality of processes, wherein each process performs operations on a distinct set of documents substantially in parallel with other processes. Operations by each process can further comprise computing term statistics for terms contained in each distinct set of documents, thereby generating a local set of term statistics for each distinct set of documents. Still further, operations by each process can comprise contributing the local sets of term statistics to a global set of term statistics, and participating in generating a major term set from an assigned portion of a global vocabulary.
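
      The claimed flow can be sketched with standard tooling: each process computes local term statistics for its distinct document set in parallel, the local sets are contributed to a global set, and a major term set is drawn from the global vocabulary. The corpus below is invented:

          # Sketch of the claimed flow: per-process local term statistics,
          # merged into a global set, from which major terms are drawn.
          from collections import Counter
          from multiprocessing import Pool

          def local_term_stats(documents):
              """Per-process step: term frequencies for one document set."""
              stats = Counter()
              for doc in documents:
                  stats.update(doc.lower().split())
              return stats

          if __name__ == "__main__":
              partitions = [
                  ["the reactor design review", "sodium coolant loop design"],
                  ["thermal analysis of the loop", "the review of sodium systems"],
              ]
              with Pool(2) as pool:
                  local_sets = pool.map(local_term_stats, partitions)

              global_stats = Counter()
              for local in local_sets:   # contribute locals to the global set
                  global_stats.update(local)
              major_terms = [t for t, n in global_stats.most_common(5)]
              print(major_terms)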

    16. Computing Videos

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Computing Videos

    17. Fermilab Central Computing Facility: Energy conservation report and mechanical systems design optimization and cost analysis study

      SciTech Connect (OSTI)

      Krstulovich, S.F.

      1986-11-12

      This report is developed as part of the Fermilab Central Computing Facility Project Title II Design Documentation Update under the provisions of DOE Document 6430.1, Chapter XIII-21, Section 14, paragraph a. As such, it concentrates primarily on HVAC mechanical systems design optimization and cost analysis, and should be considered a supplement to the Title I Design Report dated March 1986, wherein energy-related issues are discussed pertaining to building envelope and orientation as well as electrical systems design.

    18. Center for Integrated Computation and Analysis of Reconnection and Turbulence (CICART)

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Center for Integrated Computation and Analysis of Reconnection and Turbulence (CICART). Kai Germaschewski, Amitava Bhattacharjee, Barrett Rogers, Will Fox, Yi-Min Huang, and others. CICART, Space Science Center / Dept. of Physics, University of New Hampshire, August 3, 2010. Outline: (1) Project information; (2) Project summary and scientific objectives; (3) Current HPC usage and methods; (4) HPC

    19. Internal air flow analysis of a bladeless micro aerial vehicle hemisphere body using computational fluid dynamic

      SciTech Connect (OSTI)

      Othman, M. N. K.; Zuradzman, M. Razlan; Hazry, D.; Khairunizam, Wan; Shahriman, A. B.; Yaacob, S.; Ahmed, S. Faiz; and others (e-mail: zuradzman@unimap.edu.my, khairunizam@unimap.edu.my, s.yaacob@unimap.edu.my, abadal@unimap.edu.my)

      2014-12-04

      This paper explains the analysis of the internal air flow velocity of a bladeless vertical takeoff and landing (VTOL) Micro Aerial Vehicle (MAV) hemisphere body. In mechanical design, before a prototype model is produced, several analyses should be done to ensure the product's effectiveness and efficiency. Two types of analysis method are used in mechanical design: mathematical modeling and computational fluid dynamics. In this analysis, computational fluid dynamics (CFD) was used by means of the SolidWorks Flow Simulation software. The idea arose as a way to overcome the problem of the ordinary quadrotor UAV, which has a larger size because it uses four rotors whose propellers are exposed to the environment. The bladeless MAV body is designed to protect all electronic parts, which means it can be used in rainy conditions. It is also intended to increase the thrust produced by the ducted propeller compared with an exposed propeller. The analysis results show that the air flow velocity in the ducted area increases to twice the inlet velocity, which means that the duct contributes to the increase in air velocity.

    20. The Radiological Safety Analysis Computer Program (RSAC-5) user`s manual. Revision 1

      SciTech Connect (OSTI)

      Wenzel, D.R.

      1994-02-01

      The Radiological Safety Analysis Computer Program (RSAC-5) calculates the consequences of the release of radionuclides to the atmosphere. Using a personal computer, a user can generate a fission product inventory from either reactor operating history or nuclear criticalities. RSAC-5 models the effects of high-efficiency particulate air filters or other cleanup systems and calculates decay and ingrowth during transport through processes, facilities, and the environment. Doses are calculated through the inhalation, immersion, ground surface, and ingestion pathways. RSAC+, a menu-driven companion program to RSAC-5, assists users in creating and running RSAC-5 input files. This user's manual contains the mathematical models and operating instructions for RSAC-5 and RSAC+. Instructions, screens, and examples are provided to guide the user through the functions provided by RSAC-5 and RSAC+. These programs are designed for users who are familiar with radiological dose assessment methods.
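
      The dispersion step of such a code can be illustrated by the standard ground-level, centerline Gaussian-plume dilution factor chi/Q, which the code would multiply by the release rate and dose conversion factors. The dispersion coefficients below are placeholders for a stability-class lookup, not RSAC-5 internals:

          # Sketch of a ground-level, centerline Gaussian-plume dilution
          # factor chi/Q for an elevated release. Sigma values stand in for
          # a stability-class lookup; they are not RSAC-5 internals.
          import math

          def chi_over_q(sigma_y, sigma_z, u, H=0.0):
              """chi/Q in s/m^3 at ground level on the plume centerline."""
              return (math.exp(-H**2 / (2 * sigma_z**2))
                      / (math.pi * sigma_y * sigma_z * u))

          # e.g., 1 km downwind, moderately stable: sigma_y ~ 35 m,
          # sigma_z ~ 15 m, wind speed 3 m/s, release height 10 m
          print("chi/Q = %.2e s/m^3" % chi_over_q(35.0, 15.0, u=3.0, H=10.0))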

    1. Analysis and selection of optimal function implementations in massively parallel computer

      DOE Patents [OSTI]

      Archer, Charles Jens (Rochester, MN); Peters, Amanda (Rochester, MN); Ratterman, Joseph D. (Rochester, MN)

      2011-05-31

      An apparatus, program product and method optimize the operation of a parallel computer system by, in part, collecting performance data for a set of implementations of a function capable of being executed on the parallel computer system based upon the execution of the set of implementations under varying input parameters in a plurality of input dimensions. The collected performance data may be used to generate selection program code that is configured to call selected implementations of the function in response to a call to the function under varying input parameters. The collected performance data may be used to perform more detailed analysis to ascertain the comparative performance of the set of implementations of the function under the varying input parameters.

    2. High-Performance Computing for Real-Time Grid Analysis and Operation

      SciTech Connect (OSTI)

      Huang, Zhenyu; Chen, Yousu; Chavarría-Miranda, Daniel

      2013-10-31

      Power grids worldwide are undergoing an unprecedented transition as a result of grid evolution meeting information revolution. The grid evolution is largely driven by the desire for green energy. Emerging grid technologies such as renewable generation, smart loads, plug-in hybrid vehicles, and distributed generation provide opportunities to generate energy from green sources and to manage energy use for better system efficiency. With utility companies actively deploying these technologies, a high level of penetration of these new technologies is expected in the next 5-10 years, bringing in a level of intermittency, uncertainty, and complexity that the grid has not seen and was not designed for. On the other hand, the information infrastructure in the power grid is being revolutionized with large-scale deployment of sensors and meters in both the transmission and distribution networks. The future grid will have two-way flows of both electrons and information. The challenge is how to take advantage of the information revolution: pull the large amount of data in, process it in real time, and put information out to manage grid evolution. Without addressing this challenge, the opportunities in grid evolution will remain unfulfilled. This transition poses grand challenges in grid modeling, simulation, and information presentation. The computational complexity of underlying power grid modeling and simulation will significantly increase in the next decade due to an increased model size and a decreased time window allowed to compute model solutions. High-performance computing is essential to enable this transition. The essential technical barrier is to vastly increase the computational speed so operation response time can be reduced from minutes to seconds and sub-seconds. The speed at which key functions such as state estimation and contingency analysis are conducted (typically every 3-5 minutes) needs to be dramatically increased so that the analysis of contingencies is both comprehensive and real time. An even bigger challenge is how to incorporate dynamic information into real-time grid operation. Today's online grid operation is based on a static grid model and can only provide a static snapshot of current system operation status, while dynamic analysis is conducted offline because of low computational efficiency. The offline analysis uses a worst-case scenario to determine transmission limits, resulting in under-utilization of grid assets. This conservative approach does not necessarily lead to reliability. Many times, actual power grid scenarios that were not studied push the grid over the edge, resulting in outages and blackouts. This chapter addresses the HPC needs in power grid analysis and operations. Example applications such as state estimation and contingency analysis are given to demonstrate the value of HPC in power grid applications. Future research directions are suggested for high-performance computing applications in power grids to improve the transparency, efficiency, and reliability of power grids.
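
      A concrete reason contingency analysis parallelizes well on HPC platforms is that each N-1 outage is an independent power-flow solve. The sketch below screens line outages of an invented 4-bus system with a DC power flow distributed over worker processes:

          # Sketch of parallel N-1 contingency screening with a DC power
          # flow. The 4-bus test system and injections are invented.
          import numpy as np
          from multiprocessing import Pool

          LINES = [(0, 1, 0.1), (0, 2, 0.2), (1, 2, 0.25),
                   (1, 3, 0.1), (2, 3, 0.2)]          # (from, to, reactance)
          P = np.array([1.5, -0.5, -0.4])             # injections, buses 1..3

          def dc_flows(lines):
              n = 4
              B = np.zeros((n, n))
              for i, j, x in lines:
                  b = 1.0 / x
                  B[i, i] += b; B[j, j] += b
                  B[i, j] -= b; B[j, i] -= b
              theta = np.zeros(n)
              theta[1:] = np.linalg.solve(B[1:, 1:], P)   # bus 0 is slack
              return [(i, j, (theta[i] - theta[j]) / x) for i, j, x in lines]

          def contingency(k):
              """Drop line k and return the worst post-outage loading."""
              remaining = [l for idx, l in enumerate(LINES) if idx != k]
              return k, max(abs(f) for _, _, f in dc_flows(remaining))

          if __name__ == "__main__":
              with Pool() as pool:
                  for k, worst in pool.map(contingency, range(len(LINES))):
                      print("outage of line %d -> max |flow| %.2f pu"
                            % (k, worst))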

    3. Station for X-ray structural analysis of materials and single crystals (including nanocrystals) on a synchrotron radiation beam from the wiggler at the Siberia-2 storage ring

      SciTech Connect (OSTI)

      Kheiker, D. M. Kovalchuk, M. V.; Korchuganov, V. N.; Shilin, Yu. N.; Shishkov, V. A.; Sulyanov, S. N.; Dorovatovskii, P. V.; Rubinsky, S. V.; Rusakov, A. A.

      2007-11-15

      The design of the station for structural analysis of polycrystalline materials and single crystals (including nanoobjects and macromolecular crystals) on a synchrotron radiation beam from the superconducting wiggler of the Siberia-2 storage ring is described. The wiggler is constructed at the Budker Institute of Nuclear Physics of the Siberian Division of the Russian Academy of Sciences. The X-ray optical scheme of the station involves a (1, -1) double-crystal monochromator with a fixed position of the monochromatic beam and a sagittal bending of the second crystal, segmented mirrors bent by piezoelectric motors, and a (2{theta}, {omega}, {phi}) three-circle goniometer with a fixed tilt angle. Almost all devices of the station are designed and fabricated at the Shubnikov Institute of Crystallography of the Russian Academy of Sciences. The Bruker APEX11 two-dimensional CCD detector will serve as a detector in the station.

    4. COBRA-SFS (Spent Fuel Storage): A thermal-hydraulic analysis computer code: Volume 3, Validation assessments

      SciTech Connect (OSTI)

      Lombardo, N.J.; Cuta, J.M.; Michener, T.E.; Rector, D.R.; Wheeler, C.L.

      1986-12-01

      This report presents the results of the COBRA-SFS (Spent Fuel Storage) computer code validation effort. COBRA-SFS, while refined and specialized for spent fuel storage system analyses, is a lumped-volume thermal-hydraulic analysis computer code that predicts temperature and velocity distributions in a wide variety of systems. Through comparisons of code predictions with spent fuel storage system test data, the code's mathematical, physical, and mechanistic models are assessed, and empirical relations defined. The six test cases used to validate the code and code models include single-assembly and multiassembly storage systems under a variety of fill media and system orientations and include unconsolidated and consolidated spent fuel. In its entirety, the test matrix investigates the contributions of convection, conduction, and radiation heat transfer in spent fuel storage systems. To demonstrate the code's performance for a wide variety of storage systems and conditions, comparisons of code predictions with data are made for 14 runs from the experimental data base. The cases selected exercise the important code models and code logic pathways and are representative of the types of simulations required for spent fuel storage system design and licensing safety analyses. For each test, a test description, a summary of the COBRA-SFS computational model, assumptions, and correlations employed are presented. For the cases selected, axial and radial temperature profile comparisons of code predictions with test data are provided, and conclusions drawn concerning the code models and the ability to predict the data and data trends. Comparisons of code predictions with test data demonstrate the ability of COBRA-SFS to successfully predict temperature distributions in unconsolidated or consolidated single and multiassembly spent fuel storage systems.

    5. Computational mechanics

      SciTech Connect (OSTI)

      Raboin, P J

      1998-01-01

      The Computational Mechanics thrust area is a vital and growing facet of the Mechanical Engineering Department at Lawrence Livermore National Laboratory (LLNL). This work supports the development of computational analysis tools in the areas of structural mechanics and heat transfer. Over 75 analysts depend on thrust area-supported software running on a variety of computing platforms to meet the demands of LLNL programs. Interactions with the Department of Defense (DOD) High Performance Computing and Modernization Program and the Defense Special Weapons Agency are of special importance as they support our ParaDyn project in its development of new parallel capabilities for DYNA3D. Working with DOD customers has been invaluable to driving this technology in directions mutually beneficial to the Department of Energy. Other projects associated with the Computational Mechanics thrust area include work with the Partnership for a New Generation Vehicle (PNGV) for ''Springback Predictability'' and with the Federal Aviation Administration (FAA) for the ''Development of Methodologies for Evaluating Containment and Mitigation of Uncontained Engine Debris.'' In this report for FY-97, there are five articles detailing three code development activities and two projects that synthesized new code capabilities with new analytic research in damage/failure and biomechanics. The articles this year are: (1) Energy- and Momentum-Conserving Rigid-Body Contact for NIKE3D and DYNA3D; (2) Computational Modeling of Prosthetics: A New Approach to Implant Design; (3) Characterization of Laser-Induced Mechanical Failure Damage of Optical Components; (4) Parallel Algorithm Research for Solid Mechanics Applications Using Finite Element Analysis; and (5) An Accurate One-Step Elasto-Plasticity Algorithm for Shell Elements in DYNA3D.

    6. Methods, computer readable media, and graphical user interfaces for analysis of frequency selective surfaces

      DOE Patents [OSTI]

      Kotter, Dale K. (Shelley, ID); Rohrbaugh, David T. (Idaho Falls, ID)

      2010-09-07

      A frequency selective surface (FSS) and associated methods for modeling, analyzing and designing the FSS are disclosed. The FSS includes a pattern of conductive material formed on a substrate to form an array of resonance elements. At least one aspect of the frequency selective surface is determined by defining a frequency range including multiple frequency values, determining a frequency dependent permittivity across the frequency range for the substrate, determining a frequency dependent conductivity across the frequency range for the conductive material, and analyzing the frequency selective surface using a method of moments analysis at each of the multiple frequency values for an incident electromagnetic energy impinging on the frequency selective surface. The frequency dependent permittivity and the frequency dependent conductivity are included in the method of moments analysis.

    7. Technical support document: Energy efficiency standards for consumer products: Refrigerators, refrigerator-freezers, and freezers including draft environmental assessment, regulatory impact analysis

      SciTech Connect (OSTI)

      1995-07-01

      The Energy Policy and Conservation Act (P.L. 94-163), as amended by the National Appliance Energy Conservation Act of 1987 (P.L. 100-12) and by the National Appliance Energy Conservation Amendments of 1988 (P.L. 100-357), and by the Energy Policy Act of 1992 (P.L. 102-486), provides energy conservation standards for 12 of the 13 types of consumer products covered by the Act, and authorizes the Secretary of Energy to prescribe amended or new energy standards for each type (or class) of covered product. The assessment of the proposed standards for refrigerators, refrigerator-freezers, and freezers presented in this document is designed to evaluate their economic impacts according to the criteria in the Act. It includes an engineering analysis of the cost and performance of design options to improve the efficiency of the products; forecasts of the number and average efficiency of products sold, the amount of energy the products will consume, and their prices and operating expenses; a determination of change in investment, revenues, and costs to manufacturers of the products; a calculation of the costs and benefits to consumers, electric utilities, and the nation as a whole; and an assessment of the environmental impacts of the proposed standards.

    8. National cyber defense high performance computing and analysis : concepts, planning and roadmap.

      SciTech Connect (OSTI)

      Hamlet, Jason R.; Keliiaa, Curtis M.

      2010-09-01

      There is a national cyber dilemma that threatens the very fabric of government, commercial and private use operations worldwide. Much is written about 'what' the problem is, and though the basis for this paper is an assessment of the problem space, we target the 'how' solution space of the wide-area national information infrastructure through the advancement of science, technology, evaluation and analysis with actionable results intended to produce a more secure national information infrastructure and a comprehensive national cyber defense capability. This cybersecurity High Performance Computing (HPC) analysis concepts, planning and roadmap activity was conducted as an assessment of cybersecurity analysis as a fertile area of research and investment for high value cybersecurity wide-area solutions. This report and a related SAND2010-4765 Assessment of Current Cybersecurity Practices in the Public Domain: Cyber Indications and Warnings Domain report are intended to provoke discussion throughout a broad audience about developing a cohesive HPC centric solution to wide-area cybersecurity problems.

    9. Performance Refactoring of Instrumentation, Measurement, and Analysis Technologies for Petascale Computing. The PRIMA Project

      SciTech Connect (OSTI)

      Malony, Allen D.; Wolf, Felix G.

      2014-01-31

      The growing number of cores provided by today's high-end computing systems presents substantial challenges to application developers in their pursuit of parallel efficiency. To find the most effective optimization strategy, application developers need insight into the runtime behavior of their code. The University of Oregon (UO) and the Juelich Supercomputing Centre of Forschungszentrum Juelich (FZJ) develop the performance analysis tools TAU and Scalasca, respectively, which allow high-performance computing (HPC) users to collect and analyze relevant performance data even at very large scales. TAU and Scalasca are considered among the most advanced parallel performance systems available, and are used extensively across HPC centers in the U.S., Germany, and around the world. The TAU and Scalasca groups share a heritage of parallel performance tool research and partnership throughout the past fifteen years. Indeed, the close interactions of the two groups resulted in a cross-fertilization of tool ideas and technologies that pushed TAU and Scalasca to what they are today. It also produced two performance systems with an increasing degree of functional overlap. While each tool has its specific analysis focus, the tools were implementing measurement infrastructures that were substantially similar. Because each tool provides complementary performance analysis, sharing of measurement results is valuable to provide the user with more facets to understand performance behavior. However, each measurement system was producing performance data in different formats, requiring data interoperability tools to be created. A common measurement and instrumentation system was needed to more closely integrate TAU and Scalasca and to avoid the duplication of development and maintenance effort. The PRIMA (Performance Refactoring of Instrumentation, Measurement, and Analysis) project was proposed over three years ago as a joint international effort between UO and FZJ to accomplish these objectives: (1) refactor TAU and Scalasca performance system components for core code sharing and (2) integrate TAU and Scalasca functionality through data interfaces, formats, and utilities. As presented in this report, the project has completed these goals. In addition to shared technical advances, the groups have worked to engage with users through application performance engineering and tools training. In this regard, the project benefits from the close interactions the teams have with national laboratories in the United States and Germany. We have also sought to enhance our interactions through joint tutorials and outreach. UO has become a member of the Virtual Institute of High-Productivity Supercomputing (VI-HPS) established by the Helmholtz Association of German Research Centres as a center of excellence, focusing on HPC tools for diagnosing programming errors and optimizing performance. UO and FZJ have conducted several VI-HPS training activities together within the past three years.

    10. Performance Refactoring of Instrumentation, Measurement, and Analysis Technologies for Petascale Computing: the PRIMA Project

      SciTech Connect (OSTI)

      Malony, Allen D.; Wolf, Felix G.

      2014-01-31

      The growing number of cores provided by today's high-end computing systems presents substantial challenges to application developers in their pursuit of parallel efficiency. To find the most effective optimization strategy, application developers need insight into the runtime behavior of their code. The University of Oregon (UO) and the Juelich Supercomputing Centre of Forschungszentrum Juelich (FZJ) develop the performance analysis tools TAU and Scalasca, respectively, which allow high-performance computing (HPC) users to collect and analyze relevant performance data even at very large scales. TAU and Scalasca are considered among the most advanced parallel performance systems available, and are used extensively across HPC centers in the U.S., Germany, and around the world. The TAU and Scalasca groups share a heritage of parallel performance tool research and partnership throughout the past fifteen years. Indeed, the close interactions of the two groups resulted in a cross-fertilization of tool ideas and technologies that pushed TAU and Scalasca to what they are today. It also produced two performance systems with an increasing degree of functional overlap. While each tool has its specific analysis focus, the tools were implementing measurement infrastructures that were substantially similar. Because each tool provides complementary performance analysis, sharing of measurement results is valuable to provide the user with more facets to understand performance behavior. However, each measurement system was producing performance data in different formats, requiring data interoperability tools to be created. A common measurement and instrumentation system was needed to more closely integrate TAU and Scalasca and to avoid the duplication of development and maintenance effort. The PRIMA (Performance Refactoring of Instrumentation, Measurement, and Analysis) project was proposed over three years ago as a joint international effort between UO and FZJ to accomplish these objectives: (1) refactor TAU and Scalasca performance system components for core code sharing and (2) integrate TAU and Scalasca functionality through data interfaces, formats, and utilities. As presented in this report, the project has completed these goals. In addition to shared technical advances, the groups have worked to engage with users through application performance engineering and tools training. In this regard, the project benefits from the close interactions the teams have with national laboratories in the United States and Germany. We have also sought to enhance our interactions through joint tutorials and outreach. UO has become a member of the Virtual Institute of High-Productivity Supercomputing (VI-HPS) established by the Helmholtz Association of German Research Centres as a center of excellence, focusing on HPC tools for diagnosing programming errors and optimizing performance. UO and FZJ have conducted several VI-HPS training activities together within the past three years.

    11. SAFE: A computer code for the steady-state and transient thermal analysis of LMR fuel elements

      SciTech Connect (OSTI)

      Hayes, S.L.

      1993-12-01

      SAFE is a computer code developed for both the steady-state and transient thermal analysis of single LMR fuel elements. The code employs a two-dimensional, control-volume-based finite difference methodology with fully implicit time marching to calculate the temperatures throughout a fuel element and its associated coolant channel for both steady-state and transient events. The code makes no structural calculations or predictions whatsoever. It does, however, accept as input structural parameters within the fuel, such as the distributions of porosity and fuel composition, as well as heat generation, to allow a thermal analysis to be performed on a user-specified fuel structure. The code was developed with ease of use in mind. An interactive input file generator and material property correlations internal to the code are available to expedite analyses using SAFE. This report serves as a complete design description of the code as well as a user's manual. A sample calculation made with SAFE is included to highlight some of the code's features. Complete input and output files for the sample problem are provided.
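
      As a concrete illustration of the fully implicit, control-volume finite difference approach described above, the following minimal Python sketch advances a one-dimensional slab (a simplification of SAFE's two-dimensional geometry) with backward-Euler time marching. All property values, the node count, and the implicit_step helper are illustrative assumptions, not SAFE data or code.

        import numpy as np

        def implicit_step(T, dt, dx, alpha, T_coolant, h, k):
            """Advance nodal temperatures one backward-Euler step.

            T[0] is the centerline node; the last node sees a convective
            (Robin) boundary to the coolant. All units are SI.
            """
            n = T.size
            r = alpha * dt / dx**2
            A = np.zeros((n, n))
            b = T.copy()
            for i in range(1, n - 1):             # interior control volumes
                A[i, i - 1] = -r
                A[i, i] = 1 + 2 * r
                A[i, i + 1] = -r
            A[0, 0], A[0, 1] = 1 + 2 * r, -2 * r  # symmetry at the centerline
            Bi = h * dx / k                       # grid Biot number
            A[-1, -2] = -2 * r
            A[-1, -1] = 1 + 2 * r + 2 * r * Bi    # convective surface node
            b[-1] += 2 * r * Bi * T_coolant
            return np.linalg.solve(A, b)

        T = np.full(20, 900.0)                    # initial temperatures, K
        for _ in range(100):                      # march 1 s in 10 ms steps
            T = implicit_step(T, dt=0.01, dx=5e-4, alpha=1e-6,
                              T_coolant=650.0, h=5e4, k=20.0)
        print(T.round(1))                         # centerline stays hottest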

    12. Use of model calibration to achieve high accuracy in analysis of computer networks

      DOE Patents [OSTI]

      Frogner, Bjorn; Guarro, Sergio; Scharf, Guy

      2004-05-11

      A system and method are provided for creating a network performance prediction model, and calibrating the prediction model, through application of network load statistical analyses. The method includes characterizing the measured load on the network, which may include background load data obtained over time, and may further include directed load data representative of a transaction-level event. Probabilistic representations of load data are derived to characterize the statistical persistence of the network performance variability and to determine delays throughout the network. The probabilistic representations are applied to the network performance prediction model to adapt the model for accurate prediction of network performance. Certain embodiments of the method and system may be used for analysis of the performance of a distributed application characterized as data packet streams.
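
      The calibration idea can be sketched as follows: characterize the measured background load with a fitted distribution, then propagate samples from that probabilistic representation through a performance model to obtain delay statistics. The lognormal fit, the M/M/1 delay formula, and every number below are illustrative assumptions, not the patented model.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)
        # Stand-in for measured background load (link utilization samples).
        measured_load = rng.lognormal(mean=-1.2, sigma=0.4, size=5000)

        # Derive a probabilistic representation of the load data.
        shape, loc, scale = stats.lognorm.fit(measured_load, floc=0)
        load_dist = stats.lognorm(shape, loc, scale)

        def predicted_delay(utilization, service_time=0.002):
            """Mean sojourn time of an M/M/1 queue at the given utilization."""
            utilization = np.clip(utilization, 0.0, 0.999)
            return service_time / (1.0 - utilization)

        # Apply the probabilistic load to the performance prediction model.
        samples = load_dist.rvs(size=20000, random_state=rng)
        delays = predicted_delay(samples)
        print(f"median delay {np.median(delays)*1e3:.2f} ms, "
              f"95th percentile {np.percentile(delays, 95)*1e3:.2f} ms")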

    13. Comparison of different computed radiography systems: Physical characterization and contrast detail analysis

      SciTech Connect (OSTI)

      Rivetti, Stefano; Lanconelli, Nico; Bertolini, Marco; Nitrosi, Andrea; Burani, Aldo; Acchiappati, Domenico

      2010-02-15

      Purpose: In this study, five different units based on three different technologies--traditional computed radiography (CR) units with granular phosphor and single-side reading, granular phosphor and dual-side reading, and columnar phosphor and line-scanning reading--are compared in terms of physical characterization and contrast detail analysis. Methods: The physical characterization of the five systems was obtained with the standard beam condition RQA5. Three of the units have been developed by FUJIFILM (FCR ST-VI, FCR ST-BD, and FCR Velocity U), one by Kodak (Direct View CR 975), and one by Agfa (DX-S). The quantitative comparison is based on the calculation of the modulation transfer function (MTF), noise power spectrum (NPS), and detective quantum efficiency (DQE). Noise was also investigated using a relative standard deviation analysis. Psychophysical characterization was assessed by performing a contrast detail analysis with an automatic reading of CDRAD images. Results: The most advanced units based on columnar phosphors provide MTF values in line with or better than those from conventional CR systems. The greater thickness of the columnar phosphor improves the efficiency, allowing for enhanced noise properties. In fact, NPS values for standard CR systems are remarkably higher for all the investigated exposures and especially for frequencies up to 3.5 lp/mm. As a consequence, DQE values for the three units based on columnar phosphors and line-scanning reading, or granular phosphor and dual-side reading, are markedly better than those from conventional CR systems. Indeed, DQE values of about 40% are easily achievable for all the investigated exposures. Conclusions: This study suggests that systems based on dual-side reading or line-scanning reading with columnar phosphors provide a remarkable improvement when compared to conventional CR units and yield results in line with those obtained from most digital detectors for radiography.
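
      The three quantities compared above are tied together by the standard relation DQE(f) = MTF(f)^2 / (q * NNPS(f)), where q is the incident photon fluence and NNPS the normalized noise power spectrum. A minimal sketch on synthetic curves follows; the arrays and the fluence value are illustrative stand-ins for measured data, not results from this study.

        import numpy as np

        f = np.linspace(0.05, 3.5, 70)      # spatial frequency, lp/mm
        mtf = np.exp(-0.6 * f)              # synthetic presampling MTF
        nnps = 3e-5 * (1 + 0.3 * f)         # synthetic normalized NPS, mm^2
        q = 8.0e4                           # photons per mm^2 (illustrative)

        dqe = mtf**2 / (q * nnps)           # standard DQE definition
        for fi, d in zip(f[::10], dqe[::10]):
            print(f"f = {fi:4.2f} lp/mm   DQE = {100*d:4.1f} %")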

    14. Computational analysis of an autophagy/translation switch based on mutual inhibition of MTORC1 and ULK1

      DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

      Szymańska, Paulina; Martin, Katie R.; MacKeigan, Jeffrey P.; Hlavacek, William S.; Lipniacki, Tomasz

      2015-03-11

      We constructed a mechanistic, computational model for regulation of (macro)autophagy and protein synthesis (at the level of translation). The model was formulated to study the system-level consequences of interactions among the following proteins: two key components of MTOR complex 1 (MTORC1), namely the protein kinase MTOR (mechanistic target of rapamycin) and the scaffold protein RPTOR; the autophagy-initiating protein kinase ULK1; and the multimeric energy-sensing AMP-activated protein kinase (AMPK). Inputs of the model include intrinsic AMPK kinase activity, which is taken as an adjustable surrogate parameter for cellular energy level or AMP:ATP ratio, and rapamycin dose, which controls MTORC1 activity. Outputs of the model include the phosphorylation level of the translational repressor EIF4EBP1, a substrate of MTORC1, and the phosphorylation level of AMBRA1 (activating molecule in BECN1-regulated autophagy), a substrate of ULK1 critical for autophagosome formation. The model incorporates reciprocal regulation of MTORC1 and ULK1 by AMPK, mutual inhibition of MTORC1 and ULK1, and ULK1-mediated negative feedback regulation of AMPK. Through analysis of the model, we find that these processes may be responsible, depending on conditions, for graded responses to stress inputs, for bistable switching between autophagy and protein synthesis, or for relaxation oscillations comprising alternating periods of autophagy and protein synthesis. A sensitivity analysis indicates that the prediction of oscillatory behavior is robust to changes of the parameter values of the model. The model provides testable predictions about the behavior of the AMPK-MTORC1-ULK1 network, which plays a central role in maintaining cellular energy and nutrient homeostasis.
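
      A toy version of the mutual-inhibition core of this network (not the authors' full model) behaves like a two-variable toggle switch in which the AMPK input a boosts ULK1 production and weakens MTORC1 production; with Hill coefficient 3 the sketch below is bistable at a = 1 and flips entirely to the autophagy state at a = 3. All rate constants are illustrative assumptions.

        import numpy as np
        from scipy.integrate import solve_ivp

        def rhs(t, y, a, k=3.0, n=3):
            m, u = y                       # MTORC1 and ULK1 activities
            dm = (k / a) / (1 + u**n) - m  # MTORC1: repressed by ULK1, weakened by AMPK
            du = (k * a) / (1 + m**n) - u  # ULK1: repressed by MTORC1, boosted by AMPK
            return [dm, du]

        for a in (1.0, 3.0):
            for start in [(3.0, 0.0), (0.0, 3.0)]:
                sol = solve_ivp(rhs, (0, 60), start, args=(a,), rtol=1e-8)
                m, u = sol.y[:, -1]
                state = "translation" if m > u else "autophagy"
                print(f"a={a}, start={start}: m={m:.2f}, u={u:.2f} -> {state}")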

    15. Pump apparatus including deconsolidator

      DOE Patents [OSTI]

      Sonwane, Chandrashekhar; Saunders, Timothy; Fitzsimmons, Mark Andrew

      2014-10-07

      A pump apparatus includes a particulate pump that defines a passage that extends from an inlet to an outlet. A duct is in flow communication with the outlet. The duct includes a deconsolidator configured to fragment particle agglomerates received from the passage.

    16. Computational Analysis of an Evolutionarily Conserved Vertebrate Muscle Alternative Splicing Program

      SciTech Connect (OSTI)

      Das, Debopriya; Clark, Tyson A.; Schweitzer, Anthony; Marr, Henry; Yamamoto, Miki L.; Parra, Marilyn K.; Arribere, Josh; Minovitsky, Simon; Dubchak, Inna; Blume, John E.; Conboy, John G.

      2006-06-15

      A novel exon microarray format that probes gene expression with single exon resolution was employed to elucidate critical features of a vertebrate muscle alternative splicing program. A dataset of 56 microarray-defined, muscle-enriched exons and their flanking introns was examined computationally in order to investigate coordination of the muscle splicing program. Candidate intron regulatory motifs were required to meet several stringent criteria: significant over-representation near muscle-enriched exons, correlation with muscle expression, and phylogenetic conservation among genomes of several vertebrate orders. Three classes of regulatory motifs were identified in the proximal downstream intron, within 200 nt of the target exons: UGCAUG, a specific binding site for Fox-1 related splicing factors; ACUAAC, a novel branchpoint-like element; and UG-/UGC-rich elements characteristic of binding sites for CELF splicing factors. UGCAUG was remarkably enriched, being present in nearly one-half of all cases. These studies suggest that Fox and CELF splicing factors play a major role in enforcing the muscle-specific alternative splicing program, facilitating expression of a set of unique isoforms of cytoskeletal proteins that are critical to muscle cell differentiation. Supplementary materials: There are four supplementary tables and one supplementary figure. The tables provide additional detailed information concerning the muscle-enriched datasets and the over-represented oligonucleotide sequences in the flanking introns. The supplementary figure shows RT-PCR data confirming the muscle-enriched expression of exons predicted from the microarray analysis.
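
      The over-representation criterion can be sketched by counting motif hits in the proximal downstream introns and comparing the total against shuffled-sequence controls; the random 200 nt sequences below are stand-ins for the real dataset, so the resulting p-value is only a demonstration of the test.

        import random

        MOTIF = "TGCATG"    # UGCAUG in the DNA alphabet

        def count_motif(seq, motif=MOTIF):
            return sum(seq.startswith(motif, i)
                       for i in range(len(seq) - len(motif) + 1))

        random.seed(1)
        introns = ["".join(random.choices("ACGT", k=200)) for _ in range(56)]
        observed = sum(count_motif(s) for s in introns)

        # Empirical null distribution from shuffled sequences.
        null = []
        for _ in range(200):
            total = 0
            for s in introns:
                chars = list(s)
                random.shuffle(chars)
                total += count_motif("".join(chars))
            null.append(total)

        p = sum(n >= observed for n in null) / len(null)
        print(f"observed hits = {observed}, empirical p = {p:.3f}")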

    17. Evaluation of HEU-Beryllium Benchmark Experiments to Improve Computational Analysis of Space Reactors

      SciTech Connect (OSTI)

      John D. Bess; Keith C. Bledsoe; Bradley T. Rearden

      2011-02-01

      An assessment was previously performed to evaluate modeling capabilities and quantify preliminary biases and uncertainties associated with the modeling methods and data utilized in designing a nuclear reactor such as a beryllium-reflected, highly-enriched-uranium (HEU)-O2 fission surface power (FSP) system for space nuclear power. The conclusion of the previous study was that current capabilities could preclude the necessity of a cold critical test of the FSP; however, additional testing would reduce uncertainties in the beryllium and uranium cross-section data and the overall uncertainty in the computational models. A series of critical experiments using HEU metal were performed in the 1960s and 1970s in support of criticality safety operations at the Y-12 Plant. Of the hundreds of experiments, three were identified as fast-fission configurations reflected by beryllium metal. These experiments have been evaluated as benchmarks for inclusion in the International Handbook of Evaluated Criticality Safety Benchmark Experiments (IHECSBE). Further evaluation of the benchmark experiments was performed using the sensitivity and uncertainty analysis capabilities of SCALE 6. The data adjustment methods of SCALE 6 have been employed in the validation of an example FSP design model to reduce the uncertainty due to the beryllium cross section data.

    18. Radiological Safety Analysis Computer (RSAC) Program Version 7.2 Users Manual

      SciTech Connect (OSTI)

      Dr. Bradley J Schrader

      2010-10-01

      The Radiological Safety Analysis Computer (RSAC) Program Version 7.2 (RSAC-7) is the newest version of the RSAC legacy code. It calculates the consequences of a release of radionuclides to the atmosphere. A user can generate a fission product inventory from either reactor operating history or a nuclear criticality event. RSAC-7 models the effects of high-efficiency particulate air filters or other cleanup systems and calculates the decay and ingrowth during transport through processes, facilities, and the environment. Doses are calculated for inhalation, air immersion, ground surface, ingestion, and cloud gamma pathways. RSAC-7 can be used as a tool to evaluate accident conditions in emergency response scenarios and radiological sabotage events, and to evaluate safety basis accident consequences. This user's manual contains the mathematical models and operating instructions for RSAC-7. Instructions, screens, and examples are provided to guide the user through the functions provided by RSAC-7. This program was designed for users who are familiar with radiological dose assessment methods.
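
      The downwind-dose chain that RSAC-style codes automate can be illustrated in a few lines: a ground-level Gaussian plume gives an air concentration, and an inhalation dose follows from breathing rate, exposure time, and a dose conversion factor. The dispersion coefficients, breathing rate, and DCF below are generic textbook-style values, not RSAC data.

        import math

        def plume_concentration(Q, u, sigma_y, sigma_z):
            """Centerline, ground-level concentration (Bq/m^3) for a ground release."""
            return Q / (math.pi * u * sigma_y * sigma_z)

        Q = 1.0e9                        # release rate, Bq/s
        u = 3.0                          # wind speed, m/s
        sigma_y, sigma_z = 80.0, 40.0    # dispersion ~1 km downwind, class D (approx.)

        chi = plume_concentration(Q, u, sigma_y, sigma_z)
        breathing_rate = 3.3e-4          # m^3/s, light activity
        dcf = 7.3e-9                     # Sv per Bq inhaled (illustrative)
        exposure = 2 * 3600              # s

        dose = chi * breathing_rate * dcf * exposure
        print(f"concentration {chi:.3e} Bq/m^3, inhalation dose {dose*1e3:.2f} mSv")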

    19. MCS division researchers help develop new sequencing analysis...

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Computation Institute has announced a new sequencing analysis service called Globus Genomics. The Globus Genomics team includes two members of Argonne's Mathematics and Computer...

    20. BPO crude oil analysis data base user's guide: Methods, publications, computer access, correlations, uses, availability

      SciTech Connect (OSTI)

      Sellers, C.; Fox, B.; Paulz, J.

      1996-03-01

      The Department of Energy (DOE) has one of the largest and most complete collections of information on crude oil composition that is available to the public. The computer program that manages this database of crude oil analyses has recently been rewritten to allow easier access to this information. This report describes how the new system can be accessed and how the information contained in the Crude Oil Analysis Data Bank can be obtained.

    1. Towards Real-Time High Performance Computing For Power Grid Analysis

      SciTech Connect (OSTI)

      Hui, Peter SY; Lee, Barry; Chikkagoudar, Satish

      2012-11-16

      Real-time computing has traditionally been considered largely in the context of single-processor and embedded systems, and indeed, the terms real-time computing, embedded systems, and control systems are often mentioned in closely related contexts. However, real-time computing in the context of multinode systems, specifically high-performance, cluster-computing systems, remains relatively unexplored. Imposing real-time constraints on a parallel (cluster) computing environment introduces a variety of challenges with respect to the formal verification of the system's timing properties. In this paper, we give a motivating example to demonstrate the need for such a system--- an application to estimate the electromechanical states of the power grid--- and we introduce a formal method for performing verification of certain temporal properties within a system of parallel processes. We describe our work towards a full real-time implementation of the target application--- namely, our progress towards extracting a key mathematical kernel from the application, the formal process by which we analyze the intricate timing behavior of the processes on the cluster, as well as timing measurements taken on our test cluster to demonstrate use of these concepts.

    2. COMPUTATIONAL SCIENCE CENTER

      SciTech Connect (OSTI)

      DAVENPORT, J.

      2006-11-01

      Computational Science is an integral component of Brookhaven's multi-science mission, and is a reflection of the increased role of computation across all of science. Brookhaven currently has major efforts in data storage and analysis for the Relativistic Heavy Ion Collider (RHIC) and the ATLAS detector at CERN, and in quantum chromodynamics. The Laboratory is host for the QCDOC machines (quantum chromodynamics on a chip), 10 teraflop/s computers which boast 12,288 processors each. There are two here, one for the Riken/BNL Research Center and the other supported by DOE for the US Lattice Gauge Community and other scientific users. A 100 teraflop/s supercomputer will be installed at Brookhaven in the coming year, managed jointly by Brookhaven and Stony Brook, and funded by a grant from New York State. This machine will be used for computational science across Brookhaven's entire research program, and also by researchers at Stony Brook and across New York State. With Stony Brook, Brookhaven has formed the New York Center for Computational Science (NYCCS) as a focal point for interdisciplinary computational science, which is closely linked to Brookhaven's Computational Science Center (CSC). The CSC has established a strong program in computational science, with an emphasis on nanoscale electronic structure and molecular dynamics, accelerator design, computational fluid dynamics, medical imaging, parallel computing and numerical algorithms. We have been an active participant in DOE's SciDAC program (Scientific Discovery through Advanced Computing). We are also planning a major expansion in computational biology in keeping with Laboratory initiatives. Additional laboratory initiatives with a dependence on a high level of computation include the development of hydrodynamics models for the interpretation of RHIC data, computational models for the atmospheric transport of aerosols, and models for combustion and for energy utilization. The CSC was formed to bring together researchers in these areas and to provide a focal point for the development of computational expertise at the Laboratory. These efforts will connect to and support the Department of Energy's long range plans to provide Leadership class computing to researchers throughout the Nation. Recruitment for six new positions at Stony Brook to strengthen its computational science programs is underway. We expect some of these to be held jointly with BNL.

    3. Fracture Analysis of Vessels. Oak Ridge FAVOR, v06.1, Computer Code: Theory and Implementation of Algorithms, Methods, and Correlations

      SciTech Connect (OSTI)

      Williams, P. T.; Dickson, T. L.; Yin, S.

      2007-12-01

      The current regulations to ensure that nuclear reactor pressure vessels (RPVs) maintain their structural integrity when subjected to transients such as pressurized thermal shock (PTS) events were derived from computational models developed in the early-to-mid 1980s. Since that time, advancements and refinements in relevant technologies that impact RPV integrity assessment have led to an effort by the NRC to re-evaluate its PTS regulations. Updated computational methodologies have been developed through interactions between experts in the relevant disciplines of thermal hydraulics, probabilistic risk assessment, materials embrittlement, fracture mechanics, and inspection (flaw characterization). Contributors to the development of these methodologies include the NRC staff, their contractors, and representatives from the nuclear industry. These updated methodologies have been integrated into the Fracture Analysis of Vessels -- Oak Ridge (FAVOR, v06.1) computer code developed for the NRC by the Heavy Section Steel Technology (HSST) program at Oak Ridge National Laboratory (ORNL). The FAVOR, v04.1, code represents the baseline NRC-selected applications tool for re-assessing the current PTS regulations. This report is intended to document the technical bases for the assumptions, algorithms, methods, and correlations employed in the development of the FAVOR, v06.1, code.

    4. Computing Resources | Argonne Leadership Computing Facility

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Argonne's Theory and Computing Sciences (TCS) building houses a wide variety of computing systems, including some of the most powerful supercomputers in the world. The facility has 25,000 square feet of raised computer floor space and a pair of redundant 20 megavolt-ampere electrical feeds from a 90 megawatt substation. The building also

    5. An Analysis Framework for Investigating the Trade-offs Between System Performance and Energy Consumption in a Heterogeneous Computing Environment

      SciTech Connect (OSTI)

      Friese, Ryan; Khemka, Bhavesh; Maciejewski, Anthony A; Siegel, Howard Jay; Koenig, Gregory A; Powers, Sarah S; Hilton, Marcia M; Rambharos, Rajendra; Okonski, Gene D; Poole, Stephen W

      2013-01-01

      Rising costs of energy consumption and an ongoing effort for increases in computing performance are leading to a significant need for energy-efficient computing. Before systems such as supercomputers, servers, and datacenters can begin operating in an energy-efficient manner, the energy consumption and performance characteristics of the system must be analyzed. In this paper, we provide an analysis framework that will allow a system administrator to investigate the trade-offs between system energy consumption and utility earned by a system (as a measure of system performance). We model these trade-offs as a bi-objective resource allocation problem. We use a popular multi-objective genetic algorithm to construct Pareto fronts to illustrate how different resource allocations can cause a system to consume significantly different amounts of energy and earn different amounts of utility. We demonstrate our analysis framework using real data collected from online benchmarks, and further provide a method to create larger data sets that exhibit similar heterogeneity characteristics to real data sets. This analysis framework can provide system administrators with insight to make intelligent scheduling decisions based on the energy and utility needs of their systems.
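
      The bi-objective view used by the framework can be sketched by filtering candidate (energy, utility) points down to their Pareto front, keeping every allocation that no other allocation beats on both objectives at once. The random allocations below stand in for schedules produced by the genetic algorithm.

        import random

        random.seed(7)
        # (energy consumed in kWh: minimize, utility earned: maximize)
        allocations = [(random.uniform(50, 100), random.uniform(0, 100))
                       for _ in range(200)]

        def pareto_front(points):
            front = []
            for e, u in points:
                dominated = any(e2 <= e and u2 >= u and (e2, u2) != (e, u)
                                for e2, u2 in points)
                if not dominated:
                    front.append((e, u))
            return sorted(front)

        for e, u in pareto_front(allocations):
            print(f"energy {e:6.1f} kWh -> utility {u:5.1f}")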

    6. Open-cycle ocean thermal energy conversion surface-condenser design analysis and computer program

      SciTech Connect (OSTI)

      Panchal, C.B.; Rabas, T.J.

      1991-05-01

      This report documents a computer program for designing a surface condenser that condenses low-pressure steam in an ocean thermal energy conversion (OTEC) power plant. The primary emphasis is on the open-cycle (OC) OTEC power system, although the same condenser design can be used for conventional and hybrid cycles because of their highly similar operating conditions. In an OC-OTEC system, the pressure level is very low (deep vacuums), temperature differences are small, and the inlet noncondensable gas concentrations are high. Because current condenser designs, such as the shell-and-tube, are not adequate for such conditions, a plate-fin configuration is selected. This design can be implemented in aluminum, which makes it very cost-effective when compared with other state-of-the-art vacuum steam condenser designs. Support for selecting a plate-fin heat exchanger for OC-OTEC steam condensation can be found in the sizing (geometric details) and rating (heat transfer and pressure drop) calculations presented. These calculations are then used in a computer program to obtain all the necessary thermal performance details for developing design specifications for a plate-fin steam condenser. 20 refs., 5 figs., 5 tabs.
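
      Because the condensing side of such a condenser is nearly isothermal, a minimal rating calculation reduces to the effectiveness-NTU relation eps = 1 - exp(-NTU). The conductance, flow, and temperature values below are illustrative assumptions, not the report's plate-fin design data.

        import math

        UA = 2.0e6                   # overall conductance, W/K (illustrative)
        m_dot, cp = 400.0, 4186.0    # cooling-water flow, kg/s, and heat capacity, J/(kg K)
        T_steam, T_in = 21.0, 10.0   # condensing steam and seawater inlet, deg C

        NTU = UA / (m_dot * cp)
        eps = 1.0 - math.exp(-NTU)   # condenser (one-fluid) effectiveness
        q = eps * m_dot * cp * (T_steam - T_in)
        T_out = T_in + q / (m_dot * cp)
        print(f"NTU = {NTU:.2f}, effectiveness = {eps:.2f}, "
              f"duty = {q/1e6:.1f} MW, outlet = {T_out:.1f} C")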

    7. Cogeneration: Economic and technical analysis. (Latest citations from the INSPEC - The Database for Physics, Electronics, and Computing). Published Search

      SciTech Connect (OSTI)

      Not Available

      1993-11-01

      The bibliography contains citations concerning economic and technical analyses of cogeneration systems. Topics include electric power generation, industrial cogeneration, use by utilities, and fuel cell cogeneration. The citations explore steam power station, gas turbine and steam turbine technology, district heating, refuse derived fuels, environmental effects and regulations, bioenergy and solar energy conversion, waste heat and waste product recycling, and performance analysis. (Contains a minimum of 104 citations and includes a subject term index and title list.)

    12. CORCON-MOD3: An integrated computer model for analysis of molten core-concrete interactions. User's manual

      SciTech Connect (OSTI)

      Bradley, D.R.; Gardner, D.R.; Brockmann, J.E.; Griffith, R.O.

      1993-10-01

      The CORCON-Mod3 computer code was developed to mechanistically model the important core-concrete interaction phenomena, including those phenomena relevant to the assessment of containment failure and radionuclide release. The code can be applied to a wide range of severe accident scenarios and reactor plants. The code represents the current state of the art for simulating core debris interactions with concrete. This document comprises the user's manual and gives a brief description of the models and the assumptions and limitations in the code. Also discussed are the input parameters and the code output. Two sample problems are also given.

    9. FRAP-T6: a computer code for the transient analysis of oxide fuel rods. [PWR; BWR]

      SciTech Connect (OSTI)

      Siefken, L.J.; Shah, V.N.; Berna, G.A.; Hohorst, J.K.

      1983-06-01

      FRAP-T6 is a computer code which is being developed to calculate the transient behavior of a light water reactor fuel rod. This report is an addendum to the FRAP-T6/MOD0 user's manual which provides the additional user information needed to use FRAP-T6/MOD1. This includes model changes, improvements, and additions; coding changes and improvements; changes in input and control language; and example problem solutions to aid the user. This information is designed to supplement the FRAP-T6/MOD0 user's manual.

    10. Inference of tumor evolution during chemotherapy by computational modeling and in situ analysis of genetic and phenotypic cellular diversity

      DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

      Almendro, Vanessa; Cheng, Yu-Kang; Randles, Amanda; Itzkovitz, Shalev; Marusyk, Andriy; Ametller, Elisabet; Gonzalez-Farre, Xavier; Muñoz, Montse; Russnes, Hege G.; Helland, Åslaug; et al

      2014-02-01

      Cancer therapy exerts a strong selection pressure that shapes tumor evolution, yet our knowledge of how tumors change during treatment is limited. Here, we report the analysis of cellular heterogeneity for genetic and phenotypic features and their spatial distribution in breast tumors pre- and post-neoadjuvant chemotherapy. We found that intratumor genetic diversity was tumor-subtype specific, and it did not change during treatment in tumors with partial or no response. However, lower pretreatment genetic diversity was significantly associated with pathologic complete response. In contrast, phenotypic diversity was different between pre- and post-treatment samples. We also observed significant changes in the spatial distribution of cells with distinct genetic and phenotypic features. We used these experimental data to develop a stochastic computational model to infer tumor growth patterns and evolutionary dynamics. Our results highlight the importance of integrated analysis of genotypes and phenotypes of single cells in intact tissues to predict tumor evolution.

    11. Inference of tumor evolution during chemotherapy by computational modeling and in situ analysis of genetic and phenotypic cellular diversity

      SciTech Connect (OSTI)

      Almendro, Vanessa; Cheng, Yu-Kang; Randles, Amanda; Itzkovitz, Shalev; Marusyk, Andriy; Ametller, Elisabet; Gonzalez-Farre, Xavier; Muñoz, Montse; Russnes, Hege G.; Helland, Åslaug; Rye, Inga H.; Børresen-Dale, Anne-Lise; Maruyama, Reo; van Oudenaarden, Alexander; Dowsett, Mitchell; Jones, Robin L.; Reis-Filho, Jorge; Gascon, Pere; Gönen, Mithat; Michor, Franziska; Polyak, Kornelia

      2014-02-01

      Cancer therapy exerts a strong selection pressure that shapes tumor evolution, yet our knowledge of how tumors change during treatment is limited. Here, we report the analysis of cellular heterogeneity for genetic and phenotypic features and their spatial distribution in breast tumors pre- and post-neoadjuvant chemotherapy. We found that intratumor genetic diversity was tumor-subtype specific, and it did not change during treatment in tumors with partial or no response. However, lower pretreatment genetic diversity was significantly associated with pathologic complete response. In contrast, phenotypic diversity was different between pre- and post-treatment samples. We also observed significant changes in the spatial distribution of cells with distinct genetic and phenotypic features. We used these experimental data to develop a stochastic computational model to infer tumor growth patterns and evolutionary dynamics. Our results highlight the importance of integrated analysis of genotypes and phenotypes of single cells in intact tissues to predict tumor evolution.

    12. Scalable Computational Methods for the Analysis of High-Throughput Biological Data

      SciTech Connect (OSTI)

      Langston, Michael A

      2012-09-06

      The primary focus of this research project is elucidating genetic regulatory mechanisms that control an organism's responses to low-dose ionizing radiation. Although low doses (at most ten centigrays) are not lethal to humans, they elicit a highly complex physiological response, with the ultimate outcome in terms of risk to human health unknown. The tools of molecular biology and computational science will be harnessed to study coordinated changes in gene expression that orchestrate the mechanisms a cell uses to manage the radiation stimulus. High performance implementations of novel algorithms that exploit the principles of fixed-parameter tractability will be used to extract gene sets suggestive of co-regulation. Genomic mining will be performed to scrutinize, winnow and highlight the most promising gene sets for more detailed investigation. The overall goal is to increase our understanding of the health risks associated with exposures to low levels of radiation.
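
      The co-regulation extraction step can be illustrated by thresholding a gene-gene correlation matrix into a graph and reading off cliques as candidate co-regulated gene sets. The project's actual fixed-parameter-tractable algorithms are far more scalable; networkx and the toy expression data below are assumptions used only to show the idea.

        import numpy as np
        import networkx as nx

        rng = np.random.default_rng(3)
        base = rng.normal(size=20)                  # shared expression profile
        expr = np.vstack(
            [base + rng.normal(scale=0.3, size=20) for _ in range(5)]  # co-regulated
            + [rng.normal(size=20) for _ in range(5)])                 # unrelated genes

        corr = np.corrcoef(expr)
        g = nx.Graph((i, j) for i in range(10) for j in range(i + 1, 10)
                     if corr[i, j] > 0.8)

        gene_sets = [c for c in nx.find_cliques(g) if len(c) >= 3]
        print("candidate co-regulated gene sets:", gene_sets)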

    13. Computational fluid dynamics analysis of a wire-feed, high-velocity oxygen-fuel (HVOF) thermal spray torch

      SciTech Connect (OSTI)

      Lopez, A.R.; Hassan, B.; Oberkampf, W.L.; Neiser, R.A.; Roemer, T.J.

      1996-09-01

      The fluid and particle dynamics of a High-Velocity Oxygen-Fuel Thermal Spray torch are analyzed using computational and experimental techniques. Three-dimensional Computational Fluid Dynamics (CFD) results are presented for a curved aircap used for coating interior surfaces such as engine cylinder bores. The device analyzed is similar to the Metco Diamond Jet Rotating Wire (DJRW) torch. The feed gases are injected through an axisymmetric nozzle into the curved aircap. Premixed propylene and oxygen are introduced from an annulus in the nozzle, while cooling air is injected between the nozzle and the interior wall of the aircap. The combustion process is modeled using a single-step finite-rate chemistry model with a total of 9 gas species which includes dissociation of combustion products. A continually-fed steel wire passes through the center of the nozzle and melting occurs at a conical tip near the exit of the aircap. Wire melting is simulated computationally by injecting liquid steel particles into the flow field near the tip of the wire. Experimental particle velocity measurements during wire feed were also taken using a Laser Two-Focus (L2F) velocimeter system. Flow fields inside and outside the aircap are presented and particle velocity predictions are compared with experimental measurements outside of the aircap.

    14. Computer hardware fault administration

      DOE Patents [OSTI]

      Archer, Charles J. (Rochester, MN); Megerian, Mark G. (Rochester, MN); Ratterman, Joseph D. (Rochester, MN); Smith, Brian E. (Rochester, MN)

      2010-09-14

      Computer hardware fault administration carried out in a parallel computer, where the parallel computer includes a plurality of compute nodes. The compute nodes are coupled for data communications by at least two independent data communications networks, where each data communications network includes data communications links connected to the compute nodes. Typical embodiments carry out hardware fault administration by identifying a location of a defective link in the first data communications network of the parallel computer and routing communications data around the defective link through the second data communications network of the parallel computer.
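
      A minimal sketch of the idea, on a made-up topology: when a link in the first network is marked defective, traffic that can no longer be routed there falls back to the second, independent network. networkx is used purely for illustration.

        import networkx as nx

        primary = nx.Graph([(0, 1), (1, 2), (2, 3)])    # first data communications network
        secondary = nx.Graph([(0, 2), (1, 3), (0, 3)])  # independent second network

        def route(src, dst, defective_link):
            g = primary.copy()
            if defective_link in g.edges:
                g.remove_edge(*defective_link)          # take the bad link out of service
            try:
                return "primary", nx.shortest_path(g, src, dst)
            except nx.NetworkXNoPath:
                return "secondary", nx.shortest_path(secondary, src, dst)

        # No primary detour exists between 1 and 2, so traffic moves to network 2.
        print(route(1, 2, defective_link=(1, 2)))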

    15. Computational Study and Analysis of Structural Imperfections in 1D and 2D Photonic Crystals

      SciTech Connect (OSTI)

      K.R. Maskaly

      2005-06-01

      Dielectric reflectors that are periodic in one or two dimensions, also known as 1D and 2D photonic crystals, have been widely studied for many potential applications due to the presence of wavelength-tunable photonic bandgaps. However, the unique optical behavior of photonic crystals is based on theoretical models of perfect analogues. Little is known about the practical effects of dielectric imperfections on their technologically useful optical properties. In order to address this issue, a finite-difference time-domain (FDTD) code is employed to study the effect of three specific dielectric imperfections in 1D and 2D photonic crystals. The first imperfection investigated is dielectric interfacial roughness in quarter-wave tuned 1D photonic crystals at normal incidence. This study reveals that the reflectivity of some roughened photonic crystal configurations can change by up to 50% at the center of the bandgap for RMS roughness values around 20% of the characteristic periodicity of the crystal. However, this reflectivity change can be mitigated by increasing the index contrast and/or the number of bilayers in the crystal. In order to explain these results, the homogenization approximation, which is usually applied to single rough surfaces, is applied to the quarter-wave stacks. The results of the homogenization approximation match the FDTD results extremely well, suggesting that the main role of the roughness features is to grade the refractive index profile of the interfaces in the photonic crystal rather than diffusely scatter the incoming light. This result also implies that the amount of incoherent reflection from the roughened quarter-wave stacks is extremely small. This is confirmed through direct extraction of the amount of incoherent power from the FDTD calculations. Further FDTD studies are done on the entire normal incidence bandgap of roughened 1D photonic crystals. These results reveal a narrowing and red-shifting of the normal incidence bandgap with increasing RMS roughness. Again, the homogenization approximation is able to predict these results. The problem of surface scratches on 1D photonic crystals is also addressed. Although the reflectivity decreases are lower in this study, up to a 15% change in reflectivity is observed in certain scratched photonic crystal structures. However, this reflectivity change can be significantly decreased by adding a low index protective coating to the surface of the photonic crystal. Again, application of homogenization theory to these structures confirms its predictive power for this type of imperfection as well. Additionally, the problem of circular pores in 2D photonic crystals is investigated, showing that almost a 50% change in reflectivity can occur for some structures. Furthermore, this study reveals trends that are consistent with the 1D simulations: parameter changes that increase the absolute reflectivity of the photonic crystal will also increase its tolerance to structural imperfections. Finally, experimental reflectance spectra from roughened 1D photonic crystals are compared to the results predicted computationally in this thesis. Both the computed and experimental spectra correlate favorably, validating the findings presented herein.
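
      The unperturbed baseline these roughness studies perturb, the normal-incidence reflectivity of an ideal quarter-wave stack, can be computed with a few lines of transfer-matrix algebra, and it shows the same trend noted above: more bilayers and higher index contrast raise the bandgap-center reflectivity. The indices and wavelength below are illustrative choices, not the thesis's parameters.

        import numpy as np

        def stack_reflectivity(n_hi, n_lo, pairs, wavelength, design_wavelength):
            """Normal-incidence reflectivity of a quarter-wave stack in air."""
            n0 = ns = 1.0
            layers = [(n_hi, design_wavelength / (4 * n_hi)),
                      (n_lo, design_wavelength / (4 * n_lo))] * pairs
            M = np.eye(2, dtype=complex)
            for n, d in layers:              # characteristic matrix of each layer
                phi = 2 * np.pi * n * d / wavelength
                M = M @ np.array([[np.cos(phi), 1j * np.sin(phi) / n],
                                  [1j * n * np.sin(phi), np.cos(phi)]])
            num = n0 * M[0, 0] + n0 * ns * M[0, 1] - M[1, 0] - ns * M[1, 1]
            den = n0 * M[0, 0] + n0 * ns * M[0, 1] + M[1, 0] + ns * M[1, 1]
            return abs(num / den) ** 2

        for pairs in (2, 4, 8):
            R = stack_reflectivity(2.25, 1.45, pairs, 550e-9, 550e-9)
            print(f"{pairs} bilayers: R = {R:.4f} at the bandgap center")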

    16. The Use Of Computational Human Performance Modeling As Task Analysis Tool

      SciTech Connect (OSTI)

      Jacques Hugo; David Gertman

      2012-07-01

      During a review of the Advanced Test Reactor safety basis at the Idaho National Laboratory, human factors engineers identified ergonomic and human reliability risks involving the inadvertent exposure of a fuel element to the air during manual fuel movement and inspection in the canal. There were clear indications that these risks increased the probability of human error and possible severe physical outcomes to the operator. In response to this concern, a detailed study was conducted to determine the probability of the inadvertent exposure of a fuel element. Due to practical and safety constraints, the task network analysis technique was employed to study the work procedures at the canal. Discrete-event simulation software was used to model the entire procedure as well as the salient physical attributes of the task environment, such as distances walked, the effect of dropped tools, the effect of hazardous body postures, and physical exertion due to strenuous tool handling. The model also allowed analysis of the effect of cognitive processes such as visual perception demands, auditory information and verbal communication. The model made it possible to obtain reliable predictions of operator performance and workload estimates. It was also found that operator workload as well as the probability of human error in the fuel inspection and transfer task were influenced by the concurrent nature of certain phases of the task and the associated demand on cognitive and physical resources. More importantly, it was possible to determine with reasonable accuracy the stages as well as physical locations in the fuel handling task where operators would be most at risk of losing their balance and falling into the canal. The model also provided sufficient information for a human reliability analysis that indicated that the postulated fuel exposure accident was less than credible.
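
      The task-network idea can be sketched as a small Monte Carlo simulation in which each phase of the procedure draws a duration and an error outcome, and the network totals give time and error-probability estimates. The phases, distributions, and probabilities below are invented for illustration and are not the study's data.

        import random

        # (phase name, mean duration s, std dev s, per-phase error probability)
        phases = [("attach tool",   30.0, 10.0, 0.002),
                  ("walk to canal", 45.0,  5.0, 0.000),
                  ("lift element",  60.0, 20.0, 0.010),
                  ("inspect",      120.0, 30.0, 0.005),
                  ("transfer",      90.0, 25.0, 0.008)]

        random.seed(42)
        N = 100_000
        total_time = errors = 0.0
        for _ in range(N):
            t, err = 0.0, False
            for _, mu, sd, p in phases:
                t += max(0.0, random.gauss(mu, sd))
                err = err or (random.random() < p)
            total_time += t
            errors += err

        print(f"mean task time {total_time/N:.0f} s, "
              f"P(at least one error) = {errors/N:.4f}")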

    17. User's manual for RATEPAC: a digital-computer program for revenue requirements and rate-impact analysis

      SciTech Connect (OSTI)

      Fuller, L.C.

      1981-09-01

      The RATEPAC computer program is designed to model the financial aspects of an electric power plant or other investment requiring capital outlays and having annual operating expenses. The program produces incremental pro forma financial statements showing how an investment will affect the overall financial statements of a business entity. The code accepts parameters required to determine capital investment and expense as a function of time and sums these to determine minimum revenue requirements (cost of service). The code also calculates present worth of revenue requirements and required return on rate base. This user's manual includes a general description of the code as well as the instructions for input data preparation. A complete example case is appended.
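
      The core present-worth arithmetic such a code performs is ordinary discounting of the annual revenue-requirement stream back to the base year; the cash flows and discount rate below are illustrative numbers, not RATEPAC output.

        # Annual revenue requirements (M$) for years 1..5 and a 10% discount rate.
        revenue_requirements = [12.0, 11.5, 11.1, 10.8, 10.6]
        discount_rate = 0.10

        present_worth = sum(rr / (1 + discount_rate) ** t
                            for t, rr in enumerate(revenue_requirements, start=1))
        print(f"present worth of revenue requirements: {present_worth:.2f} M$")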

    18. Polymorphous computing fabric

      DOE Patents [OSTI]

      Wolinski, Christophe Czeslaw (Los Alamos, NM); Gokhale, Maya B. (Los Alamos, NM); McCabe, Kevin Peter (Los Alamos, NM)

      2011-01-18

      Fabric-based computing systems and methods are disclosed. A fabric-based computing system can include a polymorphous computing fabric that can be customized on a per application basis and a host processor in communication with said polymorphous computing fabric. The polymorphous computing fabric includes a cellular architecture that can be highly parameterized to enable a customized synthesis of fabric instances for a variety of enhanced application performances thereof. A global memory concept can also be included that provides the host processor random access to all variables and instructions associated with the polymorphous computing fabric.

    19. Extensible Computational Chemistry Environment

      Energy Science and Technology Software Center (OSTI)

      2012-08-09

      ECCE provides a sophisticated graphical user interface, scientific visualization tools, and the underlying data management framework enabling scientists to efficiently set up calculations and store, retrieve, and analyze the rapidly growing volumes of data produced by computational chemistry studies. ECCE was conceived as part of the Environmental Molecular Sciences Laboratory construction to solve the problem of researchers being able to effectively utilize complex computational chemistry codes and massively parallel high performance compute resources. Bringing the power of these codes and resources to the desktops of researchers and thus enabling world class research without users needing a detailed understanding of the inner workings of either the theoretical codes or the supercomputers needed to run them was a grand challenge problem in the original version of the EMSL. ECCE allows collaboration among researchers using a web-based data repository where the inputs and results for all calculations done within ECCE are organized. ECCE is a first of a kind end-to-end problem solving environment for all phases of computational chemistry research: setting up calculations with sophisticated GUI and direct manipulation visualization tools, submitting and monitoring calculations on remote high performance supercomputers without having to be familiar with the details of using these compute resources, and performing results visualization and analysis including creating publication quality images. ECCE is a suite of tightly integrated applications that are employed as the user moves through the modeling process.

    20. Computation & Simulation > Theory & Computation > Research >...

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Extensive combinatorial results and ongoing basic...

    1. Accelerated Aging of BKC 44306-10 Rigid Polyurethane Foam: FT-IR Spectroscopy, Dimensional Analysis, and Micro Computed Tomography

      SciTech Connect (OSTI)

      Gilbertson, Robert D.; Patterson, Brian M.; Smith, Zachary

      2014-01-02

      An accelerated aging study of BKC 44306-10 rigid polyurethane foam was carried out. Foam samples were aged in a nitrogen atmosphere at three different temperatures: 50 C, 65 C, and 80 C. Foam samples were periodically removed from the aging canisters at 1, 3, 6, 9, 12, and 15 month intervals, when FT-IR spectroscopy, dimensional analysis, and mechanical testing experiments were performed. Micro Computed Tomography imaging was also employed to study the morphology of the foams. Over the course of the aging study the foams decreased in size by a magnitude of 0.001 inches per inch of foam. Micro CT showed the heterogeneous nature of the foam structure, likely resulting from flow effects during the molding process. The effect of aging on the compression and tensile strength of the foam was minor and no cause for concern. FT-IR spectroscopy was used to follow the foam chemistry. However, it was difficult to draw definitive conclusions about changes in the chemical nature of the materials due to large variability throughout the samples.

    2. Computational analysis of a three-dimensional High-Velocity Oxygen-Fuel (HVOF) Thermal Spray torch

      SciTech Connect (OSTI)

      Hassan, B.; Lopez, A.R.; Oberkampf, W.L.

      1995-07-01

      An analysis of a High-Velocity Oxygen-Fuel Thermal Spray torch is presented using computational fluid dynamics (CFD). Three-dimensional CFD results are presented for a curved aircap used for coating interior surfaces such as engine cylinder bores. The device analyzed is similar to the Metco Diamond Jet Rotating Wire torch, but wire feed is not simulated. To the authors' knowledge, these are the first published 3-D results of a thermal spray device. The feed gases are injected through an axisymmetric nozzle into the curved aircap. Argon is injected through the center of the nozzle. Pre-mixed propylene and oxygen are introduced from an annulus in the nozzle, while cooling air is injected between the nozzle and the interior wall of the aircap. The combustion process is modeled assuming instantaneous chemistry. A standard two-equation k-ε turbulence model is employed for the turbulent flow field. An implicit, iterative, finite volume numerical technique is used to solve the coupled conservation of mass, momentum, and energy equations for the gas in a sequential manner. Flow fields inside and outside the aircap are presented and discussed.

    3. Argonne's Laboratory computing center - 2007 annual report.

      SciTech Connect (OSTI)

      Bair, R.; Pieper, G. W.

      2008-05-28

      Argonne National Laboratory founded the Laboratory Computing Resource Center (LCRC) in the spring of 2002 to help meet pressing program needs for computational modeling, simulation, and analysis. The guiding mission is to provide critical computing resources that accelerate the development of high-performance computing expertise, applications, and computations to meet the Laboratory's challenging science and engineering missions. In September 2002 the LCRC deployed a 350-node computing cluster from Linux NetworX to address Laboratory needs for mid-range supercomputing. This cluster, named 'Jazz', achieved over a teraflop of computing power (10^12 floating-point calculations per second) on standard tests, making it the Laboratory's first terascale computing system and one of the 50 fastest computers in the world at the time. Jazz was made available to early users in November 2002 while the system was undergoing development and configuration. In April 2003, Jazz was officially made available for production operation. Since then, the Jazz user community has grown steadily. By the end of fiscal year 2007, there were over 60 active projects representing a wide cross-section of Laboratory expertise, including work in biosciences, chemistry, climate, computer science, engineering applications, environmental science, geoscience, information science, materials science, mathematics, nanoscience, nuclear engineering, and physics. Most important, many projects have achieved results that would have been unobtainable without such a computing resource. The LCRC continues to foster growth in the computational science and engineering capability and quality at the Laboratory. Specific goals include expansion of the use of Jazz to new disciplines and Laboratory initiatives, teaming with Laboratory infrastructure providers to offer more scientific data management capabilities, expanding Argonne staff use of national computing facilities, and improving the scientific reach and performance of Argonne's computational applications. Furthermore, recognizing that Jazz is fully subscribed, with considerable unmet demand, the LCRC has framed a 'path forward' for additional computing resources.

    4. Advanced Artificial Science. The development of an artificial science and engineering research infrastructure to facilitate innovative computational modeling, analysis, and application to interdisciplinary areas of scientific investigation.

      SciTech Connect (OSTI)

      Saffer, Shelley I.

      2014-12-01

      This is a final report of the DOE award DE-SC0001132, Advanced Artificial Science. The development of an artificial science and engineering research infrastructure to facilitate innovative computational modeling, analysis, and application to interdisciplinary areas of scientific investigation. This document describes the achievements of the goals, and resulting research made possible by this award.

    5. Information regarding previous INCITE awards including selected...

      Office of Science (SC) Website

      on Theory & Experiment (INCITE) ASCR Leadership Computing Challenge (ALCC) Computational Science Graduate Fellowship (CSGF) Research & Evaluation Prototypes (REP) Science...

    6. Computational analysis of storage synthesis in developing Brassica napus L. (oilseed rape) embryos: Flux variability analysis in relation to 13C-metabolic flux analysis

      SciTech Connect (OSTI)

      Hay, J.; Schwender, J.

      2011-08-01

      Plant oils are an important renewable resource, and seed oil content is a key agronomical trait that is in part controlled by the metabolic processes within developing seeds. A large-scale model of cellular metabolism in developing embryos of Brassica napus (bna572) was used to predict biomass formation and to analyze metabolic steady states by flux variability analysis under different physiological conditions. Predicted flux patterns are highly correlated with results from prior 13C metabolic flux analysis of B. napus developing embryos. Minor differences from the experimental results arose because bna572 always selected only one sugar and one nitrogen source from the available alternatives, and failed to predict the use of the oxidative pentose phosphate pathway. Flux variability, indicative of alternative optimal solutions, revealed alternative pathways that can provide pyruvate and NADPH to plastidic fatty acid synthesis. The nutritional values of different medium substrates were compared based on the overall carbon conversion efficiency (CCE) for the biosynthesis of biomass. Although bna572 has a functional nitrogen assimilation pathway via glutamate synthase, the simulations predict an unexpected role of glycine decarboxylase operating in the direction of NH4+ assimilation. Analysis of the light-dependent improvement of carbon economy predicted two metabolic phases. At very low light levels small reductions in CO2 efflux can be attributed to enzymes of the tricarboxylic acid cycle (oxoglutarate dehydrogenase, isocitrate dehydrogenase) and glycine decarboxylase. At higher light levels relevant to the 13C flux studies, ribulose-1,5-bisphosphate carboxylase activity is predicted to account fully for the light-dependent changes in carbon balance.
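
      Flux variability analysis itself can be sketched in a few lines of linear programming: maximize the biomass flux, pin it at that optimum, then minimize and maximize each reaction flux to expose alternative optima, analogous to the alternative pyruvate and NADPH routes reported here. The four-reaction network below is a made-up stand-in for bna572.

        import numpy as np
        from scipy.optimize import linprog

        # v1: uptake -> A, v2: A -> B, v3: A -> B (alternative), v4: B -> biomass
        S = np.array([[1, -1, -1,  0],    # metabolite A balance
                      [0,  1,  1, -1]])   # metabolite B balance
        bounds = [(0, 10)] * 4

        # Step 1: maximize biomass flux v4 (linprog minimizes, so negate).
        opt = linprog(c=[0, 0, 0, -1], A_eq=S, b_eq=[0, 0], bounds=bounds)
        v4_max = -opt.fun

        # Step 2: with v4 fixed at its optimum, scan each flux for its range.
        S_fix = np.vstack([S, [0, 0, 0, 1]])
        b_fix = [0, 0, v4_max]
        for j, name in enumerate(["v1", "v2", "v3", "v4"]):
            lo = linprog(c=np.eye(4)[j], A_eq=S_fix, b_eq=b_fix, bounds=bounds).fun
            hi = -linprog(c=-np.eye(4)[j], A_eq=S_fix, b_eq=b_fix, bounds=bounds).fun
            print(f"{name}: [{lo:.1f}, {hi:.1f}]")   # v2 and v3 trade off freely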

    7. Development of an Extensible Computational Framework for Centralized Storage and Distributed Curation and Analysis of Genomic Data Genome-scale Metabolic Models

      SciTech Connect (OSTI)

      Stevens, Rick

      2010-08-01

      The DOE funded KBase project of the Stevens group at the University of Chicago was focused on four high-level goals: (i) improve extensibility, accessibility, and scalability of the SEED framework for genome annotation, curation, and analysis; (ii) extend the SEED infrastructure to support transcription regulatory network reconstructions (2.1), metabolic model reconstruction and analysis (2.2), assertions linked to data (2.3), eukaryotic annotation (2.4), and growth phenotype prediction (2.5); (iii) develop a web-API for programmatic remote access to SEED data and services; and (iv) application of all tools to bioenergy-related genomes and organisms. In response to these goals, we enhanced and improved the ModelSEED resource within the SEED to enable new modeling analyses, including improved model reconstruction and phenotype simulation. We also constructed a new website and web-API for the ModelSEED. Further, we constructed a comprehensive web-API for the SEED as a whole. We also made significant strides in building infrastructure in the SEED to support the reconstruction of transcriptional regulatory networks by developing a pipeline to identify sets of consistently expressed genes based on gene expression data. We applied this pipeline to 29 organisms, computing regulons which were subsequently stored in the SEED database and made available on the SEED website (http://pubseed.theseed.org). We developed a new pipeline and database for the use of kmers, or short 8-residue oligomer sequences, to annotate genomes at high speed. Finally, we developed the PlantSEED, or a new pipeline for annotating primary metabolism in plant genomes. All of the work performed within this project formed the early building blocks for the current DOE Knowledgebase system, and the kmer annotation pipeline, plant annotation pipeline, and modeling tools are all still in use in KBase today.

    8. Computational Structural Mechanics

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Computational structural mechanics is a well-established methodology for the design and analysis of many components and structures found in the transportation field. Modern finite-element models (FEMs) play a major role in these evaluations, and sophisticated software, such as the commercially available LS-DYNA® code, is

    9. Field-Generated Foamed Cement: Initial Collection, Computed Tomography, and Analysis

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Field-Generated Foamed Cement: Initial Collection, Computed Tomography, and Analysis. 20 July 2015. Office of Fossil Energy. NETL-TRS-5-2015.

    10. Scientific computations section monthly report, November 1993

      SciTech Connect (OSTI)

      Buckner, M.R.

      1993-12-30

      This progress report from the Savannah River Technology Center contains abstracts from papers from the computational modeling, applied statistics, applied physics, experimental thermal hydraulics, and packaging and transportation groups. Specific topics covered include: engineering modeling and process simulation, criticality methods and analysis, plutonium disposition.

    11. Compute nodes

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      A detailed hierarchical map of the topology of a compute node is available.

    12. Computer System,

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      undergraduate summer institute http://isti.lanl.gov (Educational Prog) 2016 Computer System, Cluster, and Networking Summer Institute Purpose The Computer System,...

    13. Scalable optical quantum computer

      SciTech Connect (OSTI)

      Manykin, E A; Mel'nichenko, E V [Institute for Superconductivity and Solid-State Physics, Russian Research Centre 'Kurchatov Institute', Moscow (Russian Federation)

      2014-12-31

      A way of designing a scalable optical quantum computer based on the photon echo effect is proposed. Individual rare earth ions Pr³⁺, regularly located in the lattice of the orthosilicate (Y₂SiO₅) crystal, are suggested to be used as optical qubits. Operations with qubits are performed using coherent and incoherent laser pulses. The operation protocol includes both the method of measurement-based quantum computations and the technique of optical computations. Modern hybrid photon echo protocols, which provide a sufficient quantum efficiency when reading recorded states, are considered as most promising for quantum computations and communications. (quantum computer)

    14. COMPUTATIONAL SCIENCE CENTER

      SciTech Connect (OSTI)

      DAVENPORT, J.

      2005-11-01

      The Brookhaven Computational Science Center brings together researchers in biology, chemistry, physics, and medicine with applied mathematicians and computer scientists to exploit the remarkable opportunities for scientific discovery which have been enabled by modern computers. These opportunities are especially great in computational biology and nanoscience, but extend throughout science and technology and include, for example, nuclear and high energy physics, astrophysics, materials and chemical science, sustainable energy, environment, and homeland security. To achieve our goals we have established a close alliance with applied mathematicians and computer scientists at Stony Brook and Columbia Universities.

    15. Computing and Computational Sciences Directorate - Computer Science...

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      AWARD Winners: Jess Gehin; Jackie Isaacs; Douglas Kothe; Debbie McCoy; Bonnie Nestor; John Turner; Gilbert Weigand Organization(s): Nuclear Technology Program; Computing and...

    16. A Systematic Comprehensive Computational Model for Stake Estimation in Mission Assurance: Applying Cyber Security Econometrics System (CSES) to Mission Assurance Analysis Protocol (MAAP)

      SciTech Connect (OSTI)

      Abercrombie, Robert K; Sheldon, Frederick T; Grimaila, Michael R

      2010-01-01

      In earlier works, we presented a computational infrastructure that allows an analyst to estimate the security of a system in terms of the loss that each stakeholder stands to sustain as a result of security breakdowns. In this paper, we discuss how this infrastructure can be used in the subject domain of mission assurance, defined as the full life-cycle engineering process to identify and mitigate design, production, test, and field support deficiencies that threaten mission success. We address the opportunity to apply the Cyberspace Security Econometrics System (CSES) to the Carnegie Mellon University Software Engineering Institute's Mission Assurance Analysis Protocol (MAAP) in this context.
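
      The stake-estimation arithmetic behind CSES reduces to matrix products: stakeholders' stakes in security requirements, the requirements' dependence on system components, and component compromise probabilities combine into a mean failure cost per stakeholder. Every matrix entry below is an illustrative assumption, not data from the paper.

        import numpy as np

        # Stakes: $K lost per failed requirement (rows = stakeholders).
        stakes = np.array([[100.0, 20.0],
                           [ 10.0, 80.0]])
        # P(requirement fails | component compromised) (cols = components).
        dependency = np.array([[0.6, 0.1, 0.0],
                               [0.0, 0.3, 0.7]])
        threat = np.array([0.05, 0.02, 0.01])   # P(component compromised) per period

        req_failure_prob = dependency @ threat
        mean_failure_cost = stakes @ req_failure_prob
        for i, mfc in enumerate(mean_failure_cost):
            print(f"stakeholder {i}: mean failure cost = {mfc:.2f} $K per period")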

    17. NERSC Enhances PDSF, Genepool Computing Capabilities

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Computing Capabilities NERSC Enhances PDSF, Genepool Computing Capabilities Linux cluster expansion speeds data access and analysis January 3, 2014 Christmas came early for...

    18. Computing Sciences

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Division The Computational Research Division conducts research and development in mathematical modeling and simulation, algorithm design, data storage, management and...

    19. Computing Resources

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      The TRACC Computational Clusters With the addition of a new cluster called Zephyr that was made operational in September 2012, TRACC now offers two clusters to choose from: Zephyr and our original cluster, which has now been named Phoenix. Zephyr was acquired from Atipa Technologies, and it is a 92-node system with each node having two AMD

    20. Initial Business Case Analysis of Two Integrated Heat Pump HVAC Systems for Near-Zero-Energy Homes -- Update to Include Analyses of an Economizer Option and Alternative Winter Water Heating Control Option

      SciTech Connect (OSTI)

      Baxter, Van D

      2006-12-01

      The long-range strategic goal of the Department of Energy's Building Technologies (DOE/BT) Program is to create, by 2020, technologies and design approaches that enable the construction of net-zero energy homes at low incremental cost (DOE/BT 2005). A net-zero energy home (NZEH) is a residential building with greatly reduced needs for energy through efficiency gains, with the balance of energy needs supplied by renewable technologies. While initially focused on new construction, these technologies and design approaches are intended to have application to buildings constructed before 2020 as well, resulting in substantial reductions in energy use for all building types and ages. DOE/BT's Emerging Technologies (ET) team is working to support this strategic goal by identifying and developing advanced heating, ventilating, air-conditioning, and water heating (HVAC/WH) technology options applicable to NZEHs. Although the energy efficiency of heating, ventilating, and air-conditioning (HVAC) equipment has increased substantially in recent years, new approaches are needed to continue this trend. Dramatic efficiency improvements are necessary to enable progress toward the NZEH goals, and will require a radical rethinking of opportunities to improve system performance. The large reductions in HVAC energy consumption necessary to support the NZEH goals require a systems-oriented analysis approach that characterizes each element of energy consumption, identifies alternatives, and determines the most cost-effective combination of options. In particular, HVAC equipment must be developed that addresses the range of special needs of NZEH applications in the areas of reduced HVAC and water heating energy use, humidity control, ventilation, uniform comfort, and ease of zoning. In FY05 ORNL conducted an initial Stage 1 (Applied Research) scoping assessment of HVAC/WH system options for future NZEHs to help DOE/BT identify and prioritize alternative approaches for further development. Eleven system concepts with central air distribution ducting and nine multi-zone systems were selected, and their annual and peak demand performance estimated for five locations: Atlanta (mixed-humid), Houston (hot-humid), Phoenix (hot-dry), San Francisco (marine), and Chicago (cold). Performance was estimated by simulating the systems using the TRNSYS simulation engine (Solar Energy Laboratory et al. 2006) in two 1800-ft² houses--a Building America (BA) benchmark house and a prototype NZEH taken from BEopt results at the take-off (or crossover) point (i.e., a house incorporating those design features such that further progress towards ZEH is through the addition of photovoltaic power sources, as determined by current BEopt analyses conducted by NREL). Results were summarized in a project report, HVAC Equipment Design Options for Near-Zero-Energy Homes--A Stage 2 Scoping Assessment, ORNL/TM-2005/194 (Baxter 2005). The 2005 study report describes the HVAC options considered, the ranking criteria used, and the system rankings by priority. In 2006, the two top-ranked options from the 2005 study, air-source and ground-source versions of an integrated heat pump (IHP) system, were subjected to an initial business case study. The IHPs were subjected to a more rigorous hourly-based assessment of their performance potential compared to a baseline suite of equipment of minimum legal efficiency that provided the same heating, cooling, water heating, demand dehumidification, and ventilation services as the IHPs.
Results were summarized in a project report, Initial Business Case Analysis of Two Integrated Heat Pump HVAC Systems for Near-Zero-Energy Homes, ORNL/TM-2006/130 (Baxter 2006). The present report is an update to that document. Its primary purpose is to summarize results of an analysis of the potential of adding an outdoor air economizer operating mode to the IHPs to take advantage of free cooling (using outdoor air to cool the house) whenever possible. In addition, it provides some additional detail for an alternative winter water heating/space heating (WH/SH) control strategy briefly described in the original report and corrects some minor errors.

    1. computational-fluid-dynamics-training

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Table of Contents (course, date, location):
      Advanced Hydraulic and Aerodynamic Analysis Using CFD: March 27-28, 2013, Argonne TRACC, Argonne, IL
      Computational Hydraulics and Aerodynamics using STAR-CCM+ for CFD Analysis: March 21-22, 2012, Argonne TRACC, Argonne, IL
      Computational Hydraulics and Aerodynamics using STAR-CCM+ for CFD Analysis: March 30-31, 2011, Argonne TRACC, Argonne, IL
      Computational Hydraulics for Transportation Workshop: September 23-24, 2009, Argonne TRACC, West Chicago, IL

    2. Compute Nodes

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Compute Nodes Quad-core AMD Opteron processor Compute Node Configuration:
      9,572 nodes
      1 quad-core AMD 'Budapest' 2.3 GHz processor per node
      4 cores per node (38,288 total cores)
      8 GB DDR3 800 MHz memory per node
      Peak Gflop rate: 9.2 Gflops/core, 36.8 Gflops/node, 352 Tflops for the entire machine
      Each core has its own L1 and L2 caches, of 64 KB and 512 KB respectively; a 2 MB L3 cache is shared among the 4 cores
      Compute Node Software: By default the compute nodes run a restricted low-overhead

    3. Argonne's Laboratory computing resource center : 2006 annual report.

      SciTech Connect (OSTI)

      Bair, R. B.; Kaushik, D. K.; Riley, K. R.; Valdes, J. V.; Drugan, C. D.; Pieper, G. P.

      2007-05-31

      Argonne National Laboratory founded the Laboratory Computing Resource Center (LCRC) in the spring of 2002 to help meet pressing program needs for computational modeling, simulation, and analysis. The guiding mission is to provide critical computing resources that accelerate the development of high-performance computing expertise, applications, and computations to meet the Laboratory's challenging science and engineering missions. In September 2002 the LCRC deployed a 350-node computing cluster from Linux NetworX to address Laboratory needs for mid-range supercomputing. This cluster, named 'Jazz', achieved over a teraflop of computing power (10¹² floating-point calculations per second) on standard tests, making it the Laboratory's first terascale computing system and one of the 50 fastest computers in the world at the time. Jazz was made available to early users in November 2002 while the system was undergoing development and configuration. In April 2003, Jazz was officially made available for production operation. Since then, the Jazz user community has grown steadily. By the end of fiscal year 2006, there were 76 active projects on Jazz involving over 380 scientists and engineers. These projects represent a wide cross-section of Laboratory expertise, including work in biosciences, chemistry, climate, computer science, engineering applications, environmental science, geoscience, information science, materials science, mathematics, nanoscience, nuclear engineering, and physics. Most important, many projects have achieved results that would have been unobtainable without such a computing resource. The LCRC continues to foster growth in the computational science and engineering capability and quality at the Laboratory. Specific goals include expansion of the use of Jazz to new disciplines and Laboratory initiatives, teaming with Laboratory infrastructure providers to offer more scientific data management capabilities, expanding Argonne staff use of national computing facilities, and improving the scientific reach and performance of Argonne's computational applications. Furthermore, recognizing that Jazz is fully subscribed, with considerable unmet demand, the LCRC has framed a 'path forward' for additional computing resources.

    4. Computational Analysis of the Pyrolysis of β-O4 Lignin Model Compounds: Concerted vs. Homolytic Fragmentation

      SciTech Connect (OSTI)

      Clark, J. M.; Robichaud, D. J.; Nimlos, M. R.

      2012-01-01

      The thermochemical conversion of biomass to liquid transportation fuels is a very attractive technology for expanding the utilization of carbon neutral processes and reducing dependency on fossil fuel resources. As with all such emerging technologies, biomass conversion through gasification or pyrolysis has a number of obstacles that need to be overcome to make these processes cost competitive with the refining of fossil fuels. Our current efforts have focused on the investigation of the thermochemistry of the linkages between lignin units using ab initio calculations on dimeric lignin model compounds. All calculations were carried out using M062X density functional theory with the 6-311++G(d,p) basis set. The M062X method has been shown to be consistent with the CBS-QB3 method while being significantly less computationally expensive. To date we have only completed the study on the β-O4 compounds. The theoretical calculations performed in the study indicate that concerted elimination pathways dominate over bond homolysis reactions under typical pyrolysis conditions. However, this does not mean that concerted elimination will be the dominant loss process for lignin. Bimolecular radical chemistry could very well dwarf the unimolecular pathways investigated in this study. These concerted pathways tend to form stable, reasonably non-reactive products that would be better suited to producing a fungible bio-oil for the production of liquid transportation fuels.

    5. Synchronizing compute node time bases in a parallel computer

      DOE Patents [OSTI]

      Chen, Dong; Faraj, Daniel A; Gooding, Thomas M; Heidelberger, Philip

      2014-12-30

      Synchronizing time bases in a parallel computer that includes compute nodes organized for data communications in a tree network, where one compute node is designated as a root, and, for each compute node: calculating data transmission latency from the root to the compute node; configuring a thread as a pulse waiter; initializing a wakeup unit; and performing a local barrier operation; upon each node completing the local barrier operation, entering, by all compute nodes, a global barrier operation; upon all nodes entering the global barrier operation, sending, to all the compute nodes, a pulse signal; and for each compute node upon receiving the pulse signal: waking, by the wakeup unit, the pulse waiter; setting a time base for the compute node equal to the data transmission latency between the root node and the compute node; and exiting the global barrier operation.
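
      The claimed sequence can be illustrated with a toy, single-process simulation; the class and latency values below are invented stand-ins, not Blue Gene code.

        class Node:
            def __init__(self, node_id, latency_from_root):
                self.node_id = node_id
                self.latency = latency_from_root  # per-node latency (step 1)
                self.time_base = None

            def on_pulse(self):
                # the pulse wakes the waiter thread; the time base is set to
                # the root-to-node transmission latency (final claimed step)
                self.time_base = self.latency

        # hypothetical tree latencies from the root to each of five nodes
        nodes = [Node(i, lat) for i, lat in enumerate([0, 3, 5, 5, 8])]

        # local and global barriers guarantee all nodes are waiting; here
        # that is trivially true, so the root just sends the pulse to all
        for n in nodes:
            n.on_pulse()

        print([(n.node_id, n.time_base) for n in nodes])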

    6. Synchronizing compute node time bases in a parallel computer

      DOE Patents [OSTI]

      Chen, Dong; Faraj, Daniel A; Gooding, Thomas M; Heidelberger, Philip

      2015-01-27

      Synchronizing time bases in a parallel computer that includes compute nodes organized for data communications in a tree network, where one compute node is designated as a root, and, for each compute node: calculating data transmission latency from the root to the compute node; configuring a thread as a pulse waiter; initializing a wakeup unit; and performing a local barrier operation; upon each node completing the local barrier operation, entering, by all compute nodes, a global barrier operation; upon all nodes entering the global barrier operation, sending, to all the compute nodes, a pulse signal; and for each compute node upon receiving the pulse signal: waking, by the wakeup unit, the pulse waiter; setting a time base for the compute node equal to the data transmission latency between the root node and the compute node; and exiting the global barrier operation.

    7. Multi-processor including data flow accelerator module

      DOE Patents [OSTI]

      Davidson, George S.; Pierce, Paul E.

      1990-01-01

      An accelerator module for a data flow computer includes an intelligent memory. The module is added to a multiprocessor arrangement and uses a shared tagged memory architecture in the data flow computer. The intelligent memory module assigns locations for holding data values in correspondence with arcs leading to a node in a data dependency graph. Each primitive computation is associated with a corresponding memory cell, including a number of slots for operands needed to execute a primitive computation, a primitive identifying pointer, and linking slots for distributing the result of the cell computation to other cells requiring that result as an operand. Circuitry is provided for utilizing tag bits to determine automatically when all operands required by a processor are available and for scheduling the primitive for execution in a queue. Each memory cell of the module may be associated with any of the primitives, and the particular primitive to be executed by the processor associated with the cell is identified by providing an index, such as the cell number for the primitive, to the primitive lookup table of starting addresses. The module thus serves to perform functions previously performed by a number of sections of data flow architectures and coexists with conventional shared memory therein. A multiprocessing system including the module operates in a hybrid mode, wherein the same processing modules are used to perform some processing in a sequential mode, under immediate control of an operating system, while performing other processing in a data flow mode.
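
      A minimal sketch of the firing rule this architecture implements in hardware: tag bits mark filled operand slots, and a cell is queued for execution once all its tags are set. Everything below (the cell names, the two-cell graph) is illustrative only, not the patented circuitry.

        from collections import deque
        import operator

        class Cell:
            def __init__(self, primitive, n_operands, consumers):
                self.primitive = primitive        # the operation to run
                self.slots = [None] * n_operands  # operand slots
                self.tags = [False] * n_operands  # tag bits: slot filled?
                self.consumers = consumers        # (cell name, slot) links

        ready = deque()                           # execution queue

        def deliver(cell, slot, value):
            cell.slots[slot] = value
            cell.tags[slot] = True
            if all(cell.tags):                    # all operands present
                ready.append(cell)                #   -> schedule the cell

        def run(cells):
            while ready:
                cell = ready.popleft()
                result = cell.primitive(*cell.slots)
                print(cell.primitive.__name__, cell.slots, "->", result)
                for name, slot in cell.consumers: # distribute the result
                    deliver(cells[name], slot, result)

        # (2 + 3) * 4 expressed as a two-cell data dependency graph
        cells = {"add": Cell(operator.add, 2, [("mul", 0)]),
                 "mul": Cell(operator.mul, 2, [])}
        deliver(cells["add"], 0, 2)
        deliver(cells["add"], 1, 3)
        deliver(cells["mul"], 1, 4)
        run(cells)                                # add fires, then mul: 20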

    8. Computer Science

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Cite Seer Department of Energy provided open access science research citations in chemistry, physics, materials, engineering, and computer science IEEE Xplore Full text...

    9. Computer Security

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Computer Security All JLF participants must fully comply with all LLNL computer security regulations and procedures. A laptop entering or leaving B-174 for the sole use by a US citizen and so configured, and requiring no IP address, need not be registered for use in the JLF. By September 2009, it is expected that computers for use by Foreign National Investigators will have no special provisions. Notify maricle1@llnl.gov of all other computers entering, leaving, or being moved within B 174. Use

    10. Magnetohydrodynamic Models of Accretion Including Radiation Transport |

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Argonne Leadership Computing Facility Snapshot of the global structure of a radiation-dominated accretion flow around a black hole computed using the Athena++ code. The left half of the image shows the density (in units of 0.01 g/cm³), and the right half shows the radiation energy density (in units of the energy density for a 10⁷ degree black body). Coordinate axes are

    11. System Advisor Model Includes Analysis of Hybrid CSP Option ...

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      concepts related to power generation have been missing in the System Advisor Model (SAM). One such concept, until now, is a hybrid integrated solar combined-cycle (ISCC)...

    12. Computing and Computational Sciences Directorate - Divisions

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      CCSD Divisions Computational Sciences and Engineering Computer Sciences and Mathematics Information Technology Services Joint Institute for Computational Sciences National Center for Computational Sciences

    13. Power throttling of collections of computing elements

      DOE Patents [OSTI]

      Bellofatto, Ralph E. (Ridgefield, CT); Coteus, Paul W. (Yorktown Heights, NY); Crumley, Paul G. (Yorktown Heights, NY); Gara, Alan G. (Mount Kisco, NY); Giampapa, Mark E. (Irvington, NY); Gooding, Thomas M. (Rochester, MN); Haring, Rudolf A. (Cortlandt Manor, NY); Megerian, Mark G. (Rochester, MN); Ohmacht, Martin (Yorktown Heights, NY); Reed, Don D. (Mantorville, MN); Swetz, Richard A. (Mahopac, NY); Takken, Todd (Brewster, NY)

      2011-08-16

      An apparatus and method for controlling power usage in a computer includes a plurality of computers communicating with a local control device, and a power source supplying power to the local control device and the computer. A plurality of sensors communicate with the computer for ascertaining power usage of the computer, and a system control device communicates with the computer for controlling power usage of the computer.
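
      The control loop implied by the abstract can be sketched as follows; the power budget, the sensor readout, and the throttling action are all hypothetical stand-ins for the patented hardware mechanisms.

        # Hedged sketch: a controller polls per-node power sensors and
        # throttles nodes that exceed a budget (invented numbers throughout).
        POWER_BUDGET_W = 300.0

        def read_sensors(node):
            return node["watts"]          # stand-in for real sensor readout

        def throttle(node, factor):
            node["clock_ghz"] *= factor   # stand-in for DVFS-style control

        def control_step(nodes):
            for node in nodes:
                if read_sensors(node) > POWER_BUDGET_W:
                    throttle(node, 0.9)   # shed 10% clock while over budget
                    node["watts"] *= 0.85 # crude model: power falls with clock

        nodes = [{"clock_ghz": 2.3, "watts": 350.0},
                 {"clock_ghz": 2.3, "watts": 250.0}]
        while any(n["watts"] > POWER_BUDGET_W for n in nodes):
            control_step(nodes)
        print(nodes)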

    14. Computing architecture for autonomous microgrids

      DOE Patents [OSTI]

      Goldsmith, Steven Y.

      2015-09-29

      A computing architecture that facilitates autonomously controlling operations of a microgrid is described herein. A microgrid network includes numerous computing devices that execute intelligent agents, each of which is assigned to a particular entity (load, source, storage device, or switch) in the microgrid. The intelligent agents can execute in accordance with predefined protocols to collectively perform computations that facilitate uninterrupted control of the microgrid.

    15. Economic Model For a Return on Investment Analysis of United States Government High Performance Computing (HPC) Research and Development (R & D) Investment

      SciTech Connect (OSTI)

      Joseph, Earl C.; Conway, Steve; Dekate, Chirag

      2013-09-30

      This study investigated how high-performance computing (HPC) investments can improve economic success and increase scientific innovation. This research focused on the common good and provided uses for DOE, other government agencies, industry, and academia. The study created two unique economic models and an innovation index: 1 A macroeconomic model that depicts the way HPC investments result in economic advancements in the form of ROI in revenue (GDP), profits (and cost savings), and jobs. 2 A macroeconomic model that depicts the way HPC investments result in basic and applied innovations, looking at variations by sector, industry, country, and organization size.  A new innovation index that provides a means of measuring and comparing innovation levels. Key findings of the pilot study include: IDC collected the required data across a broad set of organizations, with enough detail to create these models and the innovation index. The research also developed an expansive list of HPC success stories.
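
      The arithmetic underlying such an ROI model can be sketched in a few lines; the figures below are invented placeholders, not data from the study.

        # Minimal sketch of an HPC return-on-investment ratio: total benefit
        # (revenue gain, cost savings, job value) divided by investment.
        def hpc_roi(investment, revenue_gain, cost_savings, jobs_created,
                    value_per_job):
            """Return (ROI ratio, total benefit) of an HPC investment."""
            benefit = revenue_gain + cost_savings + jobs_created * value_per_job
            return benefit / investment, benefit

        roi, benefit = hpc_roi(investment=10e6, revenue_gain=40e6,
                               cost_savings=5e6, jobs_created=20,
                               value_per_job=100e3)
        print(f"benefit ${benefit/1e6:.1f}M, ROI {roi:.1f}x")  # $47.0M, 4.7x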

    16. Compute Nodes

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Compute Nodes There are currently 2632 nodes available on PDSF. The compute (batch) nodes at PDSF are heterogeneous, reflecting the periodic procurement of new nodes (and the eventual retirement of old nodes). From the user's perspective they are essentially all equivalent, except that some have more memory per job slot. If your jobs have memory requirements beyond the default maximum of 1.1 GB you should specify that in your job submission and the batch system will run your job on an

    17. Compute Nodes

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Nodes Quad CoreAMDOpteronprocessor Compute Node Configuration 9,572 nodes 1 quad-core AMD 'Budapest' 2.3 GHz processor per node 4 cores per node (38,288 total cores) 8 GB...

    18. Exascale Computing

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Computing Exascale Computing CoDEx Project: A Hardware/Software Codesign Environment for the Exascale Era The next decade will see a rapid evolution of HPC node architectures as power and cooling constraints are limiting increases in microprocessor clock speeds and constraining data movement. Applications and algorithms will need to change and adapt as node architectures evolve. A key element of the strategy as we move forward is the co-design of applications, architectures and programming

    19. LHC Computing

      SciTech Connect (OSTI)

      Lincoln, Don

      2015-07-28

      The LHC is the world’s highest energy particle accelerator and scientists use it to record an unprecedented amount of data. This data is recorded in electronic format and it requires an enormous computational infrastructure to convert the raw data into conclusions about the fundamental rules that govern matter. In this video, Fermilab’s Dr. Don Lincoln gives us a sense of just how much data is involved and the incredible computer resources that make it all possible.

    20. Topic A Note: Includes STEPS Subtopic

      Energy Savers [EERE]

      Topic A Note: Includes STEPS Subtopic 33 Total Projects Developing and Enhancing Workforce Training Programs

    1. High Throughput Computing Impact on Meta Genomics (Metagenomics Informatics Challenges Workshop: 10K Genomes at a Time)

      ScienceCinema (OSTI)

      Gore, Brooklin [Morgridge Institute for Research]

      2013-01-22

      This presentation includes a brief background on High Throughput Computing, correlating gene transcription factors, optical mapping, genotype to phenotype mapping via QTL analysis, and current work on next gen sequencing.

    2. High Throughput Computing Impact on Meta Genomics (Metagenomics Informatics Challenges Workshop: 10K Genomes at a Time)

      SciTech Connect (OSTI)

      Gore, Brooklin [Morgridge Institute for Research]

      2011-10-12

      This presentation includes a brief background on High Throughput Computing, correlating gene transcription factors, optical mapping, genotype to phenotype mapping via QTL analysis, and current work on next gen sequencing.

    3. Proposal for grid computing for nuclear applications

      SciTech Connect (OSTI)

      Idris, Faridah Mohamad; Ismail, Saaidi; Haris, Mohd Fauzi B.; Sulaiman, Mohamad Safuan B.; Aslan, Mohd Dzul Aiman Bin.; Samsudin, Nursuliza Bt.; Ibrahim, Maizura Bt.; Ahmad, Megat Harun Al Rashid B. Megat; Yazid, Hafizal B.; Jamro, Rafhayudi B.; Azman, Azraf B.; Rahman, Anwar B. Abdul; Ibrahim, Mohd Rizal B. Mamat; Muhamad, Shalina Bt. Sheik; Hassan, Hasni; Abdullah, Wan Ahmad Tajuddin Wan; Ibrahim, Zainol Abidin; Zolkapli, Zukhaimira; Anuar, Afiq Aizuddin; Norjoharuddeen, Nurfikri; and others

      2014-02-12

      The use of computer clusters for computational sciences, including computational physics, is vital as it provides the computing power to crunch big numbers at a faster rate. In compute-intensive applications that require high resolution, such as Monte Carlo simulation, the use of computer clusters in a grid form, which supplies computational power to any node within the grid that needs it, has now become a necessity. In this paper, we describe how clusters running a specific application can use resources within the grid to speed up the computing process.
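
      Monte Carlo workloads parallelize cleanly because samples are independent, which is why grids suit them; the sketch below splits a pi estimate across local worker processes as stand-ins for grid nodes.

        # Self-contained parallel Monte Carlo: estimate pi by counting random
        # points inside the unit quarter-circle, split across workers.
        import random
        from multiprocessing import Pool

        def count_hits(n_samples):
            """Count random points falling inside the unit quarter-circle."""
            rng = random.Random()
            return sum(rng.random()**2 + rng.random()**2 <= 1.0
                       for _ in range(n_samples))

        if __name__ == "__main__":
            n_workers, n_per_worker = 4, 250_000
            with Pool(n_workers) as pool:
                hits = sum(pool.map(count_hits, [n_per_worker] * n_workers))
            print("pi ~", 4 * hits / (n_workers * n_per_worker))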

    4. Computational mechanics

      SciTech Connect (OSTI)

      Goudreau, G.L.

      1993-03-01

      The Computational Mechanics thrust area sponsors research into the underlying solid, structural and fluid mechanics and heat transfer necessary for the development of state-of-the-art general purpose computational software. The scale of computational capability spans office workstations, departmental computer servers, and Cray-class supercomputers. The DYNA, NIKE, and TOPAZ codes have achieved world fame through our broad collaborators program, in addition to their strong support of ongoing Lawrence Livermore National Laboratory (LLNL) programs. Several technology transfer initiatives have been based on these established codes, teaming LLNL analysts and researchers with counterparts in industry, extending code capability to specific industrial interests of casting, metalforming, and automobile crash dynamics. The next-generation solid/structural mechanics code, ParaDyn, is targeted toward massively parallel computers, which will extend performance from gigaflop to teraflop power. Our work for FY-92 is described in the following eight articles: (1) Solution Strategies: New Approaches for Strongly Nonlinear Quasistatic Problems Using DYNA3D; (2) Enhanced Enforcement of Mechanical Contact: The Method of Augmented Lagrangians; (3) ParaDyn: New Generation Solid/Structural Mechanics Codes for Massively Parallel Processors; (4) Composite Damage Modeling; (5) HYDRA: A Parallel/Vector Flow Solver for Three-Dimensional, Transient, Incompressible Viscous Flow; (6) Development and Testing of the TRIM3D Radiation Heat Transfer Code; (7) A Methodology for Calculating the Seismic Response of Critical Structures; and (8) Reinforced Concrete Damage Modeling.

    5. Cloud computing security.

      SciTech Connect (OSTI)

      Shin, Dongwan; Claycomb, William R.; Urias, Vincent E.

      2010-10-01

      Cloud computing is a paradigm rapidly being embraced by government and industry as a solution for cost-savings, scalability, and collaboration. While a multitude of applications and services are available commercially for cloud-based solutions, research in this area has yet to address the full spectrum of potential challenges facing cloud computing. This tutorial aims to provide researchers with a fundamental understanding of cloud computing, with the goals of identifying a broad range of potential research topics, and inspiring a new surge in research to address current issues. We will also discuss real implementations of research-oriented cloud computing systems for both academia and government, including configuration options, hardware issues, challenges, and solutions.

    6. Computing and Computational Sciences Directorate - Contacts

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Home › About Us Contacts Jeff Nichols Associate Laboratory Director Computing and Computational Sciences Becky Verastegui Directorate Operations Manager Computing and Computational Sciences Directorate Michael Bartell Chief Information Officer Information Technologies Services Division Jim Hack Director, Climate Science Institute National Center for Computational Sciences Shaun Gleason Division Director Computational Sciences and Engineering Barney Maccabe Division Director Computer Science

    7. Compute Nodes

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Compute Nodes Compute Node Configuration:
      6,384 nodes
      2 twelve-core AMD 'MagnyCours' 2.1-GHz processors per node
      24 cores per node (153,216 total cores)
      32 GB DDR3 1333-MHz memory per node (6,000 nodes)
      64 GB DDR3 1333-MHz memory per node (384 nodes)
      Peak Gflop/s rate: 8.4 Gflops/core, 201.6 Gflops/node, 1.28 Petaflops for the entire machine
      Each core has its own L1 and L2 caches, of 64 KB and 512 KB respectively. One 6-MB

    8. Dedicated heterogeneous node scheduling including backfill scheduling

      DOE Patents [OSTI]

      Wood, Robert R. (Livermore, CA); Eckert, Philip D. (Livermore, CA); Hommes, Gregg (Pleasanton, CA)

      2006-07-25

      A method and system for job backfill scheduling of dedicated heterogeneous nodes in a multi-node computing environment. Heterogeneous nodes are grouped into homogeneous node sub-pools. For each sub-pool, a free node schedule (FNS) is created to chart the free nodes over time. For each prioritized job, the FNSs of sub-pools having nodes usable by that job are used to determine the earliest time range (ETR) capable of running the job. Once the ETR is determined for a particular job, the job is scheduled to run in that ETR. If the ETR determined for a lower priority job (LPJ) has a start time earlier than that of a higher priority job (HPJ), then the LPJ is scheduled in that ETR only if it would not disturb the anticipated start times of any HPJ previously scheduled for a future time. Thus, efficient utilization and throughput of such computing environments may be increased by utilizing resources that would otherwise remain idle.
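
      A simplified, single-sub-pool rendering of the scheduling idea is sketched below: jobs are reserved in priority order against a free-node schedule, so a lower-priority job fills an early gap only when that cannot delay any reservation already made. This is a stand-in illustration, not the patented method.

        # free[t] counts free nodes in time slot t (a simple free-node schedule)
        def earliest_start(free, nodes, runtime):
            """Earliest slot where `nodes` nodes are free for `runtime` slots."""
            for t in range(len(free) - runtime + 1):
                if all(free[t + k] >= nodes for k in range(runtime)):
                    return t
            return None

        def schedule(jobs, slots=12, pool=8):
            free = [pool] * slots
            placed = []
            for name, nodes, runtime in jobs:   # jobs ordered by priority
                t = earliest_start(free, nodes, runtime)
                if t is None:
                    continue
                for k in range(runtime):        # reserve: update the FNS
                    free[t + k] -= nodes
                placed.append((name, t))
            return placed

        jobs = [("HPJ-1", 8, 4),   # needs the whole pool: runs in slots 0-3
                ("HPJ-2", 6, 4),   # must wait: slots 4-7
                ("LPJ-1", 2, 4)]   # backfills slots 4-7 alongside HPJ-2
        print(schedule(jobs))      # [('HPJ-1', 0), ('HPJ-2', 4), ('LPJ-1', 4)]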

    9. Computing Resources

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Resources This page is the repository for sundry items of information relevant to general computing on BooNE. If you have a question or problem that isn't answered here, or a suggestion for improving this page or the information on it, please mail boone-computing@fnal.gov and we'll do our best to address any issues. Note about this page Some links on this page point to www.everything2.com, and are meant to give an idea about a concept or thing without necessarily wading through a whole website

    10. Text analysis methods, text analysis apparatuses, and articles of manufacture

      DOE Patents [OSTI]

      Whitney, Paul D; Willse, Alan R; Lopresti, Charles A; White, Amanda M

      2014-10-28

      Text analysis methods, text analysis apparatuses, and articles of manufacture are described according to some aspects. In one aspect, a text analysis method includes accessing information indicative of data content of a collection of text comprising a plurality of different topics, using a computing device, analyzing the information indicative of the data content, and using results of the analysis, identifying a presence of a new topic in the collection of text.
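
      One simple way to operationalize "identifying a presence of a new topic" is to flag documents dominated by vocabulary unseen in the collection so far; the sketch below does exactly that and does not reproduce the patent's statistical method.

        # Hedged sketch: flag a document as a possible new topic when most
        # of its tokens are absent from the collection's vocabulary.
        from collections import Counter

        def novelty(doc_tokens, known_vocab):
            """Fraction of a document's tokens never seen in the collection."""
            if not doc_tokens:
                return 0.0
            unseen = sum(1 for tok in doc_tokens if tok not in known_vocab)
            return unseen / len(doc_tokens)

        collection = ["reactor coolant flow model", "coolant pump flow test"]
        known_vocab = Counter(tok for doc in collection for tok in doc.split())

        new_doc = "quantum qubit decoherence model".split()
        if novelty(new_doc, known_vocab) > 0.5:   # threshold is arbitrary
            print("possible new topic:", new_doc)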

    11. Computers as tools

      SciTech Connect (OSTI)

      Eriksson, I.V.

      1994-12-31

      The following message was recently posted on a bulletin board and clearly shows the relevance of the conference theme: "The computer and digital networks seem poised to change whole regions of human activity -- how we record knowledge, communicate, learn, work, understand ourselves and the world. What's the best framework for understanding this digitalization, or virtualization, of seemingly everything? ... Clearly, symbolic tools like the alphabet, book, and mechanical clock have changed some of our most fundamental notions -- self, identity, mind, nature, time, space. Can we say what the computer, a purely symbolic 'machine,' is doing to our thinking in these areas? Or is it too early to say, given how much more powerful and less expensive the technology seems destined to become in the next few decades?" (Verity, 1994) Computers certainly affect our lives and way of thinking, but what have computers to do with ethics? A narrow approach would be that, on the one hand, people can and do abuse computer systems and, on the other hand, people can be abused by them. Well-known examples of the former are computer crimes such as the theft of money, services, and information. The latter can be exemplified by violation of privacy, health hazards, and computer monitoring. Broadening the concept from computers to information systems (ISs) and information technology (IT) gives a wider perspective. Computers are just the hardware part of information systems, which also include software, people, and data. Information technology is the concept preferred today. It extends to communication, which is an essential part of information processing. Now let us repeat the question: What has IT to do with ethics? Verity mentioned changes in "how we record knowledge, communicate, learn, work, understand ourselves and the world".

    12. INSTRUMENTATION, INCLUDING NUCLEAR AND PARTICLE DETECTORS; RADIATION

      Office of Scientific and Technical Information (OSTI)

      interval technical basis document Chiaro, P.J. Jr. 44 INSTRUMENTATION, INCLUDING NUCLEAR AND PARTICLE DETECTORS; RADIATION DETECTORS; RADIATION MONITORS; DOSEMETERS;...

    13. Advanced Scientific Computing Research

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Advanced Scientific Computing Research Advanced Scientific Computing Research Discovering, developing, and deploying computational and networking capabilities to analyze, model,...

    14. Internode data communications in a parallel computer

      DOE Patents [OSTI]

      Archer, Charles J.; Blocksome, Michael A.; Miller, Douglas R.; Parker, Jeffrey J.; Ratterman, Joseph D.; Smith, Brian E.

      2013-09-03

      Internode data communications in a parallel computer that includes compute nodes that each include main memory and a messaging unit, the messaging unit including computer memory and coupling compute nodes for data communications, in which, for each compute node at compute node boot time: a messaging unit allocates, in the messaging unit's computer memory, a predefined number of message buffers, each message buffer associated with a process to be initialized on the compute node; receives, prior to initialization of a particular process on the compute node, a data communications message intended for the particular process; and stores the data communications message in the message buffer associated with the particular process. Upon initialization of the particular process, the process establishes a messaging buffer in main memory of the compute node and copies the data communications message from the message buffer of the messaging unit into the message buffer of main memory.
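
      A toy model of the boot-time buffering described above: buffers are pre-allocated per expected process, early messages are parked in them, and a process drains its buffer into main memory when it initializes. The names and structure below are illustrative only.

        class MessagingUnit:
            def __init__(self, expected_procs):
                # boot time: one pre-allocated buffer per process-to-be
                self.buffers = {pid: [] for pid in expected_procs}

            def receive(self, pid, message):
                self.buffers[pid].append(message)  # process may not exist yet

            def drain(self, pid):
                msgs, self.buffers[pid] = self.buffers[pid], []
                return msgs                        # handed to main memory

        mu = MessagingUnit(expected_procs=[0, 1])
        mu.receive(1, "early data for rank 1")     # arrives before process init

        main_memory = {1: mu.drain(1)}             # process 1 initializes, copies
        print(main_memory)                         # {1: ['early data for rank 1']}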

    15. Internode data communications in a parallel computer

      DOE Patents [OSTI]

      Archer, Charles J; Blocksome, Michael A; Miller, Douglas R; Parker, Jeffrey J; Ratterman, Joseph D; Smith, Brian E

      2014-02-11

      Internode data communications in a parallel computer that includes compute nodes that each include main memory and a messaging unit, the messaging unit including computer memory and coupling compute nodes for data communications, in which, for each compute node at compute node boot time: a messaging unit allocates, in the messaging unit's computer memory, a predefined number of message buffers, each message buffer associated with a process to be initialized on the compute node; receives, prior to initialization of a particular process on the compute node, a data communications message intended for the particular process; and stores the data communications message in the message buffer associated with the particular process. Upon initialization of the particular process, the process establishes a messaging buffer in main memory of the compute node and copies the data communications message from the message buffer of the messaging unit into the message buffer of main memory.

    16. Sandia Energy - Computational Science

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Computational Science Home Energy Research Advanced Scientific Computing Research (ASCR) Computational Science

    17. Computer System,

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      System, Cluster, and Networking Summer Institute New Mexico Consortium and Los Alamos National Laboratory HOW TO APPLY Applications will be accepted JANUARY 5 - FEBRUARY 13, 2016 Computing and Information Technology undergraduate students are encouraged to apply. Must be a U.S. citizen. * Submit a current resume; * Official University Transcript (with spring courses posted and/or a copy of spring 2016 schedule) 3.0 GPA minimum; * One Letter of Recommendation from a Faculty Member; and * Letter of

    18. Computing Events

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Events Computing Events Spotlighting the most advanced scientific and technical applications in the world! Featuring exhibits of the latest and greatest technologies from industry, academia and government research organizations; many of these technologies will be seen for the first time in Denver. Supercomputing Conference 13 Denver, Colorado November 17-22, 2013 Spotlighting the most advanced scientific and technical applications in the world, SC13 will bring together the international

    19. User manual for AQUASTOR: a computer model for cost analysis of aquifer thermal-energy storage coupled with district-heating or cooling systems. Volume II. Appendices

      SciTech Connect (OSTI)

      Huber, H.D.; Brown, D.R.; Reilly, R.W.

      1982-04-01

      A computer model called AQUASTOR was developed for calculating the cost of district heating (cooling) using thermal energy supplied by an aquifer thermal energy storage (ATES) system. The AQUASTOR model can simulate ATES district heating systems using stored hot water or ATES district cooling systems using stored chilled water. AQUASTOR simulates the complete ATES district heating (cooling) system, which consists of two principal parts: the ATES supply system and the district heating (cooling) distribution system. The supply system submodel calculates the life-cycle cost of thermal energy supplied to the distribution system by simulating the technical design and cash flows for the exploration, development, and operation of the ATES supply system. The distribution system submodel calculates the life-cycle cost of heat (chill) delivered by the distribution system to the end-users by simulating the technical design and cash flows for the construction and operation of the distribution system. The model combines the technical characteristics of the supply system and the technical characteristics of the distribution system with financial and tax conditions for the entities operating the two systems into one techno-economic model. This provides the flexibility to individually or collectively evaluate the impact of different economic and technical parameters, assumptions, and uncertainties on the cost of providing district heating (cooling) with an ATES system. This volume contains all the appendices, including supply and distribution system cost equations and models, descriptions of predefined residential districts, key equations for the cooling degree-hour methodology, a listing of the sample case output, and Appendix H, which contains the indices for supply input parameters, distribution input parameters, and AQUASTOR subroutines.
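
      The life-cycle costing AQUASTOR performs can be illustrated with a generic levelized-cost calculation: discounted costs divided by discounted energy delivered. The inputs below are hypothetical and the formula is a textbook simplification, not AQUASTOR's own equations.

        def levelized_cost(capital, annual_om, annual_energy_gj, years, rate):
            """Levelized cost: discounted costs over discounted energy."""
            costs = capital + sum(annual_om / (1 + rate) ** y
                                  for y in range(1, years + 1))
            energy = sum(annual_energy_gj / (1 + rate) ** y
                         for y in range(1, years + 1))
            return costs / energy

        # hypothetical ATES district-heating inputs: $5M capital, $200k/yr
        # O&M, 60,000 GJ/yr delivered, 30-year life, 5% discount rate
        print(f"{levelized_cost(5e6, 2e5, 6e4, 30, 0.05):.2f} $/GJ")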

    20. Gas storage materials, including hydrogen storage materials

      DOE Patents [OSTI]

      Mohtadi, Rana F; Wicks, George G; Heung, Leung K; Nakamura, Kenji

      2014-11-25

      A material for the storage and release of gases comprises a plurality of hollow elements, each hollow element comprising a porous wall enclosing an interior cavity, the interior cavity including structures of a solid-state storage material. In particular examples, the storage material is a hydrogen storage material, such as a solid state hydride. An improved method for forming such materials includes the solution diffusion of a storage material solution through a porous wall of a hollow element into an interior cavity.

    1. Gas storage materials, including hydrogen storage materials

      DOE Patents [OSTI]

      Mohtadi, Rana F; Wicks, George G; Heung, Leung K; Nakamura, Kenji

      2013-02-19

      A material for the storage and release of gases comprises a plurality of hollow elements, each hollow element comprising a porous wall enclosing an interior cavity, the interior cavity including structures of a solid-state storage material. In particular examples, the storage material is a hydrogen storage material such as a solid state hydride. An improved method for forming such materials includes the solution diffusion of a storage material solution through a porous wall of a hollow element into an interior cavity.

    2. Communications circuit including a linear quadratic estimator

      DOE Patents [OSTI]

      Ferguson, Dennis D.

      2015-07-07

      A circuit includes a linear quadratic estimator (LQE) configured to receive a plurality of measurements of a signal. The LQE is configured to weight the measurements based on their respective uncertainties to produce weighted averages. The circuit further includes a controller coupled to the LQE and configured to selectively adjust at least one data link parameter associated with a communication channel in response to receiving the weighted averages.
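
      The weighting step described, combining measurements in inverse proportion to their uncertainties, reduces in the static case to inverse-variance averaging, sketched below; a full LQE or Kalman estimator would additionally model the signal's dynamics.

        def weighted_average(measurements, variances):
            """Combine measurements weighted by inverse variance."""
            weights = [1.0 / v for v in variances]
            total = sum(weights)
            return sum(w * m for w, m in zip(weights, measurements)) / total

        # a precise and a noisy measurement of the same signal value:
        # the estimate lands near the precise one (~10.09)
        print(weighted_average([10.2, 9.0], [0.1, 1.0]))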

    3. Intentionally Including - Engaging Minorities in Physics Careers |

      Office of Environmental Management (EM)

      Department of Energy Intentionally Including - Engaging Minorities in Physics Careers April 24, 2013 Joining Director Dot Harris (second from left) were Marlene Kaplan, the Deputy Director of Education and director of EPP, National Oceanic and Atmospheric Administration, Claudia Rankins, a Program Officer with the National Science Foundation and Jim Stith, the past Vice-President of the American Institute of

    4. Argonne's Laboratory Computing Resource Center : 2005 annual report.

      SciTech Connect (OSTI)

      Bair, R. B.; Coghlan, S. C; Kaushik, D. K.; Riley, K. R.; Valdes, J. V.; Pieper, G. P.

      2007-06-30

      Argonne National Laboratory founded the Laboratory Computing Resource Center in the spring of 2002 to help meet pressing program needs for computational modeling, simulation, and analysis. The guiding mission is to provide critical computing resources that accelerate the development of high-performance computing expertise, applications, and computations to meet the Laboratory's challenging science and engineering missions. The first goal of the LCRC was to deploy a mid-range supercomputing facility to support the unmet computational needs of the Laboratory. To this end, in September 2002, the Laboratory purchased a 350-node computing cluster from Linux NetworX. This cluster, named 'Jazz', achieved over a teraflop of computing power (10¹² floating-point calculations per second) on standard tests, making it the Laboratory's first terascale computing system and one of the fifty fastest computers in the world at the time. Jazz was made available to early users in November 2002 while the system was undergoing development and configuration. In April 2003, Jazz was officially made available for production operation. Since then, the Jazz user community has grown steadily. By the end of fiscal year 2005, there were 62 active projects on Jazz involving over 320 scientists and engineers. These projects represent a wide cross-section of Laboratory expertise, including work in biosciences, chemistry, climate, computer science, engineering applications, environmental science, geoscience, information science, materials science, mathematics, nanoscience, nuclear engineering, and physics. Most important, many projects have achieved results that would have been unobtainable without such a computing resource. The LCRC continues to improve the computational science and engineering capability and quality at the Laboratory. Specific goals include expansion of the use of Jazz to new disciplines and Laboratory initiatives, teaming with Laboratory infrastructure providers to develop comprehensive scientific data management capabilities, expanding Argonne staff use of national computing facilities, and improving the scientific reach and performance of Argonne's computational applications. Furthermore, recognizing that Jazz is fully subscribed, with considerable unmet demand, the LCRC has begun developing a 'path forward' plan for additional computing resources.

    5. Unsolicited Projects in 2012: Research in Computer Architecture...

      Office of Science (SC) Website

      Computer Science Unsolicited Projects in 2012: Research in Computer Architecture, ... Exploration of Exascale In Situ Visualization and Analysis Approaches. ...

    6. NERSC Enhances PDSF, Genepool Computing Capabilities

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Enhances PDSF, Genepool Computing Capabilities NERSC Enhances PDSF, Genepool Computing Capabilities Linux cluster expansion speeds data access and analysis January 3, 2014 Christmas came early for users of the Parallel Distributed Systems Facility (PDSF) and Genepool systems at Department of Energy's National Energy Research Scientific Computer Center (NERSC). Throughout November members of NERSC's Computational Systems Group were busy expanding the Linux computing resources that support PDSF's

    7. Extreme Scale Computing, Co-design

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Extreme Scale Computing, Co-design Computational co-design may facilitate revolutionary designs in the next generation of supercomputers. Expertise: Tim Germann (Physics and Chemistry of Materials); Allen McPherson (Energy and Infrastructure Analysis); Turab Lookman (Physics and Condensed Matter and Complex Systems). Computational co-design involves developing the interacting components of a

    8. Scramjet including integrated inlet and combustor

      SciTech Connect (OSTI)

      Kutschenreuter, P.H. Jr.; Blanton, J.C.

      1992-02-04

      This patent describes a scramjet engine. It comprises: a first surface including an aft facing step; a cowl including: a leading edge and a trailing edge; an upper surface and a lower surface extending between the leading edge and the trailing edge; the cowl upper surface being spaced from and generally parallel to the first surface to define an integrated inlet-combustor therebetween having an inlet for receiving and channeling into the inlet-combustor supersonic inlet airflow; means for injecting fuel into the inlet-combustor at the step for mixing with the supersonic inlet airflow for generating supersonic combustion gases; and further including a spaced pair of sidewalls extending from the first surface to the cowl upper surface, wherein the integrated inlet-combustor is generally rectangular and defined by the sidewall pair, the first surface and the cowl upper surface.

    9. Link failure detection in a parallel computer

      DOE Patents [OSTI]

      Archer, Charles J. (Rochester, MN); Blocksome, Michael A. (Rochester, MN); Megerian, Mark G. (Rochester, MN); Smith, Brian E. (Rochester, MN)

      2010-11-09

      Methods, apparatus, and products are disclosed for link failure detection in a parallel computer including compute nodes connected in a rectangular mesh network, each pair of adjacent compute nodes in the rectangular mesh network connected together using a pair of links, that includes: assigning each compute node to either a first group or a second group such that adjacent compute nodes in the rectangular mesh network are assigned to different groups; sending, by each of the compute nodes assigned to the first group, a first test message to each adjacent compute node assigned to the second group; determining, by each of the compute nodes assigned to the second group, whether the first test message was received from each adjacent compute node assigned to the first group; and notifying a user, by each of the compute nodes assigned to the second group, whether the first test message was received.
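
      The two-group test is easy to simulate on a small mesh: checkerboard-color the nodes, let one color send over every link, and let the other color report links on which nothing arrived. The mesh size and the injected fault below are arbitrary.

        # Toy two-group link test on a W x H rectangular mesh.
        W, H = 3, 2

        def neighbors(x, y):
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                if 0 <= x + dx < W and 0 <= y + dy < H:
                    yield (x + dx, y + dy)

        broken = {((0, 0), (1, 0))}              # pretend this link is down

        received = set()
        for x in range(W):
            for y in range(H):
                if (x + y) % 2 == 0:             # first group: senders
                    for nbr in neighbors(x, y):  # second group: receivers
                        if ((x, y), nbr) not in broken:
                            received.add(((x, y), nbr))

        for x in range(W):
            for y in range(H):
                if (x + y) % 2 == 1:             # receivers check each link
                    for nbr in neighbors(x, y):
                        if (nbr, (x, y)) not in received:
                            print("link failure:", nbr, "->", (x, y))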

    10. Broadcasting a message in a parallel computer

      DOE Patents [OSTI]

      Berg, Jeremy E. (Rochester, MN); Faraj, Ahmad A. (Rochester, MN)

      2011-08-02

      Methods, systems, and products are disclosed for broadcasting a message in a parallel computer. The parallel computer includes a plurality of compute nodes connected together using a data communications network. The data communications network is optimized for point-to-point data communications and is characterized by at least two dimensions. The compute nodes are organized into at least one operational group of compute nodes for collective parallel operations of the parallel computer. One compute node of the operational group is assigned to be a logical root. Broadcasting a message in a parallel computer includes: establishing a Hamiltonian path along all of the compute nodes in at least one plane of the data communications network and in the operational group; and broadcasting, by the logical root to the remaining compute nodes, the logical root's message along the established Hamiltonian path.
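
      In a 2-D mesh, a boustrophedon ("snake") walk is one concrete Hamiltonian path, so the root's message can be forwarded hop by hop with each node visited exactly once; the sketch below computes such a path and replays the forwarding. The patent's actual path construction may differ.

        def hamiltonian_path(width, height):
            """Snake walk through a width x height mesh: every node once,
            consecutive nodes always adjacent."""
            path = []
            for y in range(height):
                xs = range(width) if y % 2 == 0 else reversed(range(width))
                path.extend((x, y) for x in xs)
            return path

        def broadcast(message, width, height):
            delivered = {}
            for node in hamiltonian_path(width, height):
                delivered[node] = message        # forward along the path
            return delivered

        print(list(broadcast("msg", 3, 3)))
        # visits (0,0), (1,0), (2,0), (2,1), (1,1), (0,1), (0,2), (1,2), (2,2)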

    11. Method and system for knowledge discovery using non-linear statistical analysis and a 1st and 2nd tier computer program

      DOE Patents [OSTI]

      Hively, Lee M.

      2011-07-12

      The invention relates to a method and apparatus for simultaneously processing different sources of test data into informational data and then processing different categories of informational data into knowledge-based data. The knowledge-based data can then be communicated between nodes in a system of multiple computers according to rules for a type of complex, hierarchical computer system modeled on a human brain.

    12. Electric Power Monthly, August 1990. [Glossary included

      SciTech Connect (OSTI)

      Not Available

      1990-11-29

      The Electric Power Monthly (EPM) presents monthly summaries of electric utility statistics at the national, Census division, and State level. The purpose of this publication is to provide energy decisionmakers with accurate and timely information that may be used in forming various perspectives on electric issues that lie ahead. Data includes generation by energy source (coal, oil, gas, hydroelectric, and nuclear); generation by region; consumption of fossil fuels for power generation; sales of electric power, cost data; and unusual occurrences. A glossary is included.

    13. Performing an allreduce operation on a plurality of compute nodes of a parallel computer

      DOE Patents [OSTI]

      Faraj, Ahmad (Rochester, MN)

      2012-04-17

      Methods, apparatus, and products are disclosed for performing an allreduce operation on a plurality of compute nodes of a parallel computer. Each compute node includes at least two processing cores. Each processing core has contribution data for the allreduce operation. Performing an allreduce operation on a plurality of compute nodes of a parallel computer includes: establishing one or more logical rings among the compute nodes, each logical ring including at least one processing core from each compute node; performing, for each logical ring, a global allreduce operation using the contribution data for the processing cores included in that logical ring, yielding a global allreduce result for each processing core included in that logical ring; and performing, for each compute node, a local allreduce operation using the global allreduce results for each processing core on that compute node.
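
      The two-phase structure can be mimicked with plain lists: ring r collects core r of every node and reduces globally, then each node combines its cores' ring results locally. The node count, core count, and use of summation below are illustrative only.

        nodes = [  # contribution data: one row per node, one value per core
            [1, 2],
            [3, 4],
            [5, 6],
        ]
        n_cores = len(nodes[0])

        # phase 1: ring r holds core r of every node; allreduce (sum) per ring
        ring_results = [sum(node[r] for node in nodes) for r in range(n_cores)]

        # phase 2: each node locally allreduces its cores' ring results
        total = sum(ring_results)
        print([total] * len(nodes))   # every node ends with the global sum: 21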

    14. Computing at JLab

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Computing at JLab: Accelerator Controls, CAD, CDEV, CODA, Computer Center, High Performance Computing, Scientific Computing, JLab Computer Silo

    15. computer graphics

      Energy Science and Technology Software Center (OSTI)

      2001-06-08

      MUSTAFA is a scientific visualization package for visualizing data in the EXODUSII file format. These data files are typically produced by Sandia's suite of finite element engineering analysis codes.

    16. Controlling data transfers from an origin compute node to a target compute node

      DOE Patents [OSTI]

      Archer, Charles J. (Rochester, MN); Blocksome, Michael A. (Rochester, MN); Ratterman, Joseph D. (Rochester, MN); Smith, Brian E. (Rochester, MN)

      2011-06-21

      Methods, apparatus, and products are disclosed for controlling data transfers from an origin compute node to a target compute node that include: receiving, by an application messaging module on the target compute node, an indication of a data transfer from an origin compute node to the target compute node; and administering, by the application messaging module on the target compute node, the data transfer using one or more messaging primitives of a system messaging module in dependence upon the indication.

    17. computational-hydraulics

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      and Aerodynamics using STAR-CCM+ for CFD Analysis March 21-22, 2012 Argonne, Illinois Dr. Steven Lottes A training course in the use of computational hydraulics and aerodynamics CFD software using CD-adapco's STAR-CCM+ for analysis will be held at TRACC from March 21-22, 2012. The course assumes a basic knowledge of fluid mechanics and will make extensive use of hands-on tutorials. CD-adapco will issue

    18. How Do You Reduce Energy Use from Computers and Electronics?...

      Broader source: Energy.gov (indexed) [DOE]

      discussed some ways to reduce the energy used by computers and electronics. Some tips include ensuring your computer is configured for optimal energy savings, turning off devices...

    19. Subterranean barriers including at least one weld

      DOE Patents [OSTI]

      Nickelson, Reva A.; Sloan, Paul A.; Richardson, John G.; Walsh, Stephanie; Kostelnik, Kevin M.

      2007-01-09

      A subterranean barrier and method for forming same are disclosed, the barrier including a plurality of casing strings wherein at least one casing string of the plurality of casing strings may be affixed to at least another adjacent casing string of the plurality of casing strings through at least one weld, at least one adhesive joint, or both. A method and system for nondestructively inspecting a subterranean barrier is disclosed. For instance, a radiographic signal may be emitted from within a casing string toward an adjacent casing string and the radiographic signal may be detected from within the adjacent casing string. A method of repairing a barrier including removing at least a portion of a casing string and welding a repair element within the casing string is disclosed. A method of selectively heating at least one casing string forming at least a portion of a subterranean barrier is disclosed.

    20. Photoactive devices including porphyrinoids with coordinating additives

      DOE Patents [OSTI]

      Forrest, Stephen R; Zimmerman, Jeramy; Yu, Eric K; Thompson, Mark E; Trinh, Cong; Whited, Matthew; Diev, Vlacheslav

      2015-05-12

      Coordinating additives are included in porphyrinoid-based materials to promote intermolecular organization and improve one or more photoelectric characteristics of the materials. The coordinating additives are selected from fullerene compounds and organic compounds having free electron pairs. Combinations of different coordinating additives can be used to tailor the characteristic properties of such porphyrinoid-based materials, including porphyrin oligomers. Bidentate ligands are one type of coordinating additive that can form coordination bonds with a central metal ion of two different porphyrinoid compounds to promote porphyrinoid alignment and/or pi-stacking. The coordinating additives can shift the absorption spectrum of a photoactive material toward higher wavelengths, increase the external quantum efficiency of the material, or both.

    1. Electric power monthly, September 1990. [Glossary included]

      SciTech Connect (OSTI)

      Not Available

      1990-12-17

      The purpose of this report is to provide energy decision makers with accurate and timely information that may be used in forming various perspectives on electric issues. The power plants considered include coal, petroleum, natural gas, hydroelectric, and nuclear power plants. Data are presented for power generation, fuel consumption, fuel receipts and cost, sales of electricity, and unusual occurrences at power plants. Data are compared at the national, Census division, and state levels. 4 figs., 52 tabs. (CK)

    2. Power generation method including membrane separation

      DOE Patents [OSTI]

      Lokhandwala, Kaaeid A. (Union City, CA)

      2000-01-01

      A method for generating electric power, such as at, or close to, natural gas fields. The method includes conditioning natural gas containing C.sub.3+ hydrocarbons and/or acid gas by means of a membrane separation step. This step creates a leaner, sweeter, drier gas, which is then used as combustion fuel to run a turbine, which is in turn used for power generation.

    3. Nuclear reactor shield including magnesium oxide

      DOE Patents [OSTI]

      Rouse, Carl A. (Del Mar, CA); Simnad, Massoud T. (La Jolla, CA)

      1981-01-01

      An improvement in nuclear reactor shielding of a type used in reactor applications involving significant amounts of fast neutron flux, the reactor shielding including means providing structural support, neutron moderator material, neutron absorber material, and other components, wherein at least a portion of the neutron moderator material is magnesium in the form of magnesium oxide, either alone or in combination with other moderator materials such as graphite and iron.

    4. Rotor assembly including superconducting magnetic coil

      DOE Patents [OSTI]

      Snitchler, Gregory L. (Shrewsbury, MA); Gamble, Bruce B. (Wellesley, MA); Voccio, John P. (Somerville, MA)

      2003-01-01

      Superconducting coils and methods of manufacture include a superconductor tape wound concentrically about and disposed along an axis of the coil to define an opening having a dimension which gradually decreases, in the direction along the axis, from a first end to a second end of the coil. Each turn of the superconductor tape has a broad surface maintained substantially parallel to the axis of the coil.

    5. TRAC-PF1/MOD1: an advanced best-estimate computer program for pressurized water reactor thermal-hydraulic analysis

      SciTech Connect (OSTI)

      Liles, D.R.; Mahaffy, J.H.

      1986-07-01

      The Los Alamos National Laboratory is developing the Transient Reactor Analysis Code (TRAC) to provide advanced best-estimate predictions of postulated accidents in light-water reactors. The TRAC-PF1/MOD1 program provides this capability for pressurized water reactors and for many thermal-hydraulic test facilities. The code features either a one- or a three-dimensional treatment of the pressure vessel and its associated internals, a two-fluid nonequilibrium hydrodynamics model with a noncondensable gas field and solute tracking, flow-regime-dependent constitutive equation treatment, optional reflood tracking capability for bottom-flood and falling-film quench fronts, and consistent treatment of entire accident sequences including the generation of consistent initial conditions. The stability-enhancing two-step (SETS) numerical algorithm is used in the one-dimensional hydrodynamics and permits this portion of the fluid dynamics to violate the material Courant condition. This technique permits large time steps and, hence, reduced running time for slow transients.
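
      For context, the material Courant condition that SETS permits the one-dimensional hydrodynamics to violate is the usual limit on the time step set by the fluid transit time across a mesh cell; in standard form (stated here for reference, not quoted from the report):

        % Explicit schemes must keep the time step below the material Courant
        % limit; SETS allows the one-dimensional hydrodynamics to exceed it.
        \Delta t \le \min_i \frac{\Delta x_i}{\lvert u_i \rvert}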

    6. Smart Grid Computational Tool | Open Energy Information

      Open Energy Info (EERE)

      project benefits. The Smart Grid Computational Tool employs the benefit analysis methodology that DOE uses to evaluate the Recovery Act smart grid projects. How it works: The...

    7. Determination of pH Including Hemoglobin Correction

      DOE Patents [OSTI]

      Maynard, John D. (Albuquerque, NM); Hendee, Shonn P. (Albuquerque, NM); Rohrscheib, Mark R. (Albuquerque, NM); Nunez, David (Albuquerque, NM); Alam, M. Kathleen (Cedar Crest, NM); Franke, James E. (Franklin, TN); Kemeny, Gabor J. (Madison, WI)

      2005-09-13

      Methods and apparatuses for determining the pH of a sample. A method can comprise determining an infrared spectrum of the sample and determining the hemoglobin concentration of the sample. The hemoglobin concentration and the infrared spectrum can then be used to determine the pH of the sample. In some embodiments, the hemoglobin concentration can be used to select a model relating infrared spectra to pH that is applicable at the determined hemoglobin concentration. In other embodiments, a model relating hemoglobin concentration and infrared spectra to pH can be used. An apparatus according to the present invention can comprise an illumination system, adapted to supply radiation to a sample; a collection system, adapted to collect radiation expressed from the sample responsive to the incident radiation; and an analysis system, adapted to relate information about the incident radiation, the expressed radiation, and the hemoglobin concentration of the sample to pH.
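
      A hypothetical sketch of the model-selection embodiment described above: the measured hemoglobin concentration picks which spectra-to-pH model applies. The concentration ranges and model coefficients below are invented for illustration; a real instrument would use calibrated models.

        import numpy as np

        # (hemoglobin range, g/dL) -> linear model coefficients on the IR spectrum
        # (invented values for illustration only)
        MODELS = {
            (0.0, 10.0): np.array([0.02, -0.01, 0.03]),
            (10.0, 20.0): np.array([0.03, -0.02, 0.01]),
        }
        INTERCEPT = 7.0  # illustrative baseline pH

        def predict_ph(spectrum, hemoglobin):
            # Select the model applicable at the determined hemoglobin
            # concentration, then apply it to the infrared spectrum.
            for (lo, hi), coeffs in MODELS.items():
                if lo <= hemoglobin < hi:
                    return INTERCEPT + float(coeffs @ spectrum)
            raise ValueError("no model applicable at this hemoglobin concentration")

        print(predict_ph(np.array([1.2, 0.8, 2.1]), hemoglobin=12.5))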

    8. High-Performance Computing for Advanced Smart Grid Applications

      SciTech Connect (OSTI)

      Huang, Zhenyu; Chen, Yousu

      2012-07-06

      The power grid is becoming far more complex as a result of the grid evolution meeting an information revolution. Due to the penetration of smart grid technologies, the grid is evolving at an unprecedented speed, and the information infrastructure is fundamentally improved by a large number of smart meters and sensors that produce amounts of data several orders of magnitude larger than before. How to pull data in, perform analysis, and put information out in a real-time manner is a fundamental challenge in smart grid operation and planning. The future power grid requires high performance computing to be one of the foundational technologies for developing the algorithms and tools to handle the significantly increased complexity. New techniques and computational capabilities are required to meet the demands for higher reliability and better asset utilization, including advanced algorithms and computing hardware for large-scale modeling, simulation, and analysis. This chapter summarizes the computational challenges in the smart grid and the need for high performance computing, and presents examples of how high performance computing might be used for future smart grid operation and planning.

    9. High Performance Computing

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      INL's high-performance computing center provides general-use scientific computing capabilities to support the lab's efforts in advanced...

    10. Computer Architecture Lab

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      The goal of the Computer Architecture Laboratory (CAL) is to engage in...

    11. Computer Security Risk Assessment

      Energy Science and Technology Software Center (OSTI)

      1992-02-11

      LAVA/CS (LAVA for Computer Security) is an application of the Los Alamos Vulnerability Assessment (LAVA) methodology specific to computer and information security. The software serves as a generic tool for identifying vulnerabilities in computer and information security safeguards systems. Although it does not perform a full risk assessment, the results from its analysis may provide valuable insights into security problems. LAVA/CS assumes that the system is exposed both to natural and environmental hazards and to deliberate malevolent actions by either insiders or outsiders. The user, in the process of answering the LAVA/CS questionnaire, identifies missing safeguards in 34 areas ranging from password management to personnel security and internal audit practices. Specific safeguards protecting a generic set of assets (or targets) from a generic set of threats (or adversaries) are considered. There are four generic assets: the facility, the organization's environment; the hardware, all computer-related hardware; the software, the information in machine-readable form stored both on-line and on transportable media; and the documents and displays, the information in human-readable form stored as hard-copy materials (manuals, reports, listings in full-size or microform), film, and screen displays. Two generic threats are considered: natural and environmental hazards, such as storms, fires, power abnormalities, and water and accidental maintenance damage; and on-site human threats, both intentional and accidental acts attributable to a perpetrator on the facility's premises.
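
      A minimal sketch, with invented scores, of the structure the LAVA/CS description implies: generic assets crossed with generic threats, and questionnaire answers flagging where safeguard coverage is thin. LAVA/CS itself works from a detailed 34-area questionnaire rather than the toy scoring shown here.

        # Generic assets and threats from the record; scores are invented.
        ASSETS = ["facility", "hardware", "software", "documents_and_displays"]
        THREATS = ["natural_and_environmental", "on_site_human"]

        # answers[(asset, threat)] = fraction of applicable safeguards in place
        answers = {(a, t): 0.8 for a in ASSETS for t in THREATS}
        answers[("software", "on_site_human")] = 0.2  # e.g. weak password management

        def missing_safeguards(answers, threshold=0.6):
            """Flag asset/threat pairs whose safeguard coverage is below threshold."""
            return [pair for pair, score in answers.items() if score < threshold]

        for asset, threat in missing_safeguards(answers):
            print(f"review safeguards protecting {asset} against {threat}")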

    12. Optical panel system including stackable waveguides

      DOE Patents [OSTI]

      DeSanto, Leonard; Veligdan, James T.

      2007-03-06

      An optical panel system including stackable waveguides is provided. The optical panel system displays a projected light image and comprises a plurality of planar optical waveguides in a stacked state. The optical panel system further comprises a support system that aligns and supports the waveguides in the stacked state. In one embodiment, the support system comprises at least one rod, wherein each waveguide contains at least one hole, and wherein each rod is positioned through a corresponding hole in each waveguide. In another embodiment, the support system comprises at least two opposing edge structures having the waveguides positioned therebetween, wherein each opposing edge structure contains a mating surface, wherein opposite edges of each waveguide contain mating surfaces which are complementary to the mating surfaces of the opposing edge structures, and wherein each mating surface of the opposing edge structures engages a corresponding complementary mating surface of the opposite edges of each waveguide.

    13. Optical panel system including stackable waveguides

      DOE Patents [OSTI]

      DeSanto, Leonard (Dunkirk, MD); Veligdan, James T. (Manorville, NY)

      2007-11-20

      An optical panel system including stackable waveguides is provided. The optical panel system displays a projected light image and comprises a plurality of planar optical waveguides in a stacked state. The optical panel system further comprises a support system that aligns and supports the waveguides in the stacked state. In one embodiment, the support system comprises at least one rod, wherein each waveguide contains at least one hole, and wherein each rod is positioned through a corresponding hole in each waveguide. In another embodiment, the support system comprises at least two opposing edge structures having the waveguides positioned therebetween, wherein each opposing edge structure contains a mating surface, wherein opposite edges of each waveguide contain mating surfaces which are complementary to the mating surfaces of the opposing edge structures, and wherein each mating surface of the opposing edge structures engages a corresponding complementary mating surface of the opposite edges of each waveguide.

    14. Thermovoltaic semiconductor device including a plasma filter

      DOE Patents [OSTI]

      Baldasaro, Paul F. (Clifton Park, NY)

      1999-01-01

      A thermovoltaic energy conversion device and related method for converting thermal energy into an electrical potential. An interference filter is provided on a semiconductor thermovoltaic cell to pre-filter black-body radiation. The semiconductor thermovoltaic cell includes a P/N junction supported on a substrate which converts incident thermal energy above the semiconductor junction band gap into electrical potential. The semiconductor substrate is doped to provide a plasma filter which reflects energy having a wavelength above the band-gap wavelength, and which is ineffectively filtered by the interference filter, back through the P/N junction to the source of radiation, thereby avoiding parasitic absorption of the unusable portion of the thermal radiation energy.
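
      For reference, the boundary between convertible and non-convertible radiation in such a cell is set by the junction band gap: only photons with wavelength below the band-gap wavelength can be converted. This is the standard semiconductor relation, not quoted from the patent:

        % Band-gap cutoff wavelength: photons with \lambda > \lambda_g cannot be
        % converted by the P/N junction and are reflected back to the source.
        \lambda_g = \frac{hc}{E_g}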

    15. Drapery assembly including insulated drapery liner

      DOE Patents [OSTI]

      Cukierski, Gwendolyn (Ithaca, NY)

      1983-01-01

      A drapery assembly is disclosed for covering a framed wall opening, the assembly including drapery panels hung on a horizontal traverse rod, the rod having a pair of master slides and means for displacing the master slides between open and closed positions. A pair of insulating liner panels are positioned behind the drapery, the remote side edges of the liner panels being connected with the side portions of the opening frame, and the adjacent side edges of the liner panels being connected with a pair of vertically arranged center support members adapted for sliding movement longitudinally of a horizontal track member secured to the upper horizontal portion of the opening frame. Pivotally arranged brackets connect the center support members with the master slides of the traverse rod whereby movement of the master slides to effect opening and closing of the drapery panels effects simultaneous opening and closing of the liner panels.

    16. Computational Science Research in Support of Petascale Electromagnetic Modeling

      SciTech Connect (OSTI)

      Lee, L.-Q.; Akcelik, V; Ge, L; Chen, S; Schussman, G; Candel, A; Li, Z; Xiao, L; Kabel, A; Uplenchwar, R; Ng, C; Ko, K; /SLAC

      2008-06-20

      Computational science research components were vital parts of the SciDAC-1 accelerator project and are continuing to play a critical role in the newly funded SciDAC-2 accelerator project, the Community Petascale Project for Accelerator Science and Simulation (ComPASS). Recent advances and achievements in the area of computational science research in support of petascale electromagnetic modeling for accelerator design analysis are presented, which include shape determination of superconducting RF cavities, a mesh-based multilevel preconditioner for solving highly indefinite linear systems, a moving window using h- or p-refinement for time-domain short-range wakefield calculations, and improved scalable application I/O.

    17. Engine lubrication circuit including two pumps

      DOE Patents [OSTI]

      Lane, William H.

      2006-10-03

      A lubrication pump coupled to the engine is sized such that it can supply the engine with a predetermined flow volume as soon as the engine reaches a peak torque engine speed. In engines that operate predominantly at speeds above the peak torque engine speed, the lubrication pump often produces lubrication fluid in excess of the predetermined flow volume, which is bypassed back to a lubrication fluid source. This arguably results in wasted power. In order to lubricate an engine more efficiently, a lubrication circuit includes a lubrication pump and a variable delivery pump. The lubrication pump is operably coupled to the engine, and the variable delivery pump is in communication with a pump output controller that is operable to vary a lubrication fluid output from the variable delivery pump as a function of at least one of engine speed and lubrication flow volume or system pressure. Thus, the lubrication pump can be sized to produce the predetermined flow volume at the speed range at which the engine predominantly operates, while the variable delivery pump supplements lubrication fluid delivery from the lubrication pump at engine speeds below the predominant engine speed range.
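
      An illustrative sketch of the two-pump idea: a fixed pump sized for the predominant speed range, plus a variable delivery pump commanded to make up the shortfall at lower speeds. The flow numbers and the simple control law are assumptions for illustration, not values from the patent.

        REQUIRED_FLOW = 40.0        # L/min needed by the engine (assumed)
        PREDOMINANT_SPEED = 1800.0  # rpm at which the fixed pump meets demand (assumed)

        def fixed_pump_flow(rpm):
            # Fixed-displacement pump output scales with engine speed.
            return REQUIRED_FLOW * rpm / PREDOMINANT_SPEED

        def variable_pump_flow(rpm):
            # Controller commands only the shortfall, so no flow is bypassed.
            return max(0.0, REQUIRED_FLOW - fixed_pump_flow(rpm))

        for rpm in (900, 1400, 1800, 2400):
            print(rpm, round(fixed_pump_flow(rpm), 1), round(variable_pump_flow(rpm), 1))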

    18. Caterpillar and Cummins Gain Edge Through Argonne's Rare Computer Modeling

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      and Analysis Resources | Argonne National Laboratory Caterpillar and Cummins Gain Edge Through Argonne's Rare Computer Modeling and Analysis Resources. A private industry success story.

    19. Barbara Helland Advanced Scientific Computing Research NERSC-HEP Requirements Review

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      7-28, 2012 Barbara Helland Advanced Scientific Computing Research NERSC-HEP Requirements Review 1. Science case studies drive discussions at program requirements reviews: program offices are evaluated every two to three years; participants include program managers, PIs/scientists, and ESnet/NERSC staff and management; the review is a user-driven discussion of science opportunities and needs, covering what (instruments and facilities, data scale, computational requirements) and how (science process, data analysis,

    20. computational-hydraulics-for-transportation

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Transportation Workshop Sept. 23-24, 2009 Argonne TRACC Dr. Steven Lottes The Transportation Research and Analysis Computing Center at Argonne National Laboratory will hold a workshop on the use of computational hydraulics for transportation applications. The goals of the workshop are: bring together people who are using or would benefit from the use of high performance cluster

    1. Caterpillar and Cummins Gain Edge Through Argonne's Rare Computer...

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Caterpillar and Cummins Gain Edge Through Argonne's Rare Computer Modeling and Analysis Resources.

    2. 2D Wavefront Sensor Analysis and Control

      Energy Science and Technology Software Center (OSTI)

      1996-02-19

      This software is designed for data acquisition and analysis of two-dimensional wavefront sensors. The software includes data acquisition and control functions for an EPIX frame grabber to acquire data from a computer, and all the appropriate analysis functions necessary to produce and display intensity and phase information. This software is written in Visual Basic for Windows.

    3. Progress report No. 56, October 1, 1979-September 30, 1980. [Courant Mathematics and Computing Lab., New York Univ.]

      SciTech Connect (OSTI)

      1980-10-01

      Research during the period is sketched in a series of abstract-length summaries. The forte of the Laboratory lies in the development and analysis of mathematical models and efficient computing methods for the rapid solution of technological problems of interest to DOE, in particular, the detailed calculation on large computers of complicated fluid flows in which reactions and heat conduction may be taking place. The research program of the Laboratory encompasses two broad categories: analytical and numerical methods, which include applied analysis, computational mathematics, and numerical methods for partial differential equations, and advanced computer concepts, which include software engineering, distributed systems, and high-performance systems. Lists of seminars and publications are included. (RWR)

    4. Analysis

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      SunShot Grand Challenge: Regional Test Centers. Electricity use by water service sector and county: shown are electricity use by (a) large-scale conveyance, (b) groundwater irrigation pumping, (c) surface water irrigation pumping, (d) drinking water, and (e) wastewater; aggregate electricity use across these sectors (f) is also mapped. Sandians Recognized in Environmental Science & Technology's Best Paper Competition.

    5. Analysis

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Structures of the zwitterionic coatings synthesized for this study. Investigations on Anti-biofouling Zwitterionic Coatings for MHK Is Now in Press: Sandia's Marine Hydrokinetic (MHK) Advanced Materials program has a new publication on the antifouling efficacy

    6. Ionic liquids, electrolyte solutions including the ionic liquids, and energy storage devices including the ionic liquids

      DOE Patents [OSTI]

      Gering, Kevin L.; Harrup, Mason K.; Rollins, Harry W.

      2015-12-08

      An ionic liquid including a phosphazene compound that has a plurality of phosphorus-nitrogen units and at least one pendant group bonded to each phosphorus atom of the plurality of phosphorus-nitrogen units. One pendant group of the at least one pendant group comprises a positively charged pendant group. Additional embodiments of ionic liquids are disclosed, as are electrolyte solutions and energy storage devices including the embodiments of the ionic liquid.

    7. Fermilab | Science at Fermilab | Computing | Grid Computing

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      In the early 2000s, members of Fermilab's Computing Division looked ahead to experiments like those at the Large Hadron Collider, which would collect more data than any computing ...

    8. Development of computer graphics

      SciTech Connect (OSTI)

      Nuttall, H.E.

      1989-07-01

      The purpose of this project was to screen and evaluate three graphics packages as to their suitability for displaying concentration contour graphs. The information to be displayed is from computer code simulations describing airborne contaminant transport. The three evaluation programs were MONGO (John Tonry, MIT, Cambridge, MA, 02139), Mathematica (Wolfram Research Inc.), and NCSA Image (National Center for Supercomputing Applications at the University of Illinois at Urbana-Champaign). After a preliminary investigation of each package, NCSA Image appeared to be significantly superior for generating the desired concentration contour graphs. Hence, subsequent work and this report describe the implementation and testing of NCSA Image on both an Apple Mac II and a Sun 4 computer. NCSA Image includes several utilities (Layout, DataScope, HDF, and PalEdit) which were used in this study and installed on Dr. Ted Yamada's Mac II computer. Dr. Yamada provided two sets of air pollution plume data which were displayed using NCSA Image. Both sets were animated into a sequential expanding plume series.

    9. Mira Computational Readiness Assessment | Argonne Leadership Computing

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Mira Computational Readiness Assessment: assess your project's computational readiness for Mira. A review of the following computational readiness points in relation to scaling, porting, I/O, memory

    10. Computational mechanics research and support for aerodynamics and hydraulics at TFHRC, year 1 quarter 3 progress report.

      SciTech Connect (OSTI)

      Lottes, S.A.; Kulak, R.F.; Bojanowski, C.

      2011-08-26

      The computational fluid dynamics (CFD) and computational structural mechanics (CSM) focus areas at Argonne's Transportation Research and Analysis Computing Center (TRACC) initiated a project to support and complement the experimental programs at the Turner-Fairbank Highway Research Center (TFHRC) with high-performance-computing-based analysis capabilities in August 2010. The project was established with a new interagency agreement between the Department of Energy and the Department of Transportation to provide collaborative research, development, and benchmarking of advanced three-dimensional computational mechanics analysis methods to the aerodynamics and hydraulics laboratories at TFHRC for a period of five years, beginning in October 2010. The analysis methods employ well-benchmarked and supported commercial computational mechanics software. Computational mechanics encompasses the areas of Computational Fluid Dynamics (CFD), Computational Wind Engineering (CWE), Computational Structural Mechanics (CSM), and Computational Multiphysics Mechanics (CMM) applied in Fluid-Structure Interaction (FSI) problems. The major areas of focus of the project are wind and water loads on bridges - superstructure, deck, cables, and substructure (including soil), primarily during storms and flood events - and the risk of structural failure that these loads pose. For flood events at bridges, another major focus of the work is assessment of the risk to bridges caused by scour of stream and riverbed material away from the foundations of a bridge. Other areas of current research include modeling of flow through culverts to assess them for fish passage, modeling of salt spray transport into bridge girders to address the suitability of using weathering steel in bridges, vehicle stability under high wind loading, and the use of electromagnetic shock absorbers to improve vehicle stability under high wind conditions. This quarterly report documents technical progress on the project tasks for the period of April through June 2011.

    11. Numerical uncertainty in computational engineering and physics

      SciTech Connect (OSTI)

      Hemez, Francois M

      2009-01-01

      Obtaining a solution that approximates ordinary or partial differential equations on a computational mesh or grid does not necessarily mean that the solution is accurate or even 'correct'. Unfortunately, assessing the quality of discrete solutions by questioning the role played by spatial and temporal discretizations generally comes as a distant third to test-analysis comparison and model calibration. This publication is contributed to raise awareness of the fact that discrete solutions introduce numerical uncertainty. This uncertainty may, in some cases, overwhelm in complexity and magnitude other sources of uncertainty, including experimental variability, parametric uncertainty, and modeling assumptions. The concepts of consistency, convergence, and truncation error are overviewed to explain the articulation between the exact solution of continuous equations, the solution of modified equations, and discrete solutions computed by a code. The current state of the practice of code and solution verification activities is discussed. An example in the discipline of hydrodynamics illustrates the significant effect that meshing can have on the quality of code predictions. A simple method is proposed to derive bounds of solution uncertainty in cases where the exact solution of the continuous equations, or its modified equations, is unknown. It is argued that numerical uncertainty originating from mesh discretization should always be quantified and accounted for in the overall uncertainty 'budget' that supports decision-making for applications in computational physics and engineering.
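
      One standard way to bound discretization uncertainty when the exact solution is unknown, consistent in spirit with the method the abstract sketches, is Richardson extrapolation between two mesh resolutions (shown here as a generic illustration rather than the paper's exact procedure):

        % With solutions u_h and u_{2h} on meshes of spacing h and 2h, and a
        % scheme of formal order p, the discretization error of u_h is estimated as
        e_h \equiv u_h - u_{\text{exact}} \approx \frac{u_{2h} - u_h}{2^{p} - 1},
        \qquad u_{\text{exact}} \approx u_h - e_h .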

    12. NREL: Concentrating Solar Power Research - Modeling and Analysis

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Modeling and Analysis NREL has the following capabilities, which include software development, for modeling and analyzing a variety of concentrating solar power technologies: Solar Resource Maps Optical Analysis and Modeling Advanced Coatings Modeling and Analysis Computational Fluid Dynamics (CFD) Systems Analysis Concentrating Solar Deployment System Job and Economic Development Impact (JEDI) A map providing a concentrating solar power siting analysis of the southwestern United States. This

    13. Internal combustion engines: Computer applications. (Latest citations from the EI Compendex plus database). Published Search

      SciTech Connect (OSTI)

      Not Available

      1993-10-01

      The bibliography contains citations concerning the application of computers and computerized simulations in the design, analysis, operation, and evaluation of various types of internal combustion engines and associated components and apparatus. Special attention is given to engine control and performance. (Contains a minimum of 67 citations and includes a subject term index and title list.)

    14. Model Analysis ToolKit

      Energy Science and Technology Software Center (OSTI)

      2015-05-15

      MATK provides basic functionality to facilitate model analysis within the Python computational environment. Model analysis setup within MATK includes:
      - define parameters
      - define observations
      - define model (Python function)
      - define samplesets (sets of parameter combinations)
      Currently supported functionality includes:
      - forward model runs
      - Latin-hypercube sampling of parameters
      - multi-dimensional parameter studies
      - parallel execution of parameter samples
      - model calibration using an internal Levenberg-Marquardt algorithm
      - model calibration using the lmfit package
      - model calibration using the levmar package
      - Markov chain Monte Carlo using the pymc package
      MATK facilitates model analysis using scipy for calibration (scipy.optimize) and rpy2 as a Python interface to R.
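
      A minimal sketch of the workflow this record lists (define parameters, observations, and a model, then calibrate), written with plain scipy rather than MATK's own API, whose exact signatures the record does not give. Levenberg-Marquardt is the algorithm the record names for internal calibration.

        import numpy as np
        from scipy.optimize import least_squares

        def model(params, x):
            # Toy two-parameter model standing in for a user-supplied function.
            a, b = params
            return a * np.exp(-b * x)

        x_obs = np.linspace(0.0, 5.0, 20)
        observations = model([2.0, 0.7], x_obs)  # synthetic "observed" data

        def residuals(params):
            return model(params, x_obs) - observations

        # Levenberg-Marquardt calibration of the parameters to the observations.
        fit = least_squares(residuals, x0=[1.0, 1.0], method="lm")
        print(fit.x)  # recovers approximately [2.0, 0.7]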

    15. Sandia Energy - Computations

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Computations: Transportation Energy, Predictive Simulation of Engines, Reacting Flow, Applied Math & Software.

    16. NREL: Technology Deployment - Cities-LEAP Energy Profile Tool Includes

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Energy Data on More than 23,400 U.S. Cities. News: NREL Report Examines Energy Use in Cities and Proposes Next Steps for Energy Innovation. Publications: City-Level Energy Decision Making: Data Use in Energy Planning, Implementation, and Evaluation in U.S. Cities. Sponsors: DOE's Office of Energy Efficiency and Renewable Energy, Policy and Analysis Office.

    17. Locating hardware faults in a parallel computer

      DOE Patents [OSTI]

      Archer, Charles J.; Megerian, Mark G.; Ratterman, Joseph D.; Smith, Brian E.

      2010-04-13

      Locating hardware faults in a parallel computer, including defining within a tree network of the parallel computer two or more sets of non-overlapping test levels of compute nodes of the network that together include all the data communications links of the network, each non-overlapping test level comprising two or more adjacent tiers of the tree; defining test cells within each non-overlapping test level, each test cell comprising a subtree of the tree including a subtree root compute node and all descendant compute nodes of the subtree root compute node within a non-overlapping test level; performing, separately on each set of non-overlapping test levels, an uplink test on all test cells in a set of non-overlapping test levels; and performing, separately from the uplink tests and separately on each set of non-overlapping test levels, a downlink test on all test cells in a set of non-overlapping test levels.
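
      An illustrative sketch of the partitioning step in this record: group the tiers of a binary tree into non-overlapping test levels of two adjacent tiers each, and enumerate the test cells (subtree roots) in each level. The tree shape is an assumption for illustration; in the patent, a second set of levels offset from the first ensures every data communications link is covered.

        # Complete binary tree with root node 1: nodes at tier d are
        # numbered 2**d .. 2**(d+1) - 1.
        TIERS = 6

        def test_levels(tiers, width=2):
            """Partition tiers 0..tiers-1 into non-overlapping levels of `width` adjacent tiers."""
            return [list(range(t, min(t + width, tiers))) for t in range(0, tiers, width)]

        def test_cells(level):
            """Each node in the top tier of a level roots one test cell (a subtree)."""
            top = level[0]
            return list(range(2 ** top, 2 ** (top + 1)))

        for level in test_levels(TIERS):
            print("level tiers", level, "-> test cell roots", test_cells(level))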

    18. Mesh Morphing Pier Analysis

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Application of Mesh Morphing in STAR-CCM+ to Analysis of Scour at Cylindrical Piers. Mesh morphing is a fluid-structure interaction capability in STAR-CCM+ that moves vertices in the computational mesh in a way that preserves mesh quality when a boundary moves. The equations being solved include terms that account for the motion of the mesh, maintaining mass and property balances during the solution process. Initial work on leveraging the mesh morphing FSI capability for efficient application to

    19. Molecular Science Computing | EMSL

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      computational and state-of-the-art experimental tools, providing a cross-disciplinary environment to further research.

    20. Applied & Computational Math

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)


    1. advanced simulation and computing

      National Nuclear Security Administration (NNSA)

      Each successive generation of computing system has provided greater computing power and energy efficiency.

      CTS-1 clusters will support NNSA's Life Extension Program and...

    2. NERSC Computer Security

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      NERSC computer security efforts are aimed at protecting NERSC systems and its users' intellectual property from unauthorized access or...

    3. Fourth SIAM conference on mathematical and computational issues in the geosciences: Final program and abstracts

      SciTech Connect (OSTI)

      1997-12-31

      The conference focused on computational and modeling issues in the geosciences. Of the geosciences, problems associated with phenomena occurring in the earth's subsurface were best represented. Topics in this area included petroleum recovery, ground water contamination and remediation, seismic imaging, parameter estimation, upscaling, geostatistical heterogeneity, reservoir and aquifer characterization, optimal well placement and pumping strategies, and geochemistry. Additional sessions were devoted to the atmosphere, surface water and oceans. The central mathematical themes included computational algorithms and numerical analysis, parallel computing, mathematical analysis of partial differential equations, statistical and stochastic methods, optimization, inversion, homogenization and renormalization. The problem areas discussed at this conference are of considerable national importance, with the increasing importance of environmental issues, global change, remediation of waste sites, declining domestic energy sources and an increasing reliance on producing the most out of established oil reservoirs.

    4. C-parameter distribution at N3LL′ including power corrections

      DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

      Hoang, André H.; Kolodrubetz, Daniel W.; Mateu, Vicent; Stewart, Iain W.

      2015-05-15

      We compute the e⁺e⁻ C-parameter distribution using the soft-collinear effective theory with a resummation to next-to-next-to-next-to-leading-log prime (N3LL′) accuracy of the most singular partonic terms. This includes the known fixed-order QCD results up to O(α_s³), a numerical determination of the two-loop nonlogarithmic term of the soft function, and all logarithmic terms in the jet and soft functions up to three loops. Our result holds for C in the peak, tail, and far-tail regions. Additionally, we treat hadronization effects using a field-theoretic nonperturbative soft function, with moments Ω_n. To eliminate an O(Λ_QCD) renormalon ambiguity in the soft function, we switch from the MS-bar scheme to a short-distance "Rgap" scheme to define the leading power correction parameter Ω_1. We show how to simultaneously account for running effects in Ω_1 due to renormalon subtractions and hadron-mass effects, enabling power-correction universality between C-parameter and thrust to be tested in our setup. We discuss in detail the impact of resummation and renormalon subtractions on the convergence. In the relevant fit region for α_s(m_Z) and Ω_1, the perturbative uncertainty in our cross section is ≅ 2.5% at Q = m_Z.
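
      For reference, the C-parameter event shape being resummed has the standard definition in terms of the final-state three-momenta p_i (quoted from the literature, not from this record):

        % Standard e+e- C-parameter event shape, with \theta_{ij} the angle
        % between particles i and j:
        C = \frac{3}{2}\,\frac{\sum_{i,j}\lvert\vec p_i\rvert\lvert\vec p_j\rvert\sin^2\theta_{ij}}{\bigl(\sum_i\lvert\vec p_i\rvert\bigr)^{2}}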

    5. New challenges in computational biochemistry

      SciTech Connect (OSTI)

      Honig, B.

      1996-12-31

      The new challenges in computational biochemistry to which the title refers include the prediction of the relative binding free energy of different substrates to the same protein, conformational sampling, and other examples of theoretical predictions matching known protein structure and behavior.

    6. Experimental Mathematics and Computational Statistics

      SciTech Connect (OSTI)

      Bailey, David H.; Borwein, Jonathan M.

      2009-04-30

      The field of statistics has long been noted for techniques to detect patterns and regularities in numerical data. In this article we explore connections between statistics and the emerging field of 'experimental mathematics'. These include both applications of experimental mathematics in statistics and statistical methods applied to computational mathematics.

    7. Cosmic Reionization On Computers | Argonne Leadership Computing...

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      its Cosmic Reionization On Computers (CROC) project, using the Adaptive Refinement Tree (ART) code as its main simulation tool. An important objective of this research is to make...

    8. Computing and Computational Sciences Directorate - Information...

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      cost-effective, state-of-the-art computing capabilities for research and development. ... communicates and manages strategy, policy and finance across the portfolio of IT assets. ...

    9. Computing for Finance

      ScienceCinema (OSTI)

      None

      2011-10-06

      The finance sector is one of the driving forces for the use of distributed or Grid computing for business purposes. The speakers will review the state of the art of high performance computing in the financial sector, and provide insight into how different types of Grid computing - from local clusters to global networks - are being applied to financial applications. They will also describe the use of software and techniques from physics, such as Monte Carlo simulations, in the financial world. There will be four talks of 20 minutes each; the talk abstracts and speaker bios are listed below. The talks will be followed by a Q&A panel session with the speakers, and from 19:00 onwards there will be a networking cocktail for audience and speakers. This is an EGEE / CERN openlab event organized in collaboration with the regional business network rezonance.ch. A webcast of the event will be made available for subsequent viewing, along with the PowerPoint material presented by the speakers. Attendance is free and open to all. Registration is mandatory via www.rezonance.ch, including for CERN staff.

      1. Overview of High Performance Computing in the Financial Industry. Michael Yoo, Managing Director, Head of the Technical Council, UBS. The presentation will describe the key business challenges driving the need for HPC solutions, describe the means by which those challenges are being addressed within UBS (such as Grid) as well as the limitations of some of these solutions, and assess some of the newer HPC technologies which may also play a role in the financial industry in the future. Speaker bio: Michael originally joined the former Swiss Bank Corporation in 1994 in New York as a developer on a large data warehouse project. In 1996 he left SBC and took a role with Fidelity Investments in Boston. Unable to stay away for long, he returned to SBC in 1997 while working for Perot Systems in Singapore. Finally, in 1998 he formally returned to UBS in Stamford following the merger with SBC and has remained with UBS for the past nine years. During his tenure at UBS, he has had a number of leadership roles within IT in development, support and architecture. In 2006 Michael relocated to Switzerland to take up his current role as head of the UBS IB Technical Council, responsible for the overall technology strategy and vision of the Investment Bank. One of Michael's key responsibilities is to manage the UBS High Performance Computing Research Lab, and he has been involved in a number of initiatives in the HPC space.

      2. Grid in the Commercial World. Fred Gedling, Chief Technology Officer EMEA and Senior Vice President Global Services, DataSynapse. Grid computing gets mentions in the press for community programs, starting last decade with "Seti@Home". Government, national and supranational initiatives in grid receive some press. One of the IT industry's best-kept secrets is the use of grid computing by commercial organizations, with spectacular results. Grid computing and its evolution into application virtualization are discussed, and how this is key to the next-generation data center. Speaker bio: Fred Gedling holds the joint roles of Chief Technology Officer for EMEA and Senior Vice President of Global Services at DataSynapse, a global provider of application virtualisation software. Based in London and working closely with organisations seeking to optimise their IT infrastructures, Fred offers unique insights into the technology of virtualisation as well as the methodology of establishing ROI and rapid deployment to the immediate advantage of the business. Fred has more than fifteen years' experience of enterprise middleware and high-performance infrastructures. Prior to DataSynapse he worked in high-performance CRM middleware and was the CTO EMEA for New Era of Networks (NEON) during the rapid growth of Enterprise Application Integration. His 25-year career in technology also includes management positions at Goldman Sachs and Stratus Computer. Fred holds a First Class BSc (Hons) degree in Physics with Astrophysics from the University of Leeds and had the privilege of being a summer student at CERN.

      3. Opportunities for gLite in finance and related industries. Adam Vile, Head of Grid, HPC and Technical Computing, Excelian Ltd. gLite, the Grid software developed by the EGEE project, has been exceedingly successful as an enabling infrastructure, and has been a massive success in bringing together scientific and technical communities to provide the compute power to address previously incomputable problems. Not so in the finance industry. In its current form gLite would be a business disabler. There are other middleware tools that solve the finance community's compute problems much better. Things are moving on, however. There are moves afoot in the open source community to evolve the technology to address other, more sophisticated needs such as utility and interactive computing. In this talk, I will describe how Excelian is providing Grid consultancy services for the finance community and how, through its relationship to the EGEE project, Excelian is helping to identify and exploit opportunities as the research and business worlds converge. Because of the strong third-party presence in the finance industry, such opportunities are few and far between, but they are there, especially as we expand sideways into related verticals such as the smaller hedge funds and energy companies. This talk will give an overview of the barriers to adoption of gLite in the finance industry and highlight some of the opportunities offered in this and related industries as the ideas around Grid mature. Speaker bio: Dr Adam Vile is a senior consultant and head of the Grid and HPC practice at Excelian, a consultancy that focuses on financial markets professional services. He has spent many years in investment banking, as a developer, project manager and architect in both front and back office. Before joining Excelian he was senior Grid and HPC architect at Barclays Capital. Prior to joining investment banking, Adam spent a number of years lecturing in IT and mathematics at a UK university and maintains links with academia through lectures, research, and validation and steering of postgraduate courses. He is a chartered mathematician and was the conference chair of the Institute of Mathematics and its Applications' first conference in computational finance.

      4. From Monte Carlo to Wall Street. Daniel Egloff, Head of Financial Engineering Computing Unit, Zürich Cantonal Bank. High performance computing techniques provide new means to solve computationally hard problems in the financial service industry. First I consider Monte Carlo simulation and illustrate how it can be used to implement a sophisticated credit risk management and economic capital framework. From a HPC perspective, basic Monte Carlo simulation is embarrassingly parallel and can be implemented efficiently on distributed-memory clusters. Additional difficulties arise for adaptive variance reduction schemes, if the information content in a sample is very small, and if the amount of simulated data becomes so huge that incremental processing algorithms are indispensable. We discuss the business value of an advanced credit risk quantification, which is particularly compelling in these days. While Monte Carlo simulation is a very versatile tool, it is not always the preferred solution for the pricing of complex products like multi-asset options, structured products, or credit derivatives. As a second application I show how operator methods can be used to develop a pricing framework. The scalability of operator methods relies heavily on optimized dense matrix-matrix multiplications and requires specialized BLAS level-3 implementations provided by FPGA or GPU boards. Speaker bio: Daniel Egloff studied mathematics, theoretical physics, and computer science at the University of Zurich and the ETH Zurich. He holds a PhD in mathematics from the University of Fribourg, Switzerland. After his PhD he started to work for a large Swiss insurance company in the area of asset and liability management. He continued his professional career in the consulting industry. At KPMG and Arthur Andersen he consulted international clients and implemented quantitative risk management solutions for financial institutions and insurance companies. In 2002 he joined Zurich Cantonal Bank, where he was assigned to develop and implement credit portfolio risk and economic capital methodologies, and built up a competence center for high performance and cluster computing. Currently, Daniel Egloff is heading the Financial Computing unit in the ZKB Financial Engineering division. He and his team are engineering and operating high-performance cluster applications for computationally intensive problems in financial risk management.
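
      The "embarrassingly parallel" point in the fourth talk is easy to make concrete: Monte Carlo scenarios are independent, so workers need no communication until a final reduction. A minimal sketch with an invented one-factor loss model, not the credit-risk framework described in the talk:

        import numpy as np
        from multiprocessing import Pool

        def simulate_losses(args):
            # Each worker draws its own independent scenarios; no communication
            # is needed until the final reduction.
            seed, n = args
            rng = np.random.default_rng(seed)
            factor = rng.standard_normal(n)           # systematic risk factor
            idio = rng.standard_normal((n, 100))      # idiosyncratic terms, 100 obligors
            defaults = (0.5 * factor[:, None] + idio) < -2.0
            return defaults.sum(axis=1)               # losses per scenario

        if __name__ == "__main__":
            with Pool(4) as pool:
                chunks = pool.map(simulate_losses, [(seed, 25_000) for seed in range(4)])
            losses = np.concatenate(chunks)
            print("99.9% loss quantile:", np.quantile(losses, 0.999))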

    10. Computing for Finance

      SciTech Connect (OSTI)

      2010-03-24

      The finance sector is one of the driving forces for the use of distributed or Grid computing for business purposes. The speakers will review the state-of-the-art of high performance computing in the financial sector, and provide insight into how different types of Grid computing – from local clusters to global networks - are being applied to financial applications. They will also describe the use of software and techniques from physics, such as Monte Carlo simulations, in the financial world. There will be four talks of 20min each. The talk abstracts and speaker bios are listed below. This will be followed by a Q&A; panel session with the speakers. From 19:00 onwards there will be a networking cocktail for audience and speakers. This is an EGEE / CERN openlab event organized in collaboration with the regional business network rezonance.ch. A webcast of the event will be made available for subsequent viewing, along with powerpoint material presented by the speakers. Attendance is free and open to all. Registration is mandatory via www.rezonance.ch, including for CERN staff. 1. Overview of High Performance Computing in the Financial Industry Michael Yoo, Managing Director, Head of the Technical Council, UBS Presentation will describe the key business challenges driving the need for HPC solutions, describe the means in which those challenges are being addressed within UBS (such as GRID) as well as the limitations of some of these solutions, and assess some of the newer HPC technologies which may also play a role in the Financial Industry in the future. Speaker Bio: Michael originally joined the former Swiss Bank Corporation in 1994 in New York as a developer on a large data warehouse project. In 1996 he left SBC and took a role with Fidelity Investments in Boston. Unable to stay away for long, he returned to SBC in 1997 while working for Perot Systems in Singapore. Finally, in 1998 he formally returned to UBS in Stamford following the merger with SBC and has remained with UBS for the past 9 years. During his tenure at UBS, he has had a number of leadership roles within IT in development, support and architecture. In 2006 Michael relocated to Switzerland to take up his current role as head of the UBS IB Technical Council, responsible for the overall technology strategy and vision of the Investment Bank. One of Michael's key responsibilities is to manage the UBS High Performance Computing Research Lab and he has been involved in a number of initiatives in the HPC space. 2. Grid in the Commercial WorldFred Gedling, Chief Technology Officer EMEA and Senior Vice President Global Services, DataSynapse Grid computing gets mentions in the press for community programs starting last decade with "Seti@Home". Government, national and supranational initiatives in grid receive some press. One of the IT-industries' best-kept secrets is the use of grid computing by commercial organizations with spectacular results. Grid Computing and its evolution into Application Virtualization is discussed and how this is key to the next generation data center. Speaker Bio: Fred Gedling holds the joint roles of Chief Technology Officer for EMEA and Senior Vice President of Global Services at DataSynapse, a global provider of application virtualisation software. 
Based in London and working closely with organisations seeking to optimise their IT infrastructures, Fred offers unique insights into the technology of virtualisation as well as the methodology of establishing ROI and rapid deployment to the immediate advantage of the business. Fred has more than fifteen years' experience of enterprise middleware and high-performance infrastructures. Prior to DataSynapse he worked in high performance CRM middleware and was the CTO EMEA for New Era of Networks (NEON) during the rapid growth of Enterprise Application Integration. His 25-year career in technology also includes management positions at Goldman Sachs and Stratus Computer. Fred holds a First Class BSc (Hons) degree in Physics with Astrophysics from the University of Leeds and had the privilege of being a summer student at CERN. 3. Opportunities for gLite in Finance and Related Industries. Adam Vile, Head of Grid, HPC and Technical Computing, Excelian Ltd. gLite, the Grid software developed by the EGEE project, has been exceedingly successful as an enabling infrastructure, bringing together scientific and technical communities to provide the compute power to address previously incomputable problems. Not so in the finance industry. In its current form gLite would be a business disabler. There are other middleware tools that solve the finance community's compute problems much better. Things are moving on, however. There are moves afoot in the open source community to evolve the technology to address other, more sophisticated needs such as utility and interactive computing. In this talk, I will describe how Excelian is providing Grid consultancy services for the finance community and how, through its relationship with the EGEE project, Excelian is helping to identify and exploit opportunities as the research and business worlds converge. Because of the strong third-party presence in the finance industry, such opportunities are few and far between, but they are there, especially as we expand sideways into related verticals such as the smaller hedge funds and energy companies. This talk will give an overview of the barriers to adoption of gLite in the finance industry and highlight some of the opportunities offered in this and related industries as the ideas around Grid mature. Speaker Bio: Dr Adam Vile is a senior consultant and head of the Grid and HPC practice at Excelian, a consultancy that focuses on financial markets professional services. He has spent many years in investment banking, as a developer, project manager and architect in both front and back office. Before joining Excelian he was senior Grid and HPC architect at Barclays Capital. Prior to joining investment banking, Adam spent a number of years lecturing in IT and mathematics at a UK university and maintains links with academia through lectures, research and through validation and steering of postgraduate courses. He is a chartered mathematician and was the conference chair of the Institute of Mathematics and its Applications' first conference on computational finance. 4. From Monte Carlo to Wall Street. Daniel Egloff, Head of Financial Engineering Computing Unit, Zürich Cantonal Bank. High performance computing techniques provide new means to solve computationally hard problems in the financial service industry. First, I consider Monte Carlo simulation and illustrate how it can be used to implement a sophisticated credit risk management and economic capital framework.
From an HPC perspective, basic Monte Carlo simulation is embarrassingly parallel and can be implemented efficiently on distributed memory clusters. Additional difficulties arise with adaptive variance reduction schemes, when the information content of a sample is very small, and when the amount of simulated data becomes so large that incremental processing algorithms are indispensable. We discuss the business value of advanced credit risk quantification, which is particularly compelling these days. While Monte Carlo simulation is a very versatile tool, it is not always the preferred solution for the pricing of complex products like multi-asset options, structured products, or credit derivatives. As a second application I show how operator methods can be used to develop a pricing framework. The scalability of operator methods relies heavily on optimized dense matrix-matrix multiplications and requires specialized BLAS level-3 implementations, such as those provided by FPGA or GPU boards. Speaker Bio: Daniel Egloff studied mathematics, theoretical physics, and computer science at the University of Zurich and the ETH Zurich. He holds a PhD in Mathematics from the University of Fribourg, Switzerland. After his PhD he started to work for a large Swiss insurance company in the area of asset and liability management. He continued his professional career in the consulting industry. At KPMG and Arthur Andersen he consulted international clients and implemented quantitative risk management solutions for financial institutions and insurance companies. In 2002 he joined Zurich Cantonal Bank. He was assigned to develop and implement credit portfolio risk and economic capital methodologies. He built up a competence center for high performance and cluster computing. Currently, Daniel Egloff is heading the Financial Computing unit in the ZKB Financial Engineering division. He and his team are engineering and operating high performance cluster applications for computationally intensive problems in financial risk management.
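      The "embarrassingly parallel" point above is easy to make concrete: independent scenario batches need no communication until the final aggregation. The sketch below draws portfolio losses from a toy one-factor Gaussian copula model on several worker processes and reads off a loss quantile; the model, parameters, and names are illustrative assumptions, not the framework described in the talk.

```python
# Toy parallel Monte Carlo for credit losses (illustrative only).
import random
from multiprocessing import Pool
from statistics import NormalDist

def simulate_batch(args):
    """One worker simulates an independent batch of portfolio loss draws."""
    seed, n_draws, n_obligors, pd, rho = args
    rng = random.Random(seed)
    threshold = NormalDist().inv_cdf(pd)  # default if the asset value falls below this
    losses = []
    for _ in range(n_draws):
        z = rng.gauss(0.0, 1.0)  # systematic factor shared by all obligors
        defaults = sum(
            1 for _ in range(n_obligors)
            if rho ** 0.5 * z + (1 - rho) ** 0.5 * rng.gauss(0.0, 1.0) < threshold
        )
        losses.append(defaults / n_obligors)
    return losses

if __name__ == "__main__":
    batches = [(seed, 10000, 100, 0.02, 0.2) for seed in range(8)]
    with Pool() as pool:  # batches run in parallel with no communication between them
        losses = sorted(x for batch in pool.map(simulate_batch, batches) for x in batch)
    print(f"99% portfolio loss quantile: {losses[int(0.99 * len(losses))]:.3f}")
```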

    12. Performing a global barrier operation in a parallel computer

      DOE Patents [OSTI]

      Archer, Charles J; Blocksome, Michael A; Ratterman, Joseph D; Smith, Brian E

      2014-12-09

      Executing computing tasks on a parallel computer that includes compute nodes coupled for data communications, where each compute node executes tasks, with one task on each compute node designated as a master task, including: for each task on each compute node until all master tasks have joined a global barrier: determining whether the task is a master task; if the task is not a master task, joining a single local barrier; if the task is a master task, joining the global barrier and the single local barrier only after all other tasks on the compute node have joined the single local barrier.
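      A minimal sketch of this two-level scheme, with threads standing in for tasks and Python barriers for the claimed barrier operations (an illustration, not the patented implementation), looks like this; a second local barrier is added so non-master tasks resume only after the global barrier completes:

```python
# Two-level barrier: local barriers per node, one global barrier for masters.
import threading

N_NODES, TASKS_PER_NODE = 3, 4
global_barrier = threading.Barrier(N_NODES)  # only the master task of each node joins
local_barriers = [threading.Barrier(TASKS_PER_NODE) for _ in range(N_NODES)]
release_barriers = [threading.Barrier(TASKS_PER_NODE) for _ in range(N_NODES)]

def task(node, rank):
    local_barriers[node].wait()    # every task on the node joins the single local barrier
    if rank == 0:                  # the designated master task...
        global_barrier.wait()      # ...joins the global barrier once its node is ready
    release_barriers[node].wait()  # all tasks resume only after the global barrier completes
    print(f"node {node} task {rank} passed the global barrier")

threads = [threading.Thread(target=task, args=(n, r))
           for n in range(N_NODES) for r in range(TASKS_PER_NODE)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```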

    13. Combinatorial evaluation of systems including decomposition of a system representation into fundamental cycles

      DOE Patents [OSTI]

      Oliveira, Joseph S. (Richland, WA); Jones-Oliveira, Janet B. (Richland, WA); Bailey, Colin G. (Wellington, NZ); Gull, Dean W. (Seattle, WA)

      2008-07-01

      One embodiment of the present invention includes a computer operable to represent a physical system with a graphical data structure corresponding to a matroid. The graphical data structure corresponds to a number of vertices and a number of edges that each correspond to two of the vertices. The computer is further operable to define a closed pathway arrangement with the graphical data structure and identify each different one of a number of fundamental cycles by evaluating a different respective one of the edges with a spanning tree representation. The fundamental cycles each include three or more of the vertices.
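      To make the decomposition concrete, the following minimal sketch (not the patented system) builds a BFS spanning tree and derives one fundamental cycle from each non-tree edge; the example graph is made up:

```python
# Fundamental cycles from a spanning tree (illustrative sketch).
from collections import deque

def fundamental_cycles(vertices, edges):
    adj = {v: [] for v in vertices}
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)

    # Build a BFS spanning tree rooted at an arbitrary vertex.
    root = next(iter(vertices))
    parent = {root: None}
    frontier = deque([root])
    tree = set()
    while frontier:
        u = frontier.popleft()
        for v in adj[u]:
            if v not in parent:
                parent[v] = u
                tree.add(frozenset((u, v)))
                frontier.append(v)

    def path_to_root(x):
        path = []
        while x is not None:
            path.append(x)
            x = parent[x]
        return path

    # Every edge left out of the tree closes exactly one fundamental cycle.
    cycles = []
    for u, v in edges:
        if frozenset((u, v)) in tree:
            continue
        pu, pv = path_to_root(u), path_to_root(v)
        lca = next(x for x in pu if x in set(pv))  # lowest common ancestor
        cycles.append(pu[:pu.index(lca) + 1] + pv[:pv.index(lca)][::-1])
    return cycles

# A 4-vertex graph with two independent cycles.
print(fundamental_cycles({1, 2, 3, 4}, [(1, 2), (2, 3), (3, 1), (3, 4), (4, 1)]))
```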

    14. Parallel computing works

      SciTech Connect (OSTI)

      Not Available

      1991-10-23

      An account of the Caltech Concurrent Computation Program (C³P), a five year project that focused on answering the question: "Can parallel computers be used to do large-scale scientific computations?" As the title indicates, the question is answered in the affirmative, by implementing numerous scientific applications on real parallel computers and doing computations that produced new scientific results. In the process of doing so, C³P helped design and build several new computers, designed and implemented basic system software, developed algorithms for frequently used mathematical computations on massively parallel machines, devised performance models and measured the performance of many computers, and created a high performance computing facility based exclusively on parallel computers. While the initial focus of C³P was the hypercube architecture developed by C. Seitz, many of the methods developed and lessons learned have been applied successfully on other massively parallel architectures.

    15. A user's guide to LUGSAN II. A computer program to calculate and archive lug and sway brace loads for aircraft-carried stores

      SciTech Connect (OSTI)

      Dunn, W.N.

      1998-03-01

      LUG and Sway brace ANalysis (LUGSAN) II is an analysis and database computer program that is designed to calculate store lug and sway brace loads for aircraft captive carriage. LUGSAN II combines the rigid body dynamics code, SWAY85, with a Macintosh HyperCard database to function as both an analysis and archival system. This report describes the LUGSAN II application program, which operates on the Macintosh System (HyperCard 2.2 or later) and includes function descriptions, layout examples, and sample sessions. Although this report is primarily a user's manual, a brief overview of the LUGSAN II computer code is included with suggested resources for programmers.

    16. Computers-BSA.ppt

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Computers! Boy Scout Troop 405! What is a computer?! Is this a computer?! Charles Babbage: Father of the Computer! 1830s Designed mechanical calculators to reduce human error. *Input device *Memory to store instructions and results *A processor *Output device! Vacuum Tube! Edison 1883 & Lee de Forest 1906 discovered that "vacuum tubes" could serve as electrical switches and amplifiers. A switch can be ON (1) or OFF (0). Electronic computers use Boolean (George Boole, 1850) logic

    17. Computational Fluid Dynamics

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Computational fluid dynamics (CFD) research uses mathematical and computational models of flowing fluids to describe and predict fluid response in problems of interest, such as the flow of air around a moving vehicle or the flow of water and sediment in a river. Coupled with appropriate and prototypical

    18. Theory & Computation > Research > The Energy Materials Center...

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)


    19. Energy and cost analysis of a solar-hydrogen combined heat and power system for remote power supply using a computer simulation

      SciTech Connect (OSTI)

      Shabani, Bahman; Andrews, John; Watkins, Simon

      2010-01-15

      A simulation program, based on Visual Pascal, for sizing and techno-economic analysis of the performance of solar-hydrogen combined heat and power systems for remote applications is described. The accuracy of the submodels is checked by comparing the measured performance of the system's components with model outputs. The use of the heat generated by the PEM fuel cell, and any unused excess hydrogen, is investigated for hot water production or space heating while the solar-hydrogen system is supplying electricity. A 5 kWh daily demand profile and the solar radiation profile of Melbourne have been used in a case study to investigate the typical techno-economic characteristics of the system to supply a remote household. The simulation shows that by harnessing both the thermal load and excess hydrogen, it is possible to increase the average yearly energy efficiency of the fuel cell in the solar-hydrogen system from just below 40% up to about 80% in both heat and power generation (based on the higher heating value of hydrogen). The fuel cell in the system is conventionally sized to meet the peak of the demand profile. However, an economic optimisation analysis illustrates that installing a larger fuel cell could reduce the unit cost of electricity by up to 15%, to an average of just below 90 c/kWh over the assessment period of 30 years. Further, for an economically optimal size of the fuel cell, nearly half the yearly energy demand for hot water of the remote household could be supplied by heat recovery from the fuel cell and utilising unused hydrogen in the exit stream. Such a system could then complement a conventional solar water heating system by providing the boosting energy (usually on the order of 40% of the total) normally obtained from gas or electricity. (author)

    20. Computer memory management system

      DOE Patents [OSTI]

      Kirk, III, Whitson John

      2002-01-01

      A computer memory management system utilizing a memory structure system of "intelligent" pointers in which information related to the use status of the memory structure is designed into the pointer. Through this pointer system, the present invention provides essentially automatic memory management (often referred to as garbage collection) by allowing relationships between objects to have definite memory management behavior, using a coding protocol that describes when relationships should be maintained and when they should be broken. In one aspect, the system allows automatic breaking of strong links to facilitate object garbage collection, coupled with relationship adjectives which define deletion of associated objects. In another aspect, the system includes simple-to-use infinite undo/redo functionality: through a simple function call, it can undo all of the changes made to a data model since the previous 'valid state' was noted.
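      The strong-link/weak-link behavior described above can be imitated with Python's weakref module: a strong reference keeps an object alive, a weak back-reference does not, and breaking the strong link makes the object collectible. Class and attribute names below are illustrative, not part of the patent:

```python
# Strong vs. weak links and garbage collection (illustrative sketch).
import weakref

class Node:
    def __init__(self, name):
        self.name = name
        self.children = []   # strong links: the owner keeps its children alive
        self.parent = None   # weak link: must not keep the parent alive

    def add_child(self, child):
        self.children.append(child)
        child.parent = weakref.ref(self)  # weak back-reference avoids a strong cycle

root = Node("root")
leaf = Node("leaf")
root.add_child(leaf)
print(leaf.parent().name)  # "root": the weak link still resolves

probe = weakref.ref(leaf)
root.children.clear()      # break the strong link...
del leaf                   # ...and drop the local name
print(probe())             # None: CPython reclaims the now-unreferenced object
```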

    1. ASCR Workshop on Quantum Computing for Science

      SciTech Connect (OSTI)

      Aspuru-Guzik, Alan; Van Dam, Wim; Farhi, Edward; Gaitan, Frank; Humble, Travis; Jordan, Stephen; Landahl, Andrew J; Love, Peter; Lucas, Robert; Preskill, John; Muller, Richard P.; Svore, Krysta; Wiebe, Nathan; Williams, Carl

      2015-06-01

      This report details the findings of the DOE ASCR Workshop on Quantum Computing for Science that was organized to assess the viability of quantum computing technologies to meet the computational requirements of the DOE’s science and energy mission, and to identify the potential impact of quantum technologies. The workshop was held on February 17-18, 2015, in Bethesda, MD, to solicit input from members of the quantum computing community. The workshop considered models of quantum computation and programming environments, physical science applications relevant to DOE's science mission as well as quantum simulation, and applied mathematics topics including potential quantum algorithms for linear algebra, graph theory, and machine learning. This report summarizes these perspectives into an outlook on the opportunities for quantum computing to impact problems relevant to the DOE’s mission as well as the additional research required to bring quantum computing to the point where it can have such impact.

    2. Computer Model Buildings Contaminated with Radioactive Material

      Energy Science and Technology Software Center (OSTI)

      1998-05-19

      The RESRAD-BUILD computer code is a pathway analysis model designed to evaluate the potential radiological dose incurred by an individual who works or lives in a building contaminated with radioactive material.

    3. Session on computation in biological pathways

      SciTech Connect (OSTI)

      Karp, P.D.; Riley, M.

      1996-12-31

      The papers in this session focus on the development of pathway databases and computational tools for pathway analysis. The discussion involves existing databases of sequenced genomes, as well as techniques for studying regulatory pathways.

    4. Accounts Policy | Argonne Leadership Computing Facility

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Accounts Policy All holders of user accounts must abide by all appropriate Argonne Leadership Computing Facility and Argonne National Laboratory computing usage policies. These are described at the time of the account request and include requirements such as using a sufficiently strong password, appropriate use of the system, and so on. Any user not following these requirements will have their account disabled. Furthermore, ALCF resources are intended to be used as a computing resource for

    5. Computer Networking Group | Stanford Synchrotron Radiation Lightsource

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Computer Networking Group Do you need help? For assistance please submit a CNG Help Request ticket. Chris Ramirez, SSRL Computer and Networking Group, (650) 926-2901. Jerry Camuso, SSRL Computer and Networking Group, (650) 926-2994. Networking Support The Networking group provides connectivity and communications services for SSRL. The services provided by the Networking Support Group include: Local Area Network support for cable and wireless connectivity. Installation and

    6. Computational Tools to Accelerate Commercial Development

      SciTech Connect (OSTI)

      Miller, David C.

      2013-01-01

      The goals of the work reported are: to develop new computational tools and models to enable industry to more rapidly develop and deploy new advanced energy technologies; to demonstrate the capabilities of the CCSI Toolset on non-proprietary case studies; and to deploy the CCSI Toolset to industry. Challenges of simulating carbon capture (and other) processes include: dealing with multiple scales (particle, device, and whole process scales); integration across scales; verification, validation, and uncertainty; and decision support. The tools cover: risk analysis and decision making; validated, high-fidelity CFD; high-resolution filtered sub-models; process design and optimization tools; advanced process control and dynamics; process models; basic data sub-models; and cross-cutting integration tools.

    7. Computing for Finance

      ScienceCinema (OSTI)

      None

      2011-10-06

      The finance sector is one of the driving forces for the use of distributed or Grid computing for business purposes. The speakers will review the state of the art of high performance computing in the financial sector, and provide insight into how different types of Grid computing - from local clusters to global networks - are being applied to financial applications. They will also describe the use of software and techniques from physics, such as Monte Carlo simulations, in the financial world. There will be four talks of 20 minutes each. The talk abstracts and speaker bios are listed below. This will be followed by a Q&A panel session with the speakers. From 19:00 onwards there will be a networking cocktail for audience and speakers. This is an EGEE / CERN openlab event organized in collaboration with the regional business network rezonance.ch. A webcast of the event will be made available for subsequent viewing, along with PowerPoint material presented by the speakers. Attendance is free and open to all. Registration is mandatory via www.rezonance.ch, including for CERN staff. 1. Overview of High Performance Computing in the Financial Industry. Michael Yoo, Managing Director, Head of the Technical Council, UBS. The presentation will describe the key business challenges driving the need for HPC solutions, describe the means by which those challenges are being addressed within UBS (such as Grid) as well as the limitations of some of these solutions, and assess some of the newer HPC technologies which may also play a role in the financial industry in the future. Speaker Bio: Michael originally joined the former Swiss Bank Corporation in 1994 in New York as a developer on a large data warehouse project. In 1996 he left SBC and took a role with Fidelity Investments in Boston. Unable to stay away for long, he returned to SBC in 1997 while working for Perot Systems in Singapore. Finally, in 1998 he formally returned to UBS in Stamford following the merger with SBC and has remained with UBS for the past 9 years. During his tenure at UBS, he has had a number of leadership roles within IT in development, support and architecture. In 2006 Michael relocated to Switzerland to take up his current role as head of the UBS IB Technical Council, responsible for the overall technology strategy and vision of the Investment Bank. One of Michael's key responsibilities is to manage the UBS High Performance Computing Research Lab and he has been involved in a number of initiatives in the HPC space. 2. Grid in the Commercial World. Fred Gedling, Chief Technology Officer EMEA and Senior Vice President Global Services, DataSynapse. Grid computing gets mentions in the press for community programs, starting last decade with SETI@home. Government, national and supranational initiatives in grid receive some press. One of the IT industry's best-kept secrets is the use of grid computing by commercial organizations with spectacular results. Grid computing and its evolution into application virtualization are discussed, along with how this is key to the next-generation data center. Speaker Bio: Fred Gedling holds the joint roles of Chief Technology Officer for EMEA and Senior Vice President of Global Services at DataSynapse, a global provider of application virtualisation software.
Based in London and working closely with organisations seeking to optimise their IT infrastructures, Fred offers unique insights into the technology of virtualisation as well as the methodology of establishing ROI and rapid deployment to the immediate advantage of the business. Fred has more than fifteen years' experience of enterprise middleware and high-performance infrastructures. Prior to DataSynapse he worked in high performance CRM middleware and was the CTO EMEA for New Era of Networks (NEON) during the rapid growth of Enterprise Application Integration. His 25-year career in technology also includes management positions at Goldman Sachs and Stratus Computer. Fred holds a First Class BSc (Hons) degree in Physics with Astrophysics from the University of Leeds and had the privilege of being a summer student at CERN.

    8. Intranode data communications in a parallel computer

      DOE Patents [OSTI]

      Archer, Charles J; Blocksome, Michael A; Miller, Douglas R; Ratterman, Joseph D; Smith, Brian E

      2013-07-23

      Intranode data communications in a parallel computer that includes compute nodes configured to execute processes, where the data communications include: allocating, upon initialization of a first process of a compute node, a region of shared memory; establishing, by the first process, a predefined number of message buffers, each message buffer associated with a process to be initialized on the compute node; sending, to a second process on the same compute node, a data communications message without determining whether the second process has been initialized, including storing the data communications message in the message buffer of the second process; and upon initialization of the second process: retrieving, by the second process, a pointer to the second process's message buffer; and retrieving, by the second process from the second process's message buffer in dependence upon the pointer, the data communications message sent by the first process.
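      The key trick, message buffers that exist before their owners do, can be sketched with ordinary queues; threads stand in for the on-node processes and the queue list for the shared-memory region. This is an analogy, not the patented mechanism:

```python
# Pre-established per-rank mailboxes: send before the receiver initializes.
import queue
import threading
import time

N_PROCS = 2
# "Shared memory region": one pre-allocated mailbox per process rank.
mailboxes = [queue.Queue() for _ in range(N_PROCS)]

def proc0():
    # Send to rank 1 immediately; no check that rank 1 has initialized yet.
    mailboxes[1].put(("rank0", "hello before you even started"))

def proc1():
    time.sleep(0.1)                   # initializes late, after the send
    sender, msg = mailboxes[1].get()  # retrieves the waiting message from its own buffer
    print(f"rank1 received from {sender}: {msg}")

threads = [threading.Thread(target=proc0), threading.Thread(target=proc1)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```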

    9. Intranode data communications in a parallel computer

      DOE Patents [OSTI]

      Archer, Charles J; Blocksome, Michael A; Miller, Douglas R; Ratterman, Joseph D; Smith, Brian E

      2014-01-07

      Intranode data communications in a parallel computer that includes compute nodes configured to execute processes, where the data communications include: allocating, upon initialization of a first process of a compute node, a region of shared memory; establishing, by the first process, a predefined number of message buffers, each message buffer associated with a process to be initialized on the compute node; sending, to a second process on the same compute node, a data communications message without determining whether the second process has been initialized, including storing the data communications message in the message buffer of the second process; and upon initialization of the second process: retrieving, by the second process, a pointer to the second process's message buffer; and retrieving, by the second process from the second process's message buffer in dependence upon the pointer, the data communications message sent by the first process.

    10. Mark Hereld | Argonne Leadership Computing Facility

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Hereld Manager, Visualization and Data Analysis Mark Hereld Argonne National Laboratory 9700 South Cass Avenue Building 240 - Rm. 4139 Argonne, IL 60439 630-252-4170 hereld@mcs.anl.gov Mark Hereld is the ALCF's Visualization and Data Analysis Manager. He is also a member of the research staff in Argonne's Mathematics and Computer Science Division and a Senior Fellow of the Computation Institute with a joint appointment at the University of Chicago. His work in understanding simulation on future

    11. Fermilab | Science at Fermilab | Computing | High-performance Computing

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Lattice QCD Farm at the Grid Computing Center at Fermilab. Lattice QCD Farm at the Grid Computing Center at Fermilab. Computing High-performance Computing A workstation computer can perform billions of multiplication and addition operations each second. High-performance parallel computing becomes necessary when computations become too large or too long to complete on a single such machine. In parallel computing, computations are divided up so that many computers can work on the same problem at

    12. Aggregating job exit statuses of a plurality of compute nodes executing a parallel application

      DOE Patents [OSTI]

      Aho, Michael E.; Attinella, John E.; Gooding, Thomas M.; Mundy, Michael B.

      2015-07-21

      Aggregating job exit statuses of a plurality of compute nodes executing a parallel application, including: identifying a subset of compute nodes in the parallel computer to execute the parallel application; selecting one compute node in the subset of compute nodes in the parallel computer as a job leader compute node; initiating execution of the parallel application on the subset of compute nodes; receiving an exit status from each compute node in the subset of compute nodes, where the exit status for each compute node includes information describing execution of some portion of the parallel application by the compute node; aggregating each exit status from each compute node in the subset of compute nodes; and sending an aggregated exit status for the subset of compute nodes in the parallel computer.
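      The aggregation step is straightforward to picture with worker processes standing in for compute nodes; the sketch below (illustrative names and statuses) collects one exit record per worker and reduces them to a single job-level summary, the role the patent assigns to the job leader:

```python
# Collecting and aggregating per-node exit statuses (illustrative sketch).
from multiprocessing import Pool

def run_task(rank):
    """Each 'compute node' executes its portion and reports an exit status."""
    ok = rank != 2  # pretend rank 2 fails, to exercise the aggregation
    return {"rank": rank, "exit_code": 0 if ok else 1,
            "detail": "done" if ok else "solver error"}

if __name__ == "__main__":
    with Pool(4) as pool:
        statuses = pool.map(run_task, range(4))  # one status per node
    aggregated = {
        "worst_exit_code": max(s["exit_code"] for s in statuses),
        "failed_ranks": [s["rank"] for s in statuses if s["exit_code"] != 0],
    }
    print(aggregated)  # a single record summarizing the whole job
```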

    13. An Analysis of Nuclear Fuel Burnup in the AGR-1 TRISO Fuel Experiment Using Gamma Spectrometry, Mass Spectrometry, and Computational Simulation Techniques

      SciTech Connect (OSTI)

      Jason M. Harp; Paul A. Demkowicz; Phillip L. Winston; James W. Sterbentz

      2014-10-01

      AGR-1 was the first in a series of experiments designed to test US TRISO fuel under high-temperature gas-cooled reactor irradiation conditions. This experiment was irradiated in the Advanced Test Reactor (ATR) at Idaho National Laboratory (INL) and is currently undergoing post-irradiation examination (PIE) at INL and Oak Ridge National Laboratory. One component of the AGR-1 PIE is the experimental evaluation of the burnup of the fuel by two separate techniques. Gamma spectrometry was used to nondestructively evaluate the burnup of all 72 of the TRISO fuel compacts that comprised the AGR-1 experiment. Two methods for evaluating burnup by gamma spectrometry were developed, one based on the Cs-137 activity and the other based on the ratio of Cs-134 and Cs-137 activities. Burnup values determined from both methods compared well with the values predicted from simulations. The highest measured burnup was 20.1% FIMA for the direct method and 20.0% FIMA for the ratio method (compared to 19.56% FIMA from simulations). An advantage of the ratio method is that the burnup of the cylindrical fuel compacts can be determined in small (2.5 mm) axial increments and an axial burnup profile can be produced. Destructive chemical analysis by inductively coupled plasma mass spectrometry (ICP-MS) was then performed on selected compacts that were representative of the expected range of fuel burnups in the experiment to compare with the burnup values determined by gamma spectrometry. The compacts analyzed by mass spectrometry had a burnup range of 19.3% FIMA to 10.7% FIMA. The mass spectrometry evaluation of burnup for the four compacts agreed well with the gamma spectrometry burnup evaluations and the expected burnup from simulation. For all four compacts analyzed by mass spectrometry, the maximum spread among the three experimentally determined values and the predicted value was 6% or less. The results confirm the accuracy of the nondestructive burnup evaluation from gamma spectrometry for TRISO fuel compacts across a burnup range of approximately 10 to 20% FIMA and also validate the approach used in the physics simulation of the AGR-1 experiment.
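      The final step of the ratio method reduces to interpolation: the physics simulation supplies a calibration curve of Cs-134/Cs-137 activity ratio versus burnup, and a measured ratio is read off against it. The curve values in this sketch are made up for illustration and are not the AGR-1 calibration:

```python
# Burnup from an isotope activity ratio via a simulated calibration curve.
from bisect import bisect_left

# Hypothetical calibration points: (activity ratio, burnup in % FIMA), ratio increasing.
CURVE = [(0.05, 5.0), (0.10, 10.0), (0.16, 15.0), (0.23, 20.0)]

def burnup_from_ratio(ratio):
    ratios = [r for r, _ in CURVE]
    i = bisect_left(ratios, ratio)
    if i == 0 or i == len(CURVE):
        raise ValueError("measured ratio outside calibrated range")
    (r0, b0), (r1, b1) = CURVE[i - 1], CURVE[i]
    return b0 + (b1 - b0) * (ratio - r0) / (r1 - r0)  # linear interpolation

print(f"{burnup_from_ratio(0.20):.1f} % FIMA")  # -> 17.9 % FIMA
```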

    14. Broadcasting collective operation contributions throughout a parallel computer

      DOE Patents [OSTI]

      Faraj, Ahmad (Rochester, MN)

      2012-02-21

      Methods, systems, and products are disclosed for broadcasting collective operation contributions throughout a parallel computer. The parallel computer includes a plurality of compute nodes connected together through a data communications network. Each compute node has a plurality of processors for use in collective parallel operations on the parallel computer. Broadcasting collective operation contributions throughout a parallel computer according to embodiments of the present invention includes: transmitting, by each processor on each compute node, that processor's collective operation contribution to the other processors on that compute node using intra-node communications; and transmitting on a designated network link, by each processor on each compute node according to a serial processor transmission sequence, that processor's collective operation contribution to the other processors on the other compute nodes using inter-node communications.
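      The two-phase pattern, intra-node sharing followed by a serial inter-node transmission sequence, can be simulated with plain dictionaries; this sketch shows the data flow only and is not the patented implementation:

```python
# Two-phase broadcast of per-processor contributions (illustrative sketch).
N_NODES, PROCS_PER_NODE = 2, 2
contrib = {(n, p): f"c{n}{p}" for n in range(N_NODES) for p in range(PROCS_PER_NODE)}
received = {key: {key: val} for key, val in contrib.items()}

# Phase 1: intra-node -- every processor shares its contribution with node peers.
for n in range(N_NODES):
    for p in range(PROCS_PER_NODE):
        for q in range(PROCS_PER_NODE):
            received[(n, q)][(n, p)] = contrib[(n, p)]

# Phase 2: inter-node, following a serial processor transmission sequence --
# each processor rank p in turn forwards its node's contribution p to every
# other node, whose processors share it on-node as it arrives.
for p in range(PROCS_PER_NODE):  # serial sequence over processor ranks
    for src in range(N_NODES):
        for dst in range(N_NODES):
            if dst != src:
                for q in range(PROCS_PER_NODE):
                    received[(dst, q)][(src, p)] = contrib[(src, p)]

assert all(len(v) == N_NODES * PROCS_PER_NODE for v in received.values())
print(f"every processor now holds all {N_NODES * PROCS_PER_NODE} contributions")
```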

    15. Pacing a data transfer operation between compute nodes on a parallel computer

      DOE Patents [OSTI]

      Blocksome, Michael A. (Rochester, MN)

      2011-09-13

      Methods, systems, and products are disclosed for pacing a data transfer between compute nodes on a parallel computer that include: transferring, by an origin compute node, a chunk of an application message to a target compute node; sending, by the origin compute node, a pacing request to a target direct memory access (`DMA`) engine on the target compute node using a remote get DMA operation; determining, by the origin compute node, whether a pacing response to the pacing request has been received from the target DMA engine; and transferring, by the origin compute node, a next chunk of the application message if the pacing response to the pacing request has been received from the target DMA engine.
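      The pacing handshake amounts to: send a chunk, request an acknowledgement, and wait for it before sending the next chunk. In the sketch below, queues stand in for the DMA engines and the network; all names are illustrative:

```python
# Paced chunked transfer with a pacing request/response handshake (sketch).
import queue
import threading

to_target = queue.Queue()  # carries chunks and pacing requests
to_origin = queue.Queue()  # carries pacing responses

def origin(message, chunk_size=4):
    for i in range(0, len(message), chunk_size):
        to_target.put(("chunk", message[i:i + chunk_size]))
        to_target.put(("pacing_request", None))      # ask the target side to respond
        assert to_origin.get() == "pacing_response"  # wait before sending the next chunk
    to_target.put(("done", None))

def target():
    parts = []
    while True:
        kind, payload = to_target.get()
        if kind == "chunk":
            parts.append(payload)
        elif kind == "pacing_request":
            to_origin.put("pacing_response")  # the target side acknowledges
        else:
            break
    print("target reassembled:", "".join(parts))

t = threading.Thread(target=target)
t.start()
origin("pacing keeps the receiver from being overrun")
t.join()
```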

    16. Computational Methods for Analyzing Fluid Flow Dynamics from Digital Imagery

      SciTech Connect (OSTI)

      Luttman, A.

      2012-03-30

      The main long-term goal of this work is to perform computational dynamics analysis and quantify uncertainty from vector fields computed directly from measured data. Global analysis based on observed spatiotemporal evolution is performed using an objective function based on expected physics and informed scientific priors, variational optimization to compute vector fields from measured data, and transport analysis proceeding from observations and priors. A mathematical formulation for computing flow fields is set up, and the minimizer of the resulting problem is computed. An application to oceanic flow based on sea surface temperature is presented.

    17. Computers in Commercial Buildings

      U.S. Energy Information Administration (EIA) Indexed Site

      Government-owned buildings of all types had, on average, more than one computer per person (1,104 computers per thousand employees). They also had a fairly high ratio of...

    18. Computers for Learning

      Broader source: Energy.gov [DOE]

      Through Executive Order 12999, the Computers for Learning Program was established to provide Federal agencies a quick and easy system for donating excess and surplus computer equipment to schools...

    19. Cognitive Computing for Security.

      SciTech Connect (OSTI)

      Debenedictis, Erik; Rothganger, Fredrick; Aimone, James Bradley; Marinella, Matthew; Evans, Brian Robert; Warrender, Christina E.; Mickel, Patrick

      2015-12-01

      Final report for Cognitive Computing for Security LDRD 165613. It reports on the development of a hybrid general-purpose/neuromorphic computer architecture, with an emphasis on potential implementation with memristors.

    20. Getting Computer Accounts

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Computer Accounts When you first arrive at the lab, you will be presented with lots of forms that must be read and signed in order to get an ID and computer access. You must ensure...

    1. Modular Environment for Graph Research and Analysis with a Persistent

      Energy Science and Technology Software Center (OSTI)

      2009-11-18

      The MEGRAPHS software package provides a front-end to graphs and vectors residing on special-purpose computing resources. It allows these data objects to be instantiated, destroyed, and manipulated. A variety of primitives needed for typical graph analyses are provided. An example program illustrating how MEGRAPHS can be used to implement a PageRank computation is included in the distribution. The MEGRAPHS software package is targeted towards developers of graph algorithms. Programmers using MEGRAPHS would write graph analysis programs in terms of high-level graph and vector operations. These computations are transparently executed on the Cray XMT compute nodes.
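      For readers unfamiliar with the example mentioned above, this is the shape of a PageRank computation in plain Python; dictionaries play the role that MEGRAPHS' graph and vector objects would play, and the MEGRAPHS API itself is not used here:

```python
# Power-iteration PageRank over an adjacency dictionary (illustrative sketch).
def pagerank(links, damping=0.85, iters=50):
    nodes = list(links)
    rank = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iters):
        new = {n: (1.0 - damping) / len(nodes) for n in nodes}
        for n, outs in links.items():
            if outs:
                share = damping * rank[n] / len(outs)
                for m in outs:
                    new[m] += share
            else:  # dangling node: spread its rank uniformly
                for m in nodes:
                    new[m] += damping * rank[n] / len(nodes)
        rank = new
    return rank

web = {"a": ["b", "c"], "b": ["c"], "c": ["a"], "d": ["c"]}
for page, score in sorted(pagerank(web).items(), key=lambda kv: -kv[1]):
    print(page, round(score, 3))
```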

    2. Advanced Scientific Computing Research

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Advanced Scientific Computing Research Discovering, developing, and deploying computational and networking capabilities to analyze, model, simulate, and predict complex phenomena important to the Department of Energy. Get Expertise: Pieter Swart, (505) 665-9437; Pat McCormick, (505) 665-0201; Dave Higdon, (505) 667-2091. Fulfilling the potential of emerging computing systems and architectures beyond today's tools and techniques to deliver

    3. Venkatram Vishwanath | Argonne Leadership Computing Facility

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      leadership-class computers, including I/O forwarding and power consumption on the Blue Gene/P and Blue Gene/Q systems. Vishwanath won a Department of Energy SciDAC Scientific...

    4. Computing and Computational Sciences Directorate - Information Technology

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Computational Sciences and Engineering The Computational Sciences and Engineering Division (CSED) is ORNL's premier source of basic and applied research in the field of data sciences and knowledge discovery. CSED's science agenda is focused on research and development related to knowledge discovery enabled by the explosive growth in the availability, size, and variability of dynamic and disparate data sources. This science agenda encompasses data sciences as well as advanced modeling and

    5. Darshan | Argonne Leadership Computing Facility

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)


    6. Projects | Argonne Leadership Computing Facility

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Projects bgclang Compiler Hal Finkel Cobalt Scheduler Bill Allcock, Paul Rich, Brian Toonen, Tom Uram GLEAN: Scalable In Situ Analysis and I/O Acceleration on Leadership Computing Systems Michael E. Papka, Venkat Vishwanath, Mark Hereld, Preeti Malakar, Joe Insley, Silvio Rizzi, Tom Uram Petrel: Data Management and Sharing Pilot Ian Foster, Michael E. Papka, Bill Allcock, Ben Allen, Rachana Ananthakrishnan, Lukasz Lacinski The Swift Parallel Scripting Language for ALCF Systems Michael Wilde,

    7. MADNESS | Argonne Leadership Computing Facility

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      MADNESS Overview: MADNESS is a numerical tool kit used to solve integral differential equations using multi-resolution analysis and a low-rank separation representation. MADNESS can solve multi-dimensional equations, currently up

    8. VISTA - computational tools for comparative genomics

      SciTech Connect (OSTI)

      Frazer, Kelly A.; Pachter, Lior; Poliakov, Alexander; Rubin,Edward M.; Dubchak, Inna

      2004-01-01

      Comparison of DNA sequences from different species is a fundamental method for identifying functional elements in genomes. Here we describe the VISTA family of tools created to assist biologists in carrying out this task. Our first VISTA server at http://www-gsd.lbl.gov/VISTA/ was launched in the summer of 2000 and was designed to align long genomic sequences and visualize these alignments with associated functional annotations. Currently the VISTA site includes multiple comparative genomics tools and provides users with rich capabilities to browse pre-computed whole-genome alignments of large vertebrate genomes and other groups of organisms with VISTA Browser, submit their own sequences of interest to several VISTA servers for various types of comparative analysis, and obtain detailed comparative analysis results for a set of cardiovascular genes. We illustrate the capabilities of the VISTA site by the analysis of a 180 kilobase (kb) interval on human chromosome 5 that encodes the kinesin family member 3A (KIF3A) protein.

    9. BNL ATLAS Grid Computing

      ScienceCinema (OSTI)

      Michael Ernst

      2010-01-08

      As the sole Tier-1 computing facility for ATLAS in the United States and the largest ATLAS computing center worldwide, Brookhaven provides a large portion of the overall computing resources for U.S. collaborators and serves as the central hub for storing,

    10. Computing environment logbook

      DOE Patents [OSTI]

      Osbourn, Gordon C; Bouchard, Ann M

      2012-09-18

      A computing environment logbook logs events occurring within a computing environment. The events are displayed as a history of past events within the logbook of the computing environment. The logbook provides search functionality to search through the history of past events to find one or more selected past events, and further, enables an undo of the one or more selected past events.

    11. Mathematical and Computational Epidemiology

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Mathematical and Computational Epidemiology, Los Alamos National Laboratory. Research areas: agent-based modeling; mixing patterns and social networks; mathematical epidemiology; social Internet research; uncertainty quantification. Mathematical and Computational Epidemiology (MCEpi): Quantifying model uncertainty in agent-based simulations for

    12. Computer virus information update CIAC-2301

      SciTech Connect (OSTI)

      Orvis, W.J.

      1994-01-15

      While CIAC periodically issues bulletins about specific computer viruses, these bulletins do not cover all the computer viruses that affect desktop computers. The purpose of this document is to identify most of the known viruses for the MS-DOS and Macintosh platforms and give an overview of the effects of each virus. The authors also include information on some Windows, Atari, and Amiga viruses. This document is revised periodically as new virus information becomes available. This document replaces all earlier versions of the CIAC Computer Virus Information Update. The date on the front cover indicates the date on which the information in this document was extracted from CIAC's virus database.

    13. computational-hydaulics-march-30

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Computational Hydraulics and Aerodynamics using STAR-CCM+ for CFD Analysis, March 30-31, 2011, Argonne, Illinois. Dr. Steven Lottes. A training course in the use of computational hydraulics and aerodynamics CFD software using CD-adapco's STAR-CCM+ for analysis was held at TRACC from March 30-31, 2011. The course assumed a basic knowledge of fluid mechanics and made extensive use of hands-on tutorials.

    14. Predictive Dynamic Security Assessment through Advanced Computing

      SciTech Connect (OSTI)

      Huang, Zhenyu; Diao, Ruisheng; Jin, Shuangshuang; Chen, Yousu

      2014-11-30

      Abstract— Traditional dynamic security assessment is limited by several factors and thus falls short in providing real-time information to be predictive for power system operation. These factors include the steady-state assumption of current operating points, static transfer limits, and low computational speed. This paper addresses these factors and frames predictive dynamic security assessment. The primary objective of predictive dynamic security assessment is to enhance the functionality and computational process of dynamic security assessment through the use of high-speed phasor measurements and the application of advanced computing technologies for faster-than-real-time simulation. This paper presents algorithms, computing platforms, and simulation frameworks that constitute the predictive dynamic security assessment capability. Examples of phasor application and fast computation for dynamic security assessment are included to demonstrate the feasibility and speed enhancement for real-time applications.

    15. Adjoints and Large Data Sets in Computational Fluid Dynamics...

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Speaker: Oana Marin, Postdoctoral Appointee, MCS. Optimal flow control and stability analysis are some of the fields within Computational Fluid Dynamics (CFD) that...

    16. Programs | Argonne Leadership Computing Facility

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Featured Science: Snapshot of the global structure of a radiation-dominated accretion flow around a black hole, computed using the Athena++ code. Magnetohydrodynamic Models of Accretion Including Radiation Transport, James Stone. Allocation Program: INCITE. Allocation Hours: 47 Million

    17. Nuclear Arms Control R&D Consortium includes Los Alamos

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Nuclear Arms Control R&D Consortium includes Los Alamos A consortium led by the University of Michigan that includes LANL as ...

    18. A Compute-Efficient Bitmap Compression Index for Database Applications

      Energy Science and Technology Software Center (OSTI)

      2006-01-01

      FastBit: A Compute-Efficient Bitmap Compression Index for Database Applications. The Word-Aligned Hybrid (WAH) bitmap compression method and data structure is highly efficient for performing search and retrieval operations on large datasets. The WAH technique is optimized for computational efficiency. The WAH-based bitmap indexing software, called FastBit, is particularly appropriate to infrequently varying databases, including those found in the on-line analytical processing (OLAP) industry. Some commercial database products already include some version of a bitmap index, which could possibly be replaced by the WAH bitmap compression techniques for potentially large operational speedup. Experimental results show performance improvements by an average factor of 10 over bitmap technology used by industry, as well as increased efficiencies in constructing compressed bitmaps. FastBit can be used as a stand-alone index, or integrated into a database system. When integrated into a database system, this technique may be particularly useful for real-time business analysis applications. Additional FastBit applications may include efficient real-time exploration of scientific models, such as climate and combustion simulations, to minimize search time for analysis and subsequent data visualization. FastBit was proven theoretically to be time-optimal because it provides a search time proportional to the number of elements selected by the index.
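      The WAH idea itself is compact: cut the bitmap into 31-bit groups, emit mixed groups as literal words, and collapse runs of identical all-0 or all-1 groups into single fill words. The toy encoder below illustrates the word layout; it is a sketch, not FastBit's implementation:

```python
# Toy Word-Aligned Hybrid (WAH) bitmap encoder (illustrative sketch).
def wah_encode(bits):
    # Pad to a multiple of 31 and split into 31-bit groups.
    bits = bits + [0] * (-len(bits) % 31)
    groups = [bits[i:i + 31] for i in range(0, len(bits), 31)]
    words, i = [], 0
    while i < len(groups):
        g = groups[i]
        if all(b == 0 for b in g) or all(b == 1 for b in g):
            fill_bit, run = g[0], 1
            while i + run < len(groups) and groups[i + run] == g:
                run += 1
            # Fill word: MSB=1 marks a fill, the next bit is the fill value,
            # and the remaining 30 bits count the groups in the run.
            words.append((1 << 31) | (fill_bit << 30) | run)
            i += run
        else:
            # Literal word: MSB=0, followed by the 31 group bits verbatim.
            value = 0
            for b in g:
                value = (value << 1) | b
            words.append(value)
            i += 1
    return words

bitmap = [0] * 62 + [1, 0, 1] + [1] * 59
print([hex(w) for w in wah_encode(bitmap)])  # fill, literal, fill
```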

    19. Trends and challenges when including microstructure in materials...

      Office of Scientific and Technical Information (OSTI)

      Trends and challenges when including microstructure in materials modeling: Examples of ...

    20. Newport News in Review, ch. 47, segment includes TEDF groundbreaking...

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      https://www.jlab.org/news/articles/newport-news-review-ch-47-segment-includes-tedf-groundbreaking-event Newport News in Review, ch. 47, segment includes TEDF groundbreaking event...

    1. FEMP Expands ESPC ENABLE Program to Include More Energy Conservation...

      Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

      FEMP Expands ESPC ENABLE Program to Include More Energy Conservation Measures. November 13, 2013 - 12:00am...

    2. Property:Number of Plants included in Capacity Estimate | Open...

      Open Energy Info (EERE)

      Property Name: Number of Plants included in Capacity Estimate. Property Type: Number.

    3. Property:Number of Plants Included in Planned Estimate | Open...

      Open Energy Info (EERE)

      Property Name: Number of Plants Included in Planned Estimate. Property Type: String. Description: Number of...

    4. Microfluidic devices and methods including porous polymer monoliths...

      Office of Scientific and Technical Information (OSTI)

      Patent: Microfluidic devices and methods including porous polymer monoliths ...

    5. Solar Energy Education. Reader, Part II. Sun story. [Includes...

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Solar Energy Education. Reader, Part II. Sun story. Includes glossary ...

    6. Prevention of Harassment (Including Sexual Harassment) and Retaliation...

      Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

      Prevention of Harassment (Including Sexual Harassment) and Retaliation Policy Statement DOE...

    7. Natural Gas Delivered to Consumers in North Carolina (Including...

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Natural Gas Delivered to Consumers in North Carolina (Including Vehicle Fuel) (Million Cubic Feet) Year Jan Feb Mar Apr May Jun...

    8. Computation Directorate 2007 Annual Report

      SciTech Connect (OSTI)

      Henson, V E; Guse, J A

      2008-03-06

      If there is a single word that both characterized 2007 and dominated the thoughts and actions of many Laboratory employees throughout the year, it is transition. Transition refers to the major shift that took place on October 1, when the University of California relinquished management responsibility for Lawrence Livermore National Laboratory (LLNL), and Lawrence Livermore National Security, LLC (LLNS), became the new Laboratory management contractor for the Department of Energy's (DOE's) National Nuclear Security Administration (NNSA). In the 55 years under the University of California, LLNL amassed an extraordinary record of significant accomplishments, clever inventions, and momentous contributions in the service of protecting the nation. This legacy provides the new organization with a built-in history, a tradition of excellence, and a solid set of core competencies from which to build the future. I am proud to note that in the nearly seven years I have had the privilege of leading the Computation Directorate, our talented and dedicated staff has made far-reaching contributions to the legacy and tradition we passed on to LLNS. Our place among the world's leaders in high-performance computing, algorithmic research and development, applications, and information technology (IT) services and support is solid. I am especially gratified to report that through all the transition turmoil, and it has been considerable, the Computation Directorate continues to produce remarkable achievements. Our most important asset--the talented, skilled, and creative people who work in Computation--has continued a long-standing Laboratory tradition of delivering cutting-edge science even in the face of adversity. The scope of those achievements is breathtaking, and in 2007, our accomplishments span an amazing range of topics. From making an important contribution to a Nobel Prize-winning effort to creating tools that can detect malicious codes embedded in commercial software; from expanding BlueGene/L, the world's most powerful computer, by 60% and using it to capture the most prestigious prize in the field of computing, to helping create an automated control system for the National Ignition Facility (NIF) that monitors and adjusts more than 60,000 control and diagnostic points; from creating a microarray probe that rapidly detects virulent high-threat organisms, natural or bioterrorist in origin, to replacing large numbers of physical computer servers with small numbers of virtual servers, reducing operating expense by 60%, the people in Computation have been at the center of weighty projects whose impacts are felt across the Laboratory and the DOE community. The accomplishments I just mentioned, and another two dozen or so, make up the stories contained in this report. While they form an exceptionally diverse set of projects and topics, it is what they have in common that excites me. They share the characteristic of being central, often crucial, to the mission-driven business of the Laboratory. Computational science has become fundamental to nearly every aspect of the Laboratory's approach to science and even to the conduct of administration. It is difficult to consider how we would proceed without computing, which occurs at all scales, from handheld and desktop computing to the systems controlling the instruments and mechanisms in the laboratories to the massively parallel supercomputers. The reasons for the dramatic increase in the importance of computing are manifest. 
Practical, fiscal, or political realities make the traditional approach to science, the cycle of theoretical analysis leading to experimental testing, leading to adjustment of theory, and so on, impossible, impractical, or forbidden. How, for example, can we understand the intricate relationship between human activity and weather and climate? We cannot test our hypotheses by experiment, which would require controlled use of the entire earth over centuries. It is only through extremely intricate, detailed computational simulation that we can test our theories, and simulati

    9. Sandia Energy - High Performance Computing

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)


    10. Microsoft PowerPoint - Microbial Genome and Metagenome Analysis Case Study (NERSC Workshop - May 7-8, 2009).ppt [Compatibility

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Microbial Genome & Metagenome Analysis: Computational Challenges. Natalia N. Ivanova,* Nikos C. Kyrpides,* Victor M. Markowitz** (*Genome Biology Program, Joint Genome Institute; **Lawrence Berkeley National Lab). Microbial genome & metagenome analysis. General aims: understand microbial life; apply to agriculture, bioremediation, biofuels, human health. Specific aims include: predict biochemistry & physiology of organisms based on genome sequence; explain known

    11. Method for transferring data from an unsecured computer to a secured computer

      DOE Patents [OSTI]

      Nilsen, Curt A. (Castro Valley, CA)

      1997-01-01

      A method is described for transferring data from an unsecured computer to a secured computer. The method includes transmitting the data and then receiving the data. Next, the data is retransmitted and rereceived. Then, it is determined if errors were introduced when the data was transmitted by the unsecured computer or received by the secured computer. Similarly, it is determined if errors were introduced when the data was retransmitted by the unsecured computer or rereceived by the secured computer. A warning signal is emitted from a warning device coupled to the secured computer if (i) an error was introduced when the data was transmitted or received, and (ii) an error was introduced when the data was retransmitted or rereceived.
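      The logic of the patent abstract reduces to: send twice, compare, and warn on mismatch. The sketch below stubs out the channel and the warning device and compares digests of the two received copies; everything here is illustrative:

```python
# Double-transmission error check (illustrative sketch).
import hashlib

def noisy_channel(data, flip=None):
    """Stand-in for a transmit/receive pair; optionally corrupts one byte."""
    out = bytearray(data)
    if flip is not None:
        out[flip] ^= 0x01
    return bytes(out)

def transfer(data, corrupt_retransmission=False):
    first = noisy_channel(data)  # transmit, then receive
    second = noisy_channel(data, flip=0 if corrupt_retransmission else None)  # retransmit, rereceive
    if hashlib.sha256(first).digest() != hashlib.sha256(second).digest():
        print("WARNING: transmission and retransmission disagree")  # the warning device
    else:
        print("transfer verified:", first.decode())

transfer(b"sensor reading 42")
transfer(b"sensor reading 42", corrupt_retransmission=True)
```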

    12. Computational method and system for modeling, analyzing, and optimizing DNA amplification and synthesis

      DOE Patents [OSTI]

      Vandersall, Jennifer A.; Gardner, Shea N.; Clague, David S.

      2010-05-04

      A computational method and computer-based system of modeling DNA synthesis for the design and interpretation of PCR amplification, parallel DNA synthesis, and microarray chip analysis. The method and system include modules that address the bioinformatics, kinetics, and thermodynamics of DNA amplification and synthesis. Specifically, the steps of DNA selection, as well as the kinetics and thermodynamics of DNA hybridization and extensions, are addressed, which enable the optimization of the processing and the prediction of the products as a function of DNA sequence, mixing protocol, time, temperature and concentration of species.
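      As a small taste of the thermodynamic inputs such a framework needs, the sketch below estimates primer melting temperatures with the classical Wallace rule; this is a textbook approximation, not the patented model:

```python
# Primer melting temperature via the Wallace rule (illustrative sketch).
def wallace_tm(primer):
    """Tm ~ 2(A+T) + 4(G+C) in degrees C, a rough rule for short primers."""
    p = primer.upper()
    at = p.count("A") + p.count("T")
    gc = p.count("G") + p.count("C")
    return 2.0 * at + 4.0 * gc

for primer in ("ATGCATGCATGCATGC", "GGCGCCGCGG"):
    print(primer, f"Tm ~ {wallace_tm(primer):.0f} C")  # GC-rich primers melt higher
```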

    13. The Magellan Final Report on Cloud Computing

      SciTech Connect (OSTI)

      Coghlan, Susan; Yelick, Katherine

      2011-12-21

      The goal of Magellan, a project funded through the U.S. Department of Energy (DOE) Office of Advanced Scientific Computing Research (ASCR), was to investigate the potential role of cloud computing in addressing the computing needs of the DOE Office of Science (SC), particularly the needs of mid-range computing and future data-intensive computing workloads. A set of research questions was formulated to probe various aspects of cloud computing, including performance, usability, and cost. To address these questions, a distributed testbed infrastructure was deployed at the Argonne Leadership Computing Facility (ALCF) and the National Energy Research Scientific Computing Center (NERSC). The testbed was designed to be flexible and capable enough to explore a variety of computing models and hardware design points in order to understand the impact for various scientific applications. During the project, the testbed also served as a valuable resource to application scientists. Applications from a diverse set of projects, such as MG-RAST (a metagenomics analysis server), the Joint Genome Institute, the STAR experiment at the Relativistic Heavy Ion Collider, and the Laser Interferometer Gravitational Wave Observatory (LIGO), were used by the Magellan project for benchmarking within the cloud, but the project teams were also able to accomplish important production science utilizing the Magellan cloud resources.

    14. Edison Electrifies Scientific Computing

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Edison Electrifies Scientific Computing Edison Electrifies Scientific Computing NERSC Flips Switch on New Flagship Supercomputer January 31, 2014 Contact: Margie Wylie, mwylie@lbl.gov, +1 510 486 7421 The National Energy Research Scientific Computing (NERSC) Center recently accepted "Edison," a new flagship supercomputer designed for scientific productivity. Named in honor of American inventor Thomas Alva Edison, the Cray XC30 will be dedicated in a ceremony held at the Department of

    15. Energy Aware Computing

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Partnerships Shifter: User Defined Images Archive APEX Home » R & D » Energy Aware Computing Energy Aware Computing Dynamic Frequency Scaling One means to lower the energy required to compute is to reduce the power usage on a node. One way to accomplish this is by lowering the frequency at which the CPU operates. However, reducing the clock speed increases the time to solution, creating a potential tradeoff. NERSC continues to examine how such methods impact its operations and its
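
      The power-versus-runtime tradeoff described here can be quantified with a toy model. The cubic power scaling below is a textbook idealization (dynamic power roughly proportional to frequency times voltage squared, with voltage lowered alongside frequency), not a measurement of any NERSC system; all constants are illustrative.

```python
def energy_joules(base_power_w: float, base_runtime_s: float,
                  f_scale: float) -> float:
    """Toy dynamic-frequency-scaling model: power falls roughly cubically
    with the clock scale while CPU-bound runtime grows as 1/scale."""
    power = base_power_w * f_scale ** 3
    runtime = base_runtime_s / f_scale
    return power * runtime

print(energy_joules(200.0, 100.0, 1.0))  # 20000.0 J at full clock
print(energy_joules(200.0, 100.0, 0.8))  # 12800.0 J: 25% slower, ~36% less energy
```

      Under this idealized model, downclocking always saves energy; in practice static power and system overheads erode the savings, which is the tradeoff NERSC continues to examine.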

    16. Monitoring system including an electronic sensor platform and an interrogation transceiver

      DOE Patents [OSTI]

      Kinzel, Robert L.; Sheets, Larry R.

      2003-09-23

      A wireless monitoring system suitable for a wide range of remote data collection applications. The system includes at least one Electronic Sensor Platform (ESP), an Interrogator Transceiver (IT), and a general-purpose host computer. The ESP functions as a remote data collector from a number of digital and analog sensors located therein. The host computer provides for data logging, testing, demonstration, installation checkout, and troubleshooting of the system. The IT relays signals between the host computer and one or more ESPs. The IT and host computer may be powered by a common power supply, and each ESP is individually powered by a battery. This monitoring system has an extremely low power consumption, which allows remote operation of the ESP for long periods; provides authenticated message traffic over a wireless network; utilizes state-of-health and tamper sensors to ensure that the ESP is secure and undamaged; has robust housing of the ESP suitable for use in radiation environments; and is low in cost. With one base station (host computer and interrogator transceiver), multiple ESPs may be controlled at a single monitoring site.

    17. Personal Computer Inventory System

      Energy Science and Technology Software Center (OSTI)

      1993-10-04

      PCIS is a database software system that is used to maintain a personal computer hardware and software inventory, track transfers of hardware and software, and provide reports.

    18. Applied Computer Science

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Results from a climate simulation computed using the Model for Prediction Across Scales (MPAS) code. This visualization shows the temperature of ocean currents using a green and ...

    19. Announcement of Computer Software

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      All Other Editions Are Obsolete UNITED STATES DEPARTMENT OF ENERGY ANNOUNCEMENT OF COMPUTER SOFTWARE OMB Control Number 1910-1400 (OMB Burden Disclosure Statement is on last...

    20. ASCR Leadership Computing Challenge proposals due February 3

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      For 2015 ALCC, projects of special interest to the DOE include the following: Energy ... leadership computing resources Exploration of new frontiers in physical, ...

    1. Excessing of Computers Used for Unclassified Controlled Information...

      Broader source: Energy.gov (indexed) [DOE]

      of approximately 800 information systems, including up to 115,000 personal computers, many powerful supercomputers, numerous servers, and a broad array of related...

    2. June 2015 Most Viewed Documents for Mathematics And Computing...

      Office of Scientific and Technical Information (OSTI)

      Including an examination of the Department of Energy's position on quality management Bennett, C.T. (1994) 74 Computational procedures for determining parameters in Ramberg-Osgood ...

    3. Intrepid/Challenger/Surveyor | Argonne Leadership Computing Facility

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      communications tools, so a wide range of science and engineering applications are straightforward to port, including those used by the computational science community for...

    4. Launching applications on compute and service processors running...

      Office of Scientific and Technical Information (OSTI)

      Technical Information Service, Springfield, VA at www.ntis.gov. A multiple processor computing apparatus includes a physical interconnect structure that is flexibly configurable...

    5. 60 Years of Computing | Department of Energy

      Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

      60 Years of Computing

    6. Information Science, Computing, Applied Math

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Capabilities Information Science, Computing, Applied Math National security ...

    7. Example Retro-Commissioning Scope of Work to Include Services...

      Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

      Retro-Commissioning Scope of Work to Include Services as Part of an ESPC Investment-Grade Audit Example Retro-Commissioning Scope of Work to Include Services as Part of an ESPC...

    8. SWS Online Tool now includes Multifamily Content, plus a How...

      Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

      SWS Online Tool now includes Multifamily Content, plus a How-To Webinar This announcement contains...

    9. Natural Gas Delivered to Consumers in Texas (Including Vehicle...

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Texas (Including Vehicle Fuel) (Million Cubic Feet) Natural Gas Delivered to Consumers in Texas (Including Vehicle Fuel) (Million Cubic Feet) Year Jan Feb Mar Apr May Jun Jul Aug...

    10. Natural Gas Delivered to Consumers in New Mexico (Including Vehicle...

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Mexico (Including Vehicle Fuel) (Million Cubic Feet) Natural Gas Delivered to Consumers in New Mexico (Including Vehicle Fuel) (Million Cubic Feet) Year Jan Feb Mar Apr May Jun Jul...

    11. Low latency, high bandwidth data communications between compute nodes in a parallel computer

      DOE Patents [OSTI]

      Archer, Charles J. (Rochester, MN); Blocksome, Michael A. (Rochester, MN); Ratterman, Joseph D. (Rochester, MN); Smith, Brian E. (Rochester, MN)

      2010-11-02

      Methods, parallel computers, and computer program products are disclosed for low latency, high bandwidth data communications between compute nodes in a parallel computer. Embodiments include receiving, by an origin direct memory access (`DMA`) engine of an origin compute node, data for transfer to a target compute node; sending, by the origin DMA engine of the origin compute node to a target DMA engine on the target compute node, a request to send (`RTS`) message; transferring, by the origin DMA engine, a predetermined portion of the data to the target compute node using memory FIFO operation; determining, by the origin DMA engine whether an acknowledgement of the RTS message has been received from the target DMA engine; if the an acknowledgement of the RTS message has not been received, transferring, by the origin DMA engine, another predetermined portion of the data to the target compute node using a memory FIFO operation; and if the acknowledgement of the RTS message has been received by the origin DMA engine, transferring, by the origin DMA engine, any remaining portion of the data to the target compute node using a direct put operation.
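
      The eager-then-rendezvous structure of this claim can be paraphrased in a few lines. The sketch below captures the control flow only; the callables stand in for the DMA engine operations the abstract names (RTS send, memory-FIFO transfer, direct put) and are not an actual messaging API.

```python
def origin_dma_send(data: bytes, chunk: int,
                    send_rts, fifo_send, direct_put, ack_received):
    """Stream eager chunks through the memory FIFO until the target
    acknowledges the RTS, then finish the remainder with one direct put."""
    send_rts()                        # ask the target to post a receive buffer
    offset = 0
    while not ack_received():         # no ACK yet: keep pushing FIFO chunks
        if offset >= len(data):
            return                    # entire payload went out eagerly
        fifo_send(data[offset:offset + chunk])
        offset += chunk
    direct_put(data[offset:])         # ACK arrived: target buffer is known
```

      The design point is latency hiding: useful data moves while the origin waits for the acknowledgement, and the higher-bandwidth direct put is used as soon as the target's buffer address is known.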

    12. Local Orthogonal Cutting Method for Computing Medial Curves and Its

      Office of Scientific and Technical Information (OSTI)

      Biomedical Applications (Journal Article) | SciTech Connect Local Orthogonal Cutting Method for Computing Medial Curves and Its Biomedical Applications Citation Details In-Document Search Title: Local Orthogonal Cutting Method for Computing Medial Curves and Its Biomedical Applications Medial curves have a wide range of applications in geometric modeling and analysis (such as shape matching) and biomedical engineering (such as morphometry and computer assisted surgery). The computation of

    13. Reach and get capability in a computing environment

      DOE Patents [OSTI]

      Bouchard, Ann M. (Albuquerque, NM); Osbourn, Gordon C. (Albuquerque, NM)

      2012-06-05

      A reach and get technique includes invoking a reach command from a reach location within a computing environment. A user can then navigate to an object within the computing environment and invoke a get command on the object. In response to invoking the get command, the computing environment is automatically navigated back to the reach location and the object copied into the reach location.
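
      A toy model of this interaction, with Python lists standing in for navigable containers; the names and the container model are invented for illustration and do not come from the patent.

```python
import copy

class Environment:
    """Navigable computing environment reduced to a list of containers."""
    def __init__(self, containers):
        self.containers = containers
        self.current = containers[0]
    def navigate_to(self, container):
        self.current = container

env = Environment([[], ["document"]])
reach_location = env.current             # 'reach' remembers the drop target
env.navigate_to(env.containers[1])       # user browses to some object...
obj = env.current[0]                     # ...and invokes 'get' on it
env.navigate_to(reach_location)          # environment returns automatically
reach_location.append(copy.deepcopy(obj))  # object copied into reach location
print(env.current)                       # ['document']
```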

    14. President's FY 2017 Budget Includes $878 Million for Fossil Energy

      Energy Savers [EERE]

      Programs | Department of Energy President's FY 2017 Budget Includes $878 Million for Fossil Energy Programs February 9, 2016 - 2:33pm President Obama's Fiscal Year (FY) 2017 Budget includes a programmatic level of $878 million for the Office of Fossil Energy (FE), including the use of $240 million in prior year funds, to advance technologies related to the reliable, efficient, affordable and environmentally

    15. Computer Processor Allocator

      Energy Science and Technology Software Center (OSTI)

      2004-03-01

      The Compute Processor Allocator (CPA) provides an efficient and reliable mechanism for managing and allotting processors in a massively parallel (MP) computer. It maintains information in a database on the health, configuration, and allocation of each processor. This persistent information is factored into each allocation decision. The CPA runs in a distributed fashion to avoid a single point of failure.
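
      The allocation decision can be sketched in a few lines. This is a minimal illustration of the idea, assuming a dict as a stand-in for the CPA's persistent health/allocation database; none of the names come from the actual software.

```python
def allocate_processors(db: dict, n: int, job: str) -> list:
    """Pick n healthy, unallocated processors and record the decision."""
    free = [p for p, rec in db.items()
            if rec["healthy"] and rec["job"] is None]
    if len(free) < n:
        raise RuntimeError("not enough healthy free processors")
    for p in free[:n]:
        db[p]["job"] = job   # persisted so a restarted allocator can recover
    return free[:n]

db = {i: {"healthy": i != 3, "job": None} for i in range(8)}
print(allocate_processors(db, 4, "job-42"))   # [0, 1, 2, 4]; node 3 skipped
```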

    16. Traffic information computing platform for big data

      SciTech Connect (OSTI)

      Duan, Zongtao; Li, Ying; Zheng, Xibin; Liu, Yan; Dai, Jiting; Kang, Jun

      2014-10-06

      The big data environment creates the data conditions for improving the quality of traffic information services. The goal of this article is to construct a traffic information computing platform for the big data environment. Through in-depth analysis of the connotations and technical characteristics of big data and traffic information services, a distributed traffic atomic information computing platform architecture is proposed. In the big data environment, this type of traffic atomic information computing architecture helps guarantee safe and efficient traffic operation, and more intelligent, personalized traffic information services can be offered to traffic information users.

    17. SC e-journals, Computer Science

      Office of Scientific and Technical Information (OSTI)

      & Mathematical Organization Theory Computational Complexity Computational Economics Computational Management ... Technology EURASIP Journal on Information Security ...

    18. Identifying failure in a tree network of a parallel computer

      DOE Patents [OSTI]

      Archer, Charles J.; Pinnow, Kurt W.; Wallenfelt, Brian P.

      2010-08-24

      Methods, parallel computers, and products are provided for identifying failure in a tree network of a parallel computer. The parallel computer includes one or more processing sets including an I/O node and a plurality of compute nodes. For each processing set embodiments include selecting a set of test compute nodes, the test compute nodes being a subset of the compute nodes of the processing set; measuring the performance of the I/O node of the processing set; measuring the performance of the selected set of test compute nodes; calculating a current test value in dependence upon the measured performance of the I/O node of the processing set, the measured performance of the set of test compute nodes, and a predetermined value for I/O node performance; and comparing the current test value with a predetermined tree performance threshold. If the current test value is below the predetermined tree performance threshold, embodiments include selecting another set of test compute nodes. If the current test value is not below the predetermined tree performance threshold, embodiments include selecting from the test compute nodes one or more potential problem nodes and testing individually potential problem nodes and links to potential problem nodes.
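
      The search loop in this claim is easy to follow in code. The sketch below is an assumed rendering: the patent does not specify how the "current test value" combines the measurements, so the ratio used here is a stand-in, and all callables are placeholders for the real measurements.

```python
def identify_failure(nodes, measure_io, measure_nodes,
                     expected_io, threshold, subset_size=4):
    """Walk subsets of compute nodes until a test value clears the tree
    threshold, then flag slow members as potential problem nodes."""
    remaining = list(nodes)
    while remaining:
        test_set, remaining = remaining[:subset_size], remaining[subset_size:]
        perf = measure_nodes(test_set)            # per-node performance
        current = min(perf) * measure_io() / expected_io  # assumed formula
        if current >= threshold:   # not below threshold: isolate suspects
            return [n for n, p in zip(test_set, perf) if p < threshold]
    return []   # every subset's test value fell below the threshold
```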

    19. Computing contingency statistics in parallel.

      SciTech Connect (OSTI)

      Bennett, Janine Camille; Thompson, David; Pebay, Philippe Pierre

      2010-09-01

      Statistical analysis is typically used to reduce the dimensionality of and infer meaning from data. A key challenge of any statistical analysis package aimed at large-scale, distributed data is to address the orthogonal issues of parallel scalability and numerical stability. Many statistical techniques, e.g., descriptive statistics or principal component analysis, are based on moments and co-moments and, using robust online update formulas, can be computed in an embarrassingly parallel manner, amenable to a map-reduce style implementation. In this paper we focus on contingency tables, through which numerous derived statistics such as joint and marginal probability, point-wise mutual information, information entropy, and χ² independence statistics can be directly obtained. However, contingency tables can become large as data size increases, requiring a correspondingly large amount of communication between processors. This potential increase in communication prevents optimal parallel speedup and is the main difference with moment-based statistics, where the amount of inter-processor communication is independent of data size. Here we present the design trade-offs which we made to implement the computation of contingency tables in parallel. We also study the parallel speedup and scalability properties of our open source implementation. In particular, we observe optimal speed-up and scalability when the contingency statistics are used in their appropriate context, namely, when the data input is not quasi-diffuse.
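
      Both halves of the contrast drawn here can be shown in a few lines. The merge below is the standard pairwise count/mean/M2 combination (the Chan et al. parallel variance formula), background the abstract relies on rather than code from the paper; the contingency-table merge that follows illustrates why its communication cost grows with the data.

```python
from collections import Counter

def merge_moments(n_a, mean_a, m2_a, n_b, mean_b, m2_b):
    """Pairwise merge of (count, mean, sum of squared deviations) aggregates,
    the map-reduce-friendly online update used for moment-based statistics."""
    n = n_a + n_b
    delta = mean_b - mean_a
    mean = mean_a + delta * n_b / n
    m2 = m2_a + m2_b + delta * delta * n_a * n_b / n
    return n, mean, m2

left = (3, 2.0, 2.0)     # summary of [1, 2, 3] on processor A
right = (2, 5.5, 0.5)    # summary of [5, 6] on processor B
n, mean, m2 = merge_moments(*left, *right)
print(n, mean, m2 / (n - 1))   # 5 3.4 4.3 (sample variance of [1,2,3,5,6])

# Contingency tables also merge by summation, but the table itself grows
# with data diversity -- the communication cost the paper analyzes:
print(Counter({("x", "y"): 3}) + Counter({("x", "y"): 1, ("x", "z"): 2}))
```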

    20. Mobile computing device configured to compute irradiance, glint, and glare of the sun

      DOE Patents [OSTI]

      Gupta, Vipin P; Ho, Clifford K; Khalsa, Siri Sahib

      2014-03-11

      Described herein are technologies pertaining to computing the solar irradiance distribution on a surface of a receiver in a concentrating solar power system or glint/glare emitted from a reflective entity. A mobile computing device includes at least one camera that captures images of the Sun and the entity of interest, wherein the images have pluralities of pixels having respective pluralities of intensity values. Based upon the intensity values of the pixels in the respective images, the solar irradiance distribution on the surface of the entity or glint/glare corresponding to the entity is computed by the mobile computing device.
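
      A heavily simplified version of the calibration idea: pixel counts on the target are converted to irradiance using the Sun's image, captured by the same camera, as the reference. The scaling scheme and the nominal irradiance figure are assumptions for illustration, not the patented algorithm.

```python
def irradiance_map(entity_pixels, sun_pixels, sun_irradiance_w_m2=1000.0):
    """Scale target pixel intensities by counts-per-irradiance derived from
    the Sun's image (same camera and exposure assumed)."""
    sun_mean = sum(sun_pixels) / len(sun_pixels)
    counts_per_wm2 = sun_mean / sun_irradiance_w_m2
    return [p / counts_per_wm2 for p in entity_pixels]

# Receiver pixels half as bright as the Sun's image map to ~500 W/m^2:
print(irradiance_map([100, 120], [200, 240]))   # [454.5..., 545.4...]
```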

    1. Extreme Scale Computing to Secure the Nation

      SciTech Connect (OSTI)

      Brown, D L; McGraw, J R; Johnson, J R; Frincke, D

      2009-11-10

      Since the dawn of modern electronic computing in the mid-1940s, U.S. national security programs have been dominant users of every new generation of high-performance computer. Indeed, the first general-purpose electronic computer, ENIAC (the Electronic Numerical Integrator and Computer), was used to calculate the expected explosive yield of early thermonuclear weapons designs. Even the U.S. numerical weather prediction program, another early application for high-performance computing, was initially funded jointly by sponsors that included the U.S. Air Force and Navy, agencies interested in accurate weather predictions to support U.S. military operations. For the decades of the cold war, national security requirements continued to drive the development of high performance computing (HPC), including advancement of the computing hardware and development of sophisticated simulation codes to support weapons and military aircraft design, numerical weather prediction, as well as data-intensive applications such as cryptography and cybersecurity.

      U.S. national security concerns continue to drive the development of high-performance computers and software in the U.S., and in fact, events following the end of the cold war have driven an increase in the growth rate of computer performance at the high end of the market. This mainly derives from our nation's observance of a moratorium on underground nuclear testing beginning in 1992, followed by our voluntary adherence to the Comprehensive Test Ban Treaty (CTBT) beginning in 1995. The CTBT prohibits further underground nuclear tests, which in the past had been a key component of the nation's science-based program for assuring the reliability, performance and safety of U.S. nuclear weapons. In response to this change, the U.S. Department of Energy (DOE) initiated the Science-Based Stockpile Stewardship (SBSS) program under the Fiscal Year 1994 National Defense Authorization Act, which requires, 'in the absence of nuclear testing, a program to: (1) Support a focused, multifaceted program to increase the understanding of the enduring stockpile; (2) Predict, detect, and evaluate potential problems of the aging of the stockpile; (3) Refurbish and re-manufacture weapons and components, as required; and (4) Maintain the science and engineering institutions needed to support the nation's nuclear deterrent, now and in the future'.

      This program continues to fulfill its national security mission by adding significant new capabilities for producing scientific results through large-scale computational simulation coupled with careful experimentation, including sub-critical nuclear experiments permitted under the CTBT. To develop the computational science and the computational horsepower needed to support its mission, SBSS initiated the Accelerated Strategic Computing Initiative, later renamed the Advanced Simulation & Computing (ASC) program (sidebar: 'History of ASC Computing Program Computing Capability'). The modern 3D computational simulation capability of the ASC program supports the assessment and certification of the current nuclear stockpile through calibration with past underground test (UGT) data. While an impressive accomplishment, continued evolution of national security mission requirements will demand computing resources at a significantly greater scale than we have today.
In particular, continued observance and potential Senate confirmation of the Comprehensive Test Ban Treaty (CTBT) together with the U.S administration's promise for a significant reduction in the size of the stockpile and the inexorable aging and consequent refurbishment of the stockpile all demand increasing refinement of our computational simulation capabilities. Assessment of the present and future stockpile with increased confidence of the safety and reliability without reliance upon calibration with past or future test data is a long-term goal of the ASC program. This will be accomplished through significant increases in the scientific bases that underlie the computational tools. Computer codes must be de

    2. Sandia Energy - New Project Is the ACME of Computer Science to...

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      New Project Is the ACME of Computer Science to Address Climate Change ...

    3. Building Energy Consumption Analysis

      Energy Science and Technology Software Center (OSTI)

      2005-03-02

      DOE2.1E-121SUNOS is a set of modules for energy analysis in buildings. Modules are included to calculate the heating and cooling loads for each space in a building for each hour of a year (LOADS), to simulate the operation and response of the equipment and systems that control temperature and humidity and distribute heating, cooling and ventilation to the building (SYSTEMS), to model energy conversion equipment that uses fuel or electricity to provide the required heating, cooling and electricity (PLANT), and to compute the cost of energy and building operation based on utility rate schedule and economic parameters (ECONOMICS).

    4. Method and system for benchmarking computers

      DOE Patents [OSTI]

      Gustafson, John L.

      1993-09-14

      A testing system and method for benchmarking computer systems. The system includes a store containing a scalable set of tasks to be performed to produce a solution in ever-increasing degrees of resolution as a larger number of the tasks are performed. A timing and control module allots to each computer a fixed benchmarking interval in which to perform the stored tasks. Means are provided for determining, after completion of the benchmarking interval, the degree of progress through the scalable set of tasks and for producing a benchmarking rating relating to the degree of progress for each computer.
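
      The fixed-interval idea inverts the usual benchmark: the workload scales and the time is held constant. A minimal sketch of that inversion follows; the task set and interval here are illustrative, not the patented system.

```python
import time

def fixed_time_benchmark(do_task, interval_s: float) -> int:
    """Run ever more of a scalable task set within a fixed wall-clock budget
    and report how far the machine got; the rating is the degree of progress
    rather than the time taken for a fixed workload."""
    deadline = time.monotonic() + interval_s
    completed = 0
    while time.monotonic() < deadline:
        do_task(completed)   # task k refines the solution one step further
        completed += 1
    return completed

rating = fixed_time_benchmark(lambda k: sum(j * j for j in range(10_000)), 1.0)
print("tasks completed in the interval:", rating)
```

      Because every machine runs for the same wall-clock interval, fast and slow systems can be compared directly by how much resolution each achieved.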

    5. Applications of Parallel Computers

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Computers Applications of Parallel Computers UCB CS267 Spring 2015 Tuesday & Thursday, 9:30-11:00 Pacific Time Applications of Parallel Computers, CS267, is a graduate-level course offered at the University of California, Berkeley. The course is being taught by UC Berkeley professor and LBNL Faculty Scientist Jim Demmel. CS267 is broadcast live over the internet and all NERSC users are invited to monitor the broadcast course, but course credit is available only to students registered for the

    6. Microfluidic devices and methods including porous polymer monoliths

      Office of Scientific and Technical Information (OSTI)

      (Patent) | SciTech Connect Patent: Microfluidic devices and methods including porous polymer monoliths Citation Details In-Document Search Title: Microfluidic devices and methods including porous polymer monoliths Microfluidic devices and methods including porous polymer monoliths are described. Polymerization techniques may be used to generate porous polymer monoliths having pores defined by a liquid component of a fluid mixture. The fluid mixture may contain iniferters and the resulting

    7. DOE Releases Request for Information on Critical Materials, Including Fuel

      Energy Savers [EERE]

      Cell Platinum Group Metal Catalysts | Department of Energy Request for Information on Critical Materials, Including Fuel Cell Platinum Group Metal Catalysts DOE Releases Request for Information on Critical Materials, Including Fuel Cell Platinum Group Metal Catalysts February 17, 2016 - 3:03pm Addthis The U.S. Department of Energy (DOE) has released a Request for Information (RFI) on critical materials in the energy sector, including fuel cell platinum group metal catalysts. The RFI is

    8. Percentage of Total Natural Gas Commercial Deliveries included in Prices

      Gasoline and Diesel Fuel Update (EIA)

      Data series: City Gate Price; Residential Price (and percentage of total residential deliveries included in prices); Commercial Price (and percentage of total commercial deliveries included in prices); Industrial Price (and percentage of total industrial deliveries included in prices); Electric Power Price. Monthly and annual series, Jul-15 through Dec-15, U.S.

    9. Percentage of Total Natural Gas Industrial Deliveries included in Prices

      U.S. Energy Information Administration (EIA) Indexed Site

      Data series: Pipeline and Distribution Use Price; City Gate Price; Residential Price (and percentage of total residential deliveries included in prices); Commercial Price (and percentage of total commercial deliveries included in prices); Industrial Price (and percentage of total industrial deliveries included in prices); Vehicle Fuel Price; Electric Power Price. Monthly and annual series from 2010.

    10. Percentage of Total Natural Gas Industrial Deliveries included in Prices

      U.S. Energy Information Administration (EIA) Indexed Site

      Data series: City Gate Price; Residential Price (and percentage of total residential deliveries included in prices); Commercial Price (and percentage of total commercial deliveries included in prices); Industrial Price (and percentage of total industrial deliveries included in prices); Electric Power Price. Monthly and annual series, Jul-15 through Dec-15, U.S.

    11. Percentage of Total Natural Gas Residential Deliveries included in Prices

      U.S. Energy Information Administration (EIA) Indexed Site

      Data series: City Gate Price; Residential Price (and percentage of total residential deliveries included in prices); Commercial Price (and percentage of total commercial deliveries included in prices); Industrial Price (and percentage of total industrial deliveries included in prices); Electric Power Price. Monthly and annual series, Jul-15 through Dec-15, U.S.

    12. Including Retro-Commissioning in Federal Energy Savings Performance

      Energy Savers [EERE]

      Contracts | Department of Energy Including Retro-Commissioning in Federal Energy Savings Performance Contracts Including Retro-Commissioning in Federal Energy Savings Performance Contracts Document describes guidance on the importance of (and steps to) including retro-commissioning in federal energy savings performance contracts (ESPCs). PDF icon 11_2_includingretrocommissioning.pdf More Documents & Publications Enabling Mass-Scale Financing for Federal Energy, Water, and Sustainability

    13. A Roadmap to Success: Hiring, Retaining, and Including People with

      Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

      Disabilities | Department of Energy A Roadmap to Success: Hiring, Retaining, and Including People with Disabilities A Roadmap to Success: Hiring, Retaining, and Including People with Disabilities January 20, 2016 9:00PM to 9:59PM MST Course Title and Description: "A Roadmap to Success: Hiring, Retaining and Including People with Disabilities" was released by OPM in connection with the 24th anniversary of the Americans with Disabilities Act (ADA). This course offers basic

    14. Introduction to Small-Scale Photovoltaic Systems (Including RETScreen...

      Open Energy Info (EERE)

      Photovoltaic Systems (Including RETScreen Case Study) (Webinar) Tool Summary Name: Introduction to Small-Scale Photovoltaic Systems...

    15. Introduction to Small-Scale Wind Energy Systems (Including RETScreen...

      Open Energy Info (EERE)

      Case Study) (Webinar) Tool Summary Name: Introduction to Small-Scale Wind Energy Systems (Including RETScreen Case Study) (Webinar) Focus

    16. Numerical simulations for low energy nuclear reactions including...

      Office of Scientific and Technical Information (OSTI)

      Numerical simulations for low energy nuclear reactions including direct channels to validate statistical models Citation Details In-Document Search Title: Numerical simulations for...

    17. Natural Gas Deliveries to Commercial Consumers (Including Vehicle...

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      West Virginia (Million Cubic Feet) Natural Gas Deliveries to Commercial Consumers (Including Vehicle Fuel through 1996) in West Virginia (Million Cubic Feet) Year Jan Feb Mar Apr...

    18. Systematic expansion of porous crystals to include large molecules | Center

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      for Gas Separations Relevant to Clean Energy Technologies | Blandine Jerome Systematic expansion of porous crystals to include large molecules

    19. METHOD OF FABRICATING ELECTRODES INCLUDING HIGH-CAPACITY, BINDER...

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      METHOD OF FABRICATING ELECTRODES INCLUDING HIGH-CAPACITY, BINDER-FREE ANODES ...

    20. Energy Department Expands Gas Gouging Reporting System to Include...

      Office of Environmental Management (EM)

      Washington, DC - Energy Secretary Samuel W. Bodman announced today that the Department of Energy has expanded its gas gouging reporting system to include a toll-free telephone ...

    1. U-182: Microsoft Windows Includes Some Invalid Certificates

      Broader source: Energy.gov [DOE]

      The operating system includes some invalid intermediate certificates. The vulnerability is due to the certificate authorities and not the operating system itself.

    2. An Arbitrary Precision Computation Package

      Energy Science and Technology Software Center (OSTI)

      2003-06-14

      This package permits a scientist to perform computations using an arbitrarily high level of numeric precision (the equivalent of hundreds or even thousands of digits), by making only minor changes to conventional C++ or Fortran-90 source code. This software takes advantage of certain properties of IEEE floating-point arithmetic, together with advanced numeric algorithms, custom data types and operator overloading. Also included in this package is the "Experimental Mathematician's Toolkit", which incorporates many of these facilities into an easy-to-use interactive program.

    3. Quantum steady computation

      SciTech Connect (OSTI)

      Castagnoli, G.

      1991-08-10

      This paper reports that current conceptions of quantum mechanical computers inherit from conventional digital machines two apparently interacting features, machine imperfection and temporal development of the computational process. On account of machine imperfection, the process would become ideally reversible only in the limiting case of zero speed. Therefore the process is irreversible in practice and cannot be considered to be a fundamental quantum one. By giving up classical features and using a linear, reversible and non-sequential representation of the computational process - not realizable in classical machines - the process can be identified with the mathematical form of a quantum steady state. This form of steady quantum computation would seem to have an important bearing on the notion of cognition.

    4. Theory, Modeling and Computation

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      modeling and simulation will be enhanced not only by the wealth of data available from MaRIE but by the increased computational capacity made possible by the advent of extreme...

    5. Argonne Leadership Computing Facility

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Argonne Leadership Computing Facility Annual Report 2012. Contents: Director's Message; About ALCF; Introducing Mira

    6. Solar Energy Education. Reader, Part II. Sun story. [Includes glossary

      SciTech Connect (OSTI)

      Not Available

      1981-05-01

      Magazine articles which focus on the subject of solar energy are presented. This booklet is the second of a four-part series of the Solar Energy Reader. Excerpts from the magazines include the history of solar energy, mythology and tales, and selected poetry on the sun. A glossary of energy-related terms is included. (BCS)

    7. Prevention of Harassment (Including Sexual Harassment) and Retaliation

      Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

      Policy Statement | Department of Energy Prevention of Harassment (Including Sexual Harassment) and Retaliation Policy Statement Prevention of Harassment (Including Sexual Harassment) and Retaliation Policy Statement DOE Policy for Preventing Harassment in the Workplace PDF icon Harassment Policy July 2011.pdf More Documents & Publications Policy Statement on Equal Employment Opportunity, Harassment and Retaliation Policy Statement on Equal Employment Opportunity, Harassment, and

    8. Microfluidic devices and methods including porous polymer monoliths

      DOE Patents [OSTI]

      Hatch, Anson V.; Sommer, Gregory j.; Singh, Anup K.; Wang, Ying-Chih; Abhyankar, Vinay

      2015-12-01

      Microfluidic devices and methods including porous polymer monoliths are described. Polymerization techniques may be used to generate porous polymer monoliths having pores defined by a liquid component of a fluid mixture. The fluid mixture may contain iniferters and the resulting porous polymer monolith may include surfaces terminated with iniferter species. Capture molecules may then be grafted to the monolith pores.

    9. Microfluidic devices and methods including porous polymer monoliths

      DOE Patents [OSTI]

      Hatch, Anson V; Sommer, Gregory J; Singh, Anup K; Wang, Ying-Chih; Abhyankar, Vinay V

      2014-04-22

      Microfluidic devices and methods including porous polymer monoliths are described. Polymerization techniques may be used to generate porous polymer monoliths having pores defined by a liquid component of a fluid mixture. The fluid mixture may contain iniferters and the resulting porous polymer monolith may include surfaces terminated with iniferter species. Capture molecules may then be grafted to the monolith pores.

    10. Applied Computer Science

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Applied Computer Science Innovative co-design of applications, algorithms, and architectures in order to enable scientific simulations at extreme scale Leadership Group Leader Linn Collins Deputy Group Leader (Acting) Bryan Lally Climate modeling visualization Results from a climate simulation computed using the Model for Prediction Across Scales (MPAS) code. This visualization shows the temperature of ocean currents using a green and blue color scale. These

    11. Computational Earth Science

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Computational Earth Science We develop and apply a range of high-performance computational methods and software tools to Earth science projects in support of environmental health, cleaner energy, and national security. Contact Us Group Leader Carl Gable Deputy Group Leader Gilles Bussod Hari Viswanathan inspects a microfluidic cell used to study the extraction of hydrocarbon fuels from a complex fracture network. EES-16's Subsurface Flow

    12. Computational Physics and Methods

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Computational Physics and Methods Performing innovative simulations of physics phenomena on tomorrow's scientific computing platforms Growth and emissivity of young galaxy hosting a supermassive black hole as calculated in cosmological code ENZO and post-processed with radiative transfer code AURORA. Rayleigh-Taylor turbulence imaging: the largest turbulence simulations to date. Advanced multi-scale modeling. Turbulence datasets. Density iso-surfaces

    13. Compute Reservation Request Form

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Compute Reservation Request Form Compute Reservation Request Form Users can request a scheduled reservation of machine resources if their jobs have special needs that cannot be accommodated through the regular batch system. A reservation brings some portion of the machine to a specific user or project for an agreed upon duration. Typically this is used for interactive debugging at scale or real time processing linked to some experiment or event. It is not intended to be used to guarantee fast

    14. New TRACC Cluster Computer

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      TRACC Cluster Computer With the addition of a new cluster called Zephyr that was made operational in September of this year (2012), TRACC now offers two clusters to choose from: Zephyr and our original cluster that has now been named Phoenix. Zephyr was acquired from Atipa technologies, and it is a 92-node system with each node having two AMD 16 core, 2.3 GHz, 32 GB processors. See also Computing Resources.

    15. Advanced Simulation and Computing

      National Nuclear Security Administration (NNSA)

      NA-ASC-117R-09-Vol.1-Rev.0 Advanced Simulation and Computing PROGRAM PLAN FY09 October 2008 ASC Focal Point Robert Meisner, Director DOE/NNSA NA-121.2 202-586-0908 Program Plan Focal Point for NA-121.2 Njema Frazier DOE/NNSA NA-121.2 202-586-5789 A Publication of the Office of Advanced Simulation & Computing, NNSA Defense Programs. Contents: Executive Summary; I. Introduction

    16. Computing | Department of Energy

      Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

      Computing Fun fact: Most systems require air conditioning or chilled water to cool super powerful supercomputers, but the Olympus supercomputer at Pacific Northwest National Laboratory is cooled by the location's 65 degree groundwater. Traditional cooling systems could cost up to $61,000 in electricity each year, but this more efficient setup uses 70 percent less energy. | Photo courtesy of PNNL.

    17. Articles which include chevron film cooling holes, and related processes

      DOE Patents [OSTI]

      Bunker, Ronald Scott; Lacy, Benjamin Paul

      2014-12-09

      An article is described, including an inner surface which can be exposed to a first fluid; an inlet; and an outer surface spaced from the inner surface, which can be exposed to a hotter second fluid. The article further includes at least one row or other pattern of passage holes. Each passage hole includes an inlet bore extending through the substrate from the inlet at the inner surface to a passage hole-exit proximate to the outer surface, with the inlet bore terminating in a chevron outlet adjacent the hole-exit. The chevron outlet includes a pair of wing troughs having a common surface region between them. The common surface region includes a valley which is adjacent the hole-exit; and a plateau adjacent the valley. The article can be an airfoil. Related methods for preparing the passage holes are also described.

    18. Paging memory from random access memory to backing storage in a parallel computer

      DOE Patents [OSTI]

      Archer, Charles J; Blocksome, Michael A; Inglett, Todd A; Ratterman, Joseph D; Smith, Brian E

      2013-05-21

      Paging memory from random access memory (`RAM`) to backing storage in a parallel computer that includes a plurality of compute nodes, including: executing a data processing application on a virtual machine operating system in a virtual machine on a first compute node; providing, by a second compute node, backing storage for the contents of RAM on the first compute node; and swapping, by the virtual machine operating system in the virtual machine on the first compute node, a page of memory from RAM on the first compute node to the backing storage on the second compute node.
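
      The swap step in this claim reduces to a small exchange between two nodes. Below is a minimal sketch under the assumption that dicts stand in for local RAM and for the backing store served by the second compute node; none of the names come from the patent.

```python
def swap_out(page_id, ram: dict, backing_store: dict):
    """VM operating system on the first node evicts a page to backing
    storage provided by the second compute node."""
    backing_store[page_id] = ram.pop(page_id)

def swap_in(page_id, ram: dict, backing_store: dict):
    """Page fault: pull the page back from the second node."""
    ram[page_id] = backing_store.pop(page_id)

ram, remote = {0: b"hot", 1: b"cold"}, {}
swap_out(1, ram, remote)   # page 1 now lives on the backing node
swap_in(1, ram, remote)    # a fault brings it home
print(ram, remote)         # {0: b'hot', 1: b'cold'} {}
```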

    19. Can Cloud Computing Address the Scientific Computing Requirements for DOE

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Researchers? Well, Yes, No and Maybe January 30, 2012 Jon Bashor, Jbashor@lbl.gov, +1 510-486-5849 Magellan at NERSC After a two-year study of the feasibility of cloud computing systems for meeting the ever-increasing computational needs of scientists,

    20. Computing and Computational Sciences Directorate - Joint Institute for

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Computational Sciences Joint Institute for Computational Sciences To help realize the full potential of new-generation computers for advancing scientific discovery, the University of Tennessee (UT) and Oak Ridge National Laboratory (ORNL) have created the Joint Institute for Computational Sciences (JICS). JICS combines the experience and expertise in theoretical and computational science and engineering, computer science, and mathematics in these two institutions and focuses these skills on

    1. Computing and Computational Sciences Directorate - National Center for

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Computational Sciences Home National Center for Computational Sciences The National Center for Computational Sciences (NCCS), formed in 1992, is home to two of Oak Ridge National Laboratory's (ORNL's) high-performance computing projects-the Oak Ridge Leadership Computing Facility (OLCF) and the National Climate-Computing Research Center (NCRC). The OLCF (www.olcf.ornl.gov) was established at ORNL in 2004 with the mission of standing up a supercomputer 100 times more powerful than the leading

    2. Turbomachine injection nozzle including a coolant delivery system

      DOE Patents [OSTI]

      Zuo, Baifang (Simpsonville, SC)

      2012-02-14

      An injection nozzle for a turbomachine includes a main body having a first end portion that extends to a second end portion defining an exterior wall having an outer surface. A plurality of fluid delivery tubes extend through the main body. Each of the plurality of fluid delivery tubes includes a first fluid inlet for receiving a first fluid, a second fluid inlet for receiving a second fluid and an outlet. The injection nozzle further includes a coolant delivery system arranged within the main body. The coolant delivery system guides a coolant along at least one of a portion of the exterior wall and around the plurality of fluid delivery tubes.

    3. in High Performance Computing Computer System, Cluster, and Networking...

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      iSSH v. Auditd: Intrusion Detection in High Performance Computing Computer System, Cluster, and Networking Summer Institute David Karns, New Mexico State University Katy Protin,...

    4. Applications in Data-Intensive Computing

      SciTech Connect (OSTI)

      Shah, Anuj R.; Adkins, Joshua N.; Baxter, Douglas J.; Cannon, William R.; Chavarra-Miranda, Daniel; Choudhury, Sutanay; Gorton, Ian; Gracio, Deborah K.; Halter, Todd D.; Jaitly, Navdeep; Johnson, John R.; Kouzes, Richard T.; Macduff, Matt C.; Marquez, Andres; Monroe, Matthew E.; Oehmen, Christopher S.; Pike, William A.; Scherrer, Chad; Villa, Oreste; Webb-Robertson, Bobbie-Jo M.; Whitney, Paul D.; Zuljevic, Nino

      2010-04-01

      This book chapter, to be published in Advances in Computers, Volume 78 (2010), describes applications of data-intensive computing (DIC). This is an invited chapter resulting from a previous publication on DIC. This work summarizes efforts coming out of PNNL's Data Intensive Computing Initiative. Advances in technology have empowered individuals with the ability to generate digital content with mouse clicks and voice commands. Digital pictures, emails, text messages, home videos, audio, and webpages are common examples of digital content that are generated on a regular basis. Data-intensive computing facilitates human understanding of complex problems. Data-intensive applications provide timely and meaningful analytical results in response to exponentially growing data complexity and associated analysis requirements through the development of new classes of software, algorithms, and hardware.

    5. CAD-centric Computation Management System for a Virtual TBM

      SciTech Connect (OSTI)

      Ramakanth Munipalli; K.Y. Szema; P.Y. Huang; C.M. Rowell; A. Ying; M. Abdou

      2011-05-03

      HyPerComp Inc., in research collaboration with TEXCEL, has set out to build a Virtual Test Blanket Module (VTBM) computational system to address the need in contemporary fusion research for simulating the integrated behavior of the blanket, divertor and plasma facing components in a fusion environment. Physical phenomena to be considered in a VTBM will include fluid flow, heat transfer, mass transfer, neutronics, structural mechanics and electromagnetics. We seek to integrate well-established (third-party) simulation software in the various disciplines mentioned above. The integrated modeling process will enable user groups to interoperate using a common modeling platform at various stages of the analysis. Since CAD is at the core of the simulation (as opposed to computational meshes, which are different for each problem), VTBM will have a well-developed CAD interface governing CAD model editing, cleanup, parameter extraction, model deformation (based on simulation), and CAD-based data interpolation. In Phase I, we built the CAD hub of the proposed VTBM and demonstrated its use in modeling a liquid breeder blanket module with coupled MHD and structural mechanics using HIMAG and ANSYS. A complete graphical user interface of the VTBM was created, which will form the foundation of any future development. Conservative data interpolation via CAD (as opposed to mesh-based transfer) and the regeneration of CAD models based upon computed deflections are among the other highlights of Phase-I activity.

    6. Computational mechanics research and support for aerodynamics and hydraulics at TFHRC. Quarterly report January through March 2011. Year 1 Quarter 2 progress report.

      SciTech Connect (OSTI)

      Lottes, S. A.; Kulak, R. F.; Bojanowski, C.

      2011-05-19

      This project was established with a new interagency agreement between the Department of Energy and the Department of Transportation to provide collaborative research, development, and benchmarking of advanced three-dimensional computational mechanics analysis methods to the aerodynamics and hydraulics laboratories at the Turner-Fairbank Highway Research Center for a period of five years, beginning in October 2010. The analysis methods employ well-benchmarked and supported commercial computational mechanics software. Computational mechanics encompasses the areas of Computational Fluid Dynamics (CFD), Computational Wind Engineering (CWE), Computational Structural Mechanics (CSM), and Computational Multiphysics Mechanics (CMM) applied in Fluid-Structure Interaction (FSI) problems. The major areas of focus of the project are wind and water loads on bridges - superstructure, deck, cables, and substructure (including soil), primarily during storms and flood events - and the risks that these loads pose to structural failure. For flood events at bridges, another major focus of the work is assessment of the risk to bridges caused by scour of stream and riverbed material away from the foundations of a bridge. Other areas of current research include modeling of flow through culverts to assess them for fish passage, modeling of the salt spray transport into bridge girders to address suitability of using weathering steel in bridges, vehicle stability under high wind loading, and the use of electromagnetic shock absorbers to improve vehicle stability under high wind conditions. This quarterly report documents technical progress on the project tasks for the period of January through March 2011.

    7. What To Include In The Whistleblower Complaint? | National Nuclear...

      National Nuclear Security Administration (NNSA)

      that you have included in your complaint are true and correct to the best of your knowledge and belief; and An affirmation, as described in Sec. 708.13 of this subpart, that...

    8. Demonstration of a 50% Thermal Efficient Diesel Engine - Including...

      Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

      - Including HTCD Program Overview Presentation given at DEER 2006, August 20-24, 2006, Detroit, Michigan. Sponsored by the U.S. DOE's EERE FreedomCar and Fuel Partnership and 21st...

    9. Including Retro-Commissioning in Federal Energy Savings Performance...

      Energy Savers [EERE]

      the cost of the survey. Developing a detailed scope of work and a fixed price for this work is important to eliminate risk to the Agency and the ESCo. Including a detailed scope...

    10. Natural Gas Deliveries to Commercial Consumers (Including Vehicle...

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Mexico (Million Cubic Feet) Natural Gas Deliveries to Commercial Consumers (Including Vehicle Fuel through 1996) in New Mexico (Million Cubic Feet) Year Jan Feb Mar Apr May Jun Jul...

    11. T-603: Mac OS X Includes Some Invalid Comodo Certificates

      Broader source: Energy.gov [DOE]

      The operating system includes some invalid certificates. The vulnerability is due to the invalid certificates and not the operating system itself. Other browsers, applications, and operating systems are affected.

    12. Scheduling applications for execution on a plurality of compute nodes of a parallel computer to manage temperature of the nodes during execution

      DOE Patents [OSTI]

      Archer, Charles J; Blocksome, Michael A; Peters, Amanda E; Ratterman, Joseph D; Smith, Brian E

      2012-10-16

      Methods, apparatus, and products are disclosed for scheduling applications for execution on a plurality of compute nodes of a parallel computer to manage temperature of the plurality of compute nodes during execution that include: identifying one or more applications for execution on the plurality of compute nodes; creating a plurality of physically discontiguous node partitions in dependence upon temperature characteristics for the compute nodes and a physical topology for the compute nodes, each discontiguous node partition specifying a collection of physically adjacent compute nodes; and assigning, for each application, that application to one or more of the discontiguous node partitions for execution on the compute nodes specified by the assigned discontiguous node partitions.
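
      A greedy rendering of the partitioning step gives the flavor of the claim. This sketch grows groups of physically adjacent nodes around the coolest seeds; the patent's actual selection criteria are more involved, and all names here are invented for illustration.

```python
def cool_partitions(nodes, temps, neighbors, size):
    """Build fixed-size partitions of physically adjacent compute nodes,
    seeding each partition at the coolest unused node; distinct partitions
    need not be contiguous with one another."""
    partitions, used = [], set()
    for seed in sorted(nodes, key=temps.get):      # coolest seeds first
        if seed in used:
            continue
        group = [seed]
        for nbr in neighbors[seed]:                # grow over physical links
            if nbr not in used and nbr not in group and len(group) < size:
                group.append(nbr)
        if len(group) == size:
            used.update(group)
            partitions.append(group)
    return partitions

temps = {0: 40, 1: 55, 2: 42, 3: 41, 4: 70, 5: 43}
neighbors = {0: [1, 2], 1: [0, 4], 2: [0, 3], 3: [2, 5], 4: [1], 5: [3]}
print(cool_partitions(range(6), temps, neighbors, 2))   # [[0, 1], [3, 2]]
```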

    13. Solar Energy Education. Reader, Part II. Sun story. [Includes glossary]

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      (Technical Report) | SciTech Connect Reader, Part II. Sun story. [Includes glossary] Citation Details In-Document Search Title: Solar Energy Education. Reader, Part II. Sun story. [Includes glossary]

    14. Solar Energy Education. Renewable energy: a background text. [Includes

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      glossary] (Technical Report) | SciTech Connect energy: a background text. [Includes glossary] Citation Details In-Document Search Title: Solar Energy Education. Renewable energy: a background text. [Includes glossary]

    15. Numerical simulations for low energy nuclear reactions including direct

      Office of Scientific and Technical Information (OSTI)

      channels to validate statistical models (Conference) | SciTech Connect Numerical simulations for low energy nuclear reactions including direct channels to validate statistical models Citation Details In-Document Search Title: Numerical simulations for low energy nuclear reactions including direct channels to validate statistical models Authors: Kawano, Toshihiko (Los Alamos National Laboratory) Publication Date: 2014-01-08 OSTI

    16. Hybrid powertrain system including smooth shifting automated transmission

      DOE Patents [OSTI]

      Beaty, Kevin D.; Nellums, Richard A.

      2006-10-24

      A powertrain system is provided that includes a prime mover and a change-gear transmission having an input, at least two gear ratios, and an output. The powertrain system also includes a power shunt configured to route power applied to the transmission by one of the input and the output to the other one of the input and the output. A transmission system and a method for facilitating shifting of a transmission system are also provided.

    17. Annual Technology Baseline (Including Supporting Data); NREL (National

      Office of Scientific and Technical Information (OSTI)

      Consistent cost and performance data for various electricity generation technologies can be difficult to find and may change frequently for certain ...

    19. Comparison of Joint Modeling Approaches Including Eulerian Sliding Interfaces

      Office of Scientific and Technical Information (OSTI)

      Accurate representation of discontinuities such as joints and faults is a key ingredient for high-fidelity modeling of shock propagation in geologic media. The following study was done to improve the treatment of discontinuities (joints) in the Eulerian ...

    20. Microfluidic devices and methods including porous polymer monoliths

      Office of Scientific and Technical Information (OSTI)

    1. Demonstration of a 50% Thermal Efficient Diesel Engine - Including HTCD Program Overview

      Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

      Presentation given at DEER 2006, August 20-24, 2006, Detroit, Michigan. Sponsored by the U.S. DOE's EERE FreedomCar and Fuel Partnership and 21st Century Truck Programs.

    2. Trends and challenges when including microstructure in materials modeling: Examples of problems studied at Sandia National Laboratories

      Office of Scientific and Technical Information (OSTI)

      Dingreville, Remi Philippe Michel

      Abstract not provided.

    3. Limited Personal Use of Government Office Equipment including Information Technology

      Broader source: Directives, Delegations, and Requirements [Office of Management (MA)]

      2005-01-07

      The Order establishes requirements and assigns responsibilities for employees' limited personal use of Government resources (office equipment and other resources, including information technology) within DOE, including NNSA. The Order is required to provide guidance on appropriate and inappropriate uses of Government resources. It was certified on 04/23/2009 as accurate and as continuing to be relevant and appropriate for use by the Department. No cancellation.

    4. DOE Revises its NEPA Regulations, Including Categorical Exclusions

      Office of Environmental Management (EM)

      September 30, 2011

      On September 27, 2011, the Department of Energy (DOE) approved revisions to its National Environmental Policy Act (NEPA) regulations, and on September 28th submitted the revisions to the Federal Register. The final regulations, which become effective 30 days after publication in the Federal Register, are ...

    5. The implications of spatial locality on scientific computing benchmark selection and analysis

      Office of Scientific and Technical Information (OSTI)

      Kogge, Peter [1]; Murphy, Richard C. [1]; Rodrigues, Arun F. [1]; Underwood, Keith Douglas ([1] University of Notre Dame, Notre Dame, IN)

      2005-08-01

      No abstract prepared.

    6. Information Science, Computing, Applied Math

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      National security depends on science and technology. The United States relies on Los Alamos National Laboratory for the best of both. No place on Earth pursues a broader array of world-class scientific endeavors. Related capabilities: Computer, Computational, and Statistical Sciences (CCS); High Performance Computing (HPC); Extreme Scale Computing, Co-design.

    7. Comparison of International Energy Intensities across the G7 and other parts of Europe, including Ukraine

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Elizabeth Sendich, Independent Statistics & Analysis, U.S. Energy Information Administration, Washington, DC 20585

      November 2014

      This working paper is released to encourage discussion and critical comment. The analysis and conclusions expressed here are those of the authors and not necessarily those of the U.S. Energy Information Administration.

    8. Method and computer program product for maintenance and modernization backlogging

      DOE Patents [OSTI]

      Mattimore, Bernard G; Reynolds, Paul E; Farrell, Jill M

      2013-02-19

      According to one embodiment, a computer program product for determining future facility conditions includes a computer readable medium having computer readable program code stored therein. The program code calculates a time period specific maintenance cost, a time period specific modernization factor, and a time period specific backlog factor; future facility conditions equal the sum of these three quantities. In another embodiment, a computer-implemented method performs the same three calculations and sums them in the same way. Other embodiments are also presented.
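
      The claimed relationship is a simple sum, which a one-function sketch (hypothetical signature, not the patented code) makes concrete:

          def future_facility_conditions(maintenance_cost: float,
                                         modernization_factor: float,
                                         backlog_factor: float) -> float:
              # Per the abstract, future facility conditions equal the sum of
              # the three time-period-specific terms.
              return maintenance_cost + modernization_factor + backlog_factor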

    9. Prasanna Balaprakash | Argonne Leadership Computing Facility

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Prasanna Balaprakash, Assistant Computer Scientist, Argonne National Laboratory, 9700 South Cass Avenue, Building 240, Rm. 1135, Argonne, IL 60439; 630-252-1109; pbalapra@mcs.anl.gov. Specialties: optimization under uncertainty; artificial intelligence algorithms for large-scale optimization; automated algorithm tuning; modeling and prediction/statistical machine learning; Monte Carlo simulation; and statistical analysis.

    10. Super recycled water: quenching computers

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      New facility and methods support conserving water and creating recycled products. Using reverse ...

    11. Computer simulation | Open Energy Information

      Open Energy Info (EERE)

      Web site: Computer simulation. Author: Wikipedia. Published: Wikipedia, 2013. DOI not provided ...

    12. NREL: Computational Science Home Page

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Capabilities include high-performance computing, computational science, applied mathematics, scientific data management, visualization, and informatics. NREL is home to the largest high-performance ...

    13. SCC: The Strategic Computing Complex

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      The Strategic Computing Complex (SCC) at the Los Alamos National Laboratory includes a computer room, which is an open room about three-fourths the size of a football field ...

    14. Human-computer interface

      DOE Patents [OSTI]

      Anderson, Thomas G.

      2004-12-21

      The present invention provides a method of human-computer interfacing. Force feedback allows intuitive navigation and control near a boundary between regions in a computer-represented space. For example, the method allows a user to interact with a virtual craft, then push through the windshield of the craft to interact with the virtual world surrounding the craft. As another example, the method allows a user to feel transitions between different control domains of a computer representation of a space. The method can provide for force feedback that increases as a user's locus of interaction moves near a boundary, then perceptibly changes (e.g., abruptly drops or changes direction) when the boundary is traversed.
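
      As a rough illustration of the described behavior (the record discloses no code, and this particular force law is an assumption), feedback can ramp up as the interaction point nears a boundary and then drop abruptly once the boundary is crossed:

          # Illustrative force law only; not the patent's implementation.
          def boundary_force(distance: float, max_force: float = 1.0,
                             ramp: float = 0.05) -> float:
              """Resistive force on the user's locus of interaction.

              distance > 0: approaching the boundary (force ramps up);
              distance <= 0: boundary traversed (force drops abruptly).
              """
              if distance <= 0.0:
                  return 0.0                    # perceptible change on traversal
              return max_force * max(0.0, 1.0 - distance / ramp)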

    15. A model for heterogeneous materials including phase transformations

      SciTech Connect (OSTI)

      Addessio, F.L.; Clements, B.E.; Williams, T.O.

      2005-04-15

      A model is developed for particulate composites, which includes phase transformations in one or all of the constituents. The model is an extension of the method of cells formalism. Representative simulations for a single-phase, brittle particulate (SiC) embedded in a ductile material (Ti), which undergoes a solid-solid phase transformation, are provided. Also, simulations for a tungsten heavy alloy (WHA) are included. In the WHA analyses a particulate composite, composed of tungsten particles embedded in a tungsten-iron-nickel alloy matrix, is modeled. A solid-liquid phase transformation of the matrix material is included in the WHA numerical calculations. The example problems also demonstrate two approaches for generating free energies for the material constituents. Simulations for volumetric compression, uniaxial strain, biaxial strain, and pure shear are used to demonstrate the versatility of the model.

    16. Tunable cavity resonator including a plurality of MEMS beams

      DOE Patents [OSTI]

      Peroulis, Dimitrios; Fruehling, Adam; Small, Joshua Azariah; Liu, Xiaoguang; Irshad, Wasim; Arif, Muhammad Shoaib

      2015-10-20

      A tunable cavity resonator includes a substrate, a cap structure, and a tuning assembly. The cap structure extends from the substrate, and at least one of the substrate and the cap structure defines a resonator cavity. The tuning assembly is positioned at least partially within the resonator cavity. The tuning assembly includes a plurality of fixed-fixed MEMS beams configured for controllable movement relative to the substrate between an activated position and a deactivated position in order to tune a resonant frequency of the tunable cavity resonator.

    17. Thin film solar cell including a spatially modulated intrinsic layer

      DOE Patents [OSTI]

      Guha, Subhendu (Troy, MI); Yang, Chi-Chung (Troy, MI); Ovshinsky, Stanford R. (Bloomfield Hills, MI)

      1989-03-28

      One or more thin film solar cells in which the intrinsic layer of substantially amorphous semiconductor alloy material thereof includes at least a first band gap portion and a narrower band gap portion. The band gap of the intrinsic layer is spatially graded through a portion of the bulk thickness, said graded portion including a region removed from the intrinsic layer-dopant layer interfaces. The band gap of the intrinsic layer is always less than the band gap of the doped layers. The gradation of the intrinsic layer is effected such that the open circuit voltage and/or the fill factor of the one or plural solar cell structure is enhanced.

    18. DOE Considers Natural Gas Utility Service Options: Proposal Includes 30-mile Natural Gas Pipeline from Pasco to Hanford

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      January 23, 2012

      Media contact: Cameron Hardy, DOE, (509) 376-5365, Cameron.Hardy@rl.doe.gov

      RICHLAND, WASH. - The U.S. Department of Energy (DOE) is considering ...

    19. Solar Energy Education. Renewable energy: a background text. [Includes glossary]

      SciTech Connect (OSTI)

      Not Available

      1985-01-01

      Some of the most common forms of renewable energy are presented in this textbook for students. The topics include solar energy, wind power, hydroelectric power, biomass, ocean thermal energy, and tidal and geothermal energy. The main emphasis of the text is on the sun and the solar energy that it yields. Discussions of the sun's composition and the relationship between the earth, sun, and atmosphere are provided. Insolation, active and passive solar systems, and solar collectors are the subtopics included under solar energy. (BCS)

    20. Metal vapor laser including hot electrodes and integral wick

      DOE Patents [OSTI]

      Ault, E.R.; Alger, T.W.

      1995-03-07

      A metal vapor laser, specifically one utilizing copper vapor, is disclosed herein. This laser utilizes a plasma tube assembly including a thermally insulated plasma tube containing a specific metal, e.g., copper, and a buffer gas therein. The laser also utilizes means including hot electrodes located at opposite ends of the plasma tube for electrically exciting the metal vapor and heating its interior to a sufficiently high temperature to cause the metal contained therein to vaporize and for subjecting the vapor to an electrical discharge excitation in order to lase. The laser also utilizes external wicking arrangements, that is, wicking arrangements located outside the plasma tube. 5 figs.

    1. Methods of producing adsorption media including a metal oxide

      DOE Patents [OSTI]

      Mann, Nicholas R; Tranter, Troy J

      2014-03-04

      Methods of producing a metal oxide are disclosed. The method comprises dissolving a metal salt in a reaction solvent to form a metal salt/reaction solvent solution. The metal salt is converted to a metal oxide and a caustic solution is added to the metal oxide/reaction solvent solution to adjust the pH of the metal oxide/reaction solvent solution to less than approximately 7.0. The metal oxide is precipitated and recovered. A method of producing adsorption media including the metal oxide is also disclosed, as is a precursor of an active component including particles of a metal oxide.

    2. Computers for artificial intelligence a technology assessment and forecast

      SciTech Connect (OSTI)

      Miller, R.K.

      1986-01-01

      This study reviews the development and current state of the art in computers for artificial intelligence, including LISP machines, AI workstations, professional and engineering workstations, minicomputers, mainframes, and supercomputers. Major computer systems for AI applications are reviewed. The use of personal computers for expert system development is discussed, and AI software for the IBM PC, Texas Instruments Professional Computer, and Apple Macintosh is presented. Current research aimed at developing a new computer for artificial intelligence is described, and future technological developments are discussed.

    3. Real time analysis under EDS

      SciTech Connect (OSTI)

      Schneberk, D.

      1985-07-01

      This paper describes the analysis component of the Enrichment Diagnostic System (EDS) developed for the Atomic Vapor Laser Isotope Separation Program (AVLIS) at Lawrence Livermore National Laboratory (LLNL). Four different types of analysis are performed on data acquired through EDS: (1) absorption spectroscopy on laser-generated spectral lines, (2) mass spectrometer analysis, (3) general purpose waveform analysis, and (4) separation performance calculations. The information produced from this data includes: measures of particle density and velocity, partial pressures of residual gases, and overall measures of isotope enrichment. The analysis component supports a variety of real-time modeling tasks, a means for broadcasting data to other nodes, and a great degree of flexibility for tailoring computations to the exact needs of the process. A particular data base structure and program flow is common to all types of analysis. Key elements of the analysis component are: (1) a fast access data base which can configure all types of analysis, (2) a selected set of analysis routines, (3) a general purpose data manipulation and graphics package for the results of real time analysis. Each of these components are described with an emphasis upon how each contributes to overall system capability. 3 figs.

    4. Computation of Wave Loads under Multidirectional Sea States for Floating Offshore Wind Turbines: Preprint

      SciTech Connect (OSTI)

      Duarte, T.; Gueydon, S.; Jonkman, J.; Sarmento, A.

      2014-03-01

      This paper focuses on the analysis of a floating wind turbine under multidirectional wave loading. Special attention is given to the different methods used to synthesize the multidirectional sea state. This analysis includes the double-sum and single-sum methods, as well as an equal-energy discretization of the directional spectrum. These three methods are compared in detail, including the ergodicity of the solution obtained. From the analysis, the equal-energy method proved to be the most computationally efficient while still retaining the ergodicity of the solution. This method was chosen to be implemented in the numerical code FAST. Preliminary results on the influence of these wave loads on a floating wind turbine showed significant additional roll and sway motion of the platform.
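
      The abstract does not spell out the equal-energy method; a minimal sketch of one common reading (invert the cumulative directional spreading function so every discrete heading carries the same energy; the cos^2 spreading function here is only an example, not the paper's choice) is:

          import numpy as np

          def equal_energy_directions(spread, n, lo=-np.pi/2, hi=np.pi/2, grid=10000):
              """Pick n wave headings so each carries equal spectral energy.

              spread: directional spreading function D(theta), e.g. cos^2.
              Returns the midpoint heading of each equal-energy sector.
              """
              theta = np.linspace(lo, hi, grid)
              cdf = np.cumsum(spread(theta))
              cdf /= cdf[-1]                        # normalize to [0, 1]
              targets = (np.arange(n) + 0.5) / n    # sector midpoints
              return np.interp(targets, cdf, theta)

          headings = equal_energy_directions(lambda t: np.cos(t) ** 2, 8)
          print(np.degrees(headings))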

    5. cDNA encoding a polypeptide including a hevein sequence

      DOE Patents [OSTI]

      Raikhel, N.V.; Broekaert, W.F.; Namhai Chua; Kush, A.

      1993-02-16

      A cDNA clone (HEV1) encoding hevein was isolated via polymerase chain reaction (PCR) using mixed oligonucleotides corresponding to two regions of hevein as primers and a Hevea brasiliensis latex cDNA library as a template. HEV1 is 1,018 nucleotides long and includes an open reading frame of 204 amino acids.

    6. What To Include In The Whistleblower Complaint? | National Nuclear Security Administration

      National Nuclear Security Administration (NNSA)

    7. Accelerating Battery Design Using Computer-Aided Engineering Tools: Preprint

      SciTech Connect (OSTI)

      Pesaran, A.; Heon, G. H.; Smith, K.

      2011-01-01

      Computer-aided engineering (CAE) is a proven pathway, especially in the automotive industry, to improve performance by resolving the relevant physics in complex systems, shortening the product development design cycle, thus reducing cost, and providing an efficient way to evaluate parameters for robust designs. Academic models include the relevant physics details, but neglect engineering complexities. Industry models include the relevant macroscopic geometry and system conditions, but simplify the fundamental physics too much. Most of the CAE battery tools for in-house use are custom model codes and require expert users. There is a need to make these battery modeling and design tools more accessible to end users such as battery developers, pack integrators, and vehicle makers. Developing integrated and physics-based CAE battery tools can reduce the design, build, test, break, re-design, re-build, and re-test cycle and help lower costs. NREL has been involved in developing various models to predict the thermal and electrochemical performance of large-format cells and has used commercial three-dimensional finite-element analysis and computational fluid dynamics tools to study battery pack thermal issues. These NREL cell and pack design tools can be integrated to help support the automotive industry and to accelerate battery design.

    8. MHD computations for stellarators

      SciTech Connect (OSTI)

      Johnson, J.L.

      1985-12-01

      Considerable progress has been made in the development of computational techniques for studying the magnetohydrodynamic equilibrium and stability properties of three-dimensional configurations. Several different approaches have evolved to the point where comparison of results determined with different techniques shows good agreement. 55 refs., 7 figs.

    9. Sandia National Laboratories: Advanced Simulation and Computing...

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Advanced Simulation and Computing (ASC), Computational Systems & Software Environment: The Computational Systems & Software Environment program builds integrated, ...

    10. Evaporative cooler including one or more rotating cooler louvers

      DOE Patents [OSTI]

      Gerlach, David W

      2015-02-03

      An evaporative cooler may include an evaporative cooler housing with a duct extending therethrough, a plurality of cooler louvers with respective porous evaporative cooler pads, and a working fluid source conduit. The cooler louvers are arranged within the duct and rotatably connected to the cooler housing along respective louver axes. The source conduit provides an evaporative cooler working fluid to the cooler pads during at least one mode of operation.

    11. [Article 1 of 7: Motivates and Includes the Consumer]

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      [Article 2 of 7: Accommodates All Generation and Storage Options] Research on the Characteristics of a Modern Grid by the NETL Modern Grid Strategy Team. Last month we presented the first Principal Characteristic of a Modern Grid, "Motivates and Includes the Consumer". This month we present a second characteristic, "Accommodates All Generation and Storage Options". This characteristic will fundamentally transition today's grid from a centralized model for generation to one that also has ...

    12. [Article 1 of 7: Motivates and Includes the Consumer]

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Series on the Seven Principal Characteristics of the Modern Grid. In October 2007, Ken Silverstein (Energy Central) wrote an editorial, "Empowering Consumers", that hit a strong, kindred chord with the DOE/National Energy Technology Laboratory (NETL) Modern Grid Strategy team. Through subsequent discussions with Ken and Bill Opalka, Editor-in-Chief, Topics Centers, we decided it would be informative to the industry if the Modern Grid ...

    13. Conversion of geothermal waste to commercial products including silica

      DOE Patents [OSTI]

      Premuzic, Eugene T. (East Moriches, NY); Lin, Mow S. (Rocky Point, NY)

      2003-01-01

      A process for the treatment of geothermal residue includes contacting the pigmented amorphous silica-containing component with a depigmenting reagent one or more times to depigment the silica and produce a mixture containing depigmented amorphous silica and depigmenting reagent containing pigment material, then separating the depigmented amorphous silica from the depigmenting reagent to yield depigmented amorphous silica. Before or after the depigmenting contacting, the geothermal residue or depigmented silica can be treated with a metal solubilizing agent to produce another mixture containing a pigmented or unpigmented amorphous silica-containing component and a solubilized metal-containing component; separating these components from each other produces an amorphous silica product substantially devoid of metals and at least partially devoid of pigment. The amorphous silica product can be neutralized and thereafter dried at a temperature from about 25 C to 300 C. The morphology of the silica product can be varied through the process conditions, including the sequence of contacting steps, the pH of the depigmenting reagent, and the neutralization and drying conditions, to tailor the amorphous silica for commercial use in products including fillers for paint, paper, rubber, and polymers, and chromatographic material.

    14. Electrolytes including fluorinated solvents for use in electrochemical cells

      DOE Patents [OSTI]

      Tikhonov, Konstantin; Yip, Ka Ki; Lin, Tzu-Yuan

      2015-07-07

      Provided are electrochemical cells and the electrolytes used to build such cells. The electrolytes include ion-supplying salts and fluorinated solvents capable of maintaining single-phase solutions with the salts at between about -30 C and about 80 C. The fluorinated solvents, such as fluorinated carbonates and fluorinated esters, are less flammable than their non-fluorinated counterparts and improve the safety characteristics of cells containing them. The amount of fluorinated solvents in the electrolyte may be between about 30% and 80% by weight, not counting the weight of the salts. Fluorinated salts, such as fluoroalkyl-substituted LiPF6 and LiBF4 salts, as well as linear and cyclic imide salts and methide salts including fluorinated alkyl groups, may be used due to their solubility in the fluorinated solvents. In some embodiments, the electrolyte may also include a flame retardant, such as a phosphazene or, more specifically, a cyclic phosphazene, and/or one or more ionic liquids.

    15. Department of Defense High Performance Computing Modernization Program

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      DoD High Performance Computing Science and Engineering Applications, Larry Davis, 25 April 2007. Overview: the DoD High Performance Computing Modernization Program (HPCMP); DoD science and engineering applications; the use of modeling and simulation for aircraft certification; HPCMP benchmarking for acquisitions; the overall acquisition process; validated vendor benchmarking results; uncertainty analysis in performance ...

    16. Locating hardware faults in a data communications network of a parallel computer

      DOE Patents [OSTI]

      Archer, Charles J.; Megerian, Mark G.; Ratterman, Joseph D.; Smith, Brian E.

      2010-01-12

      Locating hardware faults in a data communications network of a parallel computer. Such a parallel computer includes a plurality of compute nodes and a data communications network that couples the compute nodes for data communications and organizes the compute nodes as a tree. Locating hardware faults includes identifying a next compute node as a parent node and the root of a parent test tree, identifying for each child compute node of the parent node a child test tree having the child compute node as its root, running the same test suite on the parent test tree and on each child test tree, and identifying the parent compute node as having a defective link to a child compute node if the test suite fails on the parent test tree and succeeds on all the child test trees.
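
      A minimal software sketch of the claimed check (run_suite and the tree encoding are assumptions; the patent describes method steps, not code):

          # Schematic of the test logic: a parent-to-child link is suspect when
          # the combined parent tree fails but every child subtree passes alone.
          def find_defective_links(parent, children, run_suite):
              """children: list of (child_root, child_nodes) test trees.
              run_suite(nodes) -> True when the test suite passes on them."""
              parent_tree = [parent] + [n for _, nodes in children for n in nodes]
              if run_suite(parent_tree):
                  return []                    # nothing wrong at this level
              if all(run_suite(nodes) for _, nodes in children):
                  # each child subtree passes alone, so a parent link is bad
                  return [(parent, root) for root, _ in children]
              return []   # otherwise recurse into failing child trees (not shown)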

    17. Foundational Tools for Petascale Computing

      SciTech Connect (OSTI)

      Miller, Barton

      2014-05-19

      The Paradyn project has a history of developing algorithms, techniques, and software that push the cutting edge of tool technology for high-end computing systems. Under this funding, we are working on a three-year agenda to make substantial new advances in support of new and emerging Petascale systems. The overall goal for this work is to address the steady increase in complexity of these petascale systems. Our work covers two key areas: (1) The analysis, instrumentation and control of binary programs. Work in this area falls under the general framework of the Dyninst API tool kits. (2) Infrastructure for building tools and applications at extreme scale. Work in this area falls under the general framework of the MRNet scalability framework. Note that work done under this funding is closely related to work done under a contemporaneous grant, High-Performance Energy Applications and Systems, SC0004061/FG02-10ER25972, UW PRJ36WV.

    18. Extreme Scale Computing, Co-design

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Computational co-design may facilitate revolutionary designs ...

    19. Visitor Hanford Computer Access Request - Hanford Site

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    20. Providing full point-to-point communications among compute nodes of an operational group in a global combining network of a parallel computer

      DOE Patents [OSTI]

      Archer, Charles J; Faraj, Ahmad A; Inglett, Todd A; Ratterman, Joseph D

      2013-04-16

      Methods, apparatus, and products are disclosed for providing full point-to-point communications among compute nodes of an operational group in a global combining network of a parallel computer, where each compute node is connected to each adjacent compute node in the global combining network through a link. The method includes: receiving a network packet in a compute node, the network packet specifying a destination compute node; selecting, in dependence upon the destination compute node, at least one of the links for the compute node along which to forward the network packet toward the destination compute node; and forwarding the network packet along the selected link to the adjacent compute node connected through that link.
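
      The link-selection step can be illustrated with a toy tree router (a sketch under assumed data structures, not the patented implementation): forward toward the child whose subtree contains the destination, otherwise toward the parent.

          # Toy routing step for a tree-structured network (illustrative only).
          def select_link(parent, children_reach, dest):
              """children_reach: {child_id: set of node ids reachable via that
              child's link}. Returns the adjacent node to forward to."""
              for child, reachable in children_reach.items():
                  if dest in reachable:
                      return child     # destination lies in this child's subtree
              return parent            # otherwise forward toward the root

          # Example: a node with parent 0; node 5 lives under child 4.
          print(select_link(0, {3: {3}, 4: {4, 5, 6}}, dest=5))   # -> 4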

    1. Providing nearest neighbor point-to-point communications among compute nodes of an operational group in a global combining network of a parallel computer

      DOE Patents [OSTI]

      Archer, Charles J.; Faraj, Ahmad A.; Inglett, Todd A.; Ratterman, Joseph D.

      2012-10-23

      Methods, apparatus, and products are disclosed for providing nearest neighbor point-to-point communications among compute nodes of an operational group in a global combining network of a parallel computer, where each compute node is connected to each adjacent compute node in the global combining network through a link. The method includes: identifying each link in the global combining network for each compute node of the operational group; designating one of a plurality of point-to-point class routing identifiers for each link such that no compute node in the operational group is connected to two adjacent compute nodes in the operational group with links designated for the same class routing identifier; and configuring each compute node of the operational group for point-to-point communications with each adjacent compute node through the link between them, using that link's designated class routing identifier.

    2. Software and High Performance Computing

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Providing world-class high performance computing capability that enables unsurpassed solutions to complex problems of strategic national interest. Contact: Kathleen McDonald, Head of Intellectual Property, Business Development Executive, Richard P. Feynman Center for Innovation, (505) 667-5844. Software: computational physics, computer science, applied mathematics, statistics and the ...

    3. Magellan: A Cloud Computing Testbed

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Cloud computing is gaining a foothold in the business world, but can clouds meet the specialized needs of scientists? That was one of the questions NERSC's Magellan cloud computing testbed explored between 2009 and 2011. The goal of Magellan, a project funded through the U.S. Department of Energy (DOE) Office ...

    4. Data aNd Computation Reordering package using temporal and spatial hypergraphs

      Energy Science and Technology Software Center (OSTI)

      2004-08-01

      A package for experimentation with data and computation reordering algorithms. One can input various file formats representing sparse matrices, reorder data and computation through the specification of command-line parameters, and time benchmark computations that use the new data and computation ordering. The package includes existing reordering algorithms and new ones introduced by the authors based on the temporal and spatial locality hypergraph model.
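
      The package's hypergraph algorithms are not reproduced in this record; the sketch below only illustrates the underlying idea of data reordering, using a simple first-touch heuristic (an assumption, not the package's method): items are stored in the order the computation first uses them.

          # Not the package's algorithm; a minimal illustration of why
          # reordering helps: place items touched together close together.
          def first_touch_order(accesses, n_items):
              """accesses: iteration-order list of item indices (the
              'computation'). Returns a permutation by order of first use."""
              order, seen = [], set()
              for i in accesses:
                  if i not in seen:
                      seen.add(i)
                      order.append(i)
              order += [i for i in range(n_items) if i not in seen]  # untouched
              return order

          print(first_touch_order([7, 2, 7, 0, 2, 5], 8))
          # -> [7, 2, 0, 5, 1, 3, 4, 6]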

    5. High-performance computing for airborne applications

      SciTech Connect (OSTI)

      Quinn, Heather M; Manuzzato, Andrea; Fairbanks, Tom; Dallmann, Nicholas; Desgeorges, Rose

      2010-06-28

      Recently, there have been attempts to move common satellite tasks to unmanned aerial vehicles (UAVs). UAVs are significantly cheaper to buy than satellites and easier to deploy on an as-needed basis. The more benign radiation environment also allows for an aggressive adoption of state-of-the-art commercial computational devices, which increases the amount of data that can be collected. There are a number of commercial computing devices currently available that are well-suited to high-performance computing. These devices range from specialized computational devices, such as field-programmable gate arrays (FPGAs) and digital signal processors (DSPs), to traditional computing platforms, such as microprocessors. Even though the radiation environment is relatively benign, these devices could be susceptible to single-event effects. In this paper, we present radiation data for high-performance computing devices in an accelerated neutron environment. These devices include a multi-core digital signal processor, two field-programmable gate arrays, and a microprocessor. From these results, we found that all of these devices are suitable for many airplane environments without reliability problems.

    6. Including environmental concerns in management strategies for depleted uranium hexafluoride

      SciTech Connect (OSTI)

      Goldberg, M.; Avci, H.I.; Bradley, C.E.

      1995-12-31

      One of the major programs within the Office of Nuclear Energy, Science, and Technology of the US Department of Energy (DOE) is the depleted uranium hexafluoride (DUF6) management program. The program is intended to find a long-term management strategy for the DUF6 that is currently stored in approximately 46,400 cylinders at Paducah, KY; Portsmouth, OH; and Oak Ridge, TN, USA. The program has four major components: technology assessment, engineering analysis, cost analysis, and the environmental impact statement (EIS). From the beginning of the program, the DOE has incorporated the environmental considerations into the process of strategy selection. Currently, the DOE has no preferred alternative. The results of the environmental impacts assessment from the EIS, as well as the results from the other components of the program, will be factored into the strategy selection process. In addition to the DOE's current management plan, other alternatives, including continued storage, reuse, or disposal of the depleted uranium, will be considered in the EIS. The EIS is expected to be completed and issued in its final form in the fall of 1997.

    7. Composite material including nanocrystals and methods of making

      DOE Patents [OSTI]

      Bawendi, Moungi G.; Sundar, Vikram C.

      2008-02-05

      Temperature-sensing compositions can include an inorganic material, such as a semiconductor nanocrystal. The nanocrystal can be a dependable and accurate indicator of temperature. The intensity of emission of the nanocrystal varies with temperature and can be highly sensitive to surface temperature. The nanocrystals can be processed with a binder to form a matrix, which can be varied by altering the chemical nature of the surface of the nanocrystal. A nanocrystal with a compatibilizing outer layer can be incorporated into a coating formulation and retain its temperature sensitive emissive properties.

    8. Composite material including nanocrystals and methods of making

      DOE Patents [OSTI]

      Bawendi, Moungi G.; Sundar, Vikram C.

      2010-04-06

      Temperature-sensing compositions can include an inorganic material, such as a semiconductor nanocrystal. The nanocrystal can be a dependable and accurate indicator of temperature. The intensity of emission of the nanocrystal varies with temperature and can be highly sensitive to surface temperature. The nanocrystals can be processed with a binder to form a matrix, which can be varied by altering the chemical nature of the surface of the nanocrystal. A nanocrystal with a compatibilizing outer layer can be incorporated into a coating formulation and retain its temperature sensitive emissive properties.

    9. A coke oven model including thermal decomposition kinetics of tar

      SciTech Connect (OSTI)

      Munekane, Fuminori; Yamaguchi, Yukio; Tanioka, Seiichi

      1997-12-31

      A new one-dimensional coke oven model has been developed for simulating the amount and the characteristics of by-products such as tar and gas as well as coke. This model consists of both heat transfer and chemical kinetics including thermal decomposition of coal and tar. The chemical kinetics constants are obtained by estimation based on the results of experiments conducted to investigate the thermal decomposition of both coal and tar. The calculation results using the new model are in good agreement with experimental ones.
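      The record does not give the model's fitted kinetics, but the usual building block of such a scheme is first-order Arrhenius decomposition; the sketch below uses placeholder constants, not the paper's values.

          import numpy as np

          def decompose(m0, A, Ea, T, dt, steps):
              """Explicit Euler integration of dm/dt = -k(T)*m, k = A*exp(-Ea/RT)."""
              R, m, history = 8.314, m0, []
              k = A * np.exp(-Ea / (R * T))     # rate constant at temperature T
              for _ in range(steps):
                  m -= k * m * dt
                  history.append(m)
              return history

          # Placeholder constants (not the paper's values): Ea in J/mol, T in K.
          print(decompose(1.0, A=1.0e7, Ea=2.0e5, T=1100.0, dt=1.0, steps=3))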

    10. cDNA encoding a polypeptide including a hevein sequence

      DOE Patents [OSTI]

      Raikhel, Natasha V. (Okemos, MI); Broekaert, Willem F. (Dilbeek, BE); Chua, Nam-Hai (Scarsdale, NY); Kush, Anil (New York, NY)

      1993-02-16

      A cDNA clone (HEV1) encoding hevein was isolated via polymerase chain reaction (PCR) using mixed oligonucleotides corresponding to two regions of hevein as primers and a Hevea brasiliensis latex cDNA library as a template. HEV1 is 1018 nucleotides long and includes an open reading frame of 204 amino acids. The deduced amino acid sequence contains a pu ...

    11. Composite armor, armor system and vehicle including armor system

      DOE Patents [OSTI]

      Chu, Henry S.; Jones, Warren F.; Lacy, Jeffrey M.; Thinnes, Gary L.

      2013-01-01

      Composite armor panels are disclosed. Each panel comprises a plurality of functional layers comprising at least an outermost layer, an intermediate layer and a base layer. An armor system incorporating armor panels is also disclosed. Armor panels are mounted on carriages movably secured to adjacent rails of a rail system. Each panel may be moved on its associated rail and into partially overlapping relationship with another panel on an adjacent rail for protection against incoming ordnance from various directions. The rail system may be configured as at least a part of a ring, and be disposed about a hatch on a vehicle. Vehicles including an armor system are also disclosed.

    12. Computer Algebra System

      Energy Science and Technology Software Center (OSTI)

      1992-05-04

      DOE-MACSYMA (Project MAC's SYmbolic MAnipulation system) is a large computer programming system written in LISP. With DOE-MACSYMA the user can differentiate, integrate, take limits, solve systems of linear or polynomial equations, factor polynomials, expand functions in Laurent or Taylor series, solve differential equations (using direct or transform methods), compute Poisson series, plot curves, and manipulate matrices and tensors. A language similar to ALGOL-60 permits users to write their own programs for transforming symbolic expressions. Franz Lisp OPUS 38 provides the environment for the Encore, Celerity, DEC VAX11, and SUN (OPUS) versions under UNIX and the Alliant version under Concentrix. Kyoto Common Lisp (KCL) provides the environment for the SUN (KCL), Convex, and IBM PC versions under UNIX and the Data General version under AOS/VS.
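
      DOE-MACSYMA itself is not shown in this record; purely as an analogy, the classes of operations listed above look like this in a modern open-source computer algebra system:

          import sympy as sp

          x = sp.symbols('x')
          f = sp.sin(x) * sp.exp(x)

          print(sp.diff(f, x))                 # differentiate
          print(sp.integrate(f, x))            # integrate
          print(sp.limit(sp.sin(x) / x, x, 0)) # take a limit
          print(sp.series(f, x, 0, 4))         # Taylor expansion
          print(sp.solve(x**2 - 2, x))         # solve a polynomial equation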

    13. Exploratory Experimentation and Computation

      SciTech Connect (OSTI)

      Bailey, David H.; Borwein, Jonathan M.

      2010-02-25

      We believe the mathematical research community is facing a great challenge to re-evaluate the role of proof in light of recent developments. On one hand, the growing power of current computer systems, of modern mathematical computing packages, and of the growing capacity to data-mine on the Internet, has provided marvelous resources to the research mathematician. On the other hand, the enormous complexity of many modern capstone results such as the Poincare conjecture, Fermat's last theorem, and the classification of finite simple groups has raised questions as to how we can better ensure the integrity of modern mathematics. Yet as the need and prospects for inductive mathematics blossom, the requirement to ensure the role of proof is properly founded remains undiminished.

    14. GPU Computational Screening

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      GPU Computational Screening of Carbon Capture Materials. J. Kim, A. Koniges, R. Martin, and M. Haranczyk (Lawrence Berkeley National Laboratory); J. Swisher and B. Smit (Department of Chemical Engineering, University of California, Berkeley). Abstract: In order to reduce the current costs associated with carbon capture technologies, novel materials such as zeolites and metal-organic frameworks that are based on ...

    15. Cloud Computing Services

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    16. High Performance Computing

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    17. Argonne Leadership Computing Facility

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Software optimized on Mira advances the design of mini-proteins for medicines and materials: scientists at the University of Washington are using Mira to virtually design unique, artificial peptides, or short proteins. Also featured: 10 science highlights celebrating 10 years of the Argonne Leadership Computing Facility.

    18. Applied & Computational Math

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    19. From Federal Computer Week:

      National Nuclear Security Administration (NNSA)

      Energy agency launches performance-based pay system, by Richard W. Walker, published March 27, 2008. The Energy Department's National Nuclear Security Administration has launched a new performance-based pay system involving about 2,000 of its 2,500 employees. NNSA officials described the effort as a pilot project that will test the feasibility of the new system, which collapses the traditional 15 General Schedule pay bands into broader pay bands. The new structure ...

    20. Computed Tomography Status

      DOE R&D Accomplishments [OSTI]

      Hansche, B. D.

      1983-01-01

      Computed tomography (CT) is a relatively new radiographic technique which has become widely used in the medical field, where it is better known as computerized axial tomographic (CAT) scanning. This technique is also being adopted by the industrial radiographic community, although the greater range of densities, the variation in sample sizes, and the possible requirement for finer resolution make it difficult to duplicate the excellent results that the medical scanners have achieved.

    1. Collective network for computer structures

      DOE Patents [OSTI]

      Blumrich, Matthias A. (Ridgefield, CT); Coteus, Paul W. (Yorktown Heights, NY); Chen, Dong (Croton On Hudson, NY); Gara, Alan (Mount Kisco, NY); Giampapa, Mark E. (Irvington, NY); Heidelberger, Philip (Cortlandt Manor, NY); Hoenicke, Dirk (Ossining, NY); Takken, Todd E. (Brewster, NY); Steinmacher-Burow, Burkhard D. (Wernau, DE); Vranas, Pavlos M. (Bedford Hills, NY)

      2011-08-16

      A system and method for enabling high-speed, low-latency global collective communications among interconnected processing nodes. The global collective network optimally enables collective reduction operations to be performed during parallel algorithm operations executing in a computer structure having a plurality of the interconnected processing nodes. Router devices are included that interconnect the nodes of the network via links to facilitate performance of low-latency global processing operations at nodes of the virtual network and class structures. The global collective network may be configured to provide global barrier and interrupt functionality in asynchronous or synchronized manner. When implemented in a massively-parallel supercomputing structure, the global collective network is physically and logically partitionable according to needs of a processing algorithm.
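
      The patent describes dedicated router hardware; as a software analogy only, a collective reduction over a tree of nodes can be sketched as:

          # Software analogue of a tree-based collective reduction; the
          # patented design performs this in router hardware, not Python.
          def tree_reduce(node, children, values, op):
              """Combine a node's value with the reduced values of its subtrees."""
              acc = values[node]
              for child in children.get(node, []):
                  acc = op(acc, tree_reduce(child, children, values, op))
              return acc

          children = {0: [1, 2], 1: [3, 4]}          # node 0 is the root
          values = {0: 1, 1: 2, 2: 3, 3: 4, 4: 5}
          print(tree_reduce(0, children, values, lambda a, b: a + b))   # -> 15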

    2. Collective network for computer structures

      DOE Patents [OSTI]

      Blumrich, Matthias A; Coteus, Paul W; Chen, Dong; Gara, Alan; Giampapa, Mark E; Heidelberger, Philip; Hoenicke, Dirk; Takken, Todd E; Steinmacher-Burow, Burkhard D; Vranas, Pavlos M

      2014-01-07

      A system and method for enabling high-speed, low-latency global collective communications among interconnected processing nodes. The global collective network optimally enables collective reduction operations to be performed during parallel algorithm operations executing in a computer structure having a plurality of the interconnected processing nodes. Router devices are included that interconnect the nodes of the network via links to facilitate performance of low-latency global processing operations at nodes of the virtual network. The global collective network may be configured to provide global barrier and interrupt functionality in asynchronous or synchronized manner. When implemented in a massively-parallel supercomputing structure, the global collective network is physically and logically partitionable according to the needs of a processing algorithm.

    3. Community Assessment Tool for Public Health Emergencies Including Pandemic Influenza

      SciTech Connect (OSTI)

      ORAU's Oak Ridge Institute for Science Education (HCTT-CHE)

      2011-04-14

      The Community Assessment Tool (CAT) for Public Health Emergencies Including Pandemic Influenza (hereafter referred to as the CAT) was developed as a result of feedback received from several communities. These communities participated in workshops focused on influenza pandemic planning and response. The 2008 through 2011 workshops were sponsored by the Centers for Disease Control and Prevention (CDC). Feedback during those workshops indicated the need for a tool that a community can use to assess its readiness for a disaster - readiness from a total healthcare perspective, not just hospitals, but the whole healthcare system. The CAT intends to do just that - help strengthen existing preparedness plans by allowing the healthcare system and other agencies to work together during an influenza pandemic. It helps reveal each core agency partner's (sector's) capabilities and resources, and highlights cases of the same vendors being used for resource supplies (e.g., personal protective equipment [PPE] and oxygen) by the partners (e.g., public health departments, clinics, or hospitals). The CAT also addresses gaps in the community's capabilities or potential shortages in resources. This tool has been reviewed by a variety of key subject matter experts from federal, state, and local agencies and organizations. It also has been piloted with communities of different population sizes, ranging from large urban to small rural.

    4. Large Scale Computing and Storage Requirements for Basic Energy Sciences: Target 2014

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Final report: Large Scale Computing and Storage Requirements for Basic Energy Sciences, Report of the Joint BES/ASCR/NERSC Workshop conducted February 9-10, 2010. The workshop agenda, including presentation times and speaker information, and the workshop presentations are also available.

    5. Computational Tools for Accelerating Carbon Capture Process Development

      SciTech Connect (OSTI)

      Miller, David

      2013-01-01

      The goals of the work reported are: to develop new computational tools and models to enable industry to more rapidly develop and deploy new advanced energy technologies; to demonstrate the capabilities of the CCSI Toolset on non-proprietary case studies; and to deploy the CCSI Toolset to industry. Challenges of simulating carbon capture (and other) processes include: dealing with multiple scales (particle, device, and whole process scales); integration across scales; verification, validation, and uncertainty; and decision support. The tools cover: risk analysis and decision making; validated, high-fidelity CFD; high-resolution filtered sub-models; process design and optimization tools; advanced process control and dynamics; process models; basic data sub-models; and cross-cutting integration tools.

    6. Experiences using DAKOTA stochastic expansion methods in computational simulations.

      SciTech Connect (OSTI)

      Templeton, Jeremy Alan; Ruthruff, Joseph R.

      2012-01-01

      Uncertainty quantification (UQ) methods bring rigorous statistical connections to the analysis of computational and experiment data, and provide a basis for probabilistically assessing margins associated with safety and reliability. The DAKOTA toolkit developed at Sandia National Laboratories implements a number of UQ methods, which are being increasingly adopted by modeling and simulation teams to facilitate these analyses. This report disseminates results as to the performance of DAKOTA's stochastic expansion methods for UQ on a representative application. Our results provide a number of insights that may be of interest to future users of these methods, including the behavior of the methods in estimating responses at varying probability levels, and the expansion levels for the methodologies that may be needed to achieve convergence.
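
      DAKOTA's stochastic expansion methods build polynomial surrogates of a response; the sketch below substitutes plain Monte Carlo sampling (a simpler stand-in, not DAKOTA's API) just to make "responses at varying probability levels" concrete.

          import numpy as np

          rng = np.random.default_rng(0)

          def response(x):
              # stand-in for an expensive simulation response function
              return x[0] ** 2 + 0.5 * x[1]

          samples = rng.normal(size=(10000, 2))      # two uncertain inputs
          outputs = np.apply_along_axis(response, 1, samples)

          # responses at varying probability levels
          for p in (0.5, 0.9, 0.99):
              print(f"level {p}: {np.quantile(outputs, p):.3f}")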

    7. CY15 Livermore Computing Focus Areas

      SciTech Connect (OSTI)

      Connell, Tom M.; Cupps, Kim C.; D'Hooge, Trent E.; Fahey, Tim J.; Fox, Dave M.; Futral, Scott W.; Gary, Mark R.; Goldstone, Robin J.; Hamilton, Pam G.; Heer, Todd M.; Long, Jeff W.; Mark, Rich J.; Morrone, Chris J.; Shoopman, Jerry D.; Slavec, Joe A.; Smith, David W.; Springmeyer, Becky R; Stearman, Marc D.; Watson, Py C.

      2015-01-20

      The LC team undertook a survey of primary Center drivers for CY15. Identified key drivers included enhancing user experience and productivity, pre-exascale platform preparation, process improvement, data-centric computing paradigms, and business expansion. The team organized critical supporting efforts into three cross-cutting focus areas: Improving Service Quality; Monitoring, Automation, Delegation and Center Efficiency; and Next Generation Compute and Data Environments. In each area the team detailed high-level challenges and identified discrete actions to address them during the calendar year. Identifying the Center's primary drivers, issues, and plans is intended to serve as a lens focusing LC personnel, resources, and priorities throughout the year.

    8. Optimized data communications in a parallel computer

      DOE Patents [OSTI]

      Faraj, Daniel A.

      2014-08-19

      A parallel computer includes nodes that include a network adapter that couples the node in a point-to-point network and supports communications in opposite directions of each dimension. Optimized communications include: receiving, by a network adapter of a receiving compute node, a packet--from a source direction--that specifies a destination node and deposit hints. Each hint is associated with a direction within which the packet is to be deposited. If a hint indicates the packet to be deposited in the opposite direction: the adapter delivers the packet to an application on the receiving node; forwards the packet to a next node in the opposite direction if the receiving node is not the destination; and forwards the packet to a node in a direction of a subsequent dimension if the hints indicate that the packet is to be deposited in the direction of the subsequent dimension.

    9. Optimized data communications in a parallel computer

      DOE Patents [OSTI]

      Faraj, Daniel A

      2014-10-21

      A parallel computer includes nodes that include a network adapter that couples the node in a point-to-point network and supports communications in opposite directions of each dimension. Optimized communications include: receiving, by a network adapter of a receiving compute node, a packet--from a source direction--that specifies a destination node and deposit hints. Each hint is associated with a direction within which the packet is to be deposited. If a hint indicates the packet to be deposited in the opposite direction: the adapter delivers the packet to an application on the receiving node; forwards the packet to a next node in the opposite direction if the receiving node is not the destination; and forwards the packet to a node in a direction of a subsequent dimension if the hints indicate that the packet is to be deposited in the direction of the subsequent dimension.
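
      The two records above describe the same deposit-hint mechanism. The sketch below paraphrases that forwarding rule in plain Python; the class and method names (Packet, Node, deliver, forward) and the direction encoding are our assumptions, not the patent's implementation.

        from dataclasses import dataclass, field

        @dataclass
        class Packet:
            destination: int                                   # node id along the axis
            deposit_hints: dict = field(default_factory=dict)  # direction -> bool

        class Node:
            def __init__(self, node_id, dims):
                self.node_id = node_id
                self.dims = dims                               # ordered dimension names

            def on_receive(self, packet, source_direction, dim_index):
                dim = self.dims[dim_index]
                # direction opposite to the one the packet arrived from
                opposite = ("-" if source_direction.startswith("+") else "+") + dim
                if packet.deposit_hints.get(opposite):
                    self.deliver(packet)                       # hand to the application
                    if self.node_id != packet.destination:     # not yet at destination:
                        self.forward(packet, opposite)         # keep going on this axis
                # hop to the next dimension when hinted to deposit there too
                nxt = dim_index + 1
                if nxt < len(self.dims) and packet.deposit_hints.get("+" + self.dims[nxt]):
                    self.forward(packet, "+" + self.dims[nxt])

            def deliver(self, packet):
                print(f"node {self.node_id}: delivered packet bound for {packet.destination}")

            def forward(self, packet, direction):
                print(f"node {self.node_id}: forwarding {direction}")

        # usage sketch: a packet arriving from +x, hinted for -x and +y deposit
        n = Node(2, dims=["x", "y"])
        n.on_receive(Packet(destination=5, deposit_hints={"-x": True, "+y": True}),
                     source_direction="+x", dim_index=0)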

    10. Electro-optical device including a nitrogen-containing electrolyte

      DOE Patents [OSTI]

      Bates, J.B.; Dudney, N.J.; Gruzalski, G.R.; Luck, C.F.

      1995-10-03

      Described is a thin-film battery, especially a thin-film microbattery, and a method for making same having application as a backup or primary integrated power source for electronic devices. The battery includes a novel electrolyte which is electrochemically stable and does not react with the lithium anode and a novel vanadium oxide cathode. Configured as a microbattery, the battery can be fabricated directly onto a semiconductor chip, onto the semiconductor die or onto any portion of the chip carrier. The battery can be fabricated to any specified size or shape to meet the requirements of a particular application. The battery is fabricated of solid state materials and is capable of operation between -15 °C and 150 °C.

    11. Electro-optical device including a nitrogen-containing electrolyte

      DOE Patents [OSTI]

      Bates, John B. (Oak Ridge, TN); Dudney, Nancy J. (Knoxville, TN); Gruzalski, Greg R. (Oak Ridge, TN); Luck, Christopher F. (Knoxville, TN)

      1995-01-01

      Described is a thin-film battery, especially a thin-film microbattery, and a method for making same having application as a backup or primary integrated power source for electronic devices. The battery includes a novel electrolyte which is electrochemically stable and does not react with the lithium anode and a novel vanadium oxide cathode. Configured as a microbattery, the battery can be fabricated directly onto a semiconductor chip, onto the semiconductor die or onto any portion of the chip carrier. The battery can be fabricated to any specified size or shape to meet the requirements of a particular application. The battery is fabricated of solid state materials and is capable of operation between -15 °C and 150 °C.

    12. Hydraulic engine valve actuation system including independent feedback control

      DOE Patents [OSTI]

      Marriott, Craig D

      2013-06-04

      A hydraulic valve actuation assembly may include a housing, a piston, a supply control valve, a closing control valve, and an opening control valve. The housing may define a first fluid chamber, a second fluid chamber, and a third fluid chamber. The piston may be axially secured to an engine valve and located within the first, second and third fluid chambers. The supply control valve may control a hydraulic fluid supply to the piston. The closing control valve may be located between the supply control valve and the second fluid chamber and may control fluid flow from the second fluid chamber to the supply control valve. The opening control valve may be located between the supply control valve and the second fluid chamber and may control fluid flow from the supply control valve to the second fluid chamber.

    13. Nijmegen soft-core potential including two-meson exchange

      SciTech Connect (OSTI)

      Stoks, V.G.J.; Rijken, T.A.

      1995-05-10

      We report on the progress of the construction of the extended soft-core (ESC) Nijmegen potential. Next to the standard one-boson-exchange parts, the model includes the pion-meson-exchange potentials due to the parallel and crossed-box diagrams, as well as the one-pair and two-pair diagrams, vertices for which can be identified with similar interactions appearing in chiral-symmetric Lagrangians. Although the ESC potential is still under construction, it already gives an excellent description of all NN scattering data below 350 MeV with χ²/datum = 1.3. © 1995 American Institute of Physics.

    14. Pulse transmission transmitter including a higher order time derivative filter

      DOE Patents [OSTI]

      Dress, Jr., William B.; Smith, Stephen F.

      2003-09-23

      Systems and methods for pulse-transmission low-power communication modes are disclosed. A pulse transmission transmitter includes: a clock; a pseudorandom polynomial generator coupled to the clock, the pseudorandom polynomial generator having a polynomial load input; an exclusive-OR gate coupled to the pseudorandom polynomial generator, the exclusive-OR gate having a serial data input; a programmable delay circuit coupled to both the clock and the exclusive-OR gate; a pulse generator coupled to the programmable delay circuit; and a higher order time derivative filter coupled to the pulse generator. The systems and methods significantly reduce lower-frequency emissions from pulse transmission spread-spectrum communication modes, which reduces potentially harmful interference to existing radio frequency services and users and also simultaneously permit transmission of multiple data bits by utilizing specific pulse shapes.
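
      The claim that a higher-order time-derivative filter suppresses lower-frequency emissions can be checked numerically: each time derivative of a pulse multiplies its spectrum by frequency, pushing energy away from DC. The sketch below shows this for derivatives of a Gaussian pulse; the sample rate, pulse width, and 1 GHz threshold are assumed values for illustration, not parameters from the patent.

        import numpy as np

        fs = 20e9                                  # assumed sample rate, 20 GS/s
        t = np.arange(-2e-9, 2e-9, 1 / fs)
        tau = 100e-12                              # assumed pulse width parameter
        pulse = np.exp(-t ** 2 / (2 * tau ** 2))   # baseline Gaussian pulse

        for order in (0, 1, 2, 4):
            p = pulse.copy()
            for _ in range(order):
                p = np.gradient(p, 1 / fs)         # numerical time derivative
            spec = np.abs(np.fft.rfft(p))
            freqs = np.fft.rfftfreq(len(p), 1 / fs)
            peak = freqs[np.argmax(spec)]
            low = spec[freqs < 1e9].sum() / spec.sum()  # magnitude share below 1 GHz
            print(f"derivative order {order}: spectral peak {peak / 1e9:.2f} GHz, "
                  f"fraction below 1 GHz = {low:.3f}")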

    15. Actuator assembly including a single axis of rotation locking member

      DOE Patents [OSTI]

      Quitmeyer, James N.; Benson, Dwayne M.; Geck, Kellan P.

      2009-12-08

      An actuator assembly including an actuator housing assembly and a single axis of rotation locking member fixedly attached to a portion of the actuator housing assembly and an external mounting structure. The single axis of rotation locking member restricts rotational movement of the actuator housing assembly about at least one axis. The single axis of rotation locking member is coupled at a first end to the actuator housing assembly about a Y axis and at a 90.degree. angle to an X and Z axis providing rotation of the actuator housing assembly about the Y axis. The single axis of rotation locking member is coupled at a second end to a mounting structure, and more particularly a mounting pin, about an X axis and at a 90.degree. angle to a Y and Z axis providing rotation of the actuator housing assembly about the X axis. The actuator assembly is thereby restricted from rotation about the Z axis.

    16. Fuel cell repeater unit including frame and separator plate

      DOE Patents [OSTI]

      Yamanis, Jean; Hawkes, Justin R; Chiapetta, Jr., Louis; Bird, Connie E; Sun, Ellen Y; Croteau, Paul F

      2013-11-05

      An example fuel cell repeater includes a separator plate and a frame establishing at least a portion of a flow path that is operative to communicate fuel to or from at least one fuel cell held by the frame relative to the separator plate. The flow path has a perimeter, and any fuel within the perimeter flows across the at least one fuel cell in a first direction. The separator plate, the frame, or both establish at least one conduit positioned outside the flow path perimeter. The conduit is configured to direct flow in a second, different direction and is fluidly coupled with the flow path.

    17. Copper laser modulator driving assembly including a magnetic compression laser

      DOE Patents [OSTI]

      Cook, Edward G. (Livermore, CA); Birx, Daniel L. (Oakley, CA); Ball, Don G. (Livermore, CA)

      1994-01-01

      A laser modulator (10) having a low voltage assembly (12) with a plurality of low voltage modules (14) with first stage magnetic compression circuits (20) and magnetic assist inductors (28) with a common core (91), such that timing of the first stage magnetic switches (30b) is thereby synchronized. A bipolar second stage of magnetic compression (42) is coupled to the low voltage modules (14) through a bipolar pulse transformer (36) and a third stage of magnetic compression (44) is directly coupled to the second stage of magnetic compression (42). The low voltage assembly (12) includes pressurized boxes (117) for improving voltage standoff between the primary winding assemblies (34) and secondary winding (40) contained therein.

    18. Performing an allreduce operation on a plurality of compute nodes of a parallel computer

      DOE Patents [OSTI]

      Faraj, Ahmad

      2013-02-12

      Methods, apparatus, and products are disclosed for performing an allreduce operation on a plurality of compute nodes of a parallel computer, each node including at least two processing cores, that include: performing, for each node, a local reduction operation using allreduce contribution data for the cores of that node, yielding, for each node, a local reduction result for one or more representative cores for that node; establishing one or more logical rings among the nodes, each logical ring including only one of the representative cores from each node; performing, for each logical ring, a global allreduce operation using the local reduction result for the representative cores included in that logical ring, yielding a global allreduce result for each representative core included in that logical ring; and performing, for each node, a local broadcast operation using the global allreduce results for each representative core on that node.
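
      The three phases in this abstract (local reduction, per-ring global allreduce, local broadcast) can be walked through in a small array simulation. The sketch below uses one logical ring and a NumPy stand-in for the network; the sizes and names are illustrative, and no claim is made about the patent's actual data layout.

        import numpy as np

        NODES, CORES, WIDTH = 4, 2, 3              # assumed toy dimensions
        rng = np.random.default_rng(1)
        contrib = rng.integers(0, 10, size=(NODES, CORES, WIDTH))  # per-core data

        # 1) local reduction: each node sums its cores' contributions, and the
        #    result is held by one representative core per node
        local = contrib.sum(axis=1)                # shape (NODES, WIDTH)

        # 2) logical ring over the representative cores: at each step, pass the
        #    running partial to the ring neighbor and accumulate
        partial = local.copy()
        acc = local.copy()
        for _ in range(NODES - 1):
            partial = np.roll(partial, 1, axis=0)  # "send" to the next ring member
            acc += partial
        # every representative core now holds the same global sum

        # 3) local broadcast: each representative shares the result on-node
        result = np.repeat(acc[:, None, :], CORES, axis=1)

        assert np.array_equal(result[2, 1], contrib.reshape(-1, WIDTH).sum(axis=0))
        print("global allreduce result on every core:", result[0, 0])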

    19. Scalable and Energy Efficient Computer Systems - Energy Innovation Portal

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Energy Analysis. Scalable and Energy Efficient Computer Systems, Los Alamos National Laboratory (contact LANL). A scissors crossover provides flexible mobility for train cars between parallel tracks; by analogy, LANL is pursuing a design to allow flexible and efficient movement of data in scalable computer systems.

    20. Guidance on GENII computer code - July 6, 2004 | Department of Energy

      Office of Environmental Management (EM)

      GENII Computer Code Application Guidance for Documented Safety Analysis, July 6, 2004. This document provides guidance to Department of Energy (DOE) facility analysts in the use of the GENII computer code for supporting Documented Safety Analysis applications. Information is provided herein that supplements information found in the GENII documentation provided by the code developer. GENII is one of six

    1. Executing a gather operation on a parallel computer

      DOE Patents [OSTI]

      Archer, Charles J. (Rochester, MN); Ratterman, Joseph D. (Rochester, MN)

      2012-03-20

      Methods, apparatus, and computer program products are disclosed for executing a gather operation on a parallel computer according to embodiments of the present invention. Embodiments include configuring, by the logical root, a result buffer on the logical root, the result buffer having positions, each position corresponding to a ranked node in the operational group and for storing contribution data gathered from that ranked node. Embodiments also include, repeatedly for each position in the result buffer: determining, by each compute node of an operational group, whether the current position in the result buffer corresponds with the rank of the compute node; if the current position in the result buffer corresponds with the rank of the compute node, contributing, by that compute node, the compute node's contribution data; if the current position in the result buffer does not correspond with the rank of the compute node, contributing, by that compute node, a value of zero for the contribution data; and storing, by the logical root in the current position in the result buffer, results of a bitwise OR operation of all the contribution data by all compute nodes of the operational group for the current position, the results received through the global combining network.
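
      The per-position contribution rule in this abstract reduces to a simple loop: for each slot in the result buffer, exactly one node contributes real data and the rest contribute zero, so a bitwise OR across all contributions reconstructs the gathered values. A toy rendering follows; the payloads and rank count are invented.

        ranks = range(4)
        contribution = {r: (0xA0 | r) for r in ranks}   # toy per-node payloads

        result_buffer = []
        for position in ranks:                          # one slot per ranked node
            gathered = 0
            for rank in ranks:                          # what the combining network ORs
                gathered |= contribution[rank] if rank == position else 0
            result_buffer.append(gathered)

        print([hex(v) for v in result_buffer])          # ['0xa0', '0xa1', '0xa2', '0xa3']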

    2. High Performance Computing at the Oak Ridge Leadership Computing Facility

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      High Performance Computing at the Oak Ridge Leadership Computing Facility. Outline: our mission; computer systems: present, past, future; challenges along the way; resources for users. Our mission: world's most powerful computing facility; nation's largest concentration of open source materials research; $1.3B budget; 4,250 employees; 3,900 research guests annually; $350 million invested in modernization; nation's most diverse energy

    3. Including shielding effects in application of the TPCA method for detection of embedded radiation sources.

      SciTech Connect (OSTI)

      Johnson, William C.; Shokair, Isaac R.

      2011-12-01

      Conventional full spectrum gamma spectroscopic analysis has the objective of quantitative identification of all the radionuclides present in a measurement. For low-energy resolution detectors such as NaI, when photopeaks alone are not sufficient for complete isotopic identification, such analysis requires template spectra for all the radionuclides present in the measurement. When many radionuclides are present it is difficult to make the correct identification and this process often requires many attempts to obtain a statistically valid solution by highly skilled spectroscopists. A previous report investigated using the targeted principal component analysis method (TPCA) for detection of embedded sources for RPM applications. This method uses spatial/temporal information from multiple spectral measurements to test the hypothesis of the presence of a target spectrum of interest in these measurements without the need to identify all the other radionuclides present. The previous analysis showed that the TPCA method has significant potential for automated detection of target radionuclides of interest, but did not include the effects of shielding. This report complements the previous analysis by including the effects of spectral distortion due to shielding effects for the same problem of detection of embedded sources. Two examples, one with one target radionuclide and the other with two, show that the TPCA method can successfully detect shielded targets in the presence of many other radionuclides. The shielding parameters are determined as part of the optimization process using interpolation of library spectra that are defined on a 2D grid of atomic numbers and areal densities.
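
      The last step mentioned above, interpolating library spectra on a 2D grid of atomic number and areal density, can be sketched with standard tools. Everything below is synthetic (the grid, the attenuation model, and the channel count are invented for illustration); only the interpolation pattern reflects the report's description.

        import numpy as np
        from scipy.interpolate import RegularGridInterpolator

        z_grid = np.array([13.0, 26.0, 82.0])        # Al, Fe, Pb shield materials
        rho_d = np.array([0.0, 5.0, 10.0, 20.0])     # areal densities, g/cm^2
        channels = 128

        # synthetic spectral library: stronger attenuation at low channels
        rng = np.random.default_rng(7)
        base = rng.random(channels)
        library = np.empty((len(z_grid), len(rho_d), channels))
        for i, z in enumerate(z_grid):
            for j, ad in enumerate(rho_d):
                atten = np.exp(-0.02 * ad * (1 + z / 100) * np.linspace(2, 0.5, channels))
                library[i, j] = base * atten

        interp = RegularGridInterpolator((z_grid, rho_d), library)

        def shielded_spectrum(z, areal_density):
            """Candidate target spectrum at off-grid shielding parameters."""
            return interp([[z, areal_density]])[0]

        # an optimizer would vary (z, areal_density) to best match the measurement
        print(shielded_spectrum(40.0, 7.5)[:5])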

    4. cDNA encoding a polypeptide including a hevein sequence

      DOE Patents [OSTI]

      Raikhel, Natasha V.; Broekaert, Willem F.; Chua, Nam-Hai; Kush, Anil

      1999-05-04

      A cDNA clone (HEV1) encoding hevein was isolated via polymerase chain reaction (PCR) using mixed oligonucleotides corresponding to two regions of hevein as primers and a Hevea brasiliensis latex cDNA library as a template. HEV1 is 1018 nucleotides long and includes an open reading frame of 204 amino acids. The deduced amino acid sequence contains a putative signal sequence of 17 amino acid residues followed by a 187 amino acid polypeptide. The amino-terminal region (43 amino acids) is identical to hevein and shows homology to several chitin-binding proteins and to the amino-termini of wound-induced genes in potato and poplar. The carboxyl-terminal portion of the polypeptide (144 amino acids) is 74-79% homologous to the carboxyl-terminal region of wound-inducible genes of potato. Wounding, as well as application of the plant hormones abscisic acid and ethylene, resulted in accumulation of hevein transcripts in leaves, stems and latex, but not in roots, as shown by using the cDNA as a probe. A fusion protein was produced in E. coli from the protein of the present invention and maltose binding protein produced by the E. coli.

    5. cDNA encoding a polypeptide including a hevein sequence

      DOE Patents [OSTI]

      Raikhel, N.V.; Broekaert, W.F.; Chua, N.H.; Kush, A.

      1995-03-21

      A cDNA clone (HEV1) encoding hevein was isolated via polymerase chain reaction (PCR) using mixed oligonucleotides corresponding to two regions of hevein as primers and a Hevea brasiliensis latex cDNA library as a template. HEV1 is 1,018 nucleotides long and includes an open reading frame of 204 amino acids. The deduced amino acid sequence contains a putative signal sequence of 17 amino acid residues followed by a 187 amino acid polypeptide. The amino-terminal region (43 amino acids) is identical to hevein and shows homology to several chitin-binding proteins and to the amino-termini of wound-induced genes in potato and poplar. The carboxyl-terminal portion of the polypeptide (144 amino acids) is 74-79% homologous to the carboxyl-terminal region of wound-inducible genes of potato. Wounding, as well as application of the plant hormones abscisic acid and ethylene, resulted in accumulation of hevein transcripts in leaves, stems and latex, but not in roots, as shown by using the cDNA as a probe. A fusion protein was produced in E. coli from the protein of the present invention and maltose binding protein produced by the E. coli. 11 figures.

    6. cDNA encoding a polypeptide including a hevein sequence

      DOE Patents [OSTI]

      Raikhel, N.V.; Broekaert, W.F.; Chua, N.H.; Kush, A.

      1999-05-04

      A cDNA clone (HEV1) encoding hevein was isolated via polymerase chain reaction (PCR) using mixed oligonucleotides corresponding to two regions of hevein as primers and a Hevea brasiliensis latex cDNA library as a template. HEV1 is 1018 nucleotides long and includes an open reading frame of 204 amino acids. The deduced amino acid sequence contains a putative signal sequence of 17 amino acid residues followed by a 187 amino acid polypeptide. The amino-terminal region (43 amino acids) is identical to hevein and shows homology to several chitin-binding proteins and to the amino-termini of wound-induced genes in potato and poplar. The carboxyl-terminal portion of the polypeptide (144 amino acids) is 74-79% homologous to the carboxyl-terminal region of wound-inducible genes of potato. Wounding, as well as application of the plant hormones abscisic acid and ethylene, resulted in accumulation of hevein transcripts in leaves, stems and latex, but not in roots, as shown by using the cDNA as a probe. A fusion protein was produced in E. coli from the protein of the present invention and maltose binding protein produced by the E. coli. 12 figs.

    7. cDNA encoding a polypeptide including a hevein sequence

      DOE Patents [OSTI]

      Raikhel, Natasha V.; Broekaert, Willem F.; Chua, Nam-Hai; Kush, Anil

      1995-03-21

      A cDNA clone (HEV1) encoding hevein was isolated via polymerase chain reaction (PCR) using mixed oligonucleotides corresponding to two regions of hevein as primers and a Hevea brasiliensis latex cDNA library as a template. HEV1 is 1018 nucleotides long and includes an open reading frame of 204 amino acids. The deduced amino acid sequence contains a putative signal sequence of 17 amino acid residues followed by a 187 amino acid polypeptide. The amino-terminal region (43 amino acids) is identical to hevein and shows homology to several chitin-binding proteins and to the amino-termini of wound-induced genes in potato and poplar. The carboxyl-terminal portion of the polypeptide (144 amino acids) is 74-79% homologous to the carboxyl-terminal region of wound-inducible genes of potato. Wounding, as well as application of the plant hormones abscisic acid and ethylene, resulted in accumulation of hevein transcripts in leaves, stems and latex, but not in roots, as shown by using the cDNA as a probe. A fusion protein was produced in E. coli from the protein of the present invention and maltose binding protein produced by the E. coli.

    8. Extractant composition including crown ether and calixarene extractants

      DOE Patents [OSTI]

      Meikrantz, David H.; Todd, Terry A.; Riddle, Catherine L.; Law, Jack D.; Peterman, Dean R.; Mincher, Bruce J.; McGrath, Christopher A.; Baker, John D.

      2009-04-28

      An extractant composition comprising a mixed extractant solvent consisting of calix[4]arene-bis-(tert-octylbenzo)-crown-6 ("BOBCalixC6"), 4',4',(5')-di-(t-butyldicyclo-hexano)-18-crown-6 ("DtBu18C6"), and at least one modifier dissolved in a diluent. The DtBu18C6 may be present at from approximately 0.01 M to approximately 0.4 M, such as at from approximately 0.086 M to approximately 0.108 M. The modifier may be 1-(2,2,3,3-tetrafluoropropoxy)-3-(4-sec-butylphenoxy)-2-propanol ("Cs-7SB") and may be present at from approximately 0.01 M to approximately 0.8 M. In one embodiment, the mixed extractant solvent includes approximately 0.15 M DtBu18C6, approximately 0.007 M BOBCalixC6, and approximately 0.75 M Cs-7SB modifier dissolved in an isoparaffinic hydrocarbon diluent. The extractant composition further comprises an aqueous phase. The mixed extractant solvent may be used to remove cesium and strontium from the aqueous phase.

    9. Interim performance criteria for photovoltaic energy systems. [Glossary included

      SciTech Connect (OSTI)

      DeBlasio, R.; Forman, S.; Hogan, S.; Nuss, G.; Post, H.; Ross, R.; Schafft, H.

      1980-12-01

      This document is a response to the Photovoltaic Research, Development, and Demonstration Act of 1978 (P.L. 95-590) which required the generation of performance criteria for photovoltaic energy systems. Since the document is evolutionary and will be updated, the term interim is used. More than 50 experts in the photovoltaic field have contributed in the writing and review of the 179 performance criteria listed in this document. The performance criteria address characteristics of present-day photovoltaic systems that are of interest to manufacturers, government agencies, purchasers, and all others interested in various aspects of photovoltaic system performance and safety. The performance criteria apply to the system as a whole and to its possible subsystems: array, power conditioning, monitor and control, storage, cabling, and power distribution. They are further categorized according to the following performance attributes: electrical, thermal, mechanical/structural, safety, durability/reliability, installation/operation/maintenance, and building/site. Each criterion contains a statement of expected performance (nonprescriptive), a method of evaluation, and a commentary with further information or justification. Over 50 references for background information are also given. A glossary with definitions relevant to photovoltaic systems and a section on test methods are presented in the appendices. Twenty test methods are included to measure performance characteristics of the subsystem elements. These test methods and other parts of the document will be expanded or revised as future experience and needs dictate.

    10. Multiprocessor computing for images

      SciTech Connect (OSTI)

      Cantoni, V.; Levialdi, S.

      1988-08-01

      A review of image processing systems developed until now is given, highlighting the weak points of such systems and the trends that have dictated their evolution through the years, producing different generations of machines. Each generation may be characterized by its hardware architecture, its programmability features, and the relevant application areas. The need for multiprocessing hierarchical systems is discussed, focusing on pyramidal architectures. Their computational paradigms, their virtual and physical implementation, and their programming and software requirements and capabilities, expressed by means of suitable languages, are discussed.

    11. developing-compute-efficient

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Developing Compute-efficient, Quality Models with LS-PrePost® 3 on the TRACC Cluster, Oct. 21-22, 2010, Argonne TRACC. Dr. Cezary Bojanowski, Dr. Ronald F. Kulak. The LS-PrePost Introductory Course was held October 21-22, 2010 at TRACC in West Chicago with interactive participation on-site as well as remotely via the Internet. Intended primarily for finite element analysts with

    12. Computer generated holographic microtags

      DOE Patents [OSTI]

      Sweatt, William C.

      1998-01-01

      A microlithographic tag comprising an array of individual computer generated holographic patches having feature sizes between 250 and 75 nanometers. The tag is a composite hologram made up of the individual holographic patches and contains identifying information when read out with a laser of the proper wavelength and at the proper angles of probing and reading. The patches are fabricated in a steep angle Littrow readout geometry to maximize returns in the -1 diffracted order. The tags are useful as anti-counterfeiting markers because of the extreme difficulty in reproducing them.

    13. Computer generated holographic microtags

      DOE Patents [OSTI]

      Sweatt, W.C.

      1998-03-17

      A microlithographic tag comprising an array of individual computer generated holographic patches having feature sizes between 250 and 75 nanometers is disclosed. The tag is a composite hologram made up of the individual holographic patches and contains identifying information when read out with a laser of the proper wavelength and at the proper angles of probing and reading. The patches are fabricated in a steep angle Littrow readout geometry to maximize returns in the -1 diffracted order. The tags are useful as anti-counterfeiting markers because of the extreme difficulty in reproducing them. 5 figs.

    14. Scanning computed confocal imager

      DOE Patents [OSTI]

      George, John S. (Los Alamos, NM)

      2000-03-14

      There is provided a confocal imager comprising a light source emitting light, with a light modulator in optical communication with the light source for varying the spatial and temporal pattern of the light. A beam splitter receives the scanned light, directs the scanned light onto a target, and passes light reflected from the target to a video capturing device, which receives the reflected light and transfers a digital image of the reflected light to a computer for creating a virtual aperture and outputting the digital image. In a transmissive mode of operation the invention omits the beam splitter and captures light passed through the target.

    15. Introduction to High Performance Computing

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Introduction to High Performance Computing, June 10, 2013. Download: Gerber-HPC-2.pdf

    16. Computer Wallpaper | The Ames Laboratory

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Computer Wallpaper. We've incorporated the tagline, "Creating Materials and Energy Solutions," into a computer wallpaper so you can display it on your desktop as a constant reminder.

    17. Analysis of Cluster Management Tools

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Analysis of Configuration Management Tools. Computer System, Cluster, and Networking Summer Institute. Team: Evan Leeseberg, James Kang, Katherine Nystrom. Mentors: Kevin Tegtmeier,...

    18. Techniques for Automated Performance Analysis

      SciTech Connect (OSTI)

      Marcus, Ryan C.

      2014-09-02

      The performance of a particular HPC code depends on a multitude of variables, including compiler selection, optimization flags, OpenMP pool size, file system load, memory usage, MPI configuration, etc. As a result of this complexity, current predictive models have limited applicability, especially at scale. We present a formulation of scientific codes, nodes, and clusters that reduces complex performance analysis to well-known mathematical techniques. Building accurate predictive models and enhancing our understanding of scientific codes at scale is an important step towards exascale computing.
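
      As a concrete (and deliberately simplified) instance of such a model, one can regress runtime against configuration knobs like OpenMP pool size and optimization level. The data below are synthetic and the feature set is an assumption; the point is only the reduction of performance prediction to ordinary least squares.

        import numpy as np

        rng = np.random.default_rng(3)
        n_runs = 200
        threads = rng.integers(1, 33, n_runs)      # OpenMP pool size
        opt = rng.integers(0, 4, n_runs)           # optimization level -O0 .. -O3
        load = rng.random(n_runs)                  # file-system load proxy

        # synthetic "measured" runtimes: serial part + parallel part + noise
        runtime = 5 + 40 / threads - 1.5 * opt + 3 * load + rng.normal(0, 0.5, n_runs)

        # design matrix: intercept, 1/threads (Amdahl-style), opt level, load
        X = np.column_stack([np.ones(n_runs), 1 / threads, opt, load])
        coef, *_ = np.linalg.lstsq(X, runtime, rcond=None)
        print("fit coefficients:", np.round(coef, 2))   # ~ [5, 40, -1.5, 3]

        predicted = np.array([1, 1 / 16, 3, 0.2]) @ coef
        print(f"predicted runtime at 16 threads, -O3, light load: {predicted:.2f} s")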

    19. Community Assessment Tool for Public Health Emergencies Including Pandemic Influenza

      SciTech Connect (OSTI)

      HCTT-CHE

      2011-04-14

      The Community Assessment Tool (CAT) for Public Health Emergencies Including Pandemic Influenza (hereafter referred to as the CAT) was developed as a result of feedback received from several communities. These communities participated in workshops focused on influenza pandemic planning and response. The 2008 through 2011 workshops were sponsored by the Centers for Disease Control and Prevention (CDC). Feedback during those workshops indicated the need for a tool that a community can use to assess its readiness for a disaster: readiness from a total healthcare perspective, not just hospitals, but the whole healthcare system. The CAT intends to do just that: help strengthen existing preparedness plans by allowing the healthcare system and other agencies to work together during an influenza pandemic. It helps reveal each core agency partner's (sector's) capabilities and resources, and highlights cases of the same vendors being used for resource supplies (e.g., personal protective equipment [PPE] and oxygen) by the partners (e.g., public health departments, clinics, or hospitals). The CAT also addresses gaps in the community's capabilities or potential shortages in resources. While the purpose of the CAT is to further prepare the community for an influenza pandemic, its framework is an extension of the traditional all-hazards approach to planning and preparedness. As such, the information gathered by the tool is useful in preparation for most widespread public health emergencies. This tool is primarily intended for use by those involved in healthcare emergency preparedness (e.g., community planners, community disaster preparedness coordinators, 9-1-1 directors, hospital emergency preparedness coordinators). It is divided into sections based on the core agency partners, which may be involved in the community's influenza pandemic response.

    20. Managing internode data communications for an uninitialized process in a parallel computer

      DOE Patents [OSTI]

      Archer, Charles J; Blocksome, Michael A; Miller, Douglas R; Parker, Jeffrey J; Ratterman, Joseph D; Smith, Brian E

      2014-05-20

      A parallel computer includes nodes, each having main memory and a messaging unit (MU). Each MU includes computer memory, which in turn includes MU message buffers. Each MU message buffer is associated with an uninitialized process on the compute node. In the parallel computer, managing internode data communications for an uninitialized process includes: receiving, by an MU of a compute node, one or more data communications messages in an MU message buffer associated with an uninitialized process on the compute node; determining, by an application agent, that the MU message buffer associated with the uninitialized process is full prior to initialization of the uninitialized process; establishing, by the application agent, a temporary message buffer for the uninitialized process in main computer memory; and moving, by the application agent, data communications messages from the MU message buffer associated with the uninitialized process to the temporary message buffer in main computer memory.
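
      The control flow in this abstract (receive into a fixed-size MU buffer, detect overflow before the target process initializes, spill to a temporary buffer in main memory) is easy to mirror in a few lines. The class and method names below are ours, and the capacity is an arbitrary stand-in for the hardware limit.

        from collections import deque

        MU_BUFFER_CAPACITY = 4                  # assumed stand-in for the MU limit

        class ApplicationAgent:
            def __init__(self):
                self.mu_buffer = deque()        # messaging-unit memory
                self.temp_buffer = []           # temporary buffer in main memory

            def on_message(self, msg, process_initialized=False):
                if process_initialized:
                    return                      # normal delivery path, not modeled
                if len(self.mu_buffer) < MU_BUFFER_CAPACITY:
                    self.mu_buffer.append(msg)
                else:                           # MU buffer full before init:
                    while self.mu_buffer:       # move the backlog to main memory
                        self.temp_buffer.append(self.mu_buffer.popleft())
                    self.temp_buffer.append(msg)

        agent = ApplicationAgent()
        for i in range(6):
            agent.on_message(f"msg-{i}")
        print("MU buffer:", list(agent.mu_buffer))     # ['msg-5']
        print("temporary buffer:", agent.temp_buffer)  # msg-0 .. msg-4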