National Library of Energy BETA

Sample records for analysis including computer

  1. Quantitative Analysis of Biofuel Sustainability, Including Land...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Quantitative Analysis of Biofuel Sustainability, Including Land Use Change GHG Emissions ...

  2. Quantitative Analysis of Biofuel Sustainability, Including Land...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    life cycle analysis of biofuels continue to improve 2 Feedstock Production Feedstock Logistics, Storage and Transportation Feedstock Conversion Fuel Transportation and...

  3. Human-computer interface including haptically controlled interactions

    DOE Patents [OSTI]

    Anderson, Thomas G.

    2005-10-11

    The present invention provides a method of human-computer interfacing that provides haptic feedback to control interface interactions such as scrolling or zooming within an application. Haptic feedback in the present method allows the user more intuitive control of the interface interactions, and allows the user's visual focus to remain on the application. The method comprises providing a control domain within which the user can control interactions. For example, a haptic boundary can be provided corresponding to scrollable or scalable portions of the application domain. The user can position a cursor near such a boundary, feeling its presence haptically (reducing the requirement for visual attention for control of scrolling of the display). The user can then apply force relative to the boundary, causing the interface to scroll the domain. The rate of scrolling can be related to the magnitude of applied force, providing the user with additional intuitive, non-visual control of scrolling.
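The force-to-scroll-rate mapping the abstract describes is easy to make concrete. The sketch below is a hypothetical reduction of the idea, not code from the patent; the threshold, gain, and rate cap are invented parameters.

```python
# Hypothetical sketch of the force-to-scroll mapping described in the
# patent abstract: force applied against a haptic boundary drives the
# scroll rate. All names and constants are illustrative, not from the patent.

def scroll_rate(force_newtons: float, threshold: float = 0.5,
                gain: float = 120.0, max_rate: float = 600.0) -> float:
    """Map force applied past a haptic boundary to a scroll rate (px/s).

    Below `threshold` the boundary just pushes back and nothing scrolls;
    beyond it, the rate grows with the magnitude of the applied force.
    """
    excess = abs(force_newtons) - threshold
    if excess <= 0.0:
        return 0.0
    rate = gain * excess                      # rate proportional to force
    signed = rate if force_newtons > 0 else -rate
    return max(-max_rate, min(max_rate, signed))

print(scroll_rate(0.3))   # 0.0 -> inside the boundary, no scrolling
print(scroll_rate(2.0))   # 180.0 px/s
```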

  4. Quantitative Analysis of Biofuel Sustainability, Including Land Use Change

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    GHG Emissions | Department of Energy. Plenary V: Biofuels and Sustainability: Acknowledging Challenges and Confronting Misconceptions. Jennifer B. Dunn, Energy Systems and Sustainability Analyst, Argonne National Laboratory

  5. Code System for Analysis of Piping Reliability Including Seismic Events.

    Energy Science and Technology Software Center (OSTI)

    1999-04-26

    Version 00 PC-PRAISE is a probabilistic fracture mechanics computer code developed for IBM or IBM-compatible personal computers to estimate probabilities of leaks and breaks in nuclear power plant cooling piping. It was adapted from LLNL's PRAISE computer code.
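A probabilistic fracture mechanics code of this kind is, at its core, a Monte Carlo integration over uncertain crack parameters. The sketch below illustrates only that structure; the distributions, the linear growth law, and the leak criterion are invented stand-ins, not PC-PRAISE models.

```python
# Schematic Monte Carlo estimate of a pipe leak probability, in the spirit
# of probabilistic fracture mechanics codes like PC-PRAISE. Distributions,
# growth law, and wall thickness below are illustrative assumptions only.
import numpy as np

rng = np.random.default_rng(42)
n_samples = 100_000
wall_thickness_mm = 20.0
years = 40.0

# Sample uncertain inputs: initial crack depth and growth rate.
a0 = rng.lognormal(mean=np.log(1.0), sigma=0.5, size=n_samples)      # mm
growth = rng.lognormal(mean=np.log(0.1), sigma=0.7, size=n_samples)  # mm/yr

a_final = a0 + growth * years          # simple linear growth model (assumed)
p_leak = np.mean(a_final >= wall_thickness_mm)
print(f"Estimated 40-year leak probability: {p_leak:.4f}")
```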

  6. Radiological Safety Analysis Computer Program

    Energy Science and Technology Software Center (OSTI)

    2001-08-28

    RSAC-6 is the latest version of the RSAC program. It calculates the consequences of a release of radionuclides to the atmosphere. Using a personal computer, a user can generate a fission product inventory; decay and in-grow the inventory during transport through processes, facilities, and the environment; model the downwind dispersion of the activity; and calculate doses to downwind individuals. Internal dose from the inhalation and ingestion pathways is calculated. External dose from ground surface and plume gamma pathways is calculated. Updates in this version include the ability to evaluate a release to an enclosed room, resuspension of deposited activity, and evaluation of a release up to 1 meter from the release point. Enhanced tools are included for dry deposition, building wake, occupancy factors, respirable fraction, and AMAD adjustment, along with an updated and enhanced radionuclide inventory and inclusion of the dose-conversion factors from FGR 11 and 12.
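The downwind-dispersion step the abstract mentions is conventionally a Gaussian plume calculation; a minimal sketch follows, using the standard plume formula with ground reflection. All numbers (dispersion coefficients, wind speed, release height) are illustrative, not RSAC defaults.

```python
# Ground-level centerline concentration from the standard Gaussian plume
# model, the kind of downwind-dispersion calculation a code like RSAC
# performs. Parameter values here are illustrative assumptions.
import math

def chi_over_q(sigma_y, sigma_z, u, y=0.0, z=0.0, h=30.0):
    """Gaussian plume dilution factor chi/Q (s/m^3) with ground reflection.

    sigma_y, sigma_z: dispersion coefficients at the receptor (m)
    u: mean wind speed (m/s); h: effective release height (m)
    """
    lateral = math.exp(-y**2 / (2 * sigma_y**2))
    vertical = (math.exp(-(z - h)**2 / (2 * sigma_z**2)) +
                math.exp(-(z + h)**2 / (2 * sigma_z**2)))
    return lateral * vertical / (2 * math.pi * sigma_y * sigma_z * u)

# Example: receptor ~1 km downwind, neutral-stability-like coefficients.
print(f"chi/Q = {chi_over_q(sigma_y=68.0, sigma_z=33.0, u=4.0):.2e} s/m^3")
```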

  7. Search for Earth-like planets includes LANL star analysis

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    their interiors. Consortium team members at Los Alamos include Joyce Ann Guzik, Paul Bradley, Arthur N. Cox, and Kim Simmons. They will help interpret the stellar oscillation...

  8. Semiconductor Device Analysis on Personal Computers

    Energy Science and Technology Software Center (OSTI)

    1993-02-08

    PC-1D models the internal operation of bipolar semiconductor devices by solving for the concentrations and quasi-one-dimensional flow of electrons and holes resulting from either electrical or optical excitation. PC-1D uses the same detailed physical models incorporated in mainframe computer programs, yet runs efficiently on personal computers. PC-1D was originally developed with DOE funding to analyze solar cells. That continues to be its primary mode of usage, with registered copies in regular use at more than 100 locations worldwide. The program has been successfully applied to the analysis of silicon, gallium-arsenide, and indium-phosphide solar cells. The program is also suitable for modeling bipolar transistors and diodes, including heterojunction devices. Its easy-to-use graphical interface makes it useful as a teaching tool as well.

  9. Impact analysis on a massively parallel computer

    SciTech Connect (OSTI)

    Zacharia, T.; Aramayo, G.A.

    1994-06-01

    Advanced mathematical techniques and computer simulation play a major role in evaluating and enhancing the design of beverage cans, industrial, and transportation containers for improved performance. Numerical models are used to evaluate the impact requirements of containers used by the Department of Energy (DOE) for transporting radioactive materials. Many of these models are highly compute-intensive. An analysis may require several hours of computational time on current supercomputers despite the simplicity of the models being studied. As computer simulations and materials databases grow in complexity, massively parallel computers have become important tools. Massively parallel computational research at the Oak Ridge National Laboratory (ORNL) and its application to the impact analysis of shipping containers is briefly described in this paper.

  10. Computer aided cogeneration feasibility analysis

    SciTech Connect (OSTI)

    Anaya, D.A.; Caltenco, E.J.L.; Robles, L.F.

    1996-12-31

    A successful cogeneration system design depends on several factors, and the optimal configuration can be found using steam and power simulation software. The key characteristics of one such software package are described below, and its application to a process plant cogeneration feasibility analysis is shown in this paper. Finally, a case study is illustrated. 4 refs., 2 figs.

  11. Application of the Computer Program SASSI for Seismic SSI Analysis...

    Office of Environmental Management (EM)

    the Computer Program SASSI for Seismic SSI Analysis of WTP Facilities ...

  12. Final Report Computational Analysis of Dynamical Systems

    SciTech Connect (OSTI)

    Guckenheimer, John

    2012-05-08

    This is the final report for DOE Grant DE-FG02-93ER25164, initiated in 1993. This grant supported research of John Guckenheimer on computational analysis of dynamical systems. During that period, seventeen individuals received PhD degrees under the supervision of Guckenheimer and over fifty publications related to the grant were produced. This document contains copies of these publications.

  13. Computation of Domain-Averaged Irradiance with a Simple Two-Stream Radiative Transfer Model Including Vertical Cloud Property Correlations

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    S. Kato, Center for Atmospheric Sciences, Hampton University, Hampton, Virginia. Introduction: Recent development of remote sensing instruments by the Atmospheric Radiation Measurement (ARM) Program provides information on the spatial and temporal variability of cloud structures. However, it is not clear what cloud properties are required to express complicated cloud

  14. Computer analysis of HIV epitope sequences

    SciTech Connect (OSTI)

    Gupta, G.; Myers, G.

    1990-01-01

    Phylogenetic tree analysis provides us with important general information regarding the extent and rate of HIV variation. Currently we are attempting to extend computer analysis and modeling to the V3 loop of the type 2 virus and its simian homologues, especially in light of the prominent role the latter will play in animal model studies. Moreover, it might be possible to attack the slightly similar V4 loop by this approach. However, the strategy relies very heavily upon 'natural' information and constraints, thus there exist severe limitations upon the general applicability, in addition to uncertainties with regard to long-range residue interactions. 5 refs., 3 figs.

  15. PArallel Reacting Multiphase FLOw Computational Fluid Dynamic Analysis

    Energy Science and Technology Software Center (OSTI)

    2002-06-01

    PARMFLO is a parallel multiphase reacting flow computational fluid dynamics (CFD) code. It can perform steady or unsteady simulations in three space dimensions. It is intended for use in engineering CFD analysis of industrial flow system components. Its parallel processing capabilities allow it to be applied to problems that use at least an order of magnitude more computational cells than the number that can be used on a typical single-processor workstation (about 10^6 cells in parallel processing mode versus about 10^5 cells in serial processing mode). Alternately, by spreading the work of a CFD problem that could be run on a single workstation over a group of computers on a network, it can bring the runtime down by an order of magnitude or more (typically from many days to less than one day). The software was implemented using the industry-standard Message-Passing Interface (MPI) and domain decomposition in one spatial direction. The phases of a flow problem may include an ideal gas mixture with an arbitrary number of chemical species, and dispersed droplet and particle phases. Regions of porous media may also be included within the domain. The porous media may be packed beds, foams, or monolith catalyst supports. With these features, the code is especially suited to analysis of mixing of reactants in the inlet chamber of catalytic reactors coupled to computation of product yields that result from the flow of the mixture through the catalyst-coated support structure.
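The parallelization pattern the abstract names, 1-D domain decomposition with MPI message passing, is sketched below in Python with mpi4py. PARMFLO's own interfaces are not shown here; every name in the sketch is illustrative of the pattern only.

```python
# Minimal sketch of 1-D domain decomposition with MPI halo exchange, the
# pattern the PARMFLO abstract describes. Illustrative only; not PARMFLO code.
# Run with e.g.: mpiexec -n 4 python decomp_sketch.py
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

n_local = 100                       # cells owned by this rank (assumed)
u = np.zeros(n_local + 2)           # one ghost cell on each side
u[1:-1] = rank                      # fill interior with something visible

left = rank - 1 if rank > 0 else MPI.PROC_NULL
right = rank + 1 if rank < size - 1 else MPI.PROC_NULL

# Exchange halo (ghost) cells with neighbors in the decomposed direction.
comm.Sendrecv(sendbuf=u[1:2], dest=left, recvbuf=u[-1:], source=right)
comm.Sendrecv(sendbuf=u[-2:-1], dest=right, recvbuf=u[0:1], source=left)

# A simple smoothing update now uses valid neighbor data at the seams.
u[1:-1] = 0.5 * u[1:-1] + 0.25 * (u[:-2] + u[2:])
```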

  16. A Research Roadmap for Computation-Based Human Reliability Analysis

    SciTech Connect (OSTI)

    Boring, Ronald; Mandelli, Diego; Joe, Jeffrey; Smith, Curtis; Groth, Katrina

    2015-08-01

    The United States (U.S.) Department of Energy (DOE) is sponsoring research through the Light Water Reactor Sustainability (LWRS) program to extend the life of the currently operating fleet of commercial nuclear power plants. The Risk Informed Safety Margin Characterization (RISMC) research pathway within LWRS looks at ways to maintain and improve the safety margins of these plants. The RISMC pathway includes significant developments in the area of thermalhydraulics code modeling and the development of tools to facilitate dynamic probabilistic risk assessment (PRA). PRA is primarily concerned with the risk of hardware systems at the plant; yet, hardware reliability is often secondary in overall risk significance to human errors that can trigger or compound undesirable events at the plant. This report highlights ongoing efforts to develop a computation-based approach to human reliability analysis (HRA). This computation-based approach differs from existing static and dynamic HRA approaches in that it: (i) interfaces with a dynamic computation engine that includes a full scope plant model, and (ii) interfaces with a PRA software toolset. The computation-based HRA approach presented in this report is called the Human Unimodels for Nuclear Technology to Enhance Reliability (HUNTER) and incorporates in a hybrid fashion elements of existing HRA methods to interface with new computational tools developed under the RISMC pathway. The goal of this research effort is to model human performance more accurately than existing approaches, thereby minimizing modeling uncertainty found in current plant risk models.

  17. Practical Use of Computationally Frugal Model Analysis Methods

    SciTech Connect (OSTI)

    Hill, Mary C.; Kavetski, Dmitri; Clark, Martyn; Ye, Ming; Arabi, Mazdak; Lu, Dan; Foglia, Laura; Mehl, Steffen

    2015-03-21

    Computationally frugal methods of model analysis can provide substantial benefits when developing models of groundwater and other environmental systems. Model analysis includes ways to evaluate model adequacy and to perform sensitivity and uncertainty analysis. Frugal methods typically require 10s of parallelizable model runs; their convenience allows for other uses of the computational effort. We suggest that model analysis be posed as a set of questions used to organize methods that range from frugal to expensive (requiring 10,000 model runs or more). This encourages focus on method utility, even when methods have starkly different theoretical backgrounds. We note that many frugal methods are more useful when unrealistic process-model nonlinearities are reduced. Inexpensive diagnostics are identified for determining when frugal methods are advantageous. Examples from the literature are used to demonstrate local methods and the diagnostics. We suggest that the greater use of computationally frugal model analysis methods would allow questions such as those posed in this work to be addressed more routinely, allowing the environmental sciences community to obtain greater scientific insight from the many ongoing and future modeling efforts.
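The "10s of parallelizable model runs" that frugal methods require can be pictured with a one-at-a-time finite-difference local sensitivity analysis, which needs roughly n_params + 1 independent runs. The sketch below assumes a toy stand-in model; `model` is hypothetical and would be replaced by any wrapped simulation.

```python
# Sketch of a frugal local sensitivity analysis of the kind the authors
# advocate: one-at-a-time finite differences need only ~(n_params + 1)
# model runs, all independent and hence parallelizable. The model is an
# invented stand-in for any simulation wrapped as f(params).
import numpy as np

def model(p):                        # placeholder "environmental model"
    return p[0]**2 + 3.0 * p[1] + np.sin(p[2])

def local_sensitivities(f, p0, rel_step=1e-6):
    p0 = np.asarray(p0, dtype=float)
    base = f(p0)
    grads = np.empty_like(p0)
    for i in range(p0.size):         # each run is independent -> parallel
        dp = rel_step * max(abs(p0[i]), 1.0)
        p = p0.copy()
        p[i] += dp
        grads[i] = (f(p) - base) / dp
    return base, grads

base, s = local_sensitivities(model, [2.0, 1.0, 0.5])
print("output:", base, "sensitivities:", s)
```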

  18. Practical Use of Computationally Frugal Model Analysis Methods

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Hill, Mary C.; Kavetski, Dmitri; Clark, Martyn; Ye, Ming; Arabi, Mazdak; Lu, Dan; Foglia, Laura; Mehl, Steffen

    2015-03-21

    Computationally frugal methods of model analysis can provide substantial benefits when developing models of groundwater and other environmental systems. Model analysis includes ways to evaluate model adequacy and to perform sensitivity and uncertainty analysis. Frugal methods typically require 10s of parallelizable model runs; their convenience allows for other uses of the computational effort. We suggest that model analysis be posed as a set of questions used to organize methods that range from frugal to expensive (requiring 10,000 model runs or more). This encourages focus on method utility, even when methods have starkly different theoretical backgrounds. We note that many frugal methods are more useful when unrealistic process-model nonlinearities are reduced. Inexpensive diagnostics are identified for determining when frugal methods are advantageous. Examples from the literature are used to demonstrate local methods and the diagnostics. We suggest that the greater use of computationally frugal model analysis methods would allow questions such as those posed in this work to be addressed more routinely, allowing the environmental sciences community to obtain greater scientific insight from the many ongoing and future modeling efforts.

  19. Distributed Design and Analysis of Computer Experiments

    Energy Science and Technology Software Center (OSTI)

    2002-11-11

    DDACE is a C++ object-oriented software library for the design and analysis of computer experiments. DDACE can be used to generate samples from a variety of sampling techniques. These samples may be used as input to an application code. DDACE also contains statistical tools such as response surface models and correlation coefficients to analyze input/output relationships between variables in an application code. DDACE can generate input values for uncertain variables within a user's application. For example, a user might like to vary a temperature variable as well as some material variables in a series of simulations. Through the series of simulations the user might be looking for optimal settings of parameters based on some user criteria. Or the user may be interested in the sensitivity to input variability shown by an output variable. In either case, the user may provide information about the suspected ranges and distributions of a set of input variables, along with a sampling scheme, and DDACE will generate input points based on these specifications. The input values generated by DDACE and the one or more outputs computed through the user's application code can be analyzed with a variety of statistical methods. This can lead to a wealth of information about the relationships between the variables in the problem. While statistical and mathematical packages may be employed to carry out the analysis on the input/output relationships, DDACE also contains some tools for analyzing the simulation data. DDACE incorporates a software package called MARS (Multivariate Adaptive Regression Splines), developed by Jerome Friedman. MARS is used for generating a spline surface fit of the data. With MARS, a model simplification may be calculated using the input and corresponding output values for the user's application problem. The MARS grid data may be used for generating 3-dimensional response surface plots of the simulation data. DDACE also contains an implementation...
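One of the sampling techniques a library like this typically provides is Latin hypercube sampling. The sketch below implements the basic idea in plain NumPy; it illustrates the technique only and is not DDACE's C++ API.

```python
# Illustration of Latin hypercube sampling, one of the design-of-experiments
# techniques a library like DDACE provides. Plain NumPy; not DDACE's API.
import numpy as np

def latin_hypercube(n_samples, bounds, rng=None):
    """Draw an (n_samples x n_vars) LHS design within per-variable bounds."""
    rng = np.random.default_rng(rng)
    bounds = np.asarray(bounds, dtype=float)      # shape (n_vars, 2)
    n_vars = bounds.shape[0]
    # One stratum per sample in each dimension, shuffled independently,
    # plus uniform jitter inside each stratum.
    strata = rng.permuted(np.tile(np.arange(n_samples), (n_vars, 1)), axis=1).T
    u = (strata + rng.random((n_samples, n_vars))) / n_samples
    return bounds[:, 0] + u * (bounds[:, 1] - bounds[:, 0])

# e.g. vary a temperature and two material variables, as in the abstract
design = latin_hypercube(8, [(300.0, 400.0), (0.1, 0.5), (1e9, 2e9)], rng=0)
print(design)
```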

  20. Computational Aerodynamic Analysis of Offshore Upwind and Downwind Turbines

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Zhao, Qiuying; Sheng, Chunhua; Afjeh, Abdollah

    2014-01-01

    Aerodynamic interactions of the model NREL 5 MW offshore horizontal axis wind turbines (HAWT) are investigated using a high-fidelity computational fluid dynamics (CFD) analysis. Four wind turbine configurations are considered: three-bladed upwind and downwind, and two-bladed upwind and downwind configurations, which operate at two different rotor speeds of 12.1 and 16 RPM. In the present study, both steady and unsteady aerodynamic loads, such as the rotor torque, blade hub bending moment, and tower base bending moment, are evaluated in detail to provide an overall assessment of the different wind turbine configurations. Aerodynamic interactions between the rotor and tower are analyzed, including the rotor wake development downstream. The computational analysis provides insight into the aerodynamic performance of the upwind and downwind, two- and three-bladed horizontal axis wind turbines.

  1. Transportation Research and Analysis Computing Center Fact Sheet | Argonne

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    National Laboratory. The Transportation Research and Analysis Computing Center (TRACC) is the intersection of state-of-the-art computing and critical science and engineering research that is improving how the nation plans, builds, and secures a transportation system for the 21st Century.

  2. The Design and Analysis of Computer Experiments | Open Energy...

    Open Energy Info (EERE)

    Book: The Design and Analysis of Computer Experiments. Authors: Thomas J. Santner, Brian J. Williams, and William I. Notz. Published: Springer-Verlag, 2003. DOI: Not...

  3. Comparative genome analysis of Pseudomonas genomes including Populus-associated isolates

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Jun, Se Ran; Wassenaar, Trudy; Nookaew, Intawat; Hauser, Loren John; Wanchai, Visanu; Land, Miriam L.; Timm, Collin M.; Lu, Tse-Yuan S.; Schadt, Christopher Warren; Doktycz, Mitchel John; et al

    2016-01-01

    The Pseudomonas genus contains a metabolically versatile group of organisms that are known to occupy numerous ecological niches, including the rhizosphere and endosphere of many plants, influencing phylogenetic diversity and heterogeneity. In this study, comparative genome analysis was performed on over one thousand Pseudomonas genomes, including 21 Pseudomonas strains isolated from the roots of native Populus deltoides. Based on average amino acid identity, genomic clusters were identified within the Pseudomonas genus, which showed agreement with clades by NCBI and cliques by IMG. The P. fluorescens group was organized into 20 distinct genomic clusters, representing enormous diversity and heterogeneity. The species P. aeruginosa showed clear distinction in its genomic relatedness compared to other Pseudomonas species groups based on the pan and core genome analysis. Nineteen of the 21 Populus-associated isolates formed three distinct subgroups within the P. fluorescens major group, supported by pathway profile analysis, while two isolates were more closely related to P. chlororaphis and P. putida. Genes specific to the Populus-associated subgroups were identified: genes specific to subgroup 1 include several sensory systems such as proteins which act in two-component signal transduction, a TonB-dependent receptor, and a phosphorelay sensor; genes specific to subgroup 2 contain unique hypothetical genes; and genes specific to subgroup 3 organisms have a different hydrolase activity. IMPORTANCE: The comparative genome analyses of the genus Pseudomonas that included Populus-associated isolates resulted in novel insights into the high diversity of Pseudomonas. Consistent and robust genomic clusters with phylogenetic homogeneity were identified, which resolved species-clades that are not clearly defined by 16S rRNA gene sequence analysis alone. The genomic clusters may be reflective of distinct ecological niches to which the organisms have adapted, but

  4. Comparative genome analysis of Pseudomonas genomes including Populus-associated isolates

    SciTech Connect (OSTI)

    Jun, Se Ran; Wassenaar, Trudy; Nookaew, Intawat; Hauser, Loren John; Wanchai, Visanu; Land, Miriam L.; Timm, Collin M.; Lu, Tse-Yuan S.; Schadt, Christopher Warren; Doktycz, Mitchel John; Pelletier, Dale A; Ussery, David W

    2016-01-01

    The Pseudomonas genus contains a metabolically versatile group of organisms that are known to occupy numerous ecological niches, including the rhizosphere and endosphere of many plants, influencing phylogenetic diversity and heterogeneity. In this study, comparative genome analysis was performed on over one thousand Pseudomonas genomes, including 21 Pseudomonas strains isolated from the roots of native Populus deltoides. Based on average amino acid identity, genomic clusters were identified within the Pseudomonas genus, which showed agreement with clades by NCBI and cliques by IMG. The P. fluorescens group was organized into 20 distinct genomic clusters, representing enormous diversity and heterogeneity. The species P. aeruginosa showed clear distinction in its genomic relatedness compared to other Pseudomonas species groups based on the pan and core genome analysis. Nineteen of the 21 Populus-associated isolates formed three distinct subgroups within the P. fluorescens major group, supported by pathway profile analysis, while two isolates were more closely related to P. chlororaphis and P. putida. Genes specific to the Populus-associated subgroups were identified: genes specific to subgroup 1 include several sensory systems such as proteins which act in two-component signal transduction, a TonB-dependent receptor, and a phosphorelay sensor; genes specific to subgroup 2 contain unique hypothetical genes; and genes specific to subgroup 3 organisms have a different hydrolase activity. IMPORTANCE: The comparative genome analyses of the genus Pseudomonas that included Populus-associated isolates resulted in novel insights into the high diversity of Pseudomonas. Consistent and robust genomic clusters with phylogenetic homogeneity were identified, which resolved species-clades that are not clearly defined by 16S rRNA gene sequence analysis alone. The genomic clusters may be reflective of distinct ecological niches to which the organisms have adapted, but this

  5. A joint analysis of Planck and BICEP2 B modes including dust polarization uncertainty

    SciTech Connect (OSTI)

    Mortonson, Michael J.; Seljak, Uroš (E-mail: useljak@berkeley.edu)

    2014-10-01

    We analyze BICEP2 and Planck data using a model that includes CMB lensing, gravity waves, and polarized dust. Recently published Planck dust polarization maps have highlighted the difficulty of estimating the amount of dust polarization in low intensity regions, suggesting that the polarization fractions have considerable uncertainties and may be significantly higher than previous predictions. In this paper, we start by assuming nothing about the dust polarization except for the power spectrum shape, which we take to be C_l^{BB,dust} ∝ l^{-2.42}. The resulting joint BICEP2+Planck analysis favors solutions without gravity waves, and the upper limit on the tensor-to-scalar ratio is r < 0.11, a slight improvement relative to the Planck analysis alone, which gives r < 0.13 (95% c.l.). The estimated amplitude of the dust polarization power spectrum agrees with expectations for this field based on both HI column density and Planck polarization measurements at 353 GHz in the BICEP2 field. Including the latter constraint on the dust spectrum amplitude in our analysis improves the limit further to r < 0.09, placing strong constraints on theories of inflation (e.g., models with r > 0.14 are excluded with 99.5% confidence). We address the cross-correlation analysis of BICEP2 at 150 GHz with BICEP1 at 100 GHz as a test of foreground contamination. We find that the null hypothesis of dust and lensing with r = 0 gives Δχ² < 2 relative to the hypothesis of no dust, so the frequency analysis does not strongly favor either model over the other. We also discuss how more accurate dust polarization maps may improve our constraints. If the dust polarization is measured perfectly, the limit can reach r < 0.05 (or the corresponding detection significance if the observed dust signal plus the expected lensing signal is below the BICEP2 observations), but this degrades quickly to almost no improvement if the dust calibration error is 20% or larger or if the dust maps are not
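The single dust assumption the analysis starts from, a power-law BB spectrum, is simple to write down. The sketch below evaluates C_l^{BB,dust} ∝ l^{-2.42}; the amplitude, pivot multipole, and multipole range are assumed for illustration and are not taken from the paper.

```python
# Evaluate the power-law dust BB spectrum shape assumed in the abstract,
# C_l ∝ l^(-2.42). Amplitude, pivot, and multipole range are illustrative.
import numpy as np

ell = np.arange(20, 341)                  # multipole range, assumed
A_dust = 1.0e-2                           # amplitude in uK^2, assumed
C_dust = A_dust * (ell / 80.0) ** -2.42   # pivot l = 80 is an assumption

# Convert to the conventional D_l = l(l+1)C_l/(2*pi) used for plotting/fits.
D_dust = ell * (ell + 1) * C_dust / (2 * np.pi)
print(D_dust[:5])
```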

  6. Wind energy conversion system analysis model (WECSAM) computer program documentation

    SciTech Connect (OSTI)

    Downey, W T; Hendrick, P L

    1982-07-01

    Described is a computer-based wind energy conversion system analysis model (WECSAM) developed to predict the technical and economic performance of wind energy conversion systems (WECS). The model is written in CDC FORTRAN V. The version described accesses a data base containing wind resource data, application loads, WECS performance characteristics, utility rates, state taxes, and state subsidies for a six state region (Minnesota, Michigan, Wisconsin, Illinois, Ohio, and Indiana). The model is designed for analysis at the county level. The computer model includes a technical performance module and an economic evaluation module. The modules can be run separately or together. The model can be run for any single user-selected county within the region or looped automatically through all counties within the region. In addition, the model has a restart capability that allows the user to modify any data-base value written to a scratch file prior to the technical or economic evaluation. Thus, any user-supplied data for WECS performance, application load, utility rates, or wind resource may be entered into the scratch file to override the default data-base value. After the model and the inputs required from the user and derived from the data base are described, the model output and the various output options that can be exercised by the user are detailed. The general operation is set forth and suggestions are made for efficient modes of operation. Sample listings of various input, output, and data-base files are appended. (LEW)

  7. Scalable Computer Performance and Analysis (Hierarchical INTegration)

    Energy Science and Technology Software Center (OSTI)

    1999-09-02

    HINT is a program to measure the performance of a wide variety of scalable computer systems. It is capable of demonstrating the benefits of using more memory or processing power, and of improving communications within the system. HINT can be used for measurement of an existing system, while the associated program ANALYTIC HINT can be used to explain the measurements or as a design tool for proposed systems.

  8. Application of the Computer Program SASSI for Seismic SSI Analysis...

    Office of Environmental Management (EM)

    of the Computer Program SASSI for Seismic SSI Analysis of WTP Facilities Farhang Ostadan (BNI) & Raman Venkata (DOE-WTP-WED) Presented by Lisa Anderson (BNI) US DOE NPH Workshop...

  9. Surveillance Analysis Computer System (SACS) software requirements specification (SRS)

    SciTech Connect (OSTI)

    Glasscock, J.A.; Flanagan, M.J.

    1995-09-01

    This is the primary document establishing requirements for the Surveillance Analysis Computer System (SACS) Database, an Impact Level 3Q system. The purpose is to provide the customer and the performing organization with the requirements for the SACS Project.

  10. Process for computing geometric perturbations for probabilistic analysis

    DOE Patents [OSTI]

    Fitch, Simeon H. K.; Riha, David S.; Thacker, Ben H.

    2012-04-10

    A method for computing geometric perturbations for probabilistic analysis. The probabilistic analysis is based on finite element modeling, in which uncertainties in the modeled system are represented by changes in the nominal geometry of the model, referred to as "perturbations". These changes are accomplished using displacement vectors, which are computed for each node of a region of interest and are based on mean-value coordinate calculations.
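Schematically, the perturbation step amounts to adding a scaled displacement vector to each node in a region of interest. The sketch below assumes a simple radial weight function standing in for the patent's mean-value coordinate calculation; all names and numbers are illustrative.

```python
# Schematic version of the perturbation step the patent abstract describes:
# each node in a region of interest gets a displacement vector, scaled by a
# random amplitude, added to its nominal coordinates. The radial weight is a
# placeholder for the patent's mean-value coordinate calculation.
import numpy as np

rng = np.random.default_rng(7)
nodes = rng.random((200, 3)) * 10.0        # nominal mesh node coordinates

center = np.array([5.0, 5.0, 5.0])         # region of interest (assumed)
r = np.linalg.norm(nodes - center, axis=1)
weight = np.clip(1.0 - r / 4.0, 0.0, 1.0)  # 1 at center, 0 outside radius

direction = np.array([0.0, 0.0, 1.0])      # unit perturbation direction
amplitude = rng.normal(0.0, 0.05)          # one probabilistic realization

perturbed = nodes + (amplitude * weight)[:, None] * direction
print("max nodal shift:", np.abs(perturbed - nodes).max())
```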

  11. Multiscale analysis of nonlinear systems using computational homology

    SciTech Connect (OSTI)

    Konstantin Mischaikow; Michael Schatz; William Kalies; Thomas Wanner

    2010-05-24

    This is a collaborative project between the principal investigators. However, as is to be expected, different PIs have greater focus on different aspects of the project. This report lists these major directions of research which were pursued during the funding period: (1) Computational Homology in Fluids - For the computational homology effort in thermal convection, the focus of the work during the first two years of the funding period included: (1) A clear demonstration that homology can sensitively detect the presence or absence of an important flow symmetry, (2) An investigation of homology as a probe for flow dynamics, and (3) The construction of a new convection apparatus for probing the effects of large-aspect-ratio. (2) Computational Homology in Cardiac Dynamics - We have initiated an effort to test the use of homology in characterizing data from both laboratory experiments and numerical simulations of arrhythmia in the heart. Recently, the use of high speed, high sensitivity digital imaging in conjunction with voltage sensitive fluorescent dyes has enabled researchers to visualize electrical activity on the surface of cardiac tissue, both in vitro and in vivo. (3) Magnetohydrodynamics - A new research direction is to use computational homology to analyze results of large scale simulations of 2D turbulence in the presence of magnetic fields. Such simulations are relevant to the dynamics of black hole accretion disks. The complex flow patterns from simulations exhibit strong qualitative changes as a function of magnetic field strength. Efforts to characterize the pattern changes using Fourier methods and wavelet analysis have been unsuccessful. (4) Granular Flow - two experts in the area of granular media are studying 2D model experiments of earthquake dynamics where the stress fields can be measured; these stress fields from complex patterns of 'force chains' that may be amenable to analysis using computational homology. (5) Microstructure Characterization

  12. Multiscale analysis of nonlinear systems using computational homology

    SciTech Connect (OSTI)

    Konstantin Mischaikow, Rutgers University / Georgia Institute of Technology; Michael Schatz, Georgia Institute of Technology; William Kalies, Florida Atlantic University; Thomas Wanner, George Mason University

    2010-05-19

    This is a collaborative project between the principal investigators. However, as is to be expected, different PIs have greater focus on different aspects of the project. This report lists these major directions of research which were pursued during the funding period: (1) Computational Homology in Fluids - For the computational homology effort in thermal convection, the focus of the work during the first two years of the funding period included: (1) A clear demonstration that homology can sensitively detect the presence or absence of an important flow symmetry, (2) An investigation of homology as a probe for flow dynamics, and (3) The construction of a new convection apparatus for probing the effects of large-aspect-ratio. (2) Computational Homology in Cardiac Dynamics - We have initiated an effort to test the use of homology in characterizing data from both laboratory experiments and numerical simulations of arrhythmia in the heart. Recently, the use of high speed, high sensitivity digital imaging in conjunction with voltage sensitive fluorescent dyes has enabled researchers to visualize electrical activity on the surface of cardiac tissue, both in vitro and in vivo. (3) Magnetohydrodynamics - A new research direction is to use computational homology to analyze results of large scale simulations of 2D turbulence in the presence of magnetic fields. Such simulations are relevant to the dynamics of black hole accretion disks. The complex flow patterns from simulations exhibit strong qualitative changes as a function of magnetic field strength. Efforts to characterize the pattern changes using Fourier methods and wavelet analysis have been unsuccessful. (4) Granular Flow - two experts in the area of granular media are studying 2D model experiments of earthquake dynamics where the stress fields can be measured; these stress fields from complex patterns of 'force chains' that may be amenable to analysis using computational homology. (5) Microstructure Characterization

  13. Analysis of advanced european nuclear fuel cycle scenarios including transmutation and economical estimates

    SciTech Connect (OSTI)

    Merino Rodriguez, I.; Alvarez-Velarde, F.; Martin-Fuertes, F.

    2013-07-01

    In this work the transition from the existing Light Water Reactors (LWR) to advanced reactors is analyzed, including Generation III+ reactors in a European framework. Four European fuel cycle scenarios involving transmutation options have been addressed. The first scenario (i.e., reference) is the current fleet using LWR technology and an open fuel cycle. The second scenario assumes a full replacement of the initial fleet with Fast Reactors (FR) burning U-Pu MOX fuel. The third scenario is a modification of the second one, introducing Minor Actinide (MA) transmutation in a fraction of the FR fleet. Finally, in the fourth scenario, the LWR fleet is replaced using FR with MOX fuel as well as Accelerator Driven Systems (ADS) for MA transmutation. All scenarios consider an intermediate period of GEN-III+ LWR deployment and extend over a period of 200 years, seeking equilibrium mass flows. The simulations were made using the TR-EVOL code, a tool for fuel cycle studies developed by CIEMAT. The results reveal that all scenarios are feasible according to nuclear resources demand (U and Pu). Concerning the cases without transmutation, the second scenario reduces considerably the Pu inventory in repositories compared to the reference scenario, although the MA inventory increases. The transmutation scenarios show that elimination of the LWR MA legacy requires, on one hand, a maximum 33% fraction (i.e., a peak value of 26 FR units) of the FR fleet dedicated to transmutation (MA in MOX fuel, homogeneous transmutation). On the other hand, a maximum number of ADS plants accounting for 5% of electricity generation is predicted in the fourth scenario (i.e., 35 ADS units). Regarding the economic analysis, the estimates show an increase of LCOE (levelized cost of electricity), averaged over the whole period, with respect to the reference scenario of 21% and 29% for the FR and FR-with-transmutation scenarios respectively, and 34% for the fourth scenario. (authors)

  14. Computer-aided visualization and analysis system for sequence evaluation

    DOE Patents [OSTI]

    Chee, Mark S.

    1998-08-18

    A computer system for analyzing nucleic acid sequences is provided. The computer system is used to perform multiple methods for determining unknown bases by analyzing the fluorescence intensities of hybridized nucleic acid probes. The results of individual experiments are improved by processing nucleic acid sequences together. Comparative analysis of multiple experiments is also provided by displaying reference sequences in one area and sample sequences in another area on a display device.
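The core determination step, picking the base whose hybridization probe fluoresces most strongly at each position, can be reduced to an argmax over four intensity channels. The sketch below is an illustrative simplification, not the patented system; the intensity values are invented.

```python
# Illustrative reduction of the base-calling idea in the patent abstract:
# per position, call the base whose probe shows the highest fluorescence
# intensity. Real array designs are far more involved; data are invented.
import numpy as np

bases = np.array(list("ACGT"))
# rows = sequence positions, columns = intensity of A/C/G/T probes
intensities = np.array([
    [912.0,  80.0,  60.0,  75.0],
    [ 55.0, 840.0,  70.0,  66.0],
    [ 61.0,  59.0,  47.0, 903.0],
])
calls = bases[np.argmax(intensities, axis=1)]
print("".join(calls))        # -> "ACT"
```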

  15. Computer-aided visualization and analysis system for sequence evaluation

    DOE Patents [OSTI]

    Chee, Mark S.

    1999-10-26

    A computer system (1) for analyzing nucleic acid sequences is provided. The computer system is used to perform multiple methods for determining unknown bases by analyzing the fluorescence intensities of hybridized nucleic acid probes. The results of individual experiments may be improved by processing nucleic acid sequences together. Comparative analysis of multiple experiments is also provided by displaying reference sequences in one area (814) and sample sequences in another area (816) on a display device (3).

  16. Computer-aided visualization and analysis system for sequence evaluation

    DOE Patents [OSTI]

    Chee, Mark S.

    2003-08-19

    A computer system for analyzing nucleic acid sequences is provided. The computer system is used to perform multiple methods for determining unknown bases by analyzing the fluorescence intensities of hybridized nucleic acid probes. The results of individual experiments may be improved by processing nucleic acid sequences together. Comparative analysis of multiple experiments is also provided by displaying reference sequences in one area and sample sequences in another area on a display device.

  17. Computer-aided visualization and analysis system for sequence evaluation

    DOE Patents [OSTI]

    Chee, Mark S.

    2001-06-05

    A computer system (1) for analyzing nucleic acid sequences is provided. The computer system is used to perform multiple methods for determining unknown bases by analyzing the fluorescence intensities of hybridized nucleic acid probes. The results of individual experiments may be improved by processing nucleic acid sequences together. Comparative analysis of multiple experiments is also provided by displaying reference sequences in one area (814) and sample sequences in another area (816) on a display device (3).

  18. Computer-aided visualization and analysis system for sequence evaluation

    DOE Patents [OSTI]

    Chee, Mark S.; Wang, Chunwei; Jevons, Luis C.; Bernhart, Derek H.; Lipshutz, Robert J.

    2004-05-11

    A computer system for analyzing nucleic acid sequences is provided. The computer system is used to perform multiple methods for determining unknown bases by analyzing the fluorescence intensities of hybridized nucleic acid probes. The results of individual experiments are improved by processing nucleic acid sequences together. Comparative analysis of multiple experiments is also provided by displaying reference sequences in one area and sample sequences in another area on a display device.

  19. Computer-aided visualization and analysis system for sequence evaluation

    DOE Patents [OSTI]

    Chee, M.S.

    1998-08-18

    A computer system for analyzing nucleic acid sequences is provided. The computer system is used to perform multiple methods for determining unknown bases by analyzing the fluorescence intensities of hybridized nucleic acid probes. The results of individual experiments are improved by processing nucleic acid sequences together. Comparative analysis of multiple experiments is also provided by displaying reference sequences in one area and sample sequences in another area on a display device. 27 figs.

  20. First Experiences with LHC Grid Computing and Distributed Analysis

    SciTech Connect (OSTI)

    Fisk, Ian

    2010-12-01

    This presentation described the experiences of the LHC experiments using grid computing, with a focus on distributed analysis. After many years of development, preparation, exercises, and validation, the LHC (Large Hadron Collider) experiments are in operation. The computing infrastructure has been heavily utilized in the first 6 months of data collection. The general experience of exploiting the grid infrastructure for organized processing and preparation is described, as well as the successes employing the infrastructure for distributed analysis. At the end, the expected evolution and future plans are outlined.

  1. Large-scale computations in analysis of structures

    SciTech Connect (OSTI)

    McCallen, D.B.; Goudreau, G.L.

    1993-09-01

    Computer hardware and numerical analysis algorithms have progressed to a point where many engineering organizations and universities can perform nonlinear analyses on a routine basis. Though much remains to be done in terms of advancement of nonlinear analysis techniques and characterization of nonlinear material constitutive behavior, the technology exists today to perform useful nonlinear analysis for many structural systems. In the current paper, a survey of nonlinear analysis technologies developed and employed for many years on programmatic defense work at the Lawrence Livermore National Laboratory is provided, and ongoing nonlinear numerical simulation projects relevant to the civil engineering field are described.

  2. Connecting Performance Analysis and Visualization to Advance Extreme Scale Computing

    SciTech Connect (OSTI)

    Bremer, Peer-Timo; Mohr, Bernd; Schulz, Martin; Pascucci, Valerio; Gamblin, Todd; Brunst, Holger

    2015-07-29

    The characterization, modeling, analysis, and tuning of software performance has been a central topic in High Performance Computing (HPC) since its early beginnings. The overall goal is to make HPC software run faster on particular hardware, either through better scheduling, on-node resource utilization, or more efficient distributed communication.

  3. RDI's Wisdom Way Solar Village Final Report: Includes Utility Bill Analysis of Occupied Homes

    SciTech Connect (OSTI)

    Robb Aldrich, Steven Winter Associates

    2011-07-01

    In 2010, Rural Development, Inc. (RDI) completed construction of Wisdom Way Solar Village (WWSV), a community of ten duplexes (20 homes) in Greenfield, MA. RDI was committed to very low energy use from the beginning of the design process throughout construction. Key features include: 1. Careful site plan so that all homes have solar access (for active and passive); 2. Cellulose insulation providing R-40 walls, R-50 ceiling, and R-40 floors; 3. Triple-pane windows; 4. Airtight construction (~0.1 CFM50/ft2 enclosure area); 5. Solar water heating systems with tankless, gas, auxiliary heaters; 6. PV systems (2.8 or 3.4 kW STC); 7. 2-4 bedrooms, 1,100-1,700 ft2. The design heating loads in the homes were so small that each home is heated with a single, sealed-combustion, natural gas room heater. The cost savings from the simple HVAC systems made possible the tremendous investments in the homes' envelopes. The Consortium for Advanced Residential Buildings (CARB) monitored temperatures and comfort in several homes during the winter of 2009-2010. In the spring of 2011, CARB obtained utility bill information from 13 occupied homes. Because of efficient lights, appliances, and conscientious home occupants, the energy generated by the solar electric systems exceeded the electric energy used in most homes. Most homes, in fact, had a net credit from the electric utility over the course of a year. On the natural gas side, total gas costs averaged $377 per year (for heating, water heating, cooking, and clothes drying). Total energy costs were even less: $337 per year, including all utility fees. The highest annual energy bill for any home evaluated was $458; the lowest was $171.

  4. Probabilistic Computer Analysis for Rapid Evaluation of Structures.

    Energy Science and Technology Software Center (OSTI)

    2007-03-29

    P-CARES 2.0.0, Probabilistic Computer Analysis for Rapid Evaluation of Structures, was developed for NRC staff use to determine the validity and accuracy of the analysis methods used by various utilities for structural safety evaluations of nuclear power plants. P-CARES provides the capability to effectively evaluate the probabilistic seismic response using simplified soil and structural models and to quickly check the validity and/or accuracy of the SSI data received from applicants and licensees. The code is organized in a modular format, with the basic modules of the system performing static, seismic, and nonlinear analysis.

  5. Analysis of energy conversion systems, including material and global warming aspects

    SciTech Connect (OSTI)

    Zhang, M.; Reistad, G.M.

    1998-12-31

    This paper addresses a method for the overall evaluation of energy conversion systems, including material and global environmental aspects. To limit the scope of the work reported here, the global environmental aspects have been limited to global warming aspects. A method is presented that uses exergy as an overall evaluation measure of energy conversion systems for their lifetime. The method takes the direct exergy consumption (fuel consumption) of the conventional exergy analyses and adds (1) the exergy of the energy conversion system equipment materials, (2) the fuel production exergy and material exergy, and (3) the exergy needed to recover the total global warming gases (equivalent) of the energy conversion system. This total, termed Total Equivalent Resource Exergy (TERE), provides a measure of the effectiveness of the energy conversion system in its use of natural resources. The results presented here for several example systems illustrate how the method can be used to screen candidate energy conversion systems and perhaps, as data become more available, to optimize systems. It appears that this concept may be particularly useful for comparing systems that have quite different direct energy and/or environmental impacts. This work should be viewed primarily as a concept paper, since the lack of detailed data available to the authors at this time limits the accuracy of the overall results. The authors are working on refinements to the data used in the evaluation.

  6. Surface and grain boundary scattering in nanometric Cu thin films: A quantitative analysis including twin boundaries

    SciTech Connect (OSTI)

    Barmak, Katayun [Department of Applied Physics and Applied Mathematics, Columbia University, New York, New York 10027 and Department of Materials Science and Engineering and Materials Research Science and Engineering Center, Carnegie Mellon University, 5000 Forbes Avenue, Pittsburgh, Pennsylvania 15213 (United States); Darbal, Amith [Department of Materials Science and Engineering and Materials Research Science and Engineering Center, Carnegie Mellon University, 5000 Forbes Avenue, Pittsburgh, Pennsylvania 15213 (United States); Ganesh, Kameswaran J.; Ferreira, Paulo J. [Materials Science and Engineering, The University of Texas at Austin, 1 University Station, Austin, Texas 78712 (United States); Rickman, Jeffrey M. [Department of Materials Science and Engineering and Department of Physics, Lehigh University, Bethlehem, Pennsylvania 18015 (United States); Sun, Tik; Yao, Bo; Warren, Andrew P.; Coffey, Kevin R., E-mail: kb2612@columbia.edu [Department of Materials Science and Engineering, University of Central Florida, 4000 Central Florida Boulevard, Orlando, Florida 32816 (United States)

    2014-11-01

    The relative contributions of various defects to the measured resistivity in nanocrystalline Cu were investigated, including a quantitative account of twin-boundary scattering. It has been difficult to quantitatively assess the impact twin boundary scattering has on the classical size effect of electrical resistivity, due to limitations in characterizing twin boundaries in nanocrystalline Cu. In this study, crystal orientation maps of nanocrystalline Cu films were obtained via precession-assisted electron diffraction in the transmission electron microscope. These orientation images were used to characterize grain boundaries and to measure the average grain size of a microstructure, with and without considering twin boundaries. The results of these studies indicate that the contribution from grain-boundary scattering is the dominant factor (as compared to surface scattering) leading to enhanced resistivity. The resistivity data can be well described by the combined Fuchs-Sondheimer surface scattering model and Mayadas-Shatzkes grain-boundary scattering model using Matthiessen's rule with a surface specularity coefficient of p = 0.48 and a grain-boundary reflection coefficient of R = 0.26.
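The combined model named in the abstract can be evaluated numerically. The sketch below uses the standard Mayadas-Shatzkes grain-boundary function and a thick-film approximation of the Fuchs-Sondheimer surface term, combined via Matthiessen's rule, with p = 0.48 and R = 0.26 taken from the abstract; the bulk resistivity, mean free path, and film/grain dimensions are assumed literature-typical values, not the paper's data.

```python
# Numerical sketch of the combined size-effect model named in the abstract:
# Mayadas-Shatzkes (MS) grain-boundary scattering plus an approximate
# thick-film Fuchs-Sondheimer (FS) surface term, combined via Matthiessen's
# rule. p and R come from the abstract; other values are assumed.
import numpy as np

RHO0 = 17.0     # bulk Cu resistivity, nOhm*m (~1.7 uOhm*cm, assumed)
LAMBDA = 39.0   # electron mean free path in Cu, nm (assumed)

def ms_grain_boundary(d_nm, R=0.26):
    """MS resistivity increase for average grain size d (nm)."""
    alpha = (LAMBDA / d_nm) * R / (1.0 - R)
    f = 3.0 * (1.0 / 3.0 - alpha / 2.0 + alpha**2
               - alpha**3 * np.log(1.0 + 1.0 / alpha))
    return RHO0 / f - RHO0

def fs_surface(t_nm, p=0.48):
    """Approximate FS surface term, valid for film thickness t >> lambda."""
    return RHO0 * (3.0 / 8.0) * (LAMBDA / t_nm) * (1.0 - p)

for t, d in [(50.0, 50.0), (30.0, 30.0)]:   # illustrative film/grain sizes
    rho = RHO0 + ms_grain_boundary(d) + fs_surface(t)
    print(f"t = d = {t:.0f} nm -> rho ~ {rho:.1f} nOhm*m")
```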

  7. Low-frequency computational electromagnetics for antenna analysis

    SciTech Connect (OSTI)

    Miller, E.K. ); Burke, G.J. )

    1991-01-01

    An overview of low-frequency, computational methods for modeling the electromagnetic characteristics of antennas is presented here. The article presents a brief analytical background, and summarizes the essential ingredients of the method of moments for numerically solving low-frequency antenna problems. Some extensions to the basic models of perfectly conducting objects in free space are also summarized, followed by a consideration of some of the computational issues that affect model accuracy, efficiency, and utility. A variety of representative computations are then presented to illustrate various modeling aspects and capabilities that are currently available. A fairly extensive bibliography is included to suggest further reference material to the reader. 90 refs., 27 figs.
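A classic, self-contained instance of the method of moments the article summarizes is the electrostatic analogue of a wire antenna: a thin wire held at 1 V, discretized with pulse basis functions and point matching, solved as a dense linear system. The sketch below is a textbook illustration, not code from the article; the wire dimensions are assumed.

```python
# Textbook method-of-moments example: charge distribution on a thin straight
# wire held at 1 V, using pulse basis functions and point matching. The
# potential integral has a closed form (arcsinh), so the impedance-style
# matrix can be filled exactly. Dimensions are illustrative assumptions.
import numpy as np

EPS0 = 8.854e-12
L, a, N = 1.0, 1e-3, 50                 # wire length (m), radius (m), segments
dz = L / N
z = (np.arange(N) + 0.5) * dz           # match points at segment centers
edges = np.arange(N + 1) * dz

# Z[m, n] = potential at z_m due to unit charge density on segment n:
# integral of 1/(4*pi*eps0*sqrt((z-z')^2 + a^2)) dz' over the segment.
zm = z[:, None]
Z = (np.arcsinh((edges[1:][None, :] - zm) / a)
     - np.arcsinh((edges[:-1][None, :] - zm) / a)) / (4 * np.pi * EPS0)

V = np.ones(N)                          # wire held at 1 volt everywhere
q = np.linalg.solve(Z, V)               # charge density per segment (C/m)

Q = np.sum(q) * dz                      # total charge -> capacitance C = Q/V
print(f"capacitance ~ {Q * 1e12:.2f} pF")
```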

  8. Advanced computational tools for 3-D seismic analysis

    SciTech Connect (OSTI)

    Barhen, J.; Glover, C.W.; Protopopescu, V.A.

    1996-06-01

    The global objective of this effort is to develop advanced computational tools for 3-D seismic analysis, and test the products using a model dataset developed under the joint aegis of the United States' Society of Exploration Geophysicists (SEG) and the European Association of Exploration Geophysicists (EAEG). The goal is to enhance the value to the oil industry of the SEG/EAEG modeling project, carried out with US Department of Energy (DOE) funding in FY 93-95. The primary objective of the ORNL Center for Engineering Systems Advanced Research (CESAR) is to spearhead the computational innovations and techniques that would enable a revolutionary advance in 3-D seismic analysis. The CESAR effort is carried out in collaboration with world-class domain experts from leading universities, and in close coordination with other national laboratories and oil industry partners.

  9. Initial Business Case Analysis of Two Integrated Heat Pump HVAC Systems for Near-Zero-Energy Homes - Update to Include Evaluation of Impact of Including a Humidifier Option

    SciTech Connect (OSTI)

    Baxter, Van D

    2007-02-01

    --A Stage 2 Scoping Assessment, ORNL/TM-2005/194 (Baxter 2005). The 2005 study report describes the HVAC options considered, the ranking criteria used, and the system rankings by priority. In 2006, the two top-ranked options from the 2005 study, air-source and ground-source versions of a centrally ducted integrated heat pump (IHP) system, were subjected to an initial business case study. The IHPs were subjected to a more rigorous hourly-based assessment of their performance potential compared to a baseline suite of equipment of legally minimum efficiency that provided the same heating, cooling, water heating, demand dehumidification, and ventilation services as the IHPs. Results were summarized in a project report, Initial Business Case Analysis of Two Integrated Heat Pump HVAC Systems for Near-Zero-Energy Homes, ORNL/TM-2006/130 (Baxter 2006a). The present report is an update to that document which summarizes results of an analysis of the impact of adding a humidifier to the HVAC system to maintain minimum levels of space relative humidity (RH) in winter. The space RH in winter has a direct impact on occupant comfort and on control of dust mites, many types of disease bacteria, and 'dry air' electric shocks. Chapter 8 in ASHRAE's 2005 Handbook of Fundamentals (HOF) suggests a 30% lower limit on RH for indoor temperatures in the range of ~68-69°F based on comfort (ASHRAE 2005). Table 3 in chapter 9 of the same reference suggests a 30-55% RH range for winter as established by a Canadian study of exposure limits for residential indoor environments (EHD 1987). Harriman et al. (2001) note that for RH levels of 35% or higher, electrostatic shocks are minimized and that dust mites cannot live at RH levels below 40%. They also indicate that many disease bacteria life spans are minimized when space RH is held within a 30-60% range. From the foregoing it is reasonable to assume that a winter space RH range of 30-40% would be an acceptable compromise between comfort

  10. Engineering Analysis of Intermediate Loop and Process Heat Exchanger Requirements to Include Configuration Analysis and Materials Needs

    SciTech Connect (OSTI)

    T.M. Lillo; R.L. Williamson; T.R. Reed; C.B. Davis; D.M. Ginosar

    2005-09-01

    The need to locate advanced hydrogen production facilities a finite distance away from a nuclear power source necessitates an intermediate heat transport loop (IHTL). This IHTL must not only efficiently transport energy over distances up to 500 meters but must also be capable of operating at high temperatures (>850°C) for many years. High-temperature, long-term operation raises concerns of material strength, creep resistance, and general material stability (corrosion resistance). IHTL design is currently in the initial stages. Many questions remain to be answered before intelligent design can begin. This report begins to look at some of the issues surrounding the main components of an IHTL. Specifically, a stress analysis of a compact heat exchanger design under expected operating conditions is reported. Also, the results of a thermal analysis performed on two IHTL pipe configurations for different heat transport fluids are presented. The configurations consist of separate hot supply and cold return legs, as well as an annular design in which the hot fluid is carried in an inner pipe and the cold return fluid travels in the opposite direction in the annular space around the hot pipe. The effects of insulation configurations on pipe configuration performance are also reported. Finally, a simple analysis of two different process heat exchanger designs, one a tube-in-shell type and the other a compact or microchannel reactor, are evaluated in light of catalyst requirements. Important insights into the critical areas of research and development are gained from these analyses, guiding the direction of future areas of research.

  11. computers

    National Nuclear Security Administration (NNSA)

    Each successive generation of computing system has provided greater computing power and energy efficiency.

    CTS-1 clusters will support NNSA's Life Extension Program and...

  12. A system analysis computer model for the High Flux Isotope Reactor (HFIRSYS Version 1)

    SciTech Connect (OSTI)

    Sozer, M.C.

    1992-04-01

    A system transient analysis computer model (HFIRSYS) has been developed for analysis of small-break loss-of-coolant accidents (LOCA) and operational transients. The computer model is based on the Advanced Continuous Simulation Language (ACSL), which produces the FORTRAN code automatically, provides integration routines such as Gear's stiff algorithm, and supplies users with numerous practical tools for generating eigenvalues, producing debug output, plotting graphics, etc. The HFIRSYS computer code is structured in the form of the Modular Modeling System (MMS) code. Component modules from MMS and in-house developed modules were both used to configure HFIRSYS. A description of the High Flux Isotope Reactor, theoretical bases for the modeled components of the system, and the verification and validation efforts are reported. The computer model performs satisfactorily, including cases in which the effects of structural elasticity on the system pressure are significant; however, its capabilities are limited to single-phase flow. Because of the modular structure, new component models from the Modular Modeling System can easily be added to HFIRSYS to analyze their effects on the system's behavior. The computer model is a versatile tool for studying various system transients. The intent of this report is not to serve as a user's manual, but to provide theoretical bases and basic information about the computer model and the reactor.

  13. MHK technology developments include current

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    technology developments include current energy conversion (CEC) devices, for example, hydrokinetic turbines that extract power from water currents (riverine, tidal, and ocean) and wave energy conversion (WEC) devices that extract power from wave motion. Sandia's MHK research leverages decades of experience in engineering, design, and analysis of wind power technologies, and its vast research complex, including high-performance computing (HPC), advanced materials and coatings, nondestructive

  14. Computing and Computational Sciences Directorate - Computer Science and

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Mathematics Division Computer Science and Mathematics Division The Computer Science and Mathematics Division (CSMD) is ORNL's premier source of basic and applied research in high-performance computing, applied mathematics, and intelligent systems. Our mission includes basic research in the computational sciences and the application of advanced computing systems and computational, mathematical, and analysis techniques to the solution of scientific problems of national importance. We seek to work

  15. Numerical power balance and free energy loss analysis for solar cells including optical, thermodynamic, and electrical aspects

    SciTech Connect (OSTI)

    Greulich, Johannes; Höffler, Hannes; Würfel, Uli; Rein, Stefan

    2013-11-28

    A method for analyzing the power losses of solar cells is presented, supplying a complete balance of the incident power, the optical, thermodynamic, and electrical power losses, and the electrical output power. The quantities involved all have the dimension of a power density (units: W/m²), which permits their direct comparison. In order to avoid the over-representation of losses arising from the ultraviolet part of the solar spectrum, a method for the analysis of the electrical free energy losses is extended to include optical losses. This extended analysis does not take the incident solar power of, e.g., 1000 W/m² as its reference and does not explicitly include the thermalization losses and losses due to the generation of entropy. Instead, the usable power, i.e., the free energy or electro-chemical potential of the electron-hole pairs, is set as the reference value, thereby overcoming the ambiguities of the power balance. Both methods, the power balance and the free energy loss analysis, are demonstrated for a monocrystalline p-type silicon metal wrap through solar cell with passivated emitter and rear (MWT-PERC) based on optical and electrical measurements and numerical modeling. The methods give interesting insights into photovoltaic (PV) energy conversion, provide quantitative analyses of all loss mechanisms, and supply the basis for the systematic technological improvement of the device.

  16. PACKAGE INCLUDES:

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    PACKAGE INCLUDES: Airfare from Seattle, 4 & 5 Star Hotels, Transfers, Select Meals, Guided Tours and Excursions DAY 01: BANGKOK - ARRIVAL DAY 02: BANGKOK - SIGHTSEEING DAY 03: BANGKOK - FLOATING MARKET DAY 04: BANGKOK - AT LEISURE DAY 05: BANGKOK - CHIANG MAI BY AIR DAY 06: CHIANG MAI - SIGHTSEEING DAY 07: CHIANG MAI - ELEPHANT CAMP DAY 08: CHIANG MAI - PHUKET BY AIR DAY 09: PHUKET - PHI PHI ISLAND BY FERRY DAY 10: PHUKET - AT LEISURE DAY 11: PHUKET - CORAL ISLAND BY SPEEDBOAT DAY 12: PHUKET

  17. Computational analysis of azine-N-oxides as energetic materials

    SciTech Connect (OSTI)

    Ritchie, J.P.

    1994-05-01

    A BKW equation of state in a 1-dimensional hydrodynamic simulation of the cylinder test can be used to estimate the performance of explosives. Using this approach, the novel explosive 1,4-diamino-2,3,5,6-tetrazine-2,5-dioxide (TZX) was analyzed. Despite a high detonation velocity and a predicted CJ pressure comparable to that of RDX, TZX performs relatively poorly in the cylinder test. Theoretical and computational analysis shows this to be the result of a low heat of detonation. A conceptual strategy is proposed to remedy this problem. In order to predict the required heats of formation, new ab initio group equivalents were developed. Crystal structure calculations are also described that show hydrogen-bonding is important in determining the density of TZX and related compounds.

  18. Data analysis using the Gnu R system for statistical computation

    SciTech Connect (OSTI)

    Simone, James; /Fermilab

    2011-07-01

    R is a language system for statistical computation. It is widely used in statistics, bioinformatics, machine learning, data mining, quantitative finance, and the analysis of clinical drug trials. Among the advantages of R are: it has become the standard language for developing statistical techniques; it is actively developed by a large and growing global user community; it is open source software; it is highly portable (Linux, OS X, and Windows); it has a built-in documentation system; it produces high-quality graphics; and it is easily extensible, with over four thousand extension library packages available covering statistics and applications. This report gives a very brief introduction to R with some examples using lattice QCD simulation results. It then discusses the development of R packages designed for chi-square minimization fits of lattice n-point correlation functions.
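As an illustration of the kind of fit the report describes, the sketch below performs a chi-square minimization fit of a single-exponential two-point correlation function. It uses Python/SciPy rather than R, and all data and parameter values are synthetic placeholders, not results from the report.

```python
# Hypothetical sketch: chi-square fit of a lattice 2-pt correlation
# function C(t) = A * exp(-m * t). Data below are synthetic.
import numpy as np
from scipy.optimize import curve_fit

def corr_model(t, A, m):
    """Single-exponential model for a two-point correlation function."""
    return A * np.exp(-m * t)

t = np.arange(1, 16)                                # time slices
true_A, true_m = 1.25, 0.45
rng = np.random.default_rng(0)
sigma = 0.02 * corr_model(t, true_A, true_m)        # per-point errors
data = corr_model(t, true_A, true_m) + rng.normal(0.0, sigma)

popt, pcov = curve_fit(corr_model, t, data, p0=[1.0, 0.5],
                       sigma=sigma, absolute_sigma=True)
resid = (data - corr_model(t, *popt)) / sigma
chi2_dof = np.sum(resid**2) / (len(t) - len(popt))
print(f"A = {popt[0]:.3f}, m = {popt[1]:.3f}, chi2/dof = {chi2_dof:.2f}")
```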

  19. Computer analysis of sodium cold trap design and performance. [LMFBR

    SciTech Connect (OSTI)

    McPheeters, C.C.; Raue, D.J.

    1983-11-01

    Normal steam-side corrosion of steam-generator tubes in Liquid Metal Fast Breeder Reactors (LMFBRs) results in liberation of hydrogen, and most of this hydrogen diffuses through the tubes into the heat-transfer sodium and must be removed by the purification system. Cold traps are normally used to purify sodium, and they operate by cooling the sodium to temperatures near the melting point, where soluble impurities including hydrogen and oxygen precipitate as NaH and Na₂O, respectively. A computer model was developed to simulate the processes that occur in sodium cold traps. The Model for Analyzing Sodium Cold Traps (MASCOT) simulates any desired configuration of mesh arrangements and dimensions and calculates pressure drops and flow distributions, temperature profiles, impurity concentration profiles, and impurity mass distributions.
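The precipitation principle that MASCOT models can be illustrated in a few lines. The sketch below assumes an Arrhenius-form solubility correlation with made-up coefficients; the actual correlations and the mesh/flow modeling in MASCOT are far more detailed.

```python
# Minimal sketch of the cold-trap principle: impurity solubility in sodium
# falls with temperature, so cooling precipitates NaH and Na2O. The
# coefficients below are illustrative assumptions, not MASCOT's correlations.
def solubility_ppm(T_K, A, B):
    """log10(S) = A - B/T solubility correlation (assumed form)."""
    return 10.0 ** (A - B / T_K)

A_H, B_H = 6.0, 3000.0          # hypothetical hydrogen-in-sodium constants
T_loop, T_trap = 773.0, 420.0   # loop and cold-trap temperatures, K

c_loop = solubility_ppm(T_loop, A_H, B_H)   # saturated at loop temperature
c_trap = solubility_ppm(T_trap, A_H, B_H)   # solubility at trap temperature
print(f"precipitated fraction: {1 - c_trap / c_loop:.2%}")
```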

  20. Computing

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Computing and Storage Requirements for FES. J. Candy, General Atomics, San Diego, CA. Presented at the DOE Technical Program Review, Hilton Washington DC/Rockville, Rockville, MD, 19-20 March 2013. Drift waves and tokamak plasma turbulence, and their role in the context of fusion research. Plasma performance: in tokamak plasmas, performance is limited by turbulent radial transport of both energy and particles. Gradient-driven: this turbulent

  1. computers

    National Nuclear Security Administration (NNSA)

    California.

    Retired computers used for cybersecurity research at Sandia National...

  2. Analysis of magnetic probe signals including effect of cylindrical conducting wall for field-reversed configuration experiment

    SciTech Connect (OSTI)

    Ikeyama, Taeko; Hiroi, Masanori; Nemoto, Yuuichi; Nogi, Yasuyuki

    2008-06-15

    A confinement field is disturbed by magnetohydrodynamic (MHD) motions of a field-reversed configuration (FRC) plasma in a cylindrical conductor. The effect of the conductor should be included to obtain the spatial structure of the disturbed field with good precision. For this purpose, the toroidal current in the plasma and the eddy current on the conducting wall are replaced by magnetic dipole and image magnetic dipole moments, respectively. Typical spatial structures of the disturbed field are calculated using the dipole moments for MHD motions such as radial shift, internal tilt, external tilt, and n=2 mode deformation. Analytic formulas for estimating the shift distance, tilt angle, and deformation rate of the MHD motions from magnetic probe signals are then derived. Calculations using the dipole moments indicate that the analytic formulas carry an error of approximately 40%. Two kinds of experiments were carried out to investigate the reliability of the calculations. First, the magnetic field produced by a circular current was measured in an aluminum pipe to confirm the replacement of the eddy current with image magnetic dipole moments. The measured fields coincide well with values calculated including the image magnetic dipole moments. Second, magnetic probe signals measured from the FRC plasma were substituted into the analytic formulas to obtain the shift distance and deformation rate. The experimental results were compared to the MHD motions measured using radiation from the plasma. When the error in the analytic formulas and the difference between the magnetic and optical structures of the plasma are taken into account, the results of the radiation measurement support those of the magnetic analysis well.
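To make the dipole-replacement idea concrete, the sketch below evaluates the field of a point magnetic dipole and sums a plasma dipole with an image dipole at a probe location. The positions and moments here are illustrative placeholders; in the paper they are fixed by the boundary condition on the conducting wall.

```python
# Sketch of replacing plasma and wall eddy currents by dipole and
# image-dipole moments: the disturbed field at a probe is a sum of
# point-dipole fields. Geometry and moments are illustrative only.
import numpy as np

MU0 = 4e-7 * np.pi

def dipole_B(m, r):
    """Field of a point magnetic dipole m (A*m^2) at displacement r (m)."""
    rmag = np.linalg.norm(r)
    rhat = r / rmag
    return MU0 / (4 * np.pi) * (3 * rhat * np.dot(m, rhat) - m) / rmag**3

probe = np.array([0.20, 0.0, 0.0])   # probe position on the midplane
# Shifted plasma dipole plus an image dipole outside the wall; their
# positions and strengths would come from the wall boundary condition.
B = (dipole_B(np.array([0.0, 0.0, 50.0]), probe - np.array([0.02, 0.0, 0.0]))
     + dipole_B(np.array([0.0, 0.0, -12.0]), probe - np.array([0.45, 0.0, 0.0])))
print(B)  # disturbed field seen by the magnetic probe, tesla
```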

  3. Modeling and Analysis of a Lunar Space Reactor with the Computer...

    Office of Scientific and Technical Information (OSTI)

    Title: Modeling and Analysis of a Lunar Space Reactor with the Computer Code RELAP5-3D/ATHENA.

  4. Thermodynamic analysis of a possible CO{sub 2}-laser plant included in a heat engine cycle

    SciTech Connect (OSTI)

    Bisio, G.; Rubatto, G.

    1998-07-01

    In recent years, several plants have been realized in some industrialized countries to recover pressure exergy from various fluids. That has been done by means of suitable turbines, in particular for blast-furnace top gas and natural gas. Various papers have examined the topic, considering pros and cons. High-power CO₂ lasers are more and more widely used for welding, drilling, and cutting in machine shops. In the near future, different kinds of metal surface treatments will probably become routine practice with laser units. The industries benefiting most from high-power lasers will be the automotive industry, shipbuilding, the offshore industry, the aerospace industry, and the nuclear and chemical processing industries. Both degradation and cooling problems may be alleviated by allowing the gas to flow through the laser tube and by reducing its pressure outside this tube. Thus, a thermodynamic analysis of high-power CO₂ lasers with particular reference to possible energy recovery is justified. In previous papers, a critical examination of the concept of efficiency led one of the present authors to the definition of an operational domain in which the process can be achieved. This domain is confined by regions of no entropy production (upper limit) and no useful effects (lower limit). On the basis of these concepts, and of what has been done for pressure exergy recovery from other fluids, exergy investigations and an analysis of losses are performed for a cyclic process including a high-performance CO₂ laser. Thermodynamic analysis of flow processes in a CO₂ laser plant shows that the inclusion of a turbine in the plant allows most of the exergy necessary for the compressor to be recovered; in addition, the water consumption for refrigeration in the heat exchanger is reduced.

  5. Sodium fast reactor gaps analysis of computer codes and models for accident analysis and reactor safety.

    SciTech Connect (OSTI)

    Carbajo, Juan; Jeong, Hae-Yong; Wigeland, Roald; Corradini, Michael; Schmidt, Rodney Cannon; Thomas, Justin; Wei, Tom; Sofu, Tanju; Ludewig, Hans; Tobita, Yoshiharu; Ohshima, Hiroyuki; Serre, Frederic

    2011-06-01

    This report summarizes the results of an expert-opinion elicitation activity designed to qualitatively assess the status and capabilities of currently available computer codes and models for accident analysis and reactor safety calculations of advanced sodium fast reactors, and to identify important gaps. The twelve-member panel consisted of representatives from five U.S. national laboratories (SNL, ANL, INL, ORNL, and BNL), the University of Wisconsin, KAERI, JAEA, and CEA. The major portion of this elicitation activity occurred during a two-day meeting held on Aug. 10-11, 2010 at Argonne National Laboratory. There were two primary objectives of this work: (1) identify computer codes currently available for SFR accident analysis and reactor safety calculations; and (2) assess the status and capability of current US computer codes to adequately model the required accident scenarios and associated phenomena, and identify important gaps. During the review, panel members identified over 60 computer codes that are currently available in the international community to perform different aspects of SFR safety analysis for various event scenarios and accident categories. A brief description of each of these codes together with references (when available) is provided. An adaptation of the Predictive Capability Maturity Model (PCMM) for computational modeling and simulation is described for use in this work. The panel's assessment of the available US codes is presented in the form of nine tables, organized into groups of three for each of three risk categories considered: anticipated operational occurrences (AOOs), design basis accidents (DBAs), and beyond design basis accidents (BDBAs). A set of summary conclusions is drawn from the results obtained. At the highest level, the panel judged that current US code capabilities are adequate for licensing given reasonable margins, but expressed concern that US code development activities had stagnated and that the

  6. Computational Fluid Dynamics Analysis of Flexible Duct Junction Box Design

    SciTech Connect (OSTI)

    Beach, Robert; Prahl, Duncan; Lange, Rich

    2013-12-01

    IBACOS explored the relationships between pressure and physical configurations of flexible duct junction boxes by using computational fluid dynamics (CFD) simulations to predict individual box parameters and total system pressure, thereby ensuring improved HVAC performance. Current Air Conditioning Contractors of America (ACCA) guidance (Group 11, Appendix 3, ACCA Manual D, Rutkowski 2009) allows for unconstrained variation in the number of takeoffs, box sizes, and takeoff locations. The only variables currently used in selecting an equivalent length (EL) are velocity of air in the duct and friction rate, given the first takeoff is located at least twice its diameter away from the inlet. This condition does not account for other factors impacting pressure loss across these types of fittings. For each simulation, the IBACOS team converted pressure loss within a box to an EL to compare variation in ACCA Manual D guidance to the simulated variation. IBACOS chose cases to represent flows reasonably correlating to flows typically encountered in the field and analyzed differences in total pressure due to increases in number and location of takeoffs, box dimensions, and velocity of air, and whether an entrance fitting is included. The team also calculated additional balancing losses for all cases due to discrepancies between intended outlet flows and natural flow splits created by the fitting. In certain asymmetrical cases, the balancing losses were significantly higher than symmetrical cases where the natural splits were close to the targets. Thus, IBACOS has shown additional design constraints that can ensure better system performance.
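The conversion at the heart of this comparison, from a simulated pressure loss to a Manual D equivalent length, reduces to one formula. The sketch below applies it with illustrative numbers; the friction-rate and pressure values are placeholders, not results from the study.

```python
# Minimal sketch of expressing a junction box's simulated total pressure
# loss as an equivalent length of straight duct at the design friction
# rate (in. w.c. per 100 ft). Numbers are illustrative.
def equivalent_length_ft(dp_iwc: float, friction_rate_iwc_per_100ft: float) -> float:
    """Equivalent length (ft) of straight duct producing the same loss."""
    return 100.0 * dp_iwc / friction_rate_iwc_per_100ft

dp_box = 0.024      # CFD-predicted loss across the junction box, in. w.c.
fr_design = 0.08    # design friction rate, in. w.c. per 100 ft
print(f"EL = {equivalent_length_ft(dp_box, fr_design):.0f} ft")  # 30 ft
```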

  7. Computational Challenges for Microbial Genome and Metagenome Analysis (2010 JGI/ANL HPC Workshop)

    ScienceCinema (OSTI)

    Mavrommatis, Kostas

    2011-06-08

    Kostas Mavrommatis of the DOE JGI gives a presentation on "Computational Challenges for Microbial Genome & Metagenome Analysis" at the JGI/Argonne HPC Workshop on January 26, 2010.

  8. Analysis of gallium arsenide deposition in a horizontal chemical vapor deposition reactor using massively parallel computations

    SciTech Connect (OSTI)

    Salinger, A.G.; Shadid, J.N.; Hutchinson, S.A.

    1998-01-01

    A numerical analysis of the deposition of gallium from trimethylgallium (TMG) and arsine in a horizontal CVD reactor with a tilted susceptor and a three-inch-diameter rotating substrate is performed. The three-dimensional model includes complete coupling between fluid mechanics, heat transfer, and species transport, and is solved using an unstructured finite element discretization on a massively parallel computer. The effects of three operating parameters (the disk rotation rate, inlet TMG fraction, and inlet velocity) and two design parameters (the tilt angle of the reactor base and the reactor width) on the growth rate and uniformity are presented. The nonlinear dependence of the growth rate uniformity on the key operating parameters is discussed in detail. Efficient and robust algorithms for massively parallel reacting flow simulations, as incorporated into our analysis code MPSalsa, make detailed analysis of this complicated system feasible.

  9. SWAAM-LT: The long-term, sodium/water reaction analysis method computer code

    SciTech Connect (OSTI)

    Shin, Y.W.; Chung, H.H.; Wiedermann, A.H.; Tanabe, H.

    1993-01-01

    The SWAAM-LT code, developed for analysis of the long-term effects of sodium/water reactions, is discussed. The theoretical formulation of the code is described, including the introduction of system matrices for ease of computer programming as a general system code. Some typical results of the code predictions for available large-scale tests are also presented. Test data for steam generator designs with and without the cover-gas feature are available and were analyzed. The capabilities and limitations of the code are then discussed in light of the comparison between the code predictions and the test data.

  10. Computations

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Computations - Sandia Energy

  11. Computing

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Application and System Memory Use, Configuration, and Problems on Bassi. Richard Gerber, Lawrence Berkeley National Laboratory, NERSC User Services. ScicomP 13, Garching bei München, Germany, July 17, 2007. Work supported by the Office of Advanced Scientific Computing Research in the Department of Energy Office of Science under contract number DE-AC02-05CH11231. Overview: About Bassi; Memory on Bassi; Large Page Memory (It's Great!); System Configuration; Large Page

  12. TURTLE with MAD input (Trace Unlimited Rays Through Lumped Elements) -- A computer program for simulating charged particle beam transport systems and DECAY TURTLE including decay calculations

    SciTech Connect (OSTI)

    Carey, D.C.

    1999-12-09

    TURTLE is a computer program useful for determining many characteristics of a particle beam once an initial design has been achieved. Charged particle beams are usually designed by adjusting various beam line parameters to obtain desired values of certain elements of a transfer or beam matrix. Such beam line parameters may describe certain magnetic fields and their gradients, lengths and shapes of magnets, spacings between magnetic elements, or the initial beam accepted into the system. For such purposes one typically employs a matrix multiplication and fitting program such as TRANSPORT. TURTLE is designed to be used after TRANSPORT. For the convenience of the user, the input formats of the two programs have been made compatible. The use of TURTLE should be restricted to beams with small phase space. The lumped element approximation, described below, precludes the inclusion of the effects of conventional local geometric aberrations (due to large phase space) or of fourth and higher order. A reading of the discussion below will indicate clearly the exact uses and limitations of the approach taken in TURTLE.
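The lumped-element approximation amounts to pushing each ray of a sampled phase-space ensemble through per-element transfer matrices. The sketch below illustrates this in one transverse plane for a hypothetical drift-quadrupole-drift line; it is not TURTLE's input format or physics, just the underlying matrix idea.

```python
# Sketch of lumped-element ray tracing: sample rays from a small initial
# phase space and push them through per-element transfer matrices.
# All element lengths, focal lengths, and beam sizes are illustrative.
import numpy as np

def drift(L):
    return np.array([[1.0, L], [0.0, 1.0]])

def thin_quad(f):
    return np.array([[1.0, 0.0], [-1.0 / f, 1.0]])

# Composite matrix; rightmost factor is the first element encountered.
beamline = drift(2.0) @ thin_quad(1.5) @ drift(1.0)

rng = np.random.default_rng(1)
n = 10000
rays = np.vstack([rng.normal(0.0, 1e-3, n),    # x (m)
                  rng.normal(0.0, 1e-4, n)])   # x' (rad)
out = beamline @ rays                          # trace all rays at once
print(f"output beam size: {out[0].std() * 1e3:.3f} mm")
```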

  13. NASTRAN-based computer program for structural dynamic analysis of horizontal axis wind turbines

    SciTech Connect (OSTI)

    Lobitz, D.W.

    1984-01-01

    This paper describes a computer program developed for structural dynamic analysis of horizontal axis wind turbines (HAWTs). It is based on the finite element method through its reliance on NASTRAN for the development of mass, stiffness, and damping matrices of the tower and rotor, which are treated in NASTRAN as separate structures. The tower is modeled in a stationary frame and the rotor in one rotating at a constant angular velocity. The two structures are subsequently joined together (external to NASTRAN) using a time-dependent transformation consistent with the hub configuration. Aerodynamic loads are computed with an established flow model based on strip theory. Aeroelastic effects are included by incorporating the local velocity and twisting deformation of the blade in the load computation. The turbulent nature of the wind, both in space and time, is modeled by adding in stochastic wind increments. The resulting equations of motion are solved in the time domain using the implicit Newmark-Beta integrator. Preliminary comparisons with data from the Boeing/NASA MOD2 HAWT indicate that the code is capable of accurately and efficiently predicting the response of HAWTs driven by turbulent winds.
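The implicit Newmark-Beta update used for the time integration can be sketched compactly for a single degree of freedom; the multi-degree-of-freedom version replaces the scalars with the NASTRAN-derived mass, damping, and stiffness matrices. All parameter values below are illustrative placeholders.

```python
# Sketch of the implicit Newmark-Beta integrator for one degree of
# freedom, m*x'' + c*x' + k*x = f(t), with the average-acceleration
# parameters beta = 1/4, gamma = 1/2. Values are illustrative.
import numpy as np

m, c, k = 1.0, 0.1, 40.0
beta, gamma, dt = 0.25, 0.5, 0.01
f = lambda t: np.sin(2.0 * t)            # external (e.g., aerodynamic) load

x, v = 0.0, 0.0
a = (f(0.0) - c * v - k * x) / m         # consistent initial acceleration
k_eff = k + gamma / (beta * dt) * c + m / (beta * dt**2)

for n in range(1, 1001):
    t = n * dt
    rhs = (f(t)
           + m * (x / (beta * dt**2) + v / (beta * dt) + (1/(2*beta) - 1) * a)
           + c * (gamma * x / (beta * dt) + (gamma/beta - 1) * v
                  + dt * (gamma/(2*beta) - 1) * a))
    x_new = rhs / k_eff                  # solve effective stiffness equation
    a_new = (x_new - x) / (beta * dt**2) - v / (beta * dt) - (1/(2*beta) - 1) * a
    v = v + dt * ((1 - gamma) * a + gamma * a_new)
    x, a = x_new, a_new

print(f"displacement at t = 10 s: {x:.4f}")
```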

  14. THE SAP3 COMPUTER PROGRAM FOR QUANTITATIVE MULTIELEMENT ANALYSIS BY ENERGY DISPERSIVE X-RAY FLUORESCENCE

    SciTech Connect (OSTI)

    Nielson, K. K.; Sanders, R. W.

    1982-04-01

    SAP3 is a dual-function FORTRAN computer program which performs peak analysis of energy-dispersive x-ray fluorescence spectra and then quantitatively interprets the results of the multielement analysis. It was written for mono- or bi-chromatic excitation as from an isotopic or secondary excitation source, and uses the separate incoherent and coherent backscatter intensities to define the bulk sample matrix composition. This composition is used in performing fundamental-parameter matrix corrections for self-absorption, enhancement, and particle-size effects, obviating the need for specific calibrations for a given sample matrix. The generalized calibration is based on a set of thin-film sensitivities, which are stored in a library disk file and used for all sample matrices and thicknesses. Peak overlap factors are also determined from the thin-film standards, and are stored in the library for calculating peak overlap corrections. A detailed description is given of the algorithms and program logic, and the program listing and flow charts are also provided. An auxiliary program, SPCAL, is also given for use in calibrating the backscatter intensities. SAP3 provides numerous analysis options via seventeen control switches which give flexibility in performing the calculations best suited to the sample and the user needs. User input may be limited to the name of the library, the analysis livetime, and the spectrum filename and location. Output includes all peak analysis information, matrix correction factors, and element concentrations, uncertainties and detection limits. Twenty-four elements are typically determined from a 1024-channel spectrum in one to two minutes using a PDP-11/34 computer operating under RSX-11M.

  15. Computational design and analysis of flatback airfoil wind tunnel experiment.

    SciTech Connect (OSTI)

    Mayda, Edward A.; van Dam, C.P.; Chao, David D.; Berg, Dale E.

    2008-03-01

    A computational fluid dynamics study of thick wind turbine section shapes in the test section of the UC Davis wind tunnel at a chord Reynolds number of one million is presented. The goals of this study are to validate standard wind tunnel wall corrections for high solid blockage conditions and to reaffirm the favorable effect of a blunt trailing edge or flatback on the performance characteristics of a representative thick airfoil shape prior to building the wind tunnel models and conducting the experiment. The numerical simulations prove the standard wind tunnel corrections to be largely valid for the proposed test of airfoils with a 40% maximum thickness-to-chord ratio at a solid blockage ratio of 10%. Comparison of the computed lift characteristics of a sharp trailing edge baseline airfoil and derived flatback airfoils reaffirms the earlier observed trend of reduced sensitivity to surface contamination with increasing trailing edge thickness.

  16. Modeling and Analysis of a Lunar Space Reactor with the Computer Code

    Office of Scientific and Technical Information (OSTI)

    Conference: Modeling and Analysis of a Lunar Space Reactor with the Computer Code RELAP5-3D/ATHENA. The transient analysis 3-dimensional (3-D) computer code RELAP5-3D/ATHENA has been employed to model and analyze a space reactor of 180 kW(thermal), 40 kW (net, electrical) with eight Stirling engines (SEs). Each SE

  17. Routing performance analysis and optimization within a massively parallel computer

    DOE Patents [OSTI]

    Archer, Charles Jens; Peters, Amanda; Pinnow, Kurt Walter; Swartz, Brent Allen

    2013-04-16

    An apparatus, program product and method optimize the operation of a massively parallel computer system by, in part, receiving actual performance data concerning an application executed by the plurality of interconnected nodes, and analyzing the actual performance data to identify an actual performance pattern. A desired performance pattern may be determined for the application, and an algorithm may be selected from among a plurality of algorithms stored within a memory, the algorithm being configured to achieve the desired performance pattern based on the actual performance data.

  18. Computational Proteomics: High-throughput Analysis for Systems Biology

    SciTech Connect (OSTI)

    Cannon, William R.; Webb-Robertson, Bobbie-Jo M.

    2007-01-03

    High-throughput (HTP) proteomics is a rapidly developing field that offers the global profiling of proteins from a biological system. The HTP technological advances are fueling a revolution in biology, enabling analyses at the scales of entire systems (e.g., whole cells, tumors, or environmental communities). However, simply identifying the proteins in a cell is insufficient for understanding the underlying complexity and operating mechanisms of the overall system. Systems level investigations are relying more and more on computational analyses, especially in the field of proteomics generating large-scale global data.

  19. Inferring Group Processes from Computer-Mediated Affective Text Analysis

    SciTech Connect (OSTI)

    Schryver, Jack C; Begoli, Edmon; Jose, Ajith; Griffin, Christopher

    2011-02-01

    Political communications in the form of unstructured text convey rich connotative meaning that can reveal underlying group social processes. Previous research has focused on sentiment analysis at the document level, but we extend this analysis to sub-document levels through a detailed analysis of affective relationships between entities extracted from a document. Instead of pure sentiment analysis, which is just positive or negative, we explore nuances of affective meaning in 22 affect categories. Our affect propagation algorithm automatically calculates and displays extracted affective relationships among entities in graphical form in our prototype (TEAMSTER), starting with seed lists of affect terms. Several useful metrics are defined to infer underlying group processes by aggregating affective relationships discovered in a text. Our approach has been validated with annotated documents from the MPQA corpus, achieving a performance gain of 74% over comparable random guessers.
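A greatly simplified sketch of the seed-list idea is shown below: affective relationships between entity pairs are scored by counting seed affect terms in text where both entities co-occur. The affect categories, seed terms, and sentences are invented for illustration and do not reproduce the TEAMSTER propagation algorithm.

```python
# Toy sketch of seeding affective relationships between entities from
# co-occurring affect terms. Categories, seeds, and text are invented.
from collections import defaultdict
from itertools import combinations

seeds = {"anger": {"denounce", "outrage"}, "trust": {"ally", "support"}}

sentences = [
    ("GroupA denounce GroupB with outrage", {"GroupA", "GroupB"}),
    ("GroupA support GroupC as ally", {"GroupA", "GroupC"}),
]

edges = defaultdict(lambda: defaultdict(int))
for text, entities in sentences:
    tokens = set(text.lower().split())
    for e1, e2 in combinations(sorted(entities), 2):
        for affect, terms in seeds.items():
            edges[(e1, e2)][affect] += len(tokens & terms)

print({pair: dict(aff) for pair, aff in edges.items()})
```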

  20. High Performance Computing for Sequence Analysis (2010 JGI/ANL HPC Workshop)

    ScienceCinema (OSTI)

    Oehmen, Chris [PNNL]

    2011-06-08

    Chris Oehmen of the Pacific Northwest National Laboratory gives a presentation on "High Performance Computing for Sequence Analysis" at the JGI/Argonne HPC Workshop on January 25, 2010.

    1. Application of the Computer Program SASSI for Seismic SSI Analysis of WTP Facilities

      Broader source: Energy.gov [DOE]

      Application of the Computer Program SASSI for Seismic SSI Analysis of WTP Facilities. Farhang Ostadan (BNI) & Raman Venkata (DOE-WTP-WED). Presented by Lisa Anderson (BNI), US DOE NPH Workshop, October 25, 2011.

    2. High Performance Computing for Sequence Analysis (2010 JGI/ANL HPC Workshop)

      SciTech Connect (OSTI)

      Oehmen, Chris [PNNL]

      2010-01-25

      Chris Oehmen of the Pacific Northwest National Laboratory gives a presentation on "High Performance Computing for Sequence Analysis" at the JGI/Argonne HPC Workshop on January 25, 2010.

    3. Computer Modeling of Violent Intent: A Content Analysis Approach

      SciTech Connect (OSTI)

      Sanfilippo, Antonio P.; Mcgrath, Liam R.; Bell, Eric B.

      2014-01-03

      We present a computational approach to modeling the intent of a communication source representing a group or an individual to engage in violent behavior. Our aim is to identify and rank aspects of radical rhetoric that are endogenously related to violent intent to predict the potential for violence as encoded in written or spoken language. We use correlations between contentious rhetoric and the propensity for violent behavior found in documents from radical terrorist and non-terrorist groups and individuals to train and evaluate models of violent intent. We then apply these models to unseen instances of linguistic behavior to detect signs of contention that have a positive correlation with violent intent factors. Of particular interest is the application of violent intent models to social media, such as Twitter, that have proved to serve as effective channels in furthering sociopolitical change.

    4. Technical support document: Energy conservation standards for consumer products: Dishwashers, clothes washers, and clothes dryers including: Environmental impacts; regulatory impact analysis

      SciTech Connect (OSTI)

      Not Available

      1990-12-01

      The Energy Policy and Conservation Act, as amended (P.L. 94-163), establishes energy conservation standards for 12 of the 13 types of consumer products specifically covered by the Act. The legislation requires the Department of Energy (DOE) to consider new or amended standards for these and other types of products at specified times. This Technical Support Document presents the methodology, data, and results from the analysis of the energy and economic impacts of standards on dishwashers, clothes washers, and clothes dryers. The economic impact analysis is performed in seven major areas: an Engineering Analysis, which establishes technical feasibility and product attributes, including costs of design options to improve appliance efficiency; a Consumer Analysis at two levels, national aggregate impacts and impacts on individuals, where the national aggregate impacts include forecasts of appliance sales, efficiencies, energy use, and consumer expenditures, and the individual impacts are analyzed by Life-Cycle Cost (LCC), Payback Periods, and Cost of Conserved Energy (CCE), which evaluate the savings in operating expenses relative to increases in purchase price; a Manufacturer Analysis, which provides an estimate of manufacturers' response to the proposed standards, quantified by changes in several measures of financial performance for a firm; an Industry Impact Analysis, which shows financial and competitive impacts on the appliance industry; a Utility Analysis, which measures the impacts of the altered energy-consumption patterns on electric utilities; an Environmental Effects analysis, which estimates changes in emissions of carbon dioxide, sulfur oxides, and nitrogen oxides due to reduced energy consumption in the home and at the power plant; and a Regulatory Impact Analysis, which collects the results of all the analyses into the net benefits and costs from a national perspective. 47 figs., 171 tabs. (JF)
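The three individual-impact measures named above (LCC, Payback Period, and CCE) follow from standard engineering-economics formulas. The sketch below computes them for a hypothetical appliance; every input number is invented for illustration.

```python
# Illustrative life-cycle cost, payback period, and cost of conserved
# energy for a hypothetical efficient appliance vs. a baseline unit.
def pv_annuity(annual, rate, years):
    """Present value of a constant annual cash flow."""
    return annual * (1 - (1 + rate) ** -years) / rate

def crf(rate, years):
    """Capital recovery factor (annualizes a present cost)."""
    return rate / (1 - (1 + rate) ** -years)

price_base, price_eff = 400.0, 460.0      # purchase prices, $
kwh_base, kwh_eff = 900.0, 700.0          # annual energy use, kWh
p_elec, discount, life = 0.10, 0.05, 13   # $/kWh, discount rate, years

lcc = lambda price, kwh: price + pv_annuity(kwh * p_elec, discount, life)
payback = (price_eff - price_base) / ((kwh_base - kwh_eff) * p_elec)
cce = (price_eff - price_base) * crf(discount, life) / (kwh_base - kwh_eff)

print(f"LCC base/eff: ${lcc(price_base, kwh_base):.0f} / ${lcc(price_eff, kwh_eff):.0f}")
print(f"payback: {payback:.1f} yr, CCE: ${cce:.3f}/kWh")
```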

    5. Code manual for CONTAIN 2.0: A computer code for nuclear reactor containment analysis

      SciTech Connect (OSTI)

      Murata, K.K.; Williams, D.C.; Griffith, R.O.; Gido, R.G.; Tadios, E.L.; Davis, F.J.; Martinez, G.M.; Washington, K.E. (Sandia National Labs., Albuquerque, NM); Tills, J. (J. Tills and Associates, Inc., Sandia Park, NM)

      1997-12-01

      The CONTAIN 2.0 computer code is an integrated analysis tool used for predicting the physical conditions, chemical compositions, and distributions of radiological materials inside a containment building following the release of material from the primary system in a light-water reactor accident. It can also predict the source term to the environment. CONTAIN 2.0 is intended to replace the earlier CONTAIN 1.12, which was released in 1991. The purpose of this Code Manual is to provide full documentation of the features and models in CONTAIN 2.0. Besides complete descriptions of the models, this Code Manual provides a complete description of the input and output from the code. CONTAIN 2.0 is a highly flexible and modular code that can run problems that are either quite simple or highly complex. An important aspect of CONTAIN is that the interactions among thermal-hydraulic phenomena, aerosol behavior, and fission product behavior are taken into account. The code includes atmospheric models for steam/air thermodynamics, intercell flows, condensation/evaporation on structures and aerosols, aerosol behavior, and gas combustion. It also includes models for reactor cavity phenomena such as core-concrete interactions and coolant pool boiling. Heat conduction in structures, fission product decay and transport, radioactive decay heating, and the thermal-hydraulic and fission product decontamination effects of engineered safety features are also modeled. To the extent possible, the best available models for severe accident phenomena have been incorporated into CONTAIN, but it is intrinsic to the nature of accident analysis that significant uncertainty exists regarding numerous phenomena. In those cases, sensitivity studies can be performed with CONTAIN by means of user-specified input parameters. Thus, the code can be viewed as a tool designed to assist the knowledgeable reactor safety analyst in evaluating the consequences of specific modeling assumptions.

    6. Integrated State Estimation and Contingency Analysis Software Implementation using High Performance Computing Techniques

      SciTech Connect (OSTI)

      Chen, Yousu; Glaesemann, Kurt R.; Rice, Mark J.; Huang, Zhenyu

      2015-12-31

      Power system simulation tools are traditionally developed in sequential mode, with codes optimized for single-core computing only. However, the increasing complexity of power grid models requires more intensive computation, and the traditional simulation tools will soon be unable to meet grid operation requirements. Therefore, power system simulation tools need to evolve accordingly to provide faster and better results for grid operations. This paper presents an integrated state estimation and contingency analysis software implementation using high performance computing techniques. The software is able to solve large state estimation problems within one second and achieves a near-linear speedup of 9,800 with 10,000 cores for the contingency analysis application. A performance evaluation is presented to show its effectiveness.
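The reported scaling can be restated with the usual definitions of speedup and parallel efficiency, as in this small sketch (the timings implied are taken from the figures quoted above).

```python
# Speedup and parallel efficiency from the reported contingency-analysis
# scaling: speedup 9,800 on 10,000 cores implies 98% efficiency.
def speedup(t_serial: float, t_parallel: float) -> float:
    return t_serial / t_parallel

cores = 10000
s = 9800.0                                     # reported speedup
print(f"parallel efficiency: {s / cores:.1%}")  # 98.0%
```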

    7. RISKIND: An enhanced computer code for National Environmental Policy Act transportation consequence analysis

      SciTech Connect (OSTI)

      Biwer, B.M.; LePoire, D.J.; Chen, S.Y.

      1996-03-01

      The RISKIND computer program was developed for the analysis of radiological consequences and health risks to individuals and the collective population from exposures associated with the transportation of spent nuclear fuel (SNF) or other radioactive materials. The code is intended to provide scenario-specific analyses when evaluating alternatives for environmental assessment activities, including those for major federal actions involving radioactive material transport as required by the National Environmental Policy Act (NEPA). As such, rigorous procedures have been implemented to enhance the code's credibility and strenuous efforts have been made to enhance ease of use of the code. To increase the code's reliability and credibility, a new version of RISKIND was produced under a quality assurance plan that covered code development and testing, and a peer review process was conducted. During development of the new version, the flexibility and ease of use of RISKIND were enhanced through several major changes: (1) a Windows™ point-and-click interface replaced the old DOS menu system, (2) the remaining model input parameters were added to the interface, (3) databases were updated, (4) the program output was revised, and (5) on-line help has been added. RISKIND has been well received by users and has been established as a key component in radiological transportation risk assessments through its acceptance by the U.S. Department of Energy community in recent environmental impact statements (EISs) and its continued use in the current preparation of several EISs.

    8. INTELLIGENT COMPUTING SYSTEM FOR RESERVOIR ANALYSIS AND RISK ASSESSMENT OF THE RED RIVER FORMATION

      SciTech Connect (OSTI)

      Kenneth D. Luff

      2002-09-30

      Integrated software has been written that comprises the tool kit for the Intelligent Computing System (ICS). Luff Exploration Company is applying these tools for analysis of carbonate reservoirs in the southern Williston Basin. The integrated software programs are designed to be used by a small team consisting of an engineer, a geologist, and a geophysicist. The software tools are flexible and robust, allowing application in many environments for hydrocarbon reservoirs. Keystone elements of the software tools include clustering and neural-network techniques. The tools are used to transform seismic attribute data into reservoir characteristics such as storage (phi-h), probable oil-water contacts, structural depths, and structural growth history. When these reservoir characteristics are combined with neural network or fuzzy logic solvers, they can provide a more complete description of the reservoir. This leads to better estimates of hydrocarbons in place, areal limits, and potential for infill or step-out drilling. These tools were developed and tested using seismic, geologic, and well data from the Red River Play in Bowman County, North Dakota and Harding County, South Dakota. The geologic setting for the Red River Formation is shallow-shelf carbonate at a depth from 8000 to 10,000 ft.

    9. Uncertainty Studies of Real Anode Surface Area in Computational Analysis for Molten Salt Electrorefining

      SciTech Connect (OSTI)

      Sungyeol Choi; Jaeyeong Park; Robert O. Hoover; Supathorn Phongikaroon; Michael F. Simpson; Kwang-Rag Kim; Il Soon Hwang

      2011-09-01

      This study examines how much the cell potential changes under five differently assumed real anode surface area cases. Determining the real anode surface area is a significant issue that must be resolved to precisely model molten salt electrorefining. Based on a three-dimensional electrorefining model, calculated cell potentials are compared with the experimental cell potential variation over 80 hours of operation of the Mark-IV electrorefiner with driver fuel from the Experimental Breeder Reactor II. Good agreement with the overall trend of the experimental data was achieved with appropriate selection of a model for the real anode surface area, but local inconsistencies between theoretical calculation and experimental observation remain. In addition, the results were validated and compared with two-dimensional results to identify possible uncertainty factors that had to be considered further in a computational electrorefining analysis. These uncertainty factors include material properties, heterogeneous material distribution, surface roughness, and current efficiency. Zirconium's abundance and complex behavior have more impact on uncertainty toward the latter period of electrorefining for a given batch of fuel. The benchmark results indicate that anode materials would be dissolved in both the axial and radial directions, at least for low burn-up metallic fuels, after the active liquid sodium bonding was dissolved.

    10. INTELLIGENT COMPUTING SYSTEM FOR RESERVOIR ANALYSIS AND RISK ASSESSMENT OF THE RED RIVER FORMATION

      SciTech Connect (OSTI)

      Mark A. Sippel; William C. Carrigan; Kenneth D. Luff; Lyn Canter

      2003-11-12

      Integrated software has been written that comprises the tool kit for the Intelligent Computing System (ICS). The software tools in ICS have been developed for characterization of reservoir properties and evaluation of hydrocarbon potential using a combination of inter-disciplinary data sources such as geophysical, geologic and engineering variables. The ICS tools provide a means for logical and consistent reservoir characterization and oil reserve estimates. The tools can be broadly characterized as (1) clustering tools, (2) neural solvers, (3) multiple-linear regression, (4) entrapment-potential calculator and (5) file utility tools. ICS tools are extremely flexible in their approach and use, and applicable to most geologic settings. The tools are primarily designed to correlate relationships between seismic information and engineering and geologic data obtained from wells, and to convert or translate seismic information into engineering and geologic terms or units. It is also possible to apply ICS in a simple framework that may include reservoir characterization using only engineering, seismic, or geologic data in the analysis. ICS tools were developed and tested using geophysical, geologic and engineering data obtained from an exploitation and development project involving the Red River Formation in Bowman County, North Dakota and Harding County, South Dakota. Data obtained from 3D seismic surveys, and 2D seismic lines encompassing nine prospective field areas were used in the analysis. The geologic setting of the Red River Formation in Bowman and Harding counties is that of a shallow-shelf, carbonate system. Present-day depth of the Red River formation is approximately 8000 to 10,000 ft below ground surface. This report summarizes production results from well demonstration activity, results of reservoir characterization of the Red River Formation at demonstration sites, descriptions of ICS tools and strategies for their application.

    11. Computing Videos

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Computing Videos

    12. Analysis of neutron data in the resonance region via the computer code SAMMY

      SciTech Connect (OSTI)

      Larson, N.M.

      1985-01-01

      Procedures for analysis of resonance neutron cross-section data have been implemented in a state-of-the-art computer code SAMMY, developed at the Oak Ridge Electron Linear Accelerator (ORELA) at Oak Ridge National Laboratory. A unique feature of SAMMY is the use of Bayes' equations to determine ''best'' values of parameters, which permits sequential analysis of data sets (or subsets) while giving the same results as would be given by a simultaneous analysis. Another important feature is the inclusion of data-reduction parameters in the fitting procedure. Other features of SAMMY are also described.
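The property that sequential analysis with Bayes' equations reproduces a simultaneous analysis can be demonstrated for a toy linear-Gaussian model, as sketched below. This illustrates the principle only; it is not SAMMY's R-matrix machinery, and all data are synthetic.

```python
# Toy demonstration: updating parameters with Bayes' equations one data
# set at a time gives the same result as one simultaneous fit, for a
# linear model with Gaussian errors and a shared diffuse prior.
import numpy as np

rng = np.random.default_rng(2)
theta_true = np.array([2.0, -1.0])

def make_set(n):
    G = np.column_stack([np.ones(n), rng.uniform(0, 10, n)])  # design matrix
    y = G @ theta_true + rng.normal(0, 0.5, n)
    return G, y, 0.25 * np.eye(n)                             # data covariance

def bayes_update(theta, P, G, y, V):
    """Posterior mean/covariance for a linear-Gaussian Bayes step."""
    Pn = np.linalg.inv(np.linalg.inv(P) + G.T @ np.linalg.inv(V) @ G)
    return theta + Pn @ G.T @ np.linalg.inv(V) @ (y - G @ theta), Pn

theta, P = np.zeros(2), 1e6 * np.eye(2)     # diffuse prior
sets = [make_set(20) for _ in range(3)]
for G, y, V in sets:                        # sequential analysis
    theta, P = bayes_update(theta, P, G, y, V)

G_all = np.vstack([s[0] for s in sets])     # simultaneous analysis
y_all = np.concatenate([s[1] for s in sets])
theta_all, _ = bayes_update(np.zeros(2), 1e6 * np.eye(2),
                            G_all, y_all, 0.25 * np.eye(len(y_all)))
print(np.allclose(theta, theta_all, atol=1e-6))  # True
```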

    13. Computer code input for thermal hydraulic analysis of Multi-Function Waste Tank Facility Title II design

      SciTech Connect (OSTI)

      Cramer, E.R.

      1994-10-01

      The input files to the P/Thermal computer code are documented for the thermal hydraulic analysis of the Multi-Function Waste Tank Facility Title II design.

    14. Analysis of the cracking behavior of Alloy 600 RVH penetrations. Part 1: Stress analysis and K computation

      SciTech Connect (OSTI)

      Bhandari, S.; Vagner, J.; Garriga-Majo, D.; Amzallag, C.; Faidy, C.

      1996-12-01

      The study presented here concerns the analysis of crack propagation behavior in the Alloy 600 RVH penetrations used in the French 900 and 1300 MWe PWR series. The damage mechanism identified is clearly SCC in a primary water environment. Consequently, the analysis presented here is based on: (1) the stress analysis carried out on the RVH penetrations, (2) the SCC model developed for the primary water environment at operating temperatures, and (3) fracture mechanics concepts. The different steps involved in the study are: (1) evaluation of the stress state for the peripheral configuration of RVH penetrations; the case retained here is that of a conic tube, with the stress analysis conducted using a multi-pass welding simulation; (2) computation of the influence functions (IF) for a polynomial stress distribution in a tube with an Ri/t ratio (internal radius/thickness) corresponding to that of an RVH penetration; (3) establishment of a propagation law based on a review of the data available in the literature; (4) a parametric study of crack propagation using several initial defects; and (5) analysis of crack propagation of defects observed in various reactors and comparison with measured propagation rates. This paper (Part 1) deals with the first two steps, namely stress analysis and K computation.
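Step (4), the parametric propagation study, boils down to integrating a crack growth law of the form da/dt = C*K^n. The sketch below does this with a textbook K = Y*sigma*sqrt(pi*a) estimate and placeholder constants; it uses neither the influence-function K nor the qualified Alloy 600 law of the paper.

```python
# Sketch of integrating an assumed stress-corrosion crack growth law
# da/dt = C * K^n with a simple K = Y * sigma * sqrt(pi * a) estimate.
# All constants are illustrative placeholders.
import numpy as np

C, n_exp = 1.0e-12, 1.6          # growth-law constants (illustrative)
Y, sigma = 1.12, 350.0           # geometry factor, stress (MPa)
a, dt, t = 0.5e-3, 3600.0, 0.0   # initial depth (m), time step (s), time

while a < 5.0e-3:                              # grow the crack to 5 mm
    K = Y * sigma * np.sqrt(np.pi * a)         # MPa*sqrt(m)
    a += C * K ** n_exp * dt
    t += dt

print(f"time to reach 5 mm: {t / 3.15e7:.2f} years")
```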

    15. Methods and apparatuses for information analysis on shared and distributed computing systems

      DOE Patents [OSTI]

      Bohn, Shawn J [Richland, WA; Krishnan, Manoj Kumar [Richland, WA; Cowley, Wendy E [Richland, WA; Nieplocha, Jarek [Richland, WA

      2011-02-22

      Apparatuses and computer-implemented methods for analyzing, on shared and distributed computing systems, information comprising one or more documents are disclosed according to some aspects. In one embodiment, information analysis can comprise distributing one or more distinct sets of documents among each of a plurality of processes, wherein each process performs operations on a distinct set of documents substantially in parallel with other processes. Operations by each process can further comprise computing term statistics for terms contained in each distinct set of documents, thereby generating a local set of term statistics for each distinct set of documents. Still further, operations by each process can comprise contributing the local sets of term statistics to a global set of term statistics, and participating in generating a major term set from an assigned portion of a global vocabulary.
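A minimal runnable analogue of the claimed scheme is sketched below: each worker process computes local term statistics for its distinct document set, the local sets are merged into a global set, and a major term set is derived from it. The data, merge rule, and threshold are invented for illustration.

```python
# Map-reduce style sketch of the patented idea: local term statistics per
# distinct document set, merged into a global set. Data are invented.
from collections import Counter
from multiprocessing import Pool

def local_term_stats(docs):
    """Map step: term frequencies for one distinct set of documents."""
    stats = Counter()
    for doc in docs:
        stats.update(doc.lower().split())
    return stats

doc_sets = [["the fast reactor", "reactor physics"],
            ["fast neutron flux", "the flux map"]]

if __name__ == "__main__":
    with Pool(2) as pool:
        locals_ = pool.map(local_term_stats, doc_sets)  # parallel map
    global_stats = sum(locals_, Counter())              # reduce/merge
    major_terms = [t for t, c in global_stats.items() if c >= 2]
    print(global_stats, major_terms)
```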

    16. Center for Integrated Computation and Analysis of Reconnection and Turbulence (CICART)

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Center for Integrated Computation and Analysis of Reconnection and Turbulence (CICART). Kai Germaschewski, Amitava Bhattacharjee, Barrett Rogers, Will Fox, Yi-Min Huang, and others. CICART, Space Science Center / Dept. of Physics, University of New Hampshire, August 3, 2010. Outline: (1) project information; (2) project summary and scientific objectives; (3) current HPC usage and methods; (4) HPC

    17. Fermilab Central Computing Facility: Energy conservation report and mechanical systems design optimization and cost analysis study

      SciTech Connect (OSTI)

      Krstulovich, S.F.

      1986-11-12

      This report is developed as part of the Fermilab Central Computing Facility Project Title II Design Documentation Update under the provisions of DOE Document 6430.1, Chapter XIII-21, Section 14, paragraph a. As such, it concentrates primarily on HVAC mechanical systems design optimization and cost analysis and should be considered a supplement to the Title I Design Report dated March 1986, wherein energy-related issues pertaining to the building envelope and orientation as well as electrical systems design are discussed.

    18. COBRA-SFS (Spent Fuel Storage): A thermal-hydraulic analysis computer code: Volume 2, User's manual

      SciTech Connect (OSTI)

      Rector, D.R.; Cuta, J.M.; Lombardo, N.J.; Michener, T.E.; Wheeler, C.L.

      1986-11-01

      COBRA-SFS (Spent Fuel Storage) is a general thermal-hydraulic analysis computer code used to predict temperatures and velocities in a wide variety of systems. The code was refined and specialized for spent fuel storage system analyses for the US Department of Energy's Commercial Spent Fuel Management Program. The finite-volume equations governing mass, momentum, and energy conservation are written for an incompressible, single-phase fluid. The flow equations model a wide range of conditions including natural circulation. The energy equations include the effects of solid and fluid conduction, natural convection, and thermal radiation. The COBRA-SFS code is structured to perform both steady-state and transient calculations; however, the transient capability has not yet been validated. This volume contains the input instructions for COBRA-SFS and an auxiliary radiation exchange factor code, RADX-1. It is intended to aid the user in becoming familiar with the capabilities and modeling conventions of the code.

    19. Station for X-ray structural analysis of materials and single crystals (including nanocrystals) on a synchrotron radiation beam from the wiggler at the Siberia-2 storage ring

      SciTech Connect (OSTI)

      Kheiker, D. M. Kovalchuk, M. V.; Korchuganov, V. N.; Shilin, Yu. N.; Shishkov, V. A.; Sulyanov, S. N.; Dorovatovskii, P. V.; Rubinsky, S. V.; Rusakov, A. A.

      2007-11-15

      The design of the station for structural analysis of polycrystalline materials and single crystals (including nanoobjects and macromolecular crystals) on a synchrotron radiation beam from the superconducting wiggler of the Siberia-2 storage ring is described. The wiggler was constructed at the Budker Institute of Nuclear Physics of the Siberian Division of the Russian Academy of Sciences. The X-ray optical scheme of the station involves a (1, -1) double-crystal monochromator with a fixed position of the monochromatic beam and a sagittal bending of the second crystal, segmented mirrors bent by piezoelectric motors, and a (2θ, ω, φ) three-circle goniometer with a fixed tilt angle. Almost all devices of the station were designed and fabricated at the Shubnikov Institute of Crystallography of the Russian Academy of Sciences. The Bruker APEX II two-dimensional CCD detector will serve as the detector in the station.

    20. Internal air flow analysis of a bladeless micro aerial vehicle hemisphere body using computational fluid dynamic

      SciTech Connect (OSTI)

      Othman, M. N. K.; Zuradzman, M. Razlan; Hazry, D.; Khairunizam, Wan; Shahriman, A. B.; Yaacob, S.; Ahmed, S. Faiz; and others

      2014-12-04

      This paper explains the analysis of the internal air flow velocity of a bladeless vertical takeoff and landing (VTOL) micro aerial vehicle (MAV) hemisphere body. In mechanical design, before producing a prototype model, several analyses should be done to ensure the product's effectiveness and efficiency. Two types of analysis are commonly used in mechanical design: mathematical modeling and computational fluid dynamics (CFD). In this work, CFD was performed using SolidWorks Flow Simulation software. The design aims to overcome the drawbacks of the ordinary quadrotor UAV, which is larger because it uses four rotors and exposes its propellers to the environment. The bladeless MAV body is designed to protect all electronic parts, which means it can be used in rainy conditions. It is also intended to increase the thrust produced by the ducted propeller compared to an exposed propeller. From the analysis results, the air flow velocity in the ducted area increased to twice the inlet air velocity. This means that the duct contributes to increasing the air velocity.
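The factor-of-two velocity increase quoted above is consistent with continuity for incompressible flow, as the small sketch below shows; the areas and velocities are illustrative, not taken from the paper.

```python
# Continuity for incompressible flow: A_in * V_in = A_duct * V_duct.
# A 2x velocity increase implies the duct area is about half the inlet's.
def duct_velocity(v_in: float, a_in: float, a_duct: float) -> float:
    """Velocity in the duct from conservation of volumetric flow."""
    return v_in * a_in / a_duct

print(duct_velocity(v_in=5.0, a_in=0.02, a_duct=0.01))  # 10.0 m/s
```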

    1. The Radiological Safety Analysis Computer Program (RSAC-5) user's manual. Revision 1

      SciTech Connect (OSTI)

      Wenzel, D.R.

      1994-02-01

      The Radiological Safety Analysis Computer Program (RSAC-5) calculates the consequences of the release of radionuclides to the atmosphere. Using a personal computer, a user can generate a fission product inventory from either reactor operating history or nuclear criticalities. RSAC-5 models the effects of high-efficiency particulate air filters or other cleanup systems and calculates decay and ingrowth during transport through processes, facilities, and the environment. Doses are calculated through the inhalation, immersion, ground surface, and ingestion pathways. RSAC+, a menu-driven companion program to RSAC-5, assists users in creating and running RSAC-5 input files. This user's manual contains the mathematical models and operating instructions for RSAC-5 and RSAC+. Instructions, screens, and examples are provided to guide the user through the functions provided by RSAC-5 and RSAC+. These programs are designed for users who are familiar with radiological dose assessment methods.
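The decay-and-ingrowth step described above can be illustrated with a two-member Bateman chain followed by a simple inhalation dose estimate. The sketch below uses an assumed atmospheric dilution factor, a hypothetical daughter nuclide, and illustrative inventory values; it does not reproduce RSAC-5's models.

```python
# Sketch of decay/ingrowth (two-member Bateman chain) followed by an
# inhalation dose estimate from activity, dilution factor (chi/Q),
# breathing rate, and a dose conversion factor. Inputs are illustrative.
import numpy as np

lam1 = np.log(2) / (8.02 * 86400)   # I-131 decay constant, 1/s
lam2 = np.log(2) / (2.3 * 3600)     # hypothetical daughter, 1/s
N1_0 = 1.0e15                       # parent atoms at release

def bateman_daughter(t):
    """Daughter atoms at time t for a 2-member chain (Bateman solution)."""
    return N1_0 * lam1 / (lam2 - lam1) * (np.exp(-lam1 * t) - np.exp(-lam2 * t))

t = 3600.0                                      # 1 h transport time
A_parent = lam1 * N1_0 * np.exp(-lam1 * t)      # parent activity, Bq
chi_q, breathing, dcf = 1e-5, 3.3e-4, 7.4e-9    # s/m^3, m^3/s, Sv/Bq (assumed)
dose = A_parent * chi_q * breathing * dcf       # inhalation dose, Sv
print(f"daughter atoms: {bateman_daughter(t):.3e}, dose: {dose:.2e} Sv")
```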

    2. Analysis and selection of optimal function implementations in massively parallel computer

      DOE Patents [OSTI]

      Archer, Charles Jens; Peters, Amanda; Ratterman, Joseph D.

      2011-05-31

      An apparatus, program product and method optimize the operation of a parallel computer system by, in part, collecting performance data for a set of implementations of a function capable of being executed on the parallel computer system based upon the execution of the set of implementations under varying input parameters in a plurality of input dimensions. The collected performance data may be used to generate selection program code that is configured to call selected implementations of the function in response to a call to the function under varying input parameters. The collected performance data may be used to perform more detailed analysis to ascertain the comparative performance of the set of implementations of the function under the varying input parameters.
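
      A rough sketch of the idea follows: time a set of interchangeable implementations across a few input sizes, then build a selector that dispatches to the fastest one for a given input. The function names and sizes are invented for illustration and are not from the patent.

      ```python
      import timeit

      # Sketch of the selection idea: benchmark interchangeable implementations
      # of a function across input sizes, then dispatch to the fastest one per
      # input regime. Function names and sizes are invented, not from the patent.
      def sum_loop(xs):
          total = 0
          for x in xs:
              total += x
          return total

      def sum_builtin(xs):
          return sum(xs)

      IMPLS = [sum_loop, sum_builtin]

      def build_selector(sizes=(10, 1000, 100000)):
          """Collect timing data per input size and return a dispatching wrapper."""
          table = {}
          for n in sizes:
              data = list(range(n))
              table[n] = min(IMPLS,
                             key=lambda f: timeit.timeit(lambda: f(data), number=20))
          def selector(xs):
              # dispatch using the implementation benchmarked at the nearest size
              n = min(table, key=lambda s: abs(s - len(xs)))
              return table[n](xs)
          return selector

      fast_sum = build_selector()
      print(fast_sum(list(range(500))))
      ```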

    3. High-Performance Computing for Real-Time Grid Analysis and Operation

      SciTech Connect (OSTI)

      Huang, Zhenyu; Chen, Yousu; Chavarría-Miranda, Daniel

      2013-10-31

      Power grids worldwide are undergoing an unprecedented transition as a result of grid evolution meeting information revolution. The grid evolution is largely driven by the desire for green energy. Emerging grid technologies such as renewable generation, smart loads, plug-in hybrid vehicles, and distributed generation provide opportunities to generate energy from green sources and to manage energy use for better system efficiency. With utility companies actively deploying these technologies, a high level of penetration of these new technologies is expected in the next 5-10 years, bringing in a level of intermittency, uncertainty, and complexity that the grid has not seen and was not designed for. On the other hand, the information infrastructure in the power grid is being revolutionized with large-scale deployment of sensors and meters in both the transmission and distribution networks. The future grid will have two-way flows of both electrons and information. The challenge is how to take advantage of the information revolution: pull the large amount of data in, process it in real time, and put information out to manage grid evolution. Without addressing this challenge, the opportunities in grid evolution will remain unfulfilled. This transition poses grand challenges in grid modeling, simulation, and information presentation. The computational complexity of underlying power grid modeling and simulation will significantly increase in the next decade due to an increased model size and a decreased time window allowed to compute model solutions. High-performance computing is essential to enable this transition. The essential technical barrier is to vastly increase the computational speed so operation response time can be reduced from minutes to seconds and sub-seconds. The speed at which key functions such as state estimation and contingency analysis are conducted (typically every 3-5 minutes) needs to be dramatically increased so that the analysis of contingencies is both
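
      One ingredient of the needed speedup is that contingency screening is embarrassingly parallel. The sketch below shows that pattern with a placeholder "solver"; a real implementation would run a power-flow solution per outage case.

      ```python
      import math
      from concurrent.futures import ProcessPoolExecutor

      # Sketch of the embarrassingly parallel pattern behind faster contingency
      # screening: solve many independent outage cases across worker processes.
      # The "solver" below is a stand-in, not a real power-flow model.
      def solve_contingency(case_id):
          # placeholder workload standing in for one outage-case power flow
          return case_id, sum(math.sin(i * case_id) for i in range(10000))

      if __name__ == "__main__":
          cases = range(1, 201)                    # e.g., an N-1 outage list
          with ProcessPoolExecutor() as pool:
              results = dict(pool.map(solve_contingency, cases))
          print(f"screened {len(results)} contingencies")
      ```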

    4. COBRA-SFS (Spent Fuel Storage): A thermal-hydraulic analysis computer code: Volume 3, Validation assessments

      SciTech Connect (OSTI)

      Lombardo, N.J.; Cuta, J.M.; Michener, T.E.; Rector, D.R.; Wheeler, C.L.

      1986-12-01

      This report presents the results of the COBRA-SFS (Spent Fuel Storage) computer code validation effort. COBRA-SFS, while refined and specialized for spent fuel storage system analyses, is a lumped-volume thermal-hydraulic analysis computer code that predicts temperature and velocity distributions in a wide variety of systems. Through comparisons of code predictions with spent fuel storage system test data, the code's mathematical, physical, and mechanistic models are assessed, and empirical relations defined. The six test cases used to validate the code and code models include single-assembly and multiassembly storage systems under a variety of fill media and system orientations and include unconsolidated and consolidated spent fuel. In its entirety, the test matrix investigates the contributions of convection, conduction, and radiation heat transfer in spent fuel storage systems. To demonstrate the code's performance for a wide variety of storage systems and conditions, comparisons of code predictions with data are made for 14 runs from the experimental data base. The cases selected exercise the important code models and code logic pathways and are representative of the types of simulations required for spent fuel storage system design and licensing safety analyses. For each test, a test description, a summary of the COBRA-SFS computational model, assumptions, and correlations employed are presented. For the cases selected, axial and radial temperature profile comparisons of code predictions with test data are provided, and conclusions drawn concerning the code models and the ability to predict the data and data trends. Comparisons of code predictions with test data demonstrate the ability of COBRA-SFS to successfully predict temperature distributions in unconsolidated or consolidated single and multiassembly spent fuel storage systems.

    5. Surveillance Analysis Computer System (SACS): Software requirements specification (SRS). Revision 2

      SciTech Connect (OSTI)

      Glasscock, J.A.

      1995-03-08

      This document is the primary document establishing requirements for the Surveillance Analysis Computer System (SACS) database, an Impact Level 3Q system. SACS stores information on tank temperatures, surface levels, and interstitial liquid levels. This information is retrieved by the customer through a PC-based interface and is then available to a number of other software tools. The software requirements specification (SRS) describes the system requirements for the SACS Project, and follows the Standard Engineering Practices (WHC-CM-6-1), Software Practices (WHC-CM-3-10) and Quality Assurance (WHC-CM-4-2, QR 19.0) policies.

    6. Nuclear Engineering Computer Models for In-Core Fuel Management Analysis.

      Energy Science and Technology Software Center (OSTI)

      1992-06-12

      Version 00 VPI-NECM is a nuclear engineering computer system of modules for in-core fuel management analysis. The system consists of 6 independent programs designed to calculate: (1) FARCON - neutron slowing down and epithermal group constants, (2) SLOCON - thermal neutron spectrum and group constants, (3) DISFAC - slow neutron disadvantage factors, (4) ODOG - solution of a one group neutron diffusion equation, (5) ODMUG - three group criticality problem, (6) FUELBURN - fuel burnup in slow neutron fission reactors.
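
      For flavor, the sketch below solves a one-group slab criticality problem by power iteration, in the spirit of the ODOG/ODMUG modules; the cross sections and mesh are illustrative, not the modules' data.

      ```python
      import numpy as np

      # One-group slab criticality sketch by power iteration, in the spirit of
      # the ODOG/ODMUG modules; cross sections and mesh are illustrative.
      D, SIG_A, NU_SIG_F = 1.0, 0.07, 0.08      # cm, 1/cm, 1/cm
      L, N = 100.0, 200                          # slab width (cm), mesh cells
      h = L / N

      # Finite-difference loss operator with zero-flux boundary conditions.
      A = np.zeros((N, N))
      for i in range(N):
          A[i, i] = 2.0 * D / h**2 + SIG_A
          if i > 0:
              A[i, i - 1] = -D / h**2
          if i < N - 1:
              A[i, i + 1] = -D / h**2

      phi, k = np.ones(N), 1.0
      for _ in range(200):                       # power iteration on k and flux
          phi_new = np.linalg.solve(A, NU_SIG_F * phi / k)
          k *= phi_new.sum() / phi.sum()
          phi = phi_new / phi_new.max()
      print(f"k-effective ~ {k:.4f}")
      ```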

    7. Analysis/plot generation code with significance levels computed using Kolmogorov-Smirnov statistics valid for both large and small samples

      SciTech Connect (OSTI)

      Kurtz, S.E.; Fields, D.E.

      1983-10-01

      This report describes a version of the TERPED/P computer code that is very useful for small data sets. A new algorithm for determining the Kolmogorov-Smirnov (KS) statistics is used to extend program applicability. The TERPED/P code facilitates the analysis of experimental data and assists the user in determining its probability distribution function. Graphical and numerical tests are performed interactively in accordance with the user's assumption of normally or log-normally distributed data. Statistical analysis options include computation of the chi-square statistic and the KS one-sample test statistic and the corresponding significance levels. Cumulative probability plots of the user's data are generated either via a local graphics terminal, a local line printer or character-oriented terminal, or a remote high-resolution graphics device such as the FR80 film plotter or the Calcomp paper plotter. Several useful computer methodologies suffer from limitations of their implementations of the KS nonparametric test. This test is one of the more powerful analysis tools for examining the validity of an assumption about the probability distribution of a set of data. KS algorithms are found in other analysis codes, including the Statistical Analysis Subroutine (SAS) package and earlier versions of TERPED. The inability of these algorithms to generate significance levels for sample sizes less than 50 has limited their usefulness. The release of the TERPED code described herein contains algorithms to allow computation of the KS statistic and significance level for data sets of, if the user wishes, as few as three points. Values computed for the KS statistic are within 3% of the correct value for all data set sizes.
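
      A present-day equivalent of the small-sample capability can be sketched with SciPy, which computes exact KS significance levels for small n; the data points below are illustrative, not from the report.

      ```python
      import numpy as np
      from scipy import stats

      # Small-sample KS goodness-of-fit sketch in the spirit of TERPED/P; the
      # five data points are illustrative. method="exact" requests the exact
      # small-sample significance level (SciPy >= 1.7; older releases name the
      # argument "mode").
      x = np.array([4.1, 5.3, 4.8, 5.9, 5.1])

      stat, p = stats.kstest(x, "norm", args=(x.mean(), x.std(ddof=1)),
                             method="exact")
      print(f"KS statistic = {stat:.3f}, significance level = {p:.3f}")
      ```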

    8. Computational mechanics

      SciTech Connect (OSTI)

      Raboin, P J

      1998-01-01

      The Computational Mechanics thrust area is a vital and growing facet of the Mechanical Engineering Department at Lawrence Livermore National Laboratory (LLNL). This work supports the development of computational analysis tools in the areas of structural mechanics and heat transfer. Over 75 analysts depend on thrust area-supported software running on a variety of computing platforms to meet the demands of LLNL programs. Interactions with the Department of Defense (DOD) High Performance Computing and Modernization Program and the Defense Special Weapons Agency are of special importance as they support our ParaDyn project in its development of new parallel capabilities for DYNA3D. Working with DOD customers has been invaluable to driving this technology in directions mutually beneficial to the Department of Energy. Other projects associated with the Computational Mechanics thrust area include work with the Partnership for a New Generation Vehicle (PNGV) for ''Springback Predictability'' and with the Federal Aviation Administration (FAA) for the ''Development of Methodologies for Evaluating Containment and Mitigation of Uncontained Engine Debris.'' In this report for FY-97, there are five articles detailing three code development activities and two projects that synthesized new code capabilities with new analytic research in damage/failure and biomechanics. The articles this year are: (1) Energy- and Momentum-Conserving Rigid-Body Contact for NIKE3D and DYNA3D; (2) Computational Modeling of Prosthetics: A New Approach to Implant Design; (3) Characterization of Laser-Induced Mechanical Failure Damage of Optical Components; (4) Parallel Algorithm Research for Solid Mechanics Applications Using Finite Element Analysis; and (5) An Accurate One-Step Elasto-Plasticity Algorithm for Shell Elements in DYNA3D.

    9. Technical support document: Energy efficiency standards for consumer products: Refrigerators, refrigerator-freezers, and freezers including draft environmental assessment, regulatory impact analysis

      SciTech Connect (OSTI)

      1995-07-01

      The Energy Policy and Conservation Act (P.L. 94-163), as amended by the National Appliance Energy Conservation Act of 1987 (P.L. 100-12) and by the National Appliance Energy Conservation Amendments of 1988 (P.L. 100-357), and by the Energy Policy Act of 1992 (P.L. 102-486), provides energy conservation standards for 12 of the 13 types of consumer products covered by the Act, and authorizes the Secretary of Energy to prescribe amended or new energy standards for each type (or class) of covered product. The assessment of the proposed standards for refrigerators, refrigerator-freezers, and freezers presented in this document is designed to evaluate their economic impacts according to the criteria in the Act. It includes an engineering analysis of the cost and performance of design options to improve the efficiency of the products; forecasts of the number and average efficiency of products sold, the amount of energy the products will consume, and their prices and operating expenses; a determination of change in investment, revenues, and costs to manufacturers of the products; a calculation of the costs and benefits to consumers, electric utilities, and the nation as a whole; and an assessment of the environmental impacts of the proposed standards.
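
      The consumer-side calculation reduces, in its simplest form, to a life-cycle cost comparison; the sketch below uses invented prices, usage, and tariff values and is not the document's engineering analysis.

      ```python
      # Life-cycle cost sketch for the consumer cost-benefit piece: purchase
      # price plus discounted operating expense. All numbers are invented and
      # are not from the technical support document.
      def life_cycle_cost(price, annual_kwh, tariff, years, discount):
          operating = sum(annual_kwh * tariff / (1 + discount) ** t
                          for t in range(1, years + 1))
          return price + operating

      base = life_cycle_cost(price=600.0, annual_kwh=700, tariff=0.12,
                             years=15, discount=0.05)
      eff = life_cycle_cost(price=650.0, annual_kwh=500, tariff=0.12,
                            years=15, discount=0.05)
      print(f"life-cycle cost: baseline ${base:.0f} vs efficient ${eff:.0f}")
      ```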

    10. Computer-aided breast MR image feature analysis for prediction of tumor response to chemotherapy

      SciTech Connect (OSTI)

      Aghaei, Faranak; Tan, Maxine; Liu, Hong; Zheng, Bin; Hollingsworth, Alan B.; Qian, Wei

      2015-11-15

      Purpose: To identify a new clinical marker based on quantitative kinetic image features analysis and assess its feasibility to predict tumor response to neoadjuvant chemotherapy. Methods: The authors assembled a dataset involving breast MR images acquired from 68 cancer patients before undergoing neoadjuvant chemotherapy. Among them, 25 patients had complete response (CR) and 43 had partial and nonresponse (NR) to chemotherapy based on the response evaluation criteria in solid tumors. The authors developed a computer-aided detection scheme to segment breast areas and tumors depicted on the breast MR images and computed a total of 39 kinetic image features from both tumor and background parenchymal enhancement regions. The authors then applied and tested two approaches to classify between CR and NR cases. The first one analyzed each individual feature and applied a simple feature fusion method that combines classification results from multiple features. The second approach tested an attribute selected classifier that integrates an artificial neural network (ANN) with a wrapper subset evaluator, which was optimized using a leave-one-case-out validation method. Results: In the pool of 39 features, 10 yielded relatively higher classification performance with the areas under receiver operating characteristic curves (AUCs) ranging from 0.61 to 0.78 to classify between CR and NR cases. Using a feature fusion method, the maximum AUC = 0.85 ± 0.05. Using the ANN-based classifier, AUC value significantly increased to 0.96 ± 0.03 (p < 0.01). Conclusions: This study demonstrated that quantitative analysis of kinetic image features computed from breast MR images acquired prechemotherapy has potential to generate a useful clinical marker in predicting tumor response to chemotherapy.
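
      A minimal sketch of the second approach's validation loop follows, using scikit-learn in place of the authors' wrapper-based attribute selector and with synthetic data standing in for the 68-case, 39-feature kinetic set.

      ```python
      import numpy as np
      from sklearn.metrics import roc_auc_score
      from sklearn.model_selection import LeaveOneOut
      from sklearn.neural_network import MLPClassifier

      # Leave-one-case-out validation sketch with an ANN classifier; synthetic
      # data stand in for the 68-case, 39-feature kinetic set, and scikit-learn
      # replaces the authors' wrapper-based attribute selector.
      rng = np.random.default_rng(0)
      X = rng.normal(size=(68, 39))
      y = rng.integers(0, 2, size=68)            # 1 = complete response (CR)

      scores = np.empty(len(y))
      for train, test in LeaveOneOut().split(X):
          clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000,
                              random_state=0)
          clf.fit(X[train], y[train])
          scores[test] = clf.predict_proba(X[test])[:, 1]

      print(f"leave-one-out AUC = {roc_auc_score(y, scores):.2f}")
      ```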

    11. Computing Information

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      From here you can find information relating to: obtaining the right computer accounts; using NIC terminals; using BooNE's computing resources, including choosing your desktop, Kerberos, AFS, printing, and recommended applications for various common tasks; running CPU- or IO-intensive programs (batch jobs); commonly encountered problems; computing support within BooNE; bringing a computer to FNAL, or purchasing a new one; laptops; and the Computer Security Program Plan for MiniBooNE.

    12. Methods, computer readable media, and graphical user interfaces for analysis of frequency selective surfaces

      DOE Patents [OSTI]

      Kotter, Dale K [Shelley, ID; Rohrbaugh, David T [Idaho Falls, ID

      2010-09-07

      A frequency selective surface (FSS) and associated methods for modeling, analyzing and designing the FSS are disclosed. The FSS includes a pattern of conductive material formed on a substrate to form an array of resonance elements. At least one aspect of the frequency selective surface is determined by defining a frequency range including multiple frequency values, determining a frequency dependent permittivity across the frequency range for the substrate, determining a frequency dependent conductivity across the frequency range for the conductive material, and analyzing the frequency selective surface using a method of moments analysis at each of the multiple frequency values for an incident electromagnetic energy impinging on the frequency selective surface. The frequency dependent permittivity and the frequency dependent conductivity are included in the method of moments analysis.
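
      The analysis flow the patent describes can be sketched as a frequency sweep in which dispersive material properties are evaluated at each frequency point and handed to the per-frequency solve. The dispersion models below are assumed placeholders, and the "solve" is reduced to a surface-resistance evaluation rather than a full method of moments.

      ```python
      import numpy as np

      # Frequency-sweep sketch: define a frequency range, evaluate
      # frequency-dependent permittivity and conductivity at each point, and
      # feed them to the per-frequency solve. Dispersion models are assumed;
      # the "solve" is reduced to the conductor's surface resistance.
      MU0 = 4e-7 * np.pi
      freqs = np.linspace(1e12, 30e12, 30)            # Hz, illustrative range

      def permittivity(f):                            # assumed dispersive model
          return 11.7 - 0.5j * (f / 1e13)

      def conductivity(f):                            # assumed Drude-like roll-off
          return 4.1e7 / (1.0 + (f / 1e13) ** 2)

      for f in (freqs[0], freqs[-1]):
          eps, sig = permittivity(f), conductivity(f)
          rs = np.sqrt(np.pi * f * MU0 / sig)         # surface resistance (ohm)
          print(f"f = {f:.2e} Hz: eps_r = {eps:.2f}, Rs = {rs:.3f} ohm")
      ```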

    13. TRUMP-BD: A computer code for the analysis of nuclear fuel assemblies under severe accident conditions

      SciTech Connect (OSTI)

      Lombardo, N.J.; Marseille, T.J.; White, M.D.; Lowery, P.S.

      1990-06-01

      TRUMP-BD (Boil Down) is an extension of the TRUMP (Edwards 1972) computer program for the analysis of nuclear fuel assemblies under severe accident conditions. This extension allows prediction of the heat transfer rates, metal-water oxidation rates, fission product release rates, steam generation and consumption rates, and temperature distributions for nuclear fuel assemblies under core uncovery conditions. The heat transfer processes include conduction in solid structures, convection across fluid-solid boundaries, and radiation between interacting surfaces. Metal-water reaction kinetics are modeled with empirical relationships to predict the oxidation rates of steam-exposed Zircaloy and uranium metal. The metal-water oxidation models are parabolic in form with an Arrhenius temperature dependence. Uranium oxidation begins when fuel cladding failure occurs; Zircaloy oxidation occurs continuously at temperatures above 1300°F when metal and steam are available. From the metal-water reactions, the hydrogen generation rate, total hydrogen release, and temporal and spatial distribution of oxide formations are computed. Consumption of steam from the oxidation reactions and the effect of hydrogen on the coolant properties are modeled for independent coolant flow channels. Fission product release from exposed uranium metal and Zircaloy-clad fuel is modeled using empirical time and temperature relationships that consider the release to be subject to oxidation and volatilization/diffusion ("bake-out") release mechanisms. Release of the volatile species iodine (I), tellurium (Te), cesium (Cs), ruthenium (Ru), strontium (Sr), zirconium (Zr), cerium (Ce), and barium (Ba) from uranium metal fuel may be modeled.
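
      The parabolic/Arrhenius form named in the abstract can be sketched as follows; the rate constants are invented placeholders rather than TRUMP-BD's correlations.

      ```python
      import numpy as np

      # Parabolic metal-water oxidation with Arrhenius temperature dependence,
      # the functional form named in the abstract. The rate constants below are
      # invented placeholders, not TRUMP-BD's correlations.
      A_PRE = 3.3e2       # pre-exponential, (kg/m^2)^2/s (assumed)
      Q_OVER_R = 2.0e4    # activation temperature, K (assumed)

      def oxide_gain(temps_K, dt):
          """March w^2 = Kp(T)*t through a temperature history; returns the
          oxide mass gain w (kg/m^2) after each step."""
          w, history = 1.0e-6, []
          for T in temps_K:
              kp = A_PRE * np.exp(-Q_OVER_R / T)   # parabolic rate constant
              w = np.sqrt(w**2 + kp * dt)          # integrates dw/dt = kp/(2w)
              history.append(w)
          return np.array(history)

      print(f"final gain: {oxide_gain(np.linspace(1100.0, 1500.0, 5), 10.0)[-1]:.3f} kg/m^2")
      ```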

    14. An introduction to computer viruses

      SciTech Connect (OSTI)

      Brown, D.R.

      1992-03-01

      This report on computer viruses is based upon a thesis written for the Master of Science degree in Computer Science from the University of Tennessee in December 1989 by David R. Brown. This thesis is entitled An Analysis of Computer Virus Construction, Proliferation, and Control and is available through the University of Tennessee Library. This paper contains an overview of the computer virus arena that can help the reader to evaluate the threat that computer viruses pose. The extent of this threat can only be determined by evaluating many different factors. These factors include the relative ease with which a computer virus can be written, the motivation involved in writing a computer virus, the damage and overhead incurred by infected systems, and the legal implications of computer viruses, among others. Based upon the research, the development of a computer virus seems to require more persistence than technical expertise. This is a frightening proclamation to the computing community. The education of computer professionals to the dangers that viruses pose to the welfare of the computing industry as a whole is stressed as a means of inhibiting the current proliferation of computer virus programs. Recommendations are made to assist computer users in preventing infection by computer viruses. These recommendations support solid general computer security practices as a means of combating computer viruses.

    15. National cyber defense high performance computing and analysis : concepts, planning and roadmap.

      SciTech Connect (OSTI)

      Hamlet, Jason R.; Keliiaa, Curtis M.

      2010-09-01

      There is a national cyber dilemma that threatens the very fabric of government, commercial and private use operations worldwide. Much is written about 'what' the problem is, and though the basis for this paper is an assessment of the problem space, we target the 'how' solution space of the wide-area national information infrastructure through the advancement of science, technology, evaluation and analysis with actionable results intended to produce a more secure national information infrastructure and a comprehensive national cyber defense capability. This cybersecurity High Performance Computing (HPC) analysis concepts, planning and roadmap activity was conducted as an assessment of cybersecurity analysis as a fertile area of research and investment for high value cybersecurity wide-area solutions. This report and a related SAND2010-4765 Assessment of Current Cybersecurity Practices in the Public Domain: Cyber Indications and Warnings Domain report are intended to provoke discussion throughout a broad audience about developing a cohesive HPC centric solution to wide-area cybersecurity problems.

    16. Hydropower generation management under uncertainty via scenario analysis and parallel computation

      SciTech Connect (OSTI)

      Escudero, L.F.; Garcia, C.; Fuente, J.L. de la; Prieto, F.J.

      1996-05-01

      The authors present a modeling framework for the robust solution of hydroelectric power management problems with uncertainty in the values of the water inflows and outflows. A deterministic treatment of the problem provides unsatisfactory results, except for very short time horizons. The authors describe a model based on scenario analysis that allows a satisfactory treatment of uncertainty in the model data for medium and long-term planning problems. Their approach results in a huge model with a network submodel per scenario plus coupling constraints. The size of the problem and the structure of the constraints are adequate for the use of decomposition techniques and parallel computation tools. The authors present computational results for both sequential and parallel implementation versions of the codes, running on a cluster of workstations. The codes have been tested on data obtained from the reservoir network of Iberdrola, a power utility owning 50% of the total installed hydroelectric capacity of Spain, and generating 40% of the total energy demand.
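
      A toy version of the scenario model: one reservoir, two periods, and two inflow scenarios, with the first-period release shared by both scenarios as the coupling (nonanticipativity) constraint. All data are invented.

      ```python
      from scipy.optimize import linprog

      # Toy scenario-analysis model: one reservoir, two periods, two inflow
      # scenarios, with the first-period release shared by both scenarios (the
      # coupling constraint). All data are invented.
      S0, CAP = 50.0, 100.0                                 # storage (hm^3)
      INFLOW = {"wet": (40.0, 60.0), "dry": (40.0, 10.0)}   # per period
      PROB = {"wet": 0.5, "dry": 0.5}
      PRICE = (1.0, 1.2)                                    # revenue per unit released

      # Variables x = [r1, r2_wet, r2_dry]; maximize expected revenue.
      c = [-PRICE[0], -PROB["wet"] * PRICE[1], -PROB["dry"] * PRICE[1]]

      A_ub, b_ub = [], []
      for idx, (name, (i1, i2)) in enumerate(INFLOW.items(), start=1):
          A_ub.append([1, 0, 0]); b_ub.append(S0 + i1)         # period-1 storage >= 0
          A_ub.append([-1, 0, 0]); b_ub.append(CAP - S0 - i1)  # storage <= capacity
          row = [1, 0, 0]; row[idx] = 1                        # period-2 storage >= 0
          A_ub.append(row); b_ub.append(S0 + i1 + i2)

      res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0.0, 80.0)] * 3)
      print(res.x.round(1), f"expected revenue = {-res.fun:.1f}")
      ```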

    17. Hydropower generation management under uncertainty via scenario analysis and parallel computation

      SciTech Connect (OSTI)

      Escudero, L.F.; Garcia, C.; Fuente, J.L. de la; Prieto, F.J.

      1995-12-31

      The authors present a modeling framework for the robust solution of hydroelectric power management problems with uncertainty in the values of the water inflows and outflows. A deterministic treatment of the problem provides unsatisfactory results, except for very short time horizons. The authors describe a model based on scenario analysis that allows a satisfactory treatment of uncertainty in the model data for medium and long-term planning problems. This approach results in a huge model with a network submodel per scenario plus coupling constraints. The size of the problem and the structure of the constraints are adequate for the use of decomposition techniques and parallel computation tools. The authors present computational results for both sequential and parallel implementation versions of the codes, running on a cluster of workstations. The codes have been tested on data obtained from the reservoir network of Iberdrola, a power utility owning 50% of the total installed hydroelectric capacity of Spain, and generating 40% of the total energy demand.

    18. Pump apparatus including deconsolidator

      DOE Patents [OSTI]

      Sonwane, Chandrashekhar; Saunders, Timothy; Fitzsimmons, Mark Andrew

      2014-10-07

      A pump apparatus includes a particulate pump that defines a passage that extends from an inlet to an outlet. A duct is in flow communication with the outlet. The duct includes a deconsolidator configured to fragment particle agglomerates received from the passage.

    19. Methods, apparatuses, and computer-readable media for projectional morphological analysis of N-dimensional signals

      DOE Patents [OSTI]

      Glazoff, Michael V.; Gering, Kevin L.; Garnier, John E.; Rashkeev, Sergey N.; Pyt'ev, Yuri Petrovich

      2016-05-17

      Embodiments discussed herein in the form of methods, systems, and computer-readable media deal with the application of advanced "projectional" morphological algorithms for solving a broad range of problems. In a method of performing projectional morphological analysis, an N-dimensional input signal is supplied. At least one N-dimensional form indicative of at least one feature in the N-dimensional input signal is identified. The N-dimensional input signal is filtered relative to the at least one N-dimensional form and an N-dimensional output signal is generated indicating results of the filtering at least as differences in the N-dimensional input signal relative to the at least one N-dimensional form.
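
      The flavor of form-relative filtering can be sketched with an ordinary grayscale morphological opening; this illustrates filtering a 2-D signal against a structuring form and taking the residue, not the patent's projectional algorithms.

      ```python
      import numpy as np
      from scipy import ndimage

      # Form-relative filtering sketch via a grayscale morphological opening;
      # this shows the flavor of filtering an N-dimensional signal against a
      # structuring form, not the patent's projectional algorithms.
      rng = np.random.default_rng(0)
      signal = rng.normal(0.0, 0.1, size=(64, 64))
      signal[20:30, 20:30] += 1.0                 # embedded 10x10 square feature

      form = np.ones((8, 8))                      # structuring form (assumed)
      opened = ndimage.grey_opening(signal, footprint=form)
      residue = signal - opened                   # what the form cannot explain
      print(f"feature region mean after opening: {opened[20:30, 20:30].mean():.2f}")
      ```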

    20. Performance Refactoring of Instrumentation, Measurement, and Analysis Technologies for Petascale Computing. The PRIMA Project

      SciTech Connect (OSTI)

      Malony, Allen D.; Wolf, Felix G.

      2014-01-31

      The growing number of cores provided by today's high-end computing systems presents substantial challenges to application developers in their pursuit of parallel efficiency. To find the most effective optimization strategy, application developers need insight into the runtime behavior of their code. The University of Oregon (UO) and the Juelich Supercomputing Centre of Forschungszentrum Juelich (FZJ) develop the performance analysis tools TAU and Scalasca, respectively, which allow high-performance computing (HPC) users to collect and analyze relevant performance data even at very large scales. TAU and Scalasca are considered among the most advanced parallel performance systems available, and are used extensively across HPC centers in the U.S., Germany, and around the world. The TAU and Scalasca groups share a heritage of parallel performance tool research and partnership throughout the past fifteen years. Indeed, the close interactions of the two groups resulted in a cross-fertilization of tool ideas and technologies that pushed TAU and Scalasca to what they are today. It also produced two performance systems with an increasing degree of functional overlap. While each tool has its specific analysis focus, the tools were implementing measurement infrastructures that were substantially similar. Because each tool provides complementary performance analysis, sharing of measurement results is valuable to provide the user with more facets to understand performance behavior. However, each measurement system was producing performance data in different formats, requiring data interoperability tools to be created. A common measurement and instrumentation system was needed to more closely integrate TAU and Scalasca and to avoid the duplication of development and maintenance effort. The PRIMA (Performance Refactoring of Instrumentation, Measurement, and Analysis) project was proposed over three years ago as a joint international effort between UO and FZJ to accomplish

    1. Performance Refactoring of Instrumentation, Measurement, and Analysis Technologies for Petascale Computing: the PRIMA Project

      SciTech Connect (OSTI)

      Malony, Allen D.; Wolf, Felix G.

      2014-01-31

      The growing number of cores provided by today's high-end computing systems presents substantial challenges to application developers in their pursuit of parallel efficiency. To find the most effective optimization strategy, application developers need insight into the runtime behavior of their code. The University of Oregon (UO) and the Juelich Supercomputing Centre of Forschungszentrum Juelich (FZJ) develop the performance analysis tools TAU and Scalasca, respectively, which allow high-performance computing (HPC) users to collect and analyze relevant performance data even at very large scales. TAU and Scalasca are considered among the most advanced parallel performance systems available, and are used extensively across HPC centers in the U.S., Germany, and around the world. The TAU and Scalasca groups share a heritage of parallel performance tool research and partnership throughout the past fifteen years. Indeed, the close interactions of the two groups resulted in a cross-fertilization of tool ideas and technologies that pushed TAU and Scalasca to what they are today. It also produced two performance systems with an increasing degree of functional overlap. While each tool has its specific analysis focus, the tools were implementing measurement infrastructures that were substantially similar. Because each tool provides complementary performance analysis, sharing of measurement results is valuable to provide the user with more facets to understand performance behavior. However, each measurement system was producing performance data in different formats, requiring data interoperability tools to be created. A common measurement and instrumentation system was needed to more closely integrate TAU and Scalasca and to avoid the duplication of development and maintenance effort. The PRIMA (Performance Refactoring of Instrumentation, Measurement, and Analysis) project was proposed over three years ago as a joint international effort between UO and FZJ to accomplish these

    2. Optical modulator including graphene

      DOE Patents [OSTI]

      Liu, Ming; Yin, Xiaobo; Zhang, Xiang

      2016-06-07

      The present invention provides for a one or more layer graphene optical modulator. In a first exemplary embodiment the optical modulator includes an optical waveguide, a nanoscale oxide spacer adjacent to a working region of the waveguide, and a monolayer graphene sheet adjacent to the spacer. In a second exemplary embodiment, the optical modulator includes at least one pair of active media, where the pair includes an oxide spacer, a first monolayer graphene sheet adjacent to a first side of the spacer, and a second monolayer graphene sheet adjacent to a second side of the spacer, and at least one optical waveguide adjacent to the pair.

    3. SAFE: A computer code for the steady-state and transient thermal analysis of LMR fuel elements

      SciTech Connect (OSTI)

      Hayes, S.L.

      1993-12-01

      SAFE is a computer code developed for both the steady-state and transient thermal analysis of single LMR fuel elements. The code employs a two-dimensional control-volume based finite difference methodology with fully implicit time marching to calculate the temperatures throughout a fuel element and its associated coolant channel for both the steady-state and transient events. The code makes no structural calculations or predictions whatsoever. It does, however, accept as input structural parameters within the fuel such as the distributions of porosity and fuel composition, as well as heat generation, to allow a thermal analysis to be performed on a user-specified fuel structure. The code was developed with ease of use in mind. An interactive input file generator and material property correlations internal to the code are available to expedite analyses using SAFE. This report serves as a complete design description of the code as well as a user's manual. A sample calculation made with SAFE is included to highlight some of the code's features. Complete input and output files for the sample problem are provided.
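
      A minimal sketch of a fully implicit finite-difference conduction march of the kind described, on a 1-D slab with an insulated centerline and a fixed coolant-side temperature; the geometry and properties are illustrative, not SAFE's fuel model.

      ```python
      import numpy as np

      # Fully implicit 1-D finite-difference conduction sketch: insulated
      # centerline, fixed coolant-side temperature, uniform heat generation.
      # Geometry and properties are illustrative, not SAFE's fuel model.
      N, L = 50, 0.005                     # nodes, half-thickness (m)
      dx, dt = L / (N - 1), 0.01           # mesh size (m), time step (s)
      k, rho_cp, qvol = 20.0, 3.0e6, 5.0e8 # W/m-K, J/m^3-K, W/m^3
      r = (k / rho_cp) * dt / dx**2

      # Tridiagonal implicit operator: (I + r*K) T_new = T_old + dt*q/(rho*cp).
      A = np.zeros((N, N))
      for i in range(1, N - 1):
          A[i, i - 1], A[i, i], A[i, i + 1] = -r, 1.0 + 2.0 * r, -r
      A[0, 0], A[0, 1] = 1.0 + 2.0 * r, -2.0 * r   # symmetry (insulated) face
      A[-1, -1] = 1.0                               # Dirichlet coolant node

      T = np.full(N, 600.0)                         # K, initial condition
      for _ in range(1000):                         # march toward steady state
          b = T + dt * qvol / rho_cp
          b[-1] = 600.0                             # coolant temperature (K)
          T = np.linalg.solve(A, b)
      print(f"centerline temperature ~ {T[0]:.0f} K")
      ```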

    4. Use of model calibration to achieve high accuracy in analysis of computer networks

      DOE Patents [OSTI]

      Frogner, Bjorn; Guarro, Sergio; Scharf, Guy

      2004-05-11

      A system and method are provided for creating a network performance prediction model, and calibrating the prediction model, through application of network load statistical analyses. The method includes characterizing the measured load on the network, which may include background load data obtained over time, and may further include directed load data representative of a transaction-level event. Probabilistic representations of load data are derived to characterize the statistical persistence of the network performance variability and to determine delays throughout the network. The probabilistic representations are applied to the network performance prediction model to adapt the model for accurate prediction of network performance. Certain embodiments of the method and system may be used for analysis of the performance of a distributed application characterized as data packet streams.

    5. Computational Fluid Dynamic Analysis of the VHTR Lower Plenum Standard Problem

      SciTech Connect (OSTI)

      Richard W. Johnson; Richard R. Schultz

      2009-07-01

      The United States Department of Energy is promoting the resurgence of nuclear power in the U. S. for both electrical power generation and production of process heat required for industrial processes such as the manufacture of hydrogen for use as a fuel in automobiles. The DOE project is called the next generation nuclear plant (NGNP) and is based on a Generation IV reactor concept called the very high temperature reactor (VHTR), which will use helium as the coolant at temperatures ranging from 450 °C to perhaps 1000 °C. While computational fluid dynamics (CFD) has not been used for past safety analysis for nuclear reactors in the U. S., it is being considered for safety analysis for existing and future reactors. It is fully recognized that CFD simulation codes will have to be validated for flow physics reasonably close to actual fluid dynamic conditions expected in normal and accident operational situations. To this end, experimental data have been obtained in a scaled model of a narrow slice of the lower plenum of a prismatic VHTR. The present report presents results of CFD examinations of these data to explore potential issues with the geometry, the initial conditions, the flow dynamics and the data needed to fully specify the inlet and boundary conditions; results for several turbulence models are examined. Issues are addressed and recommendations about the data are made.

    6. Comparison of different computed radiography systems: Physical characterization and contrast detail analysis

      SciTech Connect (OSTI)

      Rivetti, Stefano; Lanconelli, Nico; Bertolini, Marco; Nitrosi, Andrea; Burani, Aldo; Acchiappati, Domenico

      2010-02-15

      Purpose: In this study, five different units based on three different technologies--traditional computed radiography (CR) units with granular phosphor and single-side reading, granular phosphor and dual-side reading, and columnar phosphor and line-scanning reading--are compared in terms of physical characterization and contrast detail analysis. Methods: The physical characterization of the five systems was obtained with the standard beam condition RQA5. Three of the units have been developed by FUJIFILM (FCR ST-VI, FCR ST-BD, and FCR Velocity U), one by Kodak (Direct View CR 975), and one by Agfa (DX-S). The quantitative comparison is based on the calculation of the modulation transfer function (MTF), noise power spectrum (NPS), and detective quantum efficiency (DQE). Noise investigation was also achieved by using a relative standard deviation analysis. Psychophysical characterization is assessed by performing a contrast detail analysis with an automatic reading of CDRAD images. Results: The most advanced units based on columnar phosphors provide MTF values in line or better than those from conventional CR systems. The greater thickness of the columnar phosphor improves the efficiency, allowing for enhanced noise properties. In fact, NPS values for standard CR systems are remarkably higher for all the investigated exposures and especially for frequencies up to 3.5 lp/mm. As a consequence, DQE values for the three units based on columnar phosphors and line-scanning reading, or granular phosphor and dual-side reading, are neatly better than those from conventional CR systems. Actually, DQE values of about 40% are easily achievable for all the investigated exposures. Conclusions: This study suggests that systems based on the dual-side reading or line-scanning reading with columnar phosphors provide a remarkable improvement when compared to conventional CR units and yield results in line with those obtained from most digital detectors for radiography.
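
      For reference, the standard chain from MTF and NPS to DQE can be sketched as DQE(f) = MTF(f)^2 / (q · NNPS(f)), with NNPS the NPS normalized by the squared mean signal and q the incident photon fluence; the curves below are invented stand-ins for measured data.

      ```python
      import numpy as np

      # Standard chain from measured MTF and NPS to DQE:
      #   DQE(f) = MTF(f)^2 / (q * NNPS(f)),
      # with NNPS the NPS normalized by the squared mean signal and q the
      # incident photon fluence. The curves below are invented stand-ins.
      f = np.linspace(0.05, 3.5, 70)        # spatial frequency (lp/mm)
      mtf = np.exp(-0.5 * f)                # assumed MTF curve
      nnps = 2.0e-6 * (1.0 + 0.1 / f)       # assumed normalized NPS (mm^2)
      q = 2.6e5                             # photons/mm^2, RQA5-like fluence

      dqe = mtf**2 / (q * nnps)
      print(f"DQE ~ {dqe[0]:.2f} at {f[0]:.2f} lp/mm, {dqe[-1]:.3f} at {f[-1]:.1f} lp/mm")
      ```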

    7. Computational analysis of an autophagy/translation switch based on mutual inhibition of MTORC1 and ULK1

      DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

      Szymańska, Paulina; Martin, Katie R.; MacKeigan, Jeffrey P.; Hlavacek, William S.; Lipniacki, Tomasz

      2015-03-11

      We constructed a mechanistic, computational model for regulation of (macro)autophagy and protein synthesis (at the level of translation). The model was formulated to study the system-level consequences of interactions among the following proteins: two key components of MTOR complex 1 (MTORC1), namely the protein kinase MTOR (mechanistic target of rapamycin) and the scaffold protein RPTOR; the autophagy-initiating protein kinase ULK1; and the multimeric energy-sensing AMP-activated protein kinase (AMPK). Inputs of the model include intrinsic AMPK kinase activity, which is taken as an adjustable surrogate parameter for cellular energy level or AMP:ATP ratio, and rapamycin dose, which controls MTORC1 activity. Outputs of the model include the phosphorylation level of the translational repressor EIF4EBP1, a substrate of MTORC1, and the phosphorylation level of AMBRA1 (activating molecule in BECN1-regulated autophagy), a substrate of ULK1 critical for autophagosome formation. The model incorporates reciprocal regulation of mTORC1 and ULK1 by AMPK, mutual inhibition of MTORC1 and ULK1, and ULK1-mediated negative feedback regulation of AMPK. Through analysis of the model, we find that these processes may be responsible, depending on conditions, for graded responses to stress inputs, for bistable switching between autophagy and protein synthesis, or relaxation oscillations, comprising alternating periods of autophagy and protein synthesis. A sensitivity analysis indicates that the prediction of oscillatory behavior is robust to changes of the parameter values of the model. The model provides testable predictions about the behavior of the AMPK-MTORC1-ULK1 network, which plays a central role in maintaining cellular energy and nutrient homeostasis.

    8. Computational analysis of an autophagy/translation switch based on mutual inhibition of MTORC1 and ULK1

      SciTech Connect (OSTI)

      Szymańska, Paulina; Martin, Katie R.; MacKeigan, Jeffrey P.; Hlavacek, William S.; Lipniacki, Tomasz

      2015-03-11

      We constructed a mechanistic, computational model for regulation of (macro)autophagy and protein synthesis (at the level of translation). The model was formulated to study the system-level consequences of interactions among the following proteins: two key components of MTOR complex 1 (MTORC1), namely the protein kinase MTOR (mechanistic target of rapamycin) and the scaffold protein RPTOR; the autophagy-initiating protein kinase ULK1; and the multimeric energy-sensing AMP-activated protein kinase (AMPK). Inputs of the model include intrinsic AMPK kinase activity, which is taken as an adjustable surrogate parameter for cellular energy level or AMP:ATP ratio, and rapamycin dose, which controls MTORC1 activity. Outputs of the model include the phosphorylation level of the translational repressor EIF4EBP1, a substrate of MTORC1, and the phosphorylation level of AMBRA1 (activating molecule in BECN1-regulated autophagy), a substrate of ULK1 critical for autophagosome formation. The model incorporates reciprocal regulation of mTORC1 and ULK1 by AMPK, mutual inhibition of MTORC1 and ULK1, and ULK1-mediated negative feedback regulation of AMPK. Through analysis of the model, we find that these processes may be responsible, depending on conditions, for graded responses to stress inputs, for bistable switching between autophagy and protein synthesis, or relaxation oscillations, comprising alternating periods of autophagy and protein synthesis. A sensitivity analysis indicates that the prediction of oscillatory behavior is robust to changes of the parameter values of the model. The model provides testable predictions about the behavior of the AMPK-MTORC1-ULK1 network, which plays a central role in maintaining cellular energy and nutrient homeostasis.
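
      A two-variable caricature of the mutual inhibition (far simpler than the paper's mechanistic network, with invented rate expressions) already exhibits switch-like behavior between translation-dominant and autophagy-dominant states as the AMPK input rises:

      ```python
      import numpy as np
      from scipy.integrate import solve_ivp

      # Two-variable caricature of MTORC1/ULK1 mutual inhibition under an AMPK
      # input; a toy model for intuition, not the paper's mechanistic network.
      def rhs(t, y, ampk):
          m, u = y                                   # active MTORC1, active ULK1
          dm = 1.0 / (1.0 + u**2) - (0.5 + ampk) * m # ULK1 and AMPK suppress MTORC1
          du = ampk / (1.0 + m**2) - 0.5 * u         # AMPK drives ULK1; MTORC1 inhibits
          return [dm, du]

      for ampk in (0.1, 1.0):                        # low vs high energy stress
          sol = solve_ivp(rhs, (0.0, 50.0), [1.0, 0.1], args=(ampk,))
          m, u = sol.y[:, -1]
          state = "autophagy" if u > m else "translation"
          print(f"AMPK={ampk}: MTORC1={m:.2f}, ULK1={u:.2f} -> {state}")
      ```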

    9. Radiological Safety Analysis Computer (RSAC) Program Version 7.2 Users’ Manual

      SciTech Connect (OSTI)

      Dr. Bradley J Schrader

      2010-10-01

      The Radiological Safety Analysis Computer (RSAC) Program Version 7.2 (RSAC-7) is the newest version of the RSAC legacy code. It calculates the consequences of a release of radionuclides to the atmosphere. A user can generate a fission product inventory from either reactor operating history or a nuclear criticality event. RSAC-7 models the effects of high-efficiency particulate air filters or other cleanup systems and calculates the decay and ingrowth during transport through processes, facilities, and the environment. Doses are calculated for inhalation, air immersion, ground surface, ingestion, and cloud gamma pathways. RSAC-7 can be used as a tool to evaluate accident conditions in emergency response scenarios, radiological sabotage events and to evaluate safety basis accident consequences. This users’ manual contains the mathematical models and operating instructions for RSAC-7. Instructions, screens, and examples are provided to guide the user through the functions provided by RSAC-7. This program was designed for users who are familiar with radiological dose assessment methods.

    10. Radiological Safety Analysis Computer (RSAC) Program Version 7.0 Users’ Manual

      SciTech Connect (OSTI)

      Dr. Bradley J Schrader

      2009-03-01

      The Radiological Safety Analysis Computer (RSAC) Program Version 7.0 (RSAC-7) is the newest version of the RSAC legacy code. It calculates the consequences of a release of radionuclides to the atmosphere. A user can generate a fission product inventory from either reactor operating history or a nuclear criticality event. RSAC-7 models the effects of high-efficiency particulate air filters or other cleanup systems and calculates the decay and ingrowth during transport through processes, facilities, and the environment. Doses are calculated for inhalation, air immersion, ground surface, ingestion, and cloud gamma pathways. RSAC-7 can be used as a tool to evaluate accident conditions in emergency response scenarios, radiological sabotage events and to evaluate safety basis accident consequences. This users’ manual contains the mathematical models and operating instructions for RSAC-7. Instructions, screens, and examples are provided to guide the user through the functions provided by RSAC-7. This program was designed for users who are familiar with radiological dose assessment methods.

    11. Computational Analysis of an Evolutionarily Conserved VertebrateMuscle Alternative Splicing Program

      SciTech Connect (OSTI)

      Das, Debopriya; Clark, Tyson A.; Schweitzer, Anthony; Marr,Henry; Yamamoto, Miki L.; Parra, Marilyn K.; Arribere, Josh; Minovitsky,Simon; Dubchak, Inna; Blume, John E.; Conboy, John G.

      2006-06-15

      A novel exon microarray format that probes gene expression with single exon resolution was employed to elucidate critical features of a vertebrate muscle alternative splicing program. A dataset of 56 microarray-defined, muscle-enriched exons and their flanking introns were examined computationally in order to investigate coordination of the muscle splicing program. Candidate intron regulatory motifs were required to meet several stringent criteria: significant over-representation near muscle-enriched exons, correlation with muscle expression, and phylogenetic conservation among genomes of several vertebrate orders. Three classes of regulatory motifs were identified in the proximal downstream intron, within 200 nt of the target exons: UGCAUG, a specific binding site for Fox-1 related splicing factors; ACUAAC, a novel branchpoint-like element; and UG-/UGC-rich elements characteristic of binding sites for CELF splicing factors. UGCAUG was remarkably enriched, being present in nearly one-half of all cases. These studies suggest that Fox and CELF splicing factors play a major role in enforcing the muscle-specific alternative splicing program, facilitating expression of a set of unique isoforms of cytoskeletal proteins that are critical to muscle cell differentiation. Supplementary materials: There are four supplementary tables and one supplementary figure. The tables provide additional detailed information concerning the muscle-enriched datasets, and about over-represented oligonucleotide sequences in the flanking introns. The supplementary figure shows RT-PCR data confirming the muscle-enriched expression of exons predicted from the microarray analysis.

    12. A computational model for thermal fluid design analysis of nuclear thermal rockets

      SciTech Connect (OSTI)

      Given, J.A.; Anghaie, S.

      1997-01-01

      A computational model for simulation and design analysis of nuclear thermal propulsion systems has been developed. The model simulates a full-topping expander cycle engine system and the thermofluid dynamics of the core coolant flow, accounting for the real gas properties of the hydrogen propellant/coolant throughout the system. Core thermofluid studies reveal that near-wall heat transfer models currently available may not be applicable to conditions encountered within some nuclear rocket cores. Additionally, the possibility of a core thermal fluid instability at low mass fluxes and the effects of the core power distribution are investigated. Results indicate that for tubular core coolant channels, thermal fluid instability is not an issue within the possible range of operating conditions in these systems. Findings also show the advantages of having a nonflat centrally peaking axial core power profile from a fluid dynamic standpoint. The effects of rocket operating conditions on system performance are also investigated. Results show that high temperature and low pressure operation is limited by core structural considerations, while low temperature and high pressure operation is limited by system performance constraints. The utility of these programs for finding these operational limits, optimum operating conditions, and thermal fluid effects is demonstrated.

    13. Evaluation of HEU-Beryllium Benchmark Experiments to Improve Computational Analysis of Space Reactors

      SciTech Connect (OSTI)

      John D. Bess; Keith C. Bledsoe; Bradley T. Rearden

      2011-02-01

      An assessment was previously performed to evaluate modeling capabilities and quantify preliminary biases and uncertainties associated with the modeling methods and data utilized in designing a nuclear reactor such as a beryllium-reflected, highly-enriched-uranium (HEU)-O2 fission surface power (FSP) system for space nuclear power. The conclusion of the previous study was that current capabilities could preclude the necessity of a cold critical test of the FSP; however, additional testing would reduce uncertainties in the beryllium and uranium cross-section data and the overall uncertainty in the computational models. A series of critical experiments using HEU metal were performed in the 1960s and 1970s in support of criticality safety operations at the Y-12 Plant. Of the hundreds of experiments, three were identified as fast-fission configurations reflected by beryllium metal. These experiments have been evaluated as benchmarks for inclusion in the International Handbook of Evaluated Criticality Safety Benchmark Experiments (IHECSBE). Further evaluation of the benchmark experiments was performed using the sensitivity and uncertainty analysis capabilities of SCALE 6. The data adjustment methods of SCALE 6 have been employed in the validation of an example FSP design model to reduce the uncertainty due to the beryllium cross section data.

    14. Noise analysis of genome-scale protein synthesis using a discrete computational model of translation

      SciTech Connect (OSTI)

      Racle, Julien; Hatzimanikatis, Vassily; Stefaniuk, Adam Jan

      2015-07-28

      Noise in genetic networks has been the subject of extensive experimental and computational studies. However, very few of these studies have considered noise properties using mechanistic models that account for the discrete movement of ribosomes and RNA polymerases along their corresponding templates (messenger RNA (mRNA) and DNA). The large size of these systems, which scales with the number of genes, mRNA copies, codons per mRNA, and ribosomes, is responsible for some of the challenges. Additionally, one should be able to describe the dynamics of ribosome exchange between the free ribosome pool and those bound to mRNAs, as well as how mRNA species compete for ribosomes. We developed an efficient algorithm for stochastic simulations that addresses these issues and used it to study the contribution and trade-offs of noise to translation properties (rates, time delays, and rate-limiting steps). The algorithm scales linearly with the number of mRNA copies, which allowed us to study the importance of genome-scale competition between mRNAs for the same ribosomes. We determined that noise is minimized under conditions maximizing the specific synthesis rate. Moreover, sensitivity analysis of the stochastic system revealed the importance of the elongation rate in the resultant noise, whereas the translation initiation rate constant was more closely related to the average protein synthesis rate. We observed significant differences between our results and the noise properties of the most commonly used translation models. Overall, our studies demonstrate that the use of full mechanistic models is essential for the study of noise in translation and transcription.
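
      A minimal Gillespie-style sketch of initiation and excluded-volume elongation on a single mRNA conveys the discrete-movement ingredient; the rates and footprint are illustrative, and the genome-scale competition between mRNAs for ribosomes is omitted.

      ```python
      import numpy as np

      # Gillespie-style sketch of translation initiation and stepwise ribosome
      # elongation with excluded volume on one mRNA; rates are illustrative and
      # the model is far simpler than the genome-scale code described here.
      rng = np.random.default_rng(1)
      K_INIT, K_ELONG = 0.5, 10.0       # 1/s, initiation and per-codon step
      N_CODONS, FOOT = 300, 10          # mRNA length; ribosome footprint

      pos, t, made = [], 0.0, 0         # ribosome positions, sorted descending
      while t < 500.0:
          rates = []
          # initiation allowed only if the first codons are clear
          rates.append(K_INIT if (not pos or pos[-1] > FOOT) else 0.0)
          for j, p in enumerate(pos):   # a ribosome steps only if not blocked
              ahead = pos[j - 1] if j > 0 else None
              free = ahead is None or ahead - p > FOOT
              rates.append(K_ELONG if free else 0.0)
          total = sum(rates)
          t += rng.exponential(1.0 / total)
          i = rng.choice(len(rates), p=np.array(rates) / total)
          if i == 0:
              pos.append(1)             # load a new ribosome at codon 1
          else:
              pos[i - 1] += 1
              if pos[i - 1] >= N_CODONS:
                  pos.pop(i - 1); made += 1   # termination releases a protein
      print(f"proteins completed in 500 s: {made}")
      ```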

    15. MCS division researchers help develop new sequencing analysis...

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Computation Institute has announced a new sequencing analysis service called Globus Genomics. The Globus Genomics team includes two members of Argonne's Mathematics and Computer...

    16. COMPUTATIONAL SCIENCE CENTER

      SciTech Connect (OSTI)

      DAVENPORT, J.

      2006-11-01

      Computational Science is an integral component of Brookhaven's multi-science mission, and is a reflection of the increased role of computation across all of science. Brookhaven currently has major efforts in data storage and analysis for the Relativistic Heavy Ion Collider (RHIC) and the ATLAS detector at CERN, and in quantum chromodynamics. The Laboratory is host for the QCDOC machines (quantum chromodynamics on a chip), 10 teraflop/s computers which boast 12,288 processors each. There are two here, one for the Riken/BNL Research Center and the other supported by DOE for the US Lattice Gauge Community and other scientific users. A 100 teraflop/s supercomputer will be installed at Brookhaven in the coming year, managed jointly by Brookhaven and Stony Brook, and funded by a grant from New York State. This machine will be used for computational science across Brookhaven's entire research program, and also by researchers at Stony Brook and across New York State. With Stony Brook, Brookhaven has formed the New York Center for Computational Science (NYCCS) as a focal point for interdisciplinary computational science, which is closely linked to Brookhaven's Computational Science Center (CSC). The CSC has established a strong program in computational science, with an emphasis on nanoscale electronic structure and molecular dynamics, accelerator design, computational fluid dynamics, medical imaging, parallel computing and numerical algorithms. We have been an active participant in DOE's SciDAC program (Scientific Discovery through Advanced Computing). We are also planning a major expansion in computational biology in keeping with Laboratory initiatives. Additional laboratory initiatives with a dependence on a high level of computation include the development of hydrodynamics models for the interpretation of RHIC data, computational models for the atmospheric transport of aerosols, and models for combustion and for energy utilization. The CSC was formed to bring together

    17. BPO crude oil analysis data base user's guide: Methods, publications, computer access correlations, uses, availability

      SciTech Connect (OSTI)

      Sellers, C.; Fox, B.; Paulz, J.

      1996-03-01

      The Department of Energy (DOE) has one of the largest and most complete collections of information on crude oil composition that is available to the public. The computer program that manages this database of crude oil analyses has recently been rewritten to allow easier access to this information. This report describes how the new system can be accessed and how the information contained in the Crude Oil Analysis Data Bank can be obtained.

    18. Proof-of-Concept Demonstrations for Computation-Based Human Reliability Analysis. Modeling Operator Performance During Flooding Scenarios

      SciTech Connect (OSTI)

      Joe, Jeffrey Clark; Boring, Ronald Laurids; Herberger, Sarah Elizabeth Marie; Mandelli, Diego; Smith, Curtis Lee

      2015-09-01

      The United States (U.S.) Department of Energy (DOE) Light Water Reactor Sustainability (LWRS) program has the overall objective to help sustain the existing commercial nuclear power plants (NPPs). To accomplish this program objective, there are multiple LWRS “pathways,” or research and development (R&D) focus areas. One LWRS focus area is called the Risk-Informed Safety Margin and Characterization (RISMC) pathway. Initial efforts under this pathway to combine probabilistic and plant multi-physics models to quantify safety margins and support business decisions also included HRA, but in a somewhat simplified manner. HRA experts at Idaho National Laboratory (INL) have been collaborating with other experts to develop a computational HRA approach, called the Human Unimodel for Nuclear Technology to Enhance Reliability (HUNTER), for inclusion into the RISMC framework. The basic premise of this research is to leverage applicable computational techniques, namely simulation and modeling, to develop and then, using RAVEN as a controller, seamlessly integrate virtual operator models (HUNTER) with 1) the dynamic computational MOOSE runtime environment that includes a full-scope plant model, and 2) the RISMC framework PRA models already in use. The HUNTER computational HRA approach is a hybrid approach that leverages past work from cognitive psychology, human performance modeling, and HRA, but it is also a significant departure from existing static and even dynamic HRA methods. This report is divided into five chapters that cover the development of an external flooding event test case and associated statistical modeling considerations.

    19. Towards Real-Time High Performance Computing For Power Grid Analysis

      SciTech Connect (OSTI)

      Hui, Peter SY; Lee, Barry; Chikkagoudar, Satish

      2012-11-16

      Real-time computing has traditionally been considered largely in the context of single-processor and embedded systems, and indeed, the terms real-time computing, embedded systems, and control systems are often mentioned in closely related contexts. However, real-time computing in the context of multinode systems, specifically high-performance, cluster-computing systems, remains relatively unexplored. Imposing real-time constraints on a parallel (cluster) computing environment introduces a variety of challenges with respect to the formal verification of the system's timing properties. In this paper, we give a motivating example to demonstrate the need for such a system--- an application to estimate the electromechanical states of the power grid--- and we introduce a formal method for performing verification of certain temporal properties within a system of parallel processes. We describe our work towards a full real-time implementation of the target application--- namely, our progress towards extracting a key mathematical kernel from the application, the formal process by which we analyze the intricate timing behavior of the processes on the cluster, as well as timing measurements taken on our test cluster to demonstrate use of these concepts.

    20. Computing Resources | Argonne Leadership Computing Facility

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Argonne's Theory and Computing Sciences (TCS) building houses a wide variety of computing systems, including some of the most powerful supercomputers in the world. The facility has 25,000 square feet of raised computer floor space and a pair of redundant 20 megavolt-ampere electrical feeds from a 90 megawatt substation. The building also

    1. Fracture Analysis of Vessels. Oak Ridge FAVOR, v06.1, Computer Code: Theory and Implementation of Algorithms, Methods, and Correlations

      SciTech Connect (OSTI)

      Williams, P. T.; Dickson, T. L.; Yin, S.

      2007-12-01

      The current regulations to ensure that nuclear reactor pressure vessels (RPVs) maintain their structural integrity when subjected to transients such as pressurized thermal shock (PTS) events were derived from computational models developed in the early-to-mid 1980s. Since that time, advancements and refinements in relevant technologies that impact RPV integrity assessment have led to an effort by the NRC to re-evaluate its PTS regulations. Updated computational methodologies have been developed through interactions between experts in the relevant disciplines of thermal hydraulics, probabilistic risk assessment, materials embrittlement, fracture mechanics, and inspection (flaw characterization). Contributors to the development of these methodologies include the NRC staff, their contractors, and representatives from the nuclear industry. These updated methodologies have been integrated into the Fracture Analysis of Vessels -- Oak Ridge (FAVOR, v06.1) computer code developed for the NRC by the Heavy Section Steel Technology (HSST) program at Oak Ridge National Laboratory (ORNL). The FAVOR, v04.1, code represents the baseline NRC-selected applications tool for re-assessing the current PTS regulations. This report is intended to document the technical bases for the assumptions, algorithms, methods, and correlations employed in the development of the FAVOR, v06.1, code.

    2. Modeling and Analysis of a Lunar Space Reactor with the Computer Code RELAP5-3D/ATHENA

      SciTech Connect (OSTI)

      Carbajo, Juan J; Qualls, A L

      2008-01-01

      The transient analysis 3-dimensional (3-D) computer code RELAP5-3D/ATHENA has been employed to model and analyze a space reactor of 180 kW (thermal) and 40 kW (net electrical) with eight Stirling engines (SEs). Each SE will generate over 6 kWe; the excess power will be needed for the pumps and other power management devices. The reactor will be cooled by NaK (a eutectic mixture of sodium and potassium which is liquid at ambient temperature). This space reactor is intended to be deployed over the surface of the Moon or Mars. The reactor operating life will be 8 to 10 years. The RELAP5-3D/ATHENA code is being developed and maintained by Idaho National Laboratory. The code can employ a variety of coolants in addition to water, the original coolant employed with early versions of the code. The code can also use 3-D volumes and 3-D junctions, thus allowing for more realistic representation of complex geometries. A combination of 3-D and 1-D volumes is employed in this study. The space reactor model consists of a primary loop and two secondary loops connected by two heat exchangers (HXs). Each secondary loop provides heat to four SEs. The primary loop includes the nuclear reactor with the lower and upper plena, the core with 85 fuel pins, and two vertical HXs. The maximum coolant temperature of the primary loop is 900 K. The secondary loops also employ NaK as a coolant at a maximum temperature of 877 K. The SE heads are at a temperature of 800 K and the cold sinks are at a temperature of ~400 K. Two radiators will be employed to remove heat from the SEs. The SE HXs surrounding the SE heads are of annular design and have been modeled using 3-D volumes. These 3-D models have been used to improve the HX design by optimizing the flows of coolant and maximizing the heat transferred to the SE heads. The transients analyzed include failure of one or more Stirling engines, trip of the reactor pump, and trips of the secondary loop pumps feeding the HXs of the

    3. Information regarding previous INCITE awards including selected highlights

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      U.S. DOE Office of Science (SC), Advanced Scientific Computing Research (ASCR): information regarding previous Innovative & Novel Computational Impact on Theory & Experiment (INCITE) awards, including selected highlights.

    4. Development of a three-phase reacting flow computer model for analysis of petroleum cracking

      SciTech Connect (OSTI)

      Chang, S.L.; Lottes, S.A.; Petrick, M.

      1995-07-01

      A general computational fluid dynamics computer code (ICRKFLO) has been developed for the simulation of the multi-phase reacting flow in a petroleum fluid catalytic cracker riser. ICRKFLO has several unique features. A new integral reaction submodel couples calculations of hydrodynamics and cracking kinetics by making the calculations more efficient in achieving stable convergence while still preserving the major physical effects of reaction processes. A new coke transport submodel handles the process of coke formation in gas phase reactions and the subsequent deposition on the surface of adjacent particles. The code was validated by comparing with experimental results of a pilot scale fluid cracker unit. The code can predict the flow characteristics of gas, liquid, and particulate solid phases, vaporization of the oil droplets, and subsequent cracking of the oil in a riser reactor, which may lead to a better understanding of the internal processes of the riser and the impact of riser geometry and operating parameters on the riser performance.

    5. An Analysis Framework for Investigating the Trade-offs Between System Performance and Energy Consumption in a Heterogeneous Computing Environment

      SciTech Connect (OSTI)

      Friese, Ryan; Khemka, Bhavesh; Maciejewski, Anthony A; Siegel, Howard Jay; Koenig, Gregory A; Powers, Sarah S; Hilton, Marcia M; Rambharos, Rajendra; Okonski, Gene D; Poole, Stephen W

      2013-01-01

      Rising costs of energy consumption and an ongoing effort for increases in computing performance are leading to a significant need for energy-efficient computing. Before systems such as supercomputers, servers, and datacenters can begin operating in an energy-efficient manner, the energy consumption and performance characteristics of the system must be analyzed. In this paper, we provide an analysis framework that will allow a system administrator to investigate the trade-offs between system energy consumption and utility earned by a system (as a measure of system performance). We model these trade-offs as a bi-objective resource allocation problem. We use a popular multi-objective genetic algorithm to construct Pareto fronts to illustrate how different resource allocations can cause a system to consume significantly different amounts of energy and earn different amounts of utility. We demonstrate our analysis framework using real data collected from online benchmarks, and further provide a method to create larger data sets that exhibit similar heterogeneity characteristics to real data sets. This analysis framework can provide system administrators with insight to make intelligent scheduling decisions based on the energy and utility needs of their systems.
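      The genetic algorithm itself is not reproduced here; the sketch below shows only the last step, filtering candidate resource allocations evaluated on (energy consumed, utility earned) down to a Pareto front. The candidate points are hypothetical.

        # Minimal Pareto-front filter: minimize energy, maximize utility.

        def pareto_front(points):
            """points: list of (energy, utility) tuples."""
            front = []
            for e, u in points:
                dominated = any(e2 <= e and u2 >= u and (e2, u2) != (e, u)
                                for e2, u2 in points)
                if not dominated:
                    front.append((e, u))
            return sorted(front)

        candidates = [(120, 40), (100, 35), (100, 42), (90, 30), (150, 45)]
        print(pareto_front(candidates))  # [(90, 30), (100, 42), (150, 45)]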

    6. Open-cycle ocean thermal energy conversion surface-condenser design analysis and computer program

      SciTech Connect (OSTI)

      Panchal, C.B.; Rabas, T.J.

      1991-05-01

      This report documents a computer program for designing a surface condenser that condenses low-pressure steam in an ocean thermal energy conversion (OTEC) power plant. The primary emphasis is on the open-cycle (OC) OTEC power system, although the same condenser design can be used for conventional and hybrid cycles because of their highly similar operating conditions. In an OC-OTEC system, the pressure level is very low (deep vacuums), temperature differences are small, and the inlet noncondensable gas concentrations are high. Because current condenser designs, such as the shell-and-tube, are not adequate for such conditions, a plate-fin configuration is selected. This design can be implemented in aluminum, which makes it very cost-effective when compared with other state-of-the-art vacuum steam condenser designs. Support for selecting a plate-fin heat exchanger for OC-OTEC steam condensation can be found in the sizing (geometric details) and rating (heat transfer and pressure drop) calculations presented. These calculations are then used in a computer program to obtain all the necessary thermal performance details for developing design specifications for a plate-fin steam condenser. 20 refs., 5 figs., 5 tabs.
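      As a hedged illustration of the rating side of such calculations, the sketch below computes a heat duty from Q = U * A * LMTD for a counterflow exchanger. The coefficient, area, and temperatures are invented and are not taken from the report.

        import math

        def lmtd(t_hot_in, t_hot_out, t_cold_in, t_cold_out):
            """Log-mean temperature difference for a counterflow exchanger."""
            dt1 = t_hot_in - t_cold_out
            dt2 = t_hot_out - t_cold_in
            return (dt1 - dt2) / math.log(dt1 / dt2) if dt1 != dt2 else dt1

        U = 4000.0   # overall heat transfer coefficient, W/(m^2 K), hypothetical
        A = 250.0    # heat transfer area, m^2, hypothetical
        dT = lmtd(11.0, 10.0, 5.0, 9.0)  # condensing steam vs. cold seawater, deg C
        print(U * A * dT / 1e6, "MW")    # heat duty, ~3.3 MW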

    7. Computing Frontier: Distributed Computing

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Computing Frontier: Distributed Computing and Facility Infrastructures Conveners: Kenneth Bloom 1 , Richard Gerber 2 1 Department of Physics and Astronomy, University of Nebraska-Lincoln 2 National Energy Research Scientific Computing Center (NERSC), Lawrence Berkeley National Laboratory 1.1 Introduction The field of particle physics has become increasingly reliant on large-scale computing resources to address the challenges of analyzing large datasets, completing specialized computations and

    8. Cogeneration: Economic and technical analysis. (Latest citations from the INSPEC - The Database for Physics, Electronics, and Computing). Published Search

      SciTech Connect (OSTI)

      Not Available

      1993-11-01

      The bibliography contains citations concerning economic and technical analyses of cogeneration systems. Topics include electric power generation, industrial cogeneration, use by utilities, and fuel cell cogeneration. The citations explore steam power station, gas turbine and steam turbine technology, district heating, refuse derived fuels, environmental effects and regulations, bioenergy and solar energy conversion, waste heat and waste product recycling, and performance analysis. (Contains a minimum of 104 citations and includes a subject term index and title list.)

    9. Large-Scale Compute-Intensive Analysis via a Combined In-situ and Co-scheduling Workflow Approach

      SciTech Connect (OSTI)

      Messer, Bronson; Sewell, Christopher; Heitmann, Katrin; Finkel, Dr. Hal J; Fasel, Patricia; Zagaris, George; Pope, Adrian; Habib, Salman; Parete-Koon, Suzanne T

      2015-01-01

      Large-scale simulations can produce tens of terabytes of data per analysis cycle, complicating and limiting the efficiency of workflows. Traditionally, outputs are stored on the file system and analyzed in post-processing. With the rapidly increasing size and complexity of simulations, this approach faces an uncertain future. Trending techniques consist of performing the analysis in situ, utilizing the same resources as the simulation, and/or off-loading subsets of the data to a compute-intensive analysis system. We introduce an analysis framework developed for HACC, a cosmological N-body code, that uses both in situ and co-scheduling approaches for handling Petabyte-size outputs. An initial in situ step is used to reduce the amount of data to be analyzed, and to separate out the data-intensive tasks handled off-line. The analysis routines are implemented using the PISTON/VTK-m framework, allowing a single implementation of an algorithm that simultaneously targets a variety of GPU, multi-core, and many-core architectures.

    10. CORCON-MOD3: An integrated computer model for analysis of molten core-concrete interactions. User's manual

      SciTech Connect (OSTI)

      Bradley, D.R.; Gardner, D.R.; Brockmann, J.E.; Griffith, R.O.

      1993-10-01

      The CORCON-Mod3 computer code was developed to mechanistically model the important core-concrete interaction phenomena, including those phenomena relevant to the assessment of containment failure and radionuclide release. The code can be applied to a wide range of severe accident scenarios and reactor plants. The code represents the current state of the art for simulating core debris interactions with concrete. This document comprises the user's manual and gives a brief description of the models and the assumptions and limitations in the code. Also discussed are the input parameters and the code output. Two sample problems are also given.

    11. FRAP-T6: a computer code for the transient analysis of oxide fuel rods. [PWR; BWR]

      SciTech Connect (OSTI)

      Siefken, L.J.; Shah, V.N.; Berna, G.A.; Hohorst, J.K.

      1983-06-01

      FRAP-T6 is a computer code which is being developed to calculate the transient behavior of a light water reactor fuel rod. This report is an addendum to the FRAP-T6/MOD0 user's manual which provides the additional user information needed to use FRAP-T6/MOD1. This includes model changes, improvements, and additions; coding changes and improvements; changes in input and control language; and example problem solutions to aid the user. This information is designed to supplement the FRAP-T6/MOD0 user's manual.

    12. Computer hardware fault administration

      DOE Patents [OSTI]

      Archer, Charles J.; Megerian, Mark G.; Ratterman, Joseph D.; Smith, Brian E.

      2010-09-14

      Computer hardware fault administration is carried out in a parallel computer, where the parallel computer includes a plurality of compute nodes. The compute nodes are coupled for data communications by at least two independent data communications networks, where each data communications network includes data communications links connected to the compute nodes. Typical embodiments carry out hardware fault administration by identifying the location of a defective link in the first data communications network of the parallel computer and routing communications data around the defective link through the second data communications network of the parallel computer.
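      A minimal sketch of the patent's central idea, assuming each network can be represented as an adjacency structure: when a link in the first network is identified as defective, affected traffic falls back to a path through the second network. Topologies and node labels are hypothetical.

        from collections import deque

        def bfs_path(adjacency, src, dst):
            """Shortest-hop path in an adjacency dict, or None if unreachable."""
            queue, seen = deque([[src]]), {src}
            while queue:
                path = queue.popleft()
                if path[-1] == dst:
                    return path
                for nxt in adjacency.get(path[-1], ()):
                    if nxt not in seen:
                        seen.add(nxt)
                        queue.append(path + [nxt])
            return None

        network_a = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
        network_b = {0: [2], 2: [0, 3], 3: [2]}

        # Identified faulty link (1, 2) in network A: remove it, then reroute.
        network_a[1].remove(2); network_a[2].remove(1)
        route = bfs_path(network_a, 0, 3) or bfs_path(network_b, 0, 3)
        print(route)  # falls back to network B: [0, 2, 3]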

    13. Inference of tumor evolution during chemotherapy by computational modeling and in situ analysis of genetic and phenotypic cellular diversity

      DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

      Almendro, Vanessa; Cheng, Yu -Kang; Randles, Amanda; Itzkovitz, Shalev; Marusyk, Andriy; Ametller, Elisabet; Gonzalez-Farre, Xavier; Muñoz, Montse; Russnes, Hege  G.; Helland, Åslaug; et al

      2014-02-01

      Cancer therapy exerts a strong selection pressure that shapes tumor evolution, yet our knowledge of how tumors change during treatment is limited. Here, we report the analysis of cellular heterogeneity for genetic and phenotypic features and their spatial distribution in breast tumors pre- and post-neoadjuvant chemotherapy. We found that intratumor genetic diversity was tumor-subtype specific, and it did not change during treatment in tumors with partial or no response. However, lower pretreatment genetic diversity was significantly associated with pathologic complete response. In contrast, phenotypic diversity was different between pre- and post-treatment samples. We also observed significant changes in the spatial distribution of cells with distinct genetic and phenotypic features. We used these experimental data to develop a stochastic computational model to infer tumor growth patterns and evolutionary dynamics. Our results highlight the importance of integrated analysis of genotypes and phenotypes of single cells in intact tissues to predict tumor evolution.

    14. Inference of tumor evolution during chemotherapy by computational modeling and in situ analysis of genetic and phenotypic cellular diversity

      SciTech Connect (OSTI)

      Almendro, Vanessa; Cheng, Yu -Kang; Randles, Amanda; Itzkovitz, Shalev; Marusyk, Andriy; Ametller, Elisabet; Gonzalez-Farre, Xavier; Muñoz, Montse; Russnes, Hege  G.; Helland, Åslaug; Rye, Inga  H.; Borresen-Dale, Anne -Lise; Maruyama, Reo; van Oudenaarden, Alexander; Dowsett, Mitchell; Jones, Robin  L.; Reis-Filho, Jorge; Gascon, Pere; Gönen, Mithat; Michor, Franziska; Polyak, Kornelia

      2014-02-01

      Cancer therapy exerts a strong selection pressure that shapes tumor evolution, yet our knowledge of how tumors change during treatment is limited. Here, we report the analysis of cellular heterogeneity for genetic and phenotypic features and their spatial distribution in breast tumors pre- and post-neoadjuvant chemotherapy. We found that intratumor genetic diversity was tumor-subtype specific, and it did not change during treatment in tumors with partial or no response. However, lower pretreatment genetic diversity was significantly associated with pathologic complete response. In contrast, phenotypic diversity was different between pre- and post-treatment samples. We also observed significant changes in the spatial distribution of cells with distinct genetic and phenotypic features. We used these experimental data to develop a stochastic computational model to infer tumor growth patterns and evolutionary dynamics. Our results highlight the importance of integrated analysis of genotypes and phenotypes of single cells in intact tissues to predict tumor evolution.
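      The authors' exact diversity measures are not given in the abstract; as a generic illustration of quantifying intratumor diversity, the sketch below computes a Shannon index over clone frequencies. The clone counts are hypothetical.

        import math

        def shannon_index(counts):
            """Shannon diversity over nonzero clone counts."""
            total = sum(counts)
            return -sum((c / total) * math.log(c / total) for c in counts if c > 0)

        pre_treatment  = [40, 30, 20, 10]   # cells per genetic clone, hypothetical
        post_treatment = [85, 10, 5]
        print(shannon_index(pre_treatment), shannon_index(post_treatment))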

    15. Computational fluid dynamics analysis of a wire-feed, high-velocity oxygen-fuel (HVOF) thermal spray torch

      SciTech Connect (OSTI)

      Lopez, A.R.; Hassan, B.; Oberkampf, W.L.; Neiser, R.A.; Roemer, T.J.

      1996-09-01

      The fluid and particle dynamics of a High-Velocity Oxygen-Fuel Thermal Spray torch are analyzed using computational and experimental techniques. Three-dimensional Computational Fluid Dynamics (CFD) results are presented for a curved aircap used for coating interior surfaces such as engine cylinder bores. The device analyzed is similar to the Metco Diamond Jet Rotating Wire (DJRW) torch. The feed gases are injected through an axisymmetric nozzle into the curved aircap. Premixed propylene and oxygen are introduced from an annulus in the nozzle, while cooling air is injected between the nozzle and the interior wall of the aircap. The combustion process is modeled using a single-step finite-rate chemistry model with a total of 9 gas species which includes dissociation of combustion products. A continually-fed steel wire passes through the center of the nozzle and melting occurs at a conical tip near the exit of the aircap. Wire melting is simulated computationally by injecting liquid steel particles into the flow field near the tip of the wire. Experimental particle velocity measurements during wire feed were also taken using a Laser Two-Focus (L2F) velocimeter system. Flow fields inside and outside the aircap are presented and particle velocity predictions are compared with experimental measurements outside of the aircap.

    16. Scalable Computational Methods for the Analysis of High-Throughput Biological Data

      SciTech Connect (OSTI)

      Langston, Michael A

      2012-09-06

      The primary focus of this research project is elucidating genetic regulatory mechanisms that control an organism's responses to low-dose ionizing radiation. Although low doses (at most ten centigrays) are not lethal to humans, they elicit a highly complex physiological response, with the ultimate outcome in terms of risk to human health unknown. The tools of molecular biology and computational science will be harnessed to study coordinated changes in gene expression that orchestrate the mechanisms a cell uses to manage the radiation stimulus. High performance implementations of novel algorithms that exploit the principles of fixed-parameter tractability will be used to extract gene sets suggestive of co-regulation. Genomic mining will be performed to scrutinize, winnow and highlight the most promising gene sets for more detailed investigation. The overall goal is to increase our understanding of the health risks associated with exposures to low levels of radiation.

    17. Intelligent Computing System for Reservoir Analysis and Risk Assessment of Red River Formation, Class Revisit

      SciTech Connect (OSTI)

      Sippel, Mark A.

      2002-09-24

      Integrated software was written that comprised the tool kit for the Intelligent Computing System (ICS). The software tools in ICS are for evaluating reservoir and hydrocarbon potential from various seismic, geologic and engineering data sets. The ICS tools provided a means for logical and consistent reservoir characterization. The tools can be broadly characterized as (1) clustering tools, (2) neural solvers, (3) multiple-linear regression, (4) entrapment-potential calculator and (5) combining tools. A flexible approach can be used with the ICS tools. They can be used separately or in a series to make predictions about a desired reservoir objective. The tools in ICS are primarily designed to correlate relationships between seismic information and data obtained from wells; however, it is possible to work with well data alone.

    18. Analysis of low molecular weight hydrocarbons including 1,3-butadiene in engine exhaust gases using an aluminum oxide porous-layer open-tubular fused-silica column

      SciTech Connect (OSTI)

      Pelz, N.; Dempster, N.M.; Shore, P.R.

      1990-05-01

      A method for the quantitative analysis of individual hydrocarbons in the C1-C8 range emitted in engine exhaust gases is described. The procedure provides base-line or near base-line resolution of C4 components including 1,3-butadiene. With a run time of less than 50 min, the light aromatics (benzene, toluene, ethyl benzene, p- and m-xylene, and o-xylene) are resolved during the same analysis as aliphatic hydrocarbons in the C1-C8 range. It is shown that typical 1,3-butadiene levels in engine exhaust are about 5 ppm at each of two engine conditions. Aromatic hydrocarbon levels show a dependence on engine operating conditions, benzene being about 20 ppm at high speed and about 40 ppm at idle.

    19. Rankine: A computer software package for the analysis and design of steam power generating units

      SciTech Connect (OSTI)

      Somerton, C.W.; Brouillette, T.; Pourciau, C.; Strawn, D.; Whitehouse, L.

      1987-04-01

      A software package has been developed for the analysis of steam power systems. Twenty-eight configurations are considered, all based upon the simple Rankine cycle with various additional components such as feedwater heaters and reheat legs. The package is demonstrated by two examples. In the first, the optimum operating conditions for a simple reheat cycle are determined by using the program. The second example involves calculating the exergetic efficiency of an actual steam power system.
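      As a worked illustration of the cycle arithmetic such a package performs, the sketch below computes the first-law thermal efficiency of a simple Rankine cycle from four state enthalpies. The values are hypothetical round numbers, not output from the Rankine program.

        h1 = 3400.0   # turbine inlet (superheated steam), kJ/kg, hypothetical
        h2 = 2200.0   # turbine exit (condenser pressure), kJ/kg
        h3 = 190.0    # condenser exit (saturated liquid), kJ/kg
        h4 = 200.0    # pump exit (boiler pressure), kJ/kg

        w_turbine = h1 - h2   # specific turbine work
        w_pump    = h4 - h3   # specific pump work
        q_boiler  = h1 - h4   # specific heat added
        eta = (w_turbine - w_pump) / q_boiler
        print(f"thermal efficiency = {eta:.3f}")   # ~0.372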

    20. Rapid Detection of Biological and Chemical Threat Agents Using Physical Chemistry, Active Detection, and Computational Analysis

      SciTech Connect (OSTI)

      Chung, Myung; Dong, Li; Fu, Rong; Liotta, Lance; Narayanan, Aarthi; Petricoin, Emanuel; Ross, Mark; Russo, Paul; Zhou, Weidong; Luchini, Alessandra; Manes, Nathan; Chertow, Jessica; Han, Suhua; Kidd, Jessica; Senina, Svetlana; Groves, Stephanie

      2007-01-01

      Basic technologies have been successfully developed within this project: rapid collection of aerosols and a rapid ultra-sensitive immunoassay technique. Water-soluble, humidity-resistant polyacrylamide nano-filters were shown to (1) capture aerosol particles as small as 20 nm, (2) work in humid air and (3) completely liberate their captured particles in an aqueous solution compatible with the immunoassay technique. The immunoassay technology developed within this project combines electrophoretic capture with magnetic bead detection. It allows detection of as few as 150-600 analyte molecules or viruses in only three minutes, something no other known method can duplicate. The technology can be used in a variety of applications where speed of analysis and/or extremely low detection limits are of great importance: in rapid analysis of donor blood for hepatitis, HIV and other blood-borne infections in emergency blood transfusions, in trace analysis of pollutants, or in search of biomarkers in biological fluids. Combined in a single device, the water-soluble filter and ultra-sensitive immunoassay technique may solve the problem of early “warning type” detection of aerosolized pathogens. These two technologies are protected with five patent applications and are ready for commercialization.

    1. Computational Study and Analysis of Structural Imperfections in 1D and 2D Photonic Crystals

      SciTech Connect (OSTI)

      K.R. Maskaly

      2005-06-01

      increasing RMS roughness. Again, the homogenization approximation is able to predict these results. The problem of surface scratches on 1D photonic crystals is also addressed. Although the reflectivity decreases are lower in this study, up to a 15% change in reflectivity is observed in certain scratched photonic crystal structures. However, this reflectivity change can be significantly decreased by adding a low index protective coating to the surface of the photonic crystal. Again, application of homogenization theory to these structures confirms its predictive power for this type of imperfection as well. Additionally, the problem of a circular pores in 2D photonic crystals is investigated, showing that almost a 50% change in reflectivity can occur for some structures. Furthermore, this study reveals trends that are consistent with the 1D simulations: parameter changes that increase the absolute reflectivity of the photonic crystal will also increase its tolerance to structural imperfections. Finally, experimental reflectance spectra from roughened 1D photonic crystals are compared to the results predicted computationally in this thesis. Both the computed and experimental spectra correlate favorably, validating the findings presented herein.

    2. The Use of Computational Human Performance Modeling as a Task Analysis Tool

      SciTech Connect (OSTI)

      Jacques Hugo; David Gertman

      2012-07-01

      During a review of the Advanced Test Reactor safety basis at the Idaho National Laboratory, human factors engineers identified ergonomic and human reliability risks involving the inadvertent exposure of a fuel element to the air during manual fuel movement and inspection in the canal. There were clear indications that these risks increased the probability of human error and possible severe physical outcomes to the operator. In response to this concern, a detailed study was conducted to determine the probability of the inadvertent exposure of a fuel element. Due to practical and safety constraints, the task network analysis technique was employed to study the work procedures at the canal. Discrete-event simulation software was used to model the entire procedure as well as the salient physical attributes of the task environment, such as distances walked, the effect of dropped tools, the effect of hazardous body postures, and physical exertion due to strenuous tool handling. The model also allowed analysis of the effect of cognitive processes such as visual perception demands, auditory information and verbal communication. The model made it possible to obtain reliable predictions of operator performance and workload estimates. It was also found that operator workload as well as the probability of human error in the fuel inspection and transfer task were influenced by the concurrent nature of certain phases of the task and the associated demand on cognitive and physical resources. More importantly, it was possible to determine with reasonable accuracy the stages as well as physical locations in the fuel handling task where operators would be most at risk of losing their balance and falling into the canal. The model also provided sufficient information for a human reliability analysis that indicated that the postulated fuel exposure accident was less than credible.

    3. Computational Science and Engineering

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      NETL's Computational Science and Engineering competency consists of conducting applied scientific research and developing physics-based simulation models, methods, and tools to support the development and deployment of novel process and equipment designs. Research includes advanced computations to generate information beyond the reach of experiments alone by integrating experimental and computational sciences across different length and time scales. Specific

    4. Polymorphous computing fabric

      DOE Patents [OSTI]

      Wolinski, Christophe Czeslaw; Gokhale, Maya B.; McCabe, Kevin Peter

      2011-01-18

      Fabric-based computing systems and methods are disclosed. A fabric-based computing system can include a polymorphous computing fabric that can be customized on a per application basis and a host processor in communication with said polymorphous computing fabric. The polymorphous computing fabric includes a cellular architecture that can be highly parameterized to enable a customized synthesis of fabric instances for a variety of enhanced application performances thereof. A global memory concept can also be included that provides the host processor random access to all variables and instructions associated with the polymorphous computing fabric.

    5. A new surrogate modeling technique combining Kriging and polynomial chaos expansions – Application to uncertainty analysis in computational dosimetry

      SciTech Connect (OSTI)

      Kersaudy, Pierric; Sudret, Bruno; Varsier, Nadège; Picon, Odile; Wiart, Joe

      2015-04-01

      In numerical dosimetry, the recent advances in high performance computing led to a strong reduction of the required computational time to assess the specific absorption rate (SAR) characterizing the human exposure to electromagnetic waves. However, this procedure remains time-consuming and a single simulation can require several hours. As a consequence, the influence of uncertain input parameters on the SAR cannot be analyzed using crude Monte Carlo simulation. The solution presented here to perform such an analysis is surrogate modeling. This paper proposes a novel approach to build such a surrogate model from a design of experiments. Considering a sparse representation of the polynomial chaos expansions using least-angle regression as a selection algorithm to retain the most influential polynomials, this paper proposes to use the selected polynomials as regression functions for the universal Kriging model. The leave-one-out cross validation is used to select the optimal number of polynomials in the deterministic part of the Kriging model. The proposed approach, called LARS-Kriging-PC modeling, is applied to three benchmark examples and then to a full-scale metamodeling problem involving the exposure of a numerical fetus model to a femtocell device. The performance of the LARS-Kriging-PC approach is compared to an ordinary Kriging model and to a classical sparse polynomial chaos expansion. The LARS-Kriging-PC approach performs better than the two other methods, with a significant accuracy improvement over the ordinary Kriging or the sparse polynomial chaos, depending on the studied case; it thus appears to strike an optimal balance between the two classical alternatives. A global sensitivity analysis is finally performed on the LARS-Kriging-PC model of the fetus exposure problem.
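      The full LARS-Kriging-PC machinery is beyond an abstract, but one ingredient, leave-one-out cross validation for choosing how many ranked polynomial terms to keep, can be sketched. Plain least squares stands in for the universal Kriging trend here, and the data, basis, and ranking are all hypothetical.

        import numpy as np

        rng = np.random.default_rng(0)
        x = rng.uniform(-1, 1, size=40)
        y = np.sin(3 * x) + 0.05 * rng.standard_normal(40)

        # Polynomial basis columns, assumed already ranked (e.g., by LARS).
        basis = np.column_stack([x**k for k in range(8)])

        def loo_rms(F, y):
            """Closed-form leave-one-out RMS error for a least-squares fit."""
            H = F @ np.linalg.pinv(F.T @ F) @ F.T   # hat matrix
            residuals = y - H @ y
            return np.sqrt(np.mean((residuals / (1 - np.diag(H))) ** 2))

        errors = [loo_rms(basis[:, :k + 1], y) for k in range(8)]
        print("retain", int(np.argmin(errors)) + 1, "basis terms")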

    6. Extensible Computational Chemistry Environment

      Energy Science and Technology Software Center (OSTI)

      2012-08-09

      ECCE provides a sophisticated graphical user interface, scientific visualization tools, and the underlying data management framework enabling scientists to efficiently set up calculations and store, retrieve, and analyze the rapidly growing volumes of data produced by computational chemistry studies. ECCE was conceived as part of the Environmental Molecular Sciences Laboratory construction to solve the problem of researchers being able to effectively utilize complex computational chemistry codes and massively parallel high performance compute resources. Bringing the power of these codes and resources to the desktops of researchers, and thus enabling world class research without users needing a detailed understanding of the inner workings of either the theoretical codes or the supercomputers needed to run them, was a grand challenge problem in the original version of the EMSL. ECCE allows collaboration among researchers using a web-based data repository where the inputs and results for all calculations done within ECCE are organized. ECCE is a first-of-a-kind end-to-end problem solving environment for all phases of computational chemistry research: setting up calculations with sophisticated GUI and direct manipulation visualization tools, submitting and monitoring calculations on remote high performance supercomputers without having to be familiar with the details of using these compute resources, and performing results visualization and analysis including creating publication quality images. ECCE is a suite of tightly integrated applications that are employed as the user moves through the modeling process.

    7. User's manual for RATEPAC: a digital-computer program for revenue requirements and rate-impact analysis

      SciTech Connect (OSTI)

      Fuller, L.C.

      1981-09-01

      The RATEPAC computer program is designed to model the financial aspects of an electric power plant or other investment requiring capital outlays and having annual operating expenses. The program produces incremental pro forma financial statements showing how an investment will affect the overall financial statements of a business entity. The code accepts parameters required to determine capital investment and expense as a function of time and sums these to determine minimum revenue requirements (cost of service). The code also calculates present worth of revenue requirements and required return on rate base. This user's manual includes a general description of the code as well as the instructions for input data preparation. A complete example case is appended.
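      A minimal sketch of the present-worth calculation described above, assuming year-end revenue requirements and a constant discount rate; the cash flows and rate are hypothetical.

        def present_worth(revenue_requirements, discount_rate):
            """Present worth of year-end revenue requirements (years 1, 2, ...)."""
            return sum(rr / (1 + discount_rate) ** year
                       for year, rr in enumerate(revenue_requirements, start=1))

        annual_rr = [120.0, 118.0, 116.0, 114.0, 112.0]   # $M per year, hypothetical
        print(present_worth(annual_rr, 0.10))             # ~$441M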

    8. Computer, Computational, and Statistical Sciences

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Computer, Computational, and Statistical Sciences (CCS): Computational physics, computer science, applied mathematics, statistics, and the integration of large data streams are central ...

    9. Parallel computation safety analysis irradiation targets fission product molybdenum in neutronic aspect using the successive over-relaxation algorithm

      SciTech Connect (OSTI)

      Susmikanti, Mike; Dewayatna, Winter; Sulistyo, Yos

      2014-09-30

      One of the research activities in support of the commercial radioisotope production program is safety research on target FPM (Fission Product Molybdenum) irradiation. FPM targets form a tube made of stainless steel which contains nuclear-grade high-enrichment uranium. The FPM irradiation tube is intended to obtain fission products. Fission products such as Mo{sup 99} are widely used in the form of kits in the medical world. Mo{sup 99} has a relatively long half-life of about 66 hours (roughly 3 days), so the delivery of radioisotopes to consumer centers and storage is possible though still limited. The production of this isotope potentially gives significant economic value. The neutronics problem is solved using first-order perturbation theory derived from the diffusion equation for four groups. The criticality and flux in the multigroup diffusion model were calculated for various irradiation positions and uranium contents. This model involves complex computation, with a large and sparse matrix system. Several parallel algorithms have been developed for the sparse and large matrix solution. In this paper, a successive over-relaxation (SOR) algorithm was implemented for the calculation of reactivity coefficients, which can be done in parallel. Previous works performed reactivity calculations serially with Gauss-Seidel iterations. The parallel method can be used to solve the multigroup diffusion equation system and calculate the criticality and reactivity coefficients. In this research a computer code was developed to exploit parallel processing to perform reactivity calculations to be used in safety analysis. The parallel processing in the multicore computer system allows the calculation to be performed more quickly. This code was applied for the safety limits calculation of irradiated FPM targets containing highly enriched uranium. The neutron calculation results show that for uranium contents of 1.7676 g and 6.1866 g (× 10{sup 6} cm{sup −1}) in a tube, their delta
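      The parallel implementation is the paper's contribution and is not reproduced here; the sketch below shows the serial successive over-relaxation (SOR) kernel named in the abstract, applied to a small hypothetical test system.

        import numpy as np

        def sor_solve(A, b, omega=1.5, tol=1e-10, max_iter=10_000):
            """Solve A x = b by successive over-relaxation."""
            x = np.zeros_like(b)
            for _ in range(max_iter):
                x_old = x.copy()
                for i in range(len(b)):
                    sigma = A[i, :i] @ x[:i] + A[i, i + 1:] @ x_old[i + 1:]
                    x[i] = (1 - omega) * x_old[i] + omega * (b[i] - sigma) / A[i, i]
                if np.linalg.norm(x - x_old, ord=np.inf) < tol:
                    break
            return x

        A = np.array([[4.0, -1.0, 0.0], [-1.0, 4.0, -1.0], [0.0, -1.0, 4.0]])
        b = np.array([15.0, 10.0, 10.0])
        print(sor_solve(A, b))   # agrees with the direct solution of A x = b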

    10. Argonne's Laboratory computing center - 2007 annual report.

      SciTech Connect (OSTI)

      Bair, R.; Pieper, G. W.

      2008-05-28

      Argonne National Laboratory founded the Laboratory Computing Resource Center (LCRC) in the spring of 2002 to help meet pressing program needs for computational modeling, simulation, and analysis. The guiding mission is to provide critical computing resources that accelerate the development of high-performance computing expertise, applications, and computations to meet the Laboratory's challenging science and engineering missions. In September 2002 the LCRC deployed a 350-node computing cluster from Linux NetworX to address Laboratory needs for mid-range supercomputing. This cluster, named 'Jazz', achieved over a teraflop of computing power (10{sup 12} floating-point calculations per second) on standard tests, making it the Laboratory's first terascale computing system and one of the 50 fastest computers in the world at the time. Jazz was made available to early users in November 2002 while the system was undergoing development and configuration. In April 2003, Jazz was officially made available for production operation. Since then, the Jazz user community has grown steadily. By the end of fiscal year 2007, there were over 60 active projects representing a wide cross-section of Laboratory expertise, including work in biosciences, chemistry, climate, computer science, engineering applications, environmental science, geoscience, information science, materials science, mathematics, nanoscience, nuclear engineering, and physics. Most important, many projects have achieved results that would have been unobtainable without such a computing resource. The LCRC continues to foster growth in the computational science and engineering capability and quality at the Laboratory. Specific goals include expansion of the use of Jazz to new disciplines and Laboratory initiatives, teaming with Laboratory infrastructure providers to offer more scientific data management capabilities, expanding Argonne staff use of national computing facilities, and improving the scientific reach and

    11. Accelerated Aging of BKC 44306-10 Rigid Polyurethane Foam: FT-IR Spectroscopy, Dimensional Analysis, and Micro Computed Tomography

      SciTech Connect (OSTI)

      Gilbertson, Robert D.; Patterson, Brian M.; Smith, Zachary

      2014-01-02

      An accelerated aging study of BKC 44306-10 rigid polyurethane foam was carried out. Foam samples were aged in a nitrogen atmosphere at three different temperatures: 50 °C, 65 °C, and 80 °C. Foam samples were periodically removed from the aging canisters at 1, 3, 6, 9, 12, and 15 month intervals, when FT-IR spectroscopy, dimensional analysis, and mechanical testing experiments were performed. Micro Computed Tomography imaging was also employed to study the morphology of the foams. Over the course of the aging study the foams decreased in size by about 0.001 inches per inch of foam. Micro CT showed the heterogeneous nature of the foam structure, likely resulting from flow effects during the molding process. The effect of aging on the compression and tensile strength of the foam was minor and no cause for concern. FT-IR spectroscopy was used to follow the foam chemistry. However, it was difficult to draw definitive conclusions about changes in the chemical nature of the materials due to large variability throughout the samples.

    12. Computational analysis of a three-dimensional High-Velocity Oxygen-Fuel (HVOF) Thermal Spray torch

      SciTech Connect (OSTI)

      Hassan, B.; Lopez, A.R.; Oberkampf, W.L.

      1995-07-01

      An analysis of a High-Velocity Oxygen-Fuel Thermal Spray torch is presented using computational fluid dynamics (CFD). Three-dimensional CFD results are presented for a curved aircap used for coating interior surfaces such as engine cylinder bores. The device analyzed is similar to the Metco Diamond Jet Rotating Wire torch, but wire feed is not simulated. To the authors' knowledge, these are the first published 3-D results of a thermal spray device. The feed gases are injected through an axisymmetric nozzle into the curved aircap. Argon is injected through the center of the nozzle. Pre-mixed propylene and oxygen are introduced from an annulus in the nozzle, while cooling air is injected between the nozzle and the interior wall of the aircap. The combustion process is modeled assuming instantaneous chemistry. A standard two-equation k-ε turbulence model is employed for the turbulent flow field. An implicit, iterative, finite volume numerical technique is used to solve the coupled conservation of mass, momentum, and energy equations for the gas in a sequential manner. Flow fields inside and outside the aircap are presented and discussed.

    13. Advanced Artificial Science. The development of an artificial science and engineering research infrastructure to facilitate innovative computational modeling, analysis, and application to interdisciplinary areas of scientific investigation.

      SciTech Connect (OSTI)

      Saffer, Shelley I.

      2014-12-01

      This is the final report for DOE award DE-SC0001132, Advanced Artificial Science: the development of an artificial science and engineering research infrastructure to facilitate innovative computational modeling, analysis, and application to interdisciplinary areas of scientific investigation. This document describes the achievement of the project goals and the resulting research made possible by this award.

    14. Kinetic analysis of the phenyl-shift reaction in $\beta$-O-4 lignin model compounds: A computational study.

      SciTech Connect (OSTI)

      Beste, Ariana; Buchanan III, A C

      2011-01-01

      The phenyl-shift reaction in $\beta$-phenethyl phenyl ether ($\beta$-PhCH$_2$CH$_2$OPh, $\beta$-PPE) is an integral step in the pyrolysis of PPE, which is a model compound for the $\beta$-O-4 linkage in lignin. We investigated the influence of naturally occurring substituents (hydroxy, methoxy) on the reaction rate by calculating relative rate constants using density functional theory in combination with transition state theory, including anharmonic corrections for low-frequency modes. The phenyl-shift reaction proceeds through an intermediate, and the overall rate constants were computed by invoking the steady-state approximation (its validity was confirmed). Substituents on the phenethyl group have little influence on the rate constants. If a methoxy substituent is located in the para position of the phenyl ring adjacent to the ether oxygen, the energies of the intermediate and second transition state are lowered, but the overall rate constant is not significantly altered. This is a consequence of the first transition, from pre-complex to intermediate, dominating the overall rate constant. o- and di-o-methoxy substituents accelerate the phenyl-migration rate compared to $\beta$-PPE.
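      For the generic two-step scheme the abstract describes (reactant R to intermediate I to product P), the steady-state approximation on the intermediate yields the standard overall rate constant; this is the textbook form, not an expression quoted from the paper:

        \[
        \mathrm{R} \;\underset{k_{-1}}{\overset{k_1}{\rightleftharpoons}}\; \mathrm{I} \;\xrightarrow{\;k_2\;}\; \mathrm{P},
        \qquad
        \frac{d[\mathrm{I}]}{dt} \approx 0
        \;\Longrightarrow\;
        k_{\mathrm{obs}} \;=\; \frac{k_1 k_2}{k_{-1} + k_2}.
        \]

      When $k_2 \gg k_{-1}$ this reduces to $k_{\mathrm{obs}} \approx k_1$, consistent with the abstract's observation that the first transition, from pre-complex to intermediate, dominates the overall rate constant.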

    15. Development of an Extensible Computational Framework for Centralized Storage and Distributed Curation and Analysis of Genomic Data Genome-scale Metabolic Models

      SciTech Connect (OSTI)

      Stevens, Rick

      2010-08-01

      The DOE funded KBase project of the Stevens group at the University of Chicago was focused on four high-level goals: (i) improve extensibility, accessibility, and scalability of the SEED framework for genome annotation, curation, and analysis; (ii) extend the SEED infrastructure to support transcription regulatory network reconstructions (2.1), metabolic model reconstruction and analysis (2.2), assertions linked to data (2.3), eukaryotic annotation (2.4), and growth phenotype prediction (2.5); (iii) develop a web-API for programmatic remote access to SEED data and services; and (iv) application of all tools to bioenergy-related genomes and organisms. In response to these goals, we enhanced and improved the ModelSEED resource within the SEED to enable new modeling analyses, including improved model reconstruction and phenotype simulation. We also constructed a new website and web-API for the ModelSEED. Further, we constructed a comprehensive web-API for the SEED as a whole. We also made significant strides in building infrastructure in the SEED to support the reconstruction of transcriptional regulatory networks by developing a pipeline to identify sets of consistently expressed genes based on gene expression data. We applied this pipeline to 29 organisms, computing regulons which were subsequently stored in the SEED database and made available on the SEED website (http://pubseed.theseed.org). We developed a new pipeline and database for the use of kmers, or short 8-residue oligomer sequences, to annotate genomes at high speed. Finally, we developed the PlantSEED, or a new pipeline for annotating primary metabolism in plant genomes. All of the work performed within this project formed the early building blocks for the current DOE Knowledgebase system, and the kmer annotation pipeline, plant annotation pipeline, and modeling tools are all still in use in KBase today.

    16. Computational Structural Mechanics

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      load-2 TRACC RESEARCH Computational Fluid Dynamics Computational Structural Mechanics Transportation Systems Modeling Computational Structural Mechanics Overview of CSM Computational structural mechanics is a well-established methodology for the design and analysis of many components and structures found in the transportation field. Modern finite-element models (FEMs) play a major role in these evaluations, and sophisticated software, such as the commercially available LS-DYNA® code, is

    17. Low-Dose Chest Computed Tomography for Lung Cancer Screening Among Hodgkin Lymphoma Survivors: A Cost-Effectiveness Analysis

      SciTech Connect (OSTI)

      Wattson, Daniel A.; Hunink, M.G. Myriam; DiPiro, Pamela J.; Das, Prajnan; Hodgson, David C.; Mauch, Peter M.; Ng, Andrea K.

      2014-10-01

      Purpose: Hodgkin lymphoma (HL) survivors face an increased risk of treatment-related lung cancer. Screening with low-dose computed tomography (LDCT) may allow detection of early stage, resectable cancers. We developed a Markov decision-analytic and cost-effectiveness model to estimate the merits of annual LDCT screening among HL survivors. Methods and Materials: Population databases and HL-specific literature informed key model parameters, including lung cancer rates and stage distribution, cause-specific survival estimates, and utilities. Relative risks accounted for radiation therapy (RT) technique, smoking status (>10 pack-years or current smokers vs not), age at HL diagnosis, time from HL treatment, and excess radiation from LDCTs. LDCT assumptions, including expected stage-shift, false-positive rates, and likely additional workup were derived from the National Lung Screening Trial and preliminary results from an internal phase 2 protocol that performed annual LDCTs in 53 HL survivors. We assumed a 3% discount rate and a willingness-to-pay (WTP) threshold of $50,000 per quality-adjusted life year (QALY). Results: Annual LDCT screening was cost effective for all smokers. A male smoker treated with mantle RT at age 25 achieved maximum QALYs by initiating screening 12 years post-HL, with a life expectancy benefit of 2.1 months and an incremental cost of $34,841/QALY. Among nonsmokers, annual screening produced a QALY benefit in some cases, but the incremental cost was not below the WTP threshold for any patient subsets. As age at HL diagnosis increased, earlier initiation of screening improved outcomes. Sensitivity analyses revealed that the model was most sensitive to the lung cancer incidence and mortality rates and expected stage-shift from screening. Conclusions: HL survivors are an important high-risk population that may benefit from screening, especially those treated in the past with large radiation fields including mantle or involved-field RT. Screening
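      The Markov model itself is far richer than can be shown here; the sketch below illustrates only the core arithmetic, discounting costs and QALYs at 3% and forming the incremental cost-effectiveness ratio (ICER). All yearly streams are hypothetical toy numbers, not the paper's outputs.

        def discounted(stream, rate=0.03):
            """Present value of a yearly stream, year 0 undiscounted."""
            return sum(v / (1 + rate) ** t for t, v in enumerate(stream))

        # Hypothetical 5-year (cost $, QALY) streams: screening vs. no screening.
        screen_cost,    screen_qaly    = [1200.0] * 5, [0.92] * 5
        no_screen_cost, no_screen_qaly = [0, 0, 0, 0, 3000.0], [0.92, 0.92, 0.92, 0.85, 0.78]

        d_cost = discounted(screen_cost) - discounted(no_screen_cost)
        d_qaly = discounted(screen_qaly) - discounted(no_screen_qaly)
        print(f"ICER = ${d_cost / d_qaly:,.0f} per QALY")  # ~$15,900, under $50,000 WTP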

    18. Computational Combustion

      SciTech Connect (OSTI)

      Westbrook, C K; Mizobuchi, Y; Poinsot, T J; Smith, P J; Warnatz, J

      2004-08-26

      Progress in the field of computational combustion over the past 50 years is reviewed. Particular attention is given to those classes of models that are common to most system modeling efforts, including fluid dynamics, chemical kinetics, liquid sprays, and turbulent flame models. The developments in combustion modeling are placed into the time-dependent context of the accompanying exponential growth in computer capabilities and Moore's Law. Superimposed on this steady growth, the occasional sudden advances in modeling capabilities are identified and their impacts are discussed. Integration of submodels into system models for spark ignition, diesel and homogeneous charge, compression ignition engines, surface and catalytic combustion, pulse combustion, and detonations are described. Finally, the current state of combustion modeling is illustrated by descriptions of a very large jet lifted 3D turbulent hydrogen flame with direct numerical simulation and 3D large eddy simulations of practical gas burner combustion devices.

    19. Computational analysis of storage synthesis in developing Brassica napus L. (oilseed rape) embryos: Flux variability analysis in relation to 13C-metabolic flux analysis

      SciTech Connect (OSTI)

      Hay, J.; Schwender, J.

      2011-08-01

      Plant oils are an important renewable resource, and seed oil content is a key agronomical trait that is in part controlled by the metabolic processes within developing seeds. A large-scale model of cellular metabolism in developing embryos of Brassica napus (bna572) was used to predict biomass formation and to analyze metabolic steady states by flux variability analysis under different physiological conditions. Predicted flux patterns are highly correlated with results from prior 13C metabolic flux analysis of B. napus developing embryos. Minor differences from the experimental results arose because bna572 always selected only one sugar and one nitrogen source from the available alternatives, and failed to predict the use of the oxidative pentose phosphate pathway. Flux variability, indicative of alternative optimal solutions, revealed alternative pathways that can provide pyruvate and NADPH to plastidic fatty acid synthesis. The nutritional values of different medium substrates were compared based on the overall carbon conversion efficiency (CCE) for the biosynthesis of biomass. Although bna572 has a functional nitrogen assimilation pathway via glutamate synthase, the simulations predict an unexpected role of glycine decarboxylase operating in the direction of NH4+ assimilation. Analysis of the light-dependent improvement of carbon economy predicted two metabolic phases. At very low light levels small reductions in CO2 efflux can be attributed to enzymes of the tricarboxylic acid cycle (oxoglutarate dehydrogenase, isocitrate dehydrogenase) and glycine decarboxylase. At higher light levels relevant to the 13C flux studies, ribulose-1,5-bisphosphate carboxylase activity is predicted to account fully for the light-dependent changes in carbon balance.
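      A toy sketch of flux variability analysis under the usual assumptions (steady state S v = 0, flux bounds, objective fixed at its optimum). The three-reaction network is hypothetical and vastly smaller than bna572.

        import numpy as np
        from scipy.optimize import linprog

        S = np.array([[1.0, -1.0, -1.0]])      # one metabolite, three reactions
        bounds = [(0, 10), (0, 10), (0, 10)]   # flux bounds

        # Maximize v3 (a "biomass" proxy); linprog minimizes, so negate.
        opt = linprog([0, 0, -1], A_eq=S, b_eq=[0], bounds=bounds, method="highs")
        v3_max = -opt.fun

        # Fix the objective at its optimum, then scan each flux's feasible range.
        A_eq, b_eq = np.vstack([S, [0, 0, 1]]), [0, v3_max]
        for j in range(3):
            c = np.zeros(3); c[j] = 1.0
            lo = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs").fun
            hi = -linprog(-c, A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs").fun
            print(f"v{j + 1}: [{lo:.1f}, {hi:.1f}]")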

    20. Evaluation of computer-based ultrasonic inservice inspection systems

      SciTech Connect (OSTI)

      Harris, R.V. Jr.; Angel, L.J.; Doctor, S.R.; Park, W.R.; Schuster, G.J.; Taylor, T.T.

      1994-03-01

      This report presents the principles, practices, terminology, and technology of computer-based ultrasonic testing for inservice inspection (UT/ISI) of nuclear power plants, with extensive use of drawings, diagrams, and UT images. The presentation is technical but assumes limited specific knowledge of ultrasonics or computers. The report is divided into 9 sections covering conventional UT, computer-based UT, and evaluation methodology. Conventional UT topics include coordinate axes, scanning, instrument operation, RF and video signals, and A-, B-, and C-scans. Computer-based topics include sampling, digitization, signal analysis, image presentation, SAFT, ultrasonic holography, transducer arrays, and data interpretation. An evaluation methodology for computer-based UT/ISI systems is presented, including questions, detailed procedures, and test block designs. Brief evaluations of several computer-based UT/ISI systems are given; supplementary volumes will provide detailed evaluations of selected systems.

    1. Scientific computations section monthly report, November 1993

      SciTech Connect (OSTI)

      Buckner, M.R.

      1993-12-30

      This progress report from the Savannah River Technology Center contains abstracts of papers from the computational modeling, applied statistics, applied physics, experimental thermal hydraulics, and packaging and transportation groups. Specific topics covered include engineering modeling and process simulation, criticality methods and analysis, and plutonium disposition.

    2. Compute nodes

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      A detailed hierarchical map of the topology of a compute node is available.

    3. Field-Generated Foamed Cement: Initial Collection, Computed Tomography, and Analysis

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Field-Generated Foamed Cement: Initial Collection, Computed Tomography, and Analysis. 20 July 2015. Office of Fossil Energy, NETL-TRS-5-2015.

    4. Computer System,

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      2016 Computer System, Cluster, and Networking Summer Institute, an undergraduate summer institute (http://isti.lanl.gov).

    5. SIAM Conference on Computational Science and Engineering

      SciTech Connect (OSTI)

      2003-01-01

      The Second SIAM Conference on Computational Science and Engineering was held in San Diego from February 10-12, 2003. Total conference attendance was 553. This is a 23% increase in attendance over the first conference. The focus of this conference was to draw attention to the tremendous range of major computational efforts on large problems in science and engineering, to promote the interdisciplinary culture required to meet these large-scale challenges, and to encourage the training of the next generation of computational scientists. Computational Science & Engineering (CS&E) is now widely accepted, along with theory and experiment, as a crucial third mode of scientific investigation and engineering design. Aerospace, automotive, biological, chemical, semiconductor, and other industrial sectors now rely on simulation for technical decision support. For federal agencies also, CS&E has become an essential support for decisions on resources, transportation, and defense. CS&E is, by nature, interdisciplinary. It grows out of physical applications and it depends on computer architecture, but at its heart are powerful numerical algorithms and sophisticated computer science techniques. From an applied mathematics perspective, much of CS&E has involved analysis, but the future surely includes optimization and design, especially in the presence of uncertainty. Another mathematical frontier is the assimilation of very large data sets through such techniques as adaptive multi-resolution, automated feature search, and low-dimensional parameterization. The themes of the 2003 conference included, but were not limited to: Advanced Discretization Methods; Computational Biology and Bioinformatics; Computational Chemistry and Chemical Engineering; Computational Earth and Atmospheric Sciences; Computational Electromagnetics; Computational Fluid Dynamics; Computational Medicine and Bioengineering; Computational Physics and Astrophysics; Computational Solid Mechanics and Materials; CS

    6. COMPUTATIONAL SCIENCE CENTER

      SciTech Connect (OSTI)

      DAVENPORT, J.

      2005-11-01

      The Brookhaven Computational Science Center brings together researchers in biology, chemistry, physics, and medicine with applied mathematicians and computer scientists to exploit the remarkable opportunities for scientific discovery which have been enabled by modern computers. These opportunities are especially great in computational biology and nanoscience, but extend throughout science and technology and include, for example, nuclear and high energy physics, astrophysics, materials and chemical science, sustainable energy, environment, and homeland security. To achieve our goals we have established a close alliance with applied mathematicians and computer scientists at Stony Brook and Columbia Universities.

    7. Scalable optical quantum computer

      SciTech Connect (OSTI)

      Manykin, E A; Mel'nichenko, E V [Institute for Superconductivity and Solid-State Physics, Russian Research Centre 'Kurchatov Institute', Moscow (Russian Federation)

      2014-12-31

      A way of designing a scalable optical quantum computer based on the photon echo effect is proposed. Individual rare earth ions Pr³⁺, regularly located in the lattice of the orthosilicate (Y₂SiO₅) crystal, are suggested to be used as optical qubits. Operations with qubits are performed using coherent and incoherent laser pulses. The operation protocol includes both the method of measurement-based quantum computations and the technique of optical computations. Modern hybrid photon echo protocols, which provide a sufficient quantum efficiency when reading recorded states, are considered the most promising for quantum computations and communications. (quantum computer)

    8. NERSC Enhances PDSF, Genepool Computing Capabilities

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Computing Capabilities NERSC Enhances PDSF, Genepool Computing Capabilities Linux cluster expansion speeds data access and analysis January 3, 2014 Christmas came early for...

    9. A Systematic Comprehensive Computational Model for Stake Estimation in Mission Assurance: Applying Cyber Security Econometrics System (CSES) to Mission Assurance Analysis Protocol (MAAP)

      SciTech Connect (OSTI)

      Abercrombie, Robert K; Sheldon, Frederick T; Grimaila, Michael R

      2010-01-01

      In earlier works, we presented a computational infrastructure that allows an analyst to estimate the security of a system in terms of the loss that each stakeholder stands to sustain as a result of security breakdowns. In this paper, we discuss how this infrastructure can be used in the subject domain of mission assurance, defined as the full life-cycle engineering process used to identify and mitigate design, production, test, and field support deficiencies that threaten mission success. We address the opportunity to apply the Cyberspace Security Econometrics System (CSES) to Carnegie Mellon University and Software Engineering Institute's Mission Assurance Analysis Protocol (MAAP) in this context.

    10. Computing Sciences

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Division The Computational Research Division conducts research and development in mathematical modeling and simulation, algorithm design, data storage, management and...

    11. Computing Resources

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Cluster-Image TRACC RESEARCH Computational Fluid Dynamics Computational Structural Mechanics Transportation Systems Modeling Computing Resources The TRACC Computational Clusters With the addition of a new cluster called Zephyr that was made operational in September of this year (2012), TRACC now offers two clusters to choose from: Zephyr and our original cluster that has now been named Phoenix. Zephyr was acquired from Atipa technologies, and it is a 92-node system with each node having two AMD

    12. Initial Business Case Analysis of Two Integrated Heat Pump HVAC Systems for Near-Zero-Energy Homes -- Update to Include Analyses of an Economizer Option and Alternative Winter Water Heating Control Option

      SciTech Connect (OSTI)

      Baxter, Van D

      2006-12-01

      The long-range strategic goal of the Department of Energy's Building Technologies (DOE/BT) Program is to create, by 2020, technologies and design approaches that enable the construction of net-zero energy homes at low incremental cost (DOE/BT 2005). A net-zero energy home (NZEH) is a residential building with greatly reduced needs for energy through efficiency gains, with the balance of energy needs supplied by renewable technologies. While initially focused on new construction, these technologies and design approaches are intended to apply to buildings constructed before 2020 as well, resulting in substantial reductions in energy use for all building types and ages. DOE/BT's Emerging Technologies (ET) team is working to support this strategic goal by identifying and developing advanced heating, ventilating, air-conditioning, and water heating (HVAC/WH) technology options applicable to NZEHs. Although the energy efficiency of heating, ventilating, and air-conditioning (HVAC) equipment has increased substantially in recent years, new approaches are needed to continue this trend. Dramatic efficiency improvements are necessary to enable progress toward the NZEH goals, and will require a radical rethinking of opportunities to improve system performance. The large reductions in HVAC energy consumption necessary to support the NZEH goals require a systems-oriented analysis approach that characterizes each element of energy consumption, identifies alternatives, and determines the most cost-effective combination of options. In particular, HVAC equipment must be developed that addresses the range of special needs of NZEH applications in the areas of reduced HVAC and water heating energy use, humidity control, ventilation, uniform comfort, and ease of zoning. In FY05 ORNL conducted an initial Stage 1 (Applied Research) scoping assessment of HVAC/WH system options for future NZEHs to help DOE/BT identify and prioritize alternative approaches for further development

    13. computational-fluid-dynamics-training

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Table of Contents (Course | Date | Location):
      Advanced Hydraulic and Aerodynamic Analysis Using CFD | March 27-28, 2013 | Argonne TRACC, Argonne, IL
      Computational Hydraulics and Aerodynamics using STAR-CCM+ for CFD Analysis | March 21-22, 2012 | Argonne TRACC, Argonne, IL
      Computational Hydraulics and Aerodynamics using STAR-CCM+ for CFD Analysis | March 30-31, 2011 | Argonne TRACC, Argonne, IL
      Computational Hydraulics for Transportation Workshop | September 23-24, 2009 | Argonne TRACC, West Chicago, IL

    14. Multi-processor including data flow accelerator module

      DOE Patents [OSTI]

      Davidson, George S.; Pierce, Paul E.

      1990-01-01

      An accelerator module for a data flow computer includes an intelligent memory. The module is added to a multiprocessor arrangement and uses a shared tagged memory architecture in the data flow computer. The intelligent memory module assigns locations for holding data values in correspondence with arcs leading to a node in a data dependency graph. Each primitive computation is associated with a corresponding memory cell, including a number of slots for operands needed to execute a primitive computation, a primitive identifying pointer, and linking slots for distributing the result of the cell computation to other cells requiring that result as an operand. Circuitry is provided for utilizing tag bits to determine automatically when all operands required by a processor are available and for scheduling the primitive for execution in a queue. Each memory cell of the module may be associated with any of the primitives, and the particular primitive to be executed by the processor associated with the cell is identified by providing an index, such as the cell number for the primitive, to the primitive lookup table of starting addresses. The module thus serves to perform functions previously performed by a number of sections of data flow architectures and coexists with conventional shared memory therein. A multiprocessing system including the module operates in a hybrid mode, wherein the same processing modules are used to perform some processing in a sequential mode, under immediate control of an operating system, while performing other processing in a data flow mode.
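
      As a rough illustration of the firing rule described above, here is a minimal sketch in Python of a tagged-memory cell with operand slots, tag bits, and linking slots; the class, the toy scheduler, and the example graph are invented for illustration and are not the patented hardware design.

      ```python
      # Illustrative sketch only: a cell fires once every operand slot is
      # tagged as filled, then forwards its result via its linking slots.
      from collections import deque

      class Cell:
          def __init__(self, primitive, n_operands, links):
              self.primitive = primitive           # primitive computation to run
              self.operands = [None] * n_operands  # operand slots
              self.tags = [False] * n_operands     # tag bits: operand present?
              self.links = links                   # (cell, slot) pairs fed by our result

      ready = deque()  # queue of cells scheduled for execution

      def deliver(cell, slot, value):
          """Store an operand; schedule the cell once all tag bits are set."""
          cell.operands[slot] = value
          cell.tags[slot] = True
          if all(cell.tags):
              ready.append(cell)

      def run():
          while ready:
              cell = ready.popleft()
              result = cell.primitive(*cell.operands)
              for target, slot in cell.links:
                  deliver(target, slot, result)

      # Tiny data-dependency graph computing (a + b) * c.
      out = Cell(lambda x: print("result:", x), 1, [])
      mul = Cell(lambda x, y: x * y, 2, [(out, 0)])
      add = Cell(lambda x, y: x + y, 2, [(mul, 0)])
      deliver(add, 0, 2); deliver(add, 1, 3)  # a = 2, b = 3
      deliver(mul, 1, 4)                      # c = 4
      run()                                   # prints: result: 20
      ```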

    15. Computer Security

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      computer security Computer Security All JLF participants must fully comply with all LLNL computer security regulations and procedures. A laptop entering or leaving B-174 that is for the sole use of a US citizen, is configured accordingly, and requires no IP address need not be registered for use in the JLF. By September 2009, it is expected that computers for use by Foreign National Investigators will have no special provisions. Notify maricle1@llnl.gov of all other computers entering, leaving, or being moved

    16. Compute Nodes

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Compute Nodes Compute Nodes Quad-core AMD Opteron processor Compute Node Configuration 9,572 nodes 1 quad-core AMD 'Budapest' 2.3 GHz processor per node 4 cores per node (38,288 total cores) 8 GB DDR3 800 MHz memory per node Peak Gflop rate 9.2 Gflops/core 36.8 Gflops/node 352 Tflops for the entire machine Each core has its own L1 and L2 caches, of 64 KB and 512 KB respectively 2 MB L3 cache shared among the 4 cores Compute Node Software By default the compute nodes run a restricted low-overhead

    17. Argonne's Laboratory computing resource center : 2006 annual report.

      SciTech Connect (OSTI)

      Bair, R. B.; Kaushik, D. K.; Riley, K. R.; Valdes, J. V.; Drugan, C. D.; Pieper, G. P.

      2007-05-31

      Argonne National Laboratory founded the Laboratory Computing Resource Center (LCRC) in the spring of 2002 to help meet pressing program needs for computational modeling, simulation, and analysis. The guiding mission is to provide critical computing resources that accelerate the development of high-performance computing expertise, applications, and computations to meet the Laboratory's challenging science and engineering missions. In September 2002 the LCRC deployed a 350-node computing cluster from Linux NetworX to address Laboratory needs for mid-range supercomputing. This cluster, named 'Jazz', achieved over a teraflop of computing power (10{sup 12} floating-point calculations per second) on standard tests, making it the Laboratory's first terascale computing system and one of the 50 fastest computers in the world at the time. Jazz was made available to early users in November 2002 while the system was undergoing development and configuration. In April 2003, Jazz was officially made available for production operation. Since then, the Jazz user community has grown steadily. By the end of fiscal year 2006, there were 76 active projects on Jazz involving over 380 scientists and engineers. These projects represent a wide cross-section of Laboratory expertise, including work in biosciences, chemistry, climate, computer science, engineering applications, environmental science, geoscience, information science, materials science, mathematics, nanoscience, nuclear engineering, and physics. Most important, many projects have achieved results that would have been unobtainable without such a computing resource. The LCRC continues to foster growth in the computational science and engineering capability and quality at the Laboratory. Specific goals include expansion of the use of Jazz to new disciplines and Laboratory initiatives, teaming with Laboratory infrastructure providers to offer more scientific data management capabilities, expanding Argonne staff use of national

    18. Magnetohydrodynamic Models of Accretion Including Radiation Transport |

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Argonne Leadership Computing Facility Snapshot of the global structure of a radiation-dominated accretion flow around a black hole computed using the Athena++ code Snapshot of the global structure of a radiation-dominated accretion flow around a black hole computed using the Athena++ code. Left half of the image shows the density (in units of 0.01g/cm^3), and the right half shows the radiation energy density (in units of the energy density for a 10^7 degree black body). Coordinate axes are

    19. Parallel computing in enterprise modeling.

      SciTech Connect (OSTI)

      Goldsby, Michael E.; Armstrong, Robert C.; Shneider, Max S.; Vanderveen, Keith; Ray, Jaideep; Heath, Zach; Allan, Benjamin A.

      2008-08-01

      This report presents the results of our efforts to apply high-performance computing to entity-based simulations with a multi-use plugin for parallel computing. We use the term 'entity-based simulation' to describe a class of simulation which includes both discrete event simulation and agent-based simulation. What simulations of this class share, and what differs from more traditional models, is that the result sought is emergent from a large number of contributing entities. Logistic, economic and social simulations are members of this class where things or people are organized or self-organize to produce a solution. Entity-based problems never have an a priori ergodic principle that will greatly simplify calculations. Because the results of entity-based simulations can only be realized at scale, scalable computing is de rigueur for large problems. Having said that, the absence of a spatial organizing principle makes the decomposition of the problem onto processors problematic. In addition, practitioners in this domain commonly use the Java programming language, which presents its own problems in a high-performance setting. The plugin we have developed, called the Parallel Particle Data Model, overcomes both of these obstacles and is now being used by two Sandia frameworks: the Decision Analysis Center, and the Seldon social simulation facility. While the ability to engage U.S.-sized problems is now available to the Decision Analysis Center, this plugin is central to the success of Seldon. Because Seldon relies on computationally intensive cognitive sub-models, this work is necessary to achieve the scale necessary for realistic results. With the recent upheavals in the financial markets, and the inscrutability of terrorist activity, this simulation domain will likely need a capability with ever greater fidelity. High-performance computing will play an important part in enabling that greater fidelity.

    20. Synchronizing compute node time bases in a parallel computer

      DOE Patents [OSTI]

      Chen, Dong; Faraj, Daniel A; Gooding, Thomas M; Heidelberger, Philip

      2015-01-27

      Synchronizing time bases in a parallel computer that includes compute nodes organized for data communications in a tree network, where one compute node is designated as a root, and, for each compute node: calculating data transmission latency from the root to the compute node; configuring a thread as a pulse waiter; initializing a wakeup unit; and performing a local barrier operation; upon each node completing the local barrier operation, entering, by all compute nodes, a global barrier operation; upon all nodes entering the global barrier operation, sending, to all the compute nodes, a pulse signal; and for each compute node upon receiving the pulse signal: waking, by the wakeup unit, the pulse waiter; setting a time base for the compute node equal to the data transmission latency between the root node and the compute node; and exiting the global barrier operation.
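
      The control flow in the claim can be mimicked in software. The sketch below uses mpi4py (an assumption for illustration; the patent targets dedicated wakeup-unit hardware on Blue Gene-class machines, not MPI), with a broadcast standing in for the pulse signal.

      ```python
      # Minimal sketch of the flow: measure root-to-node latency, barrier,
      # wait for a "pulse" (broadcast), then set the local time base.
      from mpi4py import MPI

      comm = MPI.COMM_WORLD
      rank = comm.Get_rank()

      if rank == 0:
          latency = 0.0
          for peer in range(1, comm.Get_size()):
              t0 = MPI.Wtime()
              comm.send(None, dest=peer, tag=1)     # ping
              comm.recv(source=peer, tag=2)         # pong
              # one-way latency taken as half the round trip (a simplification)
              comm.send((MPI.Wtime() - t0) / 2.0, dest=peer, tag=3)
      else:
          comm.recv(source=0, tag=1)
          comm.send(None, dest=0, tag=2)
          latency = comm.recv(source=0, tag=3)

      comm.Barrier()               # stands in for the local barrier operation
      comm.bcast(None, root=0)     # stands in for the global pulse signal

      time_base = MPI.Wtime() - latency  # offset local clock by measured latency
      print(f"rank {rank}: time base set (latency {latency:.2e} s)")
      ```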

    1. Synchronizing compute node time bases in a parallel computer

      DOE Patents [OSTI]

      Chen, Dong; Faraj, Daniel A; Gooding, Thomas M; Heidelberger, Philip

      2014-12-30

      Synchronizing time bases in a parallel computer that includes compute nodes organized for data communications in a tree network, where one compute node is designated as a root, and, for each compute node: calculating data transmission latency from the root to the compute node; configuring a thread as a pulse waiter; initializing a wakeup unit; and performing a local barrier operation; upon each node completing the local barrier operation, entering, by all compute nodes, a global barrier operation; upon all nodes entering the global barrier operation, sending, to all the compute nodes, a pulse signal; and for each compute node upon receiving the pulse signal: waking, by the wakeup unit, the pulse waiter; setting a time base for the compute node equal to the data transmission latency between the root node and the compute node; and exiting the global barrier operation.

    2. Spectroscopic and computational analysis of the molecular interactions in the ionic liquid ion pair [BMP]⁺[TFSI]⁻

      SciTech Connect (OSTI)

      Mao, James X; Nulwala, Hunaid B; Luebke, David R; Damodaran, Krishnan

      2012-11-01

      1-Butyl-1-methylpyrrolidinium bis(trifluoromethylsulfonyl)imide ([BMP]⁺[TFSI]⁻) ion pairs were studied using DFT at the B3LYP/6-31+G(d) level. Nine locally stable conformations of the ion pair were located. In the most stable conformation, [TFSI]⁻ takes a cis conformation and lies below the pyrrolidinium ring. Atoms-in-molecules (AIM) and electron density analysis indicated the existence of nine hydrogen bonds. Interaction energies were recalculated at the second-order Møller–Plesset (MP2) level to show the importance of dispersion interaction. Further investigation through natural bond orbital (NBO) analysis provided insight into the importance of charge transfer interactions in the ion pair. Harmonic vibrations of the ion pair were calculated and compared with vibrations of the free ions as well as the experimental infrared spectrum. Assignments and frequency shifts are discussed in light of the inter-ionic interactions.

    3. Exascale Computing

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      DesignForward FastForward CAL Partnerships Shifter: User Defined Images Archive APEX Home » R & D » Exascale Computing Exascale Computing Moving forward into the exascale era, NERSC users will place increased demands on NERSC computational facilities. Users will be facing increased complexity in the memory subsystem and node architecture. System designs and programming models will have to evolve to face these new challenges. NERSC staff are active in current initiatives addressing

    4. Computational Analysis of the Pyrolysis of β-O4 Lignin Model Compounds: Concerted vs. Homolytic Fragmentation

      SciTech Connect (OSTI)

      Clark, J. M.; Robichaud, D. J.; Nimlos, M. R.

      2012-01-01

      The thermochemical conversion of biomass to liquid transportation fuels is a very attractive technology for expanding the utilization of carbon neutral processes and reducing dependency on fossil fuel resources. As with all such emerging technologies, biomass conversion through gasification or pyrolysis has a number of obstacles that need to be overcome to make these processes cost-competitive with the refining of fossil fuels. Our current efforts have focused on the investigation of the thermochemistry of the linkages between lignin units using ab initio calculations on dimeric lignin model compounds. All calculations were carried out using M062X density functional theory at the 6-311++G(d,p) basis set. The M062X method has been shown to be consistent with the CBS-QB3 method while being significantly less computationally expensive. To date we have only completed the study on the β-O4 compounds. The theoretical calculations performed in the study indicate that concerted elimination pathways dominate over bond homolysis reactions under typical pyrolysis conditions. However, this does not mean that concerted elimination will be the dominant loss process for lignin. Bimolecular radical chemistry could very well dwarf the unimolecular pathways investigated in this study. These concerted pathways tend to form stable, reasonably non-reactive products that would be better suited to producing a fungible bio-oil for the production of liquid transportation fuels.

    5. Computer Science

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Cite Seer Department of Energy provided open access science research citations in chemistry, physics, materials, engineering, and computer science IEEE Xplore Full text...

    6. Compute Nodes

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      low-overhead operating system optimized for high performance computing called "Cray Linux Environment" (CLE). This OS supports only a limited number of system calls and UNIX...

    7. Computational Science

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      ... Advanced Materials Laboratory Center for Integrated Nanotechnologies Combustion Research Facility Computational Science Research Institute Joint BioEnergy Institute About EC News ...

    8. Attack Methodology Analysis: Emerging Trends in Computer-Based Attack Methodologies and Their Applicability to Control System Networks

      SciTech Connect (OSTI)

      Bri Rolston

      2005-06-01

      Threat characterization is a key component in evaluating the threat faced by control systems. Without a thorough understanding of the threat faced by critical infrastructure networks, adequate resources cannot be allocated or directed effectively to the defense of these systems. Traditional methods of threat analysis focus on identifying the capabilities and motivations of a specific attacker, assessing the value the adversary would place on targeted systems, and deploying defenses according to the threat posed by the potential adversary. Too many effective exploits and tools exist, easily accessible to anyone with an Internet connection, minimal technical skills, and a modest level of motivation, for the field of potential adversaries to be narrowed effectively. Understanding how hackers evaluate new IT security research and incorporate significant new ideas into their own tools provides a means of anticipating how IT systems are most likely to be attacked in the future. This research, Attack Methodology Analysis (AMA), could supply pertinent information on how to detect and stop new types of attacks. Since the exploit methodologies and attack vectors developed in the general Information Technology (IT) arena can be converted for use against control system environments, assessing areas in which cutting-edge exploit development and remediation techniques are occurring can provide significant intelligence for control system network exploitation and defense, and a means of assessing threat without identifying specific capabilities of individual opponents. Attack Methodology Analysis begins with the study of the exploit technology and attack methodologies being developed within the black-hat and white-hat segments of the Information Technology (IT) security research community. Once a solid understanding of the cutting-edge security research is established, emerging trends in attack methodology can be identified and the gap between

    9. System Advisor Model Includes Analysis of Hybrid CSP Option ...

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      concepts related to power generation have been missing in the System Advisor Model (SAM). One such concept, until now, is a hybrid integrated solar combined-cycle (ISCC)...

    10. Computing and Computational Sciences Directorate - Contacts

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Home About Us Contacts Jeff Nichols Associate Laboratory Director Computing and Computational Sciences Becky Verastegui Directorate Operations Manager Computing and...

    11. Computing and Computational Sciences Directorate - Divisions

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      CCSD Divisions Computational Sciences and Engineering Computer Sciences and Mathematics Information Technology Services Joint Institute for Computational Sciences National Center for Computational Sciences

    12. Computation supporting biodefense

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Conference on High-Speed Computing LANL / LLNL / SNL, Salishan Lodge, Gleneden Beach, Oregon, 24 April 2003. Murray Wolinsky, murray@lanl.gov. The Role of Computation in Biodefense: 1. Biothreat 101; 2. Bioinformatics 101, examples; 3. Sequence analysis: mpiBLAST (Feng); 4. Detection: KPATH (Slezak); 5. Protein structure: ROSETTA (Strauss); 6. Real-time epidemiology: EpiSIMS (Eubank); 7. Forensics: VESPA (Myers, Korber); 8. Needs: system-level analytical capabilities, enhanced phylogenetic algorithms, novel

    13. Power throttling of collections of computing elements

      DOE Patents [OSTI]

      Bellofatto, Ralph E.; Coteus, Paul W.; Crumley, Paul G.; Gara, Alan G.; Giampapa, Mark E.; Gooding; Thomas M.; Haring, Rudolf A.; Megerian, Mark G.; Ohmacht, Martin; Reed, Don D.; Swetz, Richard A.; Takken, Todd

      2011-08-16

      An apparatus and method for controlling power usage in a computer includes a plurality of computers communicating with a local control device, and a power source supplying power to the local control device and the computer. A plurality of sensors communicate with the computer for ascertaining power usage of the computer, and a system control device communicates with the computer for controlling power usage of the computer.
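
      The feedback loop implied by the claim (sensors report usage, the control device imposes caps) can be sketched in a few lines; everything here, including the 10%-per-step trim policy, is an invented illustration rather than the patented apparatus.

      ```python
      # Toy control step: cap the hungriest nodes until total draw fits budget.
      def control_step(sensor_watts, budget_watts, throttle):
          """sensor_watts: {node: measured W}; throttle: {node: duty cap 0..1}."""
          total = sum(sensor_watts.values())
          for node, watts in sorted(sensor_watts.items(), key=lambda kv: -kv[1]):
              if total <= budget_watts:
                  break
              reduction = min(watts * 0.1, total - budget_watts)  # trim up to 10%
              throttle[node] = max(0.0, throttle.get(node, 1.0) - reduction / watts)
              total -= reduction
          return throttle

      caps = control_step({"n0": 120.0, "n1": 95.0, "n2": 80.0},
                          budget_watts=270.0, throttle={})
      print(caps)  # nodes capped until the 25 W overage is shed
      ```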

    14. Computing architecture for autonomous microgrids

      DOE Patents [OSTI]

      Goldsmith, Steven Y.

      2015-09-29

      A computing architecture that facilitates autonomously controlling operations of a microgrid is described herein. A microgrid network includes numerous computing devices that execute intelligent agents, each of which is assigned to a particular entity (load, source, storage device, or switch) in the microgrid. The intelligent agents can execute in accordance with predefined protocols to collectively perform computations that facilitate uninterrupted control of the microgrid.
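
      To make the agent-per-entity idea concrete, here is a toy protocol step in Python; the classes, the single "net the offers, let storage absorb the imbalance" rule, and the numbers are invented for illustration only.

      ```python
      # Illustrative sketch: one agent per microgrid entity, executing a
      # fixed protocol step that balances generation against load.
      class Agent:
          def __init__(self, entity, kind, capacity_kw):
              self.entity, self.kind, self.capacity_kw = entity, kind, capacity_kw

          def offer(self):
              # sources offer power; loads request it (negative offer)
              return self.capacity_kw if self.kind == "source" else -self.capacity_kw

      def dispatch(agents):
          """Net the source/load offers; storage agents absorb the imbalance."""
          net = sum(a.offer() for a in agents if a.kind in ("source", "load"))
          for a in agents:
              if a.kind == "storage":
                  charge = max(-a.capacity_kw, min(a.capacity_kw, net))
                  net -= charge
                  verb = "charging" if charge > 0 else "discharging"
                  print(f"{a.entity}: {verb} {abs(charge)} kW")
          return net  # any remainder would be shed or imported

      agents = [Agent("pv1", "source", 50), Agent("homes", "load", 65),
                Agent("battery", "storage", 30)]
      print("unserved:", dispatch(agents))  # battery discharges 15 kW; unserved: 0
      ```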

    15. Computing architecture for autonomous microgrids

      DOE Patents [OSTI]

      Goldsmith, Steven Y.

      2015-09-29

      A computing architecture that facilitates autonomously controlling operations of a microgrid is described herein. A microgrid network includes numerous computing devices that execute intelligent agents, each of which is assigned to a particular entity (load, source, storage device, or switch) in the microgrid. The intelligent agents can execute in accordance with predefined protocols to collectively perform computations that facilitate uninterrupted control of the microgrid.

    16. Automotive Underhood Thermal Management Analysis Using 3-D Coupled Thermal-Hydrodynamic Computer Models: Thermal Radiation Modeling

      SciTech Connect (OSTI)

      Pannala, S; D'Azevedo, E; Zacharia, T

      2002-02-26

      The goal of the radiation modeling effort was to develop and implement a radiation algorithm that is fast and accurate for the underhood environment. As part of this CRADA, a net-radiation model was chosen to simulate radiative heat transfer in the underhood of a car. The assumptions (diffuse-gray and uniform radiative properties in each element) reduce the problem tremendously, and all the view factors for radiation thermal calculations can be calculated once and for all at the beginning of the simulation. The cost of online integration of heat exchanges due to radiation is found to be less than 15% of the baseline CHAD code and thus very manageable. The off-line view factor calculation is constructed to be very modular and has been completely integrated to read CHAD grid files, and the output from this code can be read into the latest version of CHAD. Further integration has to be performed to accomplish the same with STAR-CD. The main outcome of this effort is a highly scalable and portable simulation capability to model view factors for the underhood environment (e.g., a view factor calculation that took 14 hours on a single processor took only 14 minutes on 64 processors). The code has also been validated using a simple test case where analytical solutions are available. This simulation capability gives underhood designers in the automotive companies the ability to account for thermal radiation, which is usually critical in the underhood environment and also turns out to be one of the most computationally expensive components of underhood simulations. This report starts with the original work plan as elucidated in the proposal in section B. This is followed by the technical work plan to accomplish the goals of the project in section C. In section D, background to the current work is provided with references to the previous efforts this project leverages. The results are discussed in section E. This report ends with conclusions and future scope of
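
      The diffuse-gray assumption reduces radiative exchange to one linear solve against the precomputed view factors, which is why the online cost stays small. A minimal sketch of that net-radiation (radiosity) solve, with invented surface data, follows.

      ```python
      # Net-radiation method for a diffuse-gray enclosure:
      # J = eps*sigma*T^4 + (1 - eps) * F @ J, solved as a linear system.
      import numpy as np

      SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

      def radiosities(F, eps, T):
          """F: view factor matrix (rows sum to 1); eps: emissivities; T: K."""
          emission = eps * SIGMA * T**4
          A = np.eye(len(T)) - (1.0 - eps)[:, None] * F
          return np.linalg.solve(A, emission)

      # Illustrative three-surface enclosure.
      F = np.array([[0.0, 0.6, 0.4],
                    [0.3, 0.0, 0.7],
                    [0.2, 0.8, 0.0]])
      eps = np.array([0.8, 0.9, 0.7])
      T = np.array([900.0, 400.0, 350.0])

      J = radiosities(F, eps, T)
      q = eps / (1 - eps) * (SIGMA * T**4 - J)  # net flux per unit area, W/m^2
      print("radiosities:", J)
      print("net fluxes:", q)
      ```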

    17. Topic A Note: Includes STEPS Subtopic

      Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

      Topic A Note: Includes STEPS Subtopic 33 Total Projects Developing and Enhancing Workforce Training Programs

    18. Economic Model For a Return on Investment Analysis of United States Government High Performance Computing (HPC) Research and Development (R & D) Investment

      SciTech Connect (OSTI)

      Joseph, Earl C.; Conway, Steve; Dekate, Chirag

      2013-09-30

      This study investigated how high-performance computing (HPC) investments can improve economic success and increase scientific innovation. This research focused on the common good and provided uses for DOE, other government agencies, industry, and academia. The study created two unique economic models and an innovation index: (1) a macroeconomic model that depicts the way HPC investments result in economic advancements in the form of ROI in revenue (GDP), profits (and cost savings), and jobs; (2) a macroeconomic model that depicts the way HPC investments result in basic and applied innovations, looking at variations by sector, industry, country, and organization size; and (3) a new innovation index that provides a means of measuring and comparing innovation levels. Key findings of the pilot study include: IDC collected the required data across a broad set of organizations, with enough detail to create these models and the innovation index. The research also developed an expansive list of HPC success stories.

    19. Compute Nodes

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Nodes Quad CoreAMDOpteronprocessor Compute Node Configuration 9,572 nodes 1 quad-core AMD 'Budapest' 2.3 GHz processor per node 4 cores per node (38,288 total cores) 8 GB...

    20. Exascale Computing

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Computing Exascale Computing CoDEx Project: A Hardware/Software Codesign Environment for the Exascale Era The next decade will see a rapid evolution of HPC node architectures as power and cooling constraints are limiting increases in microprocessor clock speeds and constraining data movement. Applications and algorithms will need to change and adapt as node architectures evolve. A key element of the strategy as we move forward is the co-design of applications, architectures and programming

    1. LHC Computing

      SciTech Connect (OSTI)

      Lincoln, Don

      2015-07-28

      The LHC is the world’s highest energy particle accelerator and scientists use it to record an unprecedented amount of data. This data is recorded in electronic format and it requires an enormous computational infrastructure to convert the raw data into conclusions about the fundamental rules that govern matter. In this video, Fermilab’s Dr. Don Lincoln gives us a sense of just how much data is involved and the incredible computer resources that makes it all possible.

    2. Collectively loading an application in a parallel computer

      DOE Patents [OSTI]

      Aho, Michael E.; Attinella, John E.; Gooding, Thomas M.; Miller, Samuel J.; Mundy, Michael B.

      2016-01-05

      Collectively loading an application in a parallel computer, the parallel computer comprising a plurality of compute nodes, including: identifying, by a parallel computer control system, a subset of compute nodes in the parallel computer to execute a job; selecting, by the parallel computer control system, one of the subset of compute nodes in the parallel computer as a job leader compute node; retrieving, by the job leader compute node from computer memory, an application for executing the job; and broadcasting, by the job leader to the subset of compute nodes in the parallel computer, the application for executing the job.
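
      The pattern in the claim (one reader, one broadcast) is easy to mirror with MPI; the sketch below uses mpi4py and a stand-in binary path, both assumptions for illustration rather than the patented control-system mechanism.

      ```python
      # One job-leader rank reads the application image; a broadcast hands
      # it to every other node, so the file system is hit once, not N times.
      from mpi4py import MPI

      comm = MPI.COMM_WORLD   # stands in for the selected subset of compute nodes
      LEADER = 0              # the job leader compute node

      if comm.Get_rank() == LEADER:
          with open("/bin/echo", "rb") as f:   # hypothetical application binary
              image = f.read()
      else:
          image = None

      image = comm.bcast(image, root=LEADER)   # every node now holds the image
      print(f"rank {comm.Get_rank()}: received {len(image)} bytes")
      ```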

    3. Dedicated heterogeneous node scheduling including backfill scheduling

      DOE Patents [OSTI]

      Wood, Robert R.; Eckert, Philip D.; Hommes, Gregg

      2006-07-25

      A method and system for job backfill scheduling of dedicated heterogeneous nodes in a multi-node computing environment. Heterogeneous nodes are grouped into homogeneous node sub-pools. For each sub-pool, a free node schedule (FNS) is created to chart the number of free nodes over time. For each prioritized job, the FNS of the sub-pools having nodes usable by that job is used to determine the earliest time range (ETR) capable of running the job. Once the ETR is determined for a particular job, the job is scheduled to run in that ETR. If the ETR determined for a lower priority job (LPJ) has a start time earlier than a higher priority job (HPJ), then the LPJ is scheduled in that ETR if it would not disturb the anticipated start times of any HPJ previously scheduled for a future time. Thus, efficient utilization and throughput of such computing environments may be increased by utilizing resources otherwise remaining idle.
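
      A deliberately simplified sketch of the earliest-time-range search follows (one homogeneous sub-pool, unit time steps, reservations made in priority order; the patented system additionally tracks multiple sub-pools and anticipated start times).

      ```python
      # Walk a free-node schedule; give each job, in priority order, the
      # earliest window with enough nodes for its whole runtime. Because
      # higher-priority reservations are placed first, a backfilled LPJ
      # can only use nodes they leave idle.
      def earliest_time_range(free, need_nodes, runtime):
          """free[t] = nodes free at step t; return earliest feasible start."""
          for start in range(len(free) - runtime + 1):
              if all(free[t] >= need_nodes for t in range(start, start + runtime)):
                  return start
          return None

      def schedule(jobs, horizon, total_nodes):
          """jobs: (name, nodes, runtime), highest priority first."""
          free = [total_nodes] * horizon
          plan = {}
          for name, nodes, runtime in jobs:
              start = earliest_time_range(free, nodes, runtime)
              if start is not None:
                  for t in range(start, start + runtime):
                      free[t] -= nodes
                  plan[name] = (start, start + runtime)
          return plan

      # The 2-node low-priority job backfills at t=0 on nodes hpj1 leaves
      # idle, without disturbing hpj2's reservation at t=2.
      jobs = [("hpj1", 6, 2), ("hpj2", 6, 3), ("lpj", 2, 2)]
      print(schedule(jobs, horizon=10, total_nodes=8))
      ```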

    4. High Throughput Computing Impact on Meta Genomics (Metagenomics Informatics Challenges Workshop: 10K Genomes at a Time)

      SciTech Connect (OSTI)

      Gore, Brooklin [Morgridge Institute for Research] [Morgridge Institute for Research

      2011-10-12

      This presentation includes a brief background on High Throughput Computing, correlating gene transcription factors, optical mapping, genotype to phenotype mapping via QTL analysis, and current work on next gen sequencing.

    5. High Throughput Computing Impact on Meta Genomics (Metagenomics Informatics Challenges Workshop: 10K Genomes at a Time)

      ScienceCinema (OSTI)

      Gore, Brooklin [Morgridge Institute for Research

      2013-01-22

      This presentation includes a brief background on High Throughput Computing, correlating gene transcription factors, optical mapping, genotype to phenotype mapping via QTL analysis, and current work on next gen sequencing.

    6. Proposal for grid computing for nuclear applications

      SciTech Connect (OSTI)

      Idris, Faridah Mohamad; Ismail, Saaidi; Haris, Mohd Fauzi B.; Sulaiman, Mohamad Safuan B.; Aslan, Mohd Dzul Aiman Bin.; Samsudin, Nursuliza Bt.; Ibrahim, Maizura Bt.; Ahmad, Megat Harun Al Rashid B. Megat; Yazid, Hafizal B.; Jamro, Rafhayudi B.; Azman, Azraf B.; Rahman, Anwar B. Abdul; Ibrahim, Mohd Rizal B. Mamat; Muhamad, Shalina Bt. Sheik; Hassan, Hasni; Abdullah, Wan Ahmad Tajuddin Wan; Ibrahim, Zainol Abidin; Zolkapli, Zukhaimira; Anuar, Afiq Aizuddin; Norjoharuddeen, Nurfikri; and others

      2014-02-12

      The use of computer clusters for computational sciences, including computational physics, is vital, as it provides the computing power to crunch big numbers at a faster rate. In compute-intensive applications that require high resolution, such as Monte Carlo simulation, the use of computer clusters in a grid form, which supplies computational power to any node within the grid that needs it, has become a necessity. In this paper, we describe how clusters running a specific application can use resources within the grid to speed up the computing process.

    7. Computational mechanics

      SciTech Connect (OSTI)

      Goudreau, G.L.

      1993-03-01

      The Computational Mechanics thrust area sponsors research into the underlying solid, structural and fluid mechanics and heat transfer necessary for the development of state-of-the-art general purpose computational software. The scale of computational capability spans office workstations, departmental computer servers, and Cray-class supercomputers. The DYNA, NIKE, and TOPAZ codes have achieved world fame through our broad collaborators program, in addition to their strong support of on-going Lawrence Livermore National Laboratory (LLNL) programs. Several technology transfer initiatives have been based on these established codes, teaming LLNL analysts and researchers with counterparts in industry, extending code capability to specific industrial interests of casting, metalforming, and automobile crash dynamics. The next-generation solid/structural mechanics code, ParaDyn, is targeted toward massively parallel computers, which will extend performance from gigaflop to teraflop power. Our work for FY-92 is described in the following eight articles: (1) Solution Strategies: New Approaches for Strongly Nonlinear Quasistatic Problems Using DYNA3D; (2) Enhanced Enforcement of Mechanical Contact: The Method of Augmented Lagrangians; (3) ParaDyn: New Generation Solid/Structural Mechanics Codes for Massively Parallel Processors; (4) Composite Damage Modeling; (5) HYDRA: A Parallel/Vector Flow Solver for Three-Dimensional, Transient, Incompressible Viscous Flow; (6) Development and Testing of the TRIM3D Radiation Heat Transfer Code; (7) A Methodology for Calculating the Seismic Response of Critical Structures; and (8) Reinforced Concrete Damage Modeling.

    8. Computational Nuclear Structure | Argonne Leadership Computing Facility

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Excellent scaling is achieved by the production Automatic Dynamic Load Balancing (ADLB) library on the BG/P. Computational Nuclear Structure PI Name: David Dean Hai Nam PI Email: namha@ornl.gov deandj@ornl.gov Institution: Oak Ridge National Laboratory Allocation Program: INCITE Allocation Hours at ALCF: 15 Million Year: 2010 Research Domain: Physics Researchers from Oak Ridge and Argonne national laboratories are using complementary techniques, including Green's Function Monte Carlo, the No

    9. INSTRUMENTATION, INCLUDING NUCLEAR AND PARTICLE DETECTORS; RADIATION

      Office of Scientific and Technical Information (OSTI)

      interval technical basis document Chiaro, P.J. Jr. 44 INSTRUMENTATION, INCLUDING NUCLEAR AND PARTICLE DETECTORS; RADIATION DETECTORS; RADIATION MONITORS; DOSEMETERS;...

    10. Cloud computing security.

      SciTech Connect (OSTI)

      Shin, Dongwan; Claycomb, William R.; Urias, Vincent E.

      2010-10-01

      Cloud computing is a paradigm rapidly being embraced by government and industry as a solution for cost-savings, scalability, and collaboration. While a multitude of applications and services are available commercially for cloud-based solutions, research in this area has yet to fully embrace the full spectrum of potential challenges facing cloud computing. This tutorial aims to provide researchers with a fundamental understanding of cloud computing, with the goals of identifying a broad range of potential research topics, and inspiring a new surge in research to address current issues. We will also discuss real implementations of research-oriented cloud computing systems for both academia and government, including configuration options, hardware issues, challenges, and solutions.

    11. Computing Resources

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Resources This page is the repository for sundry items of information relevant to general computing on BooNE. If you have a question or problem that isn't answered here, or a suggestion for improving this page or the information on it, please mail boone-computing@fnal.gov and we'll do our best to address any issues. Note about this page Some links on this page point to www.everything2.com, and are meant to give an idea about a concept or thing without necessarily wading through a whole website

    12. Computers as tools

      SciTech Connect (OSTI)

      Eriksson, I.V.

      1994-12-31

      The following message was recently posted on a bulletin board and clearly shows the relevance of the conference theme: "The computer and digital networks seem poised to change whole regions of human activity -- how we record knowledge, communicate, learn, work, understand ourselves and the world. What's the best framework for understanding this digitalization, or virtualization, of seemingly everything? ... Clearly, symbolic tools like the alphabet, book, and mechanical clock have changed some of our most fundamental notions -- self, identity, mind, nature, time, space. Can we say what the computer, a purely symbolic "machine," is doing to our thinking in these areas? Or is it too early to say, given how much more powerful and less expensive the technology seems destined to become in the next few decades?" (Verity, 1994) Computers certainly affect our lives and way of thinking, but what have computers to do with ethics? A narrow approach would be that on the one hand people can and do abuse computer systems, and on the other hand people can be abused by them. Well-known examples of the former are computer crimes such as the theft of money, services and information. The latter can be exemplified by violation of privacy, health hazards and computer monitoring. Broadening the concept from computers to information systems (ISs) and information technology (IT) gives a wider perspective. Computers are just the hardware part of information systems, which also include software, people and data. Information technology is the concept preferred today. It extends to communication, which is an essential part of information processing. Now let us repeat the question: What has IT to do with ethics? Verity mentioned changes in "how we record knowledge, communicate, learn, work, understand ourselves and the world".

    13. Computational analysis of kidney scintigrams

      SciTech Connect (OSTI)

      Vrincianu, D.; Puscasu, E.; Creanga, D.; Stefanescu, C.

      2013-11-13

      The scintigraphic investigation of normal and pathological kidneys was carried out using a specialized gamma-camera device in a hospital nuclear medicine department. The technetium-99m isotope, a gamma emitter, coupled with vector molecules targeting kidney tissue, was introduced into the subject's body, and its dynamics were recorded as the data source for kidney clearance capacity. Two representative data series were investigated, corresponding to healthy and pathological organs respectively. The semi-quantitative tests applied for the comparison of the two distinct medical situations were: the shape of the probability distribution histogram, the power spectrum, the auto-correlation function, and the Lyapunov exponent. While the power spectrum led to similar results in both cases, significant differences were revealed by means of the probability distribution, Lyapunov exponent, and correlation time, recommending these numerical tests as possible complementary tools in clinical diagnosis.
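
      Three of the four tests named above take only a few lines each with NumPy; the sketch below runs them on a synthetic uptake/washout curve (the data and the 1/e correlation-time criterion are invented for illustration, and the Lyapunov exponent, which needs more machinery, is omitted).

      ```python
      # Probability histogram, power spectrum, and autocorrelation time for
      # a renogram-like time series.
      import numpy as np

      rng = np.random.default_rng(0)
      t = np.arange(600.0)
      series = t * np.exp(-t / 120.0) + rng.normal(0.0, 2.0, t.size)

      hist, edges = np.histogram(series, bins=30, density=True)   # distribution shape
      power = np.abs(np.fft.rfft(series - series.mean())) ** 2    # power spectrum

      def autocorr(x):
          x = x - x.mean()
          c = np.correlate(x, x, mode="full")[x.size - 1:]
          return c / c[0]

      ac = autocorr(series)
      corr_time = int(np.argmax(ac < 1 / np.e))  # first lag below 1/e
      print("correlation time:", corr_time, "samples")
      ```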

    14. Computational trigonometry

      SciTech Connect (OSTI)

      Gustafson, K.

      1994-12-31

      By means of the author's earlier theory of antieigenvalues and antieigenvectors, a new computational approach to iterative methods is presented. This enables an explicit trigonometric understanding of iterative convergence and provides new insights into the sharpness of error bounds. Direct applications to Gradient descent, Conjugate gradient, GCR(k), Orthomin, CGN, GMRES, CGS, and other matrix iterative schemes will be given.
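
      For readers unfamiliar with the machinery, the standard definitions from Gustafson's antieigenvalue theory (stated here from general background, not from the abstract itself) are:

      ```latex
      % First antieigenvalue: the cosine of the largest angle A can turn a vector.
      \[
        \mu_1(A) \;=\; \min_{x \neq 0} \frac{\langle Ax, x\rangle}{\|Ax\|\,\|x\|}
        \;=\; \cos\phi(A).
      \]
      % For symmetric positive definite A with extreme eigenvalues
      % \lambda_{\min}, \lambda_{\max}:
      \[
        \cos\phi(A) = \frac{2\sqrt{\lambda_{\min}\lambda_{\max}}}{\lambda_{\min}+\lambda_{\max}},
        \qquad
        \sin\phi(A) = \frac{\lambda_{\max}-\lambda_{\min}}{\lambda_{\max}+\lambda_{\min}},
      \]
      % and the classical steepest-descent bound E(x_{k+1}) <= sin^2(phi(A)) E(x_k)
      % ties this "operator angle" directly to iterative convergence rates.
      ```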

    15. Advanced Scientific Computing Research

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Advanced Scientific Computing Research Advanced Scientific Computing Research Discovering, ... The DOE Office of Science's Advanced Scientific Computing Research (ASCR) program ...

    16. Theory, Simulation, and Computation

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Computer, Computational, and Statistical Sciences (CCS) Division is an international ... and statistics The deployment and integration of computational technology, ...

    17. Gas storage materials, including hydrogen storage materials

      DOE Patents [OSTI]

      Mohtadi, Rana F; Wicks, George G; Heung, Leung K; Nakamura, Kenji

      2013-02-19

      A material for the storage and release of gases comprises a plurality of hollow elements, each hollow element comprising a porous wall enclosing an interior cavity, the interior cavity including structures of a solid-state storage material. In particular examples, the storage material is a hydrogen storage material such as a solid state hydride. An improved method for forming such materials includes the solution diffusion of a storage material solution through a porous wall of a hollow element into an interior cavity.

    18. Communications circuit including a linear quadratic estimator

      DOE Patents [OSTI]

      Ferguson, Dennis D.

      2015-07-07

      A circuit includes a linear quadratic estimator (LQE) configured to receive a plurality of measurements a signal. The LQE is configured to weight the measurements based on their respective uncertainties to produce weighted averages. The circuit further includes a controller coupled to the LQE and configured to selectively adjust at least one data link parameter associated with a communication channel in response to receiving the weighted averages.
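
      The measurement weighting described in the claim is, in its static special case, inverse-variance weighting; a minimal sketch follows (function and data invented for illustration).

      ```python
      # Fuse noisy measurements of one quantity, weighting by 1/variance;
      # this is the steady-state behavior of a linear quadratic estimator.
      def fuse(measurements):
          """measurements: list of (value, variance) -> (estimate, variance)."""
          weights = [1.0 / var for _, var in measurements]
          estimate = sum(w * v for (v, _), w in zip(measurements, weights))
          estimate /= sum(weights)
          return estimate, 1.0 / sum(weights)

      # The less uncertain reading dominates the weighted average.
      est, var = fuse([(10.2, 0.5), (9.6, 2.0)])
      print(f"estimate {est:.2f}, variance {var:.2f}")  # estimate 10.08, variance 0.40
      ```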

    19. Intentionally Including - Engaging Minorities in Physics Careers |

      Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

      Department of Energy Intentionally Including - Engaging Minorities in Physics Careers Intentionally Including - Engaging Minorities in Physics Careers April 24, 2013 - 4:37pm Addthis Joining Director Dot Harris (second from left) were Marlene Kaplan, the Deputy Director of Education and director of EPP, National Oceanic and Atmospheric Administration, Claudia Rankins, a Program Officer with the National Science Foundation and Jim Stith, the past Vice-President of the American Institute of

    20. Gas storage materials, including hydrogen storage materials

      DOE Patents [OSTI]

      Mohtadi, Rana F; Wicks, George G; Heung, Leung K; Nakamura, Kenji

      2014-11-25

      A material for the storage and release of gases comprises a plurality of hollow elements, each hollow element comprising a porous wall enclosing an interior cavity, the interior cavity including structures of a solid-state storage material. In particular examples, the storage material is a hydrogen storage material, such as a solid state hydride. An improved method for forming such materials includes the solution diffusion of a storage material solution through a porous wall of a hollow element into an interior cavity.

    1. Internode data communications in a parallel computer

      DOE Patents [OSTI]

      Archer, Charles J.; Blocksome, Michael A.; Miller, Douglas R.; Parker, Jeffrey J.; Ratterman, Joseph D.; Smith, Brian E.

      2013-09-03

      Internode data communications in a parallel computer that includes compute nodes that each include main memory and a messaging unit, the messaging unit including computer memory and coupling compute nodes for data communications, in which, for each compute node at compute node boot time: a messaging unit allocates, in the messaging unit's computer memory, a predefined number of message buffers, each message buffer associated with a process to be initialized on the compute node; receives, prior to initialization of a particular process on the compute node, a data communications message intended for the particular process; and stores the data communications message in the message buffer associated with the particular process. Upon initialization of the particular process, the process establishes a messaging buffer in main memory of the compute node and copies the data communications message from the message buffer of the messaging unit into the message buffer of main memory.
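
      The buffering scheme reduces to a small amount of bookkeeping; here is a toy Python sketch (invented class and method names; the patent realizes this inside the messaging unit, not in application code).

      ```python
      # One pre-allocated buffer per expected process, so messages arriving
      # before process initialization are held rather than lost.
      class MessagingUnit:
          def __init__(self, process_ids):
              self.buffers = {pid: [] for pid in process_ids}  # per-process buffers

          def receive(self, pid, message):
              self.buffers[pid].append(message)  # store until the process starts

          def drain(self, pid):
              """On process init: hand over buffered messages, empty the buffer."""
              messages, self.buffers[pid] = self.buffers[pid], []
              return messages

      mu = MessagingUnit(process_ids=[0, 1])
      mu.receive(1, b"early data")        # arrives before process 1 exists
      main_memory_inbox = mu.drain(1)     # process 1 initializes and copies it
      print(main_memory_inbox)            # [b'early data']
      ```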

    2. Internode data communications in a parallel computer

      DOE Patents [OSTI]

      Archer, Charles J; Blocksome, Michael A; Miller, Douglas R; Parker, Jeffrey J; Ratterman, Joseph D; Smith, Brian E

      2014-02-11

      Internode data communications in a parallel computer that includes compute nodes that each include main memory and a messaging unit, the messaging unit including computer memory and coupling compute nodes for data communications, in which, for each compute node at compute node boot time: a messaging unit allocates, in the messaging unit's computer memory, a predefined number of message buffers, each message buffer associated with a process to be initialized on the compute node; receives, prior to initialization of a particular process on the compute node, a data communications message intended for the particular process; and stores the data communications message in the message buffer associated with the particular process. Upon initialization of the particular process, the process establishes a messaging buffer in main memory of the compute node and copies the data communications message from the message buffer of the messaging unit into the message buffer of main memory.

    3. Computing at JLab

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      JLab --- Accelerator Controls CAD CDEV CODA Computer Center High Performance Computing Scientific Computing JLab Computer Silo maintained by webmaster@jlab.org...

    4. User manual for AQUASTOR: a computer model for cost analysis of aquifer thermal energy storage coupled with district heating or cooling systems. Volume I. Main text

      SciTech Connect (OSTI)

      Huber, H.D.; Brown, D.R.; Reilly, R.W.

      1982-04-01

      A computer model called AQUASTOR was developed for calculating the cost of district heating (cooling) using thermal energy supplied by an aquifer thermal energy storage (ATES) system. The AQUASTOR model can simulate ATES district heating systems using stored hot water or ATES district cooling systems using stored chilled water. AQUASTOR simulates the complete ATES district heating (cooling) system, which consists of two principal parts: the ATES supply system and the district heating (cooling) distribution system. The supply system submodel calculates the life-cycle cost of thermal energy supplied to the distribution system by simulating the technical design and cash flows for the exploration, development, and operation of the ATES supply system. The distribution system submodel calculates the life-cycle cost of heat (chill) delivered by the distribution system to the end-users by simulating the technical design and cash flows for the construction and operation of the distribution system. The model combines the technical characteristics of the supply system and the technical characteristics of the distribution system with financial and tax conditions for the entities operating the two systems into one techno-economic model. This provides the flexibility to individually or collectively evaluate the impact of different economic and technical parameters, assumptions, and uncertainties on the cost of providing district heating (cooling) with an ATES system. This volume contains the main text, including the introduction, program description, input data instructions, a description of the output, and Appendix H, which contains the indices for supply input parameters, distribution input parameters, and AQUASTOR subroutines.
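
      At its core this kind of techno-economic model discounts each year's cash flows and levelizes them over delivered energy; the toy sketch below shows that skeleton only (all numbers and the single-cost structure are invented, and AQUASTOR's actual submodels track far more detail).

      ```python
      # Levelized cost of delivered thermal energy: present-value all costs,
      # divide by present-valued energy delivered over the system life.
      def levelized_cost(capital, annual_om, annual_energy_gj, years, rate):
          pv = lambda cash, yr: cash / (1.0 + rate) ** yr
          pv_costs = capital + sum(pv(annual_om, y) for y in range(1, years + 1))
          pv_energy = sum(pv(annual_energy_gj, y) for y in range(1, years + 1))
          return pv_costs / pv_energy  # $/GJ delivered

      cost = levelized_cost(capital=2.5e6, annual_om=1.2e5,
                            annual_energy_gj=4.0e4, years=20, rate=0.07)
      print(f"{cost:.2f} $/GJ")
      ```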

    5. RATIO COMPUTER

      DOE Patents [OSTI]

      Post, R.F.

      1958-11-11

      An electronic computer circuit is described for producing an output voltage proportional to the product or quotient of the voltages of a pair of input signals. In essence, the disclosed invention provides a computer having two channels adapted to receive separate input signals, each having amplifiers with like fixed amplification factors and like negative feedback amplifiers. One of the channels receives a constant signal for comparison purposes, whereby a difference signal is produced to control the amplification factors of the variable feedback amplifiers. The output of the other channel is thereby proportional to the product or quotient of the input signals, depending upon the relation of input to fixed signals in the first-mentioned channel.

    6. Scramjet including integrated inlet and combustor

      SciTech Connect (OSTI)

      Kutschenreuter, P.H. Jr.; Blanton, J.C.

      1992-02-04

      This patent describes a scramjet engine. It comprises: a first surface including an aft facing step; a cowl including: a leading edge and a trailing edge; an upper surface and a lower surface extending between the leading edge and the trailing edge; the cowl upper surface being spaced from and generally parallel to the first surface to define an integrated inlet-combustor therebetween having an inlet for receiving and channeling into the inlet-combustor supersonic inlet airflow; means for injecting fuel into the inlet-combustor at the step for mixing with the supersonic inlet airflow for generating supersonic combustion gases; and further including a spaced pair of sidewalls extending from the first surface to the cowl upper surface, wherein the integrated inlet-combustor is generally rectangular and defined by the sidewall pair, the first surface and the cowl upper surface.

    7. Computer System,

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      System, Cluster, and Networking Summer Institute New Mexico Consortium and Los Alamos National Laboratory HOW TO APPLY Applications will be accepted JANUARY 5 - FEBRUARY 13, 2016 Computing and Information Technology undergraduate students are encouraged to apply. Must be a U.S. citizen. * Submit a current resume; * Official University Transcript (with spring courses posted and/or a copy of spring 2016 schedule) 3.0 GPA minimum; * One Letter of Recommendation from a Faculty Member; and * Letter of

    8. Computing Events

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Events Computing Events Spotlighting the most advanced scientific and technical applications in the world! Featuring exhibits of the latest and greatest technologies from industry, academia and government research organizations; many of these technologies will be seen for the first time in Denver. Supercomputing Conference 13 Denver, Colorado November 17-22, 2013 Spotlighting the most advanced scientific and technical applications in the world, SC13 will bring together the international

    9. Electric Power Monthly, August 1990. [Glossary included

      SciTech Connect (OSTI)

      Not Available

      1990-11-29

      The Electric Power Monthly (EPM) presents monthly summaries of electric utility statistics at the national, Census division, and State level. The purpose of this publication is to provide energy decisionmakers with accurate and timely information that may be used in forming various perspectives on electric issues that lie ahead. Data include generation by energy source (coal, oil, gas, hydroelectric, and nuclear); generation by region; consumption of fossil fuels for power generation; sales of electric power; cost data; and unusual occurrences. A glossary is included.

    10. Text analysis methods, text analysis apparatuses, and articles of manufacture

      DOE Patents [OSTI]

      Whitney, Paul D; Willse, Alan R; Lopresti, Charles A; White, Amanda M

      2014-10-28

      Text analysis methods, text analysis apparatuses, and articles of manufacture are described according to some aspects. In one aspect, a text analysis method includes accessing information indicative of data content of a collection of text comprising a plurality of different topics, using a computing device, analyzing the information indicative of the data content, and using results of the analysis, identifying a presence of a new topic in the collection of text.
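
      As a rough illustration of the idea in this abstract, the sketch below flags a document as evidence of a new topic when its vocabulary overlaps too little with every topic seen so far. The Jaccard similarity measure and the 0.1 threshold are illustrative assumptions, not the patented analysis.

      def jaccard(a, b):
          return len(a & b) / len(a | b) if a | b else 0.0

      def find_new_topic(documents, threshold=0.1):
          topics = []                              # each known topic: a set of words
          for doc in documents:
              words = set(doc.lower().split())
              if topics and max(jaccard(words, t) for t in topics) < threshold:
                  return doc                       # unlike every known topic
              if topics:                           # merge into the closest topic
                  best = max(topics, key=lambda t: jaccard(words, t))
                  best |= words
              else:                                # first document seeds a topic
                  topics.append(words)
          return None

      docs = ["solar panel efficiency study", "panel efficiency improvements",
              "deep sea coral habitats"]
      print(find_new_topic(docs))                  # -> "deep sea coral habitats"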

    11. Argonne's Laboratory Computing Resource Center : 2005 annual report.

      SciTech Connect (OSTI)

      Bair, R. B.; Coghlan, S. C; Kaushik, D. K.; Riley, K. R.; Valdes, J. V.; Pieper, G. P.

      2007-06-30

      Argonne National Laboratory founded the Laboratory Computing Resource Center in the spring of 2002 to help meet pressing program needs for computational modeling, simulation, and analysis. The guiding mission is to provide critical computing resources that accelerate the development of high-performance computing expertise, applications, and computations to meet the Laboratory's challenging science and engineering missions. The first goal of the LCRC was to deploy a mid-range supercomputing facility to support the unmet computational needs of the Laboratory. To this end, in September 2002, the Laboratory purchased a 350-node computing cluster from Linux NetworX. This cluster, named 'Jazz', achieved over a teraflop of computing power (10^12 floating-point calculations per second) on standard tests, making it the Laboratory's first terascale computing system and one of the fifty fastest computers in the world at the time. Jazz was made available to early users in November 2002 while the system was undergoing development and configuration. In April 2003, Jazz was officially made available for production operation. Since then, the Jazz user community has grown steadily. By the end of fiscal year 2005, there were 62 active projects on Jazz involving over 320 scientists and engineers. These projects represent a wide cross-section of Laboratory expertise, including work in biosciences, chemistry, climate, computer science, engineering applications, environmental science, geoscience, information science, materials science, mathematics, nanoscience, nuclear engineering, and physics. Most important, many projects have achieved results that would have been unobtainable without such a computing resource. The LCRC continues to improve the computational science and engineering capability and quality at the Laboratory. Specific goals include expansion of the use of Jazz to new disciplines and Laboratory initiatives, teaming with Laboratory infrastructure providers to develop

    12. Extreme Scale Computing, Co-design

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Computational co-design may facilitate revolutionary designs in the next generation of supercomputers. Computational co-design involves developing the interacting components of a

    13. NERSC Enhances PDSF, Genepool Computing Capabilities

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      NERSC Enhances PDSF, Genepool Computing Capabilities Linux cluster expansion speeds data access and analysis January 3, 2014 Christmas came early for users of the Parallel Distributed Systems Facility (PDSF) and Genepool systems at the Department of Energy's National Energy Research Scientific Computing Center (NERSC). Throughout November, members of NERSC's Computational Systems Group were busy expanding the Linux computing resources that support PDSF's

    14. Subterranean barriers including at least one weld

      DOE Patents [OSTI]

      Nickelson, Reva A.; Sloan, Paul A.; Richardson, John G.; Walsh, Stephanie; Kostelnik, Kevin M.

      2007-01-09

      A subterranean barrier and method for forming same are disclosed, the barrier including a plurality of casing strings wherein at least one casing string of the plurality of casing strings may be affixed to at least another adjacent casing string of the plurality of casing strings through at least one weld, at least one adhesive joint, or both. A method and system for nondestructively inspecting a subterranean barrier is disclosed. For instance, a radiographic signal may be emitted from within a casing string toward an adjacent casing string and the radiographic signal may be detected from within the adjacent casing string. A method of repairing a barrier including removing at least a portion of a casing string and welding a repair element within the casing string is disclosed. A method of selectively heating at least one casing string forming at least a portion of a subterranean barrier is disclosed.

    15. Photoactive devices including porphyrinoids with coordinating additives

      DOE Patents [OSTI]

      Forrest, Stephen R; Zimmerman, Jeramy; Yu, Eric K; Thompson, Mark E; Trinh, Cong; Whited, Matthew; Diev, Vlacheslav

      2015-05-12

      Coordinating additives are included in porphyrinoid-based materials to promote intermolecular organization and improve one or more photoelectric characteristics of the materials. The coordinating additives are selected from fullerene compounds and organic compounds having free electron pairs. Combinations of different coordinating additives can be used to tailor the characteristic properties of such porphyrinoid-based materials, including porphyrin oligomers. Bidentate ligands are one type of coordinating additive that can form coordination bonds with a central metal ion of two different porphyrinoid compounds to promote porphyrinoid alignment and/or pi-stacking. The coordinating additives can shift the absorption spectrum of a photoactive material toward longer wavelengths, increase the external quantum efficiency of the material, or both.

    16. Electric power monthly, September 1990. [Glossary included]

      SciTech Connect (OSTI)

      Not Available

      1990-12-17

      The purpose of this report is to provide energy decision makers with accurate and timely information that may be used in forming various perspectives on electric issues. The power plants considered include coal, petroleum, natural gas, hydroelectric, and nuclear power plants. Data are presented for power generation, fuel consumption, fuel receipts and cost, sales of electricity, and unusual occurrences at power plants. Data are compared at the national, Census division, and state levels. 4 figs., 52 tabs. (CK)

    17. Nuclear reactor shield including magnesium oxide

      DOE Patents [OSTI]

      Rouse, Carl A.; Simnad, Massoud T.

      1981-01-01

      An improvement in nuclear reactor shielding of a type used in reactor applications involving significant amounts of fast neutron flux, the reactor shielding including means providing structural support, neutron moderator material, neutron absorber material and other components as described below, wherein at least a portion of the neutron moderator material is magnesium in the form of magnesium oxide either alone or in combination with other moderator materials such as graphite and iron.

    18. Rotor assembly including superconducting magnetic coil

      DOE Patents [OSTI]

      Snitchler, Gregory L.; Gamble, Bruce B.; Voccio, John P.

      2003-01-01

      Superconducting coils and methods of manufacture include a superconductor tape wound concentrically about and disposed along an axis of the coil to define an opening having a dimension which gradually decreases, in the direction along the axis, from a first end to a second end of the coil. Each turn of the superconductor tape has a broad surface maintained substantially parallel to the axis of the coil.

    19. Power generation method including membrane separation

      DOE Patents [OSTI]

      Lokhandwala, Kaaeid A.

      2000-01-01

      A method for generating electric power, such as at, or close to, natural gas fields. The method includes conditioning natural gas containing C.sub.3+ hydrocarbons and/or acid gas by means of a membrane separation step. This step creates a leaner, sweeter, drier gas, which is then used as combustion fuel to run a turbine, which is in turn used for power generation.

    20. Geant4 Computing Performance Benchmarking and Monitoring

      DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

      Dotti, Andrea; Elvira, V. Daniel; Folger, Gunter; Genser, Krzysztof; Jun, Soon Yung; Kowalkowski, James B.; Paterno, Marc

      2015-12-23

      Performance evaluation and analysis of large scale computing applications is essential for optimal use of resources. As detector simulation is one of the most compute intensive tasks and Geant4 is the simulation toolkit most widely used in contemporary high energy physics (HEP) experiments, it is important to monitor Geant4 through its development cycle for changes in computing performance and to identify problems and opportunities for code improvements. All Geant4 development and public releases are being profiled with a set of applications that utilize different input event samples, physics parameters, and detector configurations. Results from multiple benchmarking runs are compared to previous public and development reference releases to monitor CPU and memory usage. Observed changes are evaluated and correlated with code modifications. Besides the full summary of call stack and memory footprint, a detailed call graph analysis is available to Geant4 developers for further analysis. The set of software tools used in the performance evaluation procedure, both in sequential and multi-threaded modes, include FAST, IgProf and Open|Speedshop. In conclusion, the scalability of the CPU time and memory performance in multi-threaded application is evaluated by measuring event throughput and memory gain as a function of the number of threads for selected event samples.

    2. Broadcasting a message in a parallel computer

      DOE Patents [OSTI]

      Berg, Jeremy E.; Faraj, Ahmad A.

      2011-08-02

      Methods, systems, and products are disclosed for broadcasting a message in a parallel computer. The parallel computer includes a plurality of compute nodes connected together using a data communications network. The data communications network is optimized for point-to-point data communications and is characterized by at least two dimensions. The compute nodes are organized into at least one operational group of compute nodes for collective parallel operations of the parallel computer. One compute node of the operational group is assigned to be a logical root. Broadcasting a message in a parallel computer includes: establishing a Hamiltonian path along all of the compute nodes in at least one plane of the data communications network and in the operational group; and broadcasting, by the logical root to the remaining compute nodes, the logical root's message along the established Hamiltonian path.
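
      As a rough sketch of the Hamiltonian-path idea, the code below builds a serpentine path through a small two-dimensional mesh and forwards the logical root's message hop by hop along it. The serpentine ordering and the (row, column) addressing are illustrative assumptions, not the path construction claimed in the patent.

      def hamiltonian_path(rows, cols):
          """Serpentine ordering that visits every node of a rows x cols mesh once."""
          path = []
          for r in range(rows):
              line = [(r, c) for c in range(cols)]
              if r % 2 == 1:
                  line.reverse()                   # alternate direction row by row
              path.extend(line)
          return path

      def broadcast(message, rows, cols):
          """The logical root (first node on the path) forwards hop by hop."""
          path = hamiltonian_path(rows, cols)
          received = {path[0]: message}
          for sender, receiver in zip(path, path[1:]):
              received[receiver] = received[sender]    # one send per mesh link
          return received

      p = hamiltonian_path(4, 4)
      # consecutive nodes are always one mesh hop apart, so each forward is a
      # single point-to-point transfer on an existing link
      assert all(abs(a[0] - b[0]) + abs(a[1] - b[1]) == 1 for a, b in zip(p, p[1:]))
      print(broadcast("hello", 4, 4))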

    3. Link failure detection in a parallel computer

      DOE Patents [OSTI]

      Archer, Charles J.; Blocksome, Michael A.; Megerian, Mark G.; Smith, Brian E.

      2010-11-09

      Methods, apparatus, and products are disclosed for link failure detection in a parallel computer including compute nodes connected in a rectangular mesh network, each pair of adjacent compute nodes in the rectangular mesh network connected together using a pair of links, that includes: assigning each compute node to either a first group or a second group such that adjacent compute nodes in the rectangular mesh network are assigned to different groups; sending, by each of the compute nodes assigned to the first group, a first test message to each adjacent compute node assigned to the second group; determining, by each of the compute nodes assigned to the second group, whether the first test message was received from each adjacent compute node assigned to the first group; and notifying a user, by each of the compute nodes assigned to the second group, whether the first test message was received.
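
      A minimal sketch of the checkerboard test pattern described here, assuming a rows x cols mesh in which a failed link silently drops a test message; the failed_links argument is a hypothetical stand-in used to simulate faults rather than observe real traffic.

      def neighbors(node, rows, cols):
          r, c = node
          for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
              if 0 <= r + dr < rows and 0 <= c + dc < cols:
                  yield (r + dr, c + dc)

      def detect_failed_links(rows, cols, failed_links):
          group = lambda n: (n[0] + n[1]) % 2      # adjacent nodes get different groups
          suspects = []
          for r in range(rows):
              for c in range(cols):
                  node = (r, c)
                  if group(node) != 0:             # only first-group nodes send
                      continue
                  for nbr in neighbors(node, rows, cols):   # receivers: second group
                      if frozenset((node, nbr)) in failed_links:
                          suspects.append((node, nbr))      # message never arrived
          return suspects

      print(detect_failed_links(3, 3, {frozenset(((0, 0), (0, 1)))}))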

    4. Performing an allreduce operation on a plurality of compute nodes of a parallel computer

      DOE Patents [OSTI]

      Faraj, Ahmad

      2012-04-17

      Methods, apparatus, and products are disclosed for performing an allreduce operation on a plurality of compute nodes of a parallel computer. Each compute node includes at least two processing cores. Each processing core has contribution data for the allreduce operation. Performing an allreduce operation on a plurality of compute nodes of a parallel computer includes: establishing one or more logical rings among the compute nodes, each logical ring including at least one processing core from each compute node; performing, for each logical ring, a global allreduce operation using the contribution data for the processing cores included in that logical ring, yielding a global allreduce result for each processing core included in that logical ring; and performing, for each compute node, a local allreduce operation using the global allreduce results for each processing core on that compute node.
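
      The sketch below illustrates the global phase for a single logical ring, with one scalar contribution per ring member: partial sums circulate around the ring until every member holds the total. This shift-based simulation is a generic ring allreduce, not necessarily the patented schedule, which runs one such ring per processing core and follows it with a local allreduce within each compute node.

      def ring_allreduce(values):
          """After n-1 ring steps, every member holds the sum of all values."""
          n = len(values)
          acc = list(values)              # acc[i]: member i's running partial sum
          inflight = list(values)         # the message each member sends rightward
          for _ in range(n - 1):
              inflight = [inflight[(i - 1) % n] for i in range(n)]  # shift right
              acc = [a + m for a, m in zip(acc, inflight)]
          return acc

      print(ring_allreduce([1, 2, 3]))    # -> [6, 6, 6]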

    5. computer graphics

      Energy Science and Technology Software Center (OSTI)

      2001-06-08

      MUSTAFA is a scientific visualization package for visualizing data in the EXODUSII file format. These data files are typically produced by Sandia's suite of finite element engineering analysis codes.

    6. User manual for AQUASTOR: a computer model for cost analysis of aquifer thermal-energy storage coupled with district-heating or cooling systems. Volume II. Appendices

      SciTech Connect (OSTI)

      Huber, H.D.; Brown, D.R.; Reilly, R.W.

      1982-04-01

      A computer model called AQUASTOR was developed for calculating the cost of district heating (cooling) using thermal energy supplied by an aquifer thermal energy storage (ATES) system. The AQUASTOR model can simulate ATES district heating systems using stored hot water or ATES district cooling systems using stored chilled water. AQUASTOR simulates the complete ATES district heating (cooling) system, which consists of two principal parts: the ATES supply system and the district heating (cooling) distribution system. The supply system submodel calculates the life-cycle cost of thermal energy supplied to the distribution system by simulating the technical design and cash flows for the exploration, development, and operation of the ATES supply system. The distribution system submodel calculates the life-cycle cost of heat (chill) delivered by the distribution system to the end-users by simulating the technical design and cash flows for the construction and operation of the distribution system. The model combines the technical characteristics of the supply system and the technical characteristics of the distribution system with financial and tax conditions for the entities operating the two systems into one techno-economic model. This provides the flexibility to individually or collectively evaluate the impact of different economic and technical parameters, assumptions, and uncertainties on the cost of providing district heating (cooling) with an ATES system. This volume contains all the appendices, including supply and distribution system cost equations and models, descriptions of predefined residential districts, key equations for the cooling degree-hour methodology, a listing of the sample case output, and Appendix H, which contains the indices for supply input parameters, distribution input parameters, and AQUASTOR subroutines.

    7. Controlling data transfers from an origin compute node to a target compute node

      DOE Patents [OSTI]

      Archer, Charles J.; Blocksome, Michael A.; Ratterman, Joseph D.; Smith, Brian E.

      2011-06-21

      Methods, apparatus, and products are disclosed for controlling data transfers from an origin compute node to a target compute node that include: receiving, by an application messaging module on the target compute node, an indication of a data transfer from an origin compute node to the target compute node; and administering, by the application messaging module on the target compute node, the data transfer using one or more messaging primitives of a system messaging module in dependence upon the indication.

    8. Determination of pH Including Hemoglobin Correction

      DOE Patents [OSTI]

      Maynard, John D.; Hendee, Shonn P.; Rohrscheib, Mark R.; Nunez, David; Alam, M. Kathleen; Franke, James E.; Kemeny, Gabor J.

      2005-09-13

      Methods and apparatuses of determining the pH of a sample. A method can comprise determining an infrared spectrum of the sample, and determining the hemoglobin concentration of the sample. The hemoglobin concentration and the infrared spectrum can then be used to determine the pH of the sample. In some embodiments, the hemoglobin concentration can be used to select a model relating infrared spectra to pH that is applicable at the determined hemoglobin concentration. In other embodiments, a model relating hemoglobin concentration and infrared spectra to pH can be used. An apparatus according to the present invention can comprise an illumination system, adapted to supply radiation to a sample; a collection system, adapted to collect radiation expressed from the sample responsive to the incident radiation; and an analysis system, adapted to relate information about the incident radiation, the expressed radiation, and the hemoglobin concentration of the sample to pH.
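
      A minimal sketch of the model-selection embodiment: the measured hemoglobin concentration picks which regression model maps the infrared spectrum to pH. The bin edges and coefficient vectors below are made-up placeholders, not calibrated values from the patent.

      import numpy as np

      MODELS = {                      # hemoglobin range (g/dL) -> (weights, bias)
          (0.0, 10.0):  (np.array([0.8, -0.1, 0.05]), 7.1),
          (10.0, 20.0): (np.array([0.6, 0.2, -0.02]), 7.0),
      }

      def predict_ph(spectrum, hemoglobin):
          for (lo, hi), (w, b) in MODELS.items():
              if lo <= hemoglobin < hi:
                  return float(w @ spectrum) + b   # linear model for this bin
          raise ValueError("no model calibrated for this hemoglobin level")

      print(predict_ph(np.array([0.2, 0.5, 1.0]), 12.3))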

    9. computational-hydraulics

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      and Aerodynamics using STAR-CCM+ for CFD Analysis March 21-22, 2012 Argonne, Illinois Dr. Steven Lottes A training course in the use of computational hydraulics and aerodynamics CFD software using CD-adapco's STAR-CCM+ for analysis will be held at TRACC from March 21-22, 2012. The course assumes a basic knowledge of fluid mechanics and will make extensive use of hands-on tutorials. CD-adapco will issue

    10. How Do You Reduce Energy Use from Computers and Electronics?...

      Broader source: Energy.gov (indexed) [DOE]

      discussed some ways to reduce the energy used by computers and electronics. Some tips include ensuring your computer is configured for optimal energy savings, turning off devices...

    11. Optical panel system including stackable waveguides

      DOE Patents [OSTI]

      DeSanto, Leonard; Veligdan, James T.

      2007-03-06

      An optical panel system including stackable waveguides is provided. The optical panel system displays a projected light image and comprises a plurality of planar optical waveguides in a stacked state. The optical panel system further comprises a support system that aligns and supports the waveguides in the stacked state. In one embodiment, the support system comprises at least one rod, wherein each waveguide contains at least one hole, and wherein each rod is positioned through a corresponding hole in each waveguide. In another embodiment, the support system comprises at least two opposing edge structures having the waveguides positioned therebetween, wherein each opposing edge structure contains a mating surface, wherein opposite edges of each waveguide contain mating surfaces which are complementary to the mating surfaces of the opposing edge structures, and wherein each mating surface of the opposing edge structures engages a corresponding complementary mating surface of the opposite edges of each waveguide.

    12. Optical panel system including stackable waveguides

      DOE Patents [OSTI]

      DeSanto, Leonard; Veligdan, James T.

      2007-11-20

      An optical panel system including stackable waveguides is provided. The optical panel system displays a projected light image and comprises a plurality of planar optical waveguides in a stacked state. The optical panel system further comprises a support system that aligns and supports the waveguides in the stacked state. In one embodiment, the support system comprises at least one rod, wherein each waveguide contains at least one hole, and wherein each rod is positioned through a corresponding hole in each waveguide. In another embodiment, the support system comprises at least two opposing edge structures having the waveguides positioned therebetween, wherein each opposing edge structure contains a mating surface, wherein opposite edges of each waveguide contain mating surfaces which are complementary to the mating surfaces of the opposing edge structures, and wherein each mating surface of the opposing edge structures engages a corresponding complementary mating surface of the opposite edges of each waveguide.

    13. Drapery assembly including insulated drapery liner

      DOE Patents [OSTI]

      Cukierski, Gwendolyn (Ithaca, NY)

      1983-01-01

      A drapery assembly is disclosed for covering a framed wall opening, the assembly including drapery panels hung on a horizontal traverse rod, the rod having a pair of master slides and means for displacing the master slides between open and closed positions. A pair of insulating liner panels are positioned behind the drapery, the remote side edges of the liner panels being connected with the side portions of the opening frame, and the adjacent side edges of the liner panels being connected with a pair of vertically arranged center support members adapted for sliding movement longitudinally of a horizontal track member secured to the upper horizontal portion of the opening frame. Pivotally arranged brackets connect the center support members with the master slides of the traverse rod whereby movement of the master slides to effect opening and closing of the drapery panels effects simultaneous opening and closing of the liner panels.

    14. Thermovoltaic semiconductor device including a plasma filter

      DOE Patents [OSTI]

      Baldasaro, Paul F.

      1999-01-01

      A thermovoltaic energy conversion device and related method for converting thermal energy into an electrical potential. An interference filter is provided on a semiconductor thermovoltaic cell to pre-filter black body radiation. The semiconductor thermovoltaic cell includes a P/N junction supported on a substrate which converts incident thermal energy below the semiconductor junction band gap into electrical potential. The semiconductor substrate is doped to provide a plasma filter which reflects back energy having a wavelength which is above the band gap and which is ineffectively filtered by the interference filter, through the P/N junction to the source of radiation thereby avoiding parasitic absorption of the unusable portion of the thermal radiation energy.

    15. Method and system for knowledge discovery using non-linear statistical analysis and a 1st and 2nd tier computer program

      DOE Patents [OSTI]

      Hively, Lee M.

      2011-07-12

      The invention relates to a method and apparatus for simultaneously processing different sources of test data into informational data and then processing different categories of informational data into knowledge-based data. The knowledge-based data can then be communicated between nodes in a system of multiple computers according to rules for a type of complex, hierarchical computer system modeled on a human brain.

    16. Smart Grid Computational Tool | Open Energy Information

      Open Energy Info (EERE)

      project benefits. The Smart Grid Computational Tool employs the benefit analysis methodology that DOE uses to evaluate the Recovery Act smart grid projects. How it works: The...

    17. Engine lubrication circuit including two pumps

      DOE Patents [OSTI]

      Lane, William H.

      2006-10-03

      A lubrication pump coupled to the engine is sized such that it can supply the engine with a predetermined flow volume as soon as the engine reaches a peak torque engine speed. In engines that operate predominantly at speeds above the peak torque engine speed, the lubrication pump often produces lubrication fluid in excess of the predetermined flow volume, which is bypassed back to a lubrication fluid source. This arguably results in wasted power. In order to more efficiently lubricate an engine, a lubrication circuit includes a lubrication pump and a variable delivery pump. The lubrication pump is operably coupled to the engine, and the variable delivery pump is in communication with a pump output controller that is operable to vary a lubrication fluid output from the variable delivery pump as a function of at least one of engine speed and lubrication flow volume or system pressure. Thus, the lubrication pump can be sized to produce the predetermined flow volume at a speed range at which the engine predominantly operates, while the variable delivery pump can supplement lubrication fluid delivery from the lubrication pump at engine speeds below the predominant engine speed range.

    18. Articles including thin film monolayers and multilayers

      DOE Patents [OSTI]

      Li, DeQuan; Swanson, Basil I.

      1995-01-01

      Articles of manufacture including: (a) a base substrate having an oxide surface layer, and a multidentate ligand, capable of binding a metal ion, attached to the oxide surface layer of the base substrate, (b) a base substrate having an oxide surface layer, a multidentate ligand, capable of binding a metal ion, attached to the oxide surface layer of the base substrate, and a metal species attached to the multidentate ligand, (c) a base substrate having an oxide surface layer, a multidentate ligand, capable of binding a metal ion, attached to the oxide surface layer of the base substrate, a metal species attached to the multidentate ligand, and a multifunctional organic ligand attached to the metal species, and (d) a base substrate having an oxide surface layer, a multidentate ligand, capable of binding a metal ion, attached to the oxide surface layer of the base substrate, a metal species attached to the multidentate ligand, a multifunctional organic ligand attached to the metal species, and a second metal species attached to the multifunctional organic ligand, are provided; such articles are useful in detecting the presence of a selected target species, as nonlinear optical materials, or as scavengers for selected target species.

    19. Computation Directorate 2007 Annual Report

      SciTech Connect (OSTI)

      Henson, V E; Guse, J A

      2008-03-06

      expanding BlueGene/L, the world's most powerful computer, by 60% and using it to capture the most prestigious prize in the field of computing, to helping create an automated control system for the National Ignition Facility (NIF) that monitors and adjusts more than 60,000 control and diagnostic points; from creating a microarray probe that rapidly detects virulent high-threat organisms, natural or bioterrorist in origin, to replacing large numbers of physical computer servers with small numbers of virtual servers, reducing operating expense by 60%, the people in Computation have been at the center of weighty projects whose impacts are felt across the Laboratory and the DOE community. The accomplishments I just mentioned, and another two dozen or so, make up the stories contained in this report. While they form an exceptionally diverse set of projects and topics, it is what they have in common that excites me. They share the characteristic of being central, often crucial, to the mission-driven business of the Laboratory. Computational science has become fundamental to nearly every aspect of the Laboratory's approach to science and even to the conduct of administration. It is difficult to consider how we would proceed without computing, which occurs at all scales, from handheld and desktop computing to the systems controlling the instruments and mechanisms in the laboratories to the massively parallel supercomputers. The reasons for the dramatic increase in the importance of computing are manifest. Practical, fiscal, or political realities make the traditional approach to science, the cycle of theoretical analysis leading to experimental testing, leading to adjustment of theory, and so on, impossible, impractical, or forbidden. How, for example, can we understand the intricate relationship between human activity and weather and climate? We cannot test our hypotheses by experiment, which would require controlled use of the entire earth over centuries. It is only through

    20. High Performance Computing

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      INL's high-performance computing center provides general use scientific computing capabilities to support the lab's efforts in advanced...

    1. High-Performance Computing for Advanced Smart Grid Applications

      SciTech Connect (OSTI)

      Huang, Zhenyu; Chen, Yousu

      2012-07-06

      The power grid is becoming far more complex as a result of the grid evolution meeting an information revolution. Due to the penetration of smart grid technologies, the grid is evolving at an unprecedented speed and the information infrastructure is fundamentally improved with a large number of smart meters and sensors that produce several orders of magnitude larger amounts of data. How to pull data in, perform analysis, and put information out in a real-time manner is a fundamental challenge in smart grid operation and planning. The future power grid requires high performance computing to be one of the foundational technologies in developing the algorithms and tools for the significantly increased complexity. New techniques and computational capabilities are required to meet the demands for higher reliability and better asset utilization, including advanced algorithms and computing hardware for large-scale modeling, simulation, and analysis. This chapter summarizes the computational challenges in smart grid and the need for high performance computing, and presents examples of how high performance computing might be used for future smart grid operation and planning.

    2. Computer Security Risk Assessment

      Energy Science and Technology Software Center (OSTI)

      1992-02-11

      LAVA/CS (LAVA for Computer Security) is an application of the Los Alamos Vulnerability Assessment (LAVA) methodology specific to computer and information security. The software serves as a generic tool for identifying vulnerabilities in computer and information security safeguards systems. Although it does not perform a full risk assessment, the results from its analysis may provide valuable insights into security problems. LAVA/CS assumes that the system is exposed to both natural and environmental hazards and to deliberate malevolent actions by either insiders or outsiders. The user, in the process of answering the LAVA/CS questionnaire, identifies missing safeguards in 34 areas ranging from password management to personnel security and internal audit practices. Specific safeguards protecting a generic set of assets (or targets) from a generic set of threats (or adversaries) are considered. There are four generic assets: the facility, the organization's environment; the hardware, all computer-related hardware; the software, the information in machine-readable form stored both on-line and on transportable media; and the documents and displays, the information in human-readable form stored as hard-copy materials (manuals, reports, listings in full-size or microform), film, and screen displays. Two generic threats are considered: natural and environmental hazards, storms, fires, power abnormalities, water and accidental maintenance damage; and on-site human threats, both intentional and accidental acts attributable to a perpetrator on the facility's premises.

    3. Computational Science Research in Support of Petascale Electromagnetic Modeling

      SciTech Connect (OSTI)

      Lee, L.-Q.; Akcelik, V; Ge, L; Chen, S; Schussman, G; Candel, A; Li, Z; Xiao, L; Kabel, A; Uplenchwar, R; Ng, C; Ko, K; /SLAC

      2008-06-20

      Computational science research components were vital parts of the SciDAC-1 accelerator project and continue to play a critical role in the newly funded SciDAC-2 accelerator project, the Community Petascale Project for Accelerator Science and Simulation (ComPASS). Recent advances and achievements in the area of computational science research in support of petascale electromagnetic modeling for accelerator design analysis are presented; these include shape determination of superconducting RF cavities, a mesh-based multilevel preconditioner for solving highly indefinite linear systems, a moving window using h- or p-refinement for time-domain short-range wakefield calculations, and improved scalable application I/O.

    4. TRAC-PF1/MOD1: an advanced best-estimate computer program for pressurized water reactor thermal-hydraulic analysis

      SciTech Connect (OSTI)

      Liles, D.R.; Mahaffy, J.H.

      1986-07-01

      The Los Alamos National Laboratory is developing the Transient Reactor Analysis Code (TRAC) to provide advanced best-estimate predictions of postulated accidents in light-water reactors. The TRAC-PF1/MOD1 program provides this capability for pressurized water reactors and for many thermal-hydraulic test facilities. The code features either a one- or a three-dimensional treatment of the pressure vessel and its associated internals, a two-fluid nonequilibrium hydrodynamics model with a noncondensable gas field and solute tracking, flow-regime-dependent constitutive equation treatment, optional reflood tracking capability for bottom-flood and falling-film quench fronts, and consistent treatment of entire accident sequences including the generation of consistent initial conditions. The stability-enhancing two-step (SETS) numerical algorithm is used in the one-dimensional hydrodynamics and permits this portion of the fluid dynamics to violate the material Courant condition. This technique permits large time steps and, hence, reduced running time for slow transients.

    5. Physics, Computer Science and Mathematics Division. Annual report, January 1-December 31, 1980

      SciTech Connect (OSTI)

      Birge, R.W.

      1981-12-01

      Research in the physics, computer science, and mathematics division is described for the year 1980. While the division's major effort remains in high energy particle physics, there is a continually growing program in computer science and applied mathematics. Experimental programs are reported in e+e- annihilation, muon and neutrino reactions at FNAL, search for effects of a right-handed gauge boson, limits on neutrino oscillations from muon-decay neutrinos, strong interaction experiments at FNAL, strong interaction experiments at BNL, particle data center, Barrelet moment analysis of πN scattering data, astrophysics and astronomy, earth sciences, and instrument development and engineering for high energy physics. In theoretical physics research, studies included particle physics and accelerator physics. Computer science and mathematics research included analytical and numerical methods, information analysis techniques, advanced computer concepts, and environmental and epidemiological studies. (GHT)

    6. Recent progress and advances in iterative software (including parallel aspects)

      SciTech Connect (OSTI)

      Carey, G.; Young, D.M.; Kincaid, D.

      1994-12-31

      The purpose of the workshop is to provide a forum for discussion of the current state of iterative software packages. Of particular interest is software for large scale engineering and scientific applications, especially for distributed parallel systems. However, the authors will also review the state of software development for conventional architectures. This workshop will complement the other proposed workshops on iterative BLAS kernels and applications. The format for the workshop is as follows: To provide some structure, there will be brief presentations, each of less than five minutes duration and dealing with specific facets of the subject. These will be designed to focus the discussion and to stimulate an exchange with the participants. Issues to be covered include: The evolution of iterative packages, current state of the art, the parallel computing challenge, applications viewpoint, standards, and future directions and open problems.

    7. National Energy Research Scientific Computing Center

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      ... use include on-demand computing functionality for ... mega-electron volts per meter before the metal breaks down. ... been collaborating with earth scientists at Berkeley Lab ...

    8. Discretionary Allocation Request | Argonne Leadership Computing...

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      ... Fusion Energy, Magnetic Fusion Materials Science, Condensed Matter and Materials Physics ... This may include information such as: - computational methods - programming model - ...

    9. computational-hydraulics-for-transportation

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Transportation Workshop Sept. 23-24, 2009 Argonne TRACC Dr. Steven Lottes The Transportation Research and Analysis Computing Center at Argonne National Laboratory will hold a workshop on the use of computational hydraulics for transportation applications. The goals of the workshop are: Bring together people who are using or would benefit from the use of high performance cluster

    10. Caterpillar and Cummins Gain Edge Through Argonne's Rare Computer...

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Caterpillar and Cummins Gain Edge Through Argonne's Rare Computer Modeling and Analysis Resources

    11. Ionic liquids, electrolyte solutions including the ionic liquids, and energy storage devices including the ionic liquids

      SciTech Connect (OSTI)

      Gering, Kevin L.; Harrup, Mason K.; Rollins, Harry W.

      2015-12-08

      An ionic liquid including a phosphazene compound that has a plurality of phosphorus-nitrogen units and at least one pendant group bonded to each phosphorus atom of the plurality of phosphorus-nitrogen units. One pendant group of the at least one pendant group comprises a positively charged pendant group. Additional embodiments of ionic liquids are disclosed, as are electrolyte solutions and energy storage devices including the embodiments of the ionic liquid.

    12. Comparison of approaches to Total Quality Management. Including...

      Office of Scientific and Technical Information (OSTI)

      Country of Publication: United States Language: English Subject: 99 MATHEMATICS, COMPUTERS, INFORMATION SCIENCE, MANAGEMENT, LAW, MISCELLANEOUS; US DOE; PROGRAM MANAGEMENT; ...

    13. CDF GlideinWMS usage in grid computing of high energy physics

      SciTech Connect (OSTI)

      Zvada, Marian; Benjamin, Doug; Sfiligoi, Igor; /Fermilab

      2010-01-01

      Many members of large science collaborations already have specialized grids available to advance their research, driven by the need for more computing resources for data analysis. This has forced the Collider Detector at Fermilab (CDF) collaboration to move beyond dedicated resources and start exploiting Grid resources. Nowadays, the CDF experiment relies increasingly on glidein-based computing pools for data reconstruction, Monte Carlo production, and user data analysis, serving over 400 users through the central analysis farm middleware (CAF) built on top of the Condor batch system and the CDF Grid infrastructure. Condor has a distributed architecture, and its glidein mechanism of pilot jobs is ideal for abstracting Grid computing into a virtual private computing pool. We present the first production use of the generic pilot-based Workload Management System (glideinWMS), an implementation of the pilot mechanism based on the Condor distributed infrastructure. CDF Grid computing uses glideinWMS for data reconstruction on the FNAL campus Grid, and for user analysis and Monte Carlo production across the Open Science Grid (OSG). We review this computing model and setup, including the CDF-specific configuration within the glideinWMS system, which provides powerful scalability and makes Grid computing work like a local batch environment, with the ability to handle more than 10,000 running jobs at a time.

    14. Progress report No. 56, October 1, 1979-September 30, 1980. [Courant Mathematics and Computing Lab. , New York Univ

      SciTech Connect (OSTI)

      1980-10-01

      Research during the period is sketched in a series of abstract-length summaries. The forte of the Laboratory lies in the development and analysis of mathematical models and efficient computing methods for the rapid solution of technological problems of interest to DOE, in particular, the detailed calculation on large computers of complicated fluid flows in which reactions and heat conduction may be taking place. The research program of the Laboratory encompasses two broad categories: analytical and numerical methods, which include applied analysis, computational mathematics, and numerical methods for partial differential equations, and advanced computer concepts, which include software engineering, distributed systems, and high-performance systems. Lists of seminars and publications are included. (RWR)

    15. 2D Wavefront Sensor Analysis and Control

      Energy Science and Technology Software Center (OSTI)

      1996-02-19

      This software is designed for data acquisition and analysis of two-dimensional wavefront sensors. The software includes data acquisition and control functions for an EPIX frame grabber to acquire data from a computer and all the appropriate analysis functions necessary to produce and display intensity and phase information. This software is written in Visual Basic for Windows.

    16. NREL: Technology Deployment - Cities-LEAP Energy Profile Tool Includes

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Energy Data on More than 23,400 U.S. Cities Cities-LEAP Energy Profile Tool Includes Energy Data on More than 23,400 U.S. Cities News NREL Report Examines Energy Use in Cities and Proposes Next Steps for Energy Innovation Publications City-Level Energy Decision Making: Data Use in Energy Planning, Implementation, and Evaluation in U.S. Cities Sponsors DOE's Office of Energy Efficiency and Renewable Energy Policy and Analysis Office Related Stories Hawaii's First Net-Zero Energy

    17. Development of computer graphics

      SciTech Connect (OSTI)

      Nuttall, H.E.

      1989-07-01

      The purpose of this project was to screen and evaluate three graphics packages as to their suitability for displaying concentration contour graphs. The information to be displayed is from computer code simulations describing airborne contaminant transport. The three evaluation programs were MONGO (John Tonry, MIT, Cambridge, MA, 02139), Mathematica (Wolfram Research Inc.), and NCSA Image (National Center for Supercomputing Applications at the University of Illinois at Urbana-Champaign). After a preliminary investigation of each package, NCSA Image appeared to be significantly superior for generating the desired concentration contour graphs. Hence subsequent work, described in this report, covers the implementation and testing of NCSA Image on both Apple Mac II and Sun 4 computers. NCSA Image includes several utilities (Layout, DataScope, HDF, and PalEdit) which were used in this study and installed on Dr. Ted Yamada's Mac II computer. Dr. Yamada provided two sets of air pollution plume data which were displayed using NCSA Image. Both sets were animated into a sequential expanding plume series.

    18. Computational mechanics research and support for aerodynamics and hydraulics at TFHRC, year 1 quarter 3 progress report.

      SciTech Connect (OSTI)

      Lottes, S.A.; Kulak, R.F.; Bojanowski, C.

      2011-08-26

      The computational fluid dynamics (CFD) and computational structural mechanics (CSM) focus areas at Argonne's Transportation Research and Analysis Computing Center (TRACC) initiated a project to support and complement the experimental programs at the Turner-Fairbank Highway Research Center (TFHRC) with high performance computing based analysis capabilities in August 2010. The project was established with a new interagency agreement between the Department of Energy and the Department of Transportation to provide collaborative research, development, and benchmarking of advanced three-dimensional computational mechanics analysis methods to the aerodynamics and hydraulics laboratories at TFHRC for a period of five years, beginning in October 2010. The analysis methods employ well-benchmarked and supported commercial computational mechanics software. Computational mechanics encompasses the areas of Computational Fluid Dynamics (CFD), Computational Wind Engineering (CWE), Computational Structural Mechanics (CSM), and Computational Multiphysics Mechanics (CMM) applied in Fluid-Structure Interaction (FSI) problems. The major areas of focus of the project are wind and water loads on bridges - superstructure, deck, cables, and substructure (including soil), primarily during storms and flood events - and the risks that these loads pose to structural failure. For flood events at bridges, another major focus of the work is assessment of the risk to bridges caused by scour of stream and riverbed material away from the foundations of a bridge. Other areas of current research include modeling of flow through culverts to assess them for fish passage, modeling of the salt spray transport into bridge girders to address suitability of using weathering steel in bridges, vehicle stability under high wind loading, and the use of electromagnetic shock absorbers to improve vehicle stability under high wind conditions. This quarterly report documents technical progress on the project tasks.

    19. Numerical uncertainty in computational engineering and physics

      SciTech Connect (OSTI)

      Hemez, Francois M

      2009-01-01

      Obtaining a solution that approximates ordinary or partial differential equations on a computational mesh or grid does not necessarily mean that the solution is accurate or even 'correct'. Unfortunately, assessing the quality of discrete solutions by questioning the role played by spatial and temporal discretizations generally comes as a distant third to test-analysis comparison and model calibration. This publication aims to raise awareness of the fact that discrete solutions introduce numerical uncertainty. This uncertainty may, in some cases, overwhelm in complexity and magnitude other sources of uncertainty that include experimental variability, parametric uncertainty and modeling assumptions. The concepts of consistency, convergence and truncation error are overviewed to explain the articulation between the exact solution of continuous equations, the solution of modified equations and discrete solutions computed by a code. The current state-of-the-practice of code and solution verification activities is discussed. An example in the discipline of hydrodynamics illustrates the significant effect that meshing can have on the quality of code predictions. A simple method is proposed to derive bounds of solution uncertainty in cases where the exact solution of the continuous equations, or its modified equations, is unknown. It is argued that numerical uncertainty originating from mesh discretization should always be quantified and accounted for in the overall uncertainty 'budget' that supports decision-making for applications in computational physics and engineering.
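
      One standard way to quantify the discretization uncertainty discussed here, not necessarily the method proposed in the publication, is Richardson extrapolation: from solutions on three successively refined meshes one can estimate the observed order of convergence and bound the error remaining in the finest solution. The mesh solutions below are made-up data.

      import math

      def observed_order(f_coarse, f_medium, f_fine, r=2.0):
          """p such that error ~ C * h^p, from solutions on h, h/r, h/r^2 meshes."""
          return math.log(abs(f_coarse - f_medium) / abs(f_medium - f_fine)) / math.log(r)

      def error_estimate(f_medium, f_fine, p, r=2.0):
          """Richardson estimate of the error left in the fine-mesh solution."""
          return abs(f_fine - f_medium) / (r ** p - 1.0)

      f1, f2, f3 = 0.970, 0.9925, 0.998            # solutions on meshes h, h/2, h/4
      p = observed_order(f1, f2, f3)
      print(f"observed order ~ {p:.2f}; fine-mesh error bound ~ {error_estimate(f2, f3, p):.4g}")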

    20. Internal combustion engines: Computer applications. (Latest citations from the EI Compendex plus database). Published Search

      SciTech Connect (OSTI)

      Not Available

      1993-10-01

      The bibliography contains citations concerning the application of computers and computerized simulations in the design, analysis, operation, and evaluation of various types of internal combustion engines and associated components and apparatus. Special attention is given to engine control and performance. (Contains a minimum of 67 citations and includes a subject term index and title list.)

    1. Applications of Parallel Computers

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Applications of Parallel Computers UCB CS267 Spring 2015 Tuesday & Thursday, 9:30-11:00 Pacific Time Applications of Parallel Computers, CS267, is a graduate-level course...

    2. MELCOR computer code manuals

      SciTech Connect (OSTI)

      Summers, R.M.; Cole, R.K. Jr.; Smith, R.C.; Stuart, D.S.; Thompson, S.L.; Hodge, S.A.; Hyman, C.R.; Sanders, R.L.

      1995-03-01

      MELCOR is a fully integrated, engineering-level computer code that models the progression of severe accidents in light water reactor nuclear power plants. MELCOR is being developed at Sandia National Laboratories for the U.S. Nuclear Regulatory Commission as a second-generation plant risk assessment tool and the successor to the Source Term Code Package. A broad spectrum of severe accident phenomena in both boiling and pressurized water reactors is treated in MELCOR in a unified framework. These include: thermal-hydraulic response in the reactor coolant system, reactor cavity, containment, and confinement buildings; core heatup, degradation, and relocation; core-concrete attack; hydrogen production, transport, and combustion; fission product release and transport; and the impact of engineered safety features on thermal-hydraulic and radionuclide behavior. Current uses of MELCOR include estimation of severe accident source terms and their sensitivities and uncertainties in a variety of applications. This publication of the MELCOR computer code manuals corresponds to MELCOR 1.8.3, released to users in August, 1994. Volume 1 contains a primer that describes MELCOR's phenomenological scope, organization (by package), and documentation. The remainder of Volume 1 contains the MELCOR Users Guides, which provide the input instructions and guidelines for each package. Volume 2 contains the MELCOR Reference Manuals, which describe the phenomenological models that have been implemented in each package.

    3. Theory, Modeling and Computation

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      The sophistication of modeling and simulation will be enhanced not only by the wealth of data available from MaRIE but by the increased computational capacity made possible by the advent of extreme computing. Extreme Computing to Power Accurate Atomistic Simulations Advances in high-performance computing and theory allow longer and larger atomistic simulations than currently possible.

    4. C-parameter distribution at N³LL′ including power corrections

      DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

      Hoang, André H.; Kolodrubetz, Daniel W.; Mateu, Vicent; Stewart, Iain W.

      2015-05-15

      We compute the e⁺e⁻ C-parameter distribution using the soft-collinear effective theory with a resummation to next-to-next-to-next-to-leading-log prime accuracy of the most singular partonic terms. This includes the known fixed-order QCD results up to O(αs³), a numerical determination of the two-loop nonlogarithmic term of the soft function, and all logarithmic terms in the jet and soft functions up to three loops. Our result holds for C in the peak, tail, and far tail regions. Additionally, we treat hadronization effects using a field theoretic nonperturbative soft function, with moments Ωn. To eliminate an O(ΛQCD) renormalon ambiguity in the soft function, we switch from the MS-bar to a short-distance "Rgap" scheme to define the leading power correction parameter Ω1. We show how to simultaneously account for running effects in Ω1 due to renormalon subtractions and hadron-mass effects, enabling power correction universality between C-parameter and thrust to be tested in our setup. We discuss in detail the impact of resummation and renormalon subtractions on the convergence. In the relevant fit region for αs(mZ) and Ω1, the perturbative uncertainty in our cross section is ≅ 2.5% at Q = mZ.

    5. Locating hardware faults in a parallel computer

      DOE Patents [OSTI]

      Archer, Charles J.; Megerian, Mark G.; Ratterman, Joseph D.; Smith, Brian E.

      2010-04-13

      Locating hardware faults in a parallel computer, including defining within a tree network of the parallel computer two or more sets of non-overlapping test levels of compute nodes of the network that together include all the data communications links of the network, each non-overlapping test level comprising two or more adjacent tiers of the tree; defining test cells within each non-overlapping test level, each test cell comprising a subtree of the tree including a subtree root compute node and all descendant compute nodes of the subtree root compute node within a non-overlapping test level; performing, separately on each set of non-overlapping test levels, an uplink test on all test cells in a set of non-overlapping test levels; and performing, separately from the uplink tests and separately on each set of non-overlapping test levels, a downlink test on all test cells in a set of non-overlapping test levels.

    6. advanced simulation and computing

      National Nuclear Security Administration (NNSA)

      Each successive generation of computing system has provided greater computing power and energy efficiency.

      CTS-1 clusters will support NNSA's Life Extension Program and...

    7. Computational Physics and Methods

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Computational Physics and Methods: performing innovative simulations of physics phenomena on tomorrow's scientific computing platforms. Growth and emissivity of young galaxy ...

    8. Applied & Computational Math

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Applied & Computational Math - Sandia Energy. ...

    9. Molecular Science Computing | EMSL

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      computational and state-of-the-art experimental tools, providing a cross-disciplinary environment to further research. ...

    10. Computational Earth Science

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Computational Earth Science: we develop and apply a range of high-performance computational methods and software tools to Earth science projects in support of environmental ...

    11. Combinatorial evaluation of systems including decomposition of a system representation into fundamental cycles

      DOE Patents [OSTI]

      Oliveira, Joseph S.; Jones-Oliveira, Janet B.; Bailey, Colin G.; Gull, Dean W.

      2008-07-01

      One embodiment of the present invention includes a computer operable to represent a physical system with a graphical data structure corresponding to a matroid. The graphical data structure corresponds to a number of vertices and a number of edges that each correspond to two of the vertices. The computer is further operable to define a closed pathway arrangement with the graphical data structure and identify each different one of a number of fundamental cycles by evaluating a different respective one of the edges with a spanning tree representation. The fundamental cycles each include three or more of the vertices.
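
      The spanning-tree decomposition named in the abstract is a classical construction: every edge left out of a spanning tree closes exactly one fundamental cycle. A minimal sketch in plain Python for a connected graph (illustrative only, not the patented matroid machinery):

        from collections import deque

        def fundamental_cycles(vertices, edges):
            """Return one cycle (as a vertex list) per edge outside a BFS spanning tree."""
            adj = {v: [] for v in vertices}
            for u, v in edges:
                adj[u].append(v)
                adj[v].append(u)
            root = next(iter(vertices))
            parent, queue, tree = {root: None}, deque([root]), set()
            while queue:                          # grow a BFS spanning tree
                u = queue.popleft()
                for v in adj[u]:
                    if v not in parent:
                        parent[v] = u
                        tree.add(frozenset((u, v)))
                        queue.append(v)

            def to_root(v):                       # path from v up to the root
                path = []
                while v is not None:
                    path.append(v)
                    v = parent[v]
                return path

            cycles = []
            for u, v in edges:                    # each non-tree edge closes a cycle
                if frozenset((u, v)) not in tree:
                    pu, pv = to_root(u), to_root(v)
                    pv_set = set(pv)
                    lca = next(x for x in pu if x in pv_set)
                    cycles.append(pu[:pu.index(lca) + 1] + pv[:pv.index(lca)][::-1])
            return cycles

        print(fundamental_cycles({1, 2, 3, 4},
                                 [(1, 2), (2, 3), (3, 4), (4, 1), (1, 3)]))
        # with root 1 this prints [[2, 1, 3], [3, 1, 4]]; each cycle is closed
        # by its non-tree edge ((2, 3) and (3, 4) respectively)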

    12. Fourth SIAM conference on mathematical and computational issues in the geosciences: Final program and abstracts

      SciTech Connect (OSTI)

      1997-12-31

      The conference focused on computational and modeling issues in the geosciences. Of the geosciences, problems associated with phenomena occurring in the earth's subsurface were best represented. Topics in this area included petroleum recovery, ground water contamination and remediation, seismic imaging, parameter estimation, upscaling, geostatistical heterogeneity, reservoir and aquifer characterization, optimal well placement and pumping strategies, and geochemistry. Additional sessions were devoted to the atmosphere, surface water and oceans. The central mathematical themes included computational algorithms and numerical analysis, parallel computing, mathematical analysis of partial differential equations, statistical and stochastic methods, optimization, inversion, homogenization and renormalization. The problem areas discussed at this conference are of considerable national importance, with the increasing importance of environmental issues, global change, remediation of waste sites, declining domestic energy sources and an increasing reliance on producing the most out of established oil reservoirs.

    13. New challenges in computational biochemistry

      SciTech Connect (OSTI)

      Honig, B.

      1996-12-31

      The new challenges in computational biochemistry to which the title refers include the prediction of the relative binding free energy of different substrates to the same protein, conformational sampling, and other examples of theoretical predictions matching known protein structure and behavior.

    14. Experimental Mathematics and Computational Statistics

      SciTech Connect (OSTI)

      Bailey, David H.; Borwein, Jonathan M.

      2009-04-30

      The field of statistics has long been noted for techniques to detect patterns and regularities in numerical data. In this article we explore connections between statistics and the emerging field of 'experimental mathematics'. These include both applications of experimental mathematics in statistics and statistical methods applied to computational mathematics.

    15. The Computational Physics Program of the national MFE Computer Center

      SciTech Connect (OSTI)

      Mirin, A.A.

      1989-01-01

      Since June 1974, the MFE Computer Center has been engaged in a significant computational physics effort. The principal objective of the Computational Physics Group is to develop advanced numerical models for the investigation of plasma phenomena and the simulation of present and future magnetic confinement devices. Another major objective of the group is to develop efficient algorithms and programming techniques for current and future generations of supercomputers. The Computational Physics Group has been involved in several areas of fusion research. One main area is the application of Fokker-Planck/quasilinear codes to tokamaks. Another major area is the investigation of resistive magnetohydrodynamics in three dimensions, with applications to tokamaks and compact toroids. A third area is the investigation of kinetic instabilities using a 3-D particle code; this work is often coupled with the task of numerically generating equilibria which model experimental devices. Ways to apply statistical closure approximations to study tokamak-edge plasma turbulence have been under examination, with the hope of being able to explain anomalous transport. Also, we are collaborating in an international effort to evaluate fully three-dimensional linear stability of toroidal devices. In addition to these computational physics studies, the group has developed a number of linear systems solvers for general classes of physics problems and has been making a major effort at ascertaining how to efficiently utilize multiprocessor computers. A summary of these programs is included in this paper. 6 tabs.

    16. Model Analysis ToolKit

      Energy Science and Technology Software Center (OSTI)

      2015-05-15

      MATK provides basic functionality to facilitate model analysis within the Python computational environment. Model analysis setup within MATK includes defining parameters, defining observations, defining the model (a Python function), and defining samplesets (sets of parameter combinations). Currently supported functionality includes forward model runs; Latin-Hypercube sampling of parameters; multi-dimensional parameter studies; parallel execution of parameter samples; model calibration using an internal Levenberg-Marquardt algorithm, the lmfit package, or the levmar package; and Markov Chain Monte Carlo using the pymc package. MATK facilitates model analysis using scipy (calibration via scipy.optimize) and rpy2 (a Python interface to R).
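
      MATK's actual API is not reproduced here; as a self-contained illustration of one item on the list, Latin-Hypercube sampling of a parameter set takes only a few lines of numpy (`my_model` below is a hypothetical user-supplied function):

        import numpy as np

        def latin_hypercube(bounds, n_samples, seed=None):
            """One point per stratum in each dimension, strata paired at random."""
            rng = np.random.default_rng(seed)
            dims = len(bounds)
            strata = rng.permuted(np.tile(np.arange(n_samples), (dims, 1)), axis=1).T
            u = (strata + rng.random((n_samples, dims))) / n_samples
            lo = np.array([b[0] for b in bounds])
            hi = np.array([b[1] for b in bounds])
            return lo + u * (hi - lo)

        # Two hypothetical parameters with ranges [0, 1] and [10, 100]:
        samples = latin_hypercube([(0.0, 1.0), (10.0, 100.0)], n_samples=20, seed=0)
        # results = [my_model(p) for p in samples]   # my_model: hypothetical model function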

    17. Cosmic Reionization On Computers | Argonne Leadership Computing...

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      its Cosmic Reionization On Computers (CROC) project, using the Adaptive Refinement Tree (ART) code as its main simulation tool. An important objective of this research is to make...

    18. Computing for Finance

      ScienceCinema (OSTI)

      None

      2011-10-06

      The finance sector is one of the driving forces for the use of distributed or Grid computing for business purposes. The speakers will review the state-of-the-art of high performance computing in the financial sector, and provide insight into how different types of Grid computing - from local clusters to global networks - are being applied to financial applications. They will also describe the use of software and techniques from physics, such as Monte Carlo simulations, in the financial world. There will be four talks of 20min each. The talk abstracts and speaker bios are listed below. This will be followed by a Q&A panel session with the speakers. From 19:00 onwards there will be a networking cocktail for audience and speakers. This is an EGEE / CERN openlab event organized in collaboration with the regional business network rezonance.ch. A webcast of the event will be made available for subsequent viewing, along with PowerPoint material presented by the speakers. Attendance is free and open to all. Registration is mandatory via www.rezonance.ch, including for CERN staff. 1. Overview of High Performance Computing in the Financial Industry Michael Yoo, Managing Director, Head of the Technical Council, UBS The presentation will describe the key business challenges driving the need for HPC solutions, describe the means by which those challenges are being addressed within UBS (such as GRID) as well as the limitations of some of these solutions, and assess some of the newer HPC technologies which may also play a role in the Financial Industry in the future. Speaker Bio: Michael originally joined the former Swiss Bank Corporation in 1994 in New York as a developer on a large data warehouse project. In 1996 he left SBC and took a role with Fidelity Investments in Boston. Unable to stay away for long, he returned to SBC in 1997 while working for Perot Systems in Singapore. Finally, in 1998 he formally returned to UBS in Stamford following the merger with SBC and has remained

    19. Computing for Finance

      ScienceCinema (OSTI)

      None

      2011-10-06

      The finance sector is one of the driving forces for the use of distributed or Grid computing for business purposes. The speakers will review the state-of-the-art of high performance computing in the financial sector, and provide insight into how different types of Grid computing - from local clusters to global networks - are being applied to financial applications. They will also describe the use of software and techniques from physics, such as Monte Carlo simulations, in the financial world. There will be four talks of 20min each. The talk abstracts and speaker bios are listed below. This will be followed by a Q&A panel session with the speakers. From 19:00 onwards there will be a networking cocktail for audience and speakers. This is an EGEE / CERN openlab event organized in collaboration with the regional business network rezonance.ch. A webcast of the event will be made available for subsequent viewing, along with PowerPoint material presented by the speakers. Attendance is free and open to all. Registration is mandatory via www.rezonance.ch, including for CERN staff. 1. Overview of High Performance Computing in the Financial Industry Michael Yoo, Managing Director, Head of the Technical Council, UBS The presentation will describe the key business challenges driving the need for HPC solutions, describe the means by which those challenges are being addressed within UBS (such as GRID) as well as the limitations of some of these solutions, and assess some of the newer HPC technologies which may also play a role in the Financial Industry in the future. Speaker Bio: Michael originally joined the former Swiss Bank Corporation in 1994 in New York as a developer on a large data warehouse project. In 1996 he left SBC and took a role with Fidelity Investments in Boston. Unable to stay away for long, he returned to SBC in 1997 while working for Perot Systems in Singapore. Finally, in 1998 he formally returned to UBS in Stamford following the merger with SBC and has remained

    20. Computing for Finance

      SciTech Connect (OSTI)

      2010-03-24

      The finance sector is one of the driving forces for the use of distributed or Grid computing for business purposes. The speakers will review the state-of-the-art of high performance computing in the financial sector, and provide insight into how different types of Grid computing - from local clusters to global networks - are being applied to financial applications. They will also describe the use of software and techniques from physics, such as Monte Carlo simulations, in the financial world. There will be four talks of 20min each. The talk abstracts and speaker bios are listed below. This will be followed by a Q&A panel session with the speakers. From 19:00 onwards there will be a networking cocktail for audience and speakers. This is an EGEE / CERN openlab event organized in collaboration with the regional business network rezonance.ch. A webcast of the event will be made available for subsequent viewing, along with PowerPoint material presented by the speakers. Attendance is free and open to all. Registration is mandatory via www.rezonance.ch, including for CERN staff. 1. Overview of High Performance Computing in the Financial Industry Michael Yoo, Managing Director, Head of the Technical Council, UBS The presentation will describe the key business challenges driving the need for HPC solutions, describe the means by which those challenges are being addressed within UBS (such as GRID) as well as the limitations of some of these solutions, and assess some of the newer HPC technologies which may also play a role in the Financial Industry in the future. Speaker Bio: Michael originally joined the former Swiss Bank Corporation in 1994 in New York as a developer on a large data warehouse project. In 1996 he left SBC and took a role with Fidelity Investments in Boston. Unable to stay away for long, he returned to SBC in 1997 while working for Perot Systems in Singapore. Finally, in 1998 he formally returned to UBS in Stamford following the merger with SBC and has
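
      The talks above repeatedly invoke Monte Carlo techniques carried over from physics. As a generic, self-contained illustration (textbook material, not code from the talks), a European call option can be priced by simulating terminal prices under geometric Brownian motion:

        import numpy as np

        def mc_european_call(s0, strike, rate, vol, maturity, n_paths=100_000, seed=0):
            """Price a European call by simulating terminal prices under GBM."""
            rng = np.random.default_rng(seed)
            z = rng.standard_normal(n_paths)
            st = s0 * np.exp((rate - 0.5 * vol ** 2) * maturity
                             + vol * np.sqrt(maturity) * z)
            payoff = np.maximum(st - strike, 0.0)   # call payoff at maturity
            return np.exp(-rate * maturity) * payoff.mean()   # discounted average

        print(mc_european_call(s0=100, strike=105, rate=0.02, vol=0.2, maturity=1.0))

      The same sampling-and-averaging structure scales trivially across a cluster or grid, which is why the technique features so prominently in financial HPC.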

    1. Performing a global barrier operation in a parallel computer

      DOE Patents [OSTI]

      Archer, Charles J; Blocksome, Michael A; Ratterman, Joseph D; Smith, Brian E

      2014-12-09

      Executing computing tasks on a parallel computer that includes compute nodes coupled for data communications, where each compute node executes tasks, with one task on each compute node designated as a master task, including: for each task on each compute node until all master tasks have joined a global barrier: determining whether the task is a master task; if the task is not a master task, joining a single local barrier; if the task is a master task, joining the global barrier and the single local barrier only after all other tasks on the compute node have joined the single local barrier.
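
      The two-stage structure of the claim (every task joins a node-local barrier; only designated master tasks join the global barrier) can be sketched with Python threads standing in for tasks. This is a conceptual toy with invented names, not the patented implementation:

        import threading

        N_NODES, TASKS_PER_NODE = 2, 3
        global_barrier = threading.Barrier(N_NODES)   # joined by master tasks only
        local_barriers = [threading.Barrier(TASKS_PER_NODE) for _ in range(N_NODES)]

        def task(node, rank):
            local_barriers[node].wait()   # every task on the node joins the local barrier
            if rank == 0:                 # the node's designated master task...
                global_barrier.wait()     # ...joins the global barrier only after that
            local_barriers[node].wait()   # all tasks wait for the master's return
            print(f"node {node}, task {rank}: past the global barrier")

        threads = [threading.Thread(target=task, args=(n, r))
                   for n in range(N_NODES) for r in range(TASKS_PER_NODE)]
        for t in threads:
            t.start()
        for t in threads:
            t.join()

      The payoff of this arrangement is that only one task per node participates in the (expensive) global synchronization, while intra-node synchronization stays cheap.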

    2. Software/Computing | Argonne National Laboratory

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Software/Computing: Argonne is the central site for work on database and data management. The group has key responsibilities in the design and implementation of the I/O model, which must provide distributed access to many petabytes of data for both event reconstruction and physics analysis. The group deployed a number of HEP packages on the BlueGene/Q supercomputer of the Argonne Leadership Computing Facility, and currently generates CPU-intensive Monte Carlo event samples for

    3. Parallel computing works

      SciTech Connect (OSTI)

      Not Available

      1991-10-23

      An account of the Caltech Concurrent Computation Program (C³P), a five year project that focused on answering the question: "Can parallel computers be used to do large-scale scientific computations?" As the title indicates, the question is answered in the affirmative, by implementing numerous scientific applications on real parallel computers and doing computations that produced new scientific results. In the process of doing so, C³P helped design and build several new computers, designed and implemented basic system software, developed algorithms for frequently used mathematical computations on massively parallel machines, devised performance models and measured the performance of many computers, and created a high performance computing facility based exclusively on parallel computers. While the initial focus of C³P was the hypercube architecture developed by C. Seitz, many of the methods developed and lessons learned have been applied successfully on other massively parallel architectures.

    4. Computational Fluid Dynamics

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Computational fluid dynamics (CFD) research uses mathematical and computational models of flowing fluids to describe and predict fluid response in problems of interest, such as the flow of air around a moving vehicle or the flow of water and sediment in a river. Coupled with appropriate and prototypical

    5. Structure of a complex of uridine phosphorylase from Yersinia pseudotuberculosis with the modified bacteriostatic antibacterial drug determined by X-ray crystallography and computer analysis

      SciTech Connect (OSTI)

      Balaev, V. V.; Lashkov, A. A. Gabdoulkhakov, A. G.; Seregina, T. A.; Dontsova, M. V.; Mikhailov, A. M.

      2015-03-15

      Pseudotuberculosis and bubonic plague are acute infectious diseases caused by the bacteria Yersinia pseudotuberculosis and Yersinia pestis. These diseases are treated, in particular, with trimethoprim and its modified analogues. However, uridine phosphorylases (pyrimidine nucleoside phosphorylases) that are present in bacterial cells neutralize the action of trimethoprim and its modified analogues on the cells. In order to reveal the character of the interaction of the drug with bacterial uridine phosphorylase, the atomic structure of the unligated molecule of uridine-specific pyrimidine nucleoside phosphorylase from Yersinia pseudotuberculosis (YptUPh) was determined by X-ray diffraction at 1.7 Å resolution with high reliability (R_work = 16.2%, R_free = 19.4%; r.m.s.d. of bond lengths and bond angles are 0.006 Å and 1.005°, respectively; DPI = 0.107 Å). The atoms of the amino acid residues of the functionally important secondary-structure elements—the loop L9 and the helix H8—of the enzyme YptUPh were located. The three-dimensional structure of the complex of YptUPh with modified trimethoprim—referred to as 53I—was determined by computer simulation. It was shown that 53I is a pseudosubstrate of uridine phosphorylases, and its pyrimidine-2,4-diamine group is located in the phosphate-binding site of the enzyme YptUPh.

    6. Mesh Morphing Pier Analysis

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Application of Mesh Morphing in STAR-CCM+ to Analysis of Scour at Cylindrical Piers Mesh morphing is a fluid structure interaction capability in STAR-CCM+ to move vertices in the computational mesh in a way that preserves mesh quality when a boundary moves. The equations being solved include terms that account for the motion of the mesh maintaining mass and property balances during the solution process. Initial work on leveraging the mesh morphing FSI capability for efficient application to

    7. A user's guide to LUGSAN II. A computer program to calculate and archive lug and sway brace loads for aircraft-carried stores

      SciTech Connect (OSTI)

      Dunn, W.N.

      1998-03-01

      LUG and Sway brace ANalysis (LUGSAN) II is an analysis and database computer program that is designed to calculate store lug and sway brace loads for aircraft captive carriage. LUGSAN II combines the rigid body dynamics code, SWAY85, with a Macintosh Hypercard database to function both as an analysis and archival system. This report describes the LUGSAN II application program, which operates on the Macintosh System (Hypercard 2.2 or later) and includes function descriptions, layout examples, and sample sessions. Although this report is primarily a user's manual, a brief overview of the LUGSAN II computer code is included with suggested resources for programmers.

    8. PREPARING FOR EXASCALE: ORNL Leadership Computing Application Requirements and Strategy

      SciTech Connect (OSTI)

      Joubert, Wayne; Kothe, Douglas B; Nam, Hai Ah

      2009-12-01

      advanced on multiple fronts, including peak flops, node memory capacity, interconnect latency, interconnect bandwidth, and memory bandwidth. (2) Effective parallel programming interfaces must be developed to exploit the power of emerging hardware. (3) Science application teams must now begin to adapt and reformulate application codes to the new hardware and software, typified by hierarchical and disparate layers of compute, memory and concurrency. (4) Algorithm research must be realigned to exploit this hierarchy. (5) When possible, mathematical libraries must be used to encapsulate the required operations in an efficient and useful way. (6) Software tools must be developed to make the new hardware more usable. (7) Science application software must be improved to cope with the increasing complexity of computing systems. (8) Data management efforts must be readied for the larger quantities of data generated by larger, more accurate science models. Requirements elicitation, analysis, validation, and management comprise a difficult and inexact process, particularly in periods of technological change. Nonetheless, the OLCF requirements modeling process is becoming increasingly quantitative and actionable, as the process becomes more developed and mature, and the process this year has identified clear and concrete steps to be taken. This report discloses (1) the fundamental science case driving the need for the next generation of computer hardware, (2) application usage trends that illustrate the science need, (3) application performance characteristics that drive the need for increased hardware capabilities, (4) resource and process requirements that make the development and deployment of science applications on next-generation hardware successful, and (5) summary recommendations for the required next steps within the computer and computational science communities.

    9. Computer memory management system

      DOE Patents [OSTI]

      Kirk, III, Whitson John

      2002-01-01

      A computer memory management system utilizing a memory structure system of "intelligent" pointers in which information related to the use status of the memory structure is designed into the pointer. Through this pointer system, the invention provides essentially automatic memory management (often referred to as garbage collection) by allowing relationships between objects to have definite memory management behavior, using a coding protocol that describes when relationships should be maintained and when they should be broken. In one aspect, the system allows automatic breaking of strong links to facilitate object garbage collection, coupled with relationship adjectives that define deletion of associated objects. In another aspect, the invention includes simple-to-use infinite undo/redo functionality: through a simple function call, it can undo all of the changes made to a data model since the previous 'valid state' was noted.
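
      The strong-versus-weak link idea is close in spirit to weak references in mainstream languages. A sketch in Python (class and attribute names invented for the example), where CPython's reference counting collects an object as soon as its last strong reference is broken:

        import weakref

        class Node:
            def __init__(self, name):
                self.name = name
                self.owns = []      # strong links: keep their targets alive
                self.refers = []    # weak links: purely observational

        parent, child = Node("parent"), Node("child")
        parent.owns.append(child)                    # strong relationship
        child.refers.append(weakref.ref(parent))     # weak back-reference

        ref_to_parent = child.refers[0]
        del parent                      # break the last strong reference...
        print(ref_to_parent() is None)  # True: parent was collected automatically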

    10. ASCR Workshop on Quantum Computing for Science

      SciTech Connect (OSTI)

      Aspuru-Guzik, Alan; Van Dam, Wim; Farhi, Edward; Gaitan, Frank; Humble, Travis; Jordan, Stephen; Landahl, Andrew J; Love, Peter; Lucas, Robert; Preskill, John; Muller, Richard P.; Svore, Krysta; Wiebe, Nathan; Williams, Carl

      2015-06-01

      This report details the findings of the DOE ASCR Workshop on Quantum Computing for Science that was organized to assess the viability of quantum computing technologies to meet the computational requirements of the DOE’s science and energy mission, and to identify the potential impact of quantum technologies. The workshop was held on February 17-18, 2015, in Bethesda, MD, to solicit input from members of the quantum computing community. The workshop considered models of quantum computation and programming environments, physical science applications relevant to DOE's science mission as well as quantum simulation, and applied mathematics topics including potential quantum algorithms for linear algebra, graph theory, and machine learning. This report summarizes these perspectives into an outlook on the opportunities for quantum computing to impact problems relevant to the DOE’s mission as well as the additional research required to bring quantum computing to the point where it can have such impact.

    11. ONSET OF CHAOS IN A MODEL OF QUANTUM COMPUTATION (Conference...

      Office of Scientific and Technical Information (OSTI)

      Clearly, if this happens in a quantum computer, it may lead to a destruction of the ... Numerical analysis of the simplest model of a quantum computer (2D model of 12 spins with ...

    12. Computer Model Buildings Contaminated with Radioactive Material

      Energy Science and Technology Software Center (OSTI)

      1998-05-19

      The RESRAD-BUILD computer code is a pathway analysis model designed to evaluate the potential radiological dose incurred by an individual who works or lives in a building contaminated with radioactive material.

    13. Session on computation in biological pathways

      SciTech Connect (OSTI)

      Karp, P.D.; Riley, M.

      1996-12-31

      The papers in this session focus on the development of pathway databases and computational tools for pathway analysis. The discussion involves existing databases of sequenced genomes, as well as techniques for studying regulatory pathways.

    14. Computational Tools to Accelerate Commercial Development

      SciTech Connect (OSTI)

      Miller, David C.

      2013-01-01

      The goals of the work reported are: to develop new computational tools and models to enable industry to more rapidly develop and deploy new advanced energy technologies; to demonstrate the capabilities of the CCSI Toolset on non-proprietary case studies; and to deploy the CCSI Toolset to industry. Challenges of simulating carbon capture (and other) processes include: dealing with multiple scales (particle, device, and whole process scales); integration across scales; verification, validation, and uncertainty; and decision support. The tools cover: risk analysis and decision making; validated, high-fidelity CFD; high-resolution filtered sub-models; process design and optimization tools; advanced process control and dynamics; process models; basic data sub-models; and cross-cutting integration tools.

    15. Computing for Finance

      ScienceCinema (OSTI)

      None

      2011-10-06

      The finance sector is one of the driving forces for the use of distributed or Grid computing for business purposes. The speakers will review the state-of-the-art of high performance computing in the financial sector, and provide insight into how different types of Grid computing - from local clusters to global networks - are being applied to financial applications. They will also describe the use of software and techniques from physics, such as Monte Carlo simulations, in the financial world. There will be four talks of 20min each. The talk abstracts and speaker bios are listed below. This will be followed by a Q&A panel session with the speakers. From 19:00 onwards there will be a networking cocktail for audience and speakers. This is an EGEE / CERN openlab event organized in collaboration with the regional business network rezonance.ch. A webcast of the event will be made available for subsequent viewing, along with PowerPoint material presented by the speakers. Attendance is free and open to all. Registration is mandatory via www.rezonance.ch, including for CERN staff. 1. Overview of High Performance Computing in the Financial Industry Michael Yoo, Managing Director, Head of the Technical Council, UBS The presentation will describe the key business challenges driving the need for HPC solutions, describe the means by which those challenges are being addressed within UBS (such as GRID) as well as the limitations of some of these solutions, and assess some of the newer HPC technologies which may also play a role in the Financial Industry in the future. Speaker Bio: Michael originally joined the former Swiss Bank Corporation in 1994 in New York as a developer on a large data warehouse project. In 1996 he left SBC and took a role with Fidelity Investments in Boston. Unable to stay away for long, he returned to SBC in 1997 while working for Perot Systems in Singapore. Finally, in 1998 he formally returned to UBS in Stamford following the merger with SBC and has remained

    16. Intranode data communications in a parallel computer

      DOE Patents [OSTI]

      Archer, Charles J; Blocksome, Michael A; Miller, Douglas R; Ratterman, Joseph D; Smith, Brian E

      2013-07-23

      Intranode data communications in a parallel computer that includes compute nodes configured to execute processes, where the data communications include: allocating, upon initialization of a first process of a compute node, a region of shared memory; establishing, by the first process, a predefined number of message buffers, each message buffer associated with a process to be initialized on the compute node; sending, to a second process on the same compute node, a data communications message without determining whether the second process has been initialized, including storing the data communications message in the message buffer of the second process; and upon initialization of the second process: retrieving, by the second process, a pointer to the second process's message buffer; and retrieving, by the second process from the second process's message buffer in dependence upon the pointer, the data communications message sent by the first process.

    17. Intranode data communications in a parallel computer

      DOE Patents [OSTI]

      Archer, Charles J; Blocksome, Michael A; Miller, Douglas R; Ratterman, Joseph D; Smith, Brian E

      2014-01-07

      Intranode data communications in a parallel computer that includes compute nodes configured to execute processes, where the data communications include: allocating, upon initialization of a first process of a compute node, a region of shared memory; establishing, by the first process, a predefined number of message buffers, each message buffer associated with a process to be initialized on the compute node; sending, to a second process on the same compute node, a data communications message without determining whether the second process has been initialized, including storing the data communications message in the message buffer of the second process; and upon initialization of the second process: retrieving, by the second process, a pointer to the second process's message buffer; and retrieving, by the second process from the second process's message buffer in dependence upon the pointer, the data communications message sent by the first process.
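
      Both versions of the patent revolve around the same trick: a message buffer is established for every process that will run on the node before those processes start, so a sender never needs to check whether its peer has been initialized. A plain-Python stand-in for that pattern (dictionaries standing in for the shared-memory region; names invented for the example, not the patented implementation):

        N_PROCS = 4
        shared_region = {pid: [] for pid in range(N_PROCS)}  # one buffer per future process

        def send(dst_pid, message):
            # No initialization check: the destination buffer exists even if the
            # destination process has not started yet.
            shared_region[dst_pid].append(message)

        def on_init(pid):
            # A newly started process first retrieves the pointer to its own
            # buffer, then drains messages that arrived before it existed.
            my_buffer = shared_region[pid]
            return list(my_buffer)

        send(2, "hello, sent before process 2 initialized")
        print(on_init(2))   # ['hello, sent before process 2 initialized']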

    18. Mark Hereld | Argonne Leadership Computing Facility

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Hereld Manager, Visualization and Data Analysis Mark Hereld Argonne National Laboratory 9700 South Cass Avenue Building 240 - Rm. 4139 Argonne, IL 60439 630-252-4170 hereld@mcs.anl.gov Mark Hereld is the ALCF's Visualization and Data Analysis Manager. He is also a member of the research staff in Argonne's Mathematics and Computer Science Division and a Senior Fellow of the Computation Institute with a joint appointment at the University of Chicago. His work in understanding simulation on future

    19. Large-eddy simulation of the Rayleigh-Taylor instability on a massively parallel computer

      SciTech Connect (OSTI)

      Amala, P.A.K.

      1995-03-01

      A computational model for the solution of the three-dimensional Navier-Stokes equations is developed. This model includes a turbulence model: a modified Smagorinsky eddy-viscosity with a stochastic backscatter extension. The resultant equations are solved using finite difference techniques: the second-order explicit Lax-Wendroff schemes. This computational model is implemented on a massively parallel computer. Programming models on massively parallel computers are next studied. It is desired to determine the best programming model for the developed computational model. To this end, three different codes are tested on a current massively parallel computer: the CM-5 at Los Alamos. Each code uses a different programming model: one is a data parallel code; the other two are message passing codes. Timing studies are done to determine which method is the fastest. The data parallel approach turns out to be the fastest method on the CM-5 by at least an order of magnitude. The resultant code is then used to study a current problem of interest to the computational fluid dynamics community. This is the Rayleigh-Taylor instability. The Lax-Wendroff methods handle shocks and sharp interfaces poorly. To this end, the Rayleigh-Taylor linear analysis is modified to include a smoothed interface. The linear growth rate problem is then investigated. Finally, the problem of the randomly perturbed interface is examined. Stochastic backscatter breaks the symmetry of the stationary unstable interface and generates a mixing layer growing at the experimentally observed rate. 115 refs., 51 figs., 19 tabs.
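
      For reference, the second-order explicit Lax-Wendroff scheme named above has, for the model linear advection equation u_t + a u_x = 0, the familiar one-step form (the thesis applies its multidimensional analogue to the Navier-Stokes system):

        u_j^{n+1} = u_j^n - \frac{a\,\Delta t}{2\,\Delta x}\bigl(u_{j+1}^n - u_{j-1}^n\bigr) + \frac{a^2\,\Delta t^2}{2\,\Delta x^2}\bigl(u_{j+1}^n - 2u_j^n + u_{j-1}^n\bigr)

      The last term supplies second-order accuracy in time, but the scheme is not monotone and oscillates at discontinuities; that is the shock- and interface-handling weakness the abstract works around by smoothing the interface in the linear analysis.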

    20. Energy and cost analysis of a solar-hydrogen combined heat and power system for remote power supply using a computer simulation

      SciTech Connect (OSTI)

      Shabani, Bahman; Andrews, John; Watkins, Simon

      2010-01-15

      A simulation program, based on Visual Pascal, for sizing and techno-economic analysis of the performance of solar-hydrogen combined heat and power systems for remote applications is described. The accuracy of the submodels is checked by comparing the real performances of the system's components obtained from experimental measurements with model outputs. The use of the heat generated by the PEM fuel cell, and any unused excess hydrogen, is investigated for hot water production or space heating while the solar-hydrogen system is supplying electricity. A 5 kWh daily demand profile and the solar radiation profile of Melbourne have been used in a case study to investigate the typical techno-economic characteristics of the system to supply a remote household. The simulation shows that by harnessing both thermal load and excess hydrogen it is possible to increase the average yearly energy efficiency of the fuel cell in the solar-hydrogen system from just below 40% up to about 80% in both heat and power generation (based on the high heating value of hydrogen). The fuel cell in the system is conventionally sized to meet the peak of the demand profile. However, an economic optimisation analysis illustrates that installing a larger fuel cell could lead to up to a 15% reduction in the unit cost of the electricity to an average of just below 90 c/kWh over the assessment period of 30 years. Further, for an economically optimal size of the fuel cell, nearly a half the yearly energy demand for hot water of the remote household could be supplied by heat recovery from the fuel cell and utilising unused hydrogen in the exit stream. Such a system could then complement a conventional solar water heating system by providing the boosting energy (usually in the order of 40% of the total) normally obtained from gas or electricity. (author)

    1. Aggregating job exit statuses of a plurality of compute nodes executing a parallel application

      DOE Patents [OSTI]

      Aho, Michael E.; Attinella, John E.; Gooding, Thomas M.; Mundy, Michael B.

      2015-07-21

      Aggregating job exit statuses of a plurality of compute nodes executing a parallel application, including: identifying a subset of compute nodes in the parallel computer to execute the parallel application; selecting one compute node in the subset of compute nodes in the parallel computer as a job leader compute node; initiating execution of the parallel application on the subset of compute nodes; receiving an exit status from each compute node in the subset of compute nodes, where the exit status for each compute node includes information describing execution of some portion of the parallel application by the compute node; aggregating each exit status from each compute node in the subset of compute nodes; and sending an aggregated exit status for the subset of compute nodes in the parallel computer.
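
      As an illustration of the aggregation step alone (plain Python with invented names, not the patented mechanism), the job leader might fold per-node exit statuses into one summary like this:

        def aggregate(statuses):
            """statuses: mapping of node id -> (exit code, detail string)."""
            worst = max(code for code, _ in statuses.values())
            failed = sorted(n for n, (code, _) in statuses.items() if code != 0)
            return {"exit_code": worst, "nodes": len(statuses), "failed_nodes": failed}

        statuses = {0: (0, "ok"), 1: (0, "ok"), 2: (1, "task failed on rank 17")}
        print(aggregate(statuses))
        # {'exit_code': 1, 'nodes': 3, 'failed_nodes': [2]}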

    2. Broadcasting collective operation contributions throughout a parallel computer

      DOE Patents [OSTI]

      Faraj, Ahmad

      2012-02-21

      Methods, systems, and products are disclosed for broadcasting collective operation contributions throughout a parallel computer. The parallel computer includes a plurality of compute nodes connected together through a data communications network. Each compute node has a plurality of processors for use in collective parallel operations on the parallel computer. Broadcasting collective operation contributions throughout a parallel computer according to embodiments of the present invention includes: transmitting, by each processor on each compute node, that processor's collective operation contribution to the other processors on that compute node using intra-node communications; and transmitting on a designated network link, by each processor on each compute node according to a serial processor transmission sequence, that processor's collective operation contribution to the other processors on the other compute nodes using inter-node communications.

    3. Pacing a data transfer operation between compute nodes on a parallel computer

      DOE Patents [OSTI]

      Blocksome, Michael A.

      2011-09-13

      Methods, systems, and products are disclosed for pacing a data transfer between compute nodes on a parallel computer that include: transferring, by an origin compute node, a chunk of an application message to a target compute node; sending, by the origin compute node, a pacing request to a target direct memory access (`DMA`) engine on the target compute node using a remote get DMA operation; determining, by the origin compute node, whether a pacing response to the pacing request has been received from the target DMA engine; and transferring, by the origin compute node, a next chunk of the application message if the pacing response to the pacing request has been received from the target DMA engine.

    4. Cognitive Computing for Security.

      SciTech Connect (OSTI)

      Debenedictis, Erik; Rothganger, Fredrick; Aimone, James Bradley; Marinella, Matthew; Evans, Brian Robert; Warrender, Christina E.; Mickel, Patrick

      2015-12-01

      Final report for Cognitive Computing for Security LDRD 165613. It reports on the development of a hybrid general-purpose/neuromorphic computer architecture, with an emphasis on potential implementation with memristors.

    5. Computers in Commercial Buildings

      U.S. Energy Information Administration (EIA) Indexed Site

      Government-owned buildings of all types had, on average, more than one computer per employee (1,104 computers per thousand employees). They also had a fairly high ratio of...

    6. Computers for Learning

      Broader source: Energy.gov [DOE]

      Through Executive Order 12999, the Computers for Learning Program was established to provide Federal agencies a quick and easy system for donating excess and surplus computer equipment to schools...

    7. developing-compute-efficient

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Developing Compute-efficient, Quality Models with LS-PrePost 3 on the TRACC Cluster Oct. ... with an emphasis on applying these capabilities to build computationally efficient models. ...

    8. Computational Methods for Analyzing Fluid Flow Dynamics from Digital Imagery

      SciTech Connect (OSTI)

      Luttman, A.

      2012-03-30

      The long-term goal of this work is to perform computational dynamics analysis and quantify uncertainty from vector fields computed directly from measured data. Global analysis based on observed spatiotemporal evolution is performed using an objective function built from expected physics and informed scientific priors, variational optimization to compute vector fields from measured data, and transport analysis proceeding from observations and priors. A mathematical formulation for computing flow fields is set up, and the minimizer of the resulting problem is computed. An application to oceanic flow based on sea surface temperature is presented.
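
      The abstract describes the pipeline only at a high level. A classical scheme with the same variational structure (a brightness-constancy data term plus a smoothness prior) is Horn-Schunck optical flow, sketched here as a stand-in for the report's physics-informed objective:

        import numpy as np

        def local_mean(a):
            """Four-neighbour average with edge padding."""
            p = np.pad(a, 1, mode="edge")
            return (p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:]) / 4.0

        def horn_schunck(f0, f1, alpha=1.0, iters=200):
            """Estimate a dense flow field (u, v) between frames f0 and f1."""
            f0 = f0.astype(float)
            f1 = f1.astype(float)
            fy, fx = np.gradient(f0)      # spatial gradients (rows, columns)
            ft = f1 - f0                  # temporal derivative
            u = np.zeros_like(f0)
            v = np.zeros_like(f0)
            for _ in range(iters):
                ub, vb = local_mean(u), local_mean(v)
                # Jacobi-style update minimizing data term + alpha^2 * smoothness
                common = (fx * ub + fy * vb + ft) / (alpha ** 2 + fx ** 2 + fy ** 2)
                u = ub - fx * common
                v = vb - fy * common
            return u, v

        # Tiny synthetic test: shift a square blob one pixel to the right
        f0 = np.zeros((32, 32))
        f0[12:20, 12:20] = 1.0
        f1 = np.roll(f0, 1, axis=1)
        u, v = horn_schunck(f0, f1)
        print(u.mean(), v.mean())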

    9. Fermilab | Science at Fermilab | Computing | Grid Computing

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      which would collect more data than any computing center in existence could process. ... consortium grid called Open Science Grid, so they initiated a project known as FermiGrid. ...

    10. Advanced Scientific Computing Research

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Advanced Scientific Computing Research: discovering, developing, and deploying computational and networking capabilities to analyze, model, simulate, and predict complex phenomena important to the Department of Energy. Contacts: Pieter Swart, (505) 665-9437; Pat McCormick, (505) 665-0201; Dave Higdon, (505) 667-2091. Fulfilling the potential of emerging computing systems and architectures beyond today's tools and techniques to deliver

    11. Computers-BSA.ppt

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Computers, Electronics and Electrical Equipment (2010 MECS): Manufacturing Energy and Carbon Footprint for the Computers, Electronics and Electrical Equipment Sector (NAICS 334, 335). Energy use data source: 2010 EIA MECS (with adjustments). Footprint last revised: February 2014.

    12. Advanced Scientific Computing Research (ASCR) Homepage | U.S...

      Office of Science (SC) Website

      Edison Dedication: Users are invited to make heavy use of the new computer as ... computing, including the need for a new scientific workflow. ...

    13. Nuclear Arms Control R&D Consortium includes Los Alamos

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      A consortium led by the University of Michigan that includes LANL as ...

    14. Python and computer vision

      SciTech Connect (OSTI)

      Doak, J. E.; Prasad, Lakshman

      2002-01-01

      This paper discusses the use of Python in a computer vision (CV) project. We begin by providing background information on the specific approach to CV employed by the project. This includes a brief discussion of Constrained Delaunay Triangulation (CDT), the Chordal Axis Transform (CAT), shape feature extraction and syntactic characterization, and normalization of strings representing objects. (The terms 'object' and 'blob' are used interchangeably, both referring to an entity extracted from an image.) The rest of the paper focuses on the use of Python in three critical areas: (1) interactions with a MySQL database, (2) rapid prototyping of algorithms, and (3) gluing together all components of the project including existing C and C++ modules. For (1), we provide a schema definition and discuss how the various tables interact to represent objects in the database as tree structures. (2) focuses on an algorithm to create a hierarchical representation of an object, given its string representation, and an algorithm to match unknown objects against objects in a database. And finally, (3) discusses the use of Boost.Python to interact with the pre-existing C and C++ code that creates the CDTs and CATs, performs shape feature extraction and syntactic characterization, and normalizes object strings. The paper concludes with a vision of the future use of Python for the CV project.

    15. Computing and Computational Sciences Directorate - Information Technology

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      The Computational Sciences and Engineering Division (CSED) is ORNL's premier source of basic and applied research in the field of data sciences and knowledge discovery. CSED's science agenda is focused on research and development related to knowledge discovery enabled by the explosive growth in the availability, size, and variability of dynamic and disparate data sources. This science agenda encompasses data sciences as well as advanced modeling and

    16. Computing and Computational Sciences Directorate - Information Technology

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Information Technology (IT) at ORNL serves a diverse community of stakeholders and interests. From everyday operations like email and telecommunications to institutional cluster computing and high bandwidth networking, IT at ORNL is responsible for planning and executing a coordinated strategy that ensures cost-effective, state-of-the-art computing capabilities for research and development. ORNL IT delivers leading-edge products to users in a risk-managed portfolio of

    17. Mathematical and Computational Epidemiology

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Mathematical and Computational Epidemiology (MCEpi), Los Alamos National Laboratory. Research areas include agent-based modeling, mixing patterns and social networks, mathematical epidemiology, social internet research, and uncertainty quantification. Quantifying model uncertainty in agent-based simulations for

    18. BNL ATLAS Grid Computing

      ScienceCinema (OSTI)

      Michael Ernst

      2010-01-08

      As the sole Tier-1 computing facility for ATLAS in the United States and the largest ATLAS computing center worldwide, Brookhaven provides a large portion of the overall computing resources for U.S. collaborators and serves as the central hub for storing,

    19. Computing environment logbook

      DOE Patents [OSTI]

      Osbourn, Gordon C; Bouchard, Ann M

      2012-09-18

      A computing environment logbook logs events occurring within a computing environment. The events are displayed as a history of past events within the logbook of the computing environment. The logbook provides search functionality to search through the history of past events to find one or more selected past events, and further, enables an undo of the one or more selected past events.
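
      A minimal sketch of the log/search/undo cycle described above, with invented structure (the patent's 'valid state' bookkeeping is not modeled): each logged event carries its own inverse action, and undoing selected events replays those inverses newest-first.

        class Logbook:
            def __init__(self):
                self.history = []                 # (description, undo_callable)

            def log(self, description, undo):
                self.history.append((description, undo))

            def search(self, keyword):
                return [i for i, (d, _) in enumerate(self.history) if keyword in d]

            def undo(self, indices):
                for i in sorted(indices, reverse=True):   # newest first
                    _description, undo = self.history.pop(i)
                    undo()

        env = {}
        book = Logbook()
        env["x"] = 1
        book.log("set x=1", lambda: env.pop("x"))
        env["y"] = 2
        book.log("set y=2", lambda: env.pop("y"))
        book.undo(book.search("x"))
        print(env)    # {'y': 2}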

    20. Scalasca | Argonne Leadership Computing Facility

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Scalasca appears in the ALCF's performance and profiling tools documentation for the BG/Q computing resource, alongside TAU, HPCToolkit, mpiP, gprof, Darshan, PAPI, and Openspeedshop.

    1. Projects | Argonne Leadership Computing Facility

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Projects: bgclang Compiler (Hal Finkel); Cobalt Scheduler (Bill Allcock, Paul Rich, Brian Toonen, Tom Uram); GLEAN: Scalable In Situ Analysis and I/O Acceleration on Leadership Computing Systems (Michael E. Papka, Venkat Vishwanath, Mark Hereld, Preeti Malakar, Joe Insley, Silvio Rizzi, Tom Uram); Petrel: Data Management and Sharing Pilot (Ian Foster, Michael E. Papka, Bill Allcock, Ben Allen, Rachana Ananthakrishnan, Lukasz Lacinski); The Swift Parallel Scripting Language for ALCF Systems (Michael Wilde,

    2. MADNESS | Argonne Leadership Computing Facility

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      MADNESS Overview: MADNESS is a numerical tool kit used to solve integral differential equations using multi-resolution analysis and a low-rank separation representation. MADNESS can solve multi-dimensional equations, currently up

    3. Development of probabilistic multimedia multipathway computer codes.

      SciTech Connect (OSTI)

      Yu, C.; LePoire, D.; Gnanapragasam, E.; Arnish, J.; Kamboj, S.; Biwer, B. M.; Cheng, J.-J.; Zielen, A. J.; Chen, S. Y.; Mo, T.; Abu-Eid, R.; Thaggard, M.; Sallo, A., III.; Peterson, H., Jr.; Williams, W. A.; Environmental Assessment; NRC; EM

      2002-01-01

      The deterministic multimedia dose/risk assessment codes RESRAD and RESRAD-BUILD have been widely used for many years for evaluation of sites contaminated with residual radioactive materials. The RESRAD code applies to the cleanup of sites (soils) and the RESRAD-BUILD code applies to the cleanup of buildings and structures. This work describes the procedure used to enhance the deterministic RESRAD and RESRAD-BUILD codes for probabilistic dose analysis. A six-step procedure was used in developing default parameter distributions and the probabilistic analysis modules. These six steps include (1) listing and categorizing parameters; (2) ranking parameters; (3) developing parameter distributions; (4) testing parameter distributions for probabilistic analysis; (5) developing probabilistic software modules; and (6) testing probabilistic modules and integrated codes. The procedures used can be applied to the development of other multimedia probabilistic codes. The probabilistic versions of RESRAD and RESRAD-BUILD codes provide tools for studying the uncertainty in dose assessment caused by uncertain input parameters. The parameter distribution data collected in this work can also be applied to other multimedia assessment tasks and multimedia computer codes.

    4. VISTA - computational tools for comparative genomics

      SciTech Connect (OSTI)

      Frazer, Kelly A.; Pachter, Lior; Poliakov, Alexander; Rubin,Edward M.; Dubchak, Inna

      2004-01-01

      Comparison of DNA sequences from different species is a fundamental method for identifying functional elements in genomes. Here we describe the VISTA family of tools created to assist biologists in carrying out this task. Our first VISTA server at http://www-gsd.lbl.gov/VISTA/ was launched in the summer of 2000 and was designed to align long genomic sequences and visualize these alignments with associated functional annotations. Currently the VISTA site includes multiple comparative genomics tools and provides users with rich capabilities to browse pre-computed whole-genome alignments of large vertebrate genomes and other groups of organisms with VISTA Browser, submit their own sequences of interest to several VISTA servers for various types of comparative analysis, and obtain detailed comparative analysis results for a set of cardiovascular genes. We illustrate the capabilities of the VISTA site by the analysis of a 180 kilobase (kb) interval on human chromosome 5 that encodes for the kinesin family member 3A (KIF3A) protein.

    5. Computer virus information update CIAC-2301

      SciTech Connect (OSTI)

      Orvis, W.J.

      1994-01-15

      While CIAC periodically issues bulletins about specific computer viruses, these bulletins do not cover all the computer viruses that affect desktop computers. The purpose of this document is to identify most of the known viruses for the MS-DOS and Macintosh platforms and give an overview of the effects of each virus. The authors also include information on some Windows, Atari, and Amiga viruses. This document is revised periodically as new virus information becomes available. This document replaces all earlier versions of the CIAC Computer Virus Information Update. The date on the front cover indicates the date on which the information in this document was extracted from CIAC's Virus database.

    6. Solar Energy Education. Reader, Part II. Sun story. [Includes...

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Solar Energy Education. Reader, Part II: Sun story. Includes glossary. ...

    7. Natural Gas Delivered to Consumers in California (Including Vehicle...

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Natural Gas Delivered to Consumers in California (Including Vehicle Fuel) (Million Cubic Feet). Monthly data table by year (Jan-Dec). ...

    8. Should Title 24 Ventilation Requirements Be Amended to include...

      Office of Scientific and Technical Information (OSTI)

      Should Title 24 Ventilation Requirements Be Amended to Include an Indoor Air Quality Procedure? ...

    9. Microfluidic devices and methods including porous polymer monoliths...

      Office of Scientific and Technical Information (OSTI)

      Microfluidic devices and methods including porous polymer monoliths. ...

    10. Microfluidic devices and methods including porous polymer monoliths...

      Office of Scientific and Technical Information (OSTI)

      Microfluidic devices and methods including porous polymer monoliths. ...

    11. Newport News in Review, ch. 47, segment includes TEDF groundbreaking...

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Newport News in Review, ch. 47; segment includes TEDF groundbreaking event: https://www.jlab.org/news/articles/newport-news-review-ch-47-segment-includes-tedf-groundbreaking-event

    12. Property:Number of Plants included in Capacity Estimate | Open...

      Open Energy Info (EERE)

      Property: Number of Plants included in Capacity Estimate. Property Type: Number.

    13. Property:Number of Plants Included in Planned Estimate | Open...

      Open Energy Info (EERE)

      Property: Number of Plants Included in Planned Estimate. Property Type: String. Description: Number of...

    14. FEMP Expands ESPC ENABLE Program to Include More Energy Conservation...

      Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

      FEMP Expands ESPC ENABLE Program to Include More Energy Conservation Measures. November 13, 2013...

    15. Natural Gas Delivered to Consumers in Minnesota (Including Vehicle...

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Natural Gas Delivered to Consumers in Minnesota (Including Vehicle Fuel) (Million Cubic Feet). Monthly data table by year (Jan-Dec). ...

    16. computational-hydaulics-march-30

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Computational Hydraulics and Aerodynamics using STAR-CCM+ for CFD Analysis, March 30-31, 2011, Argonne, Illinois. Dr. Steven Lottes. A training course in the use of computational hydraulics and aerodynamics CFD software using CD-adapco's STAR-CCM+ for analysis was held at TRACC from March 30-31, 2011. The course assumed a basic knowledge of fluid mechanics and made extensive use of hands-on tutorials.

    17. An Analysis of Nuclear Fuel Burnup in the AGR 1 TRISO Fuel Experiment Using Gamma Spectrometry, Mass Spectrometry, and Computational Simulation Techniques

      SciTech Connect (OSTI)

      Jason M. Harp; Paul A. Demkowicz; Phillip L. Winston; James W. Sterbentz

      2014-10-01

      AGR 1 was the first in a series of experiments designed to test US TRISO fuel under high temperature gas-cooled reactor irradiation conditions. This experiment was irradiated in the Advanced Test Reactor (ATR) at Idaho National Laboratory (INL) and is currently undergoing post irradiation examination (PIE) at INL and Oak Ridge National Laboratory. One component of the AGR 1 PIE is the experimental evaluation of the burnup of the fuel by two separate techniques. Gamma spectrometry was used to non destructively evaluate the burnup of all 72 of the TRISO fuel compacts that comprised the AGR 1 experiment. Two methods for evaluating burnup by gamma spectrometry were developed, one based on the Cs 137 activity and the other based on the ratio of Cs 134 and Cs 137 activities. Burnup values determined from both methods compared well with the values predicted from simulations. The highest measured burnup was 20.1% FIMA for the direct method and 20.0% FIMA for the ratio method (compared to 19.56% FIMA from simulations). An advantage of the ratio method is that the burnup of the cylindrical fuel compacts can be determined in small (2.5 mm) axial increments and an axial burnup profile can be produced. Destructive chemical analysis by inductively coupled plasma mass spectrometry (ICP-MS) was then performed on selected compacts that were representative of the expected range of fuel burnups in the experiment to compare with the burnup values determined by gamma spectrometry. The compacts analyzed by mass spectrometry had a burnup range of 19.3% FIMA to 10.7% FIMA. The mass spectrometry evaluation of burnup for the four compacts agreed well with the gamma spectrometry burnup evaluations and the expected burnup from simulation. For all four compacts analyzed by mass spectrometry, the maximum range in the three experimentally determined values and the predicted value was 6% or less. The results confirm the accuracy of the nondestructive burnup evaluation from gamma spectrometry for TRISO
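
      The record above describes backing burnup out of measured cesium activities; the sketch below is a minimal illustration of how %FIMA could be derived from a single Cs-137 activity. All inputs (activity, cooling time, heavy-metal atom count) and the simplifications (in-pile decay ignored, a single fission yield) are hypothetical assumptions, not values or methods from the AGR-1 work.

          # A minimal sketch, assuming hypothetical inputs: back out %FIMA from a
          # single Cs-137 activity, ignoring in-pile decay and using one fission yield.
          import math

          LAMBDA_CS137 = math.log(2) / (30.08 * 365.25 * 24 * 3600)  # decay constant (1/s)
          YIELD_CS137 = 0.062           # approx. cumulative yield per fission (U-235 thermal)

          def burnup_fima(activity_bq, heavy_metal_atoms, cooling_time_s):
              """Percent fissions per initial metal atom from a Cs-137 activity."""
              a0 = activity_bq * math.exp(LAMBDA_CS137 * cooling_time_s)  # decay-correct
              n_cs137 = a0 / LAMBDA_CS137          # Cs-137 atoms at end of irradiation
              fissions = n_cs137 / YIELD_CS137
              return 100.0 * fissions / heavy_metal_atoms

          # Hypothetical compact: 1 GBq measured after two years of cooling,
          # 5e21 initial heavy-metal atoms.
          print(f"{burnup_fima(1.0e9, 5.0e21, 2 * 365.25 * 24 * 3600):.2f} %FIMA")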

    18. An analysis of nuclear fuel burnup in the AGR-1 TRISO fuel experiment using gamma spectrometry, mass spectrometry, and computational simulation techniques

      DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

      Harp, Jason M.; Demkowicz, Paul A.; Winston, Philip L.; Sterbentz, James W.

      2014-09-03

      AGR 1 was the first in a series of experiments designed to test US TRISO fuel under high temperature gas-cooled reactor irradiation conditions. This experiment was irradiated in the Advanced Test Reactor (ATR) at Idaho National Laboratory (INL) and is currently undergoing post irradiation examination (PIE) at INL and Oak Ridge National Laboratory. One component of the AGR 1 PIE is the experimental evaluation of the burnup of the fuel by two separate techniques. Gamma spectrometry was used to non destructively evaluate the burnup of all 72 of the TRISO fuel compacts that comprised the AGR 1 experiment. Two methods for evaluating burnup by gamma spectrometry were developed, one based on the Cs 137 activity and the other based on the ratio of Cs 134 and Cs 137 activities. Burnup values determined from both methods compared well with the values predicted from simulations. The highest measured burnup was 20.1% FIMA for the direct method and 20.0% FIMA for the ratio method (compared to 19.56% FIMA from simulations). An advantage of the ratio method is that the burnup of the cylindrical fuel compacts can be determined in small (2.5 mm) axial increments and an axial burnup profile can be produced. Destructive chemical analysis by inductively coupled plasma mass spectrometry (ICP-MS) was then performed on selected compacts that were representative of the expected range of fuel burnups in the experiment to compare with the burnup values determined by gamma spectrometry. The compacts analyzed by mass spectrometry had a burnup range of 19.3% FIMA to 10.7% FIMA. The mass spectrometry evaluation of burnup for the four compacts agreed well with the gamma spectrometry burnup evaluations and the expected burnup from simulation. For all four compacts analyzed by mass spectrometry, the maximum range in the three experimentally determined values and the predicted value was 6% or less. Furthermore, the results confirm the accuracy of the nondestructive burnup evaluation from gamma

    19. Predictive Dynamic Security Assessment through Advanced Computing

      SciTech Connect (OSTI)

      Huang, Zhenyu; Diao, Ruisheng; Jin, Shuangshuang; Chen, Yousu

      2014-11-30

      Traditional dynamic security assessment is limited by several factors and thus falls short in providing real-time information to be predictive for power system operation. These factors include the steady-state assumption of current operating points, static transfer limits, and low computational speed. This paper addresses these factors and frames predictive dynamic security assessment. The primary objective of predictive dynamic security assessment is to enhance the functionality and computational process of dynamic security assessment through the use of high-speed phasor measurements and the application of advanced computing technologies for faster-than-real-time simulation. This paper presents algorithms, computing platforms, and simulation frameworks that constitute the predictive dynamic security assessment capability. Examples of phasor application and fast computation for dynamic security assessment are included to demonstrate the feasibility and speed enhancement for real-time applications.
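
      As a flavor of the faster-than-real-time simulation the paper motivates, the sketch below integrates a single-machine classical swing equation from a hypothetical phasor-measured initial state. The model constants and the instability criterion are illustrative assumptions, not the paper's algorithms or platforms.

          # A minimal sketch, not the paper's code: forward-Euler integration of a
          # single-machine classical swing equation, seeded by a hypothetical
          # phasor-measured state, to look ahead of real time for large rotor swings.
          import math

          H, D, F0 = 5.0, 1.0, 60.0        # inertia constant (s), damping (pu), nominal Hz
          P_MECH, P_MAX = 0.8, 1.5         # mechanical input and electrical maximum (pu)

          def max_swing(delta0, omega0, t_end, dt=0.001):
              delta, omega = delta0, omega0     # rotor angle (rad), speed deviation (pu)
              peak = delta
              for _ in range(int(t_end / dt)):
                  p_elec = P_MAX * math.sin(delta)
                  delta += 2 * math.pi * F0 * omega * dt
                  omega += (P_MECH - p_elec - D * omega) / (2 * H) * dt
                  peak = max(peak, delta)
              return peak

          # A predicted swing approaching pi radians would be flagged as insecure.
          print("peak rotor angle (rad):", round(max_swing(0.6, 0.01, t_end=5.0), 3))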

    20. Modular Environment for Graph Research and Analysis with a Persistent

      Energy Science and Technology Software Center (OSTI)

      2009-11-18

      The MEGRAPHS software package provides a front-end to graphs and vectors residing on special-purpose computing resources. It allows these data objects to be instantiated, destroyed, and manipulated. A variety of primitives needed for typical graph analyses are provided. An example program illustrating how MEGRAPHS can be used to implement a PageRank computation is included in the distribution. The MEGRAPHS software package is targeted towards developers of graph algorithms. Programmers using MEGRAPHS would write graph analysis programs in terms of high-level graph and vector operations. These computations are transparently executed on the Cray XMT compute nodes.
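
      The MEGRAPHS API itself is not shown in this record, so the sketch below implements the PageRank computation mentioned above in plain Python; the high-level graph and vector operations a MEGRAPHS program would use are replaced here by an edge list and a rank list.

          # A plain-Python PageRank, illustrating the computation the distribution's
          # example program performs; the MEGRAPHS graph/vector API is not shown here.
          def pagerank(edges, n, damping=0.85, iters=50):
              """edges: list of (src, dst) pairs; n: node count. Returns rank list."""
              out_deg = [0] * n
              for s, _ in edges:
                  out_deg[s] += 1
              rank = [1.0 / n] * n
              for _ in range(iters):
                  new = [(1.0 - damping) / n] * n          # teleportation term
                  for s, d in edges:
                      new[d] += damping * rank[s] / out_deg[s]
                  rank = new
              return rank

          # Tiny four-node example graph; every node has at least one out-edge.
          print([round(r, 4) for r in pagerank([(0, 1), (1, 2), (2, 0), (3, 2)], n=4)])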

    1. Initial explorations of ARM processors for scientific computing...

      Office of Scientific and Technical Information (OSTI)

      DOE Contract Number: AC02-07CH11359 Resource Type: Conference Resource Relation: Conference: 15th International Workshop on Advanced Computing and Analysis Techniques in Physics ...

    2. The National Energy Research Scientific Computing Center: Forty...

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      The National Energy Research Scientific Computing Center: Forty Years of Supercomputing ... discovery has been evident in both simulation and data analysis for many years. ...

    3. Adjoints and Large Data Sets in Computational Fluid Dynamics...

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Speaker: Oana Marin, Postdoctoral Appointee, MCS. Optimal flow control and stability analysis are some of the fields within Computational Fluid Dynamics (CFD) that...

    4. A compute-Efficient Bitmap Compression Index for Database Applications

      Energy Science and Technology Software Center (OSTI)

      2006-01-01

      FastBit: A Compute-Efficient Bitmap Compression Index for Database Applications. The Word-Aligned Hybrid (WAH) bitmap compression method and data structure is highly efficient for performing search and retrieval operations on large datasets. The WAH technique is optimized for computational efficiency. The WAH-based bitmap indexing software, called FastBit, is particularly appropriate to infrequently varying databases, including those found in the on-line analytical processing (OLAP) industry. Some commercial database products already include some version of a bitmap index, which could possibly be replaced by the WAH bitmap compression techniques for potentially large operational speedup. Experimental results show performance improvements by an average factor of 10 over bitmap technology used by industry, as well as increased efficiencies in constructing compressed bitmaps. FastBit can be used as a stand-alone index, or integrated into a database system. When integrated into a database system, this technique may be particularly useful for real-time business analysis applications. Additional FastBit applications may include efficient real-time exploration of scientific models, such as climate and combustion simulations, to minimize search time for analysis and subsequent data visualization. FastBit was proven theoretically to be time-optimal because it provides a search time proportional to the number of elements selected by the index.
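
      A minimal sketch of WAH-style word-aligned run-length encoding follows, assuming 32-bit words (31 payload bits per group, fill words carrying a 30-bit run count). It illustrates the compression idea only; it is not FastBit's actual implementation.

          # A simplified WAH-style encoder assuming 32-bit words: 31 payload bits per
          # group; runs of all-0 or all-1 groups become fill words (MSB set, next bit
          # is the fill value, low 30 bits hold the run length). Not FastBit itself.
          def wah_encode(bits):
              """bits: a string of '0'/'1'. Returns a list of 32-bit words as ints."""
              groups = [bits[i:i + 31].ljust(31, '0') for i in range(0, len(bits), 31)]
              words, i = [], 0
              while i < len(groups):
                  g = groups[i]
                  if g in ('0' * 31, '1' * 31):            # run of identical groups
                      run = 1
                      while i + run < len(groups) and groups[i + run] == g:
                          run += 1
                      fill = 1 if g[0] == '1' else 0
                      words.append((1 << 31) | (fill << 30) | run)
                      i += run
                  else:                                    # mixed group: literal word
                      words.append(int(g, 2))
                      i += 1
              return words

          bitmap = '1' * 93 + '0110' * 8 + '0' * 62        # 187 bits, mostly uniform
          print([hex(w) for w in wah_encode(bitmap)])      # fill, literal, fill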

    5. Monitoring system including an electronic sensor platform and an interrogation transceiver

      DOE Patents [OSTI]

      Kinzel, Robert L.; Sheets, Larry R.

      2003-09-23

      A wireless monitoring system suitable for a wide range of remote data collection applications. The system includes at least one Electronic Sensor Platform (ESP), an Interrogator Transceiver (IT) and a general purpose host computer. The ESP functions as a remote data collector from a number of digital and analog sensors located therein. The host computer provides for data logging, testing, demonstration, installation checkout, and troubleshooting of the system. The IT transmits signals from one or more ESP's to the host computer and from the host computer to the ESP's. The IT and host computer may be powered by a common power supply, and each ESP is individually powered by a battery. This monitoring system has an extremely low power consumption, which allows remote operation of the ESP for long periods; provides authenticated message traffic over a wireless network; utilizes state-of-health and tamper sensors to ensure that the ESP is secure and undamaged; has a robust ESP housing suitable for use in radiation environments; and is low in cost. With one base station (host computer and interrogator transceiver), multiple ESP's may be controlled at a single monitoring site.

    6. Sandia Energy - High Performance Computing

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      High Performance Computing Home Energy Research Advanced Scientific Computing Research (ASCR) High Performance Computing ...

    7. Computing and Computational Sciences Directorate - Computer Science and

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Mathematics Division - Meetings and Workshops Awards Awards Night 2012 R&D LEADERSHIP, DIRECTOR LEVEL Winner: Brian Worley Organization: Computational Sciences & Engineering Division Citation: For exemplary program leadership of a successful and growing collaboration with the Department of Defense and for successfully initiating and providing oversight of a new data program with the Centers for Medicare and Medicaid Services. TECHNICAL SUPPORT Winner: Michael Matheson Organization:

    8. Method for transferring data from an unsecured computer to a secured computer

      DOE Patents [OSTI]

      Nilsen, Curt A.

      1997-01-01

      A method is described for transferring data from an unsecured computer to a secured computer. The method includes transmitting the data and then receiving the data. Next, the data is retransmitted and rereceived. Then, it is determined if errors were introduced when the data was transmitted by the unsecured computer or received by the secured computer. Similarly, it is determined if errors were introduced when the data was retransmitted by the unsecured computer or rereceived by the secured computer. A warning signal is emitted from a warning device coupled to the secured computer if (i) an error was introduced when the data was transmitted or received, and (ii) an error was introduced when the data was retransmitted or rereceived.
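
      A minimal sketch of the send-twice-and-compare idea described above, with SHA-256 checksums standing in for whatever error detection the patent specifies; the function names and acceptance policy shown are illustrative assumptions drawn from the abstract.

          # A minimal sketch of the send-twice-and-compare idea; SHA-256 checksums
          # stand in for the patent's error detection, and the policy shown (warn
          # only when both passes are corrupted) follows the abstract above.
          import hashlib

          def digest(data: bytes) -> str:
              return hashlib.sha256(data).hexdigest()

          def transfer(sent1, received1, sent2, received2):
              err1 = digest(sent1) != digest(received1)    # error in first pass?
              err2 = digest(sent2) != digest(received2)    # error in second pass?
              if err1 and err2:
                  print("WARNING: errors detected in both transmissions")
              elif err1 or err2:
                  print("accepted clean copy from pass", 1 if err2 else 2)
              else:
                  print("both passes clean; data accepted")

          transfer(b"payload", b"payload", b"payload", b"payloXd")  # pass 2 corrupted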

    9. Natural Gas Delivered to Consumers in New Mexico (Including Vehicle...

      U.S. Energy Information Administration (EIA) Indexed Site

      Mexico (Including Vehicle Fuel) (Million Cubic Feet) Natural Gas Delivered to Consumers in New Mexico (Including Vehicle Fuel) (Million Cubic Feet) Year Jan Feb Mar Apr May Jun Jul ...

    10. SWS Online Tool now includes Multifamily Content, plus a How...

      Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

      SWS Online Tool now includes Multifamily Content, plus a How-To Webinar SWS Online Tool now includes Multifamily Content, plus a How-To Webinar This announcement contains ...

    11. Computational method and system for modeling, analyzing, and optimizing DNA amplification and synthesis

      DOE Patents [OSTI]

      Vandersall, Jennifer A.; Gardner, Shea N.; Clague, David S.

      2010-05-04

      A computational method and computer-based system of modeling DNA synthesis for the design and interpretation of PCR amplification, parallel DNA synthesis, and microarray chip analysis. The method and system include modules that address the bioinformatics, kinetics, and thermodynamics of DNA amplification and synthesis. Specifically, the steps of DNA selection, as well as the kinetics and thermodynamics of DNA hybridization and extensions, are addressed, which enable the optimization of the processing and the prediction of the products as a function of DNA sequence, mixing protocol, time, temperature and concentration of species.
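
      One small piece of the thermodynamics such a system must model is primer melting temperature. The sketch below uses the Wallace rule, Tm = 2(A+T) + 4(G+C) in degrees Celsius for short oligos, a standard textbook approximation that is not necessarily the method used in the patent.

          # Primer melting-temperature estimate via the Wallace rule,
          # Tm = 2(A+T) + 4(G+C) (degrees C, short oligos). A textbook
          # approximation, not the patent's method.
          def melting_temp(seq: str) -> float:
              seq = seq.upper()
              at = seq.count('A') + seq.count('T')
              gc = seq.count('G') + seq.count('C')
              return 2.0 * at + 4.0 * gc

          print(melting_temp("ACGTACGTACGTACGTACGT"), "degrees C")  # a 20-mer primer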

    12. Scientific Cloud Computing Misconceptions

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Scientific Cloud Computing Misconceptions Scientific Cloud Computing Misconceptions July 1, 2011 Part of the Magellan project was to understand both the possibilities and the limitations of cloud computing in the pursuit of science. At a recent conference, Magellan investigator Shane Canon outlined some persistent misconceptions about doing science in the cloud - and what Magellan has taught us about them. » Read the ISGTW story. » Download the slides (PDF, 4.1MB)

    13. Edison Electrifies Scientific Computing

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Edison Electrifies Scientific Computing Edison Electrifies Scientific Computing NERSC Flips Switch on New Flagship Supercomputer January 31, 2014 Contact: Margie Wylie, mwylie@lbl.gov, +1 510 486 7421 The National Energy Research Scientific Computing (NERSC) Center recently accepted "Edison," a new flagship supercomputer designed for scientific productivity. Named in honor of American inventor Thomas Alva Edison, the Cray XC30 will be dedicated in a ceremony held at the Department of

    14. Energy Aware Computing

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Energy Aware Computing Dynamic Frequency Scaling One means to lower the energy required to compute is to reduce the power usage on a node. One way to accomplish this is by lowering the frequency at which the CPU operates. However, reducing the clock speed increases the time to solution, creating a potential tradeoff. NERSC continues to examine how such methods impact its operations and its
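
      The tradeoff described above can be made concrete with the common first-order model P = C V^2 f, with voltage scaled roughly alongside frequency: lowering frequency cuts power faster than it stretches runtime, so energy to solution can fall. The numbers below are illustrative assumptions, not NERSC measurements.

          # Illustrative numbers only: the first-order model P = C * V^2 * f with
          # voltage scaled alongside frequency. Energy to solution can fall even
          # though the run takes longer.
          def energy_to_solution(work_cycles, freq_ghz, v_rel, c_rel=1.0):
              power = c_rel * v_rel ** 2 * freq_ghz        # relative dynamic power
              time_s = work_cycles / (freq_ghz * 1e9)      # time to solution
              return power * time_s, time_s

          for f, v in [(2.4, 1.00), (1.2, 0.85)]:          # nominal vs. scaled down
              e, t = energy_to_solution(1e12, f, v)
              print(f"{f} GHz: time {t:.0f} s, relative energy {e:.0f}")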

    15. NERSC Computer Security

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Security NERSC Computer Security NERSC computer security efforts are aimed at protecting NERSC systems and its users' intellectual property from unauthorized access or modification. Among NERSC's security goals are: 1. To protect NERSC systems from unauthorized access. 2. To prevent the interruption of services to its users. 3. To prevent misuse or abuse of NERSC resources. Security Incidents If you think there has been a computer security incident you should contact NERSC Security as soon as

    16. Computer Architecture Lab

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Computer Architecture Lab The goal of the Computer Architecture Laboratory (CAL) is to engage in research and development of energy-efficient and effective processor and memory architectures for DOE's Exascale program. CAL coordinates hardware architecture R&D activities across the DOE. CAL is a joint NNSA/SC activity involving Sandia National Laboratories (CAL-Sandia) and

    17. The Magellan Final Report on Cloud Computing

      SciTech Connect (OSTI)

      Coghlan, Susan; Yelick, Katherine

      2011-12-21

      The goal of Magellan, a project funded through the U.S. Department of Energy (DOE) Office of Advanced Scientific Computing Research (ASCR), was to investigate the potential role of cloud computing in addressing the computing needs for the DOE Office of Science (SC), particularly related to serving the needs of mid-range computing and future data-intensive computing workloads. A set of research questions was formed to probe various aspects of cloud computing in terms of performance, usability, and cost. To address these questions, a distributed testbed infrastructure was deployed at the Argonne Leadership Computing Facility (ALCF) and the National Energy Research Scientific Computing Center (NERSC). The testbed was designed to be flexible and capable enough to explore a variety of computing models and hardware design points in order to understand the impact for various scientific applications. During the project, the testbed also served as a valuable resource to application scientists. Applications from a diverse set of projects such as MG-RAST (a metagenomics analysis server), the Joint Genome Institute, the STAR experiment at the Relativistic Heavy Ion Collider, and the Laser Interferometer Gravitational Wave Observatory (LIGO), were used by the Magellan project for benchmarking within the cloud, but the project teams were also able to accomplish important production science utilizing the Magellan cloud resources.

    18. Personal Computer Inventory System

      Energy Science and Technology Software Center (OSTI)

      1993-10-04

      PCIS is a database software system that is used to maintain a personal computer hardware and software inventory, track transfers of hardware and software, and provide reports.

    19. Applied Computer Science

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Applied Computer Science Innovative co-design of applications, algorithms, and architectures in order to enable scientific simulations at extreme scale Leadership Group Leader ...

    20. Mira Early Science Program | Argonne Leadership Computing Facility

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      HPC architectures. Together, the 16 projects span a diverse range of scientific fields, numerical methods, programming models, and computational approaches. The latter include...

    1. Excessing of Computers Used for Unclassified Controlled Information...

      Broader source: Energy.gov (indexed) [DOE]

      of approximately 800 information systems, including up to 115,000 personal computers, many powerful supercomputers, numerous servers, and a broad array of related...

    2. 60 Years of Computing | Department of Energy

      Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

      60 Years of Computing

    3. Microsoft PowerPoint - Microbial Genome and Metagenome Analysis Case Study (NERSC Workshop - May 7-8, 2009).ppt [Compatibility

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Microbial Genome & Metagenome Analysis: Computational Challenges Natalia N. Ivanova * Nikos C. Kyrpides * Victor M. Markowitz ** * Genome Biology Program, Joint Genome Institute ** Lawrence Berkeley National Lab Microbial genome & metagenome analysis General aims Understand microbial life Apply to agriculture, bioremediation, biofuels, human health Specific aims include Predict biochemistry & physiology of organisms based on genome sequence Explain known

    4. Thermal Hydraulic Computer Code System.

      Energy Science and Technology Software Center (OSTI)

      1999-07-16

      Version 00 RELAP5 was developed to describe the behavior of a light water reactor (LWR) subjected to postulated transients such as loss of coolant from large or small pipe breaks, pump failures, etc. RELAP5 calculates fluid conditions such as velocities, pressures, densities, qualities, temperatures; thermal conditions such as surface temperatures, temperature distributions, heat fluxes; pump conditions; trip conditions; reactor power and reactivity from point reactor kinetics; and control system variables. In addition to reactor applications, the program can be applied to transient analysis of other thermal-hydraulic systems with water as the fluid. This package contains RELAP5/MOD1/029 for CDC computers and RELAP5/MOD1/025 for VAX or IBM mainframe computers.

    5. Interstitial computing : utilizing spare cycles on supercomputers.

      SciTech Connect (OSTI)

      Clearwater, Scott Harvey; Kleban, Stephen David

      2003-06-01

      This paper presents an analysis of utilizing unused cycles on supercomputers through the use of many small jobs. What we call 'interstitial computing' is important to supercomputer centers for both productivity and political reasons. Interstitial computing makes use of the fact that small jobs are more or less fungible consumers of compute cycles that are more efficient for bin packing than the typical jobs on a supercomputer. An important feature of interstitial computing is that it does not have a significant impact on the makespan of native jobs on the machine. Also, a facility can obtain higher utilizations that may otherwise be possible only with more complicated schemes or with very long wait times. The key contribution of this paper is that it provides theoretical and empirical guidelines for users and administrators for how currently unused supercomputer cycles may be exploited. We find that interstitial computing is a more effective means for increasing machine utilization than increasing native job run times or size.

    6. Software and High Performance Computing

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Computational physics, computer science, applied mathematics, statistics and the ... a fully operational supercomputing environment Providing Current Capability Scientific ...

    7. CDF computing and event data models

      SciTech Connect (OSTI)

      Snider, F.D.; /Fermilab

      2005-12-01

      The authors discuss the computing systems, usage patterns and event data models used to analyze Run II data from the CDF-II experiment at the Tevatron collider. A critical analysis of the current implementation and design reveals some of the stronger and weaker elements of the system, which serve as lessons for future experiments. They highlight a need to maintain simplicity for users in the face of an increasingly complex computing environment.

    8. Low latency, high bandwidth data communications between compute nodes in a parallel computer

      DOE Patents [OSTI]

      Archer, Charles J.; Blocksome, Michael A.; Ratterman, Joseph D.; Smith, Brian E.

      2010-11-02

      Methods, parallel computers, and computer program products are disclosed for low latency, high bandwidth data communications between compute nodes in a parallel computer. Embodiments include receiving, by an origin direct memory access (`DMA`) engine of an origin compute node, data for transfer to a target compute node; sending, by the origin DMA engine of the origin compute node to a target DMA engine on the target compute node, a request to send (`RTS`) message; transferring, by the origin DMA engine, a predetermined portion of the data to the target compute node using a memory FIFO operation; determining, by the origin DMA engine, whether an acknowledgement of the RTS message has been received from the target DMA engine; if an acknowledgement of the RTS message has not been received, transferring, by the origin DMA engine, another predetermined portion of the data to the target compute node using a memory FIFO operation; and if the acknowledgement of the RTS message has been received by the origin DMA engine, transferring, by the origin DMA engine, any remaining portion of the data to the target compute node using a direct put operation.
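
      A minimal sketch of the transfer logic in that abstract follows, with the DMA hardware replaced by print stubs; every helper name here is hypothetical and stands in for the engine operations the patent describes.

          # A minimal sketch of the transfer logic in the abstract, with the DMA
          # hardware replaced by print stubs; every helper here is hypothetical.
          def send_rts():
              print("RTS sent to target DMA engine")

          def memory_fifo_send(portion: bytes):
              print(f"memory FIFO transfer: {len(portion)} bytes")

          def direct_put(remainder: bytes):
              print(f"direct put: {len(remainder)} bytes")

          def origin_dma_send(data: bytes, portion_size: int, ack_arrived):
              """Stream fixed portions until the RTS ack arrives, then direct-put the rest."""
              send_rts()
              sent = 0
              while sent < len(data):
                  if ack_arrived():                        # ack in: finish in one shot
                      direct_put(data[sent:])
                      return
                  memory_fifo_send(data[sent:sent + portion_size])
                  sent += portion_size

          polls = iter([False, False, True])               # pretend the ack arrives late
          origin_dma_send(b"x" * 1024, 256, lambda: next(polls))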

    9. ELECTRONIC DIGITAL COMPUTER

      DOE Patents [OSTI]

      Stone, J.J. Jr.; Bettis, E.S.; Mann, E.R.

      1957-10-01

      The electronic digital computer is designed to solve systems involving a plurality of simultaneous linear equations. The computer can solve a system which converges rather rapidly when using Von Seidel's method of approximation and performs the summations required for solving for the unknown terms by a method of successive approximations.
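
      The successive-approximation scheme the record names (Gauss-Seidel iteration, called Von Seidel's method above) is easy to show in software. The sketch below solves a small made-up diagonally dominant system, the kind that converges rather rapidly as the abstract notes.

          # Gauss-Seidel iteration for a small diagonally dominant system;
          # the example matrix is made up for illustration.
          def gauss_seidel(a, b, iters=25):
              n = len(b)
              x = [0.0] * n
              for _ in range(iters):
                  for i in range(n):
                      s = sum(a[i][j] * x[j] for j in range(n) if j != i)
                      x[i] = (b[i] - s) / a[i][i]          # use freshest values in place
              return x

          A = [[4.0, 1.0, 0.0], [1.0, 5.0, 2.0], [0.0, 2.0, 6.0]]
          b = [9.0, 20.0, 22.0]
          print([round(v, 4) for v in gauss_seidel(A, b)])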

    10. Computer Processor Allocator

      Energy Science and Technology Software Center (OSTI)

      2004-03-01

      The Compute Processor Allocator (CPA) provides an efficient and reliable mechanism for managing and allotting processors in a massively parallel (MP) computer. It maintains information in a database on the health, configuration, and allocation of each processor. This persistent information is factored into each allocation decision. The CPA runs in a distributed fashion to avoid a single point of failure.

    11. Reach and get capability in a computing environment

      DOE Patents [OSTI]

      Bouchard, Ann M.; Osbourn, Gordon C.

      2012-06-05

      A reach and get technique includes invoking a reach command from a reach location within a computing environment. A user can then navigate to an object within the computing environment and invoke a get command on the object. In response to invoking the get command, the computing environment is automatically navigated back to the reach location and the object copied into the reach location.

    12. Percentage of Total Natural Gas Commercial Deliveries included in Prices

      U.S. Energy Information Administration (EIA) Indexed Site

      Data series: City Gate Price; Residential Price; Percentage of Total Residential Deliveries included in Prices; Commercial Price; Percentage of Total Commercial Deliveries included in Prices; Industrial Price; Percentage of Total Industrial Deliveries included in Prices; Electric Power Price. Period: Monthly or Annual. Show Data By: Data Series or Area (Jan-16 through Jun-16). U.S.

    13. DOE Releases Request for Information on Critical Materials, Including Fuel

      Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

      Cell Platinum Group Metal Catalysts | Department of Energy Releases Request for Information on Critical Materials, Including Fuel Cell Platinum Group Metal Catalysts DOE Releases Request for Information on Critical Materials, Including Fuel Cell Platinum Group Metal Catalysts February 17, 2016 - 3:03pm Addthis The U.S. Department of Energy (DOE) has released a Request for Information (RFI) on critical materials in the energy sector, including fuel cell platinum group metal catalysts. The RFI

    14. Microfluidic devices and methods including porous polymer monoliths

      Office of Scientific and Technical Information (OSTI)

      (Patent) | SciTech Connect Microfluidic devices and methods including porous polymer monoliths Citation Details In-Document Search Title: Microfluidic devices and methods including porous polymer monoliths Microfluidic devices and methods including porous polymer monoliths are described. Polymerization techniques may be used to generate porous polymer monoliths having pores defined by a liquid component of a fluid mixture. The fluid mixture may contain iniferters and the resulting porous

    15. Percentage of Total Natural Gas Industrial Deliveries included in Prices

      U.S. Energy Information Administration (EIA) Indexed Site

      Data series: Pipeline and Distribution Use Price; City Gate Price; Residential Price; Percentage of Total Residential Deliveries included in Prices; Commercial Price; Percentage of Total Commercial Deliveries included in Prices; Industrial Price; Percentage of Total Industrial Deliveries included in Prices; Vehicle Fuel Price; Electric Power Price. Period: Monthly or Annual. Show Data By: Data Series or Area, from 2010.

    16. Percentage of Total Natural Gas Industrial Deliveries included in Prices

      U.S. Energy Information Administration (EIA) Indexed Site

      Data series: City Gate Price; Residential Price; Percentage of Total Residential Deliveries included in Prices; Commercial Price; Percentage of Total Commercial Deliveries included in Prices; Industrial Price; Percentage of Total Industrial Deliveries included in Prices; Electric Power Price. Period: Monthly or Annual. Show Data By: Data Series or Area (Jan-16 through Jun-16). U.S.

    17. Percentage of Total Natural Gas Residential Deliveries included in Prices

      U.S. Energy Information Administration (EIA) Indexed Site

      Data series: City Gate Price; Residential Price; Percentage of Total Residential Deliveries included in Prices; Commercial Price; Percentage of Total Commercial Deliveries included in Prices; Industrial Price; Percentage of Total Industrial Deliveries included in Prices; Electric Power Price. Period: Monthly or Annual. Show Data By: Data Series or Area (Jan-16 through Jun-16). U.S.

    18. Traffic information computing platform for big data

      SciTech Connect (OSTI)

      Duan, Zongtao; Li, Ying; Zheng, Xibin; Liu, Yan; Dai, Jiting; Kang, Jun

      2014-10-06

      Big data environments create the data conditions for improving the quality of traffic information services. The target of this article is to construct a traffic information computing platform for the big data environment. Through in-depth analysis of the connotation and technology characteristics of big data and traffic information services, a distributed traffic atomic information computing platform architecture is proposed. Under the big data environment, this type of traffic atomic information computing architecture helps guarantee safe and efficient traffic operation, and more intelligent and personalized traffic information services can be provided to traffic information users.

    19. Indirection and computer security.

      SciTech Connect (OSTI)

      Berg, Michael J.

      2011-09-01

      The discipline of computer science is built on indirection. David Wheeler famously said, 'All problems in computer science can be solved by another layer of indirection. But that usually will create another problem'. We propose that every computer security vulnerability is yet another problem created by the indirections in system designs and that focusing on the indirections involved is a better way to design, evaluate, and compare security solutions. We are not proposing that indirection be avoided when solving problems, but that understanding the relationships between indirections and vulnerabilities is key to securing computer systems. Using this perspective, we analyze common vulnerabilities that plague our computer systems, consider the effectiveness of currently available security solutions, and propose several new security solutions.

    20. Introduction to Small-Scale Wind Energy Systems (Including RETScreen...

      Open Energy Info (EERE)

      Case Study) (Webinar) Tool Summary Name: Introduction to Small-Scale Wind Energy Systems (Including RETScreen Case Study) (Webinar) Focus...

    1. Natural Gas Deliveries to Commercial Consumers (Including Vehicle...

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      California (Million Cubic Feet) Natural Gas Deliveries to Commercial Consumers (Including Vehicle Fuel through 1996) in California (Million Cubic Feet) Year Jan Feb Mar Apr May Jun ...

    2. Numerical simulations for low energy nuclear reactions including...

      Office of Scientific and Technical Information (OSTI)

      Numerical simulations for low energy nuclear reactions including direct channels to validate statistical models Citation Details In-Document Search Title: Numerical simulations for ...

    3. U-182: Microsoft Windows Includes Some Invalid Certificates

      Broader source: Energy.gov [DOE]

      The operating system includes some invalid intermediate certificates. The vulnerability is due to the certificate authorities and not the operating system itself.

    4. Numerical simulations for low energy nuclear reactions including...

      Office of Scientific and Technical Information (OSTI)

      Numerical simulations for low energy nuclear reactions including direct channels to ... Visit OSTI to utilize additional information resources in energy science and technology. A ...

    5. Microfluidic devices and methods including porous polymer monoliths...

      Office of Scientific and Technical Information (OSTI)

      The fluid mixture may contain iniferters and the resulting porous polymer monolith may include surfaces terminated with iniferter species. Capture molecules may then be grafted to ...

    6. DOE Releases Request for Information on Critical Materials, Including...

      Broader source: Energy.gov (indexed) [DOE]

      including fuel cell platinum group metal catalysts. ... on issues related to the demand, supply, opportunities for ... Announces Second RFI on Rare Earth Metals DOE Announces RFI ...

    7. Including Retro-Commissioning in Federal Energy Savings Performance Contracts

      Broader source: Energy.gov [DOE]

      Document describes guidance on the importance of, and steps for, including retro-commissioning in federal energy savings performance contracts (ESPCs).

    8. Measuring and modeling the lifetime of nitrous oxide including...

      Office of Scientific and Technical Information (OSTI)

      Published Article: Measuring and modeling the lifetime of nitrous oxide including its variability: NITROUS OXIDE AND ITS CHANGING LIFETIME Title: Measuring and ...

    9. Introduction to Small-Scale Photovoltaic Systems (Including RETScreen...

      Open Energy Info (EERE)

      Photovoltaic Systems (Including RETScreen Case Study) (Webinar) Tool Summary Name: Introduction to Small-Scale Photovoltaic Systems...

    10. Comparison of Joint Modeling Approaches Including Eulerian Sliding...

      Office of Scientific and Technical Information (OSTI)

      Eulerian Sliding Interfaces Citation Details In-Document Search Title: Comparison of Joint Modeling Approaches Including Eulerian Sliding Interfaces You are accessing a ...

    11. Identifying failure in a tree network of a parallel computer

      DOE Patents [OSTI]

      Archer, Charles J.; Pinnow, Kurt W.; Wallenfelt, Brian P.

      2010-08-24

      Methods, parallel computers, and products are provided for identifying failure in a tree network of a parallel computer. The parallel computer includes one or more processing sets including an I/O node and a plurality of compute nodes. For each processing set embodiments include selecting a set of test compute nodes, the test compute nodes being a subset of the compute nodes of the processing set; measuring the performance of the I/O node of the processing set; measuring the performance of the selected set of test compute nodes; calculating a current test value in dependence upon the measured performance of the I/O node of the processing set, the measured performance of the set of test compute nodes, and a predetermined value for I/O node performance; and comparing the current test value with a predetermined tree performance threshold. If the current test value is below the predetermined tree performance threshold, embodiments include selecting another set of test compute nodes. If the current test value is not below the predetermined tree performance threshold, embodiments include selecting from the test compute nodes one or more potential problem nodes and testing individually potential problem nodes and links to potential problem nodes.

    12. Computing contingency statistics in parallel.

      SciTech Connect (OSTI)

      Bennett, Janine Camille; Thompson, David; Pebay, Philippe Pierre

      2010-09-01

      Statistical analysis is typically used to reduce the dimensionality of and infer meaning from data. A key challenge of any statistical analysis package aimed at large-scale, distributed data is to address the orthogonal issues of parallel scalability and numerical stability. Many statistical techniques, e.g., descriptive statistics or principal component analysis, are based on moments and co-moments and, using robust online update formulas, can be computed in an embarrassingly parallel manner, amenable to a map-reduce style implementation. In this paper we focus on contingency tables, through which numerous derived statistics such as joint and marginal probability, point-wise mutual information, information entropy, and chi-square independence statistics can be directly obtained. However, contingency tables can become large as data size increases, requiring a correspondingly large amount of communication between processors. This potential increase in communication prevents optimal parallel speedup and is the main difference with moment-based statistics where the amount of inter-processor communication is independent of data size. Here we present the design trade-offs which we made to implement the computation of contingency tables in parallel. We also study the parallel speedup and scalability properties of our open source implementation. In particular, we observe optimal speed-up and scalability when the contingency statistics are used in their appropriate context, namely, when the data input is not quasi-diffuse.
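
      The paper's approach can be sketched as: build per-process contingency tables independently, merge them (the communication step whose cost grows with table size), then derive statistics such as chi-square from the merged table. In the toy version below, the "processes" are ordinary lists and the statistic follows the standard chi-square formula.

          # A toy version of the approach: per-"process" contingency tables built
          # independently, merged (the communication step), then a chi-square
          # statistic computed from the merged table via the standard formula.
          from collections import Counter
          from itertools import product

          def merge(tables):
              total = Counter()
              for t in tables:
                  total.update(t)                          # inter-process reduction
              return total

          def chi_square(table):
              n = sum(table.values())
              xs = {x for x, _ in table}
              ys = {y for _, y in table}
              row = {x: sum(c for (a, _), c in table.items() if a == x) for x in xs}
              col = {y: sum(c for (_, b), c in table.items() if b == y) for y in ys}
              return sum((table.get((x, y), 0) - row[x] * col[y] / n) ** 2
                         / (row[x] * col[y] / n) for x, y in product(xs, ys))

          t1 = Counter([("a", 0), ("a", 0), ("b", 1)])     # "process" 1
          t2 = Counter([("a", 0), ("b", 1), ("b", 0)])     # "process" 2
          print(chi_square(merge([t1, t2])))               # 3.0 for this toy data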

    13. Computing and Computational Sciences Directorate - Information Technology

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Oak Ridge Climate Change Science Institute Jim Hack Oak Ridge National Laboratory (ORNL) has formed the Oak Ridge Climate Change Science Institute (ORCCSI) that will develop and execute programs for the multi-agency, multi-disciplinary climate change research partnerships at ORNL. Led by Director Jim Hack and Deputy Director Dave Bader, the Institute will integrate scientific projects in modeling, observations, and experimentation with ORNL's powerful computational and informatics capabilities

    14. Mobile computing device configured to compute irradiance, glint, and glare of the sun

      DOE Patents [OSTI]

      Gupta, Vipin P; Ho, Clifford K; Khalsa, Siri Sahib

      2014-03-11

      Described herein are technologies pertaining to computing the solar irradiance distribution on a surface of a receiver in a concentrating solar power system or glint/glare emitted from a reflective entity. A mobile computing device includes at least one camera that captures images of the Sun and the entity of interest, wherein the images have pluralities of pixels having respective pluralities of intensity values. Based upon the intensity values of the pixels in the respective images, the solar irradiance distribution on the surface of the entity or glint/glare corresponding to the entity is computed by the mobile computing device.

    15. Solar Energy Education. Reader, Part II. Sun story. [Includes glossary

      SciTech Connect (OSTI)

      Not Available

      1981-05-01

      Magazine articles which focus on the subject of solar energy are presented. The booklet prepared is the second of a four part series of the Solar Energy Reader. Excerpts from the magazines include the history of solar energy, mythology and tales, and selected poetry on the sun. A glossary of energy related terms is included. (BCS)

    16. Microfluidic devices and methods including porous polymer monoliths

      DOE Patents [OSTI]

      Hatch, Anson V.; Sommer, Gregory j.; Singh, Anup K.; Wang, Ying-Chih; Abhyankar, Vinay

      2015-12-01

      Microfluidic devices and methods including porous polymer monoliths are described. Polymerization techniques may be used to generate porous polymer monoliths having pores defined by a liquid component of a fluid mixture. The fluid mixture may contain iniferters and the resulting porous polymer monolith may include surfaces terminated with iniferter species. Capture molecules may then be grafted to the monolith pores.

    17. Microfluidic devices and methods including porous polymer monoliths

      DOE Patents [OSTI]

      Hatch, Anson V; Sommer, Gregory J; Singh, Anup K; Wang, Ying-Chih; Abhyankar, Vinay V

      2014-04-22

      Microfluidic devices and methods including porous polymer monoliths are described. Polymerization techniques may be used to generate porous polymer monoliths having pores defined by a liquid component of a fluid mixture. The fluid mixture may contain iniferters and the resulting porous polymer monolith may include surfaces terminated with iniferter species. Capture molecules may then be grafted to the monolith pores.

    18. Articles which include chevron film cooling holes, and related processes

      DOE Patents [OSTI]

      Bunker, Ronald Scott; Lacy, Benjamin Paul

      2014-12-09

      An article is described, including an inner surface which can be exposed to a first fluid; an inlet; and an outer surface spaced from the inner surface, which can be exposed to a hotter second fluid. The article further includes at least one row or other pattern of passage holes. Each passage hole includes an inlet bore extending through the substrate from the inlet at the inner surface to a passage hole-exit proximate to the outer surface, with the inlet bore terminating in a chevron outlet adjacent the hole-exit. The chevron outlet includes a pair of wing troughs having a common surface region between them. The common surface region includes a valley which is adjacent the hole-exit; and a plateau adjacent the valley. The article can be an airfoil. Related methods for preparing the passage holes are also described.

    19. Sandia Energy - New Project Is the ACME of Computer Science to...

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Project Is the ACME of Computer Science to Address Climate Change. New...

    20. Method and system for benchmarking computers

      DOE Patents [OSTI]

      Gustafson, John L.

      1993-09-14

      A testing system and method for benchmarking computer systems. The system includes a store containing a scalable set of tasks to be performed to produce a solution in ever-increasing degrees of resolution as a larger number of the tasks are performed. A timing and control module allots to each computer a fixed benchmarking interval in which to perform the stored tasks. Means are provided for determining, after completion of the benchmarking interval, the degree of progress through the scalable set of tasks and for producing a benchmarking rating relating to the degree of progress for each computer.
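
      A minimal sketch of the idea in the patent: fix the time budget rather than the problem size, and rate the machine by how far it progresses through an ever-refining task set. The scalable task below, a trapezoid-rule estimate of pi at doubling resolution, is an illustrative stand-in for the patent's stored task set.

          # Fix the time budget, not the problem size, and rate the machine by
          # progress through an ever-refining task set.
          import time

          def benchmark(interval_s=1.0):
              deadline = time.perf_counter() + interval_s
              level, estimate = 0, 0.0
              while time.perf_counter() < deadline:
                  level += 1
                  n = 2 ** level                           # each level doubles resolution
                  h = 1.0 / n
                  # integral of 4/(1+x^2) on [0,1] equals pi; 3.0 = (f(0)+f(1))/2
                  estimate = h * (3.0 + sum(4.0 / (1.0 + (i * h) ** 2)
                                            for i in range(1, n)))
              return level, estimate                       # progress is the rating

          level, pi_est = benchmark()
          print(f"resolution levels completed: {level}, pi ~= {pi_est:.6f}")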

    1. Identifying logical planes formed of compute nodes of a subcommunicator in a parallel computer

      DOE Patents [OSTI]

      Davis, Kristan D.; Faraj, Daniel A.

      2016-03-01

      In a parallel computer, a plurality of logical planes formed of compute nodes of a subcommunicator may be identified by: for each compute node of the subcommunicator and for a number of dimensions beginning with a first dimension: establishing, by a plane building node, in a positive direction of the first dimension, all logical planes that include the plane building node and compute nodes of the subcommunicator in a positive direction of a second dimension, where the second dimension is orthogonal to the first dimension; and establishing, by the plane building node, in a negative direction of the first dimension, all logical planes that include the plane building node and compute nodes of the subcommunicator in the positive direction of the second dimension.

    2. Convergence: Computing and communications

      SciTech Connect (OSTI)

      Catlett, C.

      1996-12-31

      This paper highlights the operations of the National Center for Supercomputing Applications (NCSA). NCSA is developing and implementing a national strategy to create, use, and transfer advanced computing and communication tools and information technologies for science, engineering, education, and business. The primary focus of the presentation is historical and expected growth in the computing capacity, personal computer performance, and Internet and WorldWide Web sites. Data are presented to show changes over the past 10 to 20 years in these areas. 5 figs., 4 tabs.

    3. Turbomachine injection nozzle including a coolant delivery system

      DOE Patents [OSTI]

      Zuo, Baifang (Simpsonville, SC)

      2012-02-14

      An injection nozzle for a turbomachine includes a main body having a first end portion that extends to a second end portion defining an exterior wall having an outer surface. A plurality of fluid delivery tubes extend through the main body. Each of the plurality of fluid delivery tubes includes a first fluid inlet for receiving a first fluid, a second fluid inlet for receiving a second fluid and an outlet. The injection nozzle further includes a coolant delivery system arranged within the main body. The coolant delivery system guides a coolant along at least one of a portion of the exterior wall and around the plurality of fluid delivery tubes.

    4. Computing and Computational Sciences Directorate - National Center for

      Broader source: All U.S. Department of Energy (DOE) Office Webpages

      Computational Sciences. ORNL divisions and centers include Computational Sciences and Engineering, Computer Science and Mathematics, Information Technology, the Joint Institute for Computational Sciences, and the National Center for Computational Sciences. ORNL Research Areas: Neutron Sciences, Biological Systems

    5. Removal of mineral matter including pyrite from coal

      DOE Patents [OSTI]

      Reggel, Leslie; Raymond, Raphael; Blaustein, Bernard D.

      1976-11-23

      Mineral matter, including pyrite, is removed from coal by treatment of the coal with aqueous alkali at a temperature of about 175.degree. to 350.degree. C, followed by acidification with strong acid.

    6. Example Retro-Commissioning Scope of Work to Include Services...

      Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

      Services as Part of an ESPC Investment-Grade Audit Example Retro-Commissioning Scope of Work to Include Services as Part of an ESPC Investment-Grade Audit Document offers a ...

    7. Natural Gas Deliveries to Commercial Consumers (Including Vehicle...

      U.S. Energy Information Administration (EIA) Indexed Site

      Mexico (Million Cubic Feet) Natural Gas Deliveries to Commercial Consumers (Including Vehicle Fuel through 1996) in New Mexico (Million Cubic Feet) Year Jan Feb Mar Apr May Jun Jul ...

    8. Including Retro-Commissioning in Federal Energy Savings Performance...

      Energy Savers [EERE]

      the cost of the survey. Developing a detailed scope of work and a fixed price for this work is important to eliminate risk to the Agency and the ESCo. Including a detailed scope...

    9. T-603: Mac OS X Includes Some Invalid Comodo Certificates

      Office of Energy Efficiency and Renewable Energy (EERE)

      The operating system includes some invalid certificates. The vulnerability is due to the invalid certificates and not the operating system itself. Other browsers, applications, and operating systems are affected.

    10. Energy Department Expands Gas Gouging Reporting System to Include...

      Energy Savers [EERE]

      Expands Gas Gouging Reporting System to Include 1-800 Number: 1-800-244-3301 Energy Department Expands Gas ... of reformulated gasoline in storage and is already helping to ...

    11. Natural Gas Delivered to Consumers in Ohio (Including Vehicle...

      U.S. Energy Information Administration (EIA) Indexed Site

      Natural Gas Delivered to Consumers in Ohio (Including Vehicle Fuel) (Million Cubic Feet) Year Jan Feb Mar Apr May Jun Jul Aug Sep Oct Nov Dec 2001 136,340 110,078 102,451 66,525 ...

    12. Hybrid powertrain system including smooth shifting automated transmission

      DOE Patents [OSTI]

      Beaty, Kevin D.; Nellums, Richard A.

      2006-10-24

      A powertrain system is provided that includes a prime mover and a change-gear transmission having an input, at least two gear ratios, and an output. The powertrain system also includes a power shunt configured to route power applied to the transmission by one of the input and the output to the other one of the input and the output. A transmission system and a method for facilitating shifting of a transmission system are also provided.

    13. Prevention of Harassment (Including Sexual Harassment) and Retaliation

      Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

      Policy Statement | Department of Energy Prevention of Harassment (Including Sexual Harassment) and Retaliation Policy Statement Prevention of Harassment (Including Sexual Harassment) and Retaliation Policy Statement DOE Policy for Preventing Harassment in the Workplace Harassment Policy July 2011.pdf (112.57 KB) More Documents & Publications Policy Statement on Equal Employment Opportunity, Harassment, and Retaliation Equal Employment Opportunity and Diversity Policy Statement VWA-0039 -

    14. DOE Revises its NEPA Regulations, Including Categorical Exclusions |

      Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

      Department of Energy Revises its NEPA Regulations, Including Categorical Exclusions DOE Revises its NEPA Regulations, Including Categorical Exclusions September 30, 2011 - 2:30pm Addthis On September 27, 2011, the Department of Energy (DOE) approved revisions to its National Environmental Policy Act (NEPA) regulations, and on September 28th, submitted the revisions to the Federal Register. The final regulations, which become effective 30 days after publication in the Federal Register, are

    15. Limited Personal Use of Government Office Equipment including Information Technology

      Broader source: Directives, Delegations, and Requirements [Office of Management (MA)]

      2005-01-07

      The Order establishes requirements and assigns responsibilities for employees' limited personal use of Government resources (office equipment and other resources including information technology) within DOE, including NNSA. The Order is required to provide guidance on appropriate and inappropriate uses of Government resources. This Order was certified 04/23/2009 as accurate and continues to be relevant and appropriate for use by the Department. Certified 4-23-09. No cancellation.

    16. Thermodynamic Advantages of Low Temperature Combustion Engines Including

      Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

      the Use of Low Heat Rejection Concepts | Department of Energy Advantages of Low Temperature Combustion Engines Including the Use of Low Heat Rejection Concepts Thermodynamic Advantages of Low Temperature Combustion Engines Including the Use of Low Heat Rejection Concepts Thermodynamic cycle simulation was used to evaluate low temperature combustion in a systematic and sequential fashion relative to the base engine design. deer10_caton.pdf (462.23 KB) More Documents & Publications Boosted HCCI for High

    17. Solar Energy Education. Reader, Part II. Sun story. [Includes glossary]

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      (Technical Report) | SciTech Connect Reader, Part II. Sun story. [Includes glossary] Citation Details In-Document Search Title: Solar Energy Education. Reader, Part II. Sun story. [Includes glossary]

    18. Solar Energy Education. Renewable energy: a background text. [Includes

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      glossary] (Technical Report) | SciTech Connect energy: a background text. [Includes glossary] Citation Details In-Document Search Title: Solar Energy Education. Renewable energy: a background text. [Includes glossary]

    19. An Arbitrary Precision Computation Package

      Energy Science and Technology Software Center (OSTI)

      2003-06-14

      This package permits a scientist to perform computations using an arbitrarily high level of numeric precision (the equivalent of hundreds or even thousands of digits), by making only minor changes to conventional C++ or Fortran-90 source code. This software takes advantage of certain properties of IEEE floating-point arithmetic, together with advanced numeric algorithms, custom data types and operator overloading. Also included in this package is the "Experimental Mathematician's Toolkit", which incorporates many of these facilities into an easy-to-use interactive program.
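
      The package above targets C++ and Fortran-90; as a rough analogue, Python's standard decimal module shows the same idea of dialing precision up to hundreds of digits with only minor changes to conventional code.

          # Python's standard decimal module as a stand-in for the package's
          # custom high-precision types: set the precision, then compute normally.
          from decimal import Decimal, getcontext

          getcontext().prec = 100                          # 100 significant digits

          def sqrt2(iters=10):
              x = Decimal(1)
              for _ in range(iters):
                  x = (x + Decimal(2) / x) / 2             # Newton's iteration
              return x

          print(sqrt2())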

    20. Argonne Leadership Computing Facility

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Argonne Leadership Computing Facility Annual Report 2012. Contents: Director's Message (p. 1); About ALCF (p. 2); Introducing Mira

    1. Quantum steady computation

      SciTech Connect (OSTI)

      Castagnoli, G.

      1991-08-10

      This paper reports that current conceptions of quantum mechanical computers inherit from conventional digital machines two apparently interacting features, machine imperfection and temporal development of the computational process. On account of machine imperfection, the process would become ideally reversible only in the limiting case of zero speed. Therefore the process is irreversible in practice and cannot be considered to be a fundamental quantum one. By giving up classical features and using a linear, reversible and non-sequential representation of the computational process - not realizable in classical machines - the process can be identified with the mathematical form of a quantum steady state. This form of steady quantum computation would seem to have an important bearing on the notion of cognition.

    2. Edison Electrifies Scientific Computing

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      ... Deployment of Edison was made possible in part by funding from DOE's Office of Science and the DARPA High Productivity Computing Systems program. DOE's Office of Science is the ...

    3. Advanced Simulation and Computing

      National Nuclear Security Administration (NNSA)

      NA-ASC-117R-09-Vol.1-Rev.0 Advanced Simulation and Computing PROGRAM PLAN FY09 October 2008 ASC Focal Point Robert Meisner, Director DOE/NNSA NA-121.2 202-586-0908 Program Plan Focal Point for NA-121.2 Njema Frazier DOE/NNSA NA-121.2 202-586-5789 A Publication of the Office of Advanced Simulation & Computing, NNSA Defense Programs. Contents: Executive Summary; I. Introduction

    4. New TRACC Cluster Computer

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      TRACC Cluster Computer With the addition of a new cluster called Zephyr that was made operational in September of this year (2012), TRACC now offers two clusters to choose from: Zephyr and our original cluster, which has now been named Phoenix. Zephyr was acquired from Atipa Technologies, and it is a 92-node system, each node having two 16-core 2.3 GHz AMD processors and 32 GB of memory. See also Computing Resources.

    5. Argonne Leadership Computing Facility

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Argonne National Laboratory | 9700 South Cass Avenue | Argonne, IL 60439 | www.anl.gov | September 2013 alcf_keyfacts_fs_0913 Key facts about the Argonne Leadership Computing Facility User support and services Skilled experts at the ALCF enable researchers to conduct breakthrough science on the Blue Gene system in key ways. Catalysts are computational scientists with domain expertise who work directly with project principal investigators to maximize discovery and reduce time-to-solution.

    6. Applied Computer Science

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Applied Computer Science Innovative co-design of applications, algorithms, and architectures in order to enable scientific simulations at extreme scale. Group Leader: Linn Collins; Deputy Group Leader (Acting): Bryan Lally. Climate modeling visualization: results from a climate simulation computed using the Model for Prediction Across Scales (MPAS) code. This visualization shows the temperature of ocean currents using a green and blue color scale. These colors were

    7. Stencil Computation Optimization

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Stencil Computation Optimization and Auto-tuning on State-of-the-Art Multicore Architectures Kaushik Datta ∗† , Mark Murphy † , Vasily Volkov † , Samuel Williams ∗† , Jonathan Carter ∗ , Leonid Oliker ∗† , David Patterson ∗† , John Shalf ∗ , and Katherine Yelick ∗† ∗ CRD/NERSC, Lawrence Berkeley National Laboratory, Berkeley, CA 94720, USA † Computer Science Division, University of California at Berkeley, Berkeley, CA 94720, USA Abstract Understanding the most
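
      The record's abstract is cut off above, but the kind of kernel such auto-tuning work targets is easy to show: a stencil sweep updates each grid point from a fixed neighborhood of neighbors, and its memory-bound access pattern is what the tuning optimizes. The following generic 5-point Jacobi sweep is an illustration only, not code from the paper.

        #include <vector>

        // One Jacobi sweep of the classic 5-point Laplacian stencil on an n x n grid.
        // Each interior point is replaced by the average of its four neighbors.
        void jacobi_sweep(const std::vector<double>& in, std::vector<double>& out, int n) {
            for (int i = 1; i < n - 1; ++i)
                for (int j = 1; j < n - 1; ++j)
                    out[i * n + j] = 0.25 * (in[(i - 1) * n + j] + in[(i + 1) * n + j] +
                                             in[i * n + j - 1] + in[i * n + j + 1]);
        }

        int main() {
            const int n = 8;
            std::vector<double> a(n * n, 0.0), b(n * n, 0.0);
            a[3 * n + 3] = 1.0;     // point source
            jacobi_sweep(a, b, n);  // one relaxation sweep
        }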

    8. Compute Reservation Request Form

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Compute Reservation Request Form Compute Reservation Request Form Users can request a scheduled reservation of machine resources if their jobs have special needs that cannot be accommodated through the regular batch system. A reservation brings some portion of the machine to a specific user or project for an agreed upon duration. Typically this is used for interactive debugging at scale or real time processing linked to some experiment or event. It is not intended to be used to guarantee fast

    9. Computing | Department of Energy

      Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

      Computing Computing Fun fact: Most systems require air conditioning or chilled water to cool super powerful supercomputers, but the Olympus supercomputer at Pacific Northwest National Laboratory is cooled by the location's 65 degree groundwater. Traditional cooling systems could cost up to $61,000 in electricity each year, but this more efficient setup uses 70 percent less energy. | Photo courtesy of PNNL.

    10. Computational Earth Science

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Computational Earth Science We develop and apply a range of high-performance computational methods and software tools to Earth science projects in support of environmental health, cleaner energy, and national security. Contact: Group Leader Carl Gable; Deputy Group Leader Gilles Bussod. Hari Viswanathan inspects a microfluidic cell used to study the extraction of hydrocarbon fuels from a complex fracture network. EES-16's Subsurface Flow

    11. Computational Modeling | Bioenergy | NREL

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Computational Modeling NREL uses computational modeling to increase the efficiency of biomass conversion by rational design using multiscale modeling, applying theoretical approaches, and testing scientific hypotheses. model of enzymes wrapping on cellulose; colorful circular structures entwined through blue strands Cellulosomes are complexes of protein scaffolds and enzymes that are highly effective in decomposing biomass. This is a snapshot of a coarse-grain model of complex cellulosome

    12. Computational Physics and Methods

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Computational Physics and Methods Performing innovative simulations of physics phenomena on tomorrow's scientific computing platforms. Image: growth and emissivity of a young galaxy hosting a supermassive black hole, as calculated in the cosmological code ENZO and post-processed with the radiative transfer code AURORA. Image: detailed Rayleigh-Taylor turbulence simulation, the largest turbulence simulations to date. Advanced multi-scale modeling; turbulence datasets; density iso-surfaces

    13. Paging memory from random access memory to backing storage in a parallel computer

      DOE Patents [OSTI]

      Archer, Charles J; Blocksome, Michael A; Inglett, Todd A; Ratterman, Joseph D; Smith, Brian E

      2013-05-21

      Paging memory from random access memory (`RAM`) to backing storage in a parallel computer that includes a plurality of compute nodes, including: executing a data processing application on a virtual machine operating system in a virtual machine on a first compute node; providing, by a second compute node, backing storage for the contents of RAM on the first compute node; and swapping, by the virtual machine operating system in the virtual machine on the first compute node, a page of memory from RAM on the first compute node to the backing storage on the second compute node.
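
      A rough sketch of the idea, using plain message passing rather than the virtual machine operating system mechanism the patent actually claims: one rank acts as the backing store, and a page is shipped to it when evicted from RAM on the compute rank. All names, sizes, and the MPI framing here are illustrative assumptions, not the patented implementation.

        #include <mpi.h>
        #include <vector>

        const int PAGE_SIZE = 4096;
        const int TAG_SWAP_OUT = 1;

        // Illustrative eviction: the "first compute node" ships one page of RAM to a
        // "second compute node" that provides backing storage (the patent's terms).
        void swap_out(const std::vector<char>& page, int storage_rank) {
            MPI_Send(page.data(), PAGE_SIZE, MPI_CHAR, storage_rank, TAG_SWAP_OUT,
                     MPI_COMM_WORLD);
        }

        // Illustrative backing store: receive a page and hold it until swap-in.
        void serve_one_page(std::vector<char>& store) {
            store.resize(PAGE_SIZE);
            MPI_Recv(store.data(), PAGE_SIZE, MPI_CHAR, MPI_ANY_SOURCE, TAG_SWAP_OUT,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        }

        int main(int argc, char** argv) {      // run with at least 2 ranks
            MPI_Init(&argc, &argv);
            int rank;
            MPI_Comm_rank(MPI_COMM_WORLD, &rank);
            std::vector<char> page(PAGE_SIZE, 0);
            if (rank == 0) swap_out(page, 1);          // compute node evicts a page
            else if (rank == 1) serve_one_page(page);  // storage node receives it
            MPI_Finalize();
        }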

    14. Intro to computer programming, no computer required! | Argonne...

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      ... "Computational thinking requires you to think in abstractions," said Papka, who spoke to computer science and computer-aided design students at Kaneland High School in Maple Park about ...

    15. Computing and Computational Sciences Directorate - Joint Institute for

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Computational Sciences Joint Institute for Computational Sciences To help realize the full potential of new-generation computers for advancing scientific discovery, the University of Tennessee (UT) and Oak Ridge National Laboratory (ORNL) have created the Joint Institute for Computational Sciences (JICS). JICS combines the experience and expertise in theoretical and computational science and engineering, computer science, and mathematics in these two institutions and focuses these skills on

    16. Comparison of International Energy Intensities across the G7 and other parts of Europe, including Ukraine

      U.S. Energy Information Administration (EIA) Indexed Site

      Comparison of International Energy Intensities across the G7 and other parts of Europe, including Ukraine Elizabeth Sendich November 2014 Independent Statistics & Analysis www.eia.gov U.S. Energy Information Administration Washington, DC 20585 This paper is released to encourage discussion and critical comment. The analysis and conclusions expressed here are those of the authors and not necessarily those of the U.S. Energy Information Administration. WORKING PAPER SERIES November 2014

    17. A model for heterogeneous materials including phase transformations

      SciTech Connect (OSTI)

      Addessio, F.L.; Clements, B.E.; Williams, T.O.

      2005-04-15

      A model is developed for particulate composites, which includes phase transformations in one or all of the constituents. The model is an extension of the method of cells formalism. Representative simulations for a single-phase, brittle particulate (SiC) embedded in a ductile material (Ti), which undergoes a solid-solid phase transformation, are provided. Also, simulations for a tungsten heavy alloy (WHA) are included. In the WHA analyses a particulate composite, composed of tungsten particles embedded in a tungsten-iron-nickel alloy matrix, is modeled. A solid-liquid phase transformation of the matrix material is included in the WHA numerical calculations. The example problems also demonstrate two approaches for generating free energies for the material constituents. Simulations for volumetric compression, uniaxial strain, biaxial strain, and pure shear are used to demonstrate the versatility of the model.

    18. Solar Energy Education. Renewable energy: a background text. [Includes glossary

      SciTech Connect (OSTI)

      Not Available

      1985-01-01

      Some of the most common forms of renewable energy are presented in this textbook for students. The topics include solar energy, wind power, hydroelectric power, biomass, ocean thermal energy, and tidal and geothermal energy. The main emphasis of the text is on the sun and the solar energy that it yields. Discussions on the sun's composition and the relationship between the earth, sun and atmosphere are provided. Insolation, active and passive solar systems, and solar collectors are the subtopics included under solar energy. (BCS)

    19. Methods of producing adsorption media including a metal oxide

      DOE Patents [OSTI]

      Mann, Nicholas R; Tranter, Troy J

      2014-03-04

      Methods of producing a metal oxide are disclosed. The method comprises dissolving a metal salt in a reaction solvent to form a metal salt/reaction solvent solution. The metal salt is converted to a metal oxide and a caustic solution is added to the metal oxide/reaction solvent solution to adjust the pH of the metal oxide/reaction solvent solution to less than approximately 7.0. The metal oxide is precipitated and recovered. A method of producing adsorption media including the metal oxide is also disclosed, as is a precursor of an active component including particles of a metal oxide.

    20. Metal vapor laser including hot electrodes and integral wick

      DOE Patents [OSTI]

      Ault, E.R.; Alger, T.W.

      1995-03-07

      A metal vapor laser, specifically one utilizing copper vapor, is disclosed herein. This laser utilizes a plasma tube assembly including a thermally insulated plasma tube containing a specific metal, e.g., copper, and a buffer gas therein. The laser also utilizes means including hot electrodes located at opposite ends of the plasma tube for electrically exciting the metal vapor and heating its interior to a sufficiently high temperature to cause the metal contained therein to vaporize and for subjecting the vapor to an electrical discharge excitation in order to lase. The laser also utilizes external wicking arrangements, that is, wicking arrangements located outside the plasma tube. 5 figs.

    1. Metal vapor laser including hot electrodes and integral wick

      DOE Patents [OSTI]

      Ault, Earl R.; Alger, Terry W.

      1995-01-01

      A metal vapor laser, specifically one utilizing copper vapor, is disclosed herein. This laser utilizes a plasma tube assembly including a thermally insulated plasma tube containing a specific metal, e.g., copper, and a buffer gas therein. The laser also utilizes means including hot electrodes located at opposite ends of the plasma tube for electrically exciting the metal vapor and heating its interior to a sufficiently high temperature to cause the metal contained therein to vaporize and for subjecting the vapor to an electrical discharge excitation in order to lase. The laser also utilizes external wicking arrangements, that is, wicking arrangements located outside the plasma tube.

    2. DOE Considers Natural Gas Utility Service Options: Proposal Includes

      Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

      30-mile Natural Gas Pipeline from Pasco to Hanford | Department of Energy Considers Natural Gas Utility Service Options: Proposal Includes 30-mile Natural Gas Pipeline from Pasco to Hanford DOE Considers Natural Gas Utility Service Options: Proposal Includes 30-mile Natural Gas Pipeline from Pasco to Hanford January 23, 2012 - 12:00pm Media Contacts Cameron Hardy, DOE, (509) 376-5365, Cameron.Hardy@rl.doe.gov RICHLAND, WASH. - The U.S. Department of Energy (DOE) is considering

    3. Thin film solar cell including a spatially modulated intrinsic layer

      SciTech Connect (OSTI)

      Guha, Subhendu; Yang, Chi-Chung; Ovshinsky, Stanford R.

      1989-03-28

      One or more thin film solar cells in which the intrinsic layer of substantially amorphous semiconductor alloy material thereof includes at least a first band gap portion and a narrower band gap portion. The band gap of the intrinsic layer is spatially graded through a portion of the bulk thickness, said graded portion including a region removed from the intrinsic layer-dopant layer interfaces. The band gap of the intrinsic layer is always less than the band gap of the doped layers. The gradation of the intrinsic layer is effected such that the open circuit voltage and/or the fill factor of the one or plural solar cell structure is enhanced.

    4. Tunable cavity resonator including a plurality of MEMS beams

      DOE Patents [OSTI]

      Peroulis, Dimitrios; Fruehling, Adam; Small, Joshua Azariah; Liu, Xiaoguang; Irshad, Wasim; Arif, Muhammad Shoaib

      2015-10-20

      A tunable cavity resonator includes a substrate, a cap structure, and a tuning assembly. The cap structure extends from the substrate, and at least one of the substrate and the cap structure defines a resonator cavity. The tuning assembly is positioned at least partially within the resonator cavity. The tuning assembly includes a plurality of fixed-fixed MEMS beams configured for controllable movement relative to the substrate between an activated position and a deactivated position in order to tune a resonant frequency of the tunable cavity resonator.

    5. in High Performance Computing Computer System, Cluster, and Networking...

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      iSSH v. Auditd: Intrusion Detection in High Performance Computing Computer System, Cluster, and Networking Summer Institute David Karns, New Mexico State University Katy Protin,...

    6. Applications in Data-Intensive Computing

      SciTech Connect (OSTI)

      Shah, Anuj R.; Adkins, Joshua N.; Baxter, Douglas J.; Cannon, William R.; Chavarría-Miranda, Daniel; Choudhury, Sutanay; Gorton, Ian; Gracio, Deborah K.; Halter, Todd D.; Jaitly, Navdeep; Johnson, John R.; Kouzes, Richard T.; Macduff, Matt C.; Marquez, Andres; Monroe, Matthew E.; Oehmen, Christopher S.; Pike, William A.; Scherrer, Chad; Villa, Oreste; Webb-Robertson, Bobbie-Jo M.; Whitney, Paul D.; Zuljevic, Nino

      2010-04-01

      This book chapter, to be published in Advances in Computers, Volume 78, in 2010, describes applications of data-intensive computing (DIC). This is an invited chapter resulting from a previous publication on DIC. This work summarizes efforts coming out of PNNL's Data Intensive Computing Initiative. Advances in technology have empowered individuals with the ability to generate digital content with mouse clicks and voice commands. Digital pictures, emails, text messages, home videos, audio, and webpages are common examples of digital content that are generated on a regular basis. Data-intensive computing facilitates human understanding of complex problems. Data-intensive applications provide timely and meaningful analytical results in response to exponentially growing data complexity and associated analysis requirements through the development of new classes of software, algorithms, and hardware.

    7. Building Energy Consumption Analysis

      Energy Science and Technology Software Center (OSTI)

      2005-03-02

      DOE2.1E-121SUNOS is a set of modules for energy analysis in buildings. Modules are included to calculate the heating and cooling loads for each space in a building for each hour of a year (LOADS), to simulate the operation and response of the equipment and systems that control temperature and humidity and distribute heating, cooling and ventilation to the building (SYSTEMS), to model energy conversion equipment that uses fuel or electricity to provide the required heating, cooling and electricity (PLANT), and to compute the cost of energy and building operation based on utility rate schedule and economic parameters (ECONOMICS).
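
      The four modules form a fixed hourly pipeline, which can be caricatured as function composition over the 8,760 hours of a year. The types, names, and toy numbers below are hypothetical stand-ins for the LOADS, SYSTEMS, PLANT, and ECONOMICS stages, kept trivial so the sketch compiles; the real modules model each space, system, and plant in detail.

        #include <cstdio>

        struct Loads   { double heating, cooling; };   // hourly space loads
        struct Systems { double delivered; };          // HVAC energy delivered
        struct Plant   { double fuel, electricity; };  // purchased energy

        Loads   loads_step(int)         { return {10.0, 5.0}; }                       // toy constant demand
        Systems systems_step(Loads l)   { return {l.heating + l.cooling}; }
        Plant   plant_step(Systems s)   { return {0.7 * s.delivered, 0.3 * s.delivered}; }
        double  economics_step(Plant p) { return 0.04 * p.fuel + 0.12 * p.electricity; }  // toy rate schedule

        int main() {
            double total = 0.0;
            for (int hour = 0; hour < 8760; ++hour)    // LOADS -> SYSTEMS -> PLANT -> ECONOMICS
                total += economics_step(plant_step(systems_step(loads_step(hour))));
            std::printf("annual operating cost: $%.2f\n", total);
        }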

    8. CAD-centric Computation Management System for a Virtual TBM

      SciTech Connect (OSTI)

      Ramakanth Munipalli; K.Y. Szema; P.Y. Huang; C.M. Rowell; A.Ying; M. Abdou

      2011-05-03

      HyPerComp Inc., in research collaboration with TEXCEL, has set out to build a Virtual Test Blanket Module (VTBM) computational system to address the need in contemporary fusion research for simulating the integrated behavior of the blanket, divertor and plasma facing components in a fusion environment. Physical phenomena to be considered in a VTBM will include fluid flow, heat transfer, mass transfer, neutronics, structural mechanics and electromagnetics. We seek to integrate well-established (third-party) simulation software in the various disciplines mentioned above. The integrated modeling process will enable user groups to interoperate using a common modeling platform at various stages of the analysis. Since CAD is at the core of the simulation (as opposed to computational meshes, which are different for each problem), VTBM will have a well-developed CAD interface governing CAD model editing, cleanup, parameter extraction, model deformation (based on simulation), and CAD-based data interpolation. In Phase I, we built the CAD hub of the proposed VTBM and demonstrated its use in modeling a liquid breeder blanket module with coupled MHD and structural mechanics using HIMAG and ANSYS. A complete graphical user interface of the VTBM was created, which will form the foundation of any future development. Conservative data interpolation via CAD (as opposed to mesh-based transfer) and the regeneration of CAD models based upon computed deflections are among the other highlights of Phase I activity.

    9. Scheduling applications for execution on a plurality of compute nodes of a parallel computer to manage temperature of the nodes during execution

      DOE Patents [OSTI]

      Archer, Charles J; Blocksome, Michael A; Peters, Amanda E; Ratterman, Joseph D; Smith, Brian E

      2012-10-16

      Methods, apparatus, and products are disclosed for scheduling applications for execution on a plurality of compute nodes of a parallel computer to manage temperature of the plurality of compute nodes during execution that include: identifying one or more applications for execution on the plurality of compute nodes; creating a plurality of physically discontiguous node partitions in dependence upon temperature characteristics for the compute nodes and a physical topology for the compute nodes, each discontiguous node partition specifying a collection of physically adjacent compute nodes; and assigning, for each application, that application to one or more of the discontiguous node partitions for execution on the compute nodes specified by the assigned discontiguous node partitions.
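
      A toy rendering of the claimed steps on a one-dimensional node chain: group physically adjacent nodes whose temperatures fall below a threshold into partitions (hot nodes split the runs, making the partitions physically discontiguous from one another), then assign each application to a partition. The data layout, threshold policy, and round-robin assignment are illustrative assumptions, not the patent's method.

        #include <cstdio>
        #include <string>
        #include <vector>

        // A partition is a run of physically adjacent nodes (1-D chain topology here).
        using Partition = std::vector<int>;

        // Build partitions from adjacent nodes cool enough to accept work; a hot node
        // ends the current run, so the resulting partitions are discontiguous.
        std::vector<Partition> build_partitions(const std::vector<double>& temps,
                                                double max_temp) {
            std::vector<Partition> parts;
            Partition current;
            for (int node = 0; node < (int)temps.size(); ++node) {
                if (temps[node] <= max_temp) {
                    current.push_back(node);   // extend the adjacent run
                } else if (!current.empty()) {
                    parts.push_back(current);  // hot node splits partitions
                    current.clear();
                }
            }
            if (!current.empty()) parts.push_back(current);
            return parts;
        }

        int main() {
            std::vector<double> temps = {40, 42, 71, 39, 41, 43, 70, 44};
            auto parts = build_partitions(temps, 60.0);
            std::vector<std::string> apps = {"cfd", "md", "qcd"};
            // Assign each application to a partition (round-robin for the sketch).
            for (size_t a = 0; a < apps.size(); ++a) {
                const Partition& p = parts[a % parts.size()];
                std::printf("%s -> partition of %zu nodes starting at node %d\n",
                            apps[a].c_str(), p.size(), p[0]);
            }
        }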

    10. Computational mechanics research and support for aerodynamics and hydraulics at TFHRC. Quarterly report January through March 2011. Year 1 Quarter 2 progress report.

      SciTech Connect (OSTI)

      Lottes, S. A.; Kulak, R. F.; Bojanowski, C.

      2011-05-19

      This project was established with a new interagency agreement between the Department of Energy and the Department of Transportation to provide collaborative research, development, and benchmarking of advanced three-dimensional computational mechanics analysis methods to the aerodynamics and hydraulics laboratories at the Turner-Fairbank Highway Research Center for a period of five years, beginning in October 2010. The analysis methods employ well-benchmarked and supported commercial computational mechanics software. Computational mechanics encompasses the areas of Computational Fluid Dynamics (CFD), Computational Wind Engineering (CWE), Computational Structural Mechanics (CSM), and Computational Multiphysics Mechanics (CMM) applied in Fluid-Structure Interaction (FSI) problems. The major areas of focus of the project are wind and water loads on bridges - superstructure, deck, cables, and substructure (including soil), primarily during storms and flood events - and the risks that these loads pose to structural failure. For flood events at bridges, another major focus of the work is assessment of the risk to bridges caused by scour of stream and riverbed material away from the foundations of a bridge. Other areas of current research include modeling of flow through culverts to assess them for fish passage, modeling of the salt spray transport into bridge girders to address suitability of using weathering steel in bridges, vehicle stability under high wind loading, and the use of electromagnetic shock absorbers to improve vehicle stability under high wind conditions. This quarterly report documents technical progress on the project tasks for the period of January through March 2011.

    11. cDNA encoding a polypeptide including a hevein sequence

      DOE Patents [OSTI]

      Raikhel, N.V.; Broekaert, W.F.; Namhai Chua; Kush, A.

      1993-02-16

      A cDNA clone (HEV1) encoding hevein was isolated via polymerase chain reaction (PCR) using mixed oligonucleotides corresponding to two regions of hevein as primers and a Hevea brasiliensis latex cDNA library as a template. HEV1 is 1,018 nucleotides long and includes an open reading frame of 204 amino acids.

    12. Generalized Modeling of Enrichment Cascades That Include Minor Isotopes

      SciTech Connect (OSTI)

      Weber, Charles F

      2012-01-01

      The monitoring of enrichment operations may require innovative analysis to allow for imperfect or missing data. The presence of minor isotopes may help or hurt: they can complicate a calculation or provide additional data to corroborate a calculation. However, they must be considered in a rigorous analysis, especially in cases involving reuse. This study considers matched-abundance-ratio cascades that involve at least three isotopes and allows generalized input that does not require all feed assays or the enrichment factor to be specified. Calculations are based on the equations developed for the MSTAR code but are generalized to allow input of various combinations of assays, flows, and other cascade properties. Traditional cascade models have required specification of the enrichment factor, all feed assays, and the product and waste assays of the primary enriched component. The calculation would then produce the numbers of stages in the enriching and stripping sections and the remaining assays in waste and product streams. In cases where the enrichment factor or feed assays were not known, analysis was difficult or impossible. However, if other quantities are known (e.g., additional assays in waste or product streams), a reliable calculation is still possible with the new code, but such nonstandard input may introduce additional numerical difficulties into the calculation. Thus, the minimum input requirements for a stable solution are discussed, and a sample problem with a non-unique solution is described. Both heuristic and mathematically required guidelines are given to assist the application of cascade modeling to situations involving such non-standard input. As a result, this work provides both a calculational tool and specific guidance for evaluation of enrichment cascades in which traditional input data are either flawed or unknown. It is useful for cases involving minor isotopes, especially if the minor isotope assays are desired (or required) to be
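
      For readers unfamiliar with the MSTAR formulation, a commonly used starting point (stated here from the general cascade-modeling literature, not taken from this report) assigns each isotope i of mass M_i an overall stage separation factor relative to a key mass M*, with q the separation per unit mass difference, and tracks abundance ratios against a key isotope k:

        \[ \alpha_i = q^{\,M^{*}-M_i}, \qquad R_i = \frac{x_i}{x_k} \]

      Here the x_i are isotopic abundances; "matched abundance ratio" means stages are joined so that the streams mixed at each stage entry carry equal ratios R_i. With three or more isotopes these relations no longer determine the cascade from the primary component's product and waste assays alone, which is the situation the generalized input handling described above addresses.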

    13. Information Science, Computing, Applied Math

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Information Science, Computing, Applied Math Information Science, Computing, Applied Math National security depends on science and technology. The United States relies on Los Alamos National Laboratory for the best of both. No place on Earth pursues a broader array of world-class scientific endeavors. Computer, Computational, and Statistical Sciences (CCS); High Performance Computing (HPC); Extreme Scale Computing, Co-design

    14. computers | National Nuclear Security Administration

      National Nuclear Security Administration (NNSA)

      computers NNSA Announces Procurement of Penguin Computing Clusters to Support Stockpile Stewardship at National Labs The National Nuclear Security Administration's (NNSA's) Lawrence Livermore National Laboratory today announced the awarding of a subcontract to Penguin Computing - a leading developer of high-performance Linux cluster computing systems based in Silicon Valley - to bolster computing for stockpile... Sandia donates 242 computers to northern California schools Sandia National

    15. Method and computer program product for maintenance and modernization backlogging

      DOE Patents [OSTI]

      Mattimore, Bernard G; Reynolds, Paul E; Farrell, Jill M

      2013-02-19

      According to one embodiment, a computer program product for determining future facility conditions includes a computer readable medium having computer readable program code stored therein. The computer readable program code includes computer readable program code for calculating a time period specific maintenance cost, for calculating a time period specific modernization factor, and for calculating a time period specific backlog factor. Future facility conditions equal the time period specific maintenance cost plus the time period specific modernization factor plus the time period specific backlog factor. In another embodiment, a computer-implemented method for calculating future facility conditions includes calculating a time period specific maintenance cost, calculating a time period specific modernization factor, and calculating a time period specific backlog factor. Future facility conditions equal the time period specific maintenance cost plus the time period specific modernization factor plus the time period specific backlog factor. Other embodiments are also presented.
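
      The claimed relation is plain arithmetic, so a one-line sketch captures it; the function and parameter names below are ours for illustration, not the patent's.

        // Future facility conditions per the abstract: the sum of a time-period-specific
        // maintenance cost, modernization factor, and backlog factor.
        double future_facility_conditions(double maintenance_cost,
                                          double modernization_factor,
                                          double backlog_factor) {
            return maintenance_cost + modernization_factor + backlog_factor;
        }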

    16. Information Science, Computing, Applied Math

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Information Science, Computing, Applied Math Information Science, Computing, Applied Math National security depends on science and technology. The United States relies on Los ...

    17. Computer simulation | Open Energy Information

      Open Energy Info (EERE)

      Computer simulation OpenEI Reference Library Web Site: Computer simulation Author: wikipedia Published: wikipedia, 2013 DOI Not Provided...

    18. Super recycled water: quenching computers

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Super recycled water: quenching computers Super recycled water: quenching computers New facility and methods support conserving water and creating recycled products. Using reverse ...

    19. NREL: Computational Science Home Page

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      high-performance computing, computational science, applied mathematics, scientific data management, visualization, and informatics. NREL is home to the largest high performance...

    20. Fermilab | Science at Fermilab | Computing

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Computing Computing is indispensable to science at Fermilab. High-energy physics experiments generate an astounding amount of data that physicists need to store, analyze and ...