National Library of Energy BETA

Sample records for analysis including computer

  1. Quantitative Analysis of Biofuel Sustainability, Including Land...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Quantitative Analysis of Biofuel Sustainability, Including Land Use Change GHG Emissions ...

  2. Quantitative Analysis of Biofuel Sustainability, Including Land...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    life cycle analysis of biofuels continue to improve 2 Feedstock Production Feedstock Logistics, Storage and Transportation Feedstock Conversion Fuel Transportation and...

  3. Human-computer interface including haptically controlled interactions

    DOE Patents [OSTI]

    Anderson, Thomas G.

    2005-10-11

    The present invention provides a method of human-computer interfacing that provides haptic feedback to control interface interactions such as scrolling or zooming within an application. Haptic feedback in the present method allows the user more intuitive control of the interface interactions, and allows the user's visual focus to remain on the application. The method comprises providing a control domain within which the user can control interactions. For example, a haptic boundary can be provided corresponding to scrollable or scalable portions of the application domain. The user can position a cursor near such a boundary, feeling its presence haptically (reducing the requirement for visual attention for control of scrolling of the display). The user can then apply force relative to the boundary, causing the interface to scroll the domain. The rate of scrolling can be related to the magnitude of applied force, providing the user with additional intuitive, non-visual control of scrolling.
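
    A minimal sketch of the force-to-scroll-rate mapping described above; the function name, deadband, and gain constants are invented for illustration and are not taken from the patent.

    ```python
    # Hypothetical sketch of mapping force against a haptic boundary to a
    # scroll rate. DEADBAND and GAIN are illustrative, not from the patent.

    def scroll_rate(applied_force: float, deadband: float = 0.5,
                    gain: float = 40.0) -> float:
        """Map force applied against a haptic boundary (N) to lines/s.

        Force within the deadband produces no scrolling, so the user can
        rest against the boundary and feel it without triggering motion.
        """
        excess = abs(applied_force) - deadband
        if excess <= 0.0:
            return 0.0
        # Rate grows with the magnitude of applied force; sign sets direction.
        direction = 1.0 if applied_force > 0 else -1.0
        return direction * gain * excess

    if __name__ == "__main__":
        for force in (0.2, 1.0, -2.5):
            print(f"{force:+.1f} N -> {scroll_rate(force):+.1f} lines/s")
    ```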

  4. Radiological Safety Analysis Computer Program

    Energy Science and Technology Software Center (OSTI)

    2001-08-28

    RSAC-6 is the latest version of the RSAC program. It calculates the consequences of a release of radionuclides to the atmosphere. Using a personal computer, a user can generate a fission product inventory; decay and in-grow the inventory during transport through processes, facilities, and the environment; model the downwind dispersion of the activity; and calculate doses to downwind individuals. Internal dose from the inhalation and ingestion pathways is calculated. External dose from ground surface and plume gamma pathways is calculated. New updates to the program include the ability to evaluate a release to an enclosed room, resuspension of deposited activity, and evaluation of a release up to 1 meter from the release point. Enhanced tools are included for dry deposition, building wake, occupancy factors, respirable fraction, AMAD adjustment, an updated and enhanced radionuclide inventory, and inclusion of the dose-conversion factors from FGR 11 and 12.
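
    A minimal sketch of the inhalation-dose arithmetic this abstract describes (time-integrated air concentration times breathing rate times a dose conversion factor). This illustrates the general health-physics method, not RSAC-6 itself; the breathing rate and nuclide factor below are assumed placeholder values.

    ```python
    # Sketch of the inhalation pathway: chi * breathing rate * DCF.
    # Values are placeholders, not RSAC-6 or Federal Guidance Report data.

    BREATHING_RATE = 3.3e-4   # m^3/s, light-activity adult (assumed)

    def inhalation_dose(chi: float, dcf: float,
                        breathing_rate: float = BREATHING_RATE) -> float:
        """Committed dose (Sv) from inhaling a passing plume.

        chi : time-integrated air concentration at the receptor (Bq*s/m^3)
        dcf : inhalation dose conversion factor for the nuclide (Sv/Bq)
        """
        return chi * breathing_rate * dcf

    # Example with hypothetical numbers: chi = 1e6 Bq*s/m^3 and an assumed
    # DCF of 4.6e-9 Sv/Bq give a dose of ~1.5e-6 Sv.
    print(inhalation_dose(1.0e6, 4.6e-9))
    ```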

  5. Semiconductor Device Analysis on Personal Computers

    Energy Science and Technology Software Center (OSTI)

    1993-02-08

    PC-1D models the internal operation of bipolar semiconductor devices by solving for the concentrations and quasi-one-dimensional flow of electrons and holes resulting from either electrical or optical excitation. PC-1D uses the same detailed physical models incorporated in mainframe computer programs, yet runs efficiently on personal computers. PC-1D was originally developed with DOE funding to analyze solar cells. That continues to be its primary mode of usage, with registered copies in regular use at more than 100 locations worldwide. The program has been successfully applied to the analysis of silicon, gallium-arsenide, and indium-phosphide solar cells. The program is also suitable for modeling bipolar transistors and diodes, including heterojunction devices. Its easy-to-use graphical interface makes it useful as a teaching tool as well.

  6. Search for Earth-like planets includes LANL star analysis

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    NNSA's Search Response Team (SRT) is a national-level capability that provides assets for complex search operations using both technical and operational expertise. SRT is a full-response asset, which includes the manpower and equipment to conduct aerial, vehicle, or foot search operations to locate a potential radiological source. In addition to the field team, a "home team" provides additional support to the field team, and any NNSA

  7. Impact analysis on a massively parallel computer

    SciTech Connect (OSTI)

    Zacharia, T.; Aramayo, G.A.

    1994-06-01

    Advanced mathematical techniques and computer simulation play a major role in evaluating and enhancing the design of beverage cans, industrial, and transportation containers for improved performance. Numerical models are used to evaluate the impact requirements of containers used by the Department of Energy (DOE) for transporting radioactive materials. Many of these models are highly compute-intensive. An analysis may require several hours of computational time on current supercomputers despite the simplicity of the models being studied. As computer simulations and materials databases grow in complexity, massively parallel computers have become important tools. Massively parallel computational research at the Oak Ridge National Laboratory (ORNL) and its application to the impact analysis of shipping containers is briefly described in this paper.

  8. Computer aided cogeneration feasibility analysis

    SciTech Connect (OSTI)

    Anaya, D.A.; Caltenco, E.J.L.; Robles, L.F.

    1996-12-31

    A successful cogeneration system design depends on several factors, and the optimal configuration can be found using steam and power simulation software. The key characteristics of one such software package are described below, and its application to a process plant cogeneration feasibility analysis is shown in this paper. Finally, a case study is illustrated. 4 refs., 2 figs.

  9. Application of the Computer Program SASSI for Seismic SSI Analysis...

    Office of Environmental Management (EM)

    Application of the Computer Program SASSI for Seismic SSI Analysis of WTP Facilities ...

  10. Final Report Computational Analysis of Dynamical Systems

    SciTech Connect (OSTI)

    Guckenheimer, John

    2012-05-08

    This is the final report for DOE Grant DE-FG02-93ER25164, initiated in 1993. This grant supported research of John Guckenheimer on computational analysis of dynamical systems. During that period, seventeen individuals received PhD degrees under the supervision of Guckenheimer and over fifty publications related to the grant were produced. This document contains copies of these publications.

  11. Quantitative Analysis of Biofuel Sustainability, Including Land Use Change GHG Emissions

    Broader source: Energy.gov [DOE]

    Plenary V: Biofuels and Sustainability: Acknowledging Challenges and Confronting Misconceptions. Quantitative Analysis of Biofuel Sustainability, Including Land Use Change GHG Emissions. Jennifer B....

  12. PArallel Reacting Multiphase FLOw Computational Fluid Dynamic Analysis

    Energy Science and Technology Software Center (OSTI)

    2002-06-01

    PARMFLO is a parallel multiphase reacting flow computational fluid dynamics (CFD) code. It can perform steady or unsteady simulations in three space dimensions. It is intended for use in engineering CFD analysis of industrial flow system components. Its parallel processing capabilities allow it to be applied to problems that use at least an order of magnitude more computational cells than the number that can be used on a typical single-processor workstation (roughly 10^6 cells in parallel processing mode versus roughly 10^5 cells in serial processing mode). Alternatively, by spreading the work of a CFD problem that could be run on a single workstation over a group of computers on a network, it can bring the runtime down by an order of magnitude or more (typically from many days to less than one day). The software was implemented using the industry-standard Message-Passing Interface (MPI) and domain decomposition in one spatial direction. The phases of a flow problem may include an ideal gas mixture with an arbitrary number of chemical species, and dispersed droplet and particle phases. Regions of porous media may also be included within the domain. The porous media may be packed beds, foams, or monolith catalyst supports. With these features, the code is especially suited to analysis of mixing of reactants in the inlet chamber of catalytic reactors coupled to computation of product yields that result from the flow of the mixture through the catalyst-coated support structure.
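
    As a rough illustration of the decomposition strategy the abstract names (splitting the grid along one spatial direction and exchanging boundary data between neighbors), the fragment below uses mpi4py as a stand-in; it is not PARMFLO code, and the grid size and field values are arbitrary.

    ```python
    # 1-D domain decomposition with halo exchange. Run with, e.g.:
    #   mpiexec -n 4 python halo.py
    from mpi4py import MPI
    import numpy as np

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()

    NX = 120                    # global cell count along the split direction
    nlocal = NX // size         # assume NX divisible by size, for brevity
    # One ghost (halo) cell on each side holds a neighbor's boundary value.
    u = np.full(nlocal + 2, float(rank))

    left = rank - 1 if rank > 0 else MPI.PROC_NULL
    right = rank + 1 if rank < size - 1 else MPI.PROC_NULL

    # Exchange halos: send my edge cells, receive the neighbors' edge cells.
    comm.Sendrecv(u[1:2], dest=left, recvbuf=u[-1:], source=right)
    comm.Sendrecv(u[-2:-1], dest=right, recvbuf=u[0:1], source=left)

    print(f"rank {rank}: owns {nlocal} cells, halos = {u[0]}, {u[-1]}")
    ```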

  13. A Research Roadmap for Computation-Based Human Reliability Analysis

    SciTech Connect (OSTI)

    Boring, Ronald; Mandelli, Diego; Joe, Jeffrey; Smith, Curtis; Groth, Katrina

    2015-08-01

    The United States (U.S.) Department of Energy (DOE) is sponsoring research through the Light Water Reactor Sustainability (LWRS) program to extend the life of the currently operating fleet of commercial nuclear power plants. The Risk Informed Safety Margin Characterization (RISMC) research pathway within LWRS looks at ways to maintain and improve the safety margins of these plants. The RISMC pathway includes significant developments in the area of thermalhydraulics code modeling and the development of tools to facilitate dynamic probabilistic risk assessment (PRA). PRA is primarily concerned with the risk of hardware systems at the plant; yet, hardware reliability is often secondary in overall risk significance to human errors that can trigger or compound undesirable events at the plant. This report highlights ongoing efforts to develop a computation-based approach to human reliability analysis (HRA). This computation-based approach differs from existing static and dynamic HRA approaches in that it: (i) interfaces with a dynamic computation engine that includes a full scope plant model, and (ii) interfaces with a PRA software toolset. The computation-based HRA approach presented in this report is called the Human Unimodels for Nuclear Technology to Enhance Reliability (HUNTER) and incorporates in a hybrid fashion elements of existing HRA methods to interface with new computational tools developed under the RISMC pathway. The goal of this research effort is to model human performance more accurately than existing approaches, thereby minimizing modeling uncertainty found in current plant risk models.

  14. Distributed Design and Analysis of Computer Experiments

    Energy Science and Technology Software Center (OSTI)

    2002-11-11

    DDACE is a C++ object-oriented software library for the design and analysis of computer experiments. DDACE can be used to generate samples from a variety of sampling techniques. These samples may be used as input to an application code. DDACE also contains statistical tools such as response surface models and correlation coefficients to analyze input/output relationships between variables in an application code. DDACE can generate input values for uncertain variables within a user's application. For example, a user might like to vary a temperature variable as well as some material variables in a series of simulations. Through the series of simulations the user might be looking for optimal settings of parameters based on some user criteria, or the user may be interested in the sensitivity to input variability shown by an output variable. In either case, the user may provide information about the suspected ranges and distributions of a set of input variables, along with a sampling scheme, and DDACE will generate input points based on these specifications. The input values generated by DDACE and the one or more outputs computed through the user's application code can be analyzed with a variety of statistical methods. This can lead to a wealth of information about the relationships between the variables in the problem. While statistical and mathematical packages may be employed to carry out the analysis on the input/output relationships, DDACE also contains some tools for analyzing the simulation data. DDACE incorporates a software package called MARS (Multivariate Adaptive Regression Splines), developed by Jerome Friedman. MARS is used for generating a spline surface fit of the data. With MARS, a model simplification may be calculated using the input and corresponding output values for the user's application problem. The MARS grid data may be used for generating 3-dimensional response surface plots of the simulation data. DDACE also contains an implementation of an algorithm by Michael McKay to compute variable correlations. DDACE can also be used to carry out a main-effects analysis to calculate the sensitivity of an output variable to each of the varied inputs taken individually.
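
    The workflow just described (sample the uncertain inputs, run the application code, correlate inputs with outputs) can be sketched in a few lines. This is not DDACE's C++ API; numpy/scipy stand in for it, and the variable ranges and toy model are invented for illustration.

    ```python
    # Latin hypercube sample over two uncertain inputs, then input/output
    # correlation analysis, mirroring the design-of-experiments workflow.
    import numpy as np
    from scipy.stats import qmc, pearsonr

    design = qmc.LatinHypercube(d=2, seed=7)
    unit = design.random(n=50)                      # 50 points in [0,1)^2

    # Scale to suspected ranges (assumed values, per the example above).
    temperature = 300.0 + 200.0 * unit[:, 0]        # 300-500 K
    conductivity = 10.0 + 5.0 * unit[:, 1]          # 10-15 W/m/K

    # Stand-in "application code"; an expensive simulation would go here.
    noise = np.random.default_rng(0).normal(0.0, 5.0, 50)
    output = 0.8 * temperature - 12.0 * conductivity + noise

    for name, x in (("temperature", temperature),
                    ("conductivity", conductivity)):
        r, _ = pearsonr(x, output)
        print(f"{name}: correlation with output = {r:+.2f}")
    ```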

  15. Computational Aerodynamic Analysis of Offshore Upwind and Downwind Turbines

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Zhao, Qiuying; Sheng, Chunhua; Afjeh, Abdollah

    2014-01-01

    Aerodynamic interactions of the model NREL 5 MW offshore horizontal axis wind turbines (HAWT) are investigated using a high-fidelity computational fluid dynamics (CFD) analysis. Four wind turbine configurations are considered: three-bladed upwind and downwind, and two-bladed upwind and downwind configurations, which operate at two different rotor speeds of 12.1 and 16 RPM. In the present study, both steady and unsteady aerodynamic loads, such as the rotor torque, blade hub bending moment, and base bending moment of the tower, are evaluated in detail to provide an overall assessment of the different wind turbine configurations. Aerodynamic interactions between the rotor and tower are analyzed, including the rotor wake development downstream. The computational analysis provides insight into the aerodynamic performance of the upwind and downwind, two- and three-bladed horizontal axis wind turbines.

  16. Transportation Research and Analysis Computing Center Fact Sheet | Argonne National Laboratory

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    The Transportation Research and Analysis Computing Center (TRACC) is the intersection of state-of-the-art computing and critical science and engineering research that is improving how the nation plans, builds, and secures a transportation system for the 21st Century.

  17. Computer analysis of the thermohydraulic measurements on CEA dummy cables performed at CEN-Grenoble

    Office of Scientific and Technical Information (OSTI)

    We present here the validation of two computer models for the ITER CICCs based on experimental data produced at CEN-Grenoble. The models, implemented in two finite element computer codes, ...

  18. Center for Integrated Computation and Analysis of Reconnection...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Center for Integrated Computation and Analysis of Reconnection and Turbulence (CICART) Kai Germaschewski, Amitava Bhattacharjee, Barrett...

  19. The Design and Analysis of Computer Experiments | Open Energy...

    Open Energy Info (EERE)

    Book: The Design and Analysis of Computer Experiments. Authors: Thomas J. Santner, Brian J. Williams and William I. Notz. Published: Springer-Verlag, 2003. DOI Not...

  20. Comparative genome analysis of Pseudomonas genomes including Populus-associated isolates

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Jun, Se Ran; Wassenaar, Trudy; Nookaew, Intawat; Hauser, Loren John; Wanchai, Visanu; Land, Miriam L.; Timm, Collin M.; Lu, Tse-Yuan S.; Schadt, Christopher Warren; Doktycz, Mitchel John; et al

    2016-01-01

    The Pseudomonas genus contains a metabolically versatile group of organisms that are known to occupy numerous ecological niches, including the rhizosphere and endosphere of many plants, influencing phylogenetic diversity and heterogeneity. In this study, comparative genome analysis was performed on over one thousand Pseudomonas genomes, including 21 Pseudomonas strains isolated from the roots of native Populus deltoides. Based on average amino acid identity, genomic clusters were identified within the Pseudomonas genus, which showed agreement with clades defined by NCBI and cliques defined by IMG. The P. fluorescens group was organized into 20 distinct genomic clusters, representing enormous diversity and heterogeneity. The species P. aeruginosa showed a clear distinction in its genomic relatedness compared to other Pseudomonas species groups based on the pan and core genome analysis. Nineteen of our 21 Populus-associated isolates formed three distinct subgroups within the P. fluorescens major group, supported by pathway profile analysis, while two isolates were more closely related to P. chlororaphis and P. putida. Genes specific to the Populus-associated subgroups were identified: genes specific to subgroup 1 include several sensory systems, such as proteins that act in two-component signal transduction, a TonB-dependent receptor, and a phosphorelay sensor; genes specific to subgroup 2 comprise unique hypothetical genes; and genes specific to subgroup 3 relate to a different hydrolase activity. IMPORTANCE: The comparative genome analyses of the genus Pseudomonas that included Populus-associated isolates resulted in novel insights into the high diversity of Pseudomonas. Consistent and robust genomic clusters with phylogenetic homogeneity were identified, which resolved species clades that are not clearly defined by 16S rRNA gene sequence analysis alone. The genomic clusters may be reflective of distinct ecological niches to which the organisms have adapted, but this needs to be experimentally characterized with ecologically relevant phenotype properties. This study justifies the need to sequence multiple isolates, especially from the P. fluorescens group, in order to study functional capabilities from a pangenomic perspective. This information will prove useful when choosing Pseudomonas strains to promote growth and increase disease resistance in plants.

  1. Scalable Computer Performance and Analysis (Hierarchical INTegration)

    Energy Science and Technology Software Center (OSTI)

    1999-09-02

    HINT is a program for measuring the performance of a wide variety of scalable computer systems. It is capable of demonstrating the benefits of using more memory or processing power, and of improved communications within the system. HINT can be used for measurement of an existing system, while the associated program ANALYTIC HINT can be used to explain the measurements or as a design tool for proposed systems.

  2. Wind energy conversion system analysis model (WECSAM) computer program documentation

    SciTech Connect (OSTI)

    Downey, W. T.; Hendrick, P. L.

    1982-07-01

    Described is a computer-based wind energy conversion system analysis model (WECSAM) developed to predict the technical and economic performance of wind energy conversion systems (WECS). The model is written in CDC FORTRAN V. The version described accesses a data base containing wind resource data, application loads, WECS performance characteristics, utility rates, state taxes, and state subsidies for a six state region (Minnesota, Michigan, Wisconsin, Illinois, Ohio, and Indiana). The model is designed for analysis at the county level. The computer model includes a technical performance module and an economic evaluation module. The modules can be run separately or together. The model can be run for any single user-selected county within the region or looped automatically through all counties within the region. In addition, the model has a restart capability that allows the user to modify any data-base value written to a scratch file prior to the technical or economic evaluation. Thus, any user-supplied data for WECS performance, application load, utility rates, or wind resource may be entered into the scratch file to override the default data-base value. After the model and the inputs required from the user and derived from the data base are described, the model output and the various output options that can be exercised by the user are detailed. The general operation is set forth and suggestions are made for efficient modes of operation. Sample listings of various input, output, and data-base files are appended. (LEW)

  3. A joint analysis of Planck and BICEP2 B modes including dust polarization uncertainty

    SciTech Connect (OSTI)

    Mortonson, Michael J.; Seljak, Uroš (E-mail: useljak@berkeley.edu)

    2014-10-01

    We analyze BICEP2 and Planck data using a model that includes CMB lensing, gravity waves, and polarized dust. Recently published Planck dust polarization maps have highlighted the difficulty of estimating the amount of dust polarization in low intensity regions, suggesting that the polarization fractions have considerable uncertainties and may be significantly higher than previous predictions. In this paper, we start by assuming nothing about the dust polarization except for the power spectrum shape, which we take to be C_l^{BB,dust} ∝ l^{-2.42}. The resulting joint BICEP2+Planck analysis favors solutions without gravity waves, and the upper limit on the tensor-to-scalar ratio is r < 0.11, a slight improvement relative to the Planck analysis alone, which gives r < 0.13 (95% c.l.). The estimated amplitude of the dust polarization power spectrum agrees with expectations for this field based on both HI column density and Planck polarization measurements at 353 GHz in the BICEP2 field. Including the latter constraint on the dust spectrum amplitude in our analysis improves the limit further to r < 0.09, placing strong constraints on theories of inflation (e.g., models with r > 0.14 are excluded with 99.5% confidence). We address the cross-correlation analysis of BICEP2 at 150 GHz with BICEP1 at 100 GHz as a test of foreground contamination. We find that the null hypothesis of dust and lensing with r = 0 gives Δχ² < 2 relative to the hypothesis of no dust, so the frequency analysis does not strongly favor either model over the other. We also discuss how more accurate dust polarization maps may improve our constraints. If the dust polarization is measured perfectly, the limit can reach r < 0.05 (or the corresponding detection significance if the observed dust signal plus the expected lensing signal is below the BICEP2 observations), but this degrades quickly to almost no improvement if the dust calibration error is 20% or larger or if the dust maps are not processed through the BICEP2 pipeline, inducing sampling variance noise.
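
    Schematically, the fit described above combines three B-mode contributions. The decomposition below is the standard form of such a model; the pivot scale and template notation are assumed rather than quoted from the paper.

    ```latex
    % Tensor contribution scaled by r, a fixed lensing template, and a
    % power-law dust term with the stated spectral shape:
    \[
      C_\ell^{BB} \;=\; r\, C_\ell^{BB,\mathrm{tensor}}(r{=}1)
      \;+\; C_\ell^{BB,\mathrm{lens}}
      \;+\; A_{\mathrm{dust}} \left(\frac{\ell}{\ell_0}\right)^{-2.42}
    \]
    ```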

  4. Application of the Computer Program SASSI for Seismic SSI Analysis...

    Office of Environmental Management (EM)

    Application of the Computer Program SASSI for Seismic SSI Analysis of WTP Facilities. Farhang Ostadan (BNI) & Raman Venkata (DOE-WTP-WED). Presented by Lisa Anderson (BNI), US DOE NPH Workshop...

  5. Process for computing geometric perturbations for probabilistic analysis

    DOE Patents [OSTI]

    Fitch, Simeon H. K. (Charlottesville, VA); Riha, David S. (San Antonio, TX); Thacker, Ben H. (San Antonio, TX)

    2012-04-10

    A method for computing geometric perturbations for probabilistic analysis. The probabilistic analysis is based on finite element modeling, in which uncertainties in the modeled system are represented by changes in the nominal geometry of the model, referred to as "perturbations". These changes are accomplished using displacement vectors, which are computed for each node of a region of interest and are based on mean-value coordinate calculations.

  6. Computational Analysis of the Thermal-Hydraulic Characteristics of the

    Office of Scientific and Technical Information (OSTI)

    Encapsulated Nuclear Heat Source (Journal Article) | SciTech Connect Computational Analysis of the Thermal-Hydraulic Characteristics of the Encapsulated Nuclear Heat Source Citation Details In-Document Search Title: Computational Analysis of the Thermal-Hydraulic Characteristics of the Encapsulated Nuclear Heat Source The encapsulated nuclear heat source (ENHS) is a modular reactor that was selected by the 1999 U.S. Department of Energy Nuclear Energy Research Initiative program as a

  7. Multiscale analysis of nonlinear systems using computational homology

    SciTech Connect (OSTI)

    Konstantin Mischaikow; Michael Schatz; William Kalies; Thomas Wanner

    2010-05-24

    This is a collaborative project between the principal investigators. However, as is to be expected, different PIs have greater focus on different aspects of the project. This report lists these major directions of research which were pursued during the funding period: (1) Computational Homology in Fluids - For the computational homology effort in thermal convection, the focus of the work during the first two years of the funding period included: (1) A clear demonstration that homology can sensitively detect the presence or absence of an important flow symmetry, (2) An investigation of homology as a probe for flow dynamics, and (3) The construction of a new convection apparatus for probing the effects of large-aspect-ratio. (2) Computational Homology in Cardiac Dynamics - We have initiated an effort to test the use of homology in characterizing data from both laboratory experiments and numerical simulations of arrhythmia in the heart. Recently, the use of high speed, high sensitivity digital imaging in conjunction with voltage sensitive fluorescent dyes has enabled researchers to visualize electrical activity on the surface of cardiac tissue, both in vitro and in vivo. (3) Magnetohydrodynamics - A new research direction is to use computational homology to analyze results of large scale simulations of 2D turbulence in the presence of magnetic fields. Such simulations are relevant to the dynamics of black hole accretion disks. The complex flow patterns from simulations exhibit strong qualitative changes as a function of magnetic field strength. Efforts to characterize the pattern changes using Fourier methods and wavelet analysis have been unsuccessful. (4) Granular Flow - two experts in the area of granular media are studying 2D model experiments of earthquake dynamics where the stress fields can be measured; these stress fields form complex patterns of 'force chains' that may be amenable to analysis using computational homology. (5) Microstructure Characterization - We extended our previous work on studying the time evolution of patterns associated with phase separation in conserved concentration fields. (6) Probabilistic Homology Validation - work on microstructure characterization is based on numerically studying the homology of certain sublevel sets of a function, whose evolution is described by deterministic or stochastic evolution equations. (7) Computational Homology and Dynamics - Topological methods can be used to rigorously describe the dynamics of nonlinear systems. We are approaching this problem from several perspectives and through a variety of systems. (8) Stress Networks in Polycrystals - we have characterized stress networks in polycrystals. This part of the project is aimed at developing homological metrics which can aid in distinguishing not only microstructures, but also derived mechanical response fields. (9) Microstructure-Controlled Drug Release - This part of the project is concerned with the development of topological metrics in the context of controlled drug delivery systems, such as drug-eluting stents. We are particularly interested in developing metrics which can be used to link the processing stage to the resulting microstructure, and ultimately to the achieved system response in terms of drug release profiles. (10) Microstructure of Fuel Cells - we have been using our computational homology software to analyze the topological structure of the void, metal and ceramic components of a Solid Oxide Fuel Cell.

  8. Multiscale analysis of nonlinear systems using computational homology

    SciTech Connect (OSTI)

    Konstantin Mischaikow, Rutgers University/Georgia Institute of Technology; Michael Schatz, Georgia Institute of Technology; William Kalies, Florida Atlantic University; Thomas Wanner, George Mason University

    2010-05-19

    This is a collaborative project between the principal investigators. However, as is to be expected, different PIs have greater focus on different aspects of the project. This report lists these major directions of research which were pursued during the funding period: (1) Computational Homology in Fluids - For the computational homology effort in thermal convection, the focus of the work during the first two years of the funding period included: (1) A clear demonstration that homology can sensitively detect the presence or absence of an important flow symmetry, (2) An investigation of homology as a probe for flow dynamics, and (3) The construction of a new convection apparatus for probing the effects of large-aspect-ratio. (2) Computational Homology in Cardiac Dynamics - We have initiated an effort to test the use of homology in characterizing data from both laboratory experiments and numerical simulations of arrhythmia in the heart. Recently, the use of high speed, high sensitivity digital imaging in conjunction with voltage sensitive fluorescent dyes has enabled researchers to visualize electrical activity on the surface of cardiac tissue, both in vitro and in vivo. (3) Magnetohydrodynamics - A new research direction is to use computational homology to analyze results of large scale simulations of 2D turbulence in the presence of magnetic fields. Such simulations are relevant to the dynamics of black hole accretion disks. The complex flow patterns from simulations exhibit strong qualitative changes as a function of magnetic field strength. Efforts to characterize the pattern changes using Fourier methods and wavelet analysis have been unsuccessful. (4) Granular Flow - two experts in the area of granular media are studying 2D model experiments of earthquake dynamics where the stress fields can be measured; these stress fields form complex patterns of 'force chains' that may be amenable to analysis using computational homology. (5) Microstructure Characterization - We extended our previous work on studying the time evolution of patterns associated with phase separation in conserved concentration fields. (6) Probabilistic Homology Validation - work on microstructure characterization is based on numerically studying the homology of certain sublevel sets of a function, whose evolution is described by deterministic or stochastic evolution equations. (7) Computational Homology and Dynamics - Topological methods can be used to rigorously describe the dynamics of nonlinear systems. We are approaching this problem from several perspectives and through a variety of systems. (8) Stress Networks in Polycrystals - we have characterized stress networks in polycrystals. This part of the project is aimed at developing homological metrics which can aid in distinguishing not only microstructures, but also derived mechanical response fields. (9) Microstructure-Controlled Drug Release - This part of the project is concerned with the development of topological metrics in the context of controlled drug delivery systems, such as drug-eluting stents. We are particularly interested in developing metrics which can be used to link the processing stage to the resulting microstructure, and ultimately to the achieved system response in terms of drug release profiles. (10) Microstructure of Fuel Cells - we have been using our computational homology software to analyze the topological structure of the void, metal and ceramic components of a Solid Oxide Fuel Cell.

  9. Analysis of advanced European nuclear fuel cycle scenarios including transmutation and economical estimates

    SciTech Connect (OSTI)

    Merino Rodriguez, I.; Alvarez-Velarde, F.; Martin-Fuertes, F.

    2013-07-01

    In this work the transition from the existing Light Water Reactors (LWR) to advanced reactors is analyzed, including Generation III+ reactors, in a European framework. Four European fuel cycle scenarios involving transmutation options have been addressed. The first scenario (i.e., the reference) is the current fleet using LWR technology and an open fuel cycle. The second scenario assumes a full replacement of the initial fleet with Fast Reactors (FR) burning U-Pu MOX fuel. The third scenario is a modification of the second one, introducing Minor Actinide (MA) transmutation in a fraction of the FR fleet. Finally, in the fourth scenario, the LWR fleet is replaced using FR with MOX fuel as well as Accelerator Driven Systems (ADS) for MA transmutation. All scenarios consider an intermediate period of GEN-III+ LWR deployment, and they extend over a period of 200 years looking for equilibrium mass flows. The simulations were made using the TR-EVOL code, a tool for fuel cycle studies developed by CIEMAT. The results reveal that all scenarios are feasible according to nuclear resource demand (U and Pu). Concerning the cases without transmutation, the second scenario considerably reduces the Pu inventory in repositories compared to the reference scenario, although the MA inventory increases. The transmutation scenarios show that elimination of the LWR MA legacy requires, on the one hand, at most a 33% fraction (i.e., a peak value of 26 FR units) of the FR fleet dedicated to transmutation (MA in MOX fuel, homogeneous transmutation). On the other hand, a maximum number of ADS plants accounting for 5% of electricity generation is predicted in the fourth scenario (i.e., 35 ADS units). Regarding the economic analysis, the estimates show an increase in LCOE (levelized cost of electricity), averaged over the whole period, with respect to the reference scenario of 21% and 29% for the FR and FR-with-transmutation scenarios, respectively, and 34% for the fourth scenario. (authors)
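
    For concreteness, the levelized cost of electricity behind the quoted percentages is a ratio of discounted lifetime cost to discounted generation. The sketch below shows only that arithmetic; the cost and generation streams are invented placeholders, not TR-EVOL outputs.

    ```python
    # LCOE = sum of discounted costs / sum of discounted energy.
    def lcoe(costs, energy, rate=0.05):
        """costs, energy: per-year sequences; rate: discount rate."""
        disc = [(1.0 + rate) ** -t for t in range(len(costs))]
        return (sum(c * d for c, d in zip(costs, disc))
                / sum(e * d for e, d in zip(energy, disc)))

    # Hypothetical flat streams over 60 years; scenario 4 carries ~34%
    # higher annual costs for the same generation (assumed numbers).
    reference = lcoe([1.00e9] * 60, [1.2e10] * 60)
    scenario4 = lcoe([1.34e9] * 60, [1.2e10] * 60)
    print(f"LCOE increase vs. reference: {scenario4 / reference - 1.0:.0%}")
    ```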

  10. Computer-aided visualization and analysis system for sequence evaluation

    DOE Patents [OSTI]

    Chee, Mark S.

    2003-08-19

    A computer system for analyzing nucleic acid sequences is provided. The computer system is used to perform multiple methods for determining unknown bases by analyzing the fluorescence intensities of hybridized nucleic acid probes. The results of individual experiments may be improved by processing nucleic acid sequences together. Comparative analysis of multiple experiments is also provided by displaying reference sequences in one area and sample sequences in another area on a display device.

  11. Computer-aided visualization and analysis system for sequence evaluation

    DOE Patents [OSTI]

    Chee, Mark S.; Wang, Chunwei; Jevons, Luis C.; Bernhart, Derek H.; Lipshutz, Robert J.

    2004-05-11

    A computer system for analyzing nucleic acid sequences is provided. The computer system is used to perform multiple methods for determining unknown bases by analyzing the fluorescence intensities of hybridized nucleic acid probes. The results of individual experiments are improved by processing nucleic acid sequences together. Comparative analysis of multiple experiments is also provided by displaying reference sequences in one area and sample sequences in another area on a display device.

  12. Computer-aided visualization and analysis system for sequence evaluation

    DOE Patents [OSTI]

    Chee, Mark S.

    1998-08-18

    A computer system for analyzing nucleic acid sequences is provided. The computer system is used to perform multiple methods for determining unknown bases by analyzing the fluorescence intensities of hybridized nucleic acid probes. The results of individual experiments are improved by processing nucleic acid sequences together. Comparative analysis of multiple experiments is also provided by displaying reference sequences in one area and sample sequences in another area on a display device.

  13. Computer-aided visualization and analysis system for sequence evaluation

    DOE Patents [OSTI]

    Chee, M.S.

    1998-08-18

    A computer system for analyzing nucleic acid sequences is provided. The computer system is used to perform multiple methods for determining unknown bases by analyzing the fluorescence intensities of hybridized nucleic acid probes. The results of individual experiments are improved by processing nucleic acid sequences together. Comparative analysis of multiple experiments is also provided by displaying reference sequences in one area and sample sequences in another area on a display device. 27 figs.

  14. Computer-aided visualization and analysis system for sequence evaluation

    DOE Patents [OSTI]

    Chee, Mark S.

    1999-10-26

    A computer system (1) for analyzing nucleic acid sequences is provided. The computer system is used to perform multiple methods for determining unknown bases by analyzing the fluorescence intensities of hybridized nucleic acid probes. The results of individual experiments may be improved by processing nucleic acid sequences together. Comparative analysis of multiple experiments is also provided by displaying reference sequences in one area (814) and sample sequences in another area (816) on a display device (3).

  15. Computer-aided visualization and analysis system for sequence evaluation

    DOE Patents [OSTI]

    Chee, Mark S.

    2001-06-05

    A computer system (1) for analyzing nucleic acid sequences is provided. The computer system is used to perform multiple methods for determining unknown bases by analyzing the fluorescence intensities of hybridized nucleic acid probes. The results of individual experiments may be improved by processing nucleic acid sequences together. Comparative analysis of multiple experiments is also provided by displaying reference sequences in one area (814) and sample sequences in another area (816) on a display device (3).

  16. First Experiences with LHC Grid Computing and Distributed Analysis

    SciTech Connect (OSTI)

    Fisk, Ian

    2010-12-01

    This presentation reviewed the experiences of the LHC experiments with grid computing, focusing on distributed analysis. After many years of development, preparation, exercises, and validation, the LHC (Large Hadron Collider) experiments are in operation. The computing infrastructure has been heavily utilized in the first 6 months of data collection. The general experience of exploiting the grid infrastructure for organized processing and preparation is described, as well as the successes in employing the infrastructure for distributed analysis. Finally, the expected evolution and future plans are outlined.

  17. Large-scale computations in analysis of structures

    SciTech Connect (OSTI)

    McCallen, D.B.; Goudreau, G.L.

    1993-09-01

    Computer hardware and numerical analysis algorithms have progressed to a point where many engineering organizations and universities can perform nonlinear analyses on a routine basis. Though much remains to be done in terms of advancement of nonlinear analysis techniques and characterization of nonlinear material constitutive behavior, the technology exists today to perform useful nonlinear analysis for many structural systems. In the current paper, a survey of nonlinear analysis technologies developed and employed for many years on programmatic defense work at the Lawrence Livermore National Laboratory is provided, and ongoing nonlinear numerical simulation projects relevant to the civil engineering field are described.

  18. Connecting Performance Analysis and Visualization to Advance Extreme Scale Computing

    SciTech Connect (OSTI)

    Bremer, Peer-Timo; Mohr, Bernd; Schulz, Martin; Pascucci, Valerio; Gamblin, Todd; Brunst, Holger

    2015-07-29

    The characterization, modeling, analysis, and tuning of software performance has been a central topic in High Performance Computing (HPC) since its early beginnings. The overall goal is to make HPC software run faster on particular hardware, either through better scheduling, on-node resource utilization, or more efficient distributed communication.

  19. RDI's Wisdom Way Solar Village Final Report: Includes Utility Bill Analysis of Occupied Homes

    SciTech Connect (OSTI)

    Robb Aldrich, Steven Winter Associates

    2011-07-01

    In 2010, Rural Development, Inc. (RDI) completed construction of Wisdom Way Solar Village (WWSV), a community of ten duplexes (20 homes) in Greenfield, MA. RDI was committed to very low energy use from the beginning of the design process throughout construction. Key features include: 1. Careful site plan so that all homes have solar access (for active and passive); 2. Cellulose insulation providing R-40 walls, R-50 ceiling, and R-40 floors; 3. Triple-pane windows; 4. Airtight construction (~0.1 CFM50/ft² enclosure area); 5. Solar water heating systems with tankless, gas, auxiliary heaters; 6. PV systems (2.8 or 3.4 kW STC); 7. 2-4 bedrooms, 1,100-1,700 ft². The design heating loads in the homes were so small that each home is heated with a single, sealed-combustion, natural gas room heater. The cost savings from the simple HVAC systems made possible the tremendous investments in the homes' envelopes. The Consortium for Advanced Residential Buildings (CARB) monitored temperatures and comfort in several homes during the winter of 2009-2010. In the spring of 2011, CARB obtained utility bill information from 13 occupied homes. Because of efficient lights, appliances, and conscientious home occupants, the energy generated by the solar electric systems exceeded the electric energy used in most homes. Most homes, in fact, had a net credit from the electric utility over the course of a year. On the natural gas side, total gas costs averaged $377 per year (for heating, water heating, cooking, and clothes drying). Total energy costs were even less: $337 per year, including all utility fees. The highest annual energy bill for any home evaluated was $458; the lowest was $171.

  20. Analysis of energy conversion systems, including material and global warming aspects

    SciTech Connect (OSTI)

    Zhang, M.; Reistad, G.M.

    1998-12-31

    This paper addresses a method for the overall evaluation of energy conversion systems, including material and global environmental aspects. To limit the scope of the work reported here, the global environmental aspects have been limited to global warming. A method is presented that uses exergy as an overall evaluation measure of energy conversion systems over their lifetime. The method takes the direct exergy consumption (fuel consumption) of conventional exergy analyses and adds (1) the exergy of the energy conversion system equipment materials, (2) the fuel production exergy and material exergy, and (3) the exergy needed to recover the total (equivalent) global warming gases of the energy conversion system. This total, termed Total Equivalent Resource Exergy (TERE), provides a measure of the effectiveness of the energy conversion system in its use of natural resources. The results presented here for several example systems illustrate how the method can be used to screen candidate energy conversion systems and perhaps, as data become more available, to optimize systems. It appears that this concept may be particularly useful for comparing systems that have quite different direct energy and/or environmental impacts. This work should be viewed primarily as a concept paper, in that the lack of detailed data available to the authors at this time limits the accuracy of the overall results. The authors are working on refinements to the data used in the evaluation.
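
    The TERE defined in this abstract is a four-term sum. A minimal sketch follows; the term names track the abstract, and all numerical values are hypothetical.

    ```python
    # Total Equivalent Resource Exergy: direct fuel exergy plus equipment
    # material exergy, fuel production/material exergy, and the exergy
    # needed to recover global-warming gases (all values invented).
    def total_equivalent_resource_exergy(direct_fuel: float,
                                         equipment_materials: float,
                                         fuel_production: float,
                                         ghg_recovery: float) -> float:
        """Lifetime resource-exergy measure (e.g., GJ) of a system."""
        return (direct_fuel + equipment_materials
                + fuel_production + ghg_recovery)

    # Two hypothetical candidates delivering the same service:
    system_a = total_equivalent_resource_exergy(9000.0, 150.0, 700.0, 1200.0)
    system_b = total_equivalent_resource_exergy(5000.0, 400.0, 450.0, 600.0)
    print(f"TERE: system A = {system_a:.0f} GJ, system B = {system_b:.0f} GJ")
    ```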

  1. Surface and grain boundary scattering in nanometric Cu thin films: A quantitative analysis including twin boundaries

    SciTech Connect (OSTI)

    Barmak, Katayun [Department of Applied Physics and Applied Mathematics, Columbia University, New York, New York 10027 and Department of Materials Science and Engineering and Materials Research Science and Engineering Center, Carnegie Mellon University, 5000 Forbes Avenue, Pittsburgh, Pennsylvania 15213 (United States); Darbal, Amith [Department of Materials Science and Engineering and Materials Research Science and Engineering Center, Carnegie Mellon University, 5000 Forbes Avenue, Pittsburgh, Pennsylvania 15213 (United States); Ganesh, Kameswaran J.; Ferreira, Paulo J. [Materials Science and Engineering, The University of Texas at Austin, 1 University Station, Austin, Texas 78712 (United States); Rickman, Jeffrey M. [Department of Materials Science and Engineering and Department of Physics, Lehigh University, Bethlehem, Pennsylvania 18015 (United States); Sun, Tik; Yao, Bo; Warren, Andrew P.; Coffey, Kevin R., E-mail: kb2612@columbia.edu [Department of Materials Science and Engineering, University of Central Florida, 4000 Central Florida Boulevard, Orlando, Florida 32816 (United States)

    2014-11-01

    The relative contributions of various defects to the measured resistivity in nanocrystalline Cu were investigated, including a quantitative account of twin-boundary scattering. It has been difficult to quantitatively assess the impact twin boundary scattering has on the classical size effect of electrical resistivity, due to limitations in characterizing twin boundaries in nanocrystalline Cu. In this study, crystal orientation maps of nanocrystalline Cu films were obtained via precession-assisted electron diffraction in the transmission electron microscope. These orientation images were used to characterize grain boundaries and to measure the average grain size of a microstructure, with and without considering twin boundaries. The results of these studies indicate that the contribution from grain-boundary scattering is the dominant factor (as compared to surface scattering) leading to enhanced resistivity. The resistivity data can be well described by the combined Fuchs-Sondheimer surface scattering model and Mayadas-Shatzkes grain-boundary scattering model using Matthiessen's rule with a surface specularity coefficient of p = 0.48 and a grain-boundary reflection coefficient of R = 0.26.
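
    A sketch of the combined model named above, using the standard Mayadas-Shatzkes closed form and the common thin-film approximation to Fuchs-Sondheimer rather than the full integral. Only p and R are the quoted values; the bulk resistivity, mean free path, grain size, and film thickness are assumed for illustration.

    ```python
    import math

    RHO0 = 1.72e-8      # bulk Cu resistivity, ohm*m (room temperature)
    LAMBDA = 39e-9      # electron mean free path in Cu, m (assumed)

    def mayadas_shatzkes(d: float, R: float = 0.26) -> float:
        """Grain-boundary resistivity for grain size d (m)."""
        alpha = (LAMBDA / d) * R / (1.0 - R)
        factor = 3.0 * (1.0 / 3.0 - alpha / 2.0 + alpha**2
                        - alpha**3 * math.log(1.0 + 1.0 / alpha))
        return RHO0 / factor

    def fuchs_sondheimer_excess(t: float, p: float = 0.48) -> float:
        """Approximate surface-scattering excess for thickness t (m)."""
        return RHO0 * (3.0 / 8.0) * (LAMBDA / t) * (1.0 - p)

    # Matthiessen's rule: add grain-boundary and surface contributions.
    d, t = 50e-9, 50e-9
    rho = mayadas_shatzkes(d) + fuchs_sondheimer_excess(t)
    print(f"predicted resistivity: {rho * 1e8:.2f} micro-ohm*cm")  # ~2.65
    ```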

  2. Low-frequency computational electromagnetics for antenna analysis

    SciTech Connect (OSTI)

    Miller, E.K.; Burke, G.J.

    1991-01-01

    An overview of low-frequency computational methods for modeling the electromagnetic characteristics of antennas is presented here. The article presents a brief analytical background and summarizes the essential ingredients of the method of moments for numerically solving low-frequency antenna problems. Some extensions to the basic models of perfectly conducting objects in free space are also summarized, followed by a consideration of some of the computational issues that affect model accuracy, efficiency, and utility. A variety of representative computations are then presented to illustrate various modeling aspects and capabilities that are currently available. A fairly extensive bibliography is included to suggest further reference material to the reader. 90 refs., 27 figs.

  3. Initial Business Case Analysis of Two Integrated Heat Pump HVAC Systems for Near-Zero-Energy Homes - Update to Include Evaluation of Impact of Including a Humidifier Option

    SciTech Connect (OSTI)

    Baxter, Van D

    2007-02-01

    The long range strategic goal of the Department of Energy's Building Technologies (DOE/BT) Program is to create, by 2020, technologies and design approaches that enable the construction of net-zero energy homes at low incremental cost (DOE/BT 2005). A net zero energy home (NZEH) is a residential building with greatly reduced needs for energy through efficiency gains, with the balance of energy needs supplied by renewable technologies. While initially focused on new construction, these technologies and design approaches are intended to apply to buildings constructed before 2020 as well, resulting in substantial reduction in energy use for all building types and ages. DOE/BT's Emerging Technologies (ET) team is working to support this strategic goal by identifying and developing advanced heating, ventilating, air-conditioning, and water heating (HVAC/WH) technology options applicable to NZEHs. In FY05 ORNL conducted an initial Stage 1 (Applied Research) scoping assessment of HVAC/WH system options for future NZEHs to help DOE/BT identify and prioritize alternative approaches for further development. Eleven system concepts with central air distribution ducting and nine multi-zone systems were selected, and their annual and peak demand performance estimated for five locations: Atlanta (mixed-humid), Houston (hot-humid), Phoenix (hot-dry), San Francisco (marine), and Chicago (cold). Performance was estimated by simulating the systems using the TRNSYS simulation engine (Solar Energy Laboratory et al. 2006) in two 1800-ft² houses: a Building America (BA) benchmark house and a prototype NZEH taken from BEopt results at the take-off (or crossover) point (i.e., a house incorporating those design features such that further progress towards ZEH is through the addition of photovoltaic power sources, as determined by current BEopt analyses conducted by NREL). Results were summarized in a project report, HVAC Equipment Design Options for Near-Zero-Energy Homes--A Stage 2 Scoping Assessment, ORNL/TM-2005/194 (Baxter 2005). The 2005 study report describes the HVAC options considered, the ranking criteria used, and the system rankings by priority. In 2006, the two top-ranked options from the 2005 study, air-source and ground-source versions of a centrally ducted integrated heat pump (IHP) system, were subjected to an initial business case study. The IHPs were subjected to a more rigorous hourly-based assessment of their performance potential compared to a baseline suite of equipment of legally minimum efficiency that provided the same heating, cooling, water heating, demand dehumidification, and ventilation services as the IHPs. Results were summarized in a project report, Initial Business Case Analysis of Two Integrated Heat Pump HVAC Systems for Near-Zero-Energy Homes, ORNL/TM-2006/130 (Baxter 2006a). The present report is an update to that document, summarizing results of an analysis of the impact of adding a humidifier to the HVAC system to maintain minimum levels of space relative humidity (RH) in winter. The space RH in winter has a direct impact on occupant comfort and on control of dust mites, many types of disease bacteria, and 'dry air' electric shocks. Chapter 8 in ASHRAE's 2005 Handbook of Fundamentals suggests a 30% lower limit on RH for indoor temperatures in the range of approximately 68-69°F based on comfort (ASHRAE 2005). Table 3 in chapter 9 of the same reference suggests a 30-55% RH range for winter, as established by a Canadian study of exposure limits for residential indoor environments (EHD 1987). Harriman et al. (2001) note that at RH levels of 35% or higher electrostatic shocks are minimized, and that dust mites cannot live at RH levels below 40%. They also indicate that the life spans of many disease bacteria are minimized when space RH is held within a 30-60% range. From the foregoing, it is reasonable to assume that a winter space RH range of 30-40% would be an acceptable compromise between comfort considerations and limitation of growth rates for dust mites and many bacteria. In addition, this report documents corrections made to the simulation models: errors were fixed in the TRNSYS building model for Atlanta and in the refrigerant pressure drop calculation in the water-to-refrigerant evaporator module of the ORNL Heat Pump Design Model (HPDM) used for the IHP analyses. These changes resulted in some minor differences between IHP performance as reported in Baxter (2006a) and in this report.

  4. Engineering Analysis of Intermediate Loop and Process Heat Exchanger Requirements to Include Configuration Analysis and Materials Needs

    SciTech Connect (OSTI)

    T.M. Lillo; R.L. Williamson; T.R. Reed; C.B. Davis; D.M. Ginosar

    2005-09-01

    The need to locate advanced hydrogen production facilities a finite distance away from a nuclear power source necessitates an intermediate heat transport loop (IHTL). This IHTL must not only efficiently transport energy over distances up to 500 meters but must also be capable of operating at high temperatures (>850°C) for many years. High temperature, long term operation raises concerns of material strength, creep resistance, and general material stability (corrosion resistance). IHTL design is currently in the initial stages. Many questions remain to be answered before intelligent design can begin. This report begins to look at some of the issues surrounding the main components of an IHTL. Specifically, a stress analysis of a compact heat exchanger design under expected operating conditions is reported. Also, the results of a thermal analysis performed on two IHTL pipe configurations for different heat transport fluids are presented. The configurations consist of separate hot supply and cold return legs, as well as an annular design in which the hot fluid is carried in an inner pipe and the cold return fluid travels in the opposite direction in the annular space around the hot pipe. The effects of insulation configurations on pipe configuration performance are also reported. Finally, a simple analysis of two different process heat exchanger designs, one a tube-in-shell type and the other a compact or microchannel reactor, is evaluated in light of catalyst requirements. Important insights into the critical areas of research and development are gained from these analyses, guiding the direction of future areas of research.

  5. A system analysis computer model for the High Flux Isotope Reactor (HFIRSYS Version 1)

    SciTech Connect (OSTI)

    Sozer, M.C.

    1992-04-01

    A system transient analysis computer model (HFIRSYS) has been developed for analysis of small break loss of coolant accidents (LOCA) and operational transients. The computer model is based on the Advanced Continuous Simulation Language (ACSL), which produces the FORTRAN code automatically and provides integration routines such as Gear's stiff algorithm, along with numerous practical tools for generating eigenvalues, producing debug output, graphics capabilities, etc. The HFIRSYS computer code is structured in the form of the Modular Modeling System (MMS) code. Component modules from MMS and in-house developed modules were both used to configure HFIRSYS. A description of the High Flux Isotope Reactor, theoretical bases for the modeled components of the system, and the verification and validation efforts are reported. The computer model performs satisfactorily, including cases in which the effect of structural elasticity on the system pressure is significant; however, its capabilities are limited to single phase flow. Because of the modular structure, new component models from the Modular Modeling System can easily be added to HFIRSYS for analyzing their effects on the system's behavior. The computer model is a versatile tool for studying various system transients. The intent of this report is not to be a user's manual, but to provide the theoretical bases and basic information about the computer model and the reactor.
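
    ACSL and its Gear integrator are not shown in the record; as a modern stand-in for the same pattern, the sketch below integrates a toy stiff fast/slow system with scipy's BDF method, a Gear-type stiff solver. The system itself is invented, loosely echoing a fast pressure response coupled to a slow thermal one.

    ```python
    # Stiff ODE integration with a BDF (Gear-type) method.
    import numpy as np
    from scipy.integrate import solve_ivp

    def rhs(t, y):
        fast, slow = y
        return [-1000.0 * (fast - np.cos(t)),   # stiff, rapidly relaxing
                0.1 * (fast - slow)]            # slow system response

    sol = solve_ivp(rhs, (0.0, 20.0), [0.0, 0.0], method="BDF", rtol=1e-6)
    print(f"{sol.nfev} RHS evaluations, final state = {sol.y[:, -1]}")
    ```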

  6. computers

    National Nuclear Security Administration (NNSA)

    Each successive generation of computing system has provided greater computing power and energy efficiency.

    CTS-1 clusters will support NNSA's Life Extension Program and...

  7. Computing

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Computing: Providing world-class high performance computing capability that enables unsurpassed solutions to complex problems of strategic national interest. Los Alamos National Laboratory sits on top of a once-remote mesa in northern New Mexico with the Jemez mountains as a backdrop to research and innovation covering multiple disciplines from bioscience, sustainable

  8. Numerical power balance and free energy loss analysis for solar cells including optical, thermodynamic, and electrical aspects

    SciTech Connect (OSTI)

    Greulich, Johannes; Höffler, Hannes; Würfel, Uli; Rein, Stefan

    2013-11-28

    A method for analyzing the power losses of solar cells is presented, supplying a complete balance of the incident power, the optical, thermodynamic, and electrical power losses, and the electrical output power. The quantities involved have the dimension of a power density (units: W/m²), which permits their direct comparison. In order to avoid the over-representation of losses arising from the ultraviolet part of the solar spectrum, a method for the analysis of the electrical free energy losses is extended to include optical losses. This extended analysis does not take the incident solar power of, e.g., 1000 W/m² as its reference and does not explicitly include the thermalization losses and losses due to the generation of entropy. Instead, the usable power, i.e., the free energy or electro-chemical potential of the electron-hole pairs, is set as the reference value, thereby overcoming the ambiguities of the power balance. Both methods, the power balance and the free energy loss analysis, are carried out exemplarily for a monocrystalline p-type silicon metal wrap through solar cell with passivated emitter and rear (MWT-PERC) based on optical and electrical measurements and numerical modeling. The methods give interesting insights into photovoltaic (PV) energy conversion, provide quantitative analyses of all loss mechanisms, and supply the basis for the systematic technological improvement of the device.

  9. Data analysis using the Gnu R system for statistical computation

    SciTech Connect (OSTI)

    Simone, James (Fermilab)

    2011-07-01

    R is a language system for statistical computation. It is widely used in statistics, bioinformatics, machine learning, data mining, quantitative finance, and the analysis of clinical drug trials. Among the advantages of R are: it has become the standard language for developing statistical techniques, it is actively developed by a large and growing global user community, it is open source software, it is highly portable (Linux, OS X, and Windows), it has a built-in documentation system, it produces high-quality graphics, and it is easily extensible, with over four thousand extension library packages available covering statistics and applications. This report gives a very brief introduction to R with some examples using lattice QCD simulation results. It then discusses the development of R packages designed for chi-square minimization fits for lattice n-point correlation functions.
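
    The report itself works in R; as a language-neutral illustration of the chi-square minimization fit it describes, the sketch below fits a single-exponential model to a toy two-point correlator in Python. The data are synthetic, and a real lattice analysis would use a jackknife or bootstrap covariance estimate.

      import numpy as np
      from scipy.optimize import curve_fit

      rng = np.random.default_rng(0)
      t = np.arange(1, 11)
      c = 1.3 * np.exp(-0.45 * t) * (1.0 + 0.01 * rng.standard_normal(t.size))
      sigma = 0.01 * c

      def model(t, a, m):          # C(t) = A exp(-m t)
          return a * np.exp(-m * t)

      popt, pcov = curve_fit(model, t, c, p0=(1.0, 0.5), sigma=sigma,
                             absolute_sigma=True)
      chi2 = np.sum(((c - model(t, *popt)) / sigma) ** 2)
      print(popt, chi2 / (t.size - 2))   # fitted (A, m) and chi-square per dof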

  10. Computational analysis of azine-N-oxides as energetic materials

    SciTech Connect (OSTI)

    Ritchie, J.P.

    1994-05-01

    A BKW equation of state in a 1-dimensional hydrodynamic simulation of the cylinder test can be used to estimate the performance of explosives. Using this approach, the novel explosive 1,4-diamino-2,3,5,6-tetrazine-2,5-dioxide (TZX) was analyzed. Despite a high detonation velocity and a predicted CJ pressure comparable to that of RDX, TZX performs relatively poorly in the cylinder test. Theoretical and computational analysis shows this to be the result of a low heat of detonation. A conceptual strategy is proposed to remedy this problem. In order to predict the required heats of formation, new ab initio group equivalents were developed. Crystal structure calculations are also described that show hydrogen-bonding is important in determining the density of TZX and related compounds.

  11. Computer analysis of sodium cold trap design and performance [LMFBR]

    SciTech Connect (OSTI)

    McPheeters, C.C.; Raue, D.J.

    1983-11-01

    Normal steam-side corrosion of steam-generator tubes in Liquid Metal Fast Breeder Reactors (LMFBRs) results in liberation of hydrogen, and most of this hydrogen diffuses through the tubes into the heat-transfer sodium and must be removed by the purification system. Cold traps are normally used to purify sodium, and they operate by cooling the sodium to temperatures near the melting point, where soluble impurities including hydrogen and oxygen precipitate as NaH and Na₂O, respectively. A computer model was developed to simulate the processes that occur in sodium cold traps. The Model for Analyzing Sodium Cold Traps (MASCOT) simulates any desired configuration of mesh arrangements and dimensions and calculates pressure drops and flow distributions, temperature profiles, impurity concentration profiles, and impurity mass distributions.

  12. Analysis of magnetic probe signals including effect of cylindrical conducting wall for field-reversed configuration experiment

    SciTech Connect (OSTI)

    Ikeyama, Taeko; Hiroi, Masanori; Nemoto, Yuuichi; Nogi, Yasuyuki

    2008-06-15

    A confinement field is disturbed by magnetohydrodynamic (MHD) motions of a field-reversed configuration (FRC) plasma in a cylindrical conductor. The effect of the conductor should be included to obtain the spatial structure of the disturbed field with good precision. For this purpose, the toroidal current in the plasma and the eddy current on the conducting wall are replaced by magnetic dipole and image magnetic dipole moments, respectively. Typical spatial structures of the disturbed field are calculated by using the dipole moments for such MHD motions as radial shift, internal tilt, external tilt, and n=2 mode deformation. Then, analytic formulas for estimating the shift distance, tilt angle, and deformation rate of the MHD motions from magnetic probe signals are derived. It is estimated from the calculations using the dipole moments that the analytic formulas include an approximately 40% error. Two kinds of experiments are carried out to investigate the reliability of the calculations. First, the magnetic field produced by a circular current is measured in an aluminum pipe to confirm the replacement of the eddy current with image magnetic dipole moments. The measured fields coincide well with the values calculated including the image magnetic dipole moments. Second, magnetic probe signals measured from the FRC plasma are substituted into the analytic formulas to obtain the shift distance and deformation rate. The experimental results are compared to the MHD motions measured using radiation from the plasma. If the error included in the analytic formulas and the difference between the magnetic and optical structures in the plasma are considered, the results of the radiation measurement agree well with those of the magnetic analysis.
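
    The building block of the replacement scheme described is the field of a point magnetic dipole. The sketch below evaluates it in Python and adds one image-dipole contribution; the moments and positions are arbitrary examples, not the paper's geometry.

      import numpy as np

      def dipole_field(m, r_src, r_obs):
          """B of a point dipole m (A m^2) at r_src, evaluated at r_obs (SI units)."""
          r = np.asarray(r_obs, float) - np.asarray(r_src, float)
          rmag = np.linalg.norm(r)
          rhat = r / rmag
          return 1e-7 * (3.0 * np.dot(m, rhat) * rhat - m) / rmag**3  # mu0/4pi = 1e-7

      m = np.array([0.0, 0.0, 1.0])
      b = (dipole_field(m, [0.0, 0.0, 0.0], [0.05, 0.0, 0.1])
           + dipole_field(0.8 * m, [0.3, 0.0, 0.0], [0.05, 0.0, 0.1]))  # image term
      print(b)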

  13. Computations

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

  14. Computer

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    This paper presents several computational tools required for processing images of a heavy ion beam and estimating the magnetic field within a plasma. The...

  15. computers

    National Nuclear Security Administration (NNSA)

    Retired computers used for cybersecurity research at Sandia National...

  16. Thermodynamic analysis of a possible CO₂-laser plant included in a heat engine cycle

    SciTech Connect (OSTI)

    Bisio, G.; Rubatto, G.

    1998-07-01

    In recent years, several plants have been built in industrialized countries to recover pressure exergy from various fluids by means of suitable turbines, in particular for blast-furnace top gas and natural gas. Various papers have examined the topic, considering pros and cons. High-power CO₂ lasers are more and more widely used for welding, drilling, and cutting in machine shops. In the near future, different kinds of metal surface treatments will probably become routine practice with laser units. The industries benefiting most from high-power lasers will be the automotive industry, shipbuilding, the offshore industry, the aerospace industry, and the nuclear and chemical processing industries. Both degradation and cooling problems may be alleviated by allowing the gas to flow through the laser tube and by reducing its pressure outside this tube. Thus, a thermodynamic analysis of high-power CO₂ lasers, with particular reference to possible energy recovery, is justified. In previous papers, the critical examination of the concept of efficiency has led one of the present authors to the definition of an operational domain in which the process can be achieved. This domain is bounded by regions of no entropy production (upper limit) and no useful effects (lower limit). On the basis of these concepts, and of what has been done for pressure exergy recovery from other fluids, exergy investigations and an analysis of losses are performed for a cyclic process including a high-performance CO₂ laser. Thermodynamic analysis of flow processes in a CO₂-laser plant shows that the inclusion of a turbine in the plant allows most of the exergy required by the compressor to be recovered; in addition, the water consumption for refrigeration in the heat exchanger is reduced.
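
    The turbine-recovery estimate at the heart of the analysis can be illustrated with the ideal isentropic expansion work per unit mass of gas. The Python sketch below is generic, with placeholder gas properties and states, not the plant conditions examined in the paper.

      def isentropic_turbine_work(t_in, p_in, p_out, cp=1040.0, gamma=1.4, eta=0.85):
          """Specific work (J/kg) recovered expanding a gas from p_in to p_out."""
          t_out = t_in * (p_out / p_in) ** ((gamma - 1.0) / gamma)
          return eta * cp * (t_in - t_out)

      print(isentropic_turbine_work(t_in=450.0, p_in=1.0e5, p_out=1.0e4), "J/kg")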

  17. Sodium fast reactor gaps analysis of computer codes and models for accident analysis and reactor safety.

    SciTech Connect (OSTI)

    Carbajo, Juan; Jeong, Hae-Yong; Wigeland, Roald; Corradini, Michael; Schmidt, Rodney Cannon; Thomas, Justin; Wei, Tom; Sofu, Tanju; Ludewig, Hans; Tobita, Yoshiharu; Ohshima, Hiroyuki; Serre, Frederic

    2011-06-01

    This report summarizes the results of an expert-opinion elicitation activity designed to qualitatively assess the status and capabilities of currently available computer codes and models for accident analysis and reactor safety calculations of advanced sodium fast reactors, and to identify important gaps. The twelve-member panel consisted of representatives from five U.S. National Laboratories (SNL, ANL, INL, ORNL, and BNL), the University of Wisconsin, KAERI, JAEA, and CEA. The major portion of this elicitation activity occurred during a two-day meeting held on Aug. 10-11, 2010 at Argonne National Laboratory. There were two primary objectives of this work: (1) identify computer codes currently available for SFR accident analysis and reactor safety calculations; and (2) assess the status and capability of current US computer codes to adequately model the required accident scenarios and associated phenomena, and identify important gaps. During the review, panel members identified over 60 computer codes that are currently available in the international community to perform different aspects of SFR safety analysis for various event scenarios and accident categories. A brief description of each of these codes together with references (when available) is provided. An adaptation of the Predictive Capability Maturity Model (PCMM) for computational modeling and simulation is described for use in this work. The panel's assessment of the available US codes is presented in the form of nine tables, organized into groups of three for each of three risk categories considered: anticipated operational occurrences (AOOs), design basis accidents (DBAs), and beyond design basis accidents (BDBAs). A set of summary conclusions is drawn from the results obtained. At the highest level, the panel judged that current US code capabilities are adequate for licensing given reasonable margins, but expressed concern that US code development activities had stagnated and that the experienced user base and the experimental validation base were decaying away quickly.

  18. Computational Fluid Dynamics Analysis of Flexible Duct Junction Box Design

    SciTech Connect (OSTI)

    Beach, Robert; Prahl, Duncan; Lange, Rich

    2013-12-01

    IBACOS explored the relationships between pressure and physical configurations of flexible duct junction boxes by using computational fluid dynamics (CFD) simulations to predict individual box parameters and total system pressure, thereby ensuring improved HVAC performance. Current Air Conditioning Contractors of America (ACCA) guidance (Group 11, Appendix 3, ACCA Manual D, Rutkowski 2009) allows for unconstrained variation in the number of takeoffs, box sizes, and takeoff locations. The only variables currently used in selecting an equivalent length (EL) are velocity of air in the duct and friction rate, given the first takeoff is located at least twice its diameter away from the inlet. This condition does not account for other factors impacting pressure loss across these types of fittings. For each simulation, the IBACOS team converted pressure loss within a box to an EL to compare variation in ACCA Manual D guidance to the simulated variation. IBACOS chose cases to represent flows reasonably correlating to flows typically encountered in the field and analyzed differences in total pressure due to increases in number and location of takeoffs, box dimensions, and velocity of air, and whether an entrance fitting is included. The team also calculated additional balancing losses for all cases due to discrepancies between intended outlet flows and natural flow splits created by the fitting. In certain asymmetrical cases, the balancing losses were significantly higher than symmetrical cases where the natural splits were close to the targets. Thus, IBACOS has shown additional design constraints that can ensure better system performance.
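
    The conversion step the team applied to each simulation is simple: a fitting's simulated pressure loss is expressed as the length of straight duct that would lose the same pressure at the design friction rate. A minimal Python sketch follows; the values are illustrative, not from the study.

      def equivalent_length_ft(dp_in_wc, friction_rate_in_wc_per_100ft):
          """ACCA Manual D style equivalent length from a simulated pressure drop."""
          return 100.0 * dp_in_wc / friction_rate_in_wc_per_100ft

      print(equivalent_length_ft(0.025, 0.08), "ft of equivalent straight duct")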

  19. Analysis of gallium arsenide deposition in a horizontal chemical vapor deposition reactor using massively parallel computations

    SciTech Connect (OSTI)

    Salinger, A.G.; Shadid, J.N.; Hutchinson, S.A.

    1998-01-01

    A numerical analysis of the deposition of gallium from trimethylgallium (TMG) and arsine in a horizontal CVD reactor with a tilted susceptor and a three-inch-diameter rotating substrate is performed. The three-dimensional model includes complete coupling between fluid mechanics, heat transfer, and species transport, and is solved using an unstructured finite element discretization on a massively parallel computer. The effects of three operating parameters (the disk rotation rate, inlet TMG fraction, and inlet velocity) and two design parameters (the tilt angle of the reactor base and the reactor width) on the growth rate and uniformity are presented. The nonlinear dependence of the growth rate uniformity on the key operating parameters is discussed in detail. Efficient and robust algorithms for massively parallel reacting flow simulations, as incorporated into our analysis code MPSalsa, make detailed analysis of this complicated system feasible.

  20. Computational Challenges for Microbial Genome and Metagenome Analysis (2010 JGI/ANL HPC Workshop)

    ScienceCinema (OSTI)

    Mavrommatis, Kostas

    2011-06-08

    Kostas Mavrommatis of the DOE JGI gives a presentation on "Computational Challenges for Microbial Genome & Metagenome Analysis" at the JGI/Argonne HPC Workshop on January 26, 2010.

    1. SWAAM-LT: The long-term, sodium/water reaction analysis method computer code

      SciTech Connect (OSTI)

      Shin, Y.W.; Chung, H.H.; Wiedermann, A.H.; Tanabe, H.

      1993-01-01

      The SWAAM-LT Code, developed for analysis of the long-term effects of sodium/water reactions, is discussed. The theoretical formulation of the code is described, including the introduction of system matrices for ease of computer programming as a general system code. Also, some typical results of the code predictions for available large-scale tests are presented. Test data are available and analyzed for steam generator designs both with and without the cover-gas feature. The capabilities and limitations of the code are then discussed in light of the comparison between the code predictions and the test data.

    2. Computing

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Office of Advanced Scientific Computing Research in the Department of Energy Office of Science under contract number DE-AC02-05CH11231. Application and System Memory Use, ...

    3. Computations

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    4. NASTRAN-based computer program for structural dynamic analysis of horizontal axis wind turbines

      SciTech Connect (OSTI)

      Lobitz, D.W.

      1984-01-01

      This paper describes a computer program developed for structural dynamic analysis of horizontal axis wind turbines (HAWTs). It is based on the finite element method through its reliance on NASTRAN for the development of mass, stiffness, and damping matrices of the tower and rotor, which are treated in NASTRAN as separate structures. The tower is modeled in a stationary frame and the rotor in one rotating at a constant angular velocity. The two structures are subsequently joined together (external to NASTRAN) using a time-dependent transformation consistent with the hub configuration. Aerodynamic loads are computed with an established flow model based on strip theory. Aeroelastic effects are included by incorporating the local velocity and twisting deformation of the blade in the load computation. The turbulent nature of the wind, both in space and time, is modeled by adding in stochastic wind increments. The resulting equations of motion are solved in the time domain using the implicit Newmark-Beta integrator. Preliminary comparisons with data from the Boeing/NASA MOD2 HAWT indicate that the code is capable of accurately and efficiently predicting the response of HAWTs driven by turbulent winds.
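
      The paper's time integrator is the implicit Newmark-Beta scheme; the Python sketch below is a generic implementation of that scheme for M x'' + C x' + K x = F(t), not the NASTRAN-based code itself. With beta = 1/4 and gamma = 1/2 (average acceleration), the scheme is unconditionally stable.

        import numpy as np

        def newmark_beta(M, C, K, F, x0, v0, dt, beta=0.25, gamma=0.5):
            """Integrate M x'' + C x' + K x = F; F is a (nsteps, ndof) array."""
            n, ndof = F.shape
            x = np.zeros((n, ndof)); v = np.zeros((n, ndof)); a = np.zeros((n, ndof))
            x[0], v[0] = x0, v0
            a[0] = np.linalg.solve(M, F[0] - C @ v0 - K @ x0)
            Keff = K + gamma / (beta * dt) * C + M / (beta * dt**2)
            for i in range(n - 1):
                feff = (F[i + 1]
                        + M @ (x[i] / (beta * dt**2) + v[i] / (beta * dt)
                               + (1.0 / (2.0 * beta) - 1.0) * a[i])
                        + C @ (gamma * x[i] / (beta * dt)
                               + (gamma / beta - 1.0) * v[i]
                               + dt * (gamma / (2.0 * beta) - 1.0) * a[i]))
                x[i + 1] = np.linalg.solve(Keff, feff)
                a[i + 1] = ((x[i + 1] - x[i]) / (beta * dt**2)
                            - v[i] / (beta * dt) - (1.0 / (2.0 * beta) - 1.0) * a[i])
                v[i + 1] = v[i] + dt * ((1.0 - gamma) * a[i] + gamma * a[i + 1])
            return x, v, a

        # Single-DOF usage example: sinusoidally forced, lightly damped oscillator.
        t = np.linspace(0.0, 10.0, 1001)
        F = np.sin(2.0 * t)[:, None]
        x, v, a = newmark_beta(np.eye(1), 0.1 * np.eye(1), 4.0 * np.eye(1),
                               F, np.zeros(1), np.zeros(1), t[1] - t[0])
        print(x[-1])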

    5. THE SAP3 COMPUTER PROGRAM FOR QUANTITATIVE MULTIELEMENT ANALYSIS BY ENERGY DISPERSIVE X-RAY FLUORESCENCE

      SciTech Connect (OSTI)

      Nielson, K. K.; Sanders, R. W.

      1982-04-01

      SAP3 is a dual-function FORTRAN computer program which performs peak analysis of energy-dispersive x-ray fluorescence spectra and then quantitatively interprets the results of the multielement analysis. It was written for monochromatic or bichromatic excitation, such as from an isotopic or secondary excitation source, and uses the separate incoherent and coherent backscatter intensities to define the bulk sample matrix composition. This composition is used in performing fundamental-parameter matrix corrections for self-absorption, enhancement, and particle-size effects, obviating the need for specific calibrations for a given sample matrix. The generalized calibration is based on a set of thin-film sensitivities, which are stored in a library disk file and used for all sample matrices and thicknesses. Peak overlap factors are also determined from the thin-film standards and are stored in the library for calculating peak overlap corrections. A detailed description is given of the algorithms and program logic, and the program listing and flow charts are also provided. An auxiliary program, SPCAL, is also given for use in calibrating the backscatter intensities. SAP3 provides numerous analysis options via seventeen control switches, which give flexibility in performing the calculations best suited to the sample and the user's needs. User input may be limited to the name of the library, the analysis livetime, and the spectrum filename and location. Output includes all peak analysis information, matrix correction factors, and element concentrations, uncertainties, and detection limits. Twenty-four elements are typically determined from a 1024-channel spectrum in one to two minutes using a PDP-11/34 computer operating under RSX-11M.
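
      The quantitative step such programs perform can be illustrated by the standard thin-sample self-absorption correction applied to a sensitivity calibration. The Python sketch below is generic; the intensity, sensitivity, and attenuation values are placeholders, not SAP3 library data.

        import math

        def concentration(intensity, sensitivity, mu_rho, rho_d):
            """Element concentration from a peak intensity.
            mu_rho: total mass attenuation coefficient (cm2/g) at the excitation
            and fluorescence energies; rho_d: sample mass thickness (g/cm2)."""
            absorption = (1.0 - math.exp(-mu_rho * rho_d)) / (mu_rho * rho_d)
            return intensity / (sensitivity * rho_d * absorption)

        print(concentration(intensity=5200.0, sensitivity=880.0,
                            mu_rho=12.0, rho_d=0.05))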

    6. Tuning and Analysis Utilities (TAU) | Argonne Leadership Computing Facility

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      The TAU (Tuning and Analysis Utilities) Performance System is a portable profiling and tracing toolkit for performance analysis of parallel programs written in Fortran, C, C++, Java, and Python. TAU gathers performance information while a program executes through instrumentation of...

    7. Computational design and analysis of flatback airfoil wind tunnel experiment.

      SciTech Connect (OSTI)

      Mayda, Edward A.; van Dam, C.P.; Chao, David D.; Berg, Dale E.

      2008-03-01

      A computational fluid dynamics study of thick wind turbine section shapes in the test section of the UC Davis wind tunnel at a chord Reynolds number of one million is presented. The goals of this study are to validate standard wind tunnel wall corrections for high solid blockage conditions and to reaffirm the favorable effect of a blunt trailing edge or flatback on the performance characteristics of a representative thick airfoil shape prior to building the wind tunnel models and conducting the experiment. The numerical simulations prove the standard wind tunnel corrections to be largely valid for the proposed test of 40% maximum thickness to chord ratio airfoils at a solid blockage ratio of 10%. Comparison of the computed lift characteristics of a sharp trailing edge baseline airfoil and derived flatback airfoils reaffirms the earlier observed trend of reduced sensitivity to surface contamination with increasing trailing edge thickness.

    8. Computer analysis of the thermohydraulic measurements on CEA...

      Office of Scientific and Technical Information (OSTI)

      Subject: 70 PLASMA PHYSICS AND FUSION; 66 PHYSICS; SUPERCONDUCTING CABLES; HELIUM DILUTION REFRIGERATION; HEAT TRANSFER; HYDRAULICS; FLOW MODELS; NUMERICAL ANALYSIS; ITER TOKAMAK; ...

    9. Modeling and Analysis of a Lunar Space Reactor with the Computer Code RELAP5-3D/ATHENA

      Office of Scientific and Technical Information (OSTI)

      The transient analysis 3-dimensional (3-D) computer code RELAP5-3D/ATHENA has been employed to model and analyze a space reactor of 180 kW(thermal), 40 kW (net, electrical) with eight Stirling engines (SEs). Each SE...

    10. Computational Proteomics: High-throughput Analysis for Systems Biology

      SciTech Connect (OSTI)

      Cannon, William R.; Webb-Robertson, Bobbie-Jo M.

      2007-01-03

      High-throughput (HTP) proteomics is a rapidly developing field that offers the global profiling of proteins from a biological system. The HTP technological advances are fueling a revolution in biology, enabling analyses at the scale of entire systems (e.g., whole cells, tumors, or environmental communities). However, simply identifying the proteins in a cell is insufficient for understanding the underlying complexity and operating mechanisms of the overall system. Systems-level investigations are relying more and more on computational analyses, especially in the field of proteomics, which generates large-scale global data.

    11. Routing performance analysis and optimization within a massively parallel computer

      DOE Patents [OSTI]

      Archer, Charles Jens; Peters, Amanda; Pinnow, Kurt Walter; Swartz, Brent Allen

      2013-04-16

      An apparatus, program product and method optimize the operation of a massively parallel computer system by, in part, receiving actual performance data concerning an application executed by the plurality of interconnected nodes, and analyzing the actual performance data to identify an actual performance pattern. A desired performance pattern may be determined for the application, and an algorithm may be selected from among a plurality of algorithms stored within a memory, the algorithm being configured to achieve the desired performance pattern based on the actual performance data.

    12. Application of the Computer Program SASSI for Seismic SSI Analysis of WTP Facilities

      Broader source: Energy.gov [DOE]

      Application of the Computer Program SASSI for Seismic SSI Analysis of WTP Facilities, by Farhang Ostadan (BNI) and Raman Venkata (DOE-WTP-WED), presented by Lisa Anderson (BNI), US DOE NPH Workshop, October 25, 2011.

    13. High Performance Computing for Sequence Analysis (2010 JGI/ANL HPC Workshop)

      SciTech Connect (OSTI)

      Oehmen, Chris [PNNL]

      2010-01-25

      Chris Oehmen of the Pacific Northwest National Laboratory gives a presentation on "High Performance Computing for Sequence Analysis" at the JGI/Argonne HPC Workshop on January 25, 2010.

    14. High Performance Computing for Sequence Analysis (2010 JGI/ANL HPC Workshop)

      ScienceCinema (OSTI)

      Oehmen, Chris [PNNL]

      2011-06-08

      Chris Oehmen of the Pacific Northwest National Laboratory gives a presentation on "High Performance Computing for Sequence Analysis" at the JGI/Argonne HPC Workshop on January 25, 2010.

    15. Computer Modeling of Violent Intent: A Content Analysis Approach

      SciTech Connect (OSTI)

      Sanfilippo, Antonio P.; McGrath, Liam R.; Bell, Eric B.

      2014-01-03

      We present a computational approach to modeling the intent of a communication source representing a group or an individual to engage in violent behavior. Our aim is to identify and rank aspects of radical rhetoric that are endogenously related to violent intent to predict the potential for violence as encoded in written or spoken language. We use correlations between contentious rhetoric and the propensity for violent behavior found in documents from radical terrorist and non-terrorist groups and individuals to train and evaluate models of violent intent. We then apply these models to unseen instances of linguistic behavior to detect signs of contention that have a positive correlation with violent intent factors. Of particular interest is the application of violent intent models to social media, such as Twitter, that have proved to serve as effective channels in furthering sociopolitical change.
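
      As a generic illustration of the supervised text-classification pattern described (train on labeled rhetoric, then score unseen documents), the Python sketch below uses scikit-learn. The two-document corpus and labels are toy placeholders, not the study's models or data.

        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.linear_model import LogisticRegression
        from sklearn.pipeline import make_pipeline

        train_docs = ["rhetoric with contentious, confrontational language",
                      "routine community announcement about scheduling"]
        labels = [1, 0]                       # 1 = contentious, 0 = benign (toy)

        model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                              LogisticRegression())
        model.fit(train_docs, labels)
        print(model.predict_proba(["unseen document text to score"])[:, 1])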

    16. MHK technologies include current energy conversion

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      technologies include current energy conversion (CEC) devices, e.g., hydrokinetic turbines that extract power from water currents (riverine, tidal, and ocean) and wave energy conversion (WEC) devices that extract power from wave motion. Sandia's MHK research leverages decades of experience in engineering and design and analysis (D&A) of wind power technologies, and its vast research complex, including high-performance computing (HPC), advanced materials and coatings, nondestructive

    17. Code manual for CONTAIN 2.0: A computer code for nuclear reactor containment analysis

      SciTech Connect (OSTI)

      Murata, K.K.; Williams, D.C.; Griffith, R.O.; Gido, R.G.; Tadios, E.L.; Davis, F.J.; Martinez, G.M.; Washington, K.E. (Sandia National Labs., Albuquerque, NM); Tills, J. (J. Tills and Associates, Inc., Sandia Park, NM)

      1997-12-01

      The CONTAIN 2.0 computer code is an integrated analysis tool used for predicting the physical conditions, chemical compositions, and distributions of radiological materials inside a containment building following the release of material from the primary system in a light-water reactor accident. It can also predict the source term to the environment. CONTAIN 2.0 is intended to replace the earlier CONTAIN 1.12, which was released in 1991. The purpose of this Code Manual is to provide full documentation of the features and models in CONTAIN 2.0. Besides complete descriptions of the models, this Code Manual provides a complete description of the input and output from the code. CONTAIN 2.0 is a highly flexible and modular code that can run problems that are either quite simple or highly complex. An important aspect of CONTAIN is that the interactions among thermal-hydraulic phenomena, aerosol behavior, and fission product behavior are taken into account. The code includes atmospheric models for steam/air thermodynamics, intercell flows, condensation/evaporation on structures and aerosols, aerosol behavior, and gas combustion. It also includes models for reactor cavity phenomena such as core-concrete interactions and coolant pool boiling. Heat conduction in structures, fission product decay and transport, radioactive decay heating, and the thermal-hydraulic and fission product decontamination effects of engineered safety features are also modeled. To the extent possible, the best available models for severe accident phenomena have been incorporated into CONTAIN, but it is intrinsic to the nature of accident analysis that significant uncertainty exists regarding numerous phenomena. In those cases, sensitivity studies can be performed with CONTAIN by means of user-specified input parameters. Thus, the code can be viewed as a tool designed to assist the knowledgeable reactor safety analyst in evaluating the consequences of specific modeling assumptions.

    18. Computational Fluid Dynamics-Aided Analysis of a Hydride Vapor Phase Epitaxy Reactor

      Office of Scientific and Technical Information (OSTI)

      Authors: Schulte, Kevin L.; Simon, John; Roy, Abhra; Reedy, Robert C.; Young, David L.; Kuech, Thomas F.; Ptak, Aaron J. Publication Date: 2016-01-15. OSTI Identifier: 1233143. Report Number(s): NREL/JA-5J00-64594.

    19. Technical support document: Energy conservation standards for consumer products: Dishwashers, clothes washers, and clothes dryers including: Environmental impacts; regulatory impact analysis

      SciTech Connect (OSTI)

      Not Available

      1990-12-01

      The Energy Policy and Conservation Act as amended (P.L. 94-163) establishes energy conservation standards for 12 of the 13 types of consumer products specifically covered by the Act. The legislation requires the Department of Energy (DOE) to consider new or amended standards for these and other types of products at specified times. This Technical Support Document presents the methodology, data, and results from the analysis of the energy and economic impacts of standards on dishwashers, clothes washers, and clothes dryers. The economic impact analysis is performed in five major areas: an Engineering Analysis, which establishes technical feasibility and product attributes, including the costs of design options to improve appliance efficiency; a Consumer Analysis at two levels, national aggregate impacts and impacts on individuals, where the national aggregate impacts include forecasts of appliance sales, efficiencies, energy use, and consumer expenditures, and the individual impacts are analyzed by Life-Cycle Cost (LCC), Payback Period, and Cost of Conserved Energy (CCE), which evaluate the savings in operating expenses relative to increases in purchase price; a Manufacturer Analysis, which estimates manufacturers' response to the proposed standards, quantified by changes in several measures of financial performance for a firm, together with an Industry Impact Analysis showing financial and competitive impacts on the appliance industry; a Utility Analysis, which measures the impacts of the altered energy-consumption patterns on electric utilities; and an Environmental Effects Analysis, which estimates changes in emissions of carbon dioxide, sulfur oxides, and nitrogen oxides due to reduced energy consumption in the home and at the power plant. A Regulatory Impact Analysis collects the results of all the analyses into the net benefits and costs from a national perspective. 47 figs., 171 tabs. (JF)

    20. Integrated State Estimation and Contingency Analysis Software Implementation using High Performance Computing Techniques

      SciTech Connect (OSTI)

      Chen, Yousu; Glaesemann, Kurt R.; Rice, Mark J.; Huang, Zhenyu

      2015-12-31

      Power system simulation tools have traditionally been developed in sequential mode, with codes optimized for single-core computing only. However, the increasing complexity of power grid models requires more intensive computation, and the traditional simulation tools will soon be unable to meet grid operation requirements. Therefore, power system simulation tools need to evolve accordingly to provide faster and better results for grid operations. This paper presents an integrated state estimation and contingency analysis software implementation using high performance computing techniques. The software is able to solve large state estimation problems within one second and achieves a near-linear speedup of 9,800 with 10,000 cores for the contingency analysis application. A performance evaluation is presented to show its effectiveness.

    1. Uncertainty Studies of Real Anode Surface Area in Computational Analysis for Molten Salt Electrorefining

      SciTech Connect (OSTI)

      Sungyeol Choi; Jaeyeong Park; Robert O. Hoover; Supathorn Phongikaroon; Michael F. Simpson; Kwang-Rag Kim; Il Soon Hwang

      2011-09-01

      This study examines how much the cell potential changes across five different assumed real anode surface area cases. Determining the real anode surface area is a significant issue to be resolved for precisely modeling molten salt electrorefining. Based on a three-dimensional electrorefining model, calculated cell potentials are compared with an experimental cell potential variation over 80 hours of operation of the Mark-IV electrorefiner with driver fuel from the Experimental Breeder Reactor II. We succeeded in achieving good agreement with the overall trend of the experimental data with appropriate selection of a model for the real anode surface area, but there are still local inconsistencies between theoretical calculation and experimental observation. In addition, the results were validated and compared with two-dimensional results to identify possible uncertainty factors that had to be further considered in a computational electrorefining analysis. These uncertainty factors include material properties, heterogeneous material distribution, surface roughness, and current efficiency. Zirconium's abundance and complex behavior have more impact on uncertainty towards the latter period of electrorefining for a given batch of fuel. The benchmark results found that anode materials would be dissolved from both the axial and radial directions, at least for low burn-up metallic fuels, after the active liquid sodium bonding was dissolved.

    2. RISKIND: An enhanced computer code for National Environmental Policy Act transportation consequence analysis

      SciTech Connect (OSTI)

      Biwer, B.M.; LePoire, D.J.; Chen, S.Y.

      1996-03-01

      The RISKIND computer program was developed for the analysis of radiological consequences and health risks to individuals and the collective population from exposures associated with the transportation of spent nuclear fuel (SNF) or other radioactive materials. The code is intended to provide scenario-specific analyses when evaluating alternatives for environmental assessment activities, including those for major federal actions involving radioactive material transport as required by the National Environmental Policy Act (NEPA). As such, rigorous procedures have been implemented to enhance the code's credibility, and strenuous efforts have been made to enhance its ease of use. To increase the code's reliability and credibility, a new version of RISKIND was produced under a quality assurance plan that covered code development and testing, and a peer review process was conducted. During development of the new version, the flexibility and ease of use of RISKIND were enhanced through several major changes: (1) a Windows™ point-and-click interface replaced the old DOS menu system, (2) the remaining model input parameters were added to the interface, (3) databases were updated, (4) the program output was revised, and (5) on-line help was added. RISKIND has been well received by users and has been established as a key component in radiological transportation risk assessments through its acceptance by the U.S. Department of Energy community in recent environmental impact statements (EISs) and its continued use in the current preparation of several EISs.

    3. INTELLIGENT COMPUTING SYSTEM FOR RESERVOIR ANALYSIS AND RISK ASSESSMENT OF THE RED RIVER FORMATION

      SciTech Connect (OSTI)

      Kenneth D. Luff

      2002-09-30

      Integrated software has been written that comprises the tool kit for the Intelligent Computing System (ICS). Luff Exploration Company is applying these tools for analysis of carbonate reservoirs in the southern Williston Basin. The integrated software programs are designed to be used by a small team consisting of an engineer, a geologist, and a geophysicist. The software tools are flexible and robust, allowing application to hydrocarbon reservoirs in many environments. Keystone elements of the software tools include clustering and neural-network techniques. The tools are used to transform seismic attribute data into reservoir characteristics such as storage (phi-h), probable oil-water contacts, structural depths, and structural growth history. When these reservoir characteristics are combined with neural-network or fuzzy-logic solvers, they can provide a more complete description of the reservoir. This leads to better estimates of hydrocarbons in place, areal limits, and the potential for infill or step-out drilling. These tools were developed and tested using seismic, geologic, and well data from the Red River Play in Bowman County, North Dakota and Harding County, South Dakota. The geologic setting for the Red River Formation is a shallow-shelf carbonate at a depth from 8000 to 10,000 ft.
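
      As a generic illustration of the workflow described (cluster seismic attributes, then apply a neural solver to estimate a reservoir property such as phi-h), the Python sketch below uses scikit-learn on synthetic placeholder data, not the ICS tools or the Williston Basin data themselves.

        import numpy as np
        from sklearn.cluster import KMeans
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(1)
        attrs = rng.normal(size=(200, 4))           # seismic attributes at wells
        phih = attrs @ np.array([0.5, -0.2, 0.1, 0.3]) + 0.05 * rng.normal(size=200)

        clusters = KMeans(n_clusters=3, n_init=10).fit_predict(attrs)
        X = np.column_stack([attrs, clusters])      # cluster label as a feature
        model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000).fit(X, phih)
        print(model.score(X, phih))                 # in-sample R^2 of the fit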

    4. INTELLIGENT COMPUTING SYSTEM FOR RESERVOIR ANALYSIS AND RISK ASSESSMENT OF THE RED RIVER FORMATION

      SciTech Connect (OSTI)

      Mark A. Sippel; William C. Carrigan; Kenneth D. Luff; Lyn Canter

      2003-11-12

      Integrated software has been written that comprises the tool kit for the Intelligent Computing System (ICS). The software tools in ICS have been developed for characterization of reservoir properties and evaluation of hydrocarbon potential using a combination of inter-disciplinary data sources such as geophysical, geologic and engineering variables. The ICS tools provide a means for logical and consistent reservoir characterization and oil reserve estimates. The tools can be broadly characterized as (1) clustering tools, (2) neural solvers, (3) multiple-linear regression, (4) entrapment-potential calculator and (5) file utility tools. ICS tools are extremely flexible in their approach and use, and applicable to most geologic settings. The tools are primarily designed to correlate relationships between seismic information and engineering and geologic data obtained from wells, and to convert or translate seismic information into engineering and geologic terms or units. It is also possible to apply ICS in a simple framework that may include reservoir characterization using only engineering, seismic, or geologic data in the analysis. ICS tools were developed and tested using geophysical, geologic and engineering data obtained from an exploitation and development project involving the Red River Formation in Bowman County, North Dakota and Harding County, South Dakota. Data obtained from 3D seismic surveys, and 2D seismic lines encompassing nine prospective field areas were used in the analysis. The geologic setting of the Red River Formation in Bowman and Harding counties is that of a shallow-shelf, carbonate system. Present-day depth of the Red River formation is approximately 8000 to 10,000 ft below ground surface. This report summarizes production results from well demonstration activity, results of reservoir characterization of the Red River Formation at demonstration sites, descriptions of ICS tools and strategies for their application.

    5. Analysis of the cracking behavior of Alloy 600 RVH penetrations. Part 1: Stress analysis and K computation

      SciTech Connect (OSTI)

      Bhandari, S.; Vagner, J.; Garriga-Majo, D.; Amzallag, C.; Faidy, C.

      1996-12-01

      The study presented here concerns the analysis of crack propagation behavior in the Alloy 600 RVH penetrations used in the French 900 and 1300 MWe PWR series. The damage mechanism identified is clearly SCC in the primary water environment. Consequently, the analysis presented here is based on: (1) the stress analysis carried out on the RVH penetrations, (2) the SCC model developed for the primary water environment at the operating temperatures, and (3) fracture mechanics concepts. The different steps involved in the study are: (1) evaluation of the stress state for the case of the peripheral configuration of RVH penetrations, the case retained here being that of a conic tube, with the stress analysis conducted by simulating multi-pass welding; (2) computation of the influence functions (IF) for a polynomial stress distribution in the case of a tube with an Ri/t ratio (inner radius/thickness) corresponding to that of an RVH penetration; (3) establishment of a propagation law based on a review of data available in the literature; (4) a parametric study of crack propagation using several initial defects; (5) analysis of the crack propagation of defects observed in various reactors and comparison with measured propagation rates. This paper (Part 1) deals with the first two steps, namely stress analysis and K computation.
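
      The influence-function step (step 2) can be illustrated generically: for a through-wall stress profile expressed as a polynomial sigma(x) = sum_j c_j (x/a)^j, the stress intensity factor is K = sqrt(pi a) sum_j G_j c_j. The Python sketch below uses placeholder influence coefficients; real G_j values depend on the Ri/t ratio and crack geometry computed in the paper.

        import math

        def stress_intensity(a, coeffs, influence):
            """K (Pa sqrt(m)) for crack depth a (m) and polynomial stress coeffs (Pa)."""
            return math.sqrt(math.pi * a) * sum(g * c for g, c in zip(influence, coeffs))

        K = stress_intensity(a=0.002,
                             coeffs=[350e6, -120e6, 40e6],     # Pa
                             influence=[1.12, 0.68, 0.52])     # placeholder G_j
        print(K / 1e6, "MPa sqrt(m)")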

    6. Computer code input for thermal hydraulic analysis of Multi-Function Waste Tank Facility Title II design

      SciTech Connect (OSTI)

      Cramer, E.R.

      1994-10-01

      The input files to the P/Thermal computer code are documented for the thermal hydraulic analysis of the Multi-Function Waste Tank Facility Title II design.

    7. Methods and apparatuses for information analysis on shared and distributed computing systems

      DOE Patents [OSTI]

      Bohn, Shawn J [Richland, WA; Krishnan, Manoj Kumar [Richland, WA; Cowley, Wendy E [Richland, WA; Nieplocha, Jarek [Richland, WA

      2011-02-22

      Apparatuses and computer-implemented methods for analyzing, on shared and distributed computing systems, information comprising one or more documents are disclosed according to some aspects. In one embodiment, information analysis can comprise distributing one or more distinct sets of documents among each of a plurality of processes, wherein each process performs operations on a distinct set of documents substantially in parallel with other processes. Operations by each process can further comprise computing term statistics for terms contained in each distinct set of documents, thereby generating a local set of term statistics for each distinct set of documents. Still further, operations by each process can comprise contributing the local sets of term statistics to a global set of term statistics, and participating in generating a major term set from an assigned portion of a global vocabulary.
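
      The local-then-global pattern the patent describes can be sketched generically in Python with a process pool: each worker computes term statistics for its own distinct set of documents, and the local counters are merged into a global set. This illustrates the pattern only, not the patented implementation.

        from collections import Counter
        from multiprocessing import Pool

        def local_term_stats(docs):
            """Term counts for one distinct set of documents."""
            counts = Counter()
            for doc in docs:
                counts.update(doc.lower().split())
            return counts

        def global_term_stats(doc_sets, workers=2):
            with Pool(workers) as pool:
                local = pool.map(local_term_stats, doc_sets)
            return sum(local, Counter())   # contribute local stats to the global set

        if __name__ == "__main__":
            sets = [["alpha beta", "beta gamma"], ["alpha alpha delta"]]
            print(global_term_stats(sets).most_common(3))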

    8. Computing Videos

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    9. Fermilab Central Computing Facility: Energy conservation report and mechanical systems design optimization and cost analysis study

      SciTech Connect (OSTI)

      Krstulovich, S.F.

      1986-11-12

      This report is developed as part of the Fermilab Central Computing Facility Project Title II Design Documentation Update under the provisions of DOE Document 6430.1, Chapter XIII-21, Section 14, paragraph a. As such, it concentrates primarily on HVAC mechanical systems design optimization and cost analysis and should be considered a supplement to the Title I Design Report dated March 1986, wherein energy-related issues pertaining to building envelope and orientation as well as electrical systems design are discussed.

    10. COBRA-SFS (Spent Fuel Storage): A thermal-hydraulic analysis computer code: Volume 2, User's manual

      SciTech Connect (OSTI)

      Rector, D.R.; Cuta, J.M.; Lombardo, N.J.; Michener, T.E.; Wheeler, C.L.

      1986-11-01

      COBRA-SFS (Spent Fuel Storage) is a general thermal-hydraulic analysis computer code used to predict temperatures and velocities in a wide variety of systems. The code was refined and specialized for spent fuel storage system analyses for the US Department of Energy's Commercial Spent Fuel Management Program. The finite-volume equations governing mass, momentum, and energy conservation are written for an incompressible, single-phase fluid. The flow equations model a wide range of conditions including natural circulation. The energy equations include the effects of solid and fluid conduction, natural convection, and thermal radiation. The COBRA-SFS code is structured to perform both steady-state and transient calculations; however, the transient capability has not yet been validated. This volume contains the input instructions for COBRA-SFS and an auxiliary radiation exchange factor code, RADX-1. It is intended to aid the user in becoming familiar with the capabilities and modeling conventions of the code.

    11. Internal air flow analysis of a bladeless micro aerial vehicle hemisphere body using computational fluid dynamic

      SciTech Connect (OSTI)

      Othman, M. N. K.; Zuradzman, M. Razlan; Hazry, D.; Khairunizam, Wan; Shahriman, A. B.; Yaacob, S.; Ahmed, S. Faiz; and others

      2014-12-04

      This paper explains the analysis of the internal air flow velocity of a bladeless vertical takeoff and landing (VTOL) micro aerial vehicle (MAV) hemisphere body. In mechanical design, before producing a prototype model, several analyses should be done to ensure the product's effectiveness and efficiency. Two types of analysis methods can be used in mechanical design: mathematical modeling and computational fluid dynamics. In this analysis, computational fluid dynamics (CFD) was used by means of the SolidWorks Flow Simulation software. The idea arose from the problem of the ordinary quadrotor UAV, which has a larger size because it uses four rotors, with propellers exposed to the environment. The bladeless MAV body is designed to protect all electronic parts, which means it can be used in rainy conditions. It has also been designed to increase the thrust produced by the ducted propeller compared to an exposed propeller. From the analysis results, the air flow velocity at the ducted area increased to twice that of the inlet air. This means that the duct contributes to increasing the air velocity.

    12. Station for X-ray structural analysis of materials and single crystals (including nanocrystals) on a synchrotron radiation beam from the wiggler at the Siberia-2 storage ring

      SciTech Connect (OSTI)

      Kheiker, D. M.; Kovalchuk, M. V.; Korchuganov, V. N.; Shilin, Yu. N.; Shishkov, V. A.; Sulyanov, S. N.; Dorovatovskii, P. V.; Rubinsky, S. V.; Rusakov, A. A.

      2007-11-15

      The design of the station for structural analysis of polycrystalline materials and single crystals (including nanoobjects and macromolecular crystals) on a synchrotron radiation beam from the superconducting wiggler of the Siberia-2 storage ring is described. The wiggler was constructed at the Budker Institute of Nuclear Physics of the Siberian Division of the Russian Academy of Sciences. The X-ray optical scheme of the station involves a (1, -1) double-crystal monochromator with a fixed position of the monochromatic beam and sagittal bending of the second crystal, segmented mirrors bent by piezoelectric motors, and a (2θ, ω, φ) three-circle goniometer with a fixed tilt angle. Almost all devices of the station were designed and fabricated at the Shubnikov Institute of Crystallography of the Russian Academy of Sciences. The Bruker APEX II two-dimensional CCD detector will serve as the detector in the station.

    13. Analysis and selection of optimal function implementations in massively parallel computer

      DOE Patents [OSTI]

      Archer, Charles Jens; Peters, Amanda; Ratterman, Joseph D.

      2011-05-31

      An apparatus, program product and method optimize the operation of a parallel computer system by, in part, collecting performance data for a set of implementations of a function capable of being executed on the parallel computer system based upon the execution of the set of implementations under varying input parameters in a plurality of input dimensions. The collected performance data may be used to generate selection program code that is configured to call selected implementations of the function in response to a call to the function under varying input parameters. The collected performance data may be used to perform more detailed analysis to ascertain the comparative performance of the set of implementations of the function under the varying input parameters.
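
      The collect-then-select idea can be sketched generically: time each implementation of a function over varying input sizes, record which wins in each regime, and dispatch accordingly. The Python sketch below uses toy implementations and illustrates the pattern only, not the patented mechanism.

        import time

        def impl_loop(xs):
            return sum(x * x for x in xs)

        def impl_map(xs):
            return sum(map(lambda x: x * x, xs))

        def best_by_size(impls, sizes, reps=5):
            """Benchmark each implementation per input size; keep the fastest."""
            best = {}
            for n in sizes:
                data = list(range(n))
                timings = {}
                for f in impls:
                    t0 = time.perf_counter()
                    for _ in range(reps):
                        f(data)
                    timings[f] = time.perf_counter() - t0
                best[n] = min(timings, key=timings.get)
            return best

        choice = best_by_size([impl_loop, impl_map], [100, 100000])
        print({n: f.__name__ for n, f in choice.items()})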

    14. The Radiological Safety Analysis Computer Program (RSAC-5) user's manual. Revision 1

      SciTech Connect (OSTI)

      Wenzel, D.R.

      1994-02-01

      The Radiological Safety Analysis Computer Program (RSAC-5) calculates the consequences of the release of radionuclides to the atmosphere. Using a personal computer, a user can generate a fission product inventory from either reactor operating history or nuclear criticalities. RSAC-5 models the effects of high-efficiency particulate air filters or other cleanup systems and calculates decay and ingrowth during transport through processes, facilities, and the environment. Doses are calculated through the inhalation, immersion, ground surface, and ingestion pathways. RSAC+, a menu-driven companion program to RSAC-5, assists users in creating and running RSAC-5 input files. This user's manual contains the mathematical models and operating instructions for RSAC-5 and RSAC+. Instructions, screens, and examples are provided to guide the user through the functions provided by RSAC-5 and RSAC+. These programs are designed for users who are familiar with radiological dose assessment methods.
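
      The core pathway such codes evaluate can be illustrated with a ground-level centreline Gaussian-plume dilution factor (chi/Q) and an inhalation dose. The Python sketch below is generic, with illustrative dispersion parameters and dose conversion factor; it is not RSAC-5 itself.

        import math

        def chi_over_q(sigma_y, sigma_z, wind_speed, release_height):
            """Ground-level centreline chi/Q (s/m3) for an elevated release."""
            return (math.exp(-release_height**2 / (2.0 * sigma_z**2))
                    / (math.pi * sigma_y * sigma_z * wind_speed))

        Q = 3.7e10            # Bq released (illustrative)
        breathing = 3.3e-4    # m3/s, light-activity breathing rate
        dcf = 1.1e-8          # Sv/Bq inhaled, nuclide-dependent (illustrative)
        dose = Q * chi_over_q(35.0, 18.0, 3.0, 10.0) * breathing * dcf
        print(dose, "Sv committed at the receptor")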

    15. High-Performance Computing for Real-Time Grid Analysis and Operation

      SciTech Connect (OSTI)

      Huang, Zhenyu; Chen, Yousu; Chavarría-Miranda, Daniel

      2013-10-31

      Power grids worldwide are undergoing an unprecedented transition as a result of grid evolution meeting information revolution. The grid evolution is largely driven by the desire for green energy. Emerging grid technologies such as renewable generation, smart loads, plug-in hybrid vehicles, and distributed generation provide opportunities to generate energy from green sources and to manage energy use for better system efficiency. With utility companies actively deploying these technologies, a high level of penetration of these new technologies is expected in the next 5-10 years, bringing in a level of intermittency, uncertainty, and complexity that the grid has not seen and was not designed for. On the other hand, the information infrastructure in the power grid is being revolutionized with large-scale deployment of sensors and meters in both the transmission and distribution networks. The future grid will have two-way flows of both electrons and information. The challenge is how to take advantage of the information revolution: pull the large amount of data in, process it in real time, and put information out to manage grid evolution. Without addressing this challenge, the opportunities in grid evolution will remain unfulfilled. This transition poses grand challenges in grid modeling, simulation, and information presentation. The computational complexity of underlying power grid modeling and simulation will significantly increase in the next decade due to an increased model size and a decreased time window allowed to compute model solutions. High-performance computing is essential to enable this transition. The essential technical barrier is to vastly increase the computational speed so that operation response time can be reduced from minutes to seconds and sub-seconds. The speed at which key functions such as state estimation and contingency analysis are conducted (typically every 3-5 minutes) needs to be dramatically increased so that the analysis of contingencies is both comprehensive and real time. An even bigger challenge is how to incorporate dynamic information into real-time grid operation. Today's online grid operation is based on a static grid model and can only provide a static snapshot of current system operation status, while dynamic analysis is conducted offline because of low computational efficiency. The offline analysis uses a worst-case scenario to determine transmission limits, resulting in under-utilization of grid assets. This conservative approach does not necessarily lead to reliability. Many times, actual power grid scenarios that have not been studied will push the grid over the edge, resulting in outages and blackouts. This chapter addresses the HPC needs in power grid analysis and operations. Example applications such as state estimation and contingency analysis are given to demonstrate the value of HPC in power grid applications. Future research directions are suggested for high-performance computing applications in power grids to improve the transparency, efficiency, and reliability of power grids.

    16. COBRA-SFS (Spent Fuel Storage): A thermal-hydraulic analysis computer code: Volume 3, Validation assessments

      SciTech Connect (OSTI)

      Lombardo, N.J.; Cuta, J.M.; Michener, T.E.; Rector, D.R.; Wheeler, C.L.

      1986-12-01

      This report presents the results of the COBRA-SFS (Spent Fuel Storage) computer code validation effort. COBRA-SFS, while refined and specialized for spent fuel storage system analyses, is a lumped-volume thermal-hydraulic analysis computer code that predicts temperature and velocity distributions in a wide variety of systems. Through comparisons of code predictions with spent fuel storage system test data, the code's mathematical, physical, and mechanistic models are assessed, and empirical relations defined. The six test cases used to validate the code and code models include single-assembly and multiassembly storage systems under a variety of fill media and system orientations and include unconsolidated and consolidated spent fuel. In its entirety, the test matrix investigates the contributions of convection, conduction, and radiation heat transfer in spent fuel storage systems. To demonstrate the code's performance for a wide variety of storage systems and conditions, comparisons of code predictions with data are made for 14 runs from the experimental data base. The cases selected exercise the important code models and code logic pathways and are representative of the types of simulations required for spent fuel storage system design and licensing safety analyses. For each test, a test description, a summary of the COBRA-SFS computational model, assumptions, and correlations employed are presented. For the cases selected, axial and radial temperature profile comparisons of code predictions with test data are provided, and conclusions drawn concerning the code models and the ability to predict the data and data trends. Comparisons of code predictions with test data demonstrate the ability of COBRA-SFS to successfully predict temperature distributions in unconsolidated or consolidated single and multiassembly spent fuel storage systems.

    17. Computing Sciences

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      The Computational Research Division conducts research and development in mathematical modeling and simulation, algorithm design, data storage, management and analysis, computer system architecture, and high-performance software implementation.

    18. Technical support document: Energy efficiency standards for consumer products: Refrigerators, refrigerator-freezers, and freezers including draft environmental assessment, regulatory impact analysis

      SciTech Connect (OSTI)

      1995-07-01

      The Energy Policy and Conservation Act (P.L. 94-163), as amended by the National Appliance Energy Conservation Act of 1987 (P.L. 100-12) and by the National Appliance Energy Conservation Amendments of 1988 (P.L. 100-357), and by the Energy Policy Act of 1992 (P.L. 102-486), provides energy conservation standards for 12 of the 13 types of consumer products covered by the Act, and authorizes the Secretary of Energy to prescribe amended or new energy standards for each type (or class) of covered product. The assessment of the proposed standards for refrigerators, refrigerator-freezers, and freezers presented in this document is designed to evaluate their economic impacts according to the criteria in the Act. It includes an engineering analysis of the cost and performance of design options to improve the efficiency of the products; forecasts of the number and average efficiency of products sold, the amount of energy the products will consume, and their prices and operating expenses; a determination of change in investment, revenues, and costs to manufacturers of the products; a calculation of the costs and benefits to consumers, electric utilities, and the nation as a whole; and an assessment of the environmental impacts of the proposed standards.

    19. Computational mechanics

      SciTech Connect (OSTI)

      Raboin, P J

      1998-01-01

      The Computational Mechanics thrust area is a vital and growing facet of the Mechanical Engineering Department at Lawrence Livermore National Laboratory (LLNL). This work supports the development of computational analysis tools in the areas of structural mechanics and heat transfer. Over 75 analysts depend on thrust area-supported software running on a variety of computing platforms to meet the demands of LLNL programs. Interactions with the Department of Defense (DOD) High Performance Computing and Modernization Program and the Defense Special Weapons Agency are of special importance as they support our ParaDyn project in its development of new parallel capabilities for DYNA3D. Working with DOD customers has been invaluable to driving this technology in directions mutually beneficial to the Department of Energy. Other projects associated with the Computational Mechanics thrust area include work with the Partnership for a New Generation Vehicle (PNGV) for "Springback Predictability" and with the Federal Aviation Administration (FAA) for the "Development of Methodologies for Evaluating Containment and Mitigation of Uncontained Engine Debris." In this report for FY-97, there are five articles detailing three code development activities and two projects that synthesized new code capabilities with new analytic research in damage/failure and biomechanics. The articles this year are: (1) Energy- and Momentum-Conserving Rigid-Body Contact for NIKE3D and DYNA3D; (2) Computational Modeling of Prosthetics: A New Approach to Implant Design; (3) Characterization of Laser-Induced Mechanical Failure Damage of Optical Components; (4) Parallel Algorithm Research for Solid Mechanics Applications Using Finite Element Analysis; and (5) An Accurate One-Step Elasto-Plasticity Algorithm for Shell Elements in DYNA3D.

    20. Methods, computer readable media, and graphical user interfaces for analysis of frequency selective surfaces

      DOE Patents [OSTI]

      Kotter, Dale K. [Shelley, ID; Rohrbaugh, David T. [Idaho Falls, ID

      2010-09-07

      A frequency selective surface (FSS) and associated methods for modeling, analyzing and designing the FSS are disclosed. The FSS includes a pattern of conductive material formed on a substrate to form an array of resonance elements. At least one aspect of the frequency selective surface is determined by defining a frequency range including multiple frequency values, determining a frequency dependent permittivity across the frequency range for the substrate, determining a frequency dependent conductivity across the frequency range for the conductive material, and analyzing the frequency selective surface using a method of moments analysis at each of the multiple frequency values for an incident electromagnetic energy impinging on the frequency selective surface. The frequency dependent permittivity and the frequency dependent conductivity are included in the method of moments analysis.
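
      As a rough illustration of the sweep structure the patent describes (dispersive material properties evaluated at every frequency before the electromagnetic solve), the sketch below stands in a simple thin-sheet impedance model for the method-of-moments solve, which is too long to reproduce here. The Drude parameters, sweep range, film thickness, and substrate index are all assumptions, not values from the patent.

        import numpy as np

        freqs = np.linspace(1e12, 30e12, 300)          # 1-30 THz sweep (assumed)

        def drude_conductivity(f, sigma_dc=4.1e7, tau=2.7e-14):
            """Frequency-dependent metal conductivity (Drude model, assumed values)."""
            return sigma_dc / (1.0 - 2j * np.pi * f * tau)

        def substrate_permittivity(f, eps_r=11.7):
            """Placeholder dispersionless substrate; swap in measured dispersion."""
            return np.full_like(f, eps_r, dtype=complex)

        def sheet_reflectivity(f, t=50e-9):
            """Normal-incidence reflectance of a thin metal sheet on the substrate."""
            z0 = 376.73                                 # free-space impedance [ohm]
            zs = 1.0 / (drude_conductivity(f) * t)      # sheet impedance of the film
            zsub = z0 / np.sqrt(substrate_permittivity(f))
            zload = zs * zsub / (zs + zsub)             # film shunting the substrate
            gamma = (zload - z0) / (zload + z0)
            return np.abs(gamma) ** 2

        R = sheet_reflectivity(freqs)
        print(f"reflectance ranges from {R.min():.3f} to {R.max():.3f} over the sweep")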

    1. Computing Information

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      From here you can find information relating to: obtaining the right computer accounts; using NIC terminals; using BooNE's computing resources, including choosing your desktop, Kerberos, AFS, printing, and recommended applications for various common tasks; running CPU- or IO-intensive programs (batch jobs); commonly encountered problems; computing support within BooNE; bringing a computer to FNAL, or purchasing a new one; laptops; and the Computer Security Program Plan for MiniBooNE.

    2. National cyber defense high performance computing and analysis : concepts, planning and roadmap.

      SciTech Connect (OSTI)

      Hamlet, Jason R.; Keliiaa, Curtis M.

      2010-09-01

      There is a national cyber dilemma that threatens the very fabric of government, commercial and private use operations worldwide. Much is written about 'what' the problem is, and though the basis for this paper is an assessment of the problem space, we target the 'how' solution space of the wide-area national information infrastructure through the advancement of science, technology, evaluation and analysis with actionable results intended to produce a more secure national information infrastructure and a comprehensive national cyber defense capability. This cybersecurity High Performance Computing (HPC) analysis concepts, planning and roadmap activity was conducted as an assessment of cybersecurity analysis as a fertile area of research and investment for high value cybersecurity wide-area solutions. This report and a related SAND2010-4765 Assessment of Current Cybersecurity Practices in the Public Domain: Cyber Indications and Warnings Domain report are intended to provoke discussion throughout a broad audience about developing a cohesive HPC centric solution to wide-area cybersecurity problems.

    3. Hydropower generation management under uncertainty via scenario analysis and parallel computation

      SciTech Connect (OSTI)

      Escudero, L.F.; Garcia, C.; Fuente, J.L. de la; Prieto, F.J.

      1996-05-01

      The authors present a modeling framework for the robust solution of hydroelectric power management problems with uncertainty in the values of the water inflows and outflows. A deterministic treatment of the problem provides unsatisfactory results, except for very short time horizons. The authors describe a model based on scenario analysis that allows a satisfactory treatment of uncertainty in the model data for medium and long-term planning problems. Their approach results in a huge model with a network submodel per scenario plus coupling constraints. The size of the problem and the structure of the constraints are adequate for the use of decomposition techniques and parallel computation tools. The authors present computational results for both sequential and parallel implementation versions of the codes, running on a cluster of workstations. The codes have been tested on data obtained from the reservoir network of Iberdrola, a power utility owning 50% of the total installed hydroelectric capacity of Spain, and generating 40% of the total energy demand.
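
      The scenario-analysis structure can be illustrated with a deliberately tiny extensive-form model: one reservoir, two periods, and two equally likely inflow scenarios, with a shared first-stage release enforcing nonanticipativity. All data are invented, and the storage-capacity upper bounds are omitted because they are slack in this instance; the authors' model is a full network submodel per scenario at vastly larger scale.

        import numpy as np
        from scipy.optimize import linprog

        S0, RMAX = 5.0, 4.0                      # initial storage, max release (toy)
        PRICES = (1.0, 1.2)                      # energy price per period (toy)
        SCENARIOS = {"wet": (3.0, 1.0), "dry": (1.0, 3.0)}   # (inflow1, inflow2)
        prob = 1.0 / len(SCENARIOS)

        # Variables x = [r1, r2_wet, r2_dry]; maximize expected revenue, so negate.
        c = -np.array([PRICES[0], prob * PRICES[1], prob * PRICES[1]])

        A_ub, b_ub = [], []
        for k, (i1, i2) in enumerate(SCENARIOS.values()):
            r2 = [0.0, 0.0]
            r2[k] = 1.0
            A_ub.append([1.0, 0.0, 0.0])         # end-of-period-1 storage >= 0
            b_ub.append(S0 + i1)
            A_ub.append([1.0] + r2)              # end-of-period-2 storage >= 0
            b_ub.append(S0 + i1 + i2)

        res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0.0, RMAX)] * 3)
        print(f"shared first-stage release r1 = {res.x[0]:.2f}")
        for k, name in enumerate(SCENARIOS):
            print(f"  scenario '{name}': recourse release r2 = {res.x[k + 1]:.2f}")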

    4. Hydropower generation management under uncertainty via scenario analysis and parallel computation

      SciTech Connect (OSTI)

      Escudero, L.F.; Garcia, C.; Fuente, J.L. de la; Prieto, F.J.

      1995-12-31

      The authors present a modeling framework for the robust solution of hydroelectric power management problems with uncertainty in the values of the water inflows and outflows. A deterministic treatment of the problem provides unsatisfactory results, except for very short time horizons. The authors describe a model based on scenario analysis that allows a satisfactory treatment of uncertainty in the model data for medium and long-term planning problems. This approach results in a huge model with a network submodel per scenario plus coupling constraints. The size of the problem and the structure of the constraints are adequate for the use of decomposition techniques and parallel computation tools. The authors present computational results for both sequential and parallel implementation versions of the codes, running on a cluster of workstations. The codes have been tested on data obtained from the reservoir network of Iberdrola, a power utility owning 50% of the total installed hydroelectric capacity of Spain, and generating 40% of the total energy demand.

    5. Methods, apparatuses, and computer-readable media for projectional morphological analysis of N-dimensional signals

      DOE Patents [OSTI]

      Glazoff, Michael V.; Gering, Kevin L.; Garnier, John E.; Rashkeev, Sergey N.; Pyt'ev, Yuri Petrovich

      2016-05-17

      Embodiments discussed herein in the form of methods, systems, and computer-readable media deal with the application of advanced "projectional" morphological algorithms for solving a broad range of problems. In a method of performing projectional morphological analysis, an N-dimensional input signal is supplied. At least one N-dimensional form indicative of at least one feature in the N-dimensional input signal is identified. The N-dimensional input signal is filtered relative to the at least one N-dimensional form and an N-dimensional output signal is generated indicating results of the filtering at least as differences in the N-dimensional input signal relative to the at least one N-dimensional form.
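
      A minimal sketch of the filtering idea, with a plain grayscale morphological opening from SciPy standing in for the patent's more general projectional form-matched filter: the input signal is filtered relative to a chosen N-dimensional form, and the output reports the differences. The signal, the planted feature, and the form are all invented.

        import numpy as np
        from scipy import ndimage

        rng = np.random.default_rng(0)
        signal = rng.normal(0.0, 0.1, size=(64, 64))       # 2-D input signal
        signal[20:28, 30:38] += 1.0                        # embedded square feature

        form = np.ones((8, 8), dtype=bool)                 # the N-dimensional form
        opened = ndimage.grey_opening(signal, footprint=form)  # keep form-shaped parts
        residual = signal - opened                         # differences vs. the form

        print("energy kept by the form:", float(np.abs(opened).sum()))
        print("residual energy:", float(np.abs(residual).sum()))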

    6. Performance Refactoring of Instrumentation, Measurement, and Analysis Technologies for Petascale Computing. The PRIMA Project

      SciTech Connect (OSTI)

      Malony, Allen D.; Wolf, Felix G.

      2014-01-31

      The growing number of cores provided by today's high-end computing systems presents substantial challenges to application developers in their pursuit of parallel efficiency. To find the most effective optimization strategy, application developers need insight into the runtime behavior of their code. The University of Oregon (UO) and the Juelich Supercomputing Centre of Forschungszentrum Juelich (FZJ) develop the performance analysis tools TAU and Scalasca, respectively, which allow high-performance computing (HPC) users to collect and analyze relevant performance data even at very large scales. TAU and Scalasca are considered among the most advanced parallel performance systems available, and are used extensively across HPC centers in the U.S., Germany, and around the world. The TAU and Scalasca groups share a heritage of parallel performance tool research and partnership throughout the past fifteen years. Indeed, the close interactions of the two groups resulted in a cross-fertilization of tool ideas and technologies that pushed TAU and Scalasca to what they are today. It also produced two performance systems with an increasing degree of functional overlap. While each tool has its specific analysis focus, the tools were implementing measurement infrastructures that were substantially similar. Because each tool provides complementary performance analysis, sharing of measurement results is valuable to provide the user with more facets to understand performance behavior. However, each measurement system was producing performance data in different formats, requiring data interoperability tools to be created. A common measurement and instrumentation system was needed to more closely integrate TAU and Scalasca and to avoid the duplication of development and maintenance effort. The PRIMA (Performance Refactoring of Instrumentation, Measurement, and Analysis) project was proposed over three years ago as a joint international effort between UO and FZJ to accomplish these objectives: (1) refactor TAU and Scalasca performance system components for core code sharing and (2) integrate TAU and Scalasca functionality through data interfaces, formats, and utilities. As presented in this report, the project has completed these goals. In addition to shared technical advances, the groups have worked to engage with users through application performance engineering and tools training. In this regard, the project benefits from the close interactions the teams have with national laboratories in the United States and Germany. We have also sought to enhance our interactions through joint tutorials and outreach. UO has become a member of the Virtual Institute of High-Productivity Supercomputing (VI-HPS) established by the Helmholtz Association of German Research Centres as a center of excellence, focusing on HPC tools for diagnosing programming errors and optimizing performance. UO and FZJ have conducted several VI-HPS training activities together within the past three years.


    8. SAFE: A computer code for the steady-state and transient thermal analysis of LMR fuel elements

      SciTech Connect (OSTI)

      Hayes, S.L.

      1993-12-01

      SAFE is a computer code developed for both the steady-state and transient thermal analysis of single LMR fuel elements. The code employs a two-dimensional control-volume based finite difference methodology with fully implicit time marching to calculate the temperatures throughout a fuel element and its associated coolant channel for both the steady-state and transient events. The code makes no structural calculations or predictions whatsoever. It does, however, accept as input structural parameters within the fuel such as the distributions of porosity and fuel composition, as well as heat generation, to allow a thermal analysis to be performed on a user-specified fuel structure. The code was developed with ease of use in mind. An interactive input file generator and material property correlations internal to the code are available to expedite analyses using SAFE. This report serves as a complete design description of the code as well as a user's manual. A sample calculation made with SAFE is included to highlight some of the code's features. Complete input and output files for the sample problem are provided.
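
      The numerical core of such a code can be illustrated with a one-dimensional analogue: fully implicit, control-volume finite differences marching a conducting slab with internal heat generation and a convective surface to steady state. SAFE itself is two-dimensional and couples to a coolant channel; every value below is an invented placeholder.

        import numpy as np

        N, L = 20, 0.005                  # nodes, slab half-thickness [m] (invented)
        k, rho_cp = 4.0, 3.0e6            # conductivity [W/m-K], heat capacity [J/m^3-K]
        q = 3.0e8                         # volumetric heat generation [W/m^3]
        T_cool, h = 600.0, 5.0e4          # coolant temperature [K], HTC [W/m^2-K]
        dx, dt = L / N, 0.5

        T = np.full(N, 600.0)             # initial temperature field [K]
        for _ in range(500):              # fully implicit time marching
            A = np.zeros((N, N))
            b = rho_cp * dx / dt * T + q * dx
            for i in range(N):
                A[i, i] = rho_cp * dx / dt
                if i > 0:
                    A[i, i] += k / dx; A[i, i - 1] = -k / dx
                if i < N - 1:
                    A[i, i] += k / dx; A[i, i + 1] = -k / dx
            A[-1, -1] += h                # convective boundary to the coolant
            b[-1] += h * T_cool           # (left boundary adiabatic: centerline)
            T = np.linalg.solve(A, b)
        print(f"centerline {T[0]:.0f} K, surface {T[-1]:.0f} K at end of transient")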

    9. Use of model calibration to achieve high accuracy in analysis of computer networks

      DOE Patents [OSTI]

      Frogner, Bjorn; Guarro, Sergio; Scharf, Guy

      2004-05-11

      A system and method are provided for creating a network performance prediction model, and calibrating the prediction model, through application of network load statistical analyses. The method includes characterizing the measured load on the network, which may include background load data obtained over time, and may further include directed load data representative of a transaction-level event. Probabilistic representations of load data are derived to characterize the statistical persistence of the network performance variability and to determine delays throughout the network. The probabilistic representations are applied to the network performance prediction model to adapt the model for accurate prediction of network performance. Certain embodiments of the method and system may be used for analysis of the performance of a distributed application characterized as data packet streams.
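
      The calibration loop can be sketched as follows: measured background-load delays are reduced to a probabilistic (quantile) representation, which then corrects a nominal delay model. The distributions, model form, and constants are invented for illustration and are not from the patent.

        import numpy as np

        rng = np.random.default_rng(1)
        measured_delay = rng.lognormal(mean=-3.0, sigma=0.5, size=10_000)  # seconds

        def nominal_model(payload_bytes, bandwidth=1e7, base_latency=0.02):
            """Uncalibrated prediction: fixed latency plus serialization time."""
            return base_latency + payload_bytes / bandwidth

        # Probabilistic characterization of the measured load contribution.
        quantiles = np.quantile(measured_delay, [0.5, 0.9, 0.99])

        def calibrated_model(payload_bytes, percentile=0.9):
            """Nominal prediction plus the observed load-delay quantile."""
            return nominal_model(payload_bytes) + np.quantile(measured_delay, percentile)

        print("median/p90/p99 load delay [s]:", np.round(quantiles, 4))
        print("calibrated p90 prediction for 1 MB:", round(calibrated_model(1e6), 4))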

    10. Computational Fluid Dynamic Analysis of the VHTR Lower Plenum Standard Problem

      SciTech Connect (OSTI)

      Richard W. Johnson; Richard R. Schultz

      2009-07-01

      The United States Department of Energy is promoting the resurgence of nuclear power in the U.S. for both electrical power generation and production of process heat required for industrial processes such as the manufacture of hydrogen for use as a fuel in automobiles. The DOE project is called the Next Generation Nuclear Plant (NGNP) and is based on a Generation IV reactor concept called the Very High Temperature Reactor (VHTR), which will use helium as the coolant at temperatures ranging from 450 °C to perhaps 1000 °C. While computational fluid dynamics (CFD) has not been used for past safety analysis for nuclear reactors in the U.S., it is being considered for safety analysis for existing and future reactors. It is fully recognized that CFD simulation codes will have to be validated for flow physics reasonably close to actual fluid dynamic conditions expected in normal and accident operational situations. To this end, experimental data have been obtained in a scaled model of a narrow slice of the lower plenum of a prismatic VHTR. The present report presents results of CFD examinations of these data to explore potential issues with the geometry, the initial conditions, the flow dynamics and the data needed to fully specify the inlet and boundary conditions; results for several turbulence models are examined. Issues are addressed and recommendations about the data are made.

    11. Pump apparatus including deconsolidator

      DOE Patents [OSTI]

      Sonwane, Chandrashekhar; Saunders, Timothy; Fitzsimmons, Mark Andrew

      2014-10-07

      A pump apparatus includes a particulate pump that defines a passage that extends from an inlet to an outlet. A duct is in flow communication with the outlet. The duct includes a deconsolidator configured to fragment particle agglomerates received from the passage.

    12. Comparison of different computed radiography systems: Physical characterization and contrast detail analysis

      SciTech Connect (OSTI)

      Rivetti, Stefano; Lanconelli, Nico; Bertolini, Marco; Nitrosi, Andrea; Burani, Aldo; Acchiappati, Domenico

      2010-02-15

      Purpose: In this study, five different units based on three different technologies--traditional computed radiography (CR) units with granular phosphor and single-side reading, granular phosphor and dual-side reading, and columnar phosphor and line-scanning reading--are compared in terms of physical characterization and contrast detail analysis. Methods: The physical characterization of the five systems was obtained with the standard beam condition RQA5. Three of the units have been developed by FUJIFILM (FCR ST-VI, FCR ST-BD, and FCR Velocity U), one by Kodak (Direct View CR 975), and one by Agfa (DX-S). The quantitative comparison is based on the calculation of the modulation transfer function (MTF), noise power spectrum (NPS), and detective quantum efficiency (DQE). Noise investigation was also achieved by using a relative standard deviation analysis. Psychophysical characterization is assessed by performing a contrast detail analysis with an automatic reading of CDRAD images. Results: The most advanced units based on columnar phosphors provide MTF values in line with or better than those from conventional CR systems. The greater thickness of the columnar phosphor improves the efficiency, allowing for enhanced noise properties. In fact, NPS values for standard CR systems are remarkably higher for all the investigated exposures and especially for frequencies up to 3.5 lp/mm. As a consequence, DQE values for the three units based on columnar phosphors and line-scanning reading, or granular phosphor and dual-side reading, are markedly better than those from conventional CR systems. Indeed, DQE values of about 40% are easily achievable for all the investigated exposures. Conclusions: This study suggests that systems based on the dual-side reading or line-scanning reading with columnar phosphors provide a remarkable improvement when compared to conventional CR units and yield results in line with those obtained from most digital detectors for radiography.
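
      For readers unfamiliar with the quantities compared here, the standard detector-physics relation ties them together as DQE(f) = MTF(f)^2 / (q NNPS(f)), where NNPS is the noise power spectrum normalized by the squared mean signal and q is the incident photon fluence. The toy curves below are synthetic placeholders, not the paper's measurements.

        import numpy as np

        f = np.linspace(0.05, 3.5, 70)                 # spatial frequency [lp/mm]
        mtf = np.exp(-0.6 * f)                         # toy MTF
        nnps = 8.0e-5 * (1.0 + 0.3 * np.exp(-f))       # toy normalized NPS [mm^2]
        q = 3.0e4                                      # incident photons per mm^2

        dqe = mtf**2 / (q * nnps)                      # DQE(f) from the relation above
        print(f"DQE = {dqe[0]:.2f} at {f[0]:.2f} lp/mm, "
              f"{dqe[-1]:.3f} at {f[-1]:.1f} lp/mm")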

    13. MHK technology developments include...

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      ... background in the design and analysis of wind-turbine rotors, Sandia applies a variety of ... WEC-Sim experimental validation testing of the floating oscillating surge WEC DOE ...

    14. Computational analysis of an autophagy/translation switch based on mutual inhibition of MTORC1 and ULK1

      DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

      Szymańska, Paulina; Martin, Katie R.; MacKeigan, Jeffrey P.; Hlavacek, William S.; Lipniacki, Tomasz

      2015-03-11

      We constructed a mechanistic, computational model for regulation of (macro)autophagy and protein synthesis (at the level of translation). The model was formulated to study the system-level consequences of interactions among the following proteins: two key components of MTOR complex 1 (MTORC1), namely the protein kinase MTOR (mechanistic target of rapamycin) and the scaffold protein RPTOR; the autophagy-initiating protein kinase ULK1; and the multimeric energy-sensing AMP-activated protein kinase (AMPK). Inputs of the model include intrinsic AMPK kinase activity, which is taken as an adjustable surrogate parameter for cellular energy level or AMP:ATP ratio, and rapamycin dose, which controls MTORC1 activity. Outputs of the model include the phosphorylation level of the translational repressor EIF4EBP1, a substrate of MTORC1, and the phosphorylation level of AMBRA1 (activating molecule in BECN1-regulated autophagy), a substrate of ULK1 critical for autophagosome formation. The model incorporates reciprocal regulation of MTORC1 and ULK1 by AMPK, mutual inhibition of MTORC1 and ULK1, and ULK1-mediated negative feedback regulation of AMPK. Through analysis of the model, we find that these processes may be responsible, depending on conditions, for graded responses to stress inputs, for bistable switching between autophagy and protein synthesis, or for relaxation oscillations comprising alternating periods of autophagy and protein synthesis. A sensitivity analysis indicates that the prediction of oscillatory behavior is robust to changes of the parameter values of the model. The model provides testable predictions about the behavior of the AMPK-MTORC1-ULK1 network, which plays a central role in maintaining cellular energy and nutrient homeostasis.
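
      The bistable-switching mechanism can be illustrated with a stripped-down mutual-inhibition (toggle-switch) model: two activities that repress each other, with an AMPK-like input promoting one side. This is not the authors' full MTORC1-AMPK-ULK1 model, and all rates below are invented.

        import numpy as np
        from scipy.integrate import solve_ivp

        alpha, beta, h = 4.0, 4.0, 3.0            # invented synthesis rates, Hill exponent

        def rhs(t, y, ampk):
            m, u = y                              # MTORC1-like and ULK1-like activities
            dm = alpha / (1.0 + u**h) - m         # ULK1 inhibits MTORC1
            du = (beta + ampk) / (1.0 + m**h) - u # MTORC1 inhibits ULK1; AMPK promotes ULK1
            return [dm, du]

        for m0, u0 in [(3.0, 0.1), (0.1, 3.0)]:   # two initial conditions, one input
            sol = solve_ivp(rhs, (0, 50), [m0, u0], args=(0.5,), rtol=1e-8)
            m, u = sol.y[:, -1]
            state = "translation" if m > u else "autophagy"
            print(f"start (m={m0}, u={u0}) -> m={m:.2f}, u={u:.2f}: {state}")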

    15. Computational Analysis of an Evolutionarily Conserved Vertebrate Muscle Alternative Splicing Program

      SciTech Connect (OSTI)

      Das, Debopriya; Clark, Tyson A.; Schweitzer, Anthony; Marr, Henry; Yamamoto, Miki L.; Parra, Marilyn K.; Arribere, Josh; Minovitsky, Simon; Dubchak, Inna; Blume, John E.; Conboy, John G.

      2006-06-15

      A novel exon microarray format that probes gene expression with single exon resolution was employed to elucidate critical features of a vertebrate muscle alternative splicing program. A dataset of 56 microarray-defined, muscle-enriched exons and their flanking introns was examined computationally in order to investigate coordination of the muscle splicing program. Candidate intron regulatory motifs were required to meet several stringent criteria: significant over-representation near muscle-enriched exons, correlation with muscle expression, and phylogenetic conservation among genomes of several vertebrate orders. Three classes of regulatory motifs were identified in the proximal downstream intron, within 200 nt of the target exons: UGCAUG, a specific binding site for Fox-1 related splicing factors; ACUAAC, a novel branchpoint-like element; and UG-/UGC-rich elements characteristic of binding sites for CELF splicing factors. UGCAUG was remarkably enriched, being present in nearly one-half of all cases. These studies suggest that Fox and CELF splicing factors play a major role in enforcing the muscle-specific alternative splicing program, facilitating expression of a set of unique isoforms of cytoskeletal proteins that are critical to muscle cell differentiation. Supplementary materials: There are four supplementary tables and one supplementary figure. The tables provide additional detailed information concerning the muscle-enriched datasets, and about over-represented oligonucleotide sequences in the flanking introns. The supplementary figure shows RT-PCR data confirming the muscle-enriched expression of exons predicted from the microarray analysis.

    16. Radiological Safety Analysis Computer (RSAC) Program Version 7.2 Users Manual

      SciTech Connect (OSTI)

      Dr. Bradley J Schrader

      2010-10-01

      The Radiological Safety Analysis Computer (RSAC) Program Version 7.2 (RSAC-7) is the newest version of the RSAC legacy code. It calculates the consequences of a release of radionuclides to the atmosphere. A user can generate a fission product inventory from either reactor operating history or a nuclear criticality event. RSAC-7 models the effects of high-efficiency particulate air filters or other cleanup systems and calculates the decay and ingrowth during transport through processes, facilities, and the environment. Doses are calculated for inhalation, air immersion, ground surface, ingestion, and cloud gamma pathways. RSAC-7 can be used as a tool to evaluate accident conditions in emergency response scenarios and radiological sabotage events, and to evaluate safety basis accident consequences. This user's manual contains the mathematical models and operating instructions for RSAC-7. Instructions, screens, and examples are provided to guide the user through the functions provided by RSAC-7. This program was designed for users who are familiar with radiological dose assessment methods.
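
      The dispersion-and-dose chain that RSAC automates can be approximated by hand for a single ground-level release using the Gaussian-plume centerline relation chi/Q = 1/(pi sigma_y sigma_z u), with the inhalation dose given by Q (chi/Q) (breathing rate) (dose conversion factor). The sketch below uses invented constants and a simple power-law sigma fit; RSAC's models are far more complete.

        import numpy as np

        Q = 3.7e10          # released activity [Bq] (invented)
        u = 2.0             # wind speed [m/s]
        x = 1000.0          # downwind distance [m]
        BR = 3.3e-4         # breathing rate [m^3/s]
        DCF = 1.3e-8        # inhalation dose conversion factor [Sv/Bq] (assumed)

        # Pasquill-Gifford-style dispersion coefficients (assumed power-law fit)
        sigma_y, sigma_z = 0.16 * x**0.9, 0.12 * x**0.9

        chi_over_Q = 1.0 / (np.pi * sigma_y * sigma_z * u)   # ground-level centerline
        dose = Q * chi_over_Q * BR * DCF                      # committed dose [Sv]
        print(f"chi/Q = {chi_over_Q:.2e} s/m^3, inhalation dose = {dose*1e3:.4f} mSv")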

    17. Evaluation of HEU-Beryllium Benchmark Experiments to Improve Computational Analysis of Space Reactors

      SciTech Connect (OSTI)

      John D. Bess; Keith C. Bledsoe; Bradley T. Rearden

      2011-02-01

      An assessment was previously performed to evaluate modeling capabilities and quantify preliminary biases and uncertainties associated with the modeling methods and data utilized in designing a nuclear reactor such as a beryllium-reflected, highly-enriched-uranium (HEU)-O2 fission surface power (FSP) system for space nuclear power. The conclusion of the previous study was that current capabilities could preclude the necessity of a cold critical test of the FSP; however, additional testing would reduce uncertainties in the beryllium and uranium cross-section data and the overall uncertainty in the computational models. A series of critical experiments using HEU metal were performed in the 1960s and 1970s in support of criticality safety operations at the Y-12 Plant. Of the hundreds of experiments, three were identified as fast-fission configurations reflected by beryllium metal. These experiments have been evaluated as benchmarks for inclusion in the International Handbook of Evaluated Criticality Safety Benchmark Experiments (IHECSBE). Further evaluation of the benchmark experiments was performed using the sensitivity and uncertainty analysis capabilities of SCALE 6. The data adjustment methods of SCALE 6 have been employed in the validation of an example FSP design model to reduce the uncertainty due to the beryllium cross section data.

    18. A computational model for thermal fluid design analysis of nuclear thermal rockets

      SciTech Connect (OSTI)

      Given, J.A.; Anghaie, S.

      1997-01-01

      A computational model for simulation and design analysis of nuclear thermal propulsion systems has been developed. The model simulates a full-topping expander cycle engine system and the thermofluid dynamics of the core coolant flow, accounting for the real gas properties of the hydrogen propellant/coolant throughout the system. Core thermofluid studies reveal that near-wall heat transfer models currently available may not be applicable to conditions encountered within some nuclear rocket cores. Additionally, the possibility of a core thermal fluid instability at low mass fluxes and the effects of the core power distribution are investigated. Results indicate that for tubular core coolant channels, thermal fluid instability is not an issue within the possible range of operating conditions in these systems. Findings also show the advantages of having a nonflat centrally peaking axial core power profile from a fluid dynamic standpoint. The effects of rocket operating conditions on system performance are also investigated. Results show that high temperature and low pressure operation is limited by core structural considerations, while low temperature and high pressure operation is limited by system performance constraints. The utility of these programs for finding these operational limits, optimum operating conditions, and thermal fluid effects is demonstrated.

    19. MCS division researchers help develop new sequencing analysis...

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Computation Institute has announced a new sequencing analysis service called Globus Genomics. The Globus Genomics team includes two members of Argonne's Mathematics and Computer...

    20. BPO crude oil analysis data base user's guide: Methods, publications, computer access correlations, uses, availability

      SciTech Connect (OSTI)

      Sellers, C.; Fox, B.; Paulz, J.

      1996-03-01

      The Department of Energy (DOE) has one of the largest and most complete collections of information on crude oil composition that is available to the public. The computer program that manages this database of crude oil analyses has recently been rewritten to allow easier access to this information. This report describes how the new system can be accessed and how the information contained in the Crude Oil Analysis Data Bank can be obtained.

    1. Towards Real-Time High Performance Computing For Power Grid Analysis

      SciTech Connect (OSTI)

      Hui, Peter SY; Lee, Barry; Chikkagoudar, Satish

      2012-11-16

      Real-time computing has traditionally been considered largely in the context of single-processor and embedded systems, and indeed, the terms real-time computing, embedded systems, and control systems are often mentioned in closely related contexts. However, real-time computing in the context of multinode systems, specifically high-performance, cluster-computing systems, remains relatively unexplored. Imposing real-time constraints on a parallel (cluster) computing environment introduces a variety of challenges with respect to the formal verification of the system's timing properties. In this paper, we give a motivating example to demonstrate the need for such a system---an application to estimate the electromechanical states of the power grid---and we introduce a formal method for performing verification of certain temporal properties within a system of parallel processes. We describe our work towards a full real-time implementation of the target application---namely, our progress towards extracting a key mathematical kernel from the application, the formal process by which we analyze the intricate timing behavior of the processes on the cluster, as well as timing measurements taken on our test cluster to demonstrate use of these concepts.
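
      The gap the paper targets can be seen in the kind of empirical check sketched below: measuring a kernel's execution-time distribution and comparing the worst observed case against a deadline. The kernel and time budget are invented; the point, as the paper argues, is that measurement alone cannot certify timing properties, which is why a formal method is introduced.

        import statistics
        import time

        DEADLINE_S = 0.030                      # 30 ms budget per estimation cycle

        def kernel(n=200):
            """Stand-in for the state-estimation math kernel."""
            total = 0.0
            for i in range(n):
                for j in range(n):
                    total += (i * j) % 7
            return total

        samples = []
        for _ in range(100):
            t0 = time.perf_counter()
            kernel()
            samples.append(time.perf_counter() - t0)

        print(f"mean {statistics.mean(samples)*1e3:.2f} ms, "
              f"worst {max(samples)*1e3:.2f} ms, deadline {DEADLINE_S*1e3:.0f} ms")
        print("deadline met in all runs" if max(samples) <= DEADLINE_S
              else "deadline violated; measurement alone cannot certify this")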

    2. Proof-of-Concept Demonstrations for Computation-Based Human Reliability Analysis. Modeling Operator Performance During Flooding Scenarios

      SciTech Connect (OSTI)

      Joe, Jeffrey Clark; Boring, Ronald Laurids; Herberger, Sarah Elizabeth Marie; Mandelli, Diego; Smith, Curtis Lee

      2015-09-01

      The United States (U.S.) Department of Energy (DOE) Light Water Reactor Sustainability (LWRS) program has the overall objective to help sustain the existing commercial nuclear power plants (NPPs). To accomplish this program objective, there are multiple LWRS “pathways,” or research and development (R&D) focus areas. One LWRS focus area is called the Risk-Informed Safety Margin and Characterization (RISMC) pathway. Initial efforts under this pathway to combine probabilistic and plant multi-physics models to quantify safety margins and support business decisions also included HRA, but in a somewhat simplified manner. HRA experts at Idaho National Laboratory (INL) have been collaborating with other experts to develop a computational HRA approach, called the Human Unimodel for Nuclear Technology to Enhance Reliability (HUNTER), for inclusion into the RISMC framework. The basic premise of this research is to leverage applicable computational techniques, namely simulation and modeling, to develop and then, using RAVEN as a controller, seamlessly integrate virtual operator models (HUNTER) with 1) the dynamic computational MOOSE runtime environment that includes a full-scope plant model, and 2) the RISMC framework PRA models already in use. The HUNTER computational HRA approach is a hybrid approach that leverages past work from cognitive psychology, human performance modeling, and HRA, but it is also a significant departure from existing static and even dynamic HRA methods. This report is divided into five chapters that cover the development of an external flooding event test case and associated statistical modeling considerations.

    3. COMPUTATIONAL SCIENCE CENTER

      SciTech Connect (OSTI)

      DAVENPORT, J.

      2006-11-01

      Computational Science is an integral component of Brookhaven's multi-science mission, and is a reflection of the increased role of computation across all of science. Brookhaven currently has major efforts in data storage and analysis for the Relativistic Heavy Ion Collider (RHIC) and the ATLAS detector at CERN, and in quantum chromodynamics. The Laboratory is host for the QCDOC machines (quantum chromodynamics on a chip), 10 teraflop/s computers which boast 12,288 processors each. There are two here, one for the Riken/BNL Research Center and the other supported by DOE for the US Lattice Gauge Community and other scientific users. A 100 teraflop/s supercomputer will be installed at Brookhaven in the coming year, managed jointly by Brookhaven and Stony Brook, and funded by a grant from New York State. This machine will be used for computational science across Brookhaven's entire research program, and also by researchers at Stony Brook and across New York State. With Stony Brook, Brookhaven has formed the New York Center for Computational Science (NYCCS) as a focal point for interdisciplinary computational science, which is closely linked to Brookhaven's Computational Science Center (CSC). The CSC has established a strong program in computational science, with an emphasis on nanoscale electronic structure and molecular dynamics, accelerator design, computational fluid dynamics, medical imaging, parallel computing and numerical algorithms. We have been an active participant in DOE's SciDAC program (Scientific Discovery through Advanced Computing). We are also planning a major expansion in computational biology in keeping with Laboratory initiatives. Additional laboratory initiatives with a dependence on a high level of computation include the development of hydrodynamics models for the interpretation of RHIC data, computational models for the atmospheric transport of aerosols, and models for combustion and for energy utilization. The CSC was formed to bring together researchers in these areas and to provide a focal point for the development of computational expertise at the Laboratory. These efforts will connect to and support the Department of Energy's long-range plans to provide leadership-class computing to researchers throughout the Nation. Recruitment for six new positions at Stony Brook to strengthen its computational science programs is underway. We expect some of these to be held jointly with BNL.

    4. Fracture Analysis of Vessels. Oak Ridge FAVOR, v06.1, Computer Code: Theory and Implementation of Algorithms, Methods, and Correlations

      SciTech Connect (OSTI)

      Williams, P. T.; Dickson, T. L.; Yin, S.

      2007-12-01

      The current regulations to ensure that nuclear reactor pressure vessels (RPVs) maintain their structural integrity when subjected to transients such as pressurized thermal shock (PTS) events were derived from computational models developed in the early-to-mid 1980s. Since that time, advancements and refinements in relevant technologies that impact RPV integrity assessment have led to an effort by the NRC to re-evaluate its PTS regulations. Updated computational methodologies have been developed through interactions between experts in the relevant disciplines of thermal hydraulics, probabilistic risk assessment, materials embrittlement, fracture mechanics, and inspection (flaw characterization). Contributors to the development of these methodologies include the NRC staff, their contractors, and representatives from the nuclear industry. These updated methodologies have been integrated into the Fracture Analysis of Vessels -- Oak Ridge (FAVOR, v06.1) computer code developed for the NRC by the Heavy Section Steel Technology (HSST) program at Oak Ridge National Laboratory (ORNL). The FAVOR, v04.1, code represents the baseline NRC-selected applications tool for re-assessing the current PTS regulations. This report is intended to document the technical bases for the assumptions, algorithms, methods, and correlations employed in the development of the FAVOR, v06.1, code.

    5. An Analysis Framework for Investigating the Trade-offs Between System Performance and Energy Consumption in a Heterogeneous Computing Environment

      SciTech Connect (OSTI)

      Friese, Ryan; Khemka, Bhavesh; Maciejewski, Anthony A; Siegel, Howard Jay; Koenig, Gregory A; Powers, Sarah S; Hilton, Marcia M; Rambharos, Rajendra; Okonski, Gene D; Poole, Stephen W

      2013-01-01

      Rising costs of energy consumption and an ongoing effort for increases in computing performance are leading to a significant need for energy-efficient computing. Before systems such as supercomputers, servers, and datacenters can begin operating in an energy-efficient manner, the energy consumption and performance characteristics of the system must be analyzed. In this paper, we provide an analysis framework that will allow a system administrator to investigate the tradeoffs between system energy consumption and utility earned by a system (as a measure of system performance). We model these trade-offs as a bi-objective resource allocation problem. We use a popular multi-objective genetic algorithm to construct Pareto fronts to illustrate how different resource allocations can cause a system to consume significantly different amounts of energy and earn different amounts of utility. We demonstrate our analysis framework using real data collected from online benchmarks, and further provide a method to create larger data sets that exhibit similar heterogeneity characteristics to real data sets. This analysis framework can provide system administrators with insight to make intelligent scheduling decisions based on the energy and utility needs of their systems.
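
      The bi-objective view can be reproduced in miniature by sampling random task-to-machine allocations, scoring each by (energy, utility), and keeping the non-dominated set. A real study would use a multi-objective genetic algorithm such as NSGA-II; the machine parameters below are invented.

        import numpy as np

        rng = np.random.default_rng(7)
        POWER = np.array([120.0, 180.0, 260.0])    # watts per machine type (invented)
        SPEED = np.array([1.0, 1.6, 2.1])          # relative task speed (invented)

        def score(assignment, hours=1.0):
            """Energy used and utility earned by one allocation of 50 tasks."""
            counts = np.bincount(assignment, minlength=3)
            energy = float(counts @ POWER) * hours       # watt-hours of draw
            utility = float(counts @ SPEED)              # utility ~ throughput here
            return energy, utility

        points = {score(rng.integers(0, 3, size=50)) for _ in range(500)}
        pareto = [p for p in points
                  if not any(q[0] <= p[0] and q[1] >= p[1] and q != p for q in points)]
        print(f"{len(pareto)} non-dominated allocations out of {len(points)} sampled")
        for e, u in sorted(pareto)[:5]:
            print(f"  energy {e:7.1f} Wh  utility {u:5.1f}")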

    6. Information regarding previous INCITE awards including selected highlights

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Information regarding previous INCITE awards, including selected highlights, is available from the U.S. DOE Office of Science Advanced Scientific Computing Research (ASCR) program.

    7. Open-cycle ocean thermal energy conversion surface-condenser design analysis and computer program

      SciTech Connect (OSTI)

      Panchal, C.B.; Rabas, T.J.

      1991-05-01

      This report documents a computer program for designing a surface condenser that condenses low-pressure steam in an ocean thermal energy conversion (OTEC) power plant. The primary emphasis is on the open-cycle (OC) OTEC power system, although the same condenser design can be used for conventional and hybrid cycles because of their highly similar operating conditions. In an OC-OTEC system, the pressure level is very low (deep vacuums), temperature differences are small, and the inlet noncondensable gas concentrations are high. Because current condenser designs, such as the shell-and-tube, are not adequate for such conditions, a plate-fin configuration is selected. This design can be implemented in aluminum, which makes it very cost-effective when compared with other state-of-the-art vacuum steam condenser designs. Support for selecting a plate-fin heat exchanger for OC-OTEC steam condensation can be found in the sizing (geometric details) and rating (heat transfer and pressure drop) calculations presented. These calculations are then used in a computer program to obtain all the necessary thermal performance details for developing design specifications for a plate-fin steam condenser. 20 refs., 5 figs., 5 tabs.

    8. Cogeneration: Economic and technical analysis. (Latest citations from the INSPEC - The Database for Physics, Electronics, and Computing). Published Search

      SciTech Connect (OSTI)

      Not Available

      1993-11-01

      The bibliography contains citations concerning economic and technical analyses of cogeneration systems. Topics include electric power generation, industrial cogeneration, use by utilities, and fuel cell cogeneration. The citations explore steam power station, gas turbine and steam turbine technology, district heating, refuse derived fuels, environmental effects and regulations, bioenergy and solar energy conversion, waste heat and waste product recycling, and performance analysis. (Contains a minimum of 104 citations and includes a subject term index and title list.)

    9. Large-Scale Compute-Intensive Analysis via a Combined In-situ and Co-scheduling Workflow Approach

      SciTech Connect (OSTI)

      Messer, Bronson; Sewell, Christopher; Heitmann, Katrin; Finkel, Dr. Hal J; Fasel, Patricia; Zagaris, George; Pope, Adrian; Habib, Salman; Parete-Koon, Suzanne T

      2015-01-01

      Large-scale simulations can produce tens of terabytes of data per analysis cycle, complicating and limiting the efficiency of workflows. Traditionally, outputs are stored on the file system and analyzed in post-processing. With the rapidly increasing size and complexity of simulations, this approach faces an uncertain future. Trending techniques consist of performing the analysis in situ, utilizing the same resources as the simulation, and/or off-loading subsets of the data to a compute-intensive analysis system. We introduce an analysis framework developed for HACC, a cosmological N-body code, that uses both in situ and co-scheduling approaches for handling Petabyte-size outputs. An initial in situ step is used to reduce the amount of data to be analyzed, and to separate out the data-intensive tasks handled off-line. The analysis routines are implemented using the PISTON/VTK-m framework, allowing a single implementation of an algorithm that simultaneously targets a variety of GPU, multi-core, and many-core architectures.
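
      The two-tier workflow reduces to a simple pattern: each simulation step is collapsed in situ to a small product (here, a density histogram), and only that product reaches the separate analysis stage. The sketch below invents all sizes and names and is not the HACC/PISTON implementation.

        import numpy as np

        rng = np.random.default_rng(3)

        def simulation_steps(n_steps=5, n_particles=200_000):
            """Stand-in for an N-body code emitting particle positions per step."""
            for step in range(n_steps):
                yield step, rng.random((n_particles, 3))   # raw, too big to keep

        def in_situ_reduce(positions, bins=32):
            """Runs on the simulation resource: collapse particles to a 3-D grid."""
            hist, _ = np.histogramdd(positions, bins=bins, range=[(0, 1)] * 3)
            return hist                                    # only ~bins^3 floats survive

        def offline_analysis(reduced):
            """Runs on the analysis resource: works only with the reduced product."""
            return float(reduced.max() / reduced.mean())   # density contrast

        for step, raw in simulation_steps():
            reduced = in_situ_reduce(raw)                  # in situ: raw data never stored
            print(f"step {step}: peak/mean density = {offline_analysis(reduced):.2f}")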

    10. CORCON-MOD3: An integrated computer model for analysis of molten core-concrete interactions. User's manual

      SciTech Connect (OSTI)

      Bradley, D.R.; Gardner, D.R.; Brockmann, J.E.; Griffith, R.O.

      1993-10-01

      The CORCON-Mod3 computer code was developed to mechanistically model the important core-concrete interaction phenomena, including those phenomena relevant to the assessment of containment failure and radionuclide release. The code can be applied to a wide range of severe accident scenarios and reactor plants. The code represents the current state of the art for simulating core debris interactions with concrete. This document comprises the user's manual and gives a brief description of the models and the assumptions and limitations in the code. Also discussed are the input parameters and the code output. Two sample problems are also given.

    11. FRAP-T6: a computer code for the transient analysis of oxide fuel rods. [PWR; BWR]

      SciTech Connect (OSTI)

      Siefken, L.J.; Shah, V.N.; Berna, G.A.; Hohorst, J.K.

      1983-06-01

      FRAP-T6 is a computer code which is being developed to calculate the transient behavior of a light water reactor fuel rod. This report is an addendum to the FRAP-T6/MOD0 user's manual which provides the additional user information needed to use FRAP-T6/MOD1. This includes model changes, improvements, and additions, coding changes and improvements, changes in input and control language, and example problem solutions to aid the user. This information is designed to supplement the FRAP-T6/MOD0 user's manual.

    12. Inference of tumor evolution during chemotherapy by computational modeling and in situ analysis of genetic and phenotypic cellular diversity

      DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

      Almendro, Vanessa; Cheng, Yu -Kang; Randles, Amanda; Itzkovitz, Shalev; Marusyk, Andriy; Ametller, Elisabet; Gonzalez-Farre, Xavier; Muñoz, Montse; Russnes, Hege  G.; Helland, Åslaug; et al

      2014-02-01

      Cancer therapy exerts a strong selection pressure that shapes tumor evolution, yet our knowledge of how tumors change during treatment is limited. Here, we report the analysis of cellular heterogeneity for genetic and phenotypic features and their spatial distribution in breast tumors pre- and post-neoadjuvant chemotherapy. We found that intratumor genetic diversity was tumor-subtype specific, and it did not change during treatment in tumors with partial or no response. However, lower pretreatment genetic diversity was significantly associated with pathologic complete response. In contrast, phenotypic diversity was different between pre- and post-treatment samples. We also observed significant changes in the spatial distribution of cells with distinct genetic and phenotypic features. We used these experimental data to develop a stochastic computational model to infer tumor growth patterns and evolutionary dynamics. Our results highlight the importance of integrated analysis of genotypes and phenotypes of single cells in intact tissues to predict tumor evolution.

    13. Inference of tumor evolution during chemotherapy by computational modeling and in situ analysis of genetic and phenotypic cellular diversity

      SciTech Connect (OSTI)

      Almendro, Vanessa; Cheng, Yu -Kang; Randles, Amanda; Itzkovitz, Shalev; Marusyk, Andriy; Ametller, Elisabet; Gonzalez-Farre, Xavier; Muñoz, Montse; Russnes, Hege  G.; Helland, Åslaug; Rye, Inga  H.; Borresen-Dale, Anne -Lise; Maruyama, Reo; van Oudenaarden, Alexander; Dowsett, Mitchell; Jones, Robin  L.; Reis-Filho, Jorge; Gascon, Pere; Gönen, Mithat; Michor, Franziska; Polyak, Kornelia

      2014-02-01

      Cancer therapy exerts a strong selection pressure that shapes tumor evolution, yet our knowledge of how tumors change during treatment is limited. Here, we report the analysis of cellular heterogeneity for genetic and phenotypic features and their spatial distribution in breast tumors pre- and post-neoadjuvant chemotherapy. We found that intratumor genetic diversity was tumor-subtype specific, and it did not change during treatment in tumors with partial or no response. However, lower pretreatment genetic diversity was significantly associated with pathologic complete response. In contrast, phenotypic diversity was different between pre- and post-treatment samples. We also observed significant changes in the spatial distribution of cells with distinct genetic and phenotypic features. We used these experimental data to develop a stochastic computational model to infer tumor growth patterns and evolutionary dynamics. Our results highlight the importance of integrated analysis of genotypes and phenotypes of single cells in intact tissues to predict tumor evolution.

    14. Scalable Computational Methods for the Analysis of High-Throughput Biological Data

      SciTech Connect (OSTI)

      Langston, Michael A

      2012-09-06

      The primary focus of this research project is elucidating genetic regulatory mechanisms that control an organism's responses to low-dose ionizing radiation. Although low doses (at most ten centigrays) are not lethal to humans, they elicit a highly complex physiological response, with the ultimate outcome in terms of risk to human health unknown. The tools of molecular biology and computational science will be harnessed to study coordinated changes in gene expression that orchestrate the mechanisms a cell uses to manage the radiation stimulus. High performance implementations of novel algorithms that exploit the principles of fixed-parameter tractability will be used to extract gene sets suggestive of co-regulation. Genomic mining will be performed to scrutinize, winnow and highlight the most promising gene sets for more detailed investigation. The overall goal is to increase our understanding of the health risks associated with exposures to low levels of radiation.
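
      The graph-algorithmic step behind this line of work can be illustrated by thresholding a gene-gene correlation matrix and enumerating maximal cliques as candidate co-regulated sets. The toy below plants one correlated block in random expression data; the project's actual fixed-parameter-tractable solvers scale far beyond this naive enumeration, and the threshold is an assumption.

        import numpy as np
        import networkx as nx

        rng = np.random.default_rng(11)
        expr = rng.normal(size=(30, 12))            # 30 genes x 12 conditions
        expr[:5] += 3.0 * rng.normal(size=12)       # plant one co-regulated block

        corr = np.corrcoef(expr)                    # gene-gene correlation matrix
        n = expr.shape[0]
        G = nx.Graph()
        G.add_nodes_from(range(n))
        for i in range(n):
            for j in range(i + 1, n):
                if abs(corr[i, j]) >= 0.7:          # similarity threshold (assumed)
                    G.add_edge(i, j)

        cliques = [sorted(c) for c in nx.find_cliques(G) if len(c) >= 4]
        print("candidate co-regulated gene sets:", cliques)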

    15. Intelligent Computing System for Reservoir Analysis and Risk Assessment of Red River Formation, Class Revisit

      SciTech Connect (OSTI)

      Sippel, Mark A.

      2002-09-24

      Integrated software was written that comprised the tool kit for the Intelligent Computing System (ICS). The software tools in ICS are for evaluating reservoir and hydrocarbon potential from various seismic, geologic and engineering data sets. The ICS tools provided a means for logical and consistent reservoir characterization. The tools can be broadly characterized as (1) clustering tools, (2) neural solvers, (3) multiple-linear regression, (4) entrapment-potential calculator and (5) combining tools. A flexible approach can be used with the ICS tools. They can be used separately or in a series to make predictions about a desired reservoir objective. The tools in ICS are primarily designed to correlate relationships between seismic information and data obtained from wells; however, it is possible to work with well data alone.

    16. Computational fluid dynamics analysis of a wire-feed, high-velocity oxygen-fuel (HVOF) thermal spray torch

      SciTech Connect (OSTI)

      Lopez, A.R.; Hassan, B.; Oberkampf, W.L.; Neiser, R.A.; Roemer, T.J.

      1996-09-01

      The fluid and particle dynamics of a High-Velocity Oxygen-Fuel Thermal Spray torch are analyzed using computational and experimental techniques. Three-dimensional Computational Fluid Dynamics (CFD) results are presented for a curved aircap used for coating interior surfaces such as engine cylinder bores. The device analyzed is similar to the Metco Diamond Jet Rotating Wire (DJRW) torch. The feed gases are injected through an axisymmetric nozzle into the curved aircap. Premixed propylene and oxygen are introduced from an annulus in the nozzle, while cooling air is injected between the nozzle and the interior wall of the aircap. The combustion process is modeled using a single-step finite-rate chemistry model with a total of 9 gas species which includes dissociation of combustion products. A continually-fed steel wire passes through the center of the nozzle and melting occurs at a conical tip near the exit of the aircap. Wire melting is simulated computationally by injecting liquid steel particles into the flow field near the tip of the wire. Experimental particle velocity measurements during wire feed were also taken using a Laser Two-Focus (L2F) velocimeter system. Flow fields inside and outside the aircap are presented and particle velocity predictions are compared with experimental measurements outside of the aircap.

    17. Computer hardware fault administration

      DOE Patents [OSTI]

      Archer, Charles J.; Megerian, Mark G.; Ratterman, Joseph D.; Smith, Brian E.

      2010-09-14

      Computer hardware fault administration carried out in a parallel computer, where the parallel computer includes a plurality of compute nodes. The compute nodes are coupled for data communications by at least two independent data communications networks, where each data communications network includes data communications links connected to the compute nodes. Typical embodiments carry out hardware fault administration by identifying a location of a defective link in the first data communications network of the parallel computer and routing communications data around the defective link through the second data communications network of the parallel computer.

    18. The Use Of Computational Human Performance Modeling As Task Analysis Tool

      SciTech Connect (OSTI)

      Jacques Hugo; David Gertman

      2012-07-01

      During a review of the Advanced Test Reactor safety basis at the Idaho National Laboratory, human factors engineers identified ergonomic and human reliability risks involving the inadvertent exposure of a fuel element to the air during manual fuel movement and inspection in the canal. There were clear indications that these risks increased the probability of human error and possible severe physical outcomes to the operator. In response to this concern, a detailed study was conducted to determine the probability of the inadvertent exposure of a fuel element. Due to practical and safety constraints, the task network analysis technique was employed to study the work procedures at the canal. Discrete-event simulation software was used to model the entire procedure as well as the salient physical attributes of the task environment, such as distances walked, the effect of dropped tools, the effect of hazardous body postures, and physical exertion due to strenuous tool handling. The model also allowed analysis of the effect of cognitive processes such as visual perception demands, auditory information and verbal communication. The model made it possible to obtain reliable predictions of operator performance and workload estimates. It was also found that operator workload as well as the probability of human error in the fuel inspection and transfer task were influenced by the concurrent nature of certain phases of the task and the associated demand on cognitive and physical resources. More importantly, it was possible to determine with reasonable accuracy the stages as well as physical locations in the fuel handling task where operators would be most at risk of losing their balance and falling into the canal. The model also provided sufficient information for a human reliability analysis that indicated that the postulated fuel exposure accident was less than credible.

    19. Computational Study and Analysis of Structural Imperfections in 1D and 2D Photonic Crystals

      SciTech Connect (OSTI)

      K.R. Maskaly

      2005-06-01

      Dielectric reflectors that are periodic in one or two dimensions, also known as 1D and 2D photonic crystals, have been widely studied for many potential applications due to the presence of wavelength-tunable photonic bandgaps. However, the unique optical behavior of photonic crystals is based on theoretical models of perfect analogues. Little is known about the practical effects of dielectric imperfections on their technologically useful optical properties. In order to address this issue, a finite-difference time-domain (FDTD) code is employed to study the effect of three specific dielectric imperfections in 1D and 2D photonic crystals. The first imperfection investigated is dielectric interfacial roughness in quarter-wave tuned 1D photonic crystals at normal incidence. This study reveals that the reflectivity of some roughened photonic crystal configurations can change up to 50% at the center of the bandgap for RMS roughness values around 20% of the characteristic periodicity of the crystal. However, this reflectivity change can be mitigated by increasing the index contrast and/or the number of bilayers in the crystal. In order to explain these results, the homogenization approximation, which is usually applied to single rough surfaces, is applied to the quarter-wave stacks. The results of the homogenization approximation match the FDTD results extremely well, suggesting that the main role of the roughness features is to grade the refractive index profile of the interfaces in the photonic crystal rather than diffusely scatter the incoming light. This result also implies that the amount of incoherent reflection from the roughened quarter-wave stacks is extremely small. This is confirmed through direct extraction of the amount of incoherent power from the FDTD calculations. Further FDTD studies are done on the entire normal incidence bandgap of roughened 1D photonic crystals. These results reveal a narrowing and red-shifting of the normal incidence bandgap with increasing RMS roughness. Again, the homogenization approximation is able to predict these results. The problem of surface scratches on 1D photonic crystals is also addressed. Although the reflectivity decreases are lower in this study, up to a 15% change in reflectivity is observed in certain scratched photonic crystal structures. However, this reflectivity change can be significantly decreased by adding a low index protective coating to the surface of the photonic crystal. Again, application of homogenization theory to these structures confirms its predictive power for this type of imperfection as well. Additionally, the problem of circular pores in 2D photonic crystals is investigated, showing that almost a 50% change in reflectivity can occur for some structures. Furthermore, this study reveals trends that are consistent with the 1D simulations: parameter changes that increase the absolute reflectivity of the photonic crystal will also increase its tolerance to structural imperfections. Finally, experimental reflectance spectra from roughened 1D photonic crystals are compared to the results predicted computationally in this thesis. Both the computed and experimental spectra correlate favorably, validating the findings presented herein.

    20. User's manual for RATEPAC: a digital-computer program for revenue requirements and rate-impact analysis

      SciTech Connect (OSTI)

      Fuller, L.C.

      1981-09-01

      The RATEPAC computer program is designed to model the financial aspects of an electric power plant or other investment requiring capital outlays and having annual operating expenses. The program produces incremental pro forma financial statements showing how an investment will affect the overall financial statements of a business entity. The code accepts parameters required to determine capital investment and expense as a function of time and sums these to determine minimum revenue requirements (cost of service). The code also calculates present worth of revenue requirements and required return on rate base. This user's manual includes a general description of the code as well as the instructions for input data preparation. A complete example case is appended.

    1. Computing and Computational Sciences Directorate - Computer Science...

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Computer Science and Mathematics Division The Computer Science and Mathematics Division (CSMD) is ORNL's premier source of basic and applied research in high-performance computing, ...

    2. Computational Science and Engineering

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Computational Science and Engineering NETL's Computational Science and Engineering competency consists of conducting applied scientific research and developing physics-based simulation models, methods, and tools to support the development and deployment of novel process and equipment designs. Research includes advanced computations to generate information beyond the reach of experiments alone by integrating experimental and computational sciences across different length and time scales. Specific

    3. Polymorphous computing fabric

      DOE Patents [OSTI]

      Wolinski, Christophe Czeslaw; Gokhale, Maya B.; McCabe, Kevin Peter

      2011-01-18

      Fabric-based computing systems and methods are disclosed. A fabric-based computing system can include a polymorphous computing fabric that can be customized on a per application basis and a host processor in communication with said polymorphous computing fabric. The polymorphous computing fabric includes a cellular architecture that can be highly parameterized to enable a customized synthesis of fabric instances for a variety of enhanced application performances thereof. A global memory concept can also be included that provides the host processor random access to all variables and instructions associated with the polymorphous computing fabric.

    4. Extensible Computational Chemistry Environment

      Energy Science and Technology Software Center (OSTI)

      2012-08-09

      ECCE provides a sophisticated graphical user interface, scientific visualization tools, and the underlying data management framework enabling scientists to efficiently set up calculations and store, retrieve, and analyze the rapidly growing volumes of data produced by computational chemistry studies. ECCE was conceived as part of the Environmental Molecular Sciences Laboratory construction to solve the problem of researchers being able to effectively utilize complex computational chemistry codes and massively parallel high performance compute resources. Bringing the power of these codes and resources to the desktops of researchers, and thus enabling world class research without users needing a detailed understanding of the inner workings of either the theoretical codes or the supercomputers needed to run them, was a grand challenge problem in the original version of the EMSL. ECCE allows collaboration among researchers using a web-based data repository where the inputs and results for all calculations done within ECCE are organized. ECCE is a first-of-its-kind end-to-end problem solving environment for all phases of computational chemistry research: setting up calculations with sophisticated GUI and direct manipulation visualization tools, submitting and monitoring calculations on remote high performance supercomputers without having to be familiar with the details of using these compute resources, and performing results visualization and analysis including creating publication quality images. ECCE is a suite of tightly integrated applications that are employed as the user moves through the modeling process.

    5. Computation & Simulation > Theory & Computation > Research >...

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Computation & Simulation: Extensive combinatorial results and ongoing basic...

    6. Parallel computation safety analysis irradiation targets fission product molybdenum in neutronic aspect using the successive over-relaxation algorithm

      SciTech Connect (OSTI)

      Susmikanti, Mike; Dewayatna, Winter; Sulistyo, Yos

      2014-09-30

      One of the research activities in support of the commercial radioisotope production program is safety research on FPM (Fission Product Molybdenum) target irradiation. FPM targets form a tube made of stainless steel which contains nuclear-grade high-enrichment uranium; the irradiation tube is intended to yield fission products. Fission products such as Mo{sup 99} are widely used in the form of kits in the medical world. Mo isotopes have relatively long half-lives, about 3 days (66 hours), so the delivery of radioisotopes to consumer centers and storage is possible, though still limited; production of this isotope therefore offers significant economic value. The neutronics problem is solved using first-order perturbation theory derived from the diffusion equation for four groups. The criticality and flux in the multigroup diffusion model were calculated for various irradiation positions and uranium contents. This model involves complex computation with a large, sparse matrix system, and several parallel algorithms have been developed for solving such systems. In this paper, a successive over-relaxation (SOR) algorithm was implemented for the calculation of reactivity coefficients, which can be done in parallel; previous work performed the reactivity calculations serially with Gauss-Seidel iterations. The parallel method can be used to solve the multigroup diffusion equation system and to calculate the criticality and reactivity coefficients. In this research a computer code was developed that exploits parallel processing to perform the reactivity calculations used in safety analysis; the multicore computer system allows the calculations to be performed more quickly. This code was applied to the safety-limit calculations for irradiated FPM targets containing highly enriched uranium. The neutronic results show that for uranium contents of 1.7676 g and 6.1866 g (× 10{sup 6} cm{sup −1}) in a tube, the delta reactivities are still within safety limits; however, for 7.9542 g and 8.838 g (× 10{sup 6} cm{sup −1}) the limits were exceeded.

    7. Accelerated Aging of BKC 44306-10 Rigid Polyurethane Foam: FT-IR Spectroscopy, Dimensional Analysis, and Micro Computed Tomography

      SciTech Connect (OSTI)

      Gilbertson, Robert D.; Patterson, Brian M.; Smith, Zachary

      2014-01-02

      An accelerated aging study of BKC 44306-10 rigid polyurethane foam was carried out. Foam samples were aged in a nitrogen atmosphere at three different temperatures: 50 C, 65 C, and 80 C. Foam samples were periodically removed from the aging canisters at 1, 3, 6, 9, 12, and 15 month intervals, when FT-IR spectroscopy, dimensional analysis, and mechanical testing experiments were performed. Micro Computed Tomography imaging was also employed to study the morphology of the foams. Over the course of the aging study, the foams decreased in size by about 0.001 inches per inch of foam. Micro CT showed the heterogeneous nature of the foam structure, likely resulting from flow effects during the molding process. The effect of aging on the compression and tensile strength of the foam was minor and no cause for concern. FT-IR spectroscopy was used to follow the foam chemistry; however, it was difficult to draw definitive conclusions about changes in the chemical nature of the materials due to large variability throughout the samples.

    8. Computational analysis of a three-dimensional High-Velocity Oxygen-Fuel (HVOF) Thermal Spray torch

      SciTech Connect (OSTI)

      Hassan, B.; Lopez, A.R.; Oberkampf, W.L.

      1995-07-01

      An analysis of a High-Velocity Oxygen-Fuel Thermal Spray torch is presented using computational fluid dynamics (CFD). Three-dimensional CFD results are presented for a curved aircap used for coating interior surfaces such as engine cylinder bores. The device analyzed is similar to the Metco Diamond Jet Rotating Wire torch, but wire feed is not simulated. To the authors' knowledge, these are the first published 3-D results of a thermal spray device. The feed gases are injected through an axisymmetric nozzle into the curved aircap. Argon is injected through the center of the nozzle. Pre-mixed propylene and oxygen are introduced from an annulus in the nozzle, while cooling air is injected between the nozzle and the interior wall of the aircap. The combustion process is modeled assuming instantaneous chemistry. A standard, two-equation, K-{var_epsilon} turbulence model is employed for the turbulent flow field. An implicit, iterative, finite volume numerical technique is used to solve the coupled conservation of mass, momentum, and energy equations for the gas in a sequential manner. Flow fields inside and outside the aircap are presented and discussed.

    9. Argonne's Laboratory computing center - 2007 annual report.

      SciTech Connect (OSTI)

      Bair, R.; Pieper, G. W.

      2008-05-28

      Argonne National Laboratory founded the Laboratory Computing Resource Center (LCRC) in the spring of 2002 to help meet pressing program needs for computational modeling, simulation, and analysis. The guiding mission is to provide critical computing resources that accelerate the development of high-performance computing expertise, applications, and computations to meet the Laboratory's challenging science and engineering missions. In September 2002 the LCRC deployed a 350-node computing cluster from Linux NetworX to address Laboratory needs for mid-range supercomputing. This cluster, named 'Jazz', achieved over a teraflop of computing power (10{sup 12} floating-point calculations per second) on standard tests, making it the Laboratory's first terascale computing system and one of the 50 fastest computers in the world at the time. Jazz was made available to early users in November 2002 while the system was undergoing development and configuration. In April 2003, Jazz was officially made available for production operation. Since then, the Jazz user community has grown steadily. By the end of fiscal year 2007, there were over 60 active projects representing a wide cross-section of Laboratory expertise, including work in biosciences, chemistry, climate, computer science, engineering applications, environmental science, geoscience, information science, materials science, mathematics, nanoscience, nuclear engineering, and physics. Most important, many projects have achieved results that would have been unobtainable without such a computing resource. The LCRC continues to foster growth in the computational science and engineering capability and quality at the Laboratory. Specific goals include expansion of the use of Jazz to new disciplines and Laboratory initiatives, teaming with Laboratory infrastructure providers to offer more scientific data management capabilities, expanding Argonne staff use of national computing facilities, and improving the scientific reach and performance of Argonne's computational applications. Furthermore, recognizing that Jazz is fully subscribed, with considerable unmet demand, the LCRC has framed a 'path forward' for additional computing resources.

    10. Advanced Artificial Science. The development of an artificial science and engineering research infrastructure to facilitate innovative computational modeling, analysis, and application to interdisciplinary areas of scientific investigation.

      SciTech Connect (OSTI)

      Saffer, Shelley I.

      2014-12-01

      This is a final report of the DOE award DE-SC0001132, Advanced Artificial Science. The development of an artificial science and engineering research infrastructure to facilitate innovative computational modeling, analysis, and application to interdisciplinary areas of scientific investigation. This document describes the achievements of the goals, and resulting research made possible by this award.

    11. Kinetic analysis of the phenyl-shift reaction in β-O-4 lignin model compounds: A computational study.

      SciTech Connect (OSTI)

      Beste, Ariana; Buchanan III, A C

      2011-01-01

      The phenyl-shift reaction in β-phenethyl phenyl ether (β-PhCH{sub 2}CH{sub 2}OPh, β-PPE) is an integral step in the pyrolysis of PPE, which is a model compound for the β-O-4 linkage in lignin. We investigated the influence of naturally occurring substituents (hydroxy, methoxy) on the reaction rate by calculating relative rate constants using density functional theory in combination with transition state theory, including anharmonic corrections for low-frequency modes. The phenyl-shift reaction proceeds through an intermediate, and the overall rate constants were computed by invoking the steady-state approximation (its validity was confirmed). Substituents on the phenethyl group have little influence on the rate constants. If a methoxy substituent is located in the para position of the phenyl ring adjacent to the ether oxygen, the energies of the intermediate and the second transition state are lowered, but the overall rate constant is not significantly altered; this is a consequence of the first transition, from pre-complex to intermediate, dominating the overall rate constant. O- and di-o-methoxy substituents accelerate the phenyl-migration rate compared to β-PPE.

    12. Computational analysis of storage synthesis in developing Brassica napus L. (oilseed rape) embryos: Flux variability analysis in relation to 13C-metabolic flux analysis

      SciTech Connect (OSTI)

      Hay, J.; Schwender, J.

      2011-08-01

      Plant oils are an important renewable resource, and seed oil content is a key agronomical trait that is in part controlled by the metabolic processes within developing seeds. A large-scale model of cellular metabolism in developing embryos of Brassica napus (bna572) was used to predict biomass formation and to analyze metabolic steady states by flux variability analysis under different physiological conditions. Predicted flux patterns are highly correlated with results from prior 13C metabolic flux analysis of B. napus developing embryos. Minor differences from the experimental results arose because bna572 always selected only one sugar and one nitrogen source from the available alternatives, and failed to predict the use of the oxidative pentose phosphate pathway. Flux variability, indicative of alternative optimal solutions, revealed alternative pathways that can provide pyruvate and NADPH to plastidic fatty acid synthesis. The nutritional values of different medium substrates were compared based on the overall carbon conversion efficiency (CCE) for the biosynthesis of biomass. Although bna572 has a functional nitrogen assimilation pathway via glutamate synthase, the simulations predict an unexpected role of glycine decarboxylase operating in the direction of NH4+ assimilation. Analysis of the light-dependent improvement of carbon economy predicted two metabolic phases. At very low light levels small reductions in CO2 efflux can be attributed to enzymes of the tricarboxylic acid cycle (oxoglutarate dehydrogenase, isocitrate dehydrogenase) and glycine decarboxylase. At higher light levels relevant to the 13C flux studies, ribulose-1,5-bisphosphate carboxylase activity is predicted to account fully for the light-dependent changes in carbon balance.

    13. Development of an Extensible Computational Framework for Centralized Storage and Distributed Curation and Analysis of Genomic Data Genome-scale Metabolic Models

      SciTech Connect (OSTI)

      Stevens, Rick

      2010-08-01

      The DOE funded KBase project of the Stevens group at the University of Chicago was focused on four high-level goals: (i) improve extensibility, accessibility, and scalability of the SEED framework for genome annotation, curation, and analysis; (ii) extend the SEED infrastructure to support transcription regulatory network reconstructions (2.1), metabolic model reconstruction and analysis (2.2), assertions linked to data (2.3), eukaryotic annotation (2.4), and growth phenotype prediction (2.5); (iii) develop a web-API for programmatic remote access to SEED data and services; and (iv) application of all tools to bioenergy-related genomes and organisms. In response to these goals, we enhanced and improved the ModelSEED resource within the SEED to enable new modeling analyses, including improved model reconstruction and phenotype simulation. We also constructed a new website and web-API for the ModelSEED. Further, we constructed a comprehensive web-API for the SEED as a whole. We also made significant strides in building infrastructure in the SEED to support the reconstruction of transcriptional regulatory networks by developing a pipeline to identify sets of consistently expressed genes based on gene expression data. We applied this pipeline to 29 organisms, computing regulons which were subsequently stored in the SEED database and made available on the SEED website (http://pubseed.theseed.org). We developed a new pipeline and database for the use of kmers, or short 8-residue oligomer sequences, to annotate genomes at high speed. Finally, we developed the PlantSEED, or a new pipeline for annotating primary metabolism in plant genomes. All of the work performed within this project formed the early building blocks for the current DOE Knowledgebase system, and the kmer annotation pipeline, plant annotation pipeline, and modeling tools are all still in use in KBase today.

    14. Computing and Computational Sciences Directorate - Computer Science...

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Computer Science and Mathematics Division Citation: For exemplary administrative secretarial support to the Computer Science and Mathematics Division and to the ORNL ...

    15. Computational Structural Mechanics

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Computational Structural Mechanics Overview of CSM Computational structural mechanics is a well-established methodology for the design and analysis of many components and structures found in the transportation field. Modern finite-element models (FEMs) play a major role in these evaluations, and sophisticated software, such as the commercially available LS-DYNA® code, is

    16. Microsoft Word - NETL-TRS-X-2015_Field-Generated Foamed Cement Initial Collection, Computed Tomography, and Analysis.final.2015

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Field-Generated Foamed Cement: Initial Collection, Computed Tomography, and Analysis 20 July 2015 Office of Fossil Energy NETL-TRS-5-2015 Disclaimer This report was prepared as an account of work sponsored by an agency of the United States Government. Neither the United States Government nor any agency thereof, nor any of their employees, makes any warranty, express or implied, or assumes any legal liability or responsibility for the accuracy, completeness, or usefulness of any information,

    17. Evaluation of computer-based ultrasonic inservice inspection systems

      SciTech Connect (OSTI)

      Harris, R.V. Jr.; Angel, L.J.; Doctor, S.R.; Park, W.R.; Schuster, G.J.; Taylor, T.T.

      1994-03-01

      This report presents the principles, practices, terminology, and technology of computer-based ultrasonic testing for inservice inspection (UT/ISI) of nuclear power plants, with extensive use of drawings, diagrams, and UT images. The presentation is technical but assumes limited specific knowledge of ultrasonics or computers. The report is divided into 9 sections covering conventional UT, computer-based UT, and evaluation methodology. Conventional UT topics include coordinate axes, scanning, instrument operation, RF and video signals, and A-, B-, and C-scans. Computer-based topics include sampling, digitization, signal analysis, image presentation, SAFT, ultrasonic holography, transducer arrays, and data interpretation. An evaluation methodology for computer-based UT/ISI systems is presented, including questions, detailed procedures, and test block designs. Brief evaluations of several computer-based UT/ISI systems are given; supplementary volumes will provide detailed evaluations of selected systems.

    18. Scientific computations section monthly report, November 1993

      SciTech Connect (OSTI)

      Buckner, M.R.

      1993-12-30

      This progress report from the Savannah River Technology Center contains abstracts of papers from the computational modeling, applied statistics, applied physics, experimental thermal hydraulics, and packaging and transportation groups. Specific topics covered include engineering modeling and process simulation, criticality methods and analysis, and plutonium disposition.

    19. Compute nodes

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Compute nodes Compute nodes A more detailed hierarchical map of the topology of a compute node is available.

    20. Computer System,

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      undergraduate summer institute http://isti.lanl.gov (Educational Prog) 2016 Computer System, Cluster, and Networking Summer Institute Purpose The Computer System,...

    1. Exascale Computing

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Exascale Computing Moving forward into the exascale era, ...

    2. Scalable optical quantum computer

      SciTech Connect (OSTI)

      Manykin, E A; Mel'nichenko, E V [Institute for Superconductivity and Solid-State Physics, Russian Research Centre 'Kurchatov Institute', Moscow (Russian Federation)

      2014-12-31

      A way of designing a scalable optical quantum computer based on the photon echo effect is proposed. Individual rare earth ions Pr{sup 3+}, regularly located in the lattice of the orthosilicate (Y{sub 2}SiO{sub 5}) crystal, are suggested to be used as optical qubits. Operations with qubits are performed using coherent and incoherent laser pulses. The operation protocol includes both the method of measurement-based quantum computations and the technique of optical computations. Modern hybrid photon echo protocols, which provide a sufficient quantum efficiency when reading recorded states, are considered as most promising for quantum computations and communications. (quantum computer)

    3. COMPUTATIONAL SCIENCE CENTER

      SciTech Connect (OSTI)

      DAVENPORT, J.

      2005-11-01

      The Brookhaven Computational Science Center brings together researchers in biology, chemistry, physics, and medicine with applied mathematicians and computer scientists to exploit the remarkable opportunities for scientific discovery which have been enabled by modern computers. These opportunities are especially great in computational biology and nanoscience, but extend throughout science and technology and include, for example, nuclear and high energy physics, astrophysics, materials and chemical science, sustainable energy, environment, and homeland security. To achieve our goals we have established a close alliance with applied mathematicians and computer scientists at Stony Brook and Columbia Universities.

    4. A Systematic Comprehensive Computational Model for Stake Estimation in Mission Assurance: Applying Cyber Security Econometrics System (CSES) to Mission Assurance Analysis Protocol (MAAP)

      SciTech Connect (OSTI)

      Abercrombie, Robert K; Sheldon, Frederick T; Grimaila, Michael R

      2010-01-01

      In earlier works, we presented a computational infrastructure that allows an analyst to estimate the security of a system in terms of the loss that each stakeholder stands to sustain as a result of security breakdowns. In this paper, we discuss how this infrastructure can be used in the subject domain of mission assurance, defined as the full life-cycle engineering process that identifies and mitigates design, production, test, and field support deficiencies threatening mission success. We address the opportunity to apply the Cyberspace Security Econometrics System (CSES) to the Carnegie Mellon University Software Engineering Institute's Mission Assurance Analysis Protocol (MAAP) in this context.

    5. NERSC Enhances PDSF, Genepool Computing Capabilities

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      NERSC Enhances PDSF, Genepool Computing Capabilities Linux cluster expansion speeds data access and analysis January 3, 2014 Christmas came early for...

    6. Initial Business Case Analysis of Two Integrated Heat Pump HVAC Systems for Near-Zero-Energy Homes -- Update to Include Analyses of an Economizer Option and Alternative Winter Water Heating Control Option

      SciTech Connect (OSTI)

      Baxter, Van D

      2006-12-01

      The long range strategic goal of the Department of Energy's Building Technologies (DOE/BT) Program is to create, by 2020, technologies and design approaches that enable the construction of net-zero energy homes at low incremental cost (DOE/BT 2005). A net zero energy home (NZEH) is a residential building with greatly reduced needs for energy through efficiency gains, with the balance of energy needs supplied by renewable technologies. While initially focused on new construction, these technologies and design approaches are intended to have application to buildings constructed before 2020 as well, resulting in substantial reductions in energy use for all building types and ages. DOE/BT's Emerging Technologies (ET) team is working to support this strategic goal by identifying and developing advanced heating, ventilating, air-conditioning, and water heating (HVAC/WH) technology options applicable to NZEHs. Although the energy efficiency of heating, ventilating, and air-conditioning (HVAC) equipment has increased substantially in recent years, new approaches are needed to continue this trend. Dramatic efficiency improvements are necessary to enable progress toward the NZEH goals, and will require a radical rethinking of opportunities to improve system performance. The large reductions in HVAC energy consumption necessary to support the NZEH goals require a systems-oriented analysis approach that characterizes each element of energy consumption, identifies alternatives, and determines the most cost-effective combination of options. In particular, HVAC equipment must be developed that addresses the range of special needs of NZEH applications in the areas of reduced HVAC and water heating energy use, humidity control, ventilation, uniform comfort, and ease of zoning. In FY05 ORNL conducted an initial Stage 1 (Applied Research) scoping assessment of HVAC/WH system options for future NZEHs to help DOE/BT identify and prioritize alternative approaches for further development. Eleven system concepts with central air distribution ducting and nine multi-zone systems were selected, and their annual and peak demand performance was estimated for five locations: Atlanta (mixed-humid), Houston (hot-humid), Phoenix (hot-dry), San Francisco (marine), and Chicago (cold). Performance was estimated by simulating the systems using the TRNSYS simulation engine (Solar Energy Laboratory et al. 2006) in two 1800-ft{sup 2} houses--a Building America (BA) benchmark house and a prototype NZEH taken from BEopt results at the take-off (or crossover) point (i.e., a house incorporating those design features such that further progress toward ZEH is through the addition of photovoltaic power sources, as determined by current BEopt analyses conducted by NREL). Results were summarized in a project report, HVAC Equipment Design Options for Near-Zero-Energy Homes--A Stage 2 Scoping Assessment, ORNL/TM-2005/194 (Baxter 2005). The 2005 study report describes the HVAC options considered, the ranking criteria used, and the system rankings by priority. In 2006, the two top-ranked options from the 2005 study, air-source and ground-source versions of an integrated heat pump (IHP) system, were subjected to an initial business case study. The IHPs were subjected to a more rigorous hourly-based assessment of their performance potential compared to a baseline suite of equipment of minimum legal efficiency that provided the same heating, cooling, water heating, demand dehumidification, and ventilation services as the IHPs.
Results were summarized in a project report, Initial Business Case Analysis of Two Integrated Heat Pump HVAC Systems for Near-Zero-Energy Homes, ORNL/TM-2006/130 (Baxter 2006). The present report is an update to that document. Its primary purpose is to summarize results of an analysis of the potential of adding an outdoor air economizer operating mode to the IHPs to take advantage of free cooling (using outdoor air to cool the house) whenever possible. In addition it provides some additional detail for an alternative winter water heating/space heating (WH/SH) control strategy briefly described in the original report and corrects some minor errors.

    7. Computing Resources

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Computing Resources The TRACC Computational Clusters With the addition of a new cluster called Zephyr that was made operational in September of this year (2012), TRACC now offers two clusters to choose from: Zephyr and our original cluster that has now been named Phoenix. Zephyr was acquired from Atipa technologies, and it is a 92-node system with each node having two AMD

    8. computational-fluid-dynamics-training

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Table of Contents (course, date, location):
      Advanced Hydraulic and Aerodynamic Analysis Using CFD (March 27-28, 2013; Argonne TRACC, Argonne, IL)
      Computational Hydraulics and Aerodynamics using STAR-CCM+ for CFD Analysis (March 21-22, 2012; Argonne TRACC, Argonne, IL)
      Computational Hydraulics and Aerodynamics using STAR-CCM+ for CFD Analysis (March 30-31, 2011; Argonne TRACC, Argonne, IL)
      Computational Hydraulics for Transportation Workshop (September 23-24, 2009; Argonne TRACC, West Chicago, IL)

    9. Compute Nodes

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Compute Nodes Compute Nodes Quad-core AMD Opteron processor. Compute Node Configuration: 9,572 nodes; 1 quad-core AMD 'Budapest' 2.3 GHz processor per node; 4 cores per node (38,288 total cores); 8 GB DDR3 800 MHz memory per node. Peak Gflop rate: 9.2 Gflops/core, 36.8 Gflops/node, 352 Tflops for the entire machine. Each core has its own L1 and L2 caches, with 64 KB and 512 KB respectively; a 2 MB L3 cache is shared among the 4 cores. Compute Node Software: By default the compute nodes run a restricted low-overhead

    10. Argonne's Laboratory computing resource center : 2006 annual report.

      SciTech Connect (OSTI)

      Bair, R. B.; Kaushik, D. K.; Riley, K. R.; Valdes, J. V.; Drugan, C. D.; Pieper, G. P.

      2007-05-31

      Argonne National Laboratory founded the Laboratory Computing Resource Center (LCRC) in the spring of 2002 to help meet pressing program needs for computational modeling, simulation, and analysis. The guiding mission is to provide critical computing resources that accelerate the development of high-performance computing expertise, applications, and computations to meet the Laboratory's challenging science and engineering missions. In September 2002 the LCRC deployed a 350-node computing cluster from Linux NetworX to address Laboratory needs for mid-range supercomputing. This cluster, named 'Jazz', achieved over a teraflop of computing power (10{sup 12} floating-point calculations per second) on standard tests, making it the Laboratory's first terascale computing system and one of the 50 fastest computers in the world at the time. Jazz was made available to early users in November 2002 while the system was undergoing development and configuration. In April 2003, Jazz was officially made available for production operation. Since then, the Jazz user community has grown steadily. By the end of fiscal year 2006, there were 76 active projects on Jazz involving over 380 scientists and engineers. These projects represent a wide cross-section of Laboratory expertise, including work in biosciences, chemistry, climate, computer science, engineering applications, environmental science, geoscience, information science, materials science, mathematics, nanoscience, nuclear engineering, and physics. Most important, many projects have achieved results that would have been unobtainable without such a computing resource. The LCRC continues to foster growth in the computational science and engineering capability and quality at the Laboratory. Specific goals include expansion of the use of Jazz to new disciplines and Laboratory initiatives, teaming with Laboratory infrastructure providers to offer more scientific data management capabilities, expanding Argonne staff use of national computing facilities, and improving the scientific reach and performance of Argonne's computational applications. Furthermore, recognizing that Jazz is fully subscribed, with considerable unmet demand, the LCRC has framed a 'path forward' for additional computing resources.

    11. Multi-processor including data flow accelerator module

      DOE Patents [OSTI]

      Davidson, George S.; Pierce, Paul E.

      1990-01-01

      An accelerator module for a data flow computer includes an intelligent memory. The module is added to a multiprocessor arrangement and uses a shared tagged memory architecture in the data flow computer. The intelligent memory module assigns locations for holding data values in correspondence with arcs leading to a node in a data dependency graph. Each primitive computation is associated with a corresponding memory cell, including a number of slots for operands needed to execute a primitive computation, a primitive identifying pointer, and linking slots for distributing the result of the cell computation to other cells requiring that result as an operand. Circuitry is provided for utilizing tag bits to determine automatically when all operands required by a processor are available and for scheduling the primitive for execution in a queue. Each memory cell of the module may be associated with any of the primitives, and the particular primitive to be executed by the processor associated with the cell is identified by providing an index, such as the cell number for the primitive, to the primitive lookup table of starting addresses. The module thus serves to perform functions previously performed by a number of sections of data flow architectures and coexists with conventional shared memory therein. A multiprocessing system including the module operates in a hybrid mode, wherein the same processing modules are used to perform some processing in a sequential mode, under immediate control of an operating system, while performing other processing in a data flow mode.

    12. Computational Analysis of the Pyrolysis of β-O-4 Lignin Model Compounds: Concerted vs. Homolytic Fragmentation

      SciTech Connect (OSTI)

      Clark, J. M.; Robichaud, D. J.; Nimlos, M. R.

      2012-01-01

      The thermochemical conversion of biomass to liquid transportation fuels is a very attractive technology for expanding the utilization of carbon-neutral processes and reducing dependency on fossil fuel resources. As with all such emerging technologies, biomass conversion through gasification or pyrolysis has a number of obstacles that need to be overcome to make these processes cost competitive with the refining of fossil fuels. Our current efforts have focused on the investigation of the thermochemistry of the linkages between lignin units using ab initio calculations on dimeric lignin model compounds. All calculations were carried out using M062X density functional theory with the 6-311++G(d,p) basis set. The M062X method has been shown to be consistent with the CBS-QB3 method while being significantly less computationally expensive. To date we have only completed the study on the β-O-4 compounds. The theoretical calculations performed in the study indicate that concerted elimination pathways dominate over bond homolysis reactions under typical pyrolysis conditions. However, this does not mean that concerted elimination will be the dominant loss process for lignin: bimolecular radical chemistry could very well dwarf the unimolecular pathways investigated in this study. These concerted pathways tend to form stable, reasonably non-reactive products that would be well suited to producing a fungible bio-oil for the production of liquid transportation fuels.

    13. Parallel computing in enterprise modeling.

      SciTech Connect (OSTI)

      Goldsby, Michael E.; Armstrong, Robert C.; Shneider, Max S.; Vanderveen, Keith; Ray, Jaideep; Heath, Zach; Allan, Benjamin A.

      2008-08-01

      This report presents the results of our efforts to apply high-performance computing to entity-based simulations with a multi-use plugin for parallel computing. We use the term 'entity-based simulation' to describe a class of simulation which includes both discrete event simulation and agent based simulation. What simulations of this class share, and what differs from more traditional models, is that the result sought is emergent from a large number of contributing entities. Logistics, economic, and social simulations are members of this class, where things or people are organized or self-organize to produce a solution. Entity-based problems never have an a priori ergodic principle that will greatly simplify calculations. Because the results of entity-based simulations can only be realized at scale, scalable computing is de rigueur for large problems. Having said that, the absence of a spatial organizing principle makes the decomposition of the problem onto processors problematic. In addition, practitioners in this domain commonly use the Java programming language, which presents its own problems in a high-performance setting. The plugin we have developed, called the Parallel Particle Data Model, overcomes both of these obstacles and is now being used by two Sandia frameworks: the Decision Analysis Center and the Seldon social simulation facility. While the ability to engage U.S.-sized problems is now available to the Decision Analysis Center, this plugin is central to the success of Seldon. Because Seldon relies on computationally intensive cognitive sub-models, this work is necessary to achieve the scale necessary for realistic results. With the recent upheavals in the financial markets, and the inscrutability of terrorist activity, this simulation domain will likely need a capability with ever greater fidelity. High-performance computing will play an important part in enabling that greater fidelity.

    14. Synchronizing compute node time bases in a parallel computer

      DOE Patents [OSTI]

      Chen, Dong; Faraj, Daniel A; Gooding, Thomas M; Heidelberger, Philip

      2015-01-27

      Synchronizing time bases in a parallel computer that includes compute nodes organized for data communications in a tree network, where one compute node is designated as a root, and, for each compute node: calculating data transmission latency from the root to the compute node; configuring a thread as a pulse waiter; initializing a wakeup unit; and performing a local barrier operation; upon each node completing the local barrier operation, entering, by all compute nodes, a global barrier operation; upon all nodes entering the global barrier operation, sending, to all the compute nodes, a pulse signal; and for each compute node upon receiving the pulse signal: waking, by the wakeup unit, the pulse waiter; setting a time base for the compute node equal to the data transmission latency between the root node and the compute node; and exiting the global barrier operation.

    15. Synchronizing compute node time bases in a parallel computer

      DOE Patents [OSTI]

      Chen, Dong; Faraj, Daniel A; Gooding, Thomas M; Heidelberger, Philip

      2014-12-30

      Synchronizing time bases in a parallel computer that includes compute nodes organized for data communications in a tree network, where one compute node is designated as a root, and, for each compute node: calculating data transmission latency from the root to the compute node; configuring a thread as a pulse waiter; initializing a wakeup unit; and performing a local barrier operation; upon each node completing the local barrier operation, entering, by all compute nodes, a global barrier operation; upon all nodes entering the global barrier operation, sending, to all the compute nodes, a pulse signal; and for each compute node upon receiving the pulse signal: waking, by the wakeup unit, the pulse waiter; setting a time base for the compute node equal to the data transmission latency between the root node and the compute node; and exiting the global barrier operation.

    16. Computing Events

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Laboratory (pdf) DOE/NNSA Laboratories Fulfill National Mission with Trinity and Cielo Petascale Computers (pdf) Exascale Co-design Center for Materials in Extreme...

    17. Computational Science

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      ... Advanced Materials Laboratory Center for Integrated Nanotechnologies Combustion Research Facility Computational Science Research Institute Joint BioEnergy Institute About EC News ...

    18. Computer Science

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      CiteSeer Department of Energy provided open access science research citations in chemistry, physics, materials, engineering, and computer science IEEE Xplore Full text...

    19. Computer Security

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Computer Security All JLF participants must fully comply with all LLNL computer security regulations and procedures. A laptop entering or leaving B-174 for the sole use by a US citizen and so configured, and requiring no IP address, need not be registered for use in the JLF. By September 2009, it is expected that computers for use by Foreign National Investigators will have no special provisions. Notify maricle1@llnl.gov of all other computers entering, leaving, or being moved within B-174. Use

    20. Computing and Computational Sciences Directorate - Divisions

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      CCSD Divisions Computational Sciences and Engineering Computer Sciences and Mathematics Information Technology Services Joint Institute for Computational Sciences National Center ...

    1. Computing and Computational Sciences Directorate - Contacts

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Home About Us Contacts Jeff Nichols Associate Laboratory Director Computing and Computational Sciences Becky Verastegui Directorate Operations Manager Computing and...

    2. Economic Model For a Return on Investment Analysis of United States Government High Performance Computing (HPC) Research and Development (R & D) Investment

      SciTech Connect (OSTI)

      Joseph, Earl C.; Conway, Steve; Dekate, Chirag

      2013-09-30

      This study investigated how high-performance computing (HPC) investments can improve economic success and increase scientific innovation. This research focused on the common good and provided uses for DOE, other government agencies, industry, and academia. The study created two unique economic models and an innovation index: (1) a macroeconomic model that depicts the way HPC investments result in economic advancements in the form of ROI in revenue (GDP), profits (and cost savings), and jobs; (2) a macroeconomic model that depicts the way HPC investments result in basic and applied innovations, looking at variations by sector, industry, country, and organization size; and (3) a new innovation index that provides a means of measuring and comparing innovation levels. Key findings of the pilot study include: IDC collected the required data across a broad set of organizations, with enough detail to create these models and the innovation index. The research also developed an expansive list of HPC success stories.

    3. Power throttling of collections of computing elements

      DOE Patents [OSTI]

      Bellofatto, Ralph E.; Coteus, Paul W.; Crumley, Paul G.; Gara, Alan G.; Giampapa, Mark E.; Gooding; Thomas M.; Haring, Rudolf A.; Megerian, Mark G.; Ohmacht, Martin; Reed, Don D.; Swetz, Richard A.; Takken, Todd

      2011-08-16

      An apparatus and method for controlling power usage in computers includes a plurality of computers communicating with a local control device, and a power source supplying power to the local control device and the computers. A plurality of sensors communicate with the computers for ascertaining their power usage, and a system control device communicates with the computers for controlling their power usage.

    4. Computing architecture for autonomous microgrids

      DOE Patents [OSTI]

      Goldsmith, Steven Y.

      2015-09-29

      A computing architecture that facilitates autonomously controlling operations of a microgrid is described herein. A microgrid network includes numerous computing devices that execute intelligent agents, each of which is assigned to a particular entity (load, source, storage device, or switch) in the microgrid. The intelligent agents can execute in accordance with predefined protocols to collectively perform computations that facilitate uninterrupted control of the microgrid.

    5. Computing architecture for autonomous microgrids

      DOE Patents [OSTI]

      Goldsmith, Steven Y.

      2015-09-29

      A computing architecture that facilitates autonomously controlling operations of a microgrid is described herein. A microgrid network includes numerous computing devices that execute intelligent agents, each of which is assigned to a particular entity (load, source, storage device, or switch) in the microgrid. The intelligent agents can execute in accordance with predefined protocols to collectively perform computations that facilitate uninterrupted control of the microgrid.

    6. Computer, Computational, and Statistical Sciences

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      ... Directed Research and Development (LDRD) Defense Advanced Research Projects Agency (DARPA) Defense Threat Reduction Agency (DTRA) Research Applied Computer Science Co-design ...

    7. Compute Nodes

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Compute Nodes Compute Nodes There are currently 2632 nodes available on PDSF. The compute (batch) nodes at PDSF are heterogeneous, reflecting the periodic procurement of new nodes (and the eventual retirement of old nodes). From the user's perspective they are essentially all equivalent except that some have more memory per job slot. If your jobs have memory requirements beyond the default maximum of 1.1GB you should specify that in your job submission and the batch system will run your job on an

    8. Compute Nodes

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Nodes Quad Core AMD Opteron processor Compute Node Configuration 9,572 nodes 1 quad-core AMD 'Budapest' 2.3 GHz processor per node 4 cores per node (38,288 total cores) 8 GB...

    9. Collectively loading an application in a parallel computer

      DOE Patents [OSTI]

      Aho, Michael E.; Attinella, John E.; Gooding, Thomas M.; Miller, Samuel J.; Mundy, Michael B.

      2016-01-05

      Collectively loading an application in a parallel computer, the parallel computer comprising a plurality of compute nodes, including: identifying, by a parallel computer control system, a subset of compute nodes in the parallel computer to execute a job; selecting, by the parallel computer control system, one of the subset of compute nodes in the parallel computer as a job leader compute node; retrieving, by the job leader compute node from computer memory, an application for executing the job; and broadcasting, by the job leader to the subset of compute nodes in the parallel computer, the application for executing the job.

    10. LHC Computing

      SciTech Connect (OSTI)

      Lincoln, Don

      2015-07-28

      The LHC is the world’s highest energy particle accelerator and scientists use it to record an unprecedented amount of data. This data is recorded in electronic format and it requires an enormous computational infrastructure to convert the raw data into conclusions about the fundamental rules that govern matter. In this video, Fermilab’s Dr. Don Lincoln gives us a sense of just how much data is involved and the incredible computer resources that makes it all possible.

    11. High Throughput Computing Impact on Meta Genomics (Metagenomics Informatics Challenges Workshop: 10K Genomes at a Time)

      ScienceCinema (OSTI)

      Gore, Brooklin [Morgridge Institute for Research]

      2013-01-22

      This presentation includes a brief background on High Throughput Computing, correlating gene transcription factors, optical mapping, genotype to phenotype mapping via QTL analysis, and current work on next gen sequencing.

    12. High Throughput Computing Impact on Meta Genomics (Metagenomics Informatics Challenges Workshop: 10K Genomes at a Time)

      SciTech Connect (OSTI)

      Gore, Brooklin [Morgridge Institute for Research]

      2011-10-12

      This presentation includes a brief background on High Throughput Computing, correlating gene transcription factors, optical mapping, genotype to phenotype mapping via QTL analysis, and current work on next gen sequencing.

    13. Dedicated heterogeneous node scheduling including backfill scheduling

      DOE Patents [OSTI]

      Wood, Robert R. (Livermore, CA); Eckert, Philip D. (Livermore, CA); Hommes, Gregg (Pleasanton, CA)

      2006-07-25

      A method and system for job backfill scheduling of dedicated heterogeneous nodes in a multi-node computing environment. Heterogeneous nodes are grouped into homogeneous node sub-pools. For each sub-pool, a free node schedule (FNS) is created to chart the free nodes over time. For each prioritized job, the FNSs of sub-pools having nodes usable by that job are used to determine the earliest time range (ETR) capable of running the job. Once the ETR is determined for a particular job, the job is scheduled to run in that ETR. If the ETR determined for a lower priority job (LPJ) has a start time earlier than that of a higher priority job (HPJ), the LPJ is scheduled in that ETR only if it would not disturb the anticipated start times of any HPJ previously scheduled for a future time. Thus, efficient utilization and throughput of such computing environments may be increased by utilizing resources that would otherwise remain idle.
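
      The core of the backfill rule is a feasibility test: a lower priority job may start early only if it cannot disturb any higher priority job's anticipated start time. A simplified single-sub-pool sketch of that test follows, with invented names and a constant free-node count standing in for the full free node schedule:

```python
# Illustrative backfill check in the spirit of the abstract: a lower-priority
# job (LPJ) may start now only if it either finishes before each reserved
# higher-priority job (HPJ) starts, or fits alongside the reservation.

def can_backfill(lpj_nodes, lpj_runtime, now, free_nodes, reservations):
    """reservations: list of (start_time, nodes_reserved) for scheduled HPJs."""
    if lpj_nodes > free_nodes:
        return False
    end = now + lpj_runtime
    # The LPJ must not disturb any anticipated HPJ start time.
    return all(end <= start or lpj_nodes + reserved <= free_nodes
               for start, reserved in reservations)

print(can_backfill(2, 10, 0, 4, [(5, 3)]))  # False: would collide at t=5
print(can_backfill(2, 4, 0, 4, [(5, 3)]))   # True: finishes before t=5
```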

    14. Proposal for grid computing for nuclear applications

      SciTech Connect (OSTI)

      Idris, Faridah Mohamad; Ismail, Saaidi; Haris, Mohd Fauzi B.; Sulaiman, Mohamad Safuan B.; Aslan, Mohd Dzul Aiman Bin.; Samsudin, Nursuliza Bt.; Ibrahim, Maizura Bt.; Ahmad, Megat Harun Al Rashid B. Megat; Yazid, Hafizal B.; Jamro, Rafhayudi B.; Azman, Azraf B.; Rahman, Anwar B. Abdul; Ibrahim, Mohd Rizal B. Mamat; Muhamad, Shalina Bt. Sheik; Hassan, Hasni; Abdullah, Wan Ahmad Tajuddin Wan; Ibrahim, Zainol Abidin; Zolkapli, Zukhaimira; Anuar, Afiq Aizuddin; Norjoharuddeen, Nurfikri; and others

      2014-02-12

      The use of computer clusters for computational sciences, including computational physics, is vital as it provides computing power to crunch big numbers at a faster rate. In compute-intensive applications that require high resolution, such as Monte Carlo simulation, the use of computer clusters in a grid form, which supplies computational power to any node within the grid that needs it, has now become a necessity. In this paper, we describe how clusters running a specific application can use resources within the grid to speed up the computing process.

    15. Computational analysis of kidney scintigrams

      SciTech Connect (OSTI)

      Vrincianu, D.; Puscasu, E.; Creanga, D.; Stefanescu, C.

      2013-11-13

      The scintigraphic investigation of normal and pathological kidneys was carried out using a specialized gamma-camera device in a hospital nuclear medicine department. A technetium-99m isotope with gamma radiation emission, coupled with vector molecules for kidney tissue, was introduced into the subject's body, its dynamics being recorded as the data source for kidney clearance capacity. Two representative data series were investigated, corresponding to healthy and pathological organs respectively. The semi-quantitative tests applied to compare the two distinct medical situations were: the shape of the probability distribution histogram, the power spectrum, the auto-correlation function, and the Lyapunov exponent. While the power spectrum led to similar results in both cases, significant differences were revealed by the probability distribution, Lyapunov exponent, and correlation time, recommending these numerical tests as possible complementary tools in clinical diagnosis.

    16. Computational mechanics

      SciTech Connect (OSTI)

      Goudreau, G.L.

      1993-03-01

      The Computational Mechanics thrust area sponsors research into the underlying solid, structural and fluid mechanics and heat transfer necessary for the development of state-of-the-art general purpose computational software. The scale of computational capability spans office workstations, departmental computer servers, and Cray-class supercomputers. The DYNA, NIKE, and TOPAZ codes have achieved world fame through our broad collaborators program, in addition to their strong support of on-going Lawrence Livermore National Laboratory (LLNL) programs. Several technology transfer initiatives have been based on these established codes, teaming LLNL analysts and researchers with counterparts in industry, extending code capability to specific industrial interests of casting, metalforming, and automobile crash dynamics. The next-generation solid/structural mechanics code, ParaDyn, is targeted toward massively parallel computers, which will extend performance from gigaflop to teraflop power. Our work for FY-92 is described in the following eight articles: (1) Solution Strategies: New Approaches for Strongly Nonlinear Quasistatic Problems Using DYNA3D; (2) Enhanced Enforcement of Mechanical Contact: The Method of Augmented Lagrangians; (3) ParaDyn: New Generation Solid/Structural Mechanics Codes for Massively Parallel Processors; (4) Composite Damage Modeling; (5) HYDRA: A Parallel/Vector Flow Solver for Three-Dimensional, Transient, Incompressible Viscous Flow; (6) Development and Testing of the TRIM3D Radiation Heat Transfer Code; (7) A Methodology for Calculating the Seismic Response of Critical Structures; and (8) Reinforced Concrete Damage Modeling.

    17. Cloud computing security.

      SciTech Connect (OSTI)

      Shin, Dongwan; Claycomb, William R.; Urias, Vincent E.

      2010-10-01

      Cloud computing is a paradigm rapidly being embraced by government and industry as a solution for cost-savings, scalability, and collaboration. While a multitude of applications and services are available commercially for cloud-based solutions, research in this area has yet to address the full spectrum of potential challenges facing cloud computing. This tutorial aims to provide researchers with a fundamental understanding of cloud computing, with the goals of identifying a broad range of potential research topics, and inspiring a new surge in research to address current issues. We will also discuss real implementations of research-oriented cloud computing systems for both academia and government, including configuration options, hardware issues, challenges, and solutions.

    18. Compute Nodes

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Compute Node Configuration: 6,384 nodes; 2 twelve-core AMD 'MagnyCours' 2.1-GHz processors per node; 24 cores per node (153,216 total cores); 32 GB DDR3 1333-MHz memory per node (6,000 nodes); 64 GB DDR3 1333-MHz memory per node (384 nodes). Peak Gflop/s rate: 8.4 Gflops/core, 201.6 Gflops/node, 1.28 Petaflops for the entire machine. Each core has its own L1 and L2 caches, of 64 KB and 512 KB respectively. One 6-MB

    19. INSTRUMENTATION, INCLUDING NUCLEAR AND PARTICLE DETECTORS; RADIATION

      Office of Scientific and Technical Information (OSTI)

      interval technical basis document Chiaro, P.J. Jr. 44 INSTRUMENTATION, INCLUDING NUCLEAR AND PARTICLE DETECTORS; RADIATION DETECTORS; RADIATION MONITORS; DOSEMETERS;...

    20. Annual Technology Baseline (Including Supporting Data); NREL...

      Office of Scientific and Technical Information (OSTI)

      Annual Technology Baseline (Including Supporting Data); NREL (National Renewable Energy Laboratory) Citation Details In-Document Search Title: Annual Technology Baseline ...

    1. Text analysis methods, text analysis apparatuses, and articles of manufacture

      DOE Patents [OSTI]

      Whitney, Paul D; Willse, Alan R; Lopresti, Charles A; White, Amanda M

      2014-10-28

      Text analysis methods, text analysis apparatuses, and articles of manufacture are described according to some aspects. In one aspect, a text analysis method includes accessing information indicative of data content of a collection of text comprising a plurality of different topics, using a computing device, analyzing the information indicative of the data content, and using results of the analysis, identifying a presence of a new topic in the collection of text.
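
      One way to read the claim is as novelty detection against known topics. The sketch below flags a new topic when a document is dissimilar from every known topic centroid; the token-count vectorization, cosine similarity, and threshold are assumptions, not details from the patent.

```python
# Hedged sketch of new-topic detection: a document far from every known
# topic centroid is flagged as a new topic.
from collections import Counter
import math

def cosine(a, b):
    # Cosine similarity between two sparse token-count vectors.
    num = sum(a[k] * b.get(k, 0) for k in a)
    den = (math.sqrt(sum(v * v for v in a.values()))
           * math.sqrt(sum(v * v for v in b.values())))
    return num / den if den else 0.0

def is_new_topic(doc_tokens, topic_centroids, threshold=0.2):
    vec = Counter(doc_tokens)
    return all(cosine(vec, c) < threshold for c in topic_centroids)

topics = [Counter("solar cell efficiency".split())]
print(is_new_topic("quantum error correction".split(), topics))  # True
```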

    2. User manual for AQUASTOR: a computer model for cost analysis of aquifer thermal energy storage coupled with district heating or cooling systems. Volume I. Main text

      SciTech Connect (OSTI)

      Huber, H.D.; Brown, D.R.; Reilly, R.W.

      1982-04-01

      A computer model called AQUASTOR was developed for calculating the cost of district heating (cooling) using thermal energy supplied by an aquifer thermal energy storage (ATES) system. The AQUASTOR model can simulate ATES district heating systems using stored hot water or ATES district cooling systems using stored chilled water. AQUASTOR simulates the complete ATES district heating (cooling) system, which consists of two principal parts: the ATES supply system and the district heating (cooling) distribution system. The supply system submodel calculates the life-cycle cost of thermal energy supplied to the distribution system by simulating the technical design and cash flows for the exploration, development, and operation of the ATES supply system. The distribution system submodel calculates the life-cycle cost of heat (chill) delivered by the distribution system to the end-users by simulating the technical design and cash flows for the construction and operation of the distribution system. The model combines the technical characteristics of the supply system and the technical characteristics of the distribution system with financial and tax conditions for the entities operating the two systems into one techno-economic model. This provides the flexibility to individually or collectively evaluate the impact of different economic and technical parameters, assumptions, and uncertainties on the cost of providing district heating (cooling) with an ATES system. This volume contains the main text, including the introduction, program description, input data instructions, a description of the output, and Appendix H, which contains the indices for supply input parameters, distribution input parameters, and AQUASTOR subroutines.
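
      The two-submodel structure can be sketched as a pair of levelized-cost calculations whose results combine into a delivered cost. The formulas below are generic placeholders illustrating that structure, not AQUASTOR's actual cost equations:

```python
# Sketch of the supply/distribution submodel structure: each submodel yields
# a levelized cost per MWh, and the combined figure is the delivered cost.
# All formulas and parameter values are placeholders, not from AQUASTOR.

def levelized_cost(capital, annual_om, annual_mwh, lifetime_yr, rate):
    # Annualized capital (capital recovery factor) plus O&M, per MWh.
    crf = rate * (1 + rate) ** lifetime_yr / ((1 + rate) ** lifetime_yr - 1)
    return (capital * crf + annual_om) / annual_mwh

def delivered_cost(supply_capital, supply_om, dist_capital, dist_om,
                   annual_mwh, lifetime_yr, rate, losses=0.05):
    supply = levelized_cost(supply_capital, supply_om, annual_mwh, lifetime_yr, rate)
    dist = levelized_cost(dist_capital, dist_om, annual_mwh, lifetime_yr, rate)
    return (supply + dist) / (1 - losses)   # gross up for distribution losses

print(round(delivered_cost(2e6, 5e4, 1e6, 2e4, 2e4, 30, 0.07), 2))
```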

    3. Computers as tools

      SciTech Connect (OSTI)

      Eriksson, I.V.

      1994-12-31

      The following message was recently posted on a bulletin board and clearly shows the relevance of the conference theme: "The computer and digital networks seem poised to change whole regions of human activity -- how we record knowledge, communicate, learn, work, understand ourselves and the world. What's the best framework for understanding this digitalization, or virtualization, of seemingly everything? ... Clearly, symbolic tools like the alphabet, book, and mechanical clock have changed some of our most fundamental notions -- self, identity, mind, nature, time, space. Can we say what the computer, a purely symbolic "machine," is doing to our thinking in these areas? Or is it too early to say, given how much more powerful and less expensive the technology seems destined to become in the next few decades?" (Verity, 1994) Computers certainly affect our lives and way of thinking, but what have computers to do with ethics? A narrow approach would be that on the one hand people can and do abuse computer systems, and on the other hand people can be abused by them. Well-known examples of the former are computer crimes such as the theft of money, services and information. The latter can be exemplified by violation of privacy, health hazards and computer monitoring. Broadening the concept from computers to information systems (ISs) and information technology (IT) gives a wider perspective. Computers are just the hardware part of information systems, which also include software, people and data. Information technology is the concept preferred today. It extends to communication, which is an essential part of information processing. Now let us repeat the question: What has IT to do with ethics? Verity mentioned changes in "how we record knowledge, communicate, learn, work, understand ourselves and the world".

    4. Advanced Scientific Computing Research

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Advanced Scientific Computing Research Advanced Scientific Computing Research Discovering, ... The DOE Office of Science's Advanced Scientific Computing Research (ASCR) program ...

    5. Internode data communications in a parallel computer

      DOE Patents [OSTI]

      Archer, Charles J; Blocksome, Michael A; Miller, Douglas R; Parker, Jeffrey J; Ratterman, Joseph D; Smith, Brian E

      2014-02-11

      Internode data communications in a parallel computer that includes compute nodes that each include main memory and a messaging unit, the messaging unit including computer memory and coupling compute nodes for data communications, in which, for each compute node at compute node boot time: a messaging unit allocates, in the messaging unit's computer memory, a predefined number of message buffers, each message buffer associated with a process to be initialized on the compute node; receives, prior to initialization of a particular process on the compute node, a data communications message intended for the particular process; and stores the data communications message in the message buffer associated with the particular process. Upon initialization of the particular process, the process establishes a messaging buffer in main memory of the compute node and copies the data communications message from the message buffer of the messaging unit into the message buffer of main memory.
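
      The protocol is essentially mailbox pre-allocation: buffers exist before the processes they serve, early messages are parked in them, and initialization drains the mailbox into main memory. A minimal sketch with illustrative names:

```python
# Sketch of the boot-time buffer protocol in the abstract: the messaging unit
# pre-allocates one buffer per expected process, parks messages that arrive
# early, and hands them over when the process initializes. Names illustrative.

class MessagingUnit:
    def __init__(self, expected_processes):
        # Allocate a message buffer per process at compute-node boot time.
        self.buffers = {pid: [] for pid in expected_processes}

    def receive(self, pid, message):
        # Store messages that arrive before the target process exists.
        self.buffers[pid].append(message)

class Process:
    def __init__(self, pid, mu):
        # On initialization, copy parked messages into a main-memory buffer.
        self.main_memory_buffer = list(mu.buffers.pop(pid))

mu = MessagingUnit(expected_processes=[0, 1])
mu.receive(0, "early-message")     # arrives before process 0 starts
p0 = Process(0, mu)
print(p0.main_memory_buffer)       # ['early-message']
```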

    6. Internode data communications in a parallel computer

      DOE Patents [OSTI]

      Archer, Charles J.; Blocksome, Michael A.; Miller, Douglas R.; Parker, Jeffrey J.; Ratterman, Joseph D.; Smith, Brian E.

      2013-09-03

      Internode data communications in a parallel computer that includes compute nodes that each include main memory and a messaging unit, the messaging unit including computer memory and coupling compute nodes for data communications, in which, for each compute node at compute node boot time: a messaging unit allocates, in the messaging unit's computer memory, a predefined number of message buffers, each message buffer associated with a process to be initialized on the compute node; receives, prior to initialization of a particular process on the compute node, a data communications message intended for the particular process; and stores the data communications message in the message buffer associated with the particular process. Upon initialization of the particular process, the process establishes a messaging buffer in main memory of the compute node and copies the data communications message from the message buffer of the messaging unit into the message buffer of main memory.

    7. Gas storage materials, including hydrogen storage materials

      DOE Patents [OSTI]

      Mohtadi, Rana F; Wicks, George G; Heung, Leung K; Nakamura, Kenji

      2013-02-19

      A material for the storage and release of gases comprises a plurality of hollow elements, each hollow element comprising a porous wall enclosing an interior cavity, the interior cavity including structures of a solid-state storage material. In particular examples, the storage material is a hydrogen storage material such as a solid state hydride. An improved method for forming such materials includes the solution diffusion of a storage material solution through a porous wall of a hollow element into an interior cavity.

    8. Gas storage materials, including hydrogen storage materials

      DOE Patents [OSTI]

      Mohtadi, Rana F; Wicks, George G; Heung, Leung K; Nakamura, Kenji

      2014-11-25

      A material for the storage and release of gases comprises a plurality of hollow elements, each hollow element comprising a porous wall enclosing an interior cavity, the interior cavity including structures of a solid-state storage material. In particular examples, the storage material is a hydrogen storage material, such as a solid state hydride. An improved method for forming such materials includes the solution diffusion of a storage material solution through a porous wall of a hollow element into an interior cavity.

    9. Communications circuit including a linear quadratic estimator

      DOE Patents [OSTI]

      Ferguson, Dennis D.

      2015-07-07

      A circuit includes a linear quadratic estimator (LQE) configured to receive a plurality of measurements of a signal. The LQE is configured to weight the measurements based on their respective uncertainties to produce weighted averages. The circuit further includes a controller coupled to the LQE and configured to selectively adjust at least one data link parameter associated with a communication channel in response to receiving the weighted averages.
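
      Weighting measurements by their respective uncertainties is standard inverse-variance weighting, which the sketch below illustrates; it is offered as a plausible reading of the abstract, not the patented circuit itself.

```python
# Minimal sketch of the estimator behavior described: measurements are
# weighted by the inverse of their variances, so noisier readings
# contribute less to the estimate.

def weighted_estimate(measurements, variances):
    weights = [1.0 / v for v in variances]
    total = sum(weights)
    return sum(w * m for w, m in zip(weights, measurements)) / total

# Two link-quality readings: the second is four times noisier.
print(weighted_estimate([10.0, 14.0], [1.0, 4.0]))  # 10.8, closer to 10.0
```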

    10. Intentionally Including - Engaging Minorities in Physics Careers |

      Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

      Department of Energy. Intentionally Including - Engaging Minorities in Physics Careers. April 24, 2013 - 4:37pm. Joining Director Dot Harris (second from left) were Marlene Kaplan, the Deputy Director of Education and director of EPP, National Oceanic and Atmospheric Administration; Claudia Rankins, a Program Officer with the National Science Foundation; and Jim Stith, the past Vice-President of the American Institute of

    11. Computing at JLab

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      JLab --- Accelerator Controls, CAD, CDEV, CODA, Computer Center, High Performance Computing, Scientific Computing, JLab Computer Silo

    12. Fermilab | Science at Fermilab | Computing | Grid Computing

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Grid Computing Center interior. As high-energy physics experiments grow larger in scope, they require more computing power to process and analyze data. Laboratories purchase rooms full of computer nodes for experiments to use. But many experiments need even more capacity during peak periods. And some experiments do not need to use all of their computing power all of the time. In the early 2000s, members of Fermilab's Computing Division

    13. RATIO COMPUTER

      DOE Patents [OSTI]

      Post, R.F.

      1958-11-11

      An electronic computer circuit is described for producing an output voltage proportional to the product or quotient of the voltages of a pair of input signals. In essence, the disclosed invention provides a computer having two channels adapted to receive separate input signals, each having amplifiers with like fixed amplification factors and like negative feedback amplifiers. One of the channels receives a constant signal for comparison purposes, whereby a difference signal is produced to control the amplification factors of the variable feedback amplifiers. The output of the other channel is thereby proportional to the product or quotient of the input signals, depending upon the relation of input to fixed signals in the first-mentioned channel.

    14. Computer System,

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      System, Cluster, and Networking Summer Institute. New Mexico Consortium and Los Alamos National Laboratory. HOW TO APPLY: Applications will be accepted JANUARY 5 - FEBRUARY 13, 2016. Computing and Information Technology undergraduate students are encouraged to apply. Must be a U.S. citizen. * Submit a current resume; * Official University Transcript (with spring courses posted and/or a copy of spring 2016 schedule), 3.0 GPA minimum; * One Letter of Recommendation from a Faculty Member; and * Letter of

    15. Scramjet including integrated inlet and combustor

      SciTech Connect (OSTI)

      Kutschenreuter, P.H. Jr.; Blanton, J.C.

      1992-02-04

      This patent describes a scramjet engine. It comprises: a first surface including an aft-facing step; a cowl including: a leading edge and a trailing edge; an upper surface and a lower surface extending between the leading edge and the trailing edge; the cowl upper surface being spaced from and generally parallel to the first surface to define an integrated inlet-combustor therebetween having an inlet for receiving and channeling into the inlet-combustor supersonic inlet airflow; means for injecting fuel into the inlet-combustor at the step for mixing with the supersonic inlet airflow for generating supersonic combustion gases; and further including a spaced pair of sidewalls extending from the first surface to the cowl upper surface, wherein the integrated inlet-combustor is generally rectangular and defined by the sidewall pair, the first surface and the cowl upper surface.

    16. User manual for AQUASTOR: a computer model for cost analysis of aquifer thermal-energy storage coupled with district-heating or cooling systems. Volume II. Appendices

      SciTech Connect (OSTI)

      Huber, H.D.; Brown, D.R.; Reilly, R.W.

      1982-04-01

      A computer model called AQUASTOR was developed for calculating the cost of district heating (cooling) using thermal energy supplied by an aquifer thermal energy storage (ATES) system. The AQUASTOR model can simulate ATES district heating systems using stored hot water or ATES district cooling systems using stored chilled water. AQUASTOR simulates the complete ATES district heating (cooling) system, which consists of two principal parts: the ATES supply system and the district heating (cooling) distribution system. The supply system submodel calculates the life-cycle cost of thermal energy supplied to the distribution system by simulating the technical design and cash flows for the exploration, development, and operation of the ATES supply system. The distribution system submodel calculates the life-cycle cost of heat (chill) delivered by the distribution system to the end-users by simulating the technical design and cash flows for the construction and operation of the distribution system. The model combines the technical characteristics of the supply system and the technical characteristics of the distribution system with financial and tax conditions for the entities operating the two systems into one techno-economic model. This provides the flexibility to individually or collectively evaluate the impact of different economic and technical parameters, assumptions, and uncertainties on the cost of providing district heating (cooling) with an ATES system. This volume contains all the appendices, including supply and distribution system cost equations and models, descriptions of predefined residential districts, key equations for the cooling degree-hour methodology, a listing of the sample case output, and Appendix H, which contains the indices for supply input parameters, distribution input parameters, and AQUASTOR subroutines.

    17. Electric Power Monthly, August 1990. [Glossary included]

      SciTech Connect (OSTI)

      Not Available

      1990-11-29

      The Electric Power Monthly (EPM) presents monthly summaries of electric utility statistics at the national, Census division, and State level. The purpose of this publication is to provide energy decisionmakers with accurate and timely information that may be used in forming various perspectives on electric issues that lie ahead. Data includes generation by energy source (coal, oil, gas, hydroelectric, and nuclear); generation by region; consumption of fossil fuels for power generation; sales of electric power, cost data; and unusual occurrences. A glossary is included.

    18. Argonne's Laboratory Computing Resource Center: 2005 annual report.

      SciTech Connect (OSTI)

      Bair, R. B.; Coghlan, S. C.; Kaushik, D. K.; Riley, K. R.; Valdes, J. V.; Pieper, G. P.

      2007-06-30

      Argonne National Laboratory founded the Laboratory Computing Resource Center in the spring of 2002 to help meet pressing program needs for computational modeling, simulation, and analysis. The guiding mission is to provide critical computing resources that accelerate the development of high-performance computing expertise, applications, and computations to meet the Laboratory's challenging science and engineering missions. The first goal of the LCRC was to deploy a mid-range supercomputing facility to support the unmet computational needs of the Laboratory. To this end, in September 2002, the Laboratory purchased a 350-node computing cluster from Linux NetworX. This cluster, named 'Jazz', achieved over a teraflop of computing power (10{sup 12} floating-point calculations per second) on standard tests, making it the Laboratory's first terascale computing system and one of the fifty fastest computers in the world at the time. Jazz was made available to early users in November 2002 while the system was undergoing development and configuration. In April 2003, Jazz was officially made available for production operation. Since then, the Jazz user community has grown steadily. By the end of fiscal year 2005, there were 62 active projects on Jazz involving over 320 scientists and engineers. These projects represent a wide cross-section of Laboratory expertise, including work in biosciences, chemistry, climate, computer science, engineering applications, environmental science, geoscience, information science, materials science, mathematics, nanoscience, nuclear engineering, and physics. Most important, many projects have achieved results that would have been unobtainable without such a computing resource. The LCRC continues to improve the computational science and engineering capability and quality at the Laboratory. Specific goals include expansion of the use of Jazz to new disciplines and Laboratory initiatives, teaming with Laboratory infrastructure providers to develop comprehensive scientific data management capabilities, expanding Argonne staff use of national computing facilities, and improving the scientific reach and performance of Argonne's computational applications. Furthermore, recognizing that Jazz is fully subscribed, with considerable unmet demand, the LCRC has begun developing a 'path forward' plan for additional computing resources.

    19. NERSC Enhances PDSF, Genepool Computing Capabilities

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      NERSC Enhances PDSF, Genepool Computing Capabilities. Linux cluster expansion speeds data access and analysis. January 3, 2014. Christmas came early for users of the Parallel Distributed Systems Facility (PDSF) and Genepool systems at the Department of Energy's National Energy Research Scientific Computing Center (NERSC). Throughout November, members of NERSC's Computational Systems Group were busy expanding the Linux computing resources that support PDSF's

    20. Extreme Scale Computing, Co-design

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Information Science, Computing, Applied Math » Extreme Scale Computing, Co-design. Computational co-design may facilitate revolutionary designs in the next generation of supercomputers. Computational co-design involves developing the interacting components of a

    1. Geant4 Computing Performance Benchmarking and Monitoring

      DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

      Dotti, Andrea; Elvira, V. Daniel; Folger, Gunter; Genser, Krzysztof; Jun, Soon Yung; Kowalkowski, James B.; Paterno, Marc

      2015-12-23

      Performance evaluation and analysis of large scale computing applications is essential for optimal use of resources. As detector simulation is one of the most compute intensive tasks and Geant4 is the simulation toolkit most widely used in contemporary high energy physics (HEP) experiments, it is important to monitor Geant4 through its development cycle for changes in computing performance and to identify problems and opportunities for code improvements. All Geant4 development and public releases are being profiled with a set of applications that utilize different input event samples, physics parameters, and detector configurations. Results from multiple benchmarking runs are compared to previous public and development reference releases to monitor CPU and memory usage. Observed changes are evaluated and correlated with code modifications. Besides the full summary of call stack and memory footprint, a detailed call graph analysis is available to Geant4 developers for further analysis. The set of software tools used in the performance evaluation procedure, both in sequential and multi-threaded modes, include FAST, IgProf and Open|Speedshop. In conclusion, the scalability of the CPU time and memory performance in multi-threaded application is evaluated by measuring event throughput and memory gain as a function of the number of threads for selected event samples.

    2. Method and system for knowledge discovery using non-linear statistical analysis and a 1st and 2nd tier computer program

      DOE Patents [OSTI]

      Hively, Lee M.

      2011-07-12

      The invention relates to a method and apparatus for simultaneously processing different sources of test data into informational data and then processing different categories of informational data into knowledge-based data. The knowledge-based data can then be communicated between nodes in a system of multiple computers according to rules for a type of complex, hierarchical computer system modeled on a human brain.

    3. Link failure detection in a parallel computer

      DOE Patents [OSTI]

      Archer, Charles J.; Blocksome, Michael A.; Megerian, Mark G.; Smith, Brian E.

      2010-11-09

      Methods, apparatus, and products are disclosed for link failure detection in a parallel computer including compute nodes connected in a rectangular mesh network, each pair of adjacent compute nodes in the rectangular mesh network connected together using a pair of links, that includes: assigning each compute node to either a first group or a second group such that adjacent compute nodes in the rectangular mesh network are assigned to different groups; sending, by each of the compute nodes assigned to the first group, a first test message to each adjacent compute node assigned to the second group; determining, by each of the compute nodes assigned to the second group, whether the first test message was received from each adjacent compute node assigned to the first group; and notifying a user, by each of the compute nodes assigned to the second group, whether the first test message was received.
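
      The group assignment is a checkerboard two-coloring of the mesh, which guarantees every link joins a sender to a receiver. A sketch of the detection pass follows, with failed links modeled as a set for illustration:

```python
# Sketch of the two-group scheme in the abstract: nodes in a rectangular mesh
# are checkerboard-colored; "first group" nodes send a test message across
# each link, and links whose message never arrives are reported.

def detect_failed_links(width, height, failed_links):
    """failed_links: set of frozenset({node_a, node_b}) pairs that drop messages."""
    reported = []
    for x in range(width):
        for y in range(height):
            if (x + y) % 2 == 0:                      # first group sends
                for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                    if 0 <= nx < width and 0 <= ny < height:
                        link = frozenset({(x, y), (nx, ny)})
                        if link in failed_links:      # receiver got no test message
                            reported.append(link)
    return reported

print(detect_failed_links(2, 2, {frozenset({(0, 0), (1, 0)})}))
```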

    4. Broadcasting a message in a parallel computer

      DOE Patents [OSTI]

      Berg, Jeremy E.; Faraj, Ahmad A.

      2011-08-02

      Methods, systems, and products are disclosed for broadcasting a message in a parallel computer. The parallel computer includes a plurality of compute nodes connected together using a data communications network. The data communications network is optimized for point-to-point data communications and is characterized by at least two dimensions. The compute nodes are organized into at least one operational group of compute nodes for collective parallel operations of the parallel computer. One compute node of the operational group is assigned to be a logical root. Broadcasting a message in a parallel computer includes: establishing a Hamiltonian path along all of the compute nodes in at least one plane of the data communications network and in the operational group; and broadcasting, by the logical root to the remaining compute nodes, the logical root's message along the established Hamiltonian path.
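
      For a 2D grid, one obvious Hamiltonian path is the serpentine (boustrophedon) walk; the sketch below uses it to forward a root's message hop by hop so every node receives the payload exactly once. The path construction is an assumption for illustration, not the patent's method.

```python
# Sketch of the broadcast idea: build a Hamiltonian (serpentine) path through
# a 2D grid and forward the root's message along it, one hop per link.

def hamiltonian_path(width, height):
    path = []
    for y in range(height):
        row = range(width) if y % 2 == 0 else reversed(range(width))
        path.extend((x, y) for x in row)
    return path

def broadcast(width, height, message):
    received = {}
    for node in hamiltonian_path(width, height):  # the root is the first node
        received[node] = message                  # forward along the path
    return received

print(len(broadcast(3, 2, "payload")))  # all 6 nodes received it
```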

    5. Subterranean barriers including at least one weld

      DOE Patents [OSTI]

      Nickelson, Reva A.; Sloan, Paul A.; Richardson, John G.; Walsh, Stephanie; Kostelnik, Kevin M.

      2007-01-09

      A subterranean barrier and method for forming same are disclosed, the barrier including a plurality of casing strings wherein at least one casing string of the plurality of casing strings may be affixed to at least another adjacent casing string of the plurality of casing strings through at least one weld, at least one adhesive joint, or both. A method and system for nondestructively inspecting a subterranean barrier is disclosed. For instance, a radiographic signal may be emitted from within a casing string toward an adjacent casing string and the radiographic signal may be detected from within the adjacent casing string. A method of repairing a barrier including removing at least a portion of a casing string and welding a repair element within the casing string is disclosed. A method of selectively heating at least one casing string forming at least a portion of a subterranean barrier is disclosed.

    6. Photoactive devices including porphyrinoids with coordinating additives

      DOE Patents [OSTI]

      Forrest, Stephen R; Zimmerman, Jeramy; Yu, Eric K; Thompson, Mark E; Trinh, Cong; Whited, Matthew; Diev, Vlacheslav

      2015-05-12

      Coordinating additives are included in porphyrinoid-based materials to promote intermolecular organization and improve one or more photoelectric characteristics of the materials. The coordinating additives are selected from fullerene compounds and organic compounds having free electron pairs. Combinations of different coordinating additives can be used to tailor the characteristic properties of such porphyrinoid-based materials, including porphyrin oligomers. Bidentate ligands are one type of coordinating additive that can form coordination bonds with a central metal ion of two different porphyrinoid compounds to promote porphyrinoid alignment and/or pi-stacking. The coordinating additives can shift the absorption spectrum of a photoactive material toward higher wavelengths, increase the external quantum efficiency of the material, or both.

    7. Power generation method including membrane separation

      DOE Patents [OSTI]

      Lokhandwala, Kaaeid A.

      2000-01-01

      A method for generating electric power, such as at, or close to, natural gas fields. The method includes conditioning natural gas containing C.sub.3+ hydrocarbons and/or acid gas by means of a membrane separation step. This step creates a leaner, sweeter, drier gas, which is then used as combustion fuel to run a turbine, which is in turn used for power generation.

    8. Nuclear reactor shield including magnesium oxide

      DOE Patents [OSTI]

      Rouse, Carl A.; Simnad, Massoud T.

      1981-01-01

      An improvement in nuclear reactor shielding of a type used in reactor applications involving significant amounts of fast neutron flux, the reactor shielding including means providing structural support, neutron moderator material, neutron absorber material and other components as described below, wherein at least a portion of the neutron moderator material is magnesium in the form of magnesium oxide either alone or in combination with other moderator materials such as graphite and iron.

    9. Electric power monthly, September 1990. [Glossary included]

      SciTech Connect (OSTI)

      Not Available

      1990-12-17

      The purpose of this report is to provide energy decision makers with accurate and timely information that may be used in forming various perspectives on electric issues. The power plants considered include coal, petroleum, natural gas, hydroelectric, and nuclear power plants. Data are presented for power generation, fuel consumption, fuel receipts and cost, sales of electricity, and unusual occurrences at power plants. Data are compared at the national, Census division, and state levels. 4 figs., 52 tabs. (CK)

    10. Rotor assembly including superconducting magnetic coil

      DOE Patents [OSTI]

      Snitchler, Gregory L. (Shrewsbury, MA); Gamble, Bruce B. (Wellesley, MA); Voccio, John P. (Somerville, MA)

      2003-01-01

      Superconducting coils and methods of manufacture include a superconductor tape wound concentrically about and disposed along an axis of the coil to define an opening having a dimension which gradually decreases, in the direction along the axis, from a first end to a second end of the coil. Each turn of the superconductor tape has a broad surface maintained substantially parallel to the axis of the coil.

    11. Performing an allreduce operation on a plurality of compute nodes of a parallel computer

      DOE Patents [OSTI]

      Faraj, Ahmad

      2012-04-17

      Methods, apparatus, and products are disclosed for performing an allreduce operation on a plurality of compute nodes of a parallel computer. Each compute node includes at least two processing cores. Each processing core has contribution data for the allreduce operation. Performing an allreduce operation on a plurality of compute nodes of a parallel computer includes: establishing one or more logical rings among the compute nodes, each logical ring including at least one processing core from each compute node; performing, for each logical ring, a global allreduce operation using the contribution data for the processing cores included in that logical ring, yielding a global allreduce result for each processing core included in that logical ring; and performing, for each compute node, a local allreduce operation using the global allreduce results for each processing core on that compute node.
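
      The sketch below illustrates the two phases with summation as the reduction: ring r holds core r of every node, each ring reduces across nodes, then each node combines its ring results locally. The data layout and names are invented for illustration.

```python
# Sketch of the two-phase allreduce in the abstract: a global reduction per
# logical ring (one core per node per ring), then a local combine per node.

def allreduce(contrib):
    """contrib[node][core] -> value; returns the same shape, all-summed."""
    n_nodes, n_cores = len(contrib), len(contrib[0])
    # Phase 1: ring r contains core r of every node; reduce across the ring.
    ring_totals = [sum(contrib[n][r] for n in range(n_nodes))
                   for r in range(n_cores)]
    # Phase 2: each node locally combines all ring results.
    grand_total = sum(ring_totals)
    return [[grand_total] * n_cores for _ in range(n_nodes)]

print(allreduce([[1, 2], [3, 4]]))  # every core ends with 10
```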

    12. computer graphics

      Energy Science and Technology Software Center (OSTI)

      2001-06-08

      MUSTAFA is a scientific visualization package for visualizing data in the EXODUSII file format. These data files are typically produced from Sandia's suite of finite element engineering analysis codes.

    13. Controlling data transfers from an origin compute node to a target compute node

      DOE Patents [OSTI]

      Archer, Charles J.; Blocksome, Michael A.; Ratterman, Joseph D.; Smith, Brian E.

      2011-06-21

      Methods, apparatus, and products are disclosed for controlling data transfers from an origin compute node to a target compute node that include: receiving, by an application messaging module on the target compute node, an indication of a data transfer from an origin compute node to the target compute node; and administering, by the application messaging module on the target compute node, the data transfer using one or more messaging primitives of a system messaging module in dependence upon the indication.
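
      The layering described, an application messaging module administering a transfer via system-level primitives, can be sketched as a dispatch on message size. The primitive names (eager vs. rendezvous) are common messaging idioms assumed here, not terms from the patent.

```python
# Sketch of the layering in the abstract: the target-side application
# messaging module learns of an incoming transfer and administers it by
# choosing among system messaging primitives. Names are illustrative.

class SystemMessaging:
    def rendezvous_recv(self, size):   # primitive suited to large transfers
        return f"rendezvous({size})"
    def eager_recv(self, size):        # primitive suited to small transfers
        return f"eager({size})"

class AppMessaging:
    EAGER_LIMIT = 4096
    def __init__(self, sysmsg):
        self.sysmsg = sysmsg
    def on_transfer_indication(self, size):
        # Administer the transfer using a system messaging primitive.
        if size <= self.EAGER_LIMIT:
            return self.sysmsg.eager_recv(size)
        return self.sysmsg.rendezvous_recv(size)

print(AppMessaging(SystemMessaging()).on_transfer_indication(10_000))
```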

    14. computational-hydraulics

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      and Aerodynamics using STAR-CCM+ for CFD Analysis, March 21-22, 2012, Argonne, Illinois. Dr. Steven Lottes. A training course in the use of computational hydraulics and aerodynamics CFD software using CD-adapco's STAR-CCM+ for analysis will be held at TRACC from March 21-22, 2012. The course assumes a basic knowledge of fluid mechanics and will make extensive use of hands-on tutorials. CD-adapco will issue

    15. How Do You Reduce Energy Use from Computers and Electronics?...

      Broader source: Energy.gov (indexed) [DOE]

      discussed some ways to reduce the energy used by computers and electronics. Some tips include ensuring your computer is configured for optimal energy savings, turning off devices...

    16. Determination Of Ph Including Hemoglobin Correction

      DOE Patents [OSTI]

      Maynard, John D.; Hendee, Shonn P.; Rohrscheib, Mark R.; Nunez, David; Alam, M. Kathleen; Franke, James E.; Kemeny, Gabor J.

      2005-09-13

      Methods and apparatuses of determining the pH of a sample. A method can comprise determining an infrared spectrum of the sample, and determining the hemoglobin concentration of the sample. The hemoglobin concentration and the infrared spectrum can then be used to determine the pH of the sample. In some embodiments, the hemoglobin concentration can be used to select a model relating infrared spectra to pH that is applicable at the determined hemoglobin concentration. In other embodiments, a model relating hemoglobin concentration and infrared spectra to pH can be used. An apparatus according to the present invention can comprise an illumination system, adapted to supply radiation to a sample; a collection system, adapted to collect radiation expressed from the sample responsive to the incident radiation; and an analysis system, adapted to relate information about the incident radiation, the expressed radiation, and the hemoglobin concentration of the sample to pH.
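
      The model-selection embodiment can be sketched as picking a calibration by hemoglobin range and applying it to the spectrum. The coefficients, ranges, and linear form below are placeholders, not values from the patent.

```python
# Sketch of hemoglobin-dependent model selection: the measured hemoglobin
# concentration picks which spectrum-to-pH calibration applies.

def predict_ph(spectrum, hemoglobin_g_dl, models):
    """models: list of (hb_low, hb_high, coefficients); linear in the spectrum."""
    for hb_low, hb_high, coeffs in models:
        if hb_low <= hemoglobin_g_dl < hb_high:
            return sum(c * a for c, a in zip(coeffs, spectrum))
    raise ValueError("no calibration for this hemoglobin range")

models = [(0, 12, [0.02, 0.01, 7.0]), (12, 20, [0.015, 0.012, 7.1])]
print(round(predict_ph([1.0, 2.0, 1.0], 14.0, models), 3))  # 7.139
```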

    17. TRAC-PF1/MOD1: an advanced best-estimate computer program for pressurized water reactor thermal-hydraulic analysis

      SciTech Connect (OSTI)

      Liles, D.R.; Mahaffy, J.H.

      1986-07-01

      The Los Alamos National Laboratory is developing the Transient Reactor Analysis Code (TRAC) to provide advanced best-estimate predictions of postulated accidents in light-water reactors. The TRAC-PF1/MOD1 program provides this capability for pressurized water reactors and for many thermal-hydraulic test facilities. The code features either a one- or a three-dimensional treatment of the pressure vessel and its associated internals, a two-fluid nonequilibrium hydrodynamics model with a noncondensable gas field and solute tracking, flow-regime-dependent constitutive equation treatment, optional reflood tracking capability for bottom-flood and falling-film quench fronts, and consistent treatment of entire accident sequences including the generation of consistent initial conditions. The stability-enhancing two-step (SETS) numerical algorithm is used in the one-dimensional hydrodynamics and permits this portion of the fluid dynamics to violate the material Courant condition. This technique permits large time steps and, hence, reduced running time for slow transients.

    18. Smart Grid Computational Tool | Open Energy Information

      Open Energy Info (EERE)

      project benefits. The Smart Grid Computational Tool employs the benefit analysis methodology that DOE uses to evaluate the Recovery Act smart grid projects. How it works: The...

    19. PREPARING FOR EXASCALE: ORNL Leadership Computing Application...

      Office of Scientific and Technical Information (OSTI)

      ... Requirements elicitation, analysis, validation, and management comprise a difficult and ... Research Org: Oak Ridge National Laboratory (ORNL); Oak Ridge Leadership Computing ...

    20. Drapery assembly including insulated drapery liner

      DOE Patents [OSTI]

      Cukierski, Gwendolyn (Ithaca, NY)

      1983-01-01

      A drapery assembly is disclosed for covering a framed wall opening, the assembly including drapery panels hung on a horizontal traverse rod, the rod having a pair of master slides and means for displacing the master slides between open and closed positions. A pair of insulating liner panels are positioned behind the drapery, the remote side edges of the liner panels being connected with the side portions of the opening frame, and the adjacent side edges of the liner panels being connected with a pair of vertically arranged center support members adapted for sliding movement longitudinally of a horizontal track member secured to the upper horizontal portion of the opening frame. Pivotally arranged brackets connect the center support members with the master slides of the traverse rod whereby movement of the master slides to effect opening and closing of the drapery panels effects simultaneous opening and closing of the liner panels.

    1. Thermovoltaic semiconductor device including a plasma filter

      DOE Patents [OSTI]

      Baldasaro, Paul F.

      1999-01-01

      A thermovoltaic energy conversion device and related method for converting thermal energy into an electrical potential. An interference filter is provided on a semiconductor thermovoltaic cell to pre-filter black body radiation. The semiconductor thermovoltaic cell includes a P/N junction supported on a substrate which converts incident thermal energy below the semiconductor junction band gap into electrical potential. The semiconductor substrate is doped to provide a plasma filter which reflects back energy having a wavelength which is above the band gap and which is ineffectively filtered by the interference filter, through the P/N junction to the source of radiation thereby avoiding parasitic absorption of the unusable portion of the thermal radiation energy.

    2. Optical panel system including stackable waveguides

      DOE Patents [OSTI]

      DeSanto, Leonard; Veligdan, James T.

      2007-03-06

      An optical panel system including stackable waveguides is provided. The optical panel system displays a projected light image and comprises a plurality of planar optical waveguides in a stacked state. The optical panel system further comprises a support system that aligns and supports the waveguides in the stacked state. In one embodiment, the support system comprises at least one rod, wherein each waveguide contains at least one hole, and wherein each rod is positioned through a corresponding hole in each waveguide. In another embodiment, the support system comprises at least two opposing edge structures having the waveguides positioned therebetween, wherein each opposing edge structure contains a mating surface, wherein opposite edges of each waveguide contain mating surfaces which are complementary to the mating surfaces of the opposing edge structures, and wherein each mating surface of the opposing edge structures engages a corresponding complementary mating surface of the opposite edges of each waveguide.

    3. Optical panel system including stackable waveguides

      DOE Patents [OSTI]

      DeSanto, Leonard; Veligdan, James T.

      2007-11-20

      An optical panel system including stackable waveguides is provided. The optical panel system displays a projected light image and comprises a plurality of planar optical waveguides in a stacked state. The optical panel system further comprises a support system that aligns and supports the waveguides in the stacked state. In one embodiment, the support system comprises at least one rod, wherein each waveguide contains at least one hole, and wherein each rod is positioned through a corresponding hole in each waveguide. In another embodiment, the support system comprises at least two opposing edge structures having the waveguides positioned therebetween, wherein each opposing edge structure contains a mating surface, wherein opposite edges of each waveguide contain mating surfaces which are complementary to the mating surfaces of the opposing edge structures, and wherein each mating surface of the opposing edge structures engages a corresponding complementary mating surface of the opposite edges of each waveguide.

    4. Computing and Computational Sciences Directorate - Joint Institute...

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      (JICS). JICS combines the experience and expertise in theoretical and computational science and engineering, computer science, and mathematics in these two institutions and ...

    5. Engine lubrication circuit including two pumps

      DOE Patents [OSTI]

      Lane, William H.

      2006-10-03

      A lubrication pump coupled to the engine is sized such that it can supply the engine with a predetermined flow volume as soon as the engine reaches a peak torque engine speed. In engines that operate predominantly at speeds above the peak torque engine speed, the lubrication pump often produces lubrication fluid in excess of the predetermined flow volume, which is bypassed back to a lubrication fluid source. This arguably results in wasted power. In order to more efficiently lubricate an engine, a lubrication circuit includes a lubrication pump and a variable delivery pump. The lubrication pump is operably coupled to the engine, and the variable delivery pump is in communication with a pump output controller that is operable to vary the lubrication fluid output from the variable delivery pump as a function of at least one of engine speed and lubrication flow volume or system pressure. Thus, the lubrication pump can be sized to produce the predetermined flow volume at the speed range in which the engine predominantly operates, while the variable delivery pump can supplement lubrication fluid delivery from the lubrication pump at engine speeds below the predominant engine speed range.

    6. High-Performance Computing for Advanced Smart Grid Applications

      SciTech Connect (OSTI)

      Huang, Zhenyu; Chen, Yousu

      2012-07-06

      The power grid is becoming far more complex as a result of the grid evolution meeting an information revolution. Due to the penetration of smart grid technologies, the grid is evolving at an unprecedented speed and the information infrastructure is fundamentally improved with a large number of smart meters and sensors that produce several orders of magnitude larger amounts of data. How to pull data in, perform analysis, and put information out in a real-time manner is a fundamental challenge in smart grid operation and planning. The future power grid requires high performance computing to be one of the foundational technologies in developing the algorithms and tools for the significantly increased complexity. New techniques and computational capabilities are required to meet the demands for higher reliability and better asset utilization, including advanced algorithms and computing hardware for large-scale modeling, simulation, and analysis. This chapter summarizes the computational challenges in smart grid and the need for high performance computing, and presents examples of how high performance computing might be used for future smart grid operation and planning.

    7. Computation Directorate 2007 Annual Report

      SciTech Connect (OSTI)

      Henson, V E; Guse, J A

      2008-03-06

      If there is a single word that both characterized 2007 and dominated the thoughts and actions of many Laboratory employees throughout the year, it is transition. Transition refers to the major shift that took place on October 1, when the University of California relinquished management responsibility for Lawrence Livermore National Laboratory (LLNL), and Lawrence Livermore National Security, LLC (LLNS), became the new Laboratory management contractor for the Department of Energy's (DOE's) National Nuclear Security Administration (NNSA). In the 55 years under the University of California, LLNL amassed an extraordinary record of significant accomplishments, clever inventions, and momentous contributions in the service of protecting the nation. This legacy provides the new organization with a built-in history, a tradition of excellence, and a solid set of core competencies from which to build the future. I am proud to note that in the nearly seven years I have had the privilege of leading the Computation Directorate, our talented and dedicated staff has made far-reaching contributions to the legacy and tradition we passed on to LLNS. Our place among the world's leaders in high-performance computing, algorithmic research and development, applications, and information technology (IT) services and support is solid. I am especially gratified to report that through all the transition turmoil, and it has been considerable, the Computation Directorate continues to produce remarkable achievements. Our most important asset--the talented, skilled, and creative people who work in Computation--has continued a long-standing Laboratory tradition of delivering cutting-edge science even in the face of adversity. The scope of those achievements is breathtaking, and in 2007, our accomplishments span an amazing range of topics. From making an important contribution to a Nobel Prize-winning effort to creating tools that can detect malicious codes embedded in commercial software; from expanding BlueGene/L, the world's most powerful computer, by 60% and using it to capture the most prestigious prize in the field of computing, to helping create an automated control system for the National Ignition Facility (NIF) that monitors and adjusts more than 60,000 control and diagnostic points; from creating a microarray probe that rapidly detects virulent high-threat organisms, natural or bioterrorist in origin, to replacing large numbers of physical computer servers with small numbers of virtual servers, reducing operating expense by 60%, the people in Computation have been at the center of weighty projects whose impacts are felt across the Laboratory and the DOE community. The accomplishments I just mentioned, and another two dozen or so, make up the stories contained in this report. While they form an exceptionally diverse set of projects and topics, it is what they have in common that excites me. They share the characteristic of being central, often crucial, to the mission-driven business of the Laboratory. Computational science has become fundamental to nearly every aspect of the Laboratory's approach to science and even to the conduct of administration. It is difficult to consider how we would proceed without computing, which occurs at all scales, from handheld and desktop computing to the systems controlling the instruments and mechanisms in the laboratories to the massively parallel supercomputers. The reasons for the dramatic increase in the importance of computing are manifest. 
Practical, fiscal, or political realities make the traditional approach to science, the cycle of theoretical analysis leading to experimental testing, leading to adjustment of theory, and so on, impossible, impractical, or forbidden. How, for example, can we understand the intricate relationship between human activity and weather and climate? We cannot test our hypotheses by experiment, which would require controlled use of the entire earth over centuries. It is only through extremely intricate, detailed computational simulation that we can test our theories, and simulating weather and climate over the entire globe requires the most massive high-performance computers that exist. Such extreme problems are found in numerous laboratory missions, including astrophysics, weapons programs, materials science, and earth science.

    8. High Performance Computing

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      High-Performance Computing. INL's high-performance computing center provides general use scientific computing capabilities to support the lab's efforts in advanced...

    9. Computational Science Research in Support of Petascale Electromagnetic Modeling

      SciTech Connect (OSTI)

      Lee, L.-Q.; Akcelik, V; Ge, L; Chen, S; Schussman, G; Candel, A; Li, Z; Xiao, L; Kabel, A; Uplenchwar, R; Ng, C; Ko, K; /SLAC

      2008-06-20

      Computational science research components were vital parts of the SciDAC-1 accelerator project and are continuing to play a critical role in newly-funded SciDAC-2 accelerator project, the Community Petascale Project for Accelerator Science and Simulation (ComPASS). Recent advances and achievements in the area of computational science research in support of petascale electromagnetic modeling for accelerator design analysis are presented, which include shape determination of superconducting RF cavities, mesh-based multilevel preconditioner in solving highly-indefinite linear systems, moving window using h- or p- refinement for time-domain short-range wakefield calculations, and improved scalable application I/O.

    10. Computer Security Risk Assessment

      Energy Science and Technology Software Center (OSTI)

      1992-02-11

      LAVA/CS (LAVA for Computer Security) is an application of the Los Alamos Vulnerability Assessment (LAVA) methodology specific to computer and information security. The software serves as a generic tool for identifying vulnerabilities in computer and information security safeguards systems. Although it does not perform a full risk assessment, the results of its analysis may provide valuable insights into security problems. LAVA/CS assumes that the system is exposed both to natural and environmental hazards and to deliberate malevolent actions by either insiders or outsiders. In the process of answering the LAVA/CS questionnaire, the user identifies missing safeguards in 34 areas ranging from password management to personnel security and internal audit practices. Specific safeguards protecting a generic set of assets (or targets) from a generic set of threats (or adversaries) are considered. There are four generic assets: the facility, the organization's environment; the hardware, all computer-related hardware; the software, the information in machine-readable form stored either on-line or on transportable media; and the documents and displays, the information in human-readable form stored as hard-copy materials (manuals, reports, listings in full size or microform), film, and screen displays. Two generic threats are considered: natural and environmental hazards (storms, fires, power abnormalities, water and accidental maintenance damage); and on-site human threats, both intentional and accidental acts attributable to a perpetrator on the facility's premises.
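
      To make the asset-versus-threat framing concrete, a toy sketch follows; the area names, answers, and reporting are hypothetical stand-ins for the questionnaire's 34 areas, not LAVA's internal representation:

      # Illustrative only: generic assets versus generic threats, with
      # questionnaire answers marking safeguard areas found to be absent.
      ASSETS = ["facility", "hardware", "software", "documents_and_displays"]
      THREATS = ["natural_environmental", "onsite_human"]

      # answers[area] = True if the safeguard was found to be in place
      answers = {"password_management": False, "personnel_security": True,
                 "internal_audit": False}

      def missing_safeguards(answers):
          """Return the safeguard areas the questionnaire flagged as absent."""
          return sorted(area for area, present in answers.items() if not present)

      for asset in ASSETS:
          for threat in THREATS:
              gaps = missing_safeguards(answers)
              if gaps:
                  print(f"{asset} vs {threat}: review {', '.join(gaps)}")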

    11. Physics, Computer Science and Mathematics Division. Annual report, January 1-December 31, 1980

      SciTech Connect (OSTI)

      Birge, R.W.

      1981-12-01

      Research in the Physics, Computer Science, and Mathematics Division is described for the year 1980. While the division's major effort remains in high energy particle physics, there is a continually growing program in computer science and applied mathematics. Experimental programs are reported in e⁺e⁻ annihilation, muon and neutrino reactions at FNAL, the search for effects of a right-handed gauge boson, limits on neutrino oscillations from muon-decay neutrinos, strong interaction experiments at FNAL, strong interaction experiments at BNL, the particle data center, Barrelet moment analysis of πN scattering data, astrophysics and astronomy, earth sciences, and instrument development and engineering for high energy physics. In theoretical physics, research studies included particle physics and accelerator physics. Computer science and mathematics research included analytical and numerical methods, information analysis techniques, advanced computer concepts, and environmental and epidemiological studies. (GHT)

    12. Barbara Helland Advanced Scientific Computing Research NERSC-HEP Requirements Review

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      7-28, 2012. Barbara Helland, Advanced Scientific Computing Research, NERSC-HEP Requirements Review. Science case studies drive the discussions. Program Requirements Reviews: program offices are evaluated every two to three years; participants include program managers, PIs/scientists, and ESnet/NERSC staff and management; the reviews are a user-driven discussion of science opportunities and needs. What: instruments and facilities, data scale, computational requirements. How: science process, data analysis,

    13. Parallel Computing Summer Research Internship

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      National Security Education Center educational programs include the Computer System, Cluster and Networking Summer Institute (CSCNSI), the IS&T Data Science at Scale Summer School, the IS&T Co-Design Summer School, and the Parallel Computing Summer Research Internship, along with university partnerships: the CMU/LANL Institute for Reliable High Performance Technology (IRHPIT), the Missouri S&T/LANL Cyber Security Sciences Institute (CSSI), and the UC Davis/LANL Institute for Next Generation Visualization and Analysis (INGVA).

    14. computational-hydraulics-for-transportation

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Transportation Workshop, Sept. 23-24, 2009, Argonne TRACC. Contact: Dr. Steven Lottes. Announcement (PDF). The Transportation Research and Analysis Computing Center at Argonne National Laboratory will hold a workshop on the use of computational hydraulics for transportation applications. The goals of the workshop are to bring together people who are using or would benefit from the use of high performance cluster

    15. Computational Physicist | Princeton Plasma Physics Lab

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Computational Physicist Department: Theory Supervisor(s): Steve Jardin Staff: ENG 04 Requisition Number: 16000352 This position is in the Computational Plasma Physics Group. PPPL seeks a computational physicist for the TRANSP development and CPPG (Computational Plasma Physics Group) support group. The TRANSP software package is used by fusion physicists worldwide for comprehensive analysis and interpretation of data from magnetic-confinement fusion experiments and to predict the performance of

    16. Caterpillar and Cummins Gain Edge Through Argonne's Rare Computer...

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Caterpillar and Cummins Gain Edge Through Argonne's Rare Computer Modeling and Analysis Resources

    17. 2D Wavefront Sensor Analysis and Control

      Energy Science and Technology Software Center (OSTI)

      1996-02-19

      This software is designed for data acquisition and analysis of two-dimensional wavefront sensors. The software includes data acquisition and control functions for an EPIX frame grabber to acquire data from a computer and all the appropriate analysis functions necessary to produce and display intensity and phase information. This software is written in Visual Basic for Windows.
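
      The original is Visual Basic driving an EPIX frame grabber; as a hedged, language-neutral sketch of the phase-analysis step such tools perform, the following numpy code reconstructs a phase map from slope measurements by least squares, with synthetic data standing in for sensor input:

      import numpy as np

      n = 8                                        # subapertures per side (toy size)
      yy, xx = np.mgrid[0:n, 0:n]
      phase_true = 0.05 * (xx**2 + yy**2)          # synthetic quadratic wavefront

      # forward-difference slope "measurements", as a sensor would report
      sx = (phase_true[:, 1:] - phase_true[:, :-1]).ravel()
      sy = (phase_true[1:, :] - phase_true[:-1, :]).ravel()

      # build the matching finite-difference operators, row by row
      rows, cols, vals = [], [], []
      r = 0
      for i in range(n):
          for j in range(n - 1):                   # x-differences
              rows += [r, r]; cols += [i * n + j + 1, i * n + j]; vals += [1.0, -1.0]; r += 1
      for i in range(n - 1):
          for j in range(n):                       # y-differences
              rows += [r, r]; cols += [(i + 1) * n + j, i * n + j]; vals += [1.0, -1.0]; r += 1
      D = np.zeros((r + 1, n * n))
      D[rows, cols] = vals
      D[r, 0] = 1.0                                # pin the unmeasurable piston term
      rhs = np.concatenate([sx, sy, [0.0]])

      phi = np.linalg.lstsq(D, rhs, rcond=None)[0].reshape(n, n)
      print(np.abs(phi - (phase_true - phase_true[0, 0])).max())   # ~1e-13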

    18. Ionic liquids, electrolyte solutions including the ionic liquids, and energy storage devices including the ionic liquids

      DOE Patents [OSTI]

      Gering, Kevin L.; Harrup, Mason K.; Rollins, Harry W.

      2015-12-08

      An ionic liquid including a phosphazene compound that has a plurality of phosphorus-nitrogen units and at least one pendant group bonded to each phosphorus atom of the plurality of phosphorus-nitrogen units. One pendant group of the at least one pendant group comprises a positively charged pendant group. Additional embodiments of ionic liquids are disclosed, as are electrolyte solutions and energy storage devices including the embodiments of the ionic liquid.

    19. Progress report No. 56, October 1, 1979-September 30, 1980. [Courant Mathematics and Computing Lab., New York Univ

      SciTech Connect (OSTI)

      1980-10-01

      Research during the period is sketched in a series of abstract-length summaries. The forte of the Laboratory lies in the development and analysis of mathematical models and efficient computing methods for the rapid solution of technological problems of interest to DOE, in particular the detailed calculation on large computers of complicated fluid flows in which reactions and heat conduction may be taking place. The research program of the Laboratory encompasses two broad categories: analytical and numerical methods, which include applied analysis, computational mathematics, and numerical methods for partial differential equations; and advanced computer concepts, which include software engineering, distributed systems, and high-performance systems. Lists of seminars and publications are included. (RWR)

    20. Development of computer graphics

      SciTech Connect (OSTI)

      Nuttall, H.E.

      1989-07-01

      The purpose of this project was to screen and evaluate three graphics packages for their suitability for displaying concentration contour graphs. The information to be displayed comes from computer code simulations describing airborne contaminant transport. The three evaluated programs were MONGO (John Tonry, MIT, Cambridge, MA, 02139), Mathematica (Wolfram Research Inc.), and NCSA Image (National Center for Supercomputing Applications at the University of Illinois at Urbana-Champaign). After a preliminary investigation of each package, NCSA Image appeared to be significantly superior for generating the desired concentration contour graphs. Hence subsequent work, and this report, describes the implementation and testing of NCSA Image on both an Apple Mac II and a Sun 4 computer. NCSA Image includes several utilities (Layout, DataScope, HDF, and PalEdit) which were used in this study and installed on Dr. Ted Yamada's Mac II computer. Dr. Yamada provided two sets of air pollution plume data which were displayed using NCSA Image. Both sets were animated into a sequential expanding plume series.

    1. NREL: Technology Deployment - Cities-LEAP Energy Profile Tool Includes

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Energy Data on More than 23,400 U.S. Cities. Cities-LEAP Energy Profile Tool Includes Energy Data on More than 23,400 U.S. Cities. News: NREL Report Examines Energy Use in Cities and Proposes Next Steps for Energy Innovation. Publications: City-Level Energy Decision Making: Data Use in Energy Planning, Implementation, and Evaluation in U.S. Cities. Sponsors: DOE's Office of Energy Efficiency and Renewable Energy, Policy and Analysis Office. Related stories: Hawaii's First Net-Zero Energy

    2. Your Computer Would Like a Little Sleep, Too

      Broader source: Energy.gov [DOE]

      One woman considers energy-efficient choices in purchasing a new computer, including hardware and active power management software.

    3. Computational mechanics research and support for aerodynamics and hydraulics at TFHRC, year 1 quarter 3 progress report.

      SciTech Connect (OSTI)

      Lottes, S.A.; Kulak, R.F.; Bojanowski, C.

      2011-08-26

      The computational fluid dynamics (CFD) and computational structural mechanics (CSM) focus areas at Argonne's Transportation Research and Analysis Computing Center (TRACC) initiated a project in August 2010 to support and complement the experimental programs at the Turner-Fairbank Highway Research Center (TFHRC) with high performance computing based analysis capabilities. The project was established with a new interagency agreement between the Department of Energy and the Department of Transportation to provide collaborative research, development, and benchmarking of advanced three-dimensional computational mechanics analysis methods to the aerodynamics and hydraulics laboratories at TFHRC for a period of five years, beginning in October 2010. The analysis methods employ well-benchmarked and supported commercial computational mechanics software. Computational mechanics encompasses the areas of Computational Fluid Dynamics (CFD), Computational Wind Engineering (CWE), Computational Structural Mechanics (CSM), and Computational Multiphysics Mechanics (CMM) applied in Fluid-Structure Interaction (FSI) problems. The major areas of focus of the project are wind and water loads on bridges (superstructure, deck, cables, and substructure, including soil), primarily during storms and flood events, and the risk of structural failure that these loads pose. For flood events at bridges, another major focus of the work is assessment of the risk to bridges caused by scour of stream and riverbed material away from the foundations of a bridge. Other areas of current research include modeling of flow through culverts to assess them for fish passage, modeling of salt spray transport into bridge girders to address the suitability of using weathering steel in bridges, vehicle stability under high wind loading, and the use of electromagnetic shock absorbers to improve vehicle stability under high wind conditions. This quarterly report documents technical progress on the project tasks for the period of April through June 2011.

    4. Internal combustion engines: Computer applications. (Latest citations from the EI Compendex plus database). Published Search

      SciTech Connect (OSTI)

      Not Available

      1993-10-01

      The bibliography contains citations concerning the application of computers and computerized simulations in the design, analysis, operation, and evaluation of various types of internal combustion engines and associated components and apparatus. Special attention is given to engine control and performance. (Contains a minimum of 67 citations and includes a subject term index and title list.)

    5. Numerical uncertainty in computational engineering and physics

      SciTech Connect (OSTI)

      Hemez, Francois M

      2009-01-01

      Obtaining a solution that approximates ordinary or partial differential equations on a computational mesh or grid does not necessarily mean that the solution is accurate or even 'correct'. Unfortunately, assessing the quality of discrete solutions by questioning the role played by spatial and temporal discretizations generally comes a distant third to test-analysis comparison and model calibration. This publication is contributed to raise awareness of the fact that discrete solutions introduce numerical uncertainty. This uncertainty may, in some cases, overwhelm in complexity and magnitude other sources of uncertainty, including experimental variability, parametric uncertainty, and modeling assumptions. The concepts of consistency, convergence, and truncation error are reviewed to explain the relationship between the exact solution of the continuous equations, the solution of the modified equations, and the discrete solutions computed by a code. The current state of the practice of code and solution verification activities is discussed. An example from the discipline of hydrodynamics illustrates the significant effect that meshing can have on the quality of code predictions. A simple method is proposed to derive bounds on solution uncertainty in cases where the exact solution of the continuous equations, or of its modified equations, is unknown. It is argued that numerical uncertainty originating from mesh discretization should always be quantified and accounted for in the overall uncertainty 'budget' that supports decision-making for applications in computational physics and engineering.
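
      One common way to realize the kind of bound the abstract proposes (a standard technique, not necessarily the author's exact method) is Richardson extrapolation on three systematically refined meshes; the grid values below are hypothetical:

      import math

      def observed_order(f_coarse, f_medium, f_fine, ratio):
          """Observed convergence order p from three systematically refined meshes."""
          return math.log(abs(f_coarse - f_medium) / abs(f_medium - f_fine)) / math.log(ratio)

      def richardson_estimate(f_medium, f_fine, ratio, p):
          """Extrapolated value and an error estimate for the fine-mesh solution."""
          err = (f_fine - f_medium) / (ratio**p - 1.0)
          return f_fine + err, abs(err)

      # hypothetical grid-convergence data: one functional on 3 meshes, refinement ratio 2
      f1, f2, f3 = 0.9713, 0.9921, 0.9977          # coarse, medium, fine
      p = observed_order(f1, f2, f3, ratio=2.0)
      f_exact, err = richardson_estimate(f2, f3, ratio=2.0, p=p)
      print(f"observed order ~ {p:.2f}, extrapolated ~ {f_exact:.4f}, |error| ~ {err:.1e}")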

    6. Model Analysis ToolKit

      Energy Science and Technology Software Center (OSTI)

      2015-05-15

      MATK provides basic functionality to facilitate model analysis within the Python computational environment. Model analysis setup within MATK includes: define parameters; define observations; define the model (a Python function); and define samplesets (sets of parameter combinations). Currently supported functionality includes: forward model runs; Latin-hypercube sampling of parameters; multi-dimensional parameter studies; parallel execution of parameter samples; model calibration using an internal Levenberg-Marquardt algorithm; model calibration using the lmfit package; model calibration using the levmar package; and Markov chain Monte Carlo using the pymc package. MATK facilitates model analysis using scipy for calibration (scipy.optimize) and rpy2 (a Python interface to R).
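
      MATK's own API is not reproduced here; the following hedged sketch mimics the same workflow (a Latin-hypercube sampleset, forward runs, then Levenberg-Marquardt calibration) with plain scipy and a toy exponential-decay model:

      import numpy as np
      from scipy.stats import qmc
      from scipy.optimize import least_squares

      def model(params, t):
          a, k = params
          return a * np.exp(-k * t)                 # toy exponential-decay model

      t = np.linspace(0.0, 5.0, 20)
      obs = model([2.0, 0.7], t) + 0.01 * np.random.default_rng(0).standard_normal(t.size)

      # sampleset: Latin-hypercube draws over parameter bounds
      lhs = qmc.LatinHypercube(d=2, seed=1)
      samples = qmc.scale(lhs.random(n=16), l_bounds=[0.5, 0.1], u_bounds=[5.0, 2.0])
      runs = [model(s, t) for s in samples]         # forward runs (parallelizable)

      # calibration: Levenberg-Marquardt started from the best sample
      best = samples[np.argmin([np.sum((r - obs)**2) for r in runs])]
      fit = least_squares(lambda p: model(p, t) - obs, x0=best, method="lm")
      print("calibrated parameters:", fit.x)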

    7. Applications of Parallel Computers

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Applications of Parallel Computers, UCB CS267, Spring 2015. Tuesday & Thursday, 9:30-11:00 Pacific Time. Applications of Parallel Computers, CS267, is a graduate-level course...

    8. Locating hardware faults in a parallel computer

      DOE Patents [OSTI]

      Archer, Charles J.; Megerian, Mark G.; Ratterman, Joseph D.; Smith, Brian E.

      2010-04-13

      Locating hardware faults in a parallel computer, including defining within a tree network of the parallel computer two or more sets of non-overlapping test levels of compute nodes of the network that together include all the data communications links of the network, each non-overlapping test level comprising two or more adjacent tiers of the tree; defining test cells within each non-overlapping test level, each test cell comprising a subtree of the tree including a subtree root compute node and all descendant compute nodes of the subtree root compute node within a non-overlapping test level; performing, separately on each set of non-overlapping test levels, an uplink test on all test cells in a set of non-overlapping test levels; and performing, separately from the uplink tests and separately on each set of non-overlapping test levels, a downlink test on all test cells in a set of non-overlapping test levels.
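
      The partitioning in the claim can be pictured with a small sketch: split the tiers of a complete binary tree into non-overlapping test levels of two adjacent tiers, then form one test cell per subtree root within each level. This toy code shows the structure only, not the uplink/downlink test signaling:

      def tiers(depth):
          """Node ids per tier of a complete binary tree stored heap-style from 1."""
          return [list(range(2 ** d, 2 ** (d + 1))) for d in range(depth)]

      def test_levels(depth, tiers_per_level=2):
          """Split the tiers into non-overlapping test levels of adjacent tiers."""
          ts = tiers(depth)
          return [ts[i:i + tiers_per_level] for i in range(0, depth, tiers_per_level)]

      def test_cells(level):
          """One cell per root in the level's top tier: root plus descendants in level."""
          top, rest = level[0], level[1:]
          cells = {}
          for root in top:
              nodes, frontier = [root], [root]
              for tier in rest:
                  tier_set = set(tier)
                  frontier = [c for p in frontier for c in (2 * p, 2 * p + 1) if c in tier_set]
                  nodes += frontier
              cells[root] = nodes
          return cells

      for level in test_levels(depth=4):
          print(test_cells(level))
      # -> {1: [1, 2, 3]} and {4: [4, 8, 9], 5: [5, 10, 11], 6: [6, 12, 13], 7: [7, 14, 15]}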

    9. advanced simulation and computing

      National Nuclear Security Administration (NNSA)

      Each successive generation of computing system has provided greater computing power and energy efficiency.

      CTS-1 clusters will support NNSA's Life Extension Program and...

    10. Applied & Computational Math

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Applied & Computational Math, Sandia Energy.

    11. Computational Earth Science

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Computational Earth Science: We develop and apply a range of high-performance computational methods and software tools to Earth science projects in support of environmental ...

    12. Energy Aware Computing

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Energy Aware Computing: Dynamic Frequency Scaling. One means to lower the energy ...

    13. Molecular Science Computing | EMSL

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      computational and state-of-the-art experimental tools, providing a cross-disciplinary environment to further research. Additional information: computing user policies, partners...

    14. C-parameter distribution at N3LL' including power corrections

      DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

      Hoang, André H.; Kolodrubetz, Daniel W.; Mateu, Vicent; Stewart, Iain W.

      2015-05-15

      We compute the e⁺e⁻ C-parameter distribution using the soft-collinear effective theory with a resummation of the most singular partonic terms to next-to-next-to-next-to-leading-log prime (N3LL') accuracy. This includes the known fixed-order QCD results up to O(α_s³), a numerical determination of the two-loop nonlogarithmic term of the soft function, and all logarithmic terms in the jet and soft functions up to three loops. Our result holds for C in the peak, tail, and far-tail regions. Additionally, we treat hadronization effects using a field-theoretic nonperturbative soft function, with moments Ω_n. To eliminate an O(Λ_QCD) renormalon ambiguity in the soft function, we switch from the MS-bar scheme to a short-distance "Rgap" scheme to define the leading power correction parameter Ω_1. We show how to simultaneously account for running effects in Ω_1 due to renormalon subtractions and hadron-mass effects, enabling power correction universality between C-parameter and thrust to be tested in our setup. We discuss in detail the impact of resummation and renormalon subtractions on the convergence. In the relevant fit region for α_s(m_Z) and Ω_1, the perturbative uncertainty in our cross section is approximately 2.5% at Q = m_Z.
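
      For readers unfamiliar with the observable, the C-parameter is the standard event shape built from the linearized momentum tensor; its textbook definition (supplied for context, not quoted from the record) is:

      % Standard definition of the C-parameter event shape for massless momenta:
      \[
        C = \frac{3}{2}\,
            \frac{\sum_{i,j} \lvert \vec p_i \rvert \, \lvert \vec p_j \rvert \sin^2\theta_{ij}}
                 {\bigl( \sum_i \lvert \vec p_i \rvert \bigr)^{2}}
          = 3\left(\lambda_1\lambda_2 + \lambda_2\lambda_3 + \lambda_3\lambda_1\right),
      \]
      % where \theta_{ij} is the angle between momenta i and j, and \lambda_k are
      % the eigenvalues of the linearized momentum tensor.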

    15. Fourth SIAM conference on mathematical and computational issues in the geosciences: Final program and abstracts

      SciTech Connect (OSTI)

      1997-12-31

      The conference focused on computational and modeling issues in the geosciences. Of the geosciences, problems associated with phenomena occurring in the earth's subsurface were best represented. Topics in this area included petroleum recovery, ground water contamination and remediation, seismic imaging, parameter estimation, upscaling, geostatistical heterogeneity, reservoir and aquifer characterization, optimal well placement and pumping strategies, and geochemistry. Additional sessions were devoted to the atmosphere, surface water, and oceans. The central mathematical themes included computational algorithms and numerical analysis, parallel computing, mathematical analysis of partial differential equations, statistical and stochastic methods, optimization, inversion, homogenization, and renormalization. The problem areas discussed at this conference are of considerable national importance, given the increasing importance of environmental issues, global change, remediation of waste sites, declining domestic energy sources, and an increasing reliance on producing the most out of established oil reservoirs.

    16. Combinatorial evaluation of systems including decomposition of a system representation into fundamental cycles

      DOE Patents [OSTI]

      Oliveira, Joseph S.; Jones-Oliveira, Janet B.; Bailey, Colin G.; Gull, Dean W.

      2008-07-01

      One embodiment of the present invention includes a computer operable to represent a physical system with a graphical data structure corresponding to a matroid. The graphical data structure corresponds to a number of vertices and a number of edges that each correspond to two of the vertices. The computer is further operable to define a closed pathway arrangement with the graphical data structure and identify each different one of a number of fundamental cycles by evaluating a different respective one of the edges with a spanning tree representation. The fundamental cycles each include three or more of the vertices.
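
      The fundamental-cycle decomposition at the heart of the claim is easy to sketch: given a spanning tree, each non-tree edge closes exactly one cycle, found by joining the tree paths from its endpoints. A minimal Python illustration follows (toy graph; no claim about the patented implementation):

      def tree_path(parent, u, v):
          """Path from u to v in a tree given parent pointers toward the root."""
          def to_root(x):
              path = [x]
              while parent[x] is not None:
                  x = parent[x]
                  path.append(x)
              return path
          pu, pv = to_root(u), to_root(v)
          common = set(pu) & set(pv)
          def cut(p):                      # truncate at the lowest common ancestor
              return p[:next(i for i, node in enumerate(p) if node in common) + 1]
          pu, pv = cut(pu), cut(pv)
          return pu + pv[-2::-1]

      def fundamental_cycles(edges, tree_edges, parent):
          """Each non-tree edge (u, v) closes exactly one fundamental cycle."""
          return [tree_path(parent, u, v) + [u]
                  for (u, v) in edges if (u, v) not in tree_edges]

      # toy graph: a square with one diagonal, spanning tree rooted at 0
      edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
      tree_edges = {(0, 1), (1, 2), (2, 3)}
      parent = {0: None, 1: 0, 2: 1, 3: 2}
      print(fundamental_cycles(edges, tree_edges, parent))
      # -> [[3, 2, 1, 0, 3], [0, 1, 2, 0]]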

    17. New challenges in computational biochemistry

      SciTech Connect (OSTI)

      Honig, B.

      1996-12-31

      The new challenges in computational biochemistry to which the title refers include the prediction of the relative binding free energy of different substrates to the same protein, conformational sampling, and other examples of theoretical predictions matching known protein structure and behavior.

    18. Experimental Mathematics and Computational Statistics

      SciTech Connect (OSTI)

      Bailey, David H.; Borwein, Jonathan M.

      2009-04-30

      The field of statistics has long been noted for techniques to detect patterns and regularities in numerical data. In this article we explore connections between statistics and the emerging field of 'experimental mathematics'. These include both applications of experimental mathematics in statistics and statistical methods applied to computational mathematics.

    19. Computing and Computational Sciences Directorate - Information...

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      cost-effective, state-of-the-art computing capabilities for research and development. ... communicates and manages strategy, policy and finance across the portfolio of IT assets. ...

    1. Computing for Finance

      SciTech Connect (OSTI)

      2010-03-24

      The finance sector is one of the driving forces for the use of distributed or Grid computing for business purposes. The speakers will review the state of the art of high performance computing in the financial sector, and provide insight into how different types of Grid computing, from local clusters to global networks, are being applied to financial applications. They will also describe the use of software and techniques from physics, such as Monte Carlo simulations, in the financial world. There will be four talks of 20 minutes each; the talk abstracts and speaker bios are listed below. This will be followed by a Q&A panel session with the speakers. From 19:00 onwards there will be a networking cocktail for audience and speakers. This is an EGEE / CERN openlab event organized in collaboration with the regional business network rezonance.ch. A webcast of the event will be made available for subsequent viewing, along with PowerPoint material presented by the speakers. Attendance is free and open to all. Registration is mandatory via www.rezonance.ch, including for CERN staff.

      1. Overview of High Performance Computing in the Financial Industry. Michael Yoo, Managing Director, Head of the Technical Council, UBS. The presentation will describe the key business challenges driving the need for HPC solutions, describe the means by which those challenges are being addressed within UBS (such as GRID) as well as the limitations of some of these solutions, and assess some of the newer HPC technologies which may also play a role in the financial industry in the future. Speaker bio: Michael originally joined the former Swiss Bank Corporation in 1994 in New York as a developer on a large data warehouse project. In 1996 he left SBC and took a role with Fidelity Investments in Boston. Unable to stay away for long, he returned to SBC in 1997 while working for Perot Systems in Singapore. Finally, in 1998 he formally returned to UBS in Stamford following the merger with SBC and has remained with UBS for the past nine years. During his tenure at UBS, he has had a number of leadership roles within IT in development, support, and architecture. In 2006 Michael relocated to Switzerland to take up his current role as head of the UBS IB Technical Council, responsible for the overall technology strategy and vision of the Investment Bank. One of Michael's key responsibilities is to manage the UBS High Performance Computing Research Lab, and he has been involved in a number of initiatives in the HPC space.

      2. Grid in the Commercial World. Fred Gedling, Chief Technology Officer EMEA and Senior Vice President Global Services, DataSynapse. Grid computing gets mentions in the press for community programs, starting last decade with "Seti@Home". Government, national, and supranational initiatives in grid receive some press. One of the IT industry's best-kept secrets is the use of grid computing by commercial organizations, with spectacular results. Grid computing and its evolution into application virtualization are discussed, along with how this is key to the next-generation data center. Speaker bio: Fred Gedling holds the joint roles of Chief Technology Officer for EMEA and Senior Vice President of Global Services at DataSynapse, a global provider of application virtualisation software. Based in London and working closely with organisations seeking to optimise their IT infrastructures, Fred offers unique insights into the technology of virtualisation as well as the methodology of establishing ROI and rapid deployment to the immediate advantage of the business. Fred has more than fifteen years' experience of enterprise middleware and high-performance infrastructures. Prior to DataSynapse he worked in high performance CRM middleware and was the CTO EMEA for New Era of Networks (NEON) during the rapid growth of Enterprise Application Integration. His 25-year career in technology also includes management positions at Goldman Sachs and Stratus Computer. Fred holds a First Class BSc (Hons) degree in Physics with Astrophysics from the University of Leeds and had the privilege of being a summer student at CERN.

      3. Opportunities for gLite in Finance and Related Industries. Adam Vile, Head of Grid, HPC and Technical Computing, Excelian Ltd. gLite, the Grid software developed by the EGEE project, has been exceedingly successful as an enabling infrastructure and has been a massive success in bringing together scientific and technical communities to provide the compute power to address previously incomputable problems. Not so in the finance industry. In its current form gLite would be a business disabler; there are other middleware tools that solve the finance community's compute problems much better. Things are moving on, however. There are moves afoot in the open source community to evolve the technology to address other, more sophisticated needs such as utility and interactive computing. In this talk, I will describe how Excelian is providing Grid consultancy services for the finance community and how, through its relationship with the EGEE project, Excelian is helping to identify and exploit opportunities as the research and business worlds converge. Because of the strong third-party presence in the finance industry, such opportunities are few and far between, but they are there, especially as we expand sideways into related verticals such as the smaller hedge funds and energy companies. This talk will give an overview of the barriers to adoption of gLite in the finance industry and highlight some of the opportunities offered in this and related industries as the ideas around Grid mature. Speaker bio: Dr Adam Vile is a senior consultant and head of the Grid and HPC practice at Excelian, a consultancy that focuses on financial markets professional services. He has spent many years in investment banking, as a developer, project manager, and architect in both front and back office. Before joining Excelian he was senior Grid and HPC architect at Barclays Capital. Prior to joining investment banking, Adam spent a number of years lecturing in IT and mathematics at a UK university and maintains links with academia through lectures, research, and through validation and steering of postgraduate courses. He is a chartered mathematician and was the conference chair of the Institute of Mathematics and its Applications' first conference in computational finance.

      4. From Monte Carlo to Wall Street. Daniel Egloff, Head of Financial Engineering Computing Unit, Zürich Cantonal Bank. High performance computing techniques provide new means to solve computationally hard problems in the financial service industry. First I consider Monte Carlo simulation and illustrate how it can be used to implement a sophisticated credit risk management and economic capital framework. From a HPC perspective, basic Monte Carlo simulation is embarrassingly parallel and can be implemented efficiently on distributed memory clusters. Additional difficulties arise for adaptive variance reduction schemes, if the information content in a sample is very small, and if the amount of simulated data becomes so huge that incremental processing algorithms are indispensable. We discuss the business value of an advanced credit risk quantification, which is particularly compelling these days. While Monte Carlo simulation is a very versatile tool, it is not always the preferred solution for the pricing of complex products like multi-asset options, structured products, or credit derivatives. As a second application I show how operator methods can be used to develop a pricing framework. The scalability of operator methods relies heavily on optimized dense matrix-matrix multiplications and requires specialized BLAS level-3 implementations provided by specialized FPGA or GPU boards. Speaker bio: Daniel Egloff studied mathematics, theoretical physics, and computer science at the University of Zurich and the ETH Zurich. He holds a PhD in Mathematics from the University of Fribourg, Switzerland. After his PhD he started to work for a large Swiss insurance company in the area of asset and liability management. He continued his professional career in the consulting industry. At KPMG and Arthur Andersen he consulted international clients and implemented quantitative risk management solutions for financial institutions and insurance companies. In 2002 he joined Zurich Cantonal Bank. He was assigned to develop and implement credit portfolio risk and economic capital methodologies. He built up a competence center for high performance and cluster computing. Currently, Daniel Egloff is heading the Financial Computing unit in the ZKB Financial Engineering division. He and his team are engineering and operating high performance cluster applications for computationally intensive problems in financial risk management.

    2. Computing for Finance

      ScienceCinema (OSTI)

      None

      2011-10-06

      The finance sector is one of the driving forces for the use of distributed or Grid computing for business purposes. The speakers will review the state of the art of high performance computing in the financial sector, and provide insight into how different types of Grid computing, from local clusters to global networks, are being applied to financial applications. They will also describe the use of software and techniques from physics, such as Monte Carlo simulations, in the financial world. There will be four talks of 20 minutes each; the talk abstracts and speaker bios are listed below. This will be followed by a Q&A panel session with the speakers. From 19:00 onwards there will be a networking cocktail for audience and speakers. This is an EGEE / CERN openlab event organized in collaboration with the regional business network rezonance.ch. A webcast of the event will be made available for subsequent viewing, along with PowerPoint material presented by the speakers. Attendance is free and open to all. Registration is mandatory via www.rezonance.ch, including for CERN staff.

      1. Overview of High Performance Computing in the Financial Industry. Michael Yoo, Managing Director, Head of the Technical Council, UBS. The presentation will describe the key business challenges driving the need for HPC solutions, describe the means by which those challenges are being addressed within UBS (such as GRID) as well as the limitations of some of these solutions, and assess some of the newer HPC technologies which may also play a role in the financial industry in the future. Speaker bio: Michael originally joined the former Swiss Bank Corporation in 1994 in New York as a developer on a large data warehouse project. In 1996 he left SBC and took a role with Fidelity Investments in Boston. Unable to stay away for long, he returned to SBC in 1997 while working for Perot Systems in Singapore. Finally, in 1998 he formally returned to UBS in Stamford following the merger with SBC and has remained with UBS for the past nine years. During his tenure at UBS, he has had a number of leadership roles within IT in development, support, and architecture. In 2006 Michael relocated to Switzerland to take up his current role as head of the UBS IB Technical Council, responsible for the overall technology strategy and vision of the Investment Bank. One of Michael's key responsibilities is to manage the UBS High Performance Computing Research Lab, and he has been involved in a number of initiatives in the HPC space.

      2. Grid in the Commercial World. Fred Gedling, Chief Technology Officer EMEA and Senior Vice President Global Services, DataSynapse. Grid computing gets mentions in the press for community programs, starting last decade with "Seti@Home". Government, national, and supranational initiatives in grid receive some press. One of the IT industry's best-kept secrets is the use of grid computing by commercial organizations, with spectacular results. Grid computing and its evolution into application virtualization are discussed, along with how this is key to the next-generation data center. Speaker bio: Fred Gedling holds the joint roles of Chief Technology Officer for EMEA and Senior Vice President of Global Services at DataSynapse, a global provider of application virtualisation software. Based in London and working closely with organisations seeking to optimise their IT infrastructures, Fred offers unique insights into the technology of virtualisation as well as the methodology of establishing ROI and rapid deployment to the immediate advantage of the business. Fred has more than fifteen years' experience of enterprise middleware and high-performance infrastructures. Prior to DataSynapse he worked in high performance CRM middleware and was the CTO EMEA for New Era of Networks (NEON) during the rapid growth of Enterprise Application Integration. His 25-year career in technology also includes management positions at Goldman Sachs and Stratus Computer. Fred holds a First Class BSc (Hons) degree in Physics with Astrophysics from the University of Leeds and had the privilege of being a summer student at CERN.

      3. Opportunities for gLite in Finance and Related Industries. Adam Vile, Head of Grid, HPC and Technical Computing, Excelian Ltd. gLite, the Grid software developed by the EGEE project, has been exceedingly successful as an enabling infrastructure and has been a massive success in bringing together scientific and technical communities to provide the compute power to address previously incomputable problems. Not so in the finance industry. In its current form gLite would be a business disabler; there are other middleware tools that solve the finance community's compute problems much better. Things are moving on, however. There are moves afoot in the open source community to evolve the technology to address other, more sophisticated needs such as utility and interactive computing. In this talk, I will describe how Excelian is providing Grid consultancy services for the finance community and how, through its relationship with the EGEE project, Excelian is helping to identify and exploit opportunities as the research and business worlds converge. Because of the strong third-party presence in the finance industry, such opportunities are few and far between, but they are there, especially as we expand sideways into related verticals such as the smaller hedge funds and energy companies. This talk will give an overview of the barriers to adoption of gLite in the finance industry and highlight some of the opportunities offered in this and related industries as the ideas around Grid mature. Speaker bio: Dr Adam Vile is a senior consultant and head of the Grid and HPC practice at Excelian, a consultancy that focuses on financial markets professional services. He has spent many years in investment banking, as a developer, project manager, and architect in both front and back office. Before joining Excelian he was senior Grid and HPC architect at Barclays Capital. Prior to joining investment banking, Adam spent a number of years lecturing in IT and mathematics at a UK university and maintains links with academia through lectures, research, and through validation and steering of postgraduate courses. He is a chartered mathematician and was the conference chair of the Institute of Mathematics and its Applications' first conference in computational finance.

      4. From Monte Carlo to Wall Street. Daniel Egloff, Head of Financial Engineering Computing Unit, Zürich Cantonal Bank. High performance computing techniques provide new means to solve computationally hard problems in the financial service industry. First I consider Monte Carlo simulation and illustrate how it can be used to implement a sophisticated credit risk management and economic capital framework. From a HPC perspective, basic Monte Carlo simulation is embarrassingly parallel and can be implemented efficiently on distributed memory clusters. Additional difficulties arise for adaptive variance reduction schemes, if the information content in a sample is very small, and if the amount of simulated data becomes so huge that incremental processing algorithms are indispensable. We discuss the business value of an advanced credit risk quantification, which is particularly compelling these days. While Monte Carlo simulation is a very versatile tool, it is not always the preferred solution for the pricing of complex products like multi-asset options, structured products, or credit derivatives. As a second application I show how operator methods can be used to develop a pricing framework. The scalability of operator methods relies heavily on optimized dense matrix-matrix multiplications and requires specialized BLAS level-3 implementations provided by specialized FPGA or GPU boards. Speaker bio: Daniel Egloff studied mathematics, theoretical physics, and computer science at the University of Zurich and the ETH Zurich. He holds a PhD in Mathematics from the University of Fribourg, Switzerland. After his PhD he started to work for a large Swiss insurance company in the area of asset and liability management. He continued his professional career in the consulting industry. At KPMG and Arthur Andersen he consulted international clients and implemented quantitative risk management solutions for financial institutions and insurance companies. In 2002 he joined Zurich Cantonal Bank. He was assigned to develop and implement credit portfolio risk and economic capital methodologies. He built up a competence center for high performance and cluster computing. Currently, Daniel Egloff is heading the Financial Computing unit in the ZKB Financial Engineering division. He and his team are engineering and operating high performance cluster applications for computationally intensive problems in financial risk management.

    3. Performing a global barrier operation in a parallel computer

      DOE Patents [OSTI]

      Archer, Charles J; Blocksome, Michael A; Ratterman, Joseph D; Smith, Brian E

      2014-12-09

      Executing computing tasks on a parallel computer that includes compute nodes coupled for data communications, where each compute node executes tasks, with one task on each compute node designated as a master task, including: for each task on each compute node until all master tasks have joined a global barrier: determining whether the task is a master task; if the task is not a master task, joining a single local barrier; if the task is a master task, joining the global barrier and the single local barrier only after all other tasks on the compute node have joined the single local barrier.
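
      The claimed control flow can be sketched with threads standing in for tasks and compute nodes; the second local wait, which releases the non-master tasks after the master returns from the global barrier, is our addition to make the sketch a complete barrier, since the claim text only specifies the joining order:

      import threading

      NODES, TASKS_PER_NODE = 3, 4
      global_barrier = threading.Barrier(NODES)        # joined by master tasks only
      local_barriers = [threading.Barrier(TASKS_PER_NODE) for _ in range(NODES)]

      def task(node, rank):
          local = local_barriers[node]
          local.wait()              # every task joins the single local barrier
          if rank == 0:             # the designated master task for this node
              global_barrier.wait() # joins global only after local peers arrived
          local.wait()              # release phase (our addition): the master's
                                    # return lets the whole node proceed
          print(f"node {node} rank {rank} past the global barrier")

      threads = [threading.Thread(target=task, args=(n, r))
                 for n in range(NODES) for r in range(TASKS_PER_NODE)]
      for t in threads:
          t.start()
      for t in threads:
          t.join()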

    4. Parallel computing works

      SciTech Connect (OSTI)

      Not Available

      1991-10-23

      An account of the Caltech Concurrent Computation Program (C³P), a five-year project that focused on answering the question: "Can parallel computers be used to do large-scale scientific computations?" As the title indicates, the question is answered in the affirmative, by implementing numerous scientific applications on real parallel computers and doing computations that produced new scientific results. In the process of doing so, C³P helped design and build several new computers, designed and implemented basic system software, developed algorithms for frequently used mathematical computations on massively parallel machines, devised performance models and measured the performance of many computers, and created a high performance computing facility based exclusively on parallel computers. While the initial focus of C³P was the hypercube architecture developed by C. Seitz, many of the methods developed and lessons learned have been applied successfully on other massively parallel architectures.

    5. A user's guide to LUGSAN II. A computer program to calculate and archive lug and sway brace loads for aircraft-carried stores

      SciTech Connect (OSTI)

      Dunn, W.N.

      1998-03-01

      LUG and Sway brace ANalysis (LUGSAN) II is an analysis and database computer program that is designed to calculate store lug and sway brace loads for aircraft captive carriage. LUGSAN II combines the rigid body dynamics code, SWAY85, with a Macintosh HyperCard database to function as both an analysis and an archival system. This report describes the LUGSAN II application program, which operates on the Macintosh System (HyperCard 2.2 or later), and includes function descriptions, layout examples, and sample sessions. Although this report is primarily a user's manual, a brief overview of the LUGSAN II computer code is included with suggested resources for programmers.

    6. Computational Fluid Dynamics

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Overview of CFD: Computational fluid dynamics (CFD) research uses mathematical and computational models of flowing fluids to describe and predict fluid response in problems of interest, such as the flow of air around a moving vehicle or the flow of water and sediment in a river. Coupled with appropriate and prototypical

    7. Energy and cost analysis of a solar-hydrogen combined heat and power system for remote power supply using a computer simulation

      SciTech Connect (OSTI)

      Shabani, Bahman; Andrews, John; Watkins, Simon

      2010-01-15

      A simulation program, based on Visual Pascal, for sizing and techno-economic analysis of the performance of solar-hydrogen combined heat and power systems for remote applications is described. The accuracy of the submodels is checked by comparing the real performance of the system's components, obtained from experimental measurements, with model outputs. The use of the heat generated by the PEM fuel cell, and any unused excess hydrogen, is investigated for hot water production or space heating while the solar-hydrogen system is supplying electricity. A 5 kWh daily demand profile and the solar radiation profile of Melbourne have been used in a case study to investigate the typical techno-economic characteristics of the system to supply a remote household. The simulation shows that by harnessing both the thermal load and the excess hydrogen it is possible to increase the average yearly energy efficiency of the fuel cell in the solar-hydrogen system from just below 40% up to about 80% in combined heat and power generation (based on the higher heating value of hydrogen). The fuel cell in the system is conventionally sized to meet the peak of the demand profile. However, an economic optimisation analysis illustrates that installing a larger fuel cell could lead to up to a 15% reduction in the unit cost of the electricity, to an average of just below 90 c/kWh over the assessment period of 30 years. Further, for an economically optimal size of the fuel cell, nearly half the yearly energy demand for hot water of the remote household could be supplied by heat recovery from the fuel cell and by utilising unused hydrogen in the exit stream. Such a system could then complement a conventional solar water heating system by providing the boosting energy (usually on the order of 40% of the total) normally obtained from gas or electricity. (author)
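
      A back-of-envelope version of the efficiency and cost bookkeeping described above, with made-up magnitudes (the paper's actual inputs are not restated in the record):

      # Illustrative CHP efficiency on the hydrogen higher heating value (HHV),
      # plus a crude levelized unit-electricity cost; all inputs hypothetical.
      HHV_H2 = 39.4              # kWh per kg of hydrogen (HHV)

      h2_used_kg = 1.0           # hydrogen consumed in some period
      elec_out_kwh = 15.5        # electricity delivered
      heat_recovered_kwh = 15.0  # useful heat from stack cooling + unused H2

      e_in = h2_used_kg * HHV_H2
      print(f"electrical efficiency: {elec_out_kwh / e_in:.1%}")
      print(f"CHP efficiency:        {(elec_out_kwh + heat_recovered_kwh) / e_in:.1%}")

      # crude levelized cost of electricity over an assessment period
      capex, annual_om, years = 40_000.0, 400.0, 30   # dollars (hypothetical)
      annual_kwh = 5.0 * 365                          # 5 kWh/day demand profile
      lcoe = (capex + annual_om * years) / (annual_kwh * years)
      print(f"unit electricity cost: {lcoe * 100:.0f} c/kWh")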

    8. PREPARING FOR EXASCALE: ORNL Leadership Computing Application Requirements and Strategy

      SciTech Connect (OSTI)

      Joubert, Wayne; Kothe, Douglas B; Nam, Hai Ah

      2009-12-01

      In 2009 the Oak Ridge Leadership Computing Facility (OLCF), a U.S. Department of Energy (DOE) facility at the Oak Ridge National Laboratory (ORNL) National Center for Computational Sciences (NCCS), elicited petascale computational science requirements from leading computational scientists in the international science community. This effort targeted science teams whose projects received large computer allocation awards on OLCF systems. A clear finding of this process was that in order to reach their science goals over the next several years, multiple projects will require computational resources in excess of an order of magnitude more powerful than those currently available. Additionally, for the longer term, next-generation science will require computing platforms of exascale capability in order to reach DOE science objectives over the next decade.

      It is generally recognized that achieving exascale in the proposed time frame will require disruptive changes in computer hardware and software. Processor hardware will necessarily become heterogeneous and will include accelerator technologies. Software must undergo the concomitant changes needed to extract the available performance from this heterogeneous hardware. This disruption promises to be substantial, not unlike the change to the message-passing paradigm in the computational science community over 20 years ago. Since technological disruptions take time to assimilate, we must aggressively embark on this course of change now, to ensure that science applications and their underlying programming models are mature and ready when exascale computing arrives. This includes initiation of application readiness efforts to adapt existing codes to heterogeneous architectures, support of relevant software tools, and procurement of next-generation hardware testbeds for porting and testing codes.

      The 2009 OLCF requirements process identified numerous actions necessary to meet this challenge: (1) hardware capabilities must be advanced on multiple fronts, including peak flops, node memory capacity, interconnect latency, interconnect bandwidth, and memory bandwidth; (2) effective parallel programming interfaces must be developed to exploit the power of emerging hardware; (3) science application teams must now begin to adapt and reformulate application codes for the new hardware and software, typified by hierarchical and disparate layers of compute, memory, and concurrency; (4) algorithm research must be realigned to exploit this hierarchy; (5) when possible, mathematical libraries must be used to encapsulate the required operations in an efficient and useful way; (6) software tools must be developed to make the new hardware more usable; (7) science application software must be improved to cope with the increasing complexity of computing systems; and (8) data management efforts must be readied for the larger quantities of data generated by larger, more accurate science models.

      Requirements elicitation, analysis, validation, and management comprise a difficult and inexact process, particularly in periods of technological change. Nonetheless, the OLCF requirements modeling process is becoming increasingly quantitative and actionable as it matures, and the process this year has identified clear and concrete steps to be taken. This report discloses (1) the fundamental science case driving the need for the next generation of computer hardware, (2) application usage trends that illustrate the science need, (3) application performance characteristics that drive the need for increased hardware capabilities, (4) resource and process requirements that make the development and deployment of science applications on next-generation hardware successful, and (5) summary recommendations for the required next steps within the computer and computational science communities.

    9. Computer memory management system

      DOE Patents [OSTI]

      Kirk, III, Whitson John

      2002-01-01

      A computer memory management system utilizing a memory structure system of "intelligent" pointers in which information related to the use status of the memory structure is designed into the pointer. Through this pointer system, the present invention provides essentially automatic memory management (often referred to as garbage collection) by allowing relationships between objects to have definite memory management behavior through a coding protocol that describes when relationships should be maintained and when they should be broken. In one aspect, the system allows automatic breaking of strong links to facilitate object garbage collection, coupled with relationship adjectives that define deletion of associated objects. In another aspect, the present invention includes simple-to-use infinite undo/redo functionality: it has the capability, through a simple function call, to undo all of the changes made to a data model since the previous `valid state` was noted.
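
      The strong/weak-link behavior the patent describes can be approximated in modern languages with weak references. A minimal sketch using Python's standard weakref module (the class and method names here are hypothetical, not the patent's):

      import weakref

      class Node:
          def __init__(self, name):
              self.name = name
              self.strong_children = []  # links that keep objects alive
              self.weak_children = []    # links that do not block collection

          def link(self, child, strong=True):
              if strong:
                  self.strong_children.append(child)
              else:
                  self.weak_children.append(weakref.ref(child))

      root = Node("root")
      leaf = Node("leaf")
      root.link(leaf, strong=False)   # relationship marked "breakable"
      del leaf                        # no strong links remain ...
      print(root.weak_children[0]())  # ... so the weak reference yields None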

    10. ASCR Workshop on Quantum Computing for Science

      SciTech Connect (OSTI)

      Aspuru-Guzik, Alan; Van Dam, Wim; Farhi, Edward; Gaitan, Frank; Humble, Travis; Jordan, Stephen; Landahl, Andrew J; Love, Peter; Lucas, Robert; Preskill, John; Muller, Richard P.; Svore, Krysta; Wiebe, Nathan; Williams, Carl

      2015-06-01

      This report details the findings of the DOE ASCR Workshop on Quantum Computing for Science that was organized to assess the viability of quantum computing technologies to meet the computational requirements of the DOE’s science and energy mission, and to identify the potential impact of quantum technologies. The workshop was held on February 17-18, 2015, in Bethesda, MD, to solicit input from members of the quantum computing community. The workshop considered models of quantum computation and programming environments, physical science applications relevant to DOE's science mission as well as quantum simulation, and applied mathematics topics including potential quantum algorithms for linear algebra, graph theory, and machine learning. This report summarizes these perspectives into an outlook on the opportunities for quantum computing to impact problems relevant to the DOE’s mission as well as the additional research required to bring quantum computing to the point where it can have such impact.

    11. Computer Model Buildings Contaminated with Radioactive Material

      Energy Science and Technology Software Center (OSTI)

      1998-05-19

      The RESRAD-BUILD computer code is a pathway analysis model designed to evaluate the potential radiological dose incurred by an individual who works or lives in a building contaminated with radioactive material.

    12. Session on computation in biological pathways

      SciTech Connect (OSTI)

      Karp, P.D.; Riley, M.

      1996-12-31

      The papers in this session focus on the development of pathway databases and computational tools for pathway analysis. The discussion involves existing databases of sequenced genomes, as well as techniques for studying regulatory pathways.

    13. Computational Tools to Accelerate Commercial Development

      SciTech Connect (OSTI)

      Miller, David C.

      2013-01-01

      The goals of the work reported are: to develop new computational tools and models to enable industry to more rapidly develop and deploy new advanced energy technologies; to demonstrate the capabilities of the CCSI Toolset on non-proprietary case studies; and to deploy the CCSI Toolset to industry. Challenges of simulating carbon capture (and other) processes include: dealing with multiple scales (particle, device, and whole process scales); integration across scales; verification, validation, and uncertainty; and decision support. The tools cover: risk analysis and decision making; validated, high-fidelity CFD; high-resolution filtered sub-models; process design and optimization tools; advanced process control and dynamics; process models; basic data sub-models; and cross-cutting integration tools.

    14. Accounts Policy | Argonne Leadership Computing Facility

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Accounts Policy All holders of user accounts must abide by all appropriate Argonne Leadership Computing Facility and Argonne National Laboratory computing usage policies. These are described at the time of the account request and include requirements such as using a sufficiently strong password, appropriate use of the system, and so on. Any user not following these requirements will have their account disabled. Furthermore, ALCF resources are intended to be used as a computing resource for

    15. Computer Networking Group | Stanford Synchrotron Radiation Lightsource

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Computer Networking Group Do you need help? For assistance please submit a CNG Help Request ticket. CNG Logo Chris Ramirez SSRL Computer and Networking Group (650) 926-2901 | email Jerry Camuso SSRL Computer and Networking Group (650) 926-2994 | email Networking Support The Networking group provides connectivity and communications services for SSRL. The services provided by the Networking Support Group include: Local Area Network support for cable and wireless connectivity. Installation and

    16. Computing for Finance

      ScienceCinema (OSTI)

      None

      2011-10-06

      The finance sector is one of the driving forces for the use of distributed or Grid computing for business purposes. The speakers will review the state of the art of high performance computing in the financial sector, and provide insight into how different types of Grid computing, from local clusters to global networks, are being applied to financial applications. They will also describe the use of software and techniques from physics, such as Monte Carlo simulations, in the financial world. There will be four talks of 20 minutes each. The talk abstracts and speaker bios are listed below. This will be followed by a Q&A panel session with the speakers. From 19:00 onwards there will be a networking cocktail for audience and speakers. This is an EGEE / CERN openlab event organized in collaboration with the regional business network rezonance.ch. A webcast of the event will be made available for subsequent viewing, along with PowerPoint material presented by the speakers. Attendance is free and open to all. Registration is mandatory via www.rezonance.ch, including for CERN staff. 1. Overview of High Performance Computing in the Financial Industry. Michael Yoo, Managing Director, Head of the Technical Council, UBS. The presentation will describe the key business challenges driving the need for HPC solutions, describe the means by which those challenges are being addressed within UBS (such as GRID) as well as the limitations of some of these solutions, and assess some of the newer HPC technologies which may also play a role in the financial industry in the future. Speaker Bio: Michael originally joined the former Swiss Bank Corporation in 1994 in New York as a developer on a large data warehouse project. In 1996 he left SBC and took a role with Fidelity Investments in Boston. Unable to stay away for long, he returned to SBC in 1997 while working for Perot Systems in Singapore. Finally, in 1998 he formally returned to UBS in Stamford following the merger with SBC and has remained with UBS for the past 9 years. During his tenure at UBS, he has had a number of leadership roles within IT in development, support and architecture. In 2006 Michael relocated to Switzerland to take up his current role as head of the UBS IB Technical Council, responsible for the overall technology strategy and vision of the Investment Bank. One of Michael's key responsibilities is to manage the UBS High Performance Computing Research Lab, and he has been involved in a number of initiatives in the HPC space. 2. Grid in the Commercial World. Fred Gedling, Chief Technology Officer EMEA and Senior Vice President Global Services, DataSynapse. Grid computing gets mentions in the press for community programs, starting last decade with "SETI@home". Government, national and supranational initiatives in grid receive some press. One of the IT industry's best-kept secrets is the use of grid computing by commercial organizations, with spectacular results. Grid computing and its evolution into application virtualization are discussed, along with how this is key to the next-generation data center. Speaker Bio: Fred Gedling holds the joint roles of Chief Technology Officer for EMEA and Senior Vice President of Global Services at DataSynapse, a global provider of application virtualisation software.
Based in London and working closely with organisations seeking to optimise their IT infrastructures, Fred offers unique insights into the technology of virtualisation as well as the methodology of establishing ROI and rapid deployment to the immediate advantage of the business. Fred has more than fifteen years' experience of enterprise middleware and high-performance infrastructures. Prior to DataSynapse he worked in high-performance CRM middleware and was the CTO EMEA for New Era of Networks (NEON) during the rapid growth of Enterprise Application Integration. His 25-year career in technology also includes management positions at Goldman Sachs and Stratus Computer. Fred holds a First Class BSc (Hons) degree in Physics with Astrophysics from the University of Leeds and had the privilege o

    17. Intranode data communications in a parallel computer

      DOE Patents [OSTI]

      Archer, Charles J; Blocksome, Michael A; Miller, Douglas R; Ratterman, Joseph D; Smith, Brian E

      2013-07-23

      Intranode data communications in a parallel computer that includes compute nodes configured to execute processes, where the data communications include: allocating, upon initialization of a first process of a compute node, a region of shared memory; establishing, by the first process, a predefined number of message buffers, each message buffer associated with a process to be initialized on the compute node; sending, to a second process on the same compute node, a data communications message without determining whether the second process has been initialized, including storing the data communications message in the message buffer of the second process; and upon initialization of the second process: retrieving, by the second process, a pointer to the second process's message buffer; and retrieving, by the second process from the second process's message buffer in dependence upon the pointer, the data communications message sent by the first process.
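
      A loose single-machine analogue of this scheme can be sketched with Python's multiprocessing.shared_memory: the sender deposits a message in a per-receiver slot of a shared region without checking whether the receiver has started, and the receiver later attaches and reads from its own offset. The buffer layout and names below are invented for illustration; both roles are simulated in one interpreter.

      from multiprocessing import shared_memory

      BUF_SIZE = 64  # one fixed-size message buffer per process

      # "First process": allocate the shared region and store a message in the
      # slot reserved for process 1, without waiting for it to initialize.
      region = shared_memory.SharedMemory(create=True, size=2 * BUF_SIZE,
                                          name="msgbufs")
      msg = b"hello from process 0"
      region.buf[BUF_SIZE:BUF_SIZE + len(msg)] = msg

      # "Second process": attach later, compute a pointer (offset) to its own
      # message buffer, and retrieve the message left there.
      peer = shared_memory.SharedMemory(name="msgbufs")
      print(bytes(peer.buf[BUF_SIZE:BUF_SIZE + len(msg)]).decode())

      peer.close()
      region.close()
      region.unlink()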

    18. Intranode data communications in a parallel computer

      DOE Patents [OSTI]

      Archer, Charles J; Blocksome, Michael A; Miller, Douglas R; Ratterman, Joseph D; Smith, Brian E

      2014-01-07

      Intranode data communications in a parallel computer that includes compute nodes configured to execute processes, where the data communications include: allocating, upon initialization of a first process of a compute node, a region of shared memory; establishing, by the first process, a predefined number of message buffers, each message buffer associated with a process to be initialized on the compute node; sending, to a second process on the same compute node, a data communications message without determining whether the second process has been initialized, including storing the data communications message in the message buffer of the second process; and upon initialization of the second process: retrieving, by the second process, a pointer to the second process's message buffer; and retrieving, by the second process from the second process's message buffer in dependence upon the pointer, the data communications message sent by the first process.

    19. Fermilab | Science at Fermilab | Computing | High-performance Computing

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Lattice QCD Farm at the Grid Computing Center at Fermilab. Lattice QCD Farm at the Grid Computing Center at Fermilab. Computing High-performance Computing A workstation computer can perform billions of multiplication and addition operations each second. High-performance parallel computing becomes necessary when computations become too large or too long to complete on a single such machine. In parallel computing, computations are divided up so that many computers can work on the same problem at

    20. Aggregating job exit statuses of a plurality of compute nodes executing a parallel application

      DOE Patents [OSTI]

      Aho, Michael E.; Attinella, John E.; Gooding, Thomas M.; Mundy, Michael B.

      2015-07-21

      Aggregating job exit statuses of a plurality of compute nodes executing a parallel application, including: identifying a subset of compute nodes in the parallel computer to execute the parallel application; selecting one compute node in the subset of compute nodes in the parallel computer as a job leader compute node; initiating execution of the parallel application on the subset of compute nodes; receiving an exit status from each compute node in the subset of compute nodes, where the exit status for each compute node includes information describing execution of some portion of the parallel application by the compute node; aggregating each exit status from each compute node in the subset of compute nodes; and sending an aggregated exit status for the subset of compute nodes in the parallel computer.
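
      At its core the leader's job is a reduction over per-node exit records. A small sketch, with an assumed worst-status-wins rule (the patent does not specify the reduction):

      def aggregate_exit_statuses(statuses):
          """statuses: dict mapping node id -> (exit_code, detail string)."""
          worst = max(code for code, _ in statuses.values())
          failing = {node: detail for node, (code, detail) in statuses.items()
                     if code != 0}
          return {"exit_code": worst, "failing_nodes": failing}

      print(aggregate_exit_statuses({
          0: (0, "ok"),
          1: (0, "ok"),
          2: (1, "abort in rank 17"),
      }))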

    1. An analysis of nuclear fuel burnup in the AGR-1 TRISO fuel experiment using gamma spectrometry, mass spectrometry, and computational simulation techniques

      DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

      Harp, Jason M.; Demkowicz, Paul A.; Winston, Philip L.; Sterbentz, James W.

      2014-09-03

      AGR-1 was the first in a series of experiments designed to test US TRISO fuel under high-temperature gas-cooled reactor irradiation conditions. This experiment was irradiated in the Advanced Test Reactor (ATR) at Idaho National Laboratory (INL) and is currently undergoing post-irradiation examination (PIE) at INL and Oak Ridge National Laboratory. One component of the AGR-1 PIE is the experimental evaluation of the burnup of the fuel by two separate techniques. Gamma spectrometry was used to nondestructively evaluate the burnup of all 72 of the TRISO fuel compacts that comprised the AGR-1 experiment. Two methods for evaluating burnup by gamma spectrometry were developed, one based on the Cs-137 activity and the other based on the ratio of Cs-134 and Cs-137 activities. Burnup values determined from both methods compared well with the values predicted from simulations. The highest measured burnup was 20.1% FIMA for the direct method and 20.0% FIMA for the ratio method (compared to 19.56% FIMA from simulations). An advantage of the ratio method is that the burnup of the cylindrical fuel compacts can be determined in small (2.5 mm) axial increments and an axial burnup profile can be produced. Destructive chemical analysis by inductively coupled mass spectrometry (ICP-MS) was then performed on selected compacts that were representative of the expected range of fuel burnups in the experiment to compare with the burnup values determined by gamma spectrometry. The compacts analyzed by mass spectrometry had a burnup range of 19.3% FIMA to 10.7% FIMA. The mass spectrometry evaluation of burnup for the four compacts agreed well with the gamma spectrometry burnup evaluations and the expected burnup from simulation. For all four compacts analyzed by mass spectrometry, the maximum range in the three experimentally determined values and the predicted value was 6% or less. Furthermore, the results confirm the accuracy of the nondestructive burnup evaluation from gamma spectrometry for TRISO fuel compacts across a burnup range of approximately 10 to 20% FIMA and also validate the approach used in the physics simulation of the AGR-1 experiment.

    2. An Analysis of Nuclear Fuel Burnup in the AGR 1 TRISO Fuel Experiment Using Gamma Spectrometry, Mass Spectrometry, and Computational Simulation Techniques

      SciTech Connect (OSTI)

      Jason M. Harp; Paul A. Demkowicz; Phillip L. Winston; James W. Sterbentz

      2014-10-01

      AGR-1 was the first in a series of experiments designed to test US TRISO fuel under high-temperature gas-cooled reactor irradiation conditions. This experiment was irradiated in the Advanced Test Reactor (ATR) at Idaho National Laboratory (INL) and is currently undergoing post-irradiation examination (PIE) at INL and Oak Ridge National Laboratory. One component of the AGR-1 PIE is the experimental evaluation of the burnup of the fuel by two separate techniques. Gamma spectrometry was used to nondestructively evaluate the burnup of all 72 of the TRISO fuel compacts that comprised the AGR-1 experiment. Two methods for evaluating burnup by gamma spectrometry were developed, one based on the Cs-137 activity and the other based on the ratio of Cs-134 and Cs-137 activities. Burnup values determined from both methods compared well with the values predicted from simulations. The highest measured burnup was 20.1% FIMA for the direct method and 20.0% FIMA for the ratio method (compared to 19.56% FIMA from simulations). An advantage of the ratio method is that the burnup of the cylindrical fuel compacts can be determined in small (2.5 mm) axial increments and an axial burnup profile can be produced. Destructive chemical analysis by inductively coupled mass spectrometry (ICP-MS) was then performed on selected compacts that were representative of the expected range of fuel burnups in the experiment to compare with the burnup values determined by gamma spectrometry. The compacts analyzed by mass spectrometry had a burnup range of 19.3% FIMA to 10.7% FIMA. The mass spectrometry evaluation of burnup for the four compacts agreed well with the gamma spectrometry burnup evaluations and the expected burnup from simulation. For all four compacts analyzed by mass spectrometry, the maximum range in the three experimentally determined values and the predicted value was 6% or less. The results confirm the accuracy of the nondestructive burnup evaluation from gamma spectrometry for TRISO fuel compacts across a burnup range of approximately 10 to 20% FIMA and also validate the approach used in the physics simulation of the AGR-1 experiment.

    3. Broadcasting collective operation contributions throughout a parallel computer

      DOE Patents [OSTI]

      Faraj, Ahmad

      2012-02-21

      Methods, systems, and products are disclosed for broadcasting collective operation contributions throughout a parallel computer. The parallel computer includes a plurality of compute nodes connected together through a data communications network. Each compute node has a plurality of processors for use in collective parallel operations on the parallel computer. Broadcasting collective operation contributions throughout a parallel computer according to embodiments of the present invention includes: transmitting, by each processor on each compute node, that processor's collective operation contribution to the other processors on that compute node using intra-node communications; and transmitting on a designated network link, by each processor on each compute node according to a serial processor transmission sequence, that processor's collective operation contribution to the other processors on the other compute nodes using inter-node communications.
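
      The two-phase pattern can be mimicked in plain Python: contributions are first pooled within each node, then, conceptually one processor at a time over each node's designated link, shared across nodes. The data structures below are stand-ins for the hardware the patent describes.

      def broadcast_contributions(nodes):
          """nodes: one list of per-processor contributions per compute node.
          Returns the full contribution set seen by each (node, processor)."""
          # Phase 1 (intra-node): each processor learns its local peers' values.
          local_views = [list(node) for node in nodes]
          # Phase 2 (inter-node): processors take turns on the designated link,
          # forwarding contributions to the other nodes in a serial sequence.
          everything = [c for view in local_views for c in view]
          return {(n, p): list(everything)
                  for n, node in enumerate(nodes)
                  for p in range(len(node))}

      views = broadcast_contributions([["a0", "a1"], ["b0", "b1"]])
      print(views[(1, 0)])  # ['a0', 'a1', 'b0', 'b1']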

    4. Modular Environment for Graph Research and Analysis with a Persistent

      Energy Science and Technology Software Center (OSTI)

      2009-11-18

      The MEGRAPHS software package provides a front-end to graphs and vectors residing on special-purpose computing resources. It allows these data objects to be instantiated, destroyed, and manipulated. A variety of primitives needed for typical graph analyses are provided. An example program illustrating how MEGRAPHS can be used to implement a PageRank computation is included in the distribution. The MEGRAPHS software package is targeted towards developers of graph algorithms. Programmers using MEGRAPHS would write graph analysis programs in terms of high-level graph and vector operations. These computations are transparently executed on the Cray XMT compute nodes.

    5. Computational Methods for Analyzing Fluid Flow Dynamics from Digital Imagery

      SciTech Connect (OSTI)

      Luttman, A.

      2012-03-30

      The main long-term goal of this work is to perform computational dynamics analysis and quantify uncertainty from vector fields computed directly from measured data. Global analysis based on observed spatiotemporal evolution combines an objective function based on expected physics and informed scientific priors, variational optimization to compute vector fields from measured data, and transport analysis proceeding from observations and priors. A mathematical formulation is set up for computing the minimizer of the resulting flow-field problem. An application to oceanic flow based on sea surface temperature is presented.

    6. Pacing a data transfer operation between compute nodes on a parallel computer

      DOE Patents [OSTI]

      Blocksome, Michael A.

      2011-09-13

      Methods, systems, and products are disclosed for pacing a data transfer between compute nodes on a parallel computer that include: transferring, by an origin compute node, a chunk of an application message to a target compute node; sending, by the origin compute node, a pacing request to a target direct memory access (`DMA`) engine on the target compute node using a remote get DMA operation; determining, by the origin compute node, whether a pacing response to the pacing request has been received from the target DMA engine; and transferring, by the origin compute node, a next chunk of the application message if the pacing response to the pacing request has been received from the target DMA engine.
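
      The handshake is easy to simulate with queues standing in for the DMA engines and the remote-get operation; everything below is a sketch, not the patented mechanism itself.

      import queue
      import threading

      req_q, resp_q, data_q = queue.Queue(), queue.Queue(), queue.Queue()

      def origin(chunks):
          for chunk in chunks:
              data_q.put(chunk)     # transfer the current chunk
              req_q.put("pacing?")  # pacing request via a simulated remote get
              resp_q.get()          # hold the next chunk until paced

      def target(n_chunks):
          for _ in range(n_chunks):
              print("received", data_q.get())
              req_q.get()
              resp_q.put("pacing!")  # respond once this chunk is consumed

      chunks = [b"chunk-%d" % i for i in range(3)]
      t = threading.Thread(target=target, args=(len(chunks),))
      t.start()
      origin(chunks)
      t.join()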

    7. developing-compute-efficient

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Developing Compute-efficient, Quality Models with LS-PrePost 3 on the TRACC Cluster Oct. ... with an emphasis on applying these capabilities to build computationally efficient models. ...

    8. Computers for Learning

      Broader source: Energy.gov [DOE]

      Through Executive Order 12999, the Computers for Learning Program was established to provide Federal agencies a quick and easy system for donating excess and surplus computer equipment to schools...

    9. Cognitive Computing for Security.

      SciTech Connect (OSTI)

      Debenedictis, Erik; Rothganger, Fredrick; Aimone, James Bradley; Marinella, Matthew; Evans, Brian Robert; Warrender, Christina E.; Mickel, Patrick

      2015-12-01

      Final report for Cognitive Computing for Security LDRD 165613. It reports on the development of a hybrid general-purpose/neuromorphic computer architecture, with an emphasis on potential implementation with memristors.

    10. Computers in Commercial Buildings

      U.S. Energy Information Administration (EIA) Indexed Site

      Government-owned buildings of all types had, on average, more than one computer per employee (1,104 computers per thousand employees). They also had a fairly high ratio of...

    11. Advanced Scientific Computing Research

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Advanced Scientific Computing Research Advanced Scientific Computing Research Discovering, developing, and deploying computational and networking capabilities to analyze, model, simulate, and predict complex phenomena important to the Department of Energy. Get Expertise Pieter Swart (505) 665 9437 Email Pat McCormick (505) 665-0201 Email Dave Higdon (505) 667-2091 Email Fulfilling the potential of emerging computing systems and architectures beyond today's tools and techniques to deliver

    12. Development of probabilistic multimedia multipathway computer codes.

      SciTech Connect (OSTI)

      Yu, C.; LePoire, D.; Gnanapragasam, E.; Arnish, J.; Kamboj, S.; Biwer, B. M.; Cheng, J.-J.; Zielen, A. J.; Chen, S. Y.; Mo, T.; Abu-Eid, R.; Thaggard, M.; Sallo, A., III.; Peterson, H., Jr.; Williams, W. A.; Environmental Assessment; NRC; EM

      2002-01-01

      The deterministic multimedia dose/risk assessment codes RESRAD and RESRAD-BUILD have been widely used for many years for evaluation of sites contaminated with residual radioactive materials. The RESRAD code applies to the cleanup of sites (soils) and the RESRAD-BUILD code applies to the cleanup of buildings and structures. This work describes the procedure used to enhance the deterministic RESRAD and RESRAD-BUILD codes for probabilistic dose analysis. A six-step procedure was used in developing default parameter distributions and the probabilistic analysis modules. These six steps include (1) listing and categorizing parameters; (2) ranking parameters; (3) developing parameter distributions; (4) testing parameter distributions for probabilistic analysis; (5) developing probabilistic software modules; and (6) testing probabilistic modules and integrated codes. The procedures used can be applied to the development of other multimedia probabilistic codes. The probabilistic versions of RESRAD and RESRAD-BUILD codes provide tools for studying the uncertainty in dose assessment caused by uncertain input parameters. The parameter distribution data collected in this work can also be applied to other multimedia assessment tasks and multimedia computer codes.
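
      Steps (3) and (4) of this procedure amount to drawing inputs from the developed distributions and propagating them through the dose model. A minimal Monte Carlo sketch follows; the distributions and the one-line dose model are illustrative assumptions, not RESRAD's.

      import random

      def dose_model(ingestion_rate, soil_concentration):
          return ingestion_rate * soil_concentration * 1e-3  # toy pathway model

      samples = []
      for _ in range(10_000):
          rate = random.triangular(0.01, 0.2, 0.05)  # assumed distribution
          conc = random.lognormvariate(0.0, 0.5)     # assumed distribution
          samples.append(dose_model(rate, conc))

      samples.sort()
      print("median dose:", samples[len(samples) // 2])
      print("95th percentile:", samples[int(0.95 * len(samples))])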

    13. VISTA - computational tools for comparative genomics

      SciTech Connect (OSTI)

      Frazer, Kelly A.; Pachter, Lior; Poliakov, Alexander; Rubin,Edward M.; Dubchak, Inna

      2004-01-01

      Comparison of DNA sequences from different species is a fundamental method for identifying functional elements in genomes. Here we describe the VISTA family of tools created to assist biologists in carrying out this task. Our first VISTA server at http://www-gsd.lbl.gov/VISTA/ was launched in the summer of 2000 and was designed to align long genomic sequences and visualize these alignments with associated functional annotations. Currently the VISTA site includes multiple comparative genomics tools and provides users with rich capabilities to browse pre-computed whole-genome alignments of large vertebrate genomes and other groups of organisms with VISTA Browser, submit their own sequences of interest to several VISTA servers for various types of comparative analysis, and obtain detailed comparative analysis results for a set of cardiovascular genes. We illustrate capabilities of the VISTA site by the analysis of a 180 kilobase (kb) interval on human chromosome 5 that encodes for the kinesin family member3A (KIF3A) protein.

    14. Python and computer vision

      SciTech Connect (OSTI)

      Doak, J. E.; Prasad, Lakshman

      2002-01-01

      This paper discusses the use of Python in a computer vision (CV) project. We begin by providing background information on the specific approach to CV employed by the project. This includes a brief discussion of Constrained Delaunay Triangulation (CDT), the Chordal Axis Transform (CAT), shape feature extraction and syntactic characterization, and normalization of strings representing objects. (The terms 'object' and 'blob' are used interchangeably, both referring to an entity extracted from an image.) The rest of the paper focuses on the use of Python in three critical areas: (1) interactions with a MySQL database, (2) rapid prototyping of algorithms, and (3) gluing together all components of the project including existing C and C++ modules. For (1), we provide a schema definition and discuss how the various tables interact to represent objects in the database as tree structures. (2) focuses on an algorithm to create a hierarchical representation of an object, given its string representation, and an algorithm to match unknown objects against objects in a database. And finally, (3) discusses the use of Boost Python to interact with the pre-existing C and C++ code that creates the CDTs and CATs, performs shape feature extraction and syntactic characterization, and normalizes object strings. The paper concludes with a vision of the future use of Python for the CV project.
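
      To give a flavor of area (2), the sketch below builds a hierarchical (tree) representation from a normalized string; the bracketed grammar and labels are invented for illustration and are not the project's actual encoding.

      def parse(s, i=0):
          """Parse strings like 'a(b,c(d))' into [label, children] trees."""
          j = i
          while j < len(s) and s[j] not in "(),":
              j += 1
          node = [s[i:j], []]
          if j < len(s) and s[j] == "(":
              while s[j] != ")":
                  child, j = parse(s, j + 1)
                  node[1].append(child)
              j += 1
          return node, j

      tree, _ = parse("torso(arm(hand),arm(hand),leg,leg)")
      print(tree)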

    15. Projects | Argonne Leadership Computing Facility

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Projects bgclang Compiler Hal Finkel Cobalt Scheduler Bill Allcock, Paul Rich, Brian Toonen, Tom Uram GLEAN: Scalable In Situ Analysis and I/O Acceleration on Leadership Computing Systems Michael E. Papka, Venkat Vishwanath, Mark Hereld, Preeti Malakar, Joe Insley, Silvio Rizzi, Tom Uram Petrel: Data Management and Sharing Pilot Ian Foster, Michael E. Papka, Bill Allcock, Ben Allen, Rachana Ananthakrishnan, Lukasz Lacinski The Swift Parallel Scripting Language for ALCF Systems Michael Wilde,

    16. HPCTW | Argonne Leadership Computing Facility

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      HPCTW Introduction HPCTW is a set of libraries that may be

    17. HPCToolkit | Argonne Leadership Computing Facility

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      HPCToolkit References HPCToolkit Website HPCT Documentation

    18. Computing environment logbook

      DOE Patents [OSTI]

      Osbourn, Gordon C; Bouchard, Ann M

      2012-09-18

      A computing environment logbook logs events occurring within a computing environment. The events are displayed as a history of past events within the logbook of the computing environment. The logbook provides search functionality to search through the history of past events to find one or more selected past events, and further, enables an undo of the one or more selected past events.

    19. Mathematical and Computational Epidemiology

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Mathematical and Computational Epidemiology Search Site submit Contacts | Sponsors Mathematical and Computational Epidemiology Los Alamos National Laboratory Menu About Contact Sponsors Research Agent-based Modeling Mixing Patterns, Social Networks Mathematical Epidemiology Social Internet Research Uncertainty Quantification Publications People Mathematical and Computational Epidemiology (MCEpi) Quantifying model uncertainty in agent-based simulations for

    20. BNL ATLAS Grid Computing

      ScienceCinema (OSTI)

      Michael Ernst

      2010-01-08

      As the sole Tier-1 computing facility for ATLAS in the United States and the largest ATLAS computing center worldwide, Brookhaven provides a large portion of the overall computing resources for U.S. collaborators and serves as the central hub for storing,

    1. Computer virus information update CIAC-2301

      SciTech Connect (OSTI)

      Orvis, W.J.

      1994-01-15

      While CIAC periodically issues bulletins about specific computer viruses, these bulletins do not cover all the computer viruses that affect desktop computers. The purpose of this document is to identify most of the known viruses for the MS-DOS and Macintosh platforms and give an overview of the effects of each virus. The authors also include information on some Windows, Atari, and Amiga viruses. This document is revised periodically as new virus information becomes available. This document replaces all earlier versions of the CIAC Computer Virus Information Update. The date on the front cover indicates the date on which the information in this document was extracted from CIAC's Virus database.

    2. Nuclear Arms Control R&D Consortium includes Los Alamos

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Nuclear Arms Control R&D Consortium includes Los Alamos Nuclear Arms Control R&D Consortium includes Los Alamos A consortium led by the University of Michigan that includes LANL as ...

    3. computational-hydaulics-march-30

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Computational Hydraulics and Aerodynamics using STAR-CCM+ for CFD Analysis, March 30-31, 2011, Argonne, Illinois. Dr. Steven Lottes. A training course in the use of computational hydraulics and aerodynamics CFD software using CD-adapco's STAR-CCM+ for analysis was held at TRACC from March 30-31, 2011. The course assumed a basic knowledge of fluid mechanics and made extensive use of hands-on tutorials.

    4. Natural Gas Delivered to Consumers in Minnesota (Including Vehicle...

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Minnesota (Including Vehicle Fuel) (Million Cubic Feet) Natural Gas Delivered to Consumers in Minnesota (Including Vehicle Fuel) (Million Cubic Feet) Year Jan Feb Mar Apr May Jun ...

    5. Trends and challenges when including microstructure in materials...

      Office of Scientific and Technical Information (OSTI)

      Trends and challenges when including microstructure in materials modeling: Examples of ... Title: Trends and challenges when including microstructure in materials modeling: Examples ...

    6. Solar Energy Education. Reader, Part II. Sun story. [Includes...

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Reader, Part II. Sun story. Includes glossary Citation Details In-Document Search Title: Solar Energy Education. Reader, Part II. Sun story. Includes glossary You are ...

    7. Natural Gas Delivered to Consumers in California (Including Vehicle...

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      California (Including Vehicle Fuel) (Million Cubic Feet) Natural Gas Delivered to Consumers in California (Including Vehicle Fuel) (Million Cubic Feet) Year Jan Feb Mar Apr May Jun ...

    8. Microfluidic devices and methods including porous polymer monoliths...

      Office of Scientific and Technical Information (OSTI)

      Microfluidic devices and methods including porous polymer monoliths Citation Details In-Document Search Title: Microfluidic devices and methods including porous polymer monoliths ...

    9. Microfluidic devices and methods including porous polymer monoliths...

      Office of Scientific and Technical Information (OSTI)

      Microfluidic devices and methods including porous polymer monoliths Title: Microfluidic devices and methods including porous polymer monoliths Microfluidic devices and methods ...

    10. Newport News in Review, ch. 47, segment includes TEDF groundbreaking...

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      https://www.jlab.org/news/articles/newport-news-review-ch-47-segment-includes-tedf-groundbreaking-event Newport News in Review, ch. 47, segment includes TEDF groundbreaking event...

    11. Property:Number of Plants included in Capacity Estimate | Open...

      Open Energy Info (EERE)

      Plants included in Capacity Estimate Jump to: navigation, search Property Name Number of Plants included in Capacity Estimate Property Type Number Retrieved from "http:...

    12. Property:Number of Plants Included in Planned Estimate | Open...

      Open Energy Info (EERE)

      Number of Plants Included in Planned Estimate Jump to: navigation, search Property Name Number of Plants Included in Planned Estimate Property Type String Description Number of...

    13. FEMP Expands ESPC ENABLE Program to Include More Energy Conservation...

      Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

      Expands ESPC ENABLE Program to Include More Energy Conservation Measures FEMP Expands ESPC ENABLE Program to Include More Energy Conservation Measures November 13, 2013 - 12:00am...

    14. Should Title 24 Ventilation Requirements Be Amended to include...

      Office of Scientific and Technical Information (OSTI)

      include an Indoor Air Quality Procedure? Citation Details In-Document Search Title: Should Title 24 Ventilation Requirements Be Amended to include an Indoor Air Quality Procedure? ...

    15. Predictive Dynamic Security Assessment through Advanced Computing

      SciTech Connect (OSTI)

      Huang, Zhenyu; Diao, Ruisheng; Jin, Shuangshuang; Chen, Yousu

      2014-11-30

      Abstract— Traditional dynamic security assessment is limited by several factors and thus falls short in providing real-time information that is predictive for power system operation. These factors include the steady-state assumption of current operating points, static transfer limits, and low computational speed. This paper addresses these factors and frames predictive dynamic security assessment. The primary objective of predictive dynamic security assessment is to enhance the functionality and computational process of dynamic security assessment through the use of high-speed phasor measurements and the application of advanced computing technologies for faster-than-real-time simulation. This paper presents algorithms, computing platforms, and simulation frameworks that constitute the predictive dynamic security assessment capability. Examples of phasor application and fast computation for dynamic security assessment are included to demonstrate the feasibility and speed enhancement for real-time applications.

    16. Adjoints and Large Data Sets in Computational Fluid Dynamics...

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Oana Marin Speaker(s) Title: Postdoctoral Appointee, MCS Optimal flow control and stability analysis are some of the fields within Computational Fluid Dynamics (CFD) that...

    17. The National Energy Research Scientific Computing Center: Forty...

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      The National Energy Research Scientific Computing Center: Forty Years of Supercomputing ... discovery has been evident in both simulation and data analysis for many years. ...

    18. Initial explorations of ARM processors for scientific computing...

      Office of Scientific and Technical Information (OSTI)

      DOE Contract Number: AC02-07CH11359 Resource Type: Conference Resource Relation: Conference: 15th International Workshop on Advanced Computing and Analysis Techniques in Physics ...

    19. A compute-Efficient Bitmap Compression Index for Database Applications

      Energy Science and Technology Software Center (OSTI)

      2006-01-01

      FastBit: A Compute-Efficient Bitmap Compression Index for Database Applications. The Word-Aligned Hybrid (WAH) bitmap compression method and data structure is highly efficient for performing search and retrieval operations on large datasets. The WAH technique is optimized for computational efficiency. The WAH-based bitmap indexing software, called FastBit, is particularly appropriate to infrequently varying databases, including those found in the on-line analytical processing (OLAP) industry. Some commercial database products already include some version of a bitmap index, which could possibly be replaced by the WAH bitmap compression techniques for potentially large operational speedup. Experimental results show performance improvements by an average factor of 10 over bitmap technology used by industry, as well as increased efficiencies in constructing compressed bitmaps. FastBit can be used as a stand-alone index or integrated into a database system. When integrated into a database system, this technique may be particularly useful for real-time business analysis applications. Additional FastBit applications may include efficient real-time exploration of scientific models, such as climate and combustion simulations, to minimize search time for analysis and subsequent data visualization. FastBit was proven theoretically to be time-optimal because it provides a search time proportional to the number of elements selected by the index.
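
      The heart of WAH is aligning runs to word boundaries so that bitwise logical operations remain cheap on compressed data. The toy Python encoder below conveys the idea; real WAH packs fills and literals into 32-bit words with 31 payload bits, which this sketch only approximates with tuples.

      def wah_compress(bits, w=31):
          """bits: a string of '0'/'1'. Returns a list of fill/literal words."""
          out = []
          for k in range(0, len(bits), w):
              word = bits[k:k + w]
              if set(word) <= {"0"} or set(word) <= {"1"}:
                  fill = word[0]
                  if out and out[-1][0] == "fill" and out[-1][1] == fill:
                      out[-1] = ("fill", fill, out[-1][2] + 1)  # extend the run
                  else:
                      out.append(("fill", fill, 1))
              else:
                  out.append(("literal", word))
          return out

      # Two all-zero words collapse into one fill; the mixed word stays literal.
      print(wah_compress("0" * 62 + "0101" + "1" * 27))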

    20. Microsoft PowerPoint - Microbial Genome and Metagenome Analysis Case Study (NERSC Workshop - May 7-8, 2009).ppt [Compatibility

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Microbial Genome & Metagenome Analysis: Computational Challenges Natalia N. Ivanova * Nikos C. Kyrpides * Victor M. Markowitz ** * Genome Biology Program, Joint Genome Institute ** Lawrence Berkeley National Lab Microbial genome & metagenome analysis General aims Understand microbial life Apply to agriculture, bioremediation, biofuels, human health Specific aims include Predict biochemistry & physiology of organisms based on genome sequence Explain known

    1. Monitoring system including an electronic sensor platform and an interrogation transceiver

      DOE Patents [OSTI]

      Kinzel, Robert L.; Sheets, Larry R.

      2003-09-23

      A wireless monitoring system suitable for a wide range of remote data collection applications. The system includes at least one Electronic Sensor Platform (ESP), an Interrogator Transceiver (IT), and a general purpose host computer. The ESP functions as a remote data collector for a number of digital and analog sensors located therein. The host computer provides for data logging, testing, demonstration, installation checkout, and troubleshooting of the system. The IT relays signals from one or more ESPs to the host computer and from the host computer to the ESPs. The IT and host computer may be powered by a common power supply, and each ESP is individually powered by a battery. This monitoring system has an extremely low power consumption, which allows remote operation of the ESP for long periods; provides authenticated message traffic over a wireless network; utilizes state-of-health and tamper sensors to ensure that the ESP is secure and undamaged; has robust housing of the ESP suitable for use in radiation environments; and is low in cost. With one base station (host computer and interrogator transceiver), multiple ESPs may be controlled at a single monitoring site.

    2. Sandia Energy - High Performance Computing

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      High Performance Computing Home Energy Research Advanced Scientific Computing Research (ASCR) High Performance Computing

    3. Computational method and system for modeling, analyzing, and optimizing DNA amplification and synthesis

      DOE Patents [OSTI]

      Vandersall, Jennifer A.; Gardner, Shea N.; Clague, David S.

      2010-05-04

      A computational method and computer-based system of modeling DNA synthesis for the design and interpretation of PCR amplification, parallel DNA synthesis, and microarray chip analysis. The method and system include modules that address the bioinformatics, kinetics, and thermodynamics of DNA amplification and synthesis. Specifically, the steps of DNA selection, as well as the kinetics and thermodynamics of DNA hybridization and extensions, are addressed, which enable the optimization of the processing and the prediction of the products as a function of DNA sequence, mixing protocol, time, temperature and concentration of species.
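
      As a flavor of the thermodynamic sub-models involved, the classic Wallace rule estimates a short primer's melting temperature from its base counts. This standard rule of thumb is only a stand-in here; the patent's kinetics and thermodynamics modules are far more detailed.

      def wallace_tm(primer):
          """Approximate melting temperature (C) for primers under ~14 nt."""
          at = sum(primer.count(base) for base in "AT")
          gc = sum(primer.count(base) for base in "GC")
          return 2 * at + 4 * gc

      print(wallace_tm("ACGTACGTACGT"))  # 2*6 + 4*6 = 36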

    4. Method for transferring data from an unsecured computer to a secured computer

      DOE Patents [OSTI]

      Nilsen, Curt A.

      1997-01-01

      A method is described for transferring data from an unsecured computer to a secured computer. The method includes transmitting the data and then receiving the data. Next, the data is retransmitted and rereceived. Then, it is determined if errors were introduced when the data was transmitted by the unsecured computer or received by the secured computer. Similarly, it is determined if errors were introduced when the data was retransmitted by the unsecured computer or rereceived by the secured computer. A warning signal is emitted from a warning device coupled to the secured computer if (i) an error was introduced when the data was transmitted or received, and (ii) an error was introduced when the data was retransmitted or rereceived.
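
      The essence of the method is to receive the payload twice and compare before trusting it. A compact sketch follows; the checksum choice and the simulated channel are assumptions for illustration.

      import hashlib

      def noisy_channel(data, corrupt=False):
          return data[:-1] + b"?" if corrupt else data

      def receive_with_verification(data, corrupt_first=False):
          first = noisy_channel(data, corrupt_first)  # transmit / receive
          second = noisy_channel(data)                # retransmit / rereceive
          if hashlib.sha256(first).digest() != hashlib.sha256(second).digest():
              print("WARNING: transfer error detected")  # warning device fires
              return None
          return first

      print(receive_with_verification(b"payload", corrupt_first=True))
      print(receive_with_verification(b"payload"))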

    5. The Magellan Final Report on Cloud Computing

      SciTech Connect (OSTI)

      ,; Coghlan, Susan; Yelick, Katherine

      2011-12-21

      The goal of Magellan, a project funded through the U.S. Department of Energy (DOE) Office of Advanced Scientific Computing Research (ASCR), was to investigate the potential role of cloud computing in addressing the computing needs for the DOE Office of Science (SC), particularly related to serving the needs of mid-range computing and future data-intensive computing workloads. A set of research questions was formed to probe various aspects of cloud computing in terms of performance, usability, and cost. To address these questions, a distributed testbed infrastructure was deployed at the Argonne Leadership Computing Facility (ALCF) and the National Energy Research Scientific Computing Center (NERSC). The testbed was designed to be flexible and capable enough to explore a variety of computing models and hardware design points in order to understand the impact for various scientific applications. During the project, the testbed also served as a valuable resource to application scientists. Applications from a diverse set of projects such as MG-RAST (a metagenomics analysis server), the Joint Genome Institute, the STAR experiment at the Relativistic Heavy Ion Collider, and the Laser Interferometer Gravitational Wave Observatory (LIGO), were used by the Magellan project for benchmarking within the cloud, but the project teams were also able to accomplish important production science utilizing the Magellan cloud resources.

    6. NERSC Computer Security

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Security NERSC Computer Security NERSC computer security efforts are aimed at protecting NERSC systems and its users' intellectual property from unauthorized access or modification. Among NERSC's security goals are: 1. To protect NERSC systems from unauthorized access. 2. To prevent the interruption of services to its users. 3. To prevent misuse or abuse of NERSC resources. Security Incidents If you think there has been a computer security incident, you should contact NERSC Security as soon as

    7. Edison Electrifies Scientific Computing

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Edison Electrifies Scientific Computing Edison Electrifies Scientific Computing NERSC Flips Switch on New Flagship Supercomputer January 31, 2014 Contact: Margie Wylie, mwylie@lbl.gov, +1 510 486 7421 The National Energy Research Scientific Computing (NERSC) Center recently accepted "Edison," a new flagship supercomputer designed for scientific productivity. Named in honor of American inventor Thomas Alva Edison, the Cray XC30 will be dedicated in a ceremony held at the Department of

    8. Computational Earth Science

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Nucleosynthesis (Technical Report) | SciTech Connect Computational Astrophysics Consortium 3 - Supernovae, Gamma-Ray Bursts and Nucleosynthesis Citation Details In-Document Search Title: Computational Astrophysics Consortium 3 - Supernovae, Gamma-Ray Bursts and Nucleosynthesis Final project report for UCSC's participation in the Computational Astrophysics Consortium - Supernovae, Gamma-Ray Bursts and Nucleosynthesis. As an appendix, the report of the entire Consortium is also appended.

    9. Computer Architecture Lab

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      FastForward CAL Partnerships Shifter: User Defined Images Archive APEX Home » R & D » Exascale Computing » CAL Computer Architecture Lab The goal of the Computer Architecture Laboratory (CAL) is to engage in research and development into energy-efficient and effective processor and memory architectures for DOE's Exascale program. CAL coordinates hardware architecture R&D activities across the DOE. CAL is a joint NNSA/SC activity involving Sandia National Laboratories (CAL-Sandia) and

    10. Natural Gas Delivered to Consumers in New Mexico (Including Vehicle...

      U.S. Energy Information Administration (EIA) Indexed Site

      Mexico (Including Vehicle Fuel) (Million Cubic Feet) Natural Gas Delivered to Consumers in New Mexico (Including Vehicle Fuel) (Million Cubic Feet) Year Jan Feb Mar Apr May Jun Jul ...

    11. SWS Online Tool now includes Multifamily Content, plus a How...

      Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

      SWS Online Tool now includes Multifamily Content, plus a How-To Webinar SWS Online Tool now includes Multifamily Content, plus a How-To Webinar This announcement contains ...

    12. Energy Department Expands Gas Gouging Reporting System to Include...

      Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

      Expands Gas Gouging Reporting System to Include 1-800 Number: 1-800-244-3301 Energy Department Expands Gas Gouging Reporting System to Include 1-800 Number: 1-800-244-3301 ...

    13. Microfluidic devices and methods including porous polymer monoliths

      Office of Scientific and Technical Information (OSTI)

      (Patent) | DOEPatents Microfluidic devices and methods including porous polymer monoliths Title: Microfluidic devices and methods including porous polymer monoliths Microfluidic devices and methods including porous polymer monoliths are described. Polymerization techniques may be used to generate porous polymer monoliths having pores defined by a liquid component of a fluid mixture. The fluid mixture may contain iniferters and the resulting porous polymer monolith may include surfaces

    14. President's FY 2017 Budget Includes $878 Million for Fossil Energy

      Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

      Programs | Department of Energy President's FY 2017 Budget Includes $878 Million for Fossil Energy Programs President's FY 2017 Budget Includes $878 Million for Fossil Energy Programs February 9, 2016 - 2:33pm Addthis President Obama's Fiscal Year (FY) 2017 Budget includes a programmatic level of $878 million for the Office of Fossil Energy (FE), including the use of $240 million in prior year funds, to advance technologies related to the reliable, efficient, affordable and environmentally

    15. Applied Computer Science

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Results from a climate simulation computed using the Model for Prediction Across Scales (MPAS) code. This visualization shows the temperature of ocean currents using a green and ...

    16. Computational Physics and Methods

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      ... for use in Advanced Strategic Computing codes Theory and modeling of dense plasmas in ICF and astrophysics environments Theory and modeling of astrophysics in support of NASA ...

    17. Personal Computer Inventory System

      Energy Science and Technology Software Center (OSTI)

      1993-10-04

      PCIS is a database software system that is used to maintain a personal computer hardware and software inventory, track transfers of hardware and software, and provide reports.

    18. Excessing of Computers Used for Unclassified Controlled Information...

      Broader source: Energy.gov (indexed) [DOE]

      of approximately 800 information systems, including up to 115,000 personal computers, many powerful supercomputers, numerous servers, and a broad array of related...

    19. Mira Early Science Program | Argonne Leadership Computing Facility

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      HPC architectures. Together, the 16 projects span a diverse range of scientific fields, numerical methods, programming models, and computational approaches. The latter include...

    20. 60 Years of Computing | Department of Energy

      Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

      60 Years of Computing 60 Years of Computing

    1. Information Science, Computing, Applied Math

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Information Science, Computing, Applied Math Information Science, Computing, Applied Math National security depends on science ...

    2. Local Orthogonal Cutting Method for Computing Medial Curves and Its

      Office of Scientific and Technical Information (OSTI)

      Biomedical Applications (Journal Article) | SciTech Connect Local Orthogonal Cutting Method for Computing Medial Curves and Its Biomedical Applications Citation Details In-Document Search Title: Local Orthogonal Cutting Method for Computing Medial Curves and Its Biomedical Applications Medial curves have a wide range of applications in geometric modeling and analysis (such as shape matching) and biomedical engineering (such as morphometry and computer assisted surgery). The computation of

    3. Low latency, high bandwidth data communications between compute nodes in a parallel computer

      DOE Patents [OSTI]

      Archer, Charles J.; Blocksome, Michael A.; Ratterman, Joseph D.; Smith, Brian E.

      2010-11-02

      Methods, parallel computers, and computer program products are disclosed for low latency, high bandwidth data communications between compute nodes in a parallel computer. Embodiments include receiving, by an origin direct memory access (`DMA`) engine of an origin compute node, data for transfer to a target compute node; sending, by the origin DMA engine of the origin compute node to a target DMA engine on the target compute node, a request to send (`RTS`) message; transferring, by the origin DMA engine, a predetermined portion of the data to the target compute node using a memory FIFO operation; determining, by the origin DMA engine, whether an acknowledgement of the RTS message has been received from the target DMA engine; if an acknowledgement of the RTS message has not been received, transferring, by the origin DMA engine, another predetermined portion of the data to the target compute node using a memory FIFO operation; and if the acknowledgement of the RTS message has been received by the origin DMA engine, transferring, by the origin DMA engine, any remaining portion of the data to the target compute node using a direct put operation.
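
      The protocol is a hybrid of eager and rendezvous transfers: memory FIFO chunks keep the link busy while the origin waits for the RTS acknowledgement, and the remainder then moves in one direct put. A runnable sketch in which plain Python stands in for the DMA operations:

      import itertools

      def send(data, chunk_size, ack_after_polls=2):
          """Simulate eager FIFO chunks until an ACK, then one direct put."""
          polls = itertools.count()
          fifo, sent = [], 0
          print("RTS sent")
          while next(polls) < ack_after_polls and sent < len(data):
              fifo.append(data[sent:sent + chunk_size])  # memory FIFO operation
              sent += chunk_size
          print("ACK received; direct put of", len(data) - sent, "bytes")
          return fifo, data[sent:]

      print(send(b"x" * 100, 16))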

    4. Reach and get capability in a computing environment

      DOE Patents [OSTI]

      Bouchard, Ann M.; Osbourn, Gordon C.

      2012-06-05

      A reach and get technique includes invoking a reach command from a reach location within a computing environment. A user can then navigate to an object within the computing environment and invoke a get command on the object. In response to invoking the get command, the computing environment is automatically navigated back to the reach location and the object copied into the reach location.
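
      As a rough illustration of that interaction, consider the toy model below; the Environment class and its methods are invented for this sketch and are not part of the patent.

          class Environment:
              def __init__(self):
                  self.location = "desktop"
                  self._reach_location = None

              def reach(self):
                  # Remember where the reach command was invoked.
                  self._reach_location = self.location

              def navigate(self, where):
                  self.location = where

              def get(self, obj):
                  # Automatically navigate back and "copy" the object there.
                  self.location = self._reach_location
                  return (self.location, obj)

          env = Environment()
          env.reach()
          env.navigate("documents/reports")
          print(env.get("summary.txt"))  # -> ('desktop', 'summary.txt')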

    5. Theory, Simulation, and Computation

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      ADTSC Theory, Simulation, and Computation: supporting the Laboratory's overarching strategy to provide cutting-edge tools to guide and interpret experiments and further our fundamental understanding and predictive capabilities for complex systems. Focus areas: theory, modeling, and informatics; suites of experiment data; high performance computing, simulation, and visualization. Contacts: Associate Director John Sarrao; Deputy Associate Director Paul Dotson; Directorate Office (505) 667-6645. Applying the Scientific

    6. ELECTRONIC DIGITAL COMPUTER

      DOE Patents [OSTI]

      Stone, J.J. Jr.; Bettis, E.S.; Mann, E.R.

      1957-10-01

      The electronic digital computer is designed to solve systems involving a plurality of simultaneous linear equations. The computer can solve a system which converges rather rapidly under the Gauss-Seidel (historically, "Von Seidel") method of approximation, and it performs the summations required to solve for the unknown terms by successive approximation.
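
      The iteration the machine carried out electronically is, in modern terms, Gauss-Seidel. A minimal software rendering of the same successive-approximation idea:

          def gauss_seidel(A, b, iterations=50):
              # Successive approximation: sweep the unknowns, always reusing
              # the freshest values; converges rapidly for, e.g., diagonally
              # dominant systems.
              n = len(b)
              x = [0.0] * n
              for _ in range(iterations):
                  for i in range(n):
                      s = sum(A[i][j] * x[j] for j in range(n) if j != i)
                      x[i] = (b[i] - s) / A[i][i]
              return x

          # 4x + y = 6 and x + 3y = 7 have the solution x = 1, y = 2.
          print(gauss_seidel([[4.0, 1.0], [1.0, 3.0]], [6.0, 7.0]))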

    7. Computer Processor Allocator

      Energy Science and Technology Software Center (OSTI)

      2004-03-01

      The Compute Processor Allocator (CPA) provides an efficient and reliable mechanism for managing and allotting processors in a massively parallel (MP) computer. It maintains information in a database on the health, configuration, and allocation of each processor. This persistent information is factored into each allocation decision. The CPA runs in a distributed fashion to avoid a single point of failure.

    8. Traffic information computing platform for big data

      SciTech Connect (OSTI)

      Duan, Zongtao Li, Ying Zheng, Xibin Liu, Yan Dai, Jiting Kang, Jun

      2014-10-06

      The big data environment creates the data conditions needed to improve the quality of traffic information services. The goal of this article is to construct a traffic information computing platform for the big data environment. Through in-depth analysis of the characteristics of big data and of traffic information services, a distributed traffic atomic information computing platform architecture is proposed. Under the big data environment, this architecture helps guarantee safe and efficient traffic operation and supports more intelligent and personalized traffic information services for users.

    9. Indirection and computer security.

      SciTech Connect (OSTI)

      Berg, Michael J.

      2011-09-01

      The discipline of computer science is built on indirection. David Wheeler famously said, 'All problems in computer science can be solved by another layer of indirection. But that usually will create another problem'. We propose that every computer security vulnerability is yet another problem created by the indirections in system designs and that focusing on the indirections involved is a better way to design, evaluate, and compare security solutions. We are not proposing that indirection be avoided when solving problems, but that understanding the relationships between indirections and vulnerabilities is key to securing computer systems. Using this perspective, we analyze common vulnerabilities that plague our computer systems, consider the effectiveness of currently available security solutions, and propose several new security solutions.

    10. Identifying failure in a tree network of a parallel computer

      DOE Patents [OSTI]

      Archer, Charles J.; Pinnow, Kurt W.; Wallenfelt, Brian P.

      2010-08-24

      Methods, parallel computers, and products are provided for identifying failure in a tree network of a parallel computer. The parallel computer includes one or more processing sets, each including an I/O node and a plurality of compute nodes. For each processing set, embodiments include selecting a set of test compute nodes, the test compute nodes being a subset of the compute nodes of the processing set; measuring the performance of the I/O node of the processing set; measuring the performance of the selected set of test compute nodes; calculating a current test value in dependence upon the measured performance of the I/O node of the processing set, the measured performance of the set of test compute nodes, and a predetermined value for I/O node performance; and comparing the current test value with a predetermined tree performance threshold. If the current test value is below the predetermined tree performance threshold, embodiments include selecting another set of test compute nodes. If the current test value is not below the predetermined tree performance threshold, embodiments include selecting from the test compute nodes one or more potential problem nodes and individually testing the potential problem nodes and the links to them.
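
      In outline, the embodiment described is a search loop over subsets of compute nodes. The sketch below is a loose paraphrase: measure_io, measure_nodes, and pick_subset are hypothetical callbacks, and the test-value formula is a deliberately simplified stand-in for the patent's unspecified combination.

          def isolate_suspects(processing_set, threshold, expected_io,
                               measure_io, measure_nodes, pick_subset):
              cleared = set()
              while True:
                  test_nodes = pick_subset(processing_set, exclude=cleared)
                  if not test_nodes:
                      return []  # every subset performed acceptably
                  # One plausible way to combine the measurements into a test value.
                  test_value = (measure_io(processing_set)
                                + measure_nodes(test_nodes)) / expected_io
                  if test_value < threshold:
                      cleared |= set(test_nodes)  # looks fine; try another subset
                  else:
                      # Candidates for individual testing of nodes and links.
                      return list(test_nodes)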

    11. Computing contingency statistics in parallel.

      SciTech Connect (OSTI)

      Bennett, Janine Camille; Thompson, David; Pebay, Philippe Pierre

      2010-09-01

      Statistical analysis is typically used to reduce the dimensionality of and infer meaning from data. A key challenge of any statistical analysis package aimed at large-scale, distributed data is to address the orthogonal issues of parallel scalability and numerical stability. Many statistical techniques, e.g., descriptive statistics or principal component analysis, are based on moments and co-moments and, using robust online update formulas, can be computed in an embarrassingly parallel manner, amenable to a map-reduce style implementation. In this paper we focus on contingency tables, through which numerous derived statistics such as joint and marginal probability, point-wise mutual information, information entropy, and chi-squared independence statistics can be directly obtained. However, contingency tables can become large as data size increases, requiring a correspondingly large amount of communication between processors. This potential increase in communication prevents optimal parallel speedup and is the main difference with moment-based statistics, where the amount of inter-processor communication is independent of data size. Here we present the design trade-offs we made to implement the computation of contingency tables in parallel. We also study the parallel speedup and scalability properties of our open source implementation. In particular, we observe optimal speedup and scalability when the contingency statistics are used in their appropriate context, namely, when the data input is not quasi-diffuse.
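
      The additive structure that makes contingency tables map-reduce friendly is easy to demonstrate. A small sketch (not the authors' implementation): each "process" tallies its own slice of the data, the tables merge by addition, and derived statistics such as pointwise mutual information come from the merged table.

          from collections import Counter
          from math import log2

          def merge(tables):
              total = Counter()
              for t in tables:
                  total += t  # contingency tables combine by simple addition
              return total

          def pmi(table):
              # Derive one of the statistics named above from the merged table.
              n = sum(table.values())
              px, py = Counter(), Counter()
              for (x, y), c in table.items():
                  px[x] += c
                  py[y] += c
              return {(x, y): log2((c / n) / ((px[x] / n) * (py[y] / n)))
                      for (x, y), c in table.items()}

          t1 = Counter([("a", 0), ("a", 0), ("b", 1)])   # slice on process 1
          t2 = Counter([("a", 0), ("b", 1), ("b", 0)])   # slice on process 2
          print(pmi(merge([t1, t2])))

      The communication cost the paper highlights is visible here too: the merged object grows with the number of distinct (x, y) pairs, unlike a fixed-size moment accumulator.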

    12. Percentage of Total Natural Gas Industrial Deliveries included in Prices

      U.S. Energy Information Administration (EIA) Indexed Site

      Pipeline and Distribution Use Price; City Gate Price; Residential Price; Percentage of Total Residential Deliveries included in Prices; Commercial Price; Percentage of Total Commercial Deliveries included in Prices; Industrial Price; Percentage of Total Industrial Deliveries included in Prices; Vehicle Fuel Price; Electric Power Price. Period: Monthly or Annual. Definitions, Sources & Notes. Show Data By: Data Series or Area. 2010

    13. Percentage of Total Natural Gas Industrial Deliveries included in Prices

      U.S. Energy Information Administration (EIA) Indexed Site

      City Gate Price; Residential Price; Percentage of Total Residential Deliveries included in Prices; Commercial Price; Percentage of Total Commercial Deliveries included in Prices; Industrial Price; Percentage of Total Industrial Deliveries included in Prices; Electric Power Price. Period: Monthly or Annual. Definitions, Sources & Notes. Show Data By: Data Series or Area. Sep-15 through Feb-16. View History: U.S.

    14. Percentage of Total Natural Gas Residential Deliveries included in Prices

      U.S. Energy Information Administration (EIA) Indexed Site

      City Gate Price; Residential Price; Percentage of Total Residential Deliveries included in Prices; Commercial Price; Percentage of Total Commercial Deliveries included in Prices; Industrial Price; Percentage of Total Industrial Deliveries included in Prices; Electric Power Price. Period: Monthly or Annual. Definitions, Sources & Notes. Show Data By: Data Series or Area. Sep-15 through Feb-16. View History: U.S.

    15. Microfluidic devices and methods including porous polymer monoliths

      Office of Scientific and Technical Information (OSTI)

      (Patent) | SciTech Connect. Title: Microfluidic devices and methods including porous polymer monoliths. Microfluidic devices and methods including porous polymer monoliths are described. Polymerization techniques may be used to generate porous polymer monoliths having pores defined by a liquid component of a fluid mixture. The fluid mixture may contain iniferters and the resulting

    16. Microfluidic devices and methods including porous polymer monoliths

      Office of Scientific and Technical Information (OSTI)

      (Patent) | SciTech Connect. Title: Microfluidic devices and methods including porous polymer monoliths. Microfluidic devices and methods including porous polymer monoliths are described. Polymerization techniques may be used to generate porous polymer monoliths having pores defined by a liquid component of a fluid mixture. The fluid mixture may contain iniferters and the resulting porous

    17. Percentage of Total Natural Gas Commercial Deliveries included in Prices

      U.S. Energy Information Administration (EIA) Indexed Site

      City Gate Price; Residential Price; Percentage of Total Residential Deliveries included in Prices; Commercial Price; Percentage of Total Commercial Deliveries included in Prices; Industrial Price; Percentage of Total Industrial Deliveries included in Prices; Electric Power Price. Period: Monthly or Annual. Definitions, Sources & Notes. Show Data By: Data Series or Area. Sep-15 through Feb-16. View History: U.S.

    18. Computing and Computational Sciences Directorate - Information...

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      text analysis, data analytics, data fusion, population dynamics, emergent behavior in social systems, agent-based and discrete-event simulations, cyber security, and quantum...

    19. Comparison of Joint Modeling Approaches Including Eulerian Sliding...

      Office of Scientific and Technical Information (OSTI)

      Eulerian Sliding Interfaces. Title: Comparison of Joint Modeling Approaches Including Eulerian Sliding Interfaces ...

    20. Systematic expansion of porous crystals to include large molecules | Center

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      for Gas Separations Relevant to Clean Energy Technologies | Blandine Jerome. Systematic expansion of porous crystals to include large molecules

    1. Natural Gas Deliveries to Commercial Consumers (Including Vehicle...

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      California (Million Cubic Feet) Natural Gas Deliveries to Commercial Consumers (Including Vehicle Fuel through 1996) in California (Million Cubic Feet) Year Jan Feb Mar Apr May Jun ...

    2. METHOD OF FABRICATING ELECTRODES INCLUDING HIGH-CAPACITY, BINDER...

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      METHOD OF FABRICATING ELECTRODES INCLUDING HIGH-CAPACITY, BINDER-FREE ANODES FOR LITHIUM-I... Binderless Electrodes for Rechargeable Lithium Batteries Abstract: An electrode (110) is ...

    3. Including Retro-Commissioning in Federal Energy Savings Performance Contracts

      Broader source: Energy.gov [DOE]

      Document describes guidance on the importance of (and steps to) including retro-commissioning in federal energy savings performance contracts (ESPCs).

    4. Numerical simulations for low energy nuclear reactions including...

      Office of Scientific and Technical Information (OSTI)

      Numerical simulations for low energy nuclear reactions including direct channels to ... Visit OSTI to utilize additional information resources in energy science and technology. A ...

    5. Numerical simulations for low energy nuclear reactions including...

      Office of Scientific and Technical Information (OSTI)

      energy nuclear reactions including direct channels to validate statistical models Citation Details In-Document Search Title: Numerical simulations for low energy nuclear reactions ...

    6. Introduction to Small-Scale Photovoltaic Systems (Including RETScreen...

      Open Energy Info (EERE)

      Photovoltaic Systems (Including RETScreen Case Study) (Webinar) Jump to: navigation, search Tool Summary LAUNCH TOOL Name: Introduction to Small-Scale Photovoltaic Systems...

    7. Introduction to Small-Scale Wind Energy Systems (Including RETScreen...

      Open Energy Info (EERE)

      Case Study) (Webinar) Jump to: navigation, search Tool Summary LAUNCH TOOL Name: Introduction to Small-Scale Wind Energy Systems (Including RETScreen Case Study) (Webinar) Focus...

    8. DOE Releases Request for Information on Critical Materials, Including...

      Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

      sector, including fuel cell platinum group metal catalysts. The RFI is soliciting feedback from industry, academia, research laboratories, government agencies, and other ...

    9. SWS Online Tool now includes Multifamily Content, plus a How...

      Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

      Future updates will include: a how-to video and a Spanish translation of the Standard Work Specifications for Multifamily Housing. Webinar: Using the Standard Work Specifications ...

    10. U-182: Microsoft Windows Includes Some Invalid Certificates

      Broader source: Energy.gov [DOE]

      The operating system includes some invalid intermediate certificates. The vulnerability is due to the certificate authorities and not the operating system itself.

    11. Mobile computing device configured to compute irradiance, glint, and glare of the sun

      DOE Patents [OSTI]

      Gupta, Vipin P; Ho, Clifford K; Khalsa, Siri Sahib

      2014-03-11

      Described herein are technologies pertaining to computing the solar irradiance distribution on a surface of a receiver in a concentrating solar power system or glint/glare emitted from a reflective entity. A mobile computing device includes at least one camera that captures images of the Sun and the entity of interest, wherein the images have pluralities of pixels having respective pluralities of intensity values. Based upon the intensity values of the pixels in the respective images, the solar irradiance distribution on the surface of the entity or glint/glare corresponding to the entity is computed by the mobile computing device.
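
      As a very rough illustration of the pixel-intensity idea (not the patented calibration), one can scale the receiver image by the mean intensity of the Sun image so that each pixel reads in multiples of a known direct normal irradiance; all names and the scaling rule below are assumptions of this sketch.

          def irradiance_map(entity_pixels, sun_pixels, dni):
              # dni: direct normal irradiance in W/m^2, assumed known here.
              # Normalize receiver intensities by the Sun image's mean intensity.
              sun_mean = (sum(sum(row) for row in sun_pixels)
                          / (len(sun_pixels) * len(sun_pixels[0])))
              return [[dni * v / sun_mean for v in row] for row in entity_pixels]

          sun = [[200, 220], [210, 230]]
          receiver = [[40, 80], [120, 60]]
          print(irradiance_map(receiver, sun, 1000.0))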

    12. Building Energy Consumption Analysis

      Energy Science and Technology Software Center (OSTI)

      2005-03-02

      DOE2.1E-121SUNOS is a set of modules for energy analysis in buildings. Modules are included to calculate the heating and cooling loads for each space in a building for each hour of a year (LOADS), to simulate the operation and response of the equipment and systems that control temperature and humidity and distribute heating, cooling and ventilation to the building (SYSTEMS), to model energy conversion equipment that uses fuel or electricity to provide the required heating, cooling and electricity (PLANT), and to compute the cost of energy and building operation based on utility rate schedule and economic parameters (ECONOMICS).

    13. Sandia Energy - New Project Is the ACME of Computer Science to...

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Project Is the ACME of Computer Science to Address Climate Change. New...

    14. Solar Energy Education. Reader, Part II. Sun story. [Includes glossary

      SciTech Connect (OSTI)

      Not Available

      1981-05-01

      Magazine articles which focus on the subject of solar energy are presented. The booklet is the second of a four-part series of the Solar Energy Reader. Excerpts from the magazines include the history of solar energy, mythology and tales, and selected poetry on the sun. A glossary of energy-related terms is included. (BCS)

    15. Microfluidic devices and methods including porous polymer monoliths

      DOE Patents [OSTI]

      Hatch, Anson V.; Sommer, Gregory J.; Singh, Anup K.; Wang, Ying-Chih; Abhyankar, Vinay

      2015-12-01

      Microfluidic devices and methods including porous polymer monoliths are described. Polymerization techniques may be used to generate porous polymer monoliths having pores defined by a liquid component of a fluid mixture. The fluid mixture may contain iniferters and the resulting porous polymer monolith may include surfaces terminated with iniferter species. Capture molecules may then be grafted to the monolith pores.

    16. Microfluidic devices and methods including porous polymer monoliths

      DOE Patents [OSTI]

      Hatch, Anson V; Sommer, Gregory J; Singh, Anup K; Wang, Ying-Chih; Abhyankar, Vinay V

      2014-04-22

      Microfluidic devices and methods including porous polymer monoliths are described. Polymerization techniques may be used to generate porous polymer monoliths having pores defined by a liquid component of a fluid mixture. The fluid mixture may contain iniferters and the resulting porous polymer monolith may include surfaces terminated with iniferter species. Capture molecules may then be grafted to the monolith pores.

    17. Method and system for benchmarking computers

      DOE Patents [OSTI]

      Gustafson, John L.

      1993-09-14

      A testing system and method for benchmarking computer systems. The system includes a store containing a scalable set of tasks to be performed to produce a solution in ever-increasing degrees of resolution as a larger number of the tasks are performed. A timing and control module allots to each computer a fixed benchmarking interval in which to perform the stored tasks. Means are provided for determining, after completion of the benchmarking interval, the degree of progress through the scalable set of tasks and for producing a benchmarking rating relating to the degree of progress for each computer.
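
      The inversion at the heart of the method, fixing the time and measuring progress rather than fixing the work and timing it, fits in a few lines. A sketch under stated assumptions: the scalable task is a hypothetical callable that refines the solution one resolution step per call.

          import time

          def benchmark(task, interval=1.0):
              # Run ever-finer-resolution tasks until the fixed benchmarking
              # interval expires; the degree of progress is the rating.
              deadline = time.monotonic() + interval
              resolution = 0
              while time.monotonic() < deadline:
                  task(resolution)
                  resolution += 1
              return resolution

          # Hypothetical scalable task: refine a series sum one term further.
          rating = benchmark(lambda k: sum(1.0 / (i + 1) for i in range(k + 1)))
          print("tasks completed in interval:", rating)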

    18. Articles which include chevron film cooling holes, and related processes

      DOE Patents [OSTI]

      Bunker, Ronald Scott; Lacy, Benjamin Paul

      2014-12-09

      An article is described, including an inner surface which can be exposed to a first fluid; an inlet; and an outer surface spaced from the inner surface, which can be exposed to a hotter second fluid. The article further includes at least one row or other pattern of passage holes. Each passage hole includes an inlet bore extending through the substrate from the inlet at the inner surface to a passage hole-exit proximate to the outer surface, with the inlet bore terminating in a chevron outlet adjacent the hole-exit. The chevron outlet includes a pair of wing troughs having a common surface region between them. The common surface region includes a valley which is adjacent the hole-exit; and a plateau adjacent the valley. The article can be an airfoil. Related methods for preparing the passage holes are also described.

    19. Identifying logical planes formed of compute nodes of a subcommunicator in a parallel computer

      DOE Patents [OSTI]

      Davis, Kristan D.; Faraj, Daniel A.

      2016-03-01

      In a parallel computer, a plurality of logical planes formed of compute nodes of a subcommunicator may be identified by: for each compute node of the subcommunicator and for a number of dimensions beginning with a first dimension: establishing, by a plane building node, in a positive direction of the first dimension, all logical planes that include the plane building node and compute nodes of the subcommunicator in a positive direction of a second dimension, where the second dimension is orthogonal to the first dimension; and establishing, by the plane building node, in a negative direction of the first dimension, all logical planes that include the plane building node and compute nodes of the subcommunicator in the positive direction of the second dimension.
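
      Restricted to two dimensions and to the positive directions only, the plane-building step reads roughly as follows; the coordinate-grid representation is an assumption of this sketch, not the patent's data structure.

          def logical_planes(sub_nodes, builder):
              # Grow axis-aligned rectangles from the plane-building node,
              # keeping only those whose nodes all belong to the subcommunicator.
              nodes = set(sub_nodes)
              x0, y0 = builder
              planes = []
              dx = 1
              while (x0 + dx, y0) in nodes:
                  dy = 1
                  while all((x0 + i, y0 + j) in nodes
                            for i in range(dx + 1) for j in range(dy + 1)):
                      planes.append(((x0, y0), (x0 + dx, y0 + dy)))
                      dy += 1
                  dx += 1
              return planes

          grid = {(x, y) for x in range(3) for y in range(3)}
          print(logical_planes(grid, (0, 0)))  # four planes from the corner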

    20. Turbomachine injection nozzle including a coolant delivery system

      DOE Patents [OSTI]

      Zuo, Baifang (Simpsonville, SC)

      2012-02-14

      An injection nozzle for a turbomachine includes a main body having a first end portion that extends to a second end portion defining an exterior wall having an outer surface. A plurality of fluid delivery tubes extend through the main body. Each of the plurality of fluid delivery tubes includes a first fluid inlet for receiving a first fluid, a second fluid inlet for receiving a second fluid and an outlet. The injection nozzle further includes a coolant delivery system arranged within the main body. The coolant delivery system guides a coolant along at least one of a portion of the exterior wall and around the plurality of fluid delivery tubes.

    1. An Arbitrary Precision Computation Package

      Energy Science and Technology Software Center (OSTI)

      2003-06-14

      This package permits a scientist to perform computations using an arbitrarily high level of numeric precision (the equivalent of hundreds or even thousands of digits), by making only minor changes to conventional C++ or Fortran-90 source code. This software takes advantage of certain properties of IEEE floating-point arithmetic, together with advanced numeric algorithms, custom data types and operator overloading. Also included in this package is the "Experimental Mathematician's Toolkit", which incorporates many of these facilities into an easy-to-use interactive program.
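
      The minor-source-change idea, swapping the numeric type while leaving the algorithm alone, can be imitated in Python with the standard decimal module; this is only an analogy to the package's C++/Fortran-90 operator overloading, not its API.

          from decimal import Decimal, getcontext

          getcontext().prec = 100  # work to 100 significant digits

          def newton_sqrt(x):
              # Identical code would work on floats; only the type changed.
              guess = x / 2
              for _ in range(200):
                  guess = (guess + x / guess) / 2
              return guess

          print(newton_sqrt(Decimal(2)))  # sqrt(2) to ~100 digits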

    2. Present and Future Computing

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      ... Important for DOE Energy Frontier Mission 2 * TH HEP is new ... & PDSF (studies based on usage for end of Sep 2012 - Nov ... framework (Sherpa), and a library for the computation of ...

    3. Argonne Leadership Computing Facility

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Annual Report 2012, Argonne Leadership Computing Facility. Contents: Director's Message (1); About ALCF (2); Introducing Mira

    4. Quantum steady computation

      SciTech Connect (OSTI)

      Castagnoli, G.

      1991-08-10

      This paper reports that current conceptions of quantum mechanical computers inherit from conventional digital machines two apparently interacting features, machine imperfection and temporal development of the computational process. On account of machine imperfection, the process would become ideally reversible only in the limiting case of zero speed. Therefore the process is irreversible in practice and cannot be considered to be a fundamental quantum one. By giving up classical features and using a linear, reversible and non-sequential representation of the computational process - not realizable in classical machines - the process can be identified with the mathematical form of a quantum steady state. This form of steady quantum computation would seem to have an important bearing on the notion of cognition.

    5. Edison Electrifies Scientific Computing

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      ... Deployment of Edison was made possible in part by funding from DOE's Office of Science and the DARPA High Productivity Computing Systems program. DOE's Office of Science is the ...

    6. Natural Gas Delivered to Consumers in Ohio (Including Vehicle...

      U.S. Energy Information Administration (EIA) Indexed Site

      Natural Gas Delivered to Consumers in Ohio (Including Vehicle Fuel) (Million Cubic Feet) Year Jan Feb Mar Apr May Jun Jul Aug Sep Oct Nov Dec 2001 136,340 110,078 102,451 66,525 ...

    7. Natural Gas Deliveries to Commercial Consumers (Including Vehicle...

      U.S. Energy Information Administration (EIA) Indexed Site

      Mexico (Million Cubic Feet) Natural Gas Deliveries to Commercial Consumers (Including Vehicle Fuel through 1996) in New Mexico (Million Cubic Feet) Year Jan Feb Mar Apr May Jun Jul ...

    8. Removal of mineral matter including pyrite from coal

      DOE Patents [OSTI]

      Reggel, Leslie; Raymond, Raphael; Blaustein, Bernard D.

      1976-11-23

      Mineral matter, including pyrite, is removed from coal by treatment of the coal with aqueous alkali at a temperature of about 175° to 350°C, followed by acidification with strong acid.

    9. T-603: Mac OS X Includes Some Invalid Comodo Certificates

      Broader source: Energy.gov [DOE]

      The operating system includes some invalid certificates. The vulnerability is due to the invalid certificates and not the operating system itself. Other browsers, applications, and operating systems are affected.

    10. What To Include In The Whistleblower Complaint? | National Nuclear...

      National Nuclear Security Administration (NNSA)

      that you have included in your complaint are true and correct to the best of your knowledge and belief; and An affirmation, as described in Sec. 708.13 of this subpart, that...

    11. Including Retro-Commissioning in Federal Energy Savings Performance...

      Energy Savers [EERE]

      the cost of the survey. Developing a detailed scope of work and a fixed price for this work is important to eliminate risk to the Agency and the ESCo. Including a detailed scope...

    12. Example Retro-Commissioning Scope of Work to Include Services...

      Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

      Services as Part of an ESPC Investment-Grade Audit. Document offers a ...

    13. Computing | Department of Energy

      Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

      Computing. Fun fact: Most systems require air conditioning or chilled water to cool super powerful supercomputers, but the Olympus supercomputer at Pacific Northwest National Laboratory is cooled by the location's 65 degree groundwater. Traditional cooling systems could cost up to $61,000 in electricity each year, but this more efficient setup uses 70 percent less energy. | Photo courtesy of PNNL.

    14. Argonne Leadership Computing Facility

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Argonne National Laboratory | 9700 South Cass Avenue | Argonne, IL 60439 | www.anl.gov | September 2013. Key facts about the Argonne Leadership Computing Facility: user support and services. Skilled experts at the ALCF enable researchers to conduct breakthrough science on the Blue Gene system in key ways. Catalysts are computational scientists with domain expertise who work directly with project principal investigators to maximize discovery and reduce time-to-solution.

    15. New TRACC Cluster Computer

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      TRACC Cluster Computer With the addition of a new cluster called Zephyr that was made operational in September of this year (2012), TRACC now offers two clusters to choose from: Zephyr and our original cluster that has now been named Phoenix. Zephyr was acquired from Atipa technologies, and it is a 92-node system with each node having two AMD 16 core, 2.3 GHz, 32 GB processors. See also Computing Resources.

    16. Computational Modeling | Bioenergy | NREL

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Computational Modeling NREL uses computational modeling to increase the efficiency of biomass conversion by rational design using multiscale modeling, applying theoretical approaches, and testing scientific hypotheses. model of enzymes wrapping on cellulose; colorful circular structures entwined through blue strands Cellulosomes are complexes of protein scaffolds and enzymes that are highly effective in decomposing biomass. This is a snapshot of a coarse-grain model of complex cellulosome

    17. Computational Physics and Methods

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Computational Physics and Methods: performing innovative simulations of physics phenomena on tomorrow's scientific computing platforms. Growth and emissivity of a young galaxy hosting a supermassive black hole, as calculated in the cosmological code ENZO and post-processed with the radiative transfer code AURORA. Rayleigh-Taylor turbulence imaging: the largest turbulence simulations to date. Advanced multi-scale modeling; turbulence datasets; density iso-surfaces

    18. Advanced Simulation and Computing

      National Nuclear Security Administration (NNSA)

      NA-ASC-117R-09-Vol.1-Rev.0, Advanced Simulation and Computing Program Plan FY09, October 2008. ASC Focal Point: Robert Meisner, Director, DOE/NNSA NA-121.2, 202-586-0908. Program Plan Focal Point for NA-121.2: Njema Frazier, DOE/NNSA NA-121.2, 202-586-5789. A Publication of the Office of Advanced Simulation & Computing, NNSA Defense Programs. Contents: Executive Summary (1); I. Introduction

    19. Compute Reservation Request Form

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Compute Reservation Request Form. Users can request a scheduled reservation of machine resources if their jobs have special needs that cannot be accommodated through the regular batch system. A reservation brings some portion of the machine to a specific user or project for an agreed upon duration. Typically this is used for interactive debugging at scale or real time processing linked to some experiment or event. It is not intended to be used to guarantee fast

    20. Applied Computer Science

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Applied Computer Science: innovative co-design of applications, algorithms, and architectures in order to enable scientific simulations at extreme scale. Group Leader: Linn Collins; Deputy Group Leader (Acting): Bryan Lally. Climate modeling visualization: results from a climate simulation computed using the Model for Prediction Across Scales (MPAS) code. This visualization shows the temperature of ocean currents using a green and blue color scale. These colors were

    1. Limited Personal Use of Government Office Equipment including Information Technology

      Broader source: Directives, Delegations, and Requirements [Office of Management (MA)]

      2005-01-07

      The Order establishes requirements and assigns responsibilities for employees' limited personal use of Government resources (office equipment and other resources including information technology) within DOE, including NNSA. The Order is required to provide guidance on appropriate and inappropriate uses of Government resources. This Order was certified 04/23/2009 as accurate and continues to be relevant and appropriate for use by the Department. Certified 4-23-09. No cancellation.

    2. Numerical simulations for low energy nuclear reactions including direct

      Office of Scientific and Technical Information (OSTI)

      channels to validate statistical models (Conference) | SciTech Connect. Title: Numerical simulations for low energy nuclear reactions including direct channels to validate statistical models. Authors: Kawano, Toshihiko (Los Alamos National Laboratory). Publication Date: 2014-01-08. OSTI

    3. Comparison of Joint Modeling Approaches Including Eulerian Sliding

      Office of Scientific and Technical Information (OSTI)

      Interfaces (Technical Report) | SciTech Connect Comparison of Joint Modeling Approaches Including Eulerian Sliding Interfaces Citation Details In-Document Search Title: Comparison of Joint Modeling Approaches Including Eulerian Sliding Interfaces Accurate representation of discontinuities such as joints and faults is a key ingredient for high fidelity modeling of shock propagation in geologic media. The following study was done to improve treatment of discontinuities (joints) in the Eulerian

    4. Hybrid powertrain system including smooth shifting automated transmission

      DOE Patents [OSTI]

      Beaty, Kevin D.; Nellums, Richard A.

      2006-10-24

      A powertrain system is provided that includes a prime mover and a change-gear transmission having an input, at least two gear ratios, and an output. The powertrain system also includes a power shunt configured to route power applied to the transmission by one of the input and the output to the other one of the input and the output. A transmission system and a method for facilitating shifting of a transmission system are also provided.

    5. Prevention of Harassment (Including Sexual Harassment) and Retaliation

      Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

      Policy Statement | Department of Energy. Prevention of Harassment (Including Sexual Harassment) and Retaliation Policy Statement. DOE Policy for Preventing Harassment in the Workplace (Harassment Policy July 2011.pdf). More Documents & Publications: Policy Statement on Equal Employment Opportunity, Harassment, and Retaliation; Equal Employment Opportunity and Diversity Policy Statement; VWA-0039 - In

    6. Solar Energy Education. Reader, Part II. Sun story. [Includes glossary]

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      (Technical Report) | SciTech Connect. Title: Solar Energy Education. Reader, Part II. Sun story. [Includes glossary]

    7. Solar Energy Education. Renewable energy: a background text. [Includes

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      glossary] (Technical Report) | SciTech Connect. Title: Solar Energy Education. Renewable energy: a background text. [Includes glossary]

    8. Annual Technology Baseline (Including Supporting Data); NREL (National

      Office of Scientific and Technical Information (OSTI)

      Renewable Energy Laboratory) (Conference) | SciTech Connect. Title: Annual Technology Baseline (Including Supporting Data); NREL (National Renewable Energy Laboratory). Consistent cost and performance data for various electricity generation technologies can be difficult to find and may change frequently for certain

    9. Trends and challenges when including microstructure in materials modeling:

      Office of Scientific and Technical Information (OSTI)

      Examples of problems studied at Sandia National Laboratories (Conference) | SciTech Connect. Title: Trends and challenges when including microstructure in materials modeling: Examples of problems studied at Sandia National Laboratories. Abstract not provided. Authors: Dingreville, Remi Philippe Michel. Publication Date:

    10. Paging memory from random access memory to backing storage in a parallel computer

      DOE Patents [OSTI]

      Archer, Charles J; Blocksome, Michael A; Inglett, Todd A; Ratterman, Joseph D; Smith, Brian E

      2013-05-21

      Paging memory from random access memory (`RAM`) to backing storage in a parallel computer that includes a plurality of compute nodes, including: executing a data processing application on a virtual machine operating system in a virtual machine on a first compute node; providing, by a second compute node, backing storage for the contents of RAM on the first compute node; and swapping, by the virtual machine operating system in the virtual machine on the first compute node, a page of memory from RAM on the first compute node to the backing storage on the second compute node.
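
      A toy rendering of that swap path, with a dictionary standing in for the second node's RAM and no real network transport; the class and method names are invented for this sketch.

          class RemoteBackingStore:
              # Stand-in for RAM contributed by the second compute node.
              def __init__(self):
                  self._pages = {}

              def put(self, page_id, data):
                  self._pages[page_id] = data

              def get(self, page_id):
                  return self._pages.pop(page_id)

          class GuestVM:
              # The virtual machine OS on the first node pages to the store.
              def __init__(self, store):
                  self.ram = {}
                  self.store = store

              def swap_out(self, page_id):
                  self.store.put(page_id, self.ram.pop(page_id))

              def swap_in(self, page_id):
                  self.ram[page_id] = self.store.get(page_id)

          vm = GuestVM(RemoteBackingStore())
          vm.ram["page0"] = b"\x00" * 4096
          vm.swap_out("page0")  # page now lives on the second node
          vm.swap_in("page0")   # and is faulted back on demand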

    11. Intro to computer programming, no computer required! | Argonne...

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      ... "Computational thinking requires you to think in abstractions," said Papka, who spoke to computer science and computer-aided design students at Kaneland High School in Maple Park about ...

    12. computing | National Nuclear Security Administration

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      computing NNSA Announces Procurement of Penguin Computing Clusters to Support Stockpile Stewardship at National Labs The National Nuclear Security Administration's (NNSA's) Lawrence Livermore National Laboratory today announced the awarding of a subcontract to Penguin Computing - a leading developer of high-performance Linux cluster computing systems based in Silicon Valley - to bolster computing for stockpile

    13. Can Cloud Computing Address the Scientific Computing Requirements for DOE

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Researchers? Well, Yes, No and Maybe. January 30, 2012. Jon Bashor, Jbashor@lbl.gov, +1 510-486-5849. Magellan at NERSC. After a two-year study of the feasibility of cloud computing systems for meeting the ever-increasing computational needs of scientists,

    14. Comparison of International Energy Intensities across the G7 and other parts of Europe, including Ukraine

      U.S. Energy Information Administration (EIA) Indexed Site

      Comparison of International Energy Intensities across the G7 and other parts of Europe, including Ukraine Elizabeth Sendich November 2014 Independent Statistics & Analysis www.eia.gov U.S. Energy Information Administration Washington, DC 20585 This paper is released to encourage discussion and critical comment. The analysis and conclusions expressed here are those of the authors and not necessarily those of the U.S. Energy Information Administration. WORKING PAPER SERIES November 2014

    15. CAD-centric Computation Management System for a Virtual TBM

      SciTech Connect (OSTI)

      Ramakanth Munipalli; K.Y. Szema; P.Y. Huang; C.M. Rowell; A.Ying; M. Abdou

      2011-05-03

      HyPerComp Inc., in research collaboration with TEXCEL, has set out to build a Virtual Test Blanket Module (VTBM) computational system to address the need in contemporary fusion research for simulating the integrated behavior of the blanket, divertor, and plasma-facing components in a fusion environment. Physical phenomena to be considered in a VTBM include fluid flow, heat transfer, mass transfer, neutronics, structural mechanics, and electromagnetics. We seek to integrate well-established (third-party) simulation software in the various disciplines mentioned above. The integrated modeling process will enable user groups to interoperate using a common modeling platform at various stages of the analysis. Since CAD is at the core of the simulation (as opposed to computational meshes, which are different for each problem), VTBM will have a well-developed CAD interface governing CAD model editing, cleanup, parameter extraction, model deformation (based on simulation), and CAD-based data interpolation. In Phase I, we built the CAD hub of the proposed VTBM and demonstrated its use in modeling a liquid breeder blanket module with coupled MHD and structural mechanics using HIMAG and ANSYS. A complete graphical user interface of the VTBM was created, which will form the foundation of any future development. Conservative data interpolation via CAD (as opposed to mesh-based transfer) and the regeneration of CAD models based upon computed deflections are among the other highlights of Phase I activity.

    16. Applications in Data-Intensive Computing

      SciTech Connect (OSTI)

      Shah, Anuj R.; Adkins, Joshua N.; Baxter, Douglas J.; Cannon, William R.; Chavarría-Miranda, Daniel; Choudhury, Sutanay; Gorton, Ian; Gracio, Deborah K.; Halter, Todd D.; Jaitly, Navdeep; Johnson, John R.; Kouzes, Richard T.; Macduff, Matt C.; Marquez, Andres; Monroe, Matthew E.; Oehmen, Christopher S.; Pike, William A.; Scherrer, Chad; Villa, Oreste; Webb-Robertson, Bobbie-Jo M.; Whitney, Paul D.; Zuljevic, Nino

      2010-04-01

      This book chapter, to be published in Advances in Computers, Volume 78, in 2010, describes applications of data-intensive computing (DIC). This is an invited chapter resulting from a previous publication on DIC. This work summarizes efforts coming out of PNNL's Data Intensive Computing Initiative. Advances in technology have empowered individuals with the ability to generate digital content with mouse clicks and voice commands. Digital pictures, emails, text messages, home videos, audio, and webpages are common examples of digital content that are generated on a regular basis. Data-intensive computing facilitates human understanding of complex problems. Data-intensive applications provide timely and meaningful analytical results in response to exponentially growing data complexity and associated analysis requirements through the development of new classes of software, algorithms, and hardware.

    17. Computational mechanics research and support for aerodynamics and hydraulics at TFHRC. Quarterly report January through March 2011. Year 1 Quarter 2 progress report.

      SciTech Connect (OSTI)

      Lottes, S. A.; Kulak, R. F.; Bojanowski, C.

      2011-05-19

      This project was established with a new interagency agreement between the Department of Energy and the Department of Transportation to provide collaborative research, development, and benchmarking of advanced three-dimensional computational mechanics analysis methods to the aerodynamics and hydraulics laboratories at the Turner-Fairbank Highway Research Center for a period of five years, beginning in October 2010. The analysis methods employ well-benchmarked and supported commercial computational mechanics software. Computational mechanics encompasses the areas of Computational Fluid Dynamics (CFD), Computational Wind Engineering (CWE), Computational Structural Mechanics (CSM), and Computational Multiphysics Mechanics (CMM) applied in Fluid-Structure Interaction (FSI) problems. The major areas of focus of the project are wind and water loads on bridges - superstructure, deck, cables, and substructure (including soil), primarily during storms and flood events - and the risks that these loads pose to structural failure. For flood events at bridges, another major focus of the work is assessment of the risk to bridges caused by scour of stream and riverbed material away from the foundations of a bridge. Other areas of current research include modeling of flow through culverts to assess them for fish passage, modeling of the salt spray transport into bridge girders to address suitability of using weathering steel in bridges, vehicle stability under high wind loading, and the use of electromagnetic shock absorbers to improve vehicle stability under high wind conditions. This quarterly report documents technical progress on the project tasks for the period of January through March 2011.

    18. in High Performance Computing Computer System, Cluster, and Networking...

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      iSSH v. Auditd: Intrusion Detection in High Performance Computing Computer System, Cluster, and Networking Summer Institute David Karns, New Mexico State University Katy Protin,...

    19. Scheduling applications for execution on a plurality of compute nodes of a parallel computer to manage temperature of the nodes during execution

      DOE Patents [OSTI]

      Archer, Charles J; Blocksome, Michael A; Peters, Amanda E; Ratterman, Joseph D; Smith, Brian E

      2012-10-16

      Methods, apparatus, and products are disclosed for scheduling applications for execution on a plurality of compute nodes of a parallel computer to manage temperature of the plurality of compute nodes during execution that include: identifying one or more applications for execution on the plurality of compute nodes; creating a plurality of physically discontiguous node partitions in dependence upon temperature characteristics for the compute nodes and a physical topology for the compute nodes, each discontiguous node partition specifying a collection of physically adjacent compute nodes; and assigning, for each application, that application to one or more of the discontiguous node partitions for execution on the compute nodes specified by the assigned discontiguous node partitions.
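
      One way to read the partitioning step is as connected components over the "cool enough" nodes: each partition is a block of physically adjacent nodes, while the partitions themselves are discontiguous from one another. The sketch below makes that reading concrete; the temperature map, adjacency list, and round-robin assignment are assumptions of this sketch.

          def assign_partitions(apps, nodes, temps, neighbors, max_temp):
              # Nodes cooler than max_temp are eligible for new work.
              cool = {n for n in nodes if temps[n] < max_temp}
              partitions, seen = [], set()
              for start in cool:
                  if start in seen:
                      continue
                  part, stack = [], [start]
                  while stack:  # flood-fill physically adjacent cool nodes
                      n = stack.pop()
                      if n in seen or n not in cool:
                          continue
                      seen.add(n)
                      part.append(n)
                      stack.extend(neighbors[n])
                  partitions.append(part)
              if not partitions:
                  return {}  # everything is too hot; defer scheduling
              # Assign each application to a partition, round-robin.
              return {app: partitions[i % len(partitions)]
                      for i, app in enumerate(apps)}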

    20. A model for heterogeneous materials including phase transformations

      SciTech Connect (OSTI)

      Addessio, F.L.; Clements, B.E.; Williams, T.O.

      2005-04-15

      A model is developed for particulate composites, which includes phase transformations in one or all of the constituents. The model is an extension of the method of cells formalism. Representative simulations for a single-phase, brittle particulate (SiC) embedded in a ductile material (Ti), which undergoes a solid-solid phase transformation, are provided. Also, simulations for a tungsten heavy alloy (WHA) are included. In the WHA analyses a particulate composite, composed of tungsten particles embedded in a tungsten-iron-nickel alloy matrix, is modeled. A solid-liquid phase transformation of the matrix material is included in the WHA numerical calculations. The example problems also demonstrate two approaches for generating free energies for the material constituents. Simulations for volumetric compression, uniaxial strain, biaxial strain, and pure shear are used to demonstrate the versatility of the model.

    1. The implications of spatial locality on scientific computing benchmark

      Office of Scientific and Technical Information (OSTI)

      selection and analysis (Conference) | SciTech Connect. Title: The implications of spatial locality on scientific computing benchmark selection and analysis. No abstract prepared. Authors: Kogge, Peter; Murphy, Richard C.; Rodrigues, Arun F.; Underwood, Keith Douglas (University of Notre Dame, Notre Dame, IN). Publication Date: 2005-08-01. OSTI

    2. Solar Energy Education. Renewable energy: a background text. [Includes glossary

      SciTech Connect (OSTI)

      Not Available

      1985-01-01

      Some of the most common forms of renewable energy are presented in this textbook for students. The topics include solar energy, wind power, hydroelectric power, biomass, ocean thermal energy, and tidal and geothermal energy. The main emphasis of the text is on the sun and the solar energy that it yields. Discussions on the sun's composition and the relationship between the earth, sun, and atmosphere are provided. Insolation, active and passive solar systems, and solar collectors are the subtopics included under solar energy. (BCS)

    3. Metal vapor laser including hot electrodes and integral wick

      DOE Patents [OSTI]

      Ault, E.R.; Alger, T.W.

      1995-03-07

      A metal vapor laser, specifically one utilizing copper vapor, is disclosed herein. This laser utilizes a plasma tube assembly including a thermally insulated plasma tube containing a specific metal, e.g., copper, and a buffer gas therein. The laser also utilizes means including hot electrodes located at opposite ends of the plasma tube for electrically exciting the metal vapor and heating its interior to a sufficiently high temperature to cause the metal contained therein to vaporize and for subjecting the vapor to an electrical discharge excitation in order to lase. The laser also utilizes external wicking arrangements, that is, wicking arrangements located outside the plasma tube. 5 figs.

    4. Metal vapor laser including hot electrodes and integral wick

      DOE Patents [OSTI]

      Ault, Earl R.; Alger, Terry W.

      1995-01-01

      A metal vapor laser, specifically one utilizing copper vapor, is disclosed herein. This laser utilizes a plasma tube assembly including a thermally insulated plasma tube containing a specific metal, e.g., copper, and a buffer gas therein. The laser also utilizes means including hot electrodes located at opposite ends of the plasma tube for electrically exciting the metal vapor and heating its interior to a sufficiently high temperature to cause the metal contained therein to vaporize and for subjecting the vapor to an electrical discharge excitation in order to lase. The laser also utilizes external wicking arrangements, that is, wicking arrangements located outside the plasma tube.

    5. Tunable cavity resonator including a plurality of MEMS beams

      SciTech Connect (OSTI)

      Peroulis, Dimitrios; Fruehling, Adam; Small, Joshua Azariah; Liu, Xiaoguang; Irshad, Wasim; Arif, Muhammad Shoaib

      2015-10-20

      A tunable cavity resonator includes a substrate, a cap structure, and a tuning assembly. The cap structure extends from the substrate, and at least one of the substrate and the cap structure defines a resonator cavity. The tuning assembly is positioned at least partially within the resonator cavity. The tuning assembly includes a plurality of fixed-fixed MEMS beams configured for controllable movement relative to the substrate between an activated position and a deactivated position in order to tune a resonant frequency of the tunable cavity resonator.

    6. DOE Considers Natural Gas Utility Service Options: Proposal Includes

      Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

      30-mile Natural Gas Pipeline from Pasco to Hanford | Department of Energy Considers Natural Gas Utility Service Options: Proposal Includes 30-mile Natural Gas Pipeline from Pasco to Hanford DOE Considers Natural Gas Utility Service Options: Proposal Includes 30-mile Natural Gas Pipeline from Pasco to Hanford January 23, 2012 - 12:00pm Addthis Media Contacts Cameron Hardy, DOE , (509) 376-5365, Cameron.Hardy@rl.doe.gov RICHLAND, WASH. - The U.S. Department of Energy (DOE) is considering

    7. Methods of producing adsorption media including a metal oxide

      DOE Patents [OSTI]

      Mann, Nicholas R; Tranter, Troy J

      2014-03-04

      Methods of producing a metal oxide are disclosed. The method comprises dissolving a metal salt in a reaction solvent to form a metal salt/reaction solvent solution. The metal salt is converted to a metal oxide and a caustic solution is added to the metal oxide/reaction solvent solution to adjust the pH of the metal oxide/reaction solvent solution to less than approximately 7.0. The metal oxide is precipitated and recovered. A method of producing adsorption media including the metal oxide is also disclosed, as is a precursor of an active component including particles of a metal oxide.

    8. Thin film solar cell including a spatially modulated intrinsic layer

      DOE Patents [OSTI]

      Guha, Subhendu; Yang, Chi-Chung; Ovshinsky, Stanford R.

      1989-03-28

      One or more thin film solar cells in which the intrinsic layer of substantially amorphous semiconductor alloy material thereof includes at least a first band gap portion and a narrower band gap portion. The band gap of the intrinsic layer is spatially graded through a portion of the bulk thickness, said graded portion including a region removed from the intrinsic layer-dopant layer interfaces. The band gap of the intrinsic layer is always less than the band gap of the doped layers. The gradation of the intrinsic layer is effected such that the open circuit voltage and/or the fill factor of the one or plural solar cell structure is enhanced.

    9. Transient analysis of vibrations in nonideal multilayered piezoelectric devices

      SciTech Connect (OSTI)

      Hodgdon, M.L.

      1980-11-01

      A numerical method of solving the equations involved in the transient vibration analysis of nonideal multilayered piezoelectric devices is presented. The use of the computer code WONDY IV in obtaining the solution is described, along with a numerical example and experimental data. In addition, a method is included for approximating the values of creep or relaxation functions from steady-state attenuation data.

    10. Information Science, Computing, Applied Math

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Information Science, Computing, Applied Math. National security depends on science and technology. The United States relies on Los Alamos National Laboratory for the best of both. No place on Earth pursues a broader array of world-class scientific endeavors. Computer, Computational, and Statistical Sciences (CCS); High Performance Computing (HPC); Extreme Scale Computing, Co-design; supercomputing

    11. cDNA encoding a polypeptide including a hevein sequence

      DOE Patents [OSTI]

      Raikhel, N.V.; Broekaert, W.F.; Namhai Chua; Kush, A.

      1993-02-16

      A cDNA clone (HEV1) encoding hevein was isolated via polymerase chain reaction (PCR) using mixed oligonucleotides corresponding to two regions of hevein as primers and a Hevea brasiliensis latex cDNA library as a template. HEV1 is 1,018 nucleotides long and includes an open reading frame of 204 amino acids.

    12. Method and computer program product for maintenance and modernization backlogging

      DOE Patents [OSTI]

      Mattimore, Bernard G; Reynolds, Paul E; Farrell, Jill M

      2013-02-19

      According to one embodiment, a computer program product for determining future facility conditions includes a computer readable medium having computer readable program code stored therein. The computer readable program code includes computer readable program code for calculating a time period specific maintenance cost, for calculating a time period specific modernization factor, and for calculating a time period specific backlog factor. Future facility conditions equal the time period specific maintenance cost plus the time period specific modernization factor plus the time period specific backlog factor. In another embodiment, a computer-implemented method for calculating future facility conditions includes calculating a time period specific maintenance cost, calculating a time period specific modernization factor, and calculating a time period specific backlog factor. Future facility conditions equal the time period specific maintenance cost plus the time period specific modernization factor plus the time period specific backlog factor. Other embodiments are also presented.
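
      The claimed relation is a plain sum of three period-specific terms; a minimal sketch of it in Python (function and variable names are ours, not the patent's):

          def future_facility_conditions(maintenance_cost, modernization_factor, backlog_factor):
              """Future facility conditions for one time period, per the claim:
              the period-specific maintenance cost plus the modernization factor
              plus the backlog factor."""
              return maintenance_cost + modernization_factor + backlog_factor

          # Example with made-up figures for one fiscal year:
          print(future_facility_conditions(1.2e6, 3.5e5, 8.0e4))  # 1630000.0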

    13. Generalized Modeling of Enrichment Cascades That Include Minor Isotopes

      SciTech Connect (OSTI)

      Weber, Charles F

      2012-01-01

      The monitoring of enrichment operations may require innovative analysis to allow for imperfect or missing data. The presence of minor isotopes may help or hurt - they can complicate a calculation or provide additional data to corroborate it. However, they must be considered in a rigorous analysis, especially in cases involving reuse. This study considers matched-abundance-ratio cascades that involve at least three isotopes and allows generalized input that does not require all feed assays or the enrichment factor to be specified. Calculations are based on the equations developed for the MSTAR code but are generalized to allow input of various combinations of assays, flows, and other cascade properties. Traditional cascade models have required specification of the enrichment factor, all feed assays, and the product and waste assays of the primary enriched component. The calculation would then produce the numbers of stages in the enriching and stripping sections and the remaining assays in the waste and product streams. In cases where the enrichment factor or feed assays were not known, analysis was difficult or impossible. However, if other quantities are known (e.g., additional assays in waste or product streams), a reliable calculation is still possible with the new code, although such nonstandard input may introduce additional numerical difficulties into the calculation. Thus, the minimum input requirements for a stable solution are discussed, and a sample problem with a non-unique solution is described. Both heuristic and mathematically required guidelines are given to assist the application of cascade modeling to situations involving such nonstandard input. As a result, this work provides both a calculational tool and specific guidance for the evaluation of enrichment cascades in which traditional input data are either flawed or unknown. It is especially useful for cases involving minor isotopes, particularly when the minor isotope assays are desired (or required) to be important contributors to the overall analysis.
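
      As a rough illustration of the matched-abundance-ratio idea (not the MSTAR code itself), abundance ratios relative to a key isotope multiply by a mass-dependent separation factor at each enriching stage; all parameter values below are hypothetical:

          import numpy as np

          def product_assays(feed_assays, masses, m_star, q, n_stages):
              # Per-stage separation factors with MSTAR-style mass dependence:
              # lighter isotopes (m < m_star) enrich, heavier ones deplete.
              feed = np.asarray(feed_assays, dtype=float)
              alpha = q ** (m_star - np.asarray(masses, dtype=float))
              enriched = feed * alpha ** n_stages  # ratios scale by alpha each stage
              return enriched / enriched.sum()     # renormalize to assays

          # Hypothetical 4-isotope feed (U-234, U-235, U-236, U-238):
          print(product_assays([5e-5, 0.0072, 1e-4, 0.99265],
                               masses=[234, 235, 236, 238],
                               m_star=236.5, q=1.3, n_stages=10))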

    14. NREL: Computational Science Home Page

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      high-performance computing, computational science, applied mathematics, scientific data management, visualization, and informatics. NREL is home to the largest high performance...

    15. computers | National Nuclear Security Administration

      National Nuclear Security Administration (NNSA)

      Sandia donates 242 computers to northern California schools Sandia National Laboratories electronics technologist Mitch Williams prepares the disassembly of 242 computers for ...

    16. Careers | Argonne Leadership Computing Facility

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      At the Argonne Leadership Computing Facility, we are helping to redefine what's possible in computational science. With some of the most powerful supercomputers in the world and a ...

    17. Computer simulation | Open Energy Information

      Open Energy Info (EERE)

      Computer simulation Jump to: navigation, search OpenEI Reference LibraryAdd to library Web Site: Computer simulation Author wikipedia Published wikipedia, 2013 DOI Not Provided...

    18. Super recycled water: quenching computers

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Super recycled water: quenching computers Super recycled water: quenching computers New facility and methods support conserving water and creating recycled products. Using reverse ...

    19. Human-computer interface

      DOE Patents [OSTI]

      Anderson, Thomas G.

      2004-12-21

      The present invention provides a method of human-computer interfacing. Force feedback allows intuitive navigation and control near a boundary between regions in a computer-represented space. For example, the method allows a user to interact with a virtual craft, then push through the windshield of the craft to interact with the virtual world surrounding the craft. As another example, the method allows a user to feel transitions between different control domains of a computer representation of a space. The method can provide for force feedback that increases as a user's locus of interaction moves near a boundary, then perceptibly changes (e.g., abruptly drops or changes direction) when the boundary is traversed.
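
      The described force profile is easy to state concretely; a toy one-dimensional sketch (constants and names are ours, not the patent's): resistance ramps up as the interaction point nears the boundary, then releases once it is pushed through.

          def boundary_force(d, k=1.0, d_max=1.0):
              """d is the distance to the boundary, positive on the near side.
              Force ramps from 0 (far away) to k (at the boundary), then drops
              abruptly to 0 once the boundary is traversed (d < 0)."""
              if d < 0:
                  return 0.0                       # boundary traversed: release
              return k * (d_max - min(d, d_max)) / d_max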

    20. Real time analysis under EDS

      SciTech Connect (OSTI)

      Schneberk, D.

      1985-07-01

      This paper describes the analysis component of the Enrichment Diagnostic System (EDS) developed for the Atomic Vapor Laser Isotope Separation Program (AVLIS) at Lawrence Livermore National Laboratory (LLNL). Four different types of analysis are performed on data acquired through EDS: (1) absorption spectroscopy on laser-generated spectral lines, (2) mass spectrometer analysis, (3) general purpose waveform analysis, and (4) separation performance calculations. The information produced from this data includes: measures of particle density and velocity, partial pressures of residual gases, and overall measures of isotope enrichment. The analysis component supports a variety of real-time modeling tasks, a means for broadcasting data to other nodes, and a great degree of flexibility for tailoring computations to the exact needs of the process. A particular data base structure and program flow is common to all types of analysis. Key elements of the analysis component are: (1) a fast access data base which can configure all types of analysis, (2) a selected set of analysis routines, and (3) a general purpose data manipulation and graphics package for the results of real time analysis. Each of these components is described with an emphasis upon how each contributes to overall system capability. 3 figs.
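
      Of the four analysis types, the absorption-spectroscopy step is essentially a Beer-Lambert inversion; a generic sketch of that calculation (not the EDS code; symbols and example values are ours):

          import numpy as np

          def column_density(transmitted, incident, sigma):
              """Invert Beer-Lambert attenuation I = I0*exp(-sigma*N) for the
              line-integrated particle density N (cm^-2), given the absorption
              cross-section sigma (cm^2)."""
              return np.log(incident / transmitted) / sigma

          # Example: 20% transmission on a line with sigma = 1e-13 cm^2
          print(column_density(0.2, 1.0, 1.0e-13))   # ~1.6e13 cm^-2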

    1. Computers for artificial intelligence a technology assessment and forecast

      SciTech Connect (OSTI)

      Miller, R.K.

      1986-01-01

      This study reviews the development and current state of the art in computers for artificial intelligence, including LISP machines, AI workstations, professional and engineering workstations, minicomputers, mainframes, and supercomputers. Major computer systems for AI applications are reviewed. The use of personal computers for expert system development is discussed, and AI software for the IBM PC, Texas Instruments Professional Computer, and Apple Macintosh is presented. Current research aimed at developing a new computer for artificial intelligence is described, and future technological developments are discussed.

    2. [Article 1 of 7: Motivates and Includes the Consumer]

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Article 2 of 7: Research on the Characteristics of a Modern Grid by the NETL Modern Grid Strategy Team - Accommodates All Generation and Storage Options. Last month we presented the first Principal Characteristic of a Modern Grid, "Motivates and Includes the Consumer". This month we present a second characteristic, "Accommodates All Generation and Storage Options". This characteristic will fundamentally transition today's grid from a centralized model for generation to one that also has

    3. [Article 1 of 7: Motivates and Includes the Consumer]

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Series on the Seven Principal Characteristics of the Modern Grid [Article 1 of 7: Motivates and Includes the Consumer] In October 2007, Ken Silverstein (Energy Central) wrote an editorial, "Empowering Consumers", that hit a strong, kindred chord with the DOE/National Energy Technology Laboratory (NETL) Modern Grid Strategy team. Through subsequent discussions with Ken and Bill Opalka, Editor-in-Chief, Topics Centers, we decided it would be informative to the industry if the Modern Grid

    4. Evaporative cooler including one or more rotating cooler louvers

      DOE Patents [OSTI]

      Gerlach, David W

      2015-02-03

      An evaporative cooler may include an evaporative cooler housing with a duct extending therethrough, a plurality of cooler louvers with respective porous evaporative cooler pads, and a working fluid source conduit. The cooler louvers are arranged within the duct and rotatably connected to the cooler housing along respective louver axes. The source conduit provides an evaporative cooler working fluid to the cooler pads during at least one mode of operation.

    5. Computation of Wave Loads under Multidirectional Sea States for Floating Offshore Wind Turbines: Preprint

      SciTech Connect (OSTI)

      Duarte, T.; Gueydon, S.; Jonkman, J.; Sarmento, A.

      2014-03-01

      This paper focuses on the analysis of a floating wind turbine under multidirectional wave loading. Special attention is given to the different methods used to synthesize the multidirectional sea state. This analysis includes the double-sum and single-sum methods, as well as an equal-energy discretization of the directional spectrum. These three methods are compared in detail, including the ergodicity of the solution obtained. From the analysis, the equal-energy method proved to be the most computationally efficient while still retaining the ergodicity of the solution. This method was chosen to be implemented in the numerical code FAST. Preliminary results on the influence of these wave loads on a floating wind turbine showed significant additional roll and sway motion of the platform.
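
      The equal-energy idea itself is compact: split the directional spreading function's cumulative energy into equal bins and take one heading per bin. A generic sketch with a cosine-squared spreading function (not the FAST implementation; parameters are ours):

          import numpy as np

          def equal_energy_directions(n, half_width=np.pi / 2):
              """Return n wave headings, each carrying an equal share of the
              directional energy, via inverse-CDF sampling at bin midpoints."""
              theta = np.linspace(-half_width, half_width, 10001)
              spreading = np.cos(np.pi * theta / (2 * half_width)) ** 2
              cdf = np.cumsum(spreading)
              cdf /= cdf[-1]
              midpoints = (np.arange(n) + 0.5) / n      # equal-energy bin centers
              return np.interp(midpoints, cdf, theta)

          headings = equal_energy_directions(11)        # 11 equally energetic headings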

    6. Electrolytes including fluorinated solvents for use in electrochemical cells

      DOE Patents [OSTI]

      Tikhonov, Konstantin; Yip, Ka Ki; Lin, Tzu-Yuan

      2015-07-07

      Provided are electrochemical cells and electrolytes used to build such cells. The electrolytes include ion-supplying salts and fluorinated solvents capable of maintaining single-phase solutions with the salts between about -30.degree. C. and about 80.degree. C. The fluorinated solvents, such as fluorinated carbonates, fluorinated esters, and fluorinated ethers, are less flammable than their non-fluorinated counterparts and increase the safety characteristics of cells containing these solvents. The amount of fluorinated solvents in electrolytes may be between about 30% and 80% by weight, not accounting for the weight of the salts. Fluorinated salts, such as fluoroalkyl-substituted LiPF.sub.6 and fluoroalkyl-substituted LiBF.sub.4 salts, and linear and cyclic imide salts as well as methide salts including fluorinated alkyl groups, may be used due to their solubility in the fluorinated solvents. In some embodiments, the electrolyte may also include a flame retardant, such as a phosphazene or, more specifically, a cyclic phosphazene, and/or one or more ionic liquids.

    7. Conversion of geothermal waste to commercial products including silica

      DOE Patents [OSTI]

      Premuzic, Eugene T.; Lin, Mow S.

      2003-01-01

      A process for the treatment of geothermal residue includes contacting the pigmented amorphous silica-containing component with a depigmenting reagent one or more times to depigment the silica and produce a mixture containing depigmented amorphous silica and depigmenting reagent containing pigment material; and separating the depigmented amorphous silica from the depigmenting reagent to yield depigmented amorphous silica. Before or after the depigmenting contacting, the geothermal residue or depigmented silica can be treated with a metal solubilizing agent to produce another mixture containing a pigmented or unpigmented amorphous silica-containing component and a solubilized metal-containing component; separating these components from each other produces an amorphous silica product substantially devoid of metals and at least partially devoid of pigment. The amorphous silica product can be neutralized and thereafter dried at a temperature from about 25.degree. C. to 300.degree. C. The morphology of the silica product can be varied through the process conditions, including the sequence of contacting steps, the pH of the depigmenting reagent, and the neutralization and drying conditions, to tailor the amorphous silica for commercial use in products including filler for paint, paper, rubber, and polymers, and chromatographic material.

    8. Accelerating Battery Design Using Computer-Aided Engineering Tools: Preprint

      SciTech Connect (OSTI)

      Pesaran, A.; Heon, G. H.; Smith, K.

      2011-01-01

      Computer-aided engineering (CAE) is a proven pathway, especially in the automotive industry, to improve performance by resolving the relevant physics in complex systems, shortening the product development design cycle (thus reducing cost), and providing an efficient way to evaluate parameters for robust designs. Academic models include the relevant physics details but neglect engineering complexities. Industry models include the relevant macroscopic geometry and system conditions but simplify the fundamental physics too much. Most of the CAE battery tools for in-house use are custom model codes that require expert users. There is a need to make these battery modeling and design tools more accessible to end users such as battery developers, pack integrators, and vehicle makers. Developing integrated and physics-based CAE battery tools can reduce the design, build, test, break, re-design, re-build, and re-test cycle and help lower costs. NREL has been involved in developing various models to predict the thermal and electrochemical performance of large-format cells and has used commercial three-dimensional finite-element analysis and computational fluid dynamics tools to study battery pack thermal issues. These NREL cell and pack design tools can be integrated to help support the automotive industry and to accelerate battery design.

    9. MHD computations for stellarators

      SciTech Connect (OSTI)

      Johnson, J.L.

      1985-12-01

      Considerable progress has been made in the development of computational techniques for studying the magnetohydrodynamic equilibrium and stability properties of three-dimensional configurations. Several different approaches have evolved to the point where comparison of results determined with different techniques shows good agreement. 55 refs., 7 figs.

    10. Including environmental concerns in management strategies for depleted uranium hexafluoride

      SciTech Connect (OSTI)

      Goldberg, M.; Avci, H.I.; Bradley, C.E.

      1995-12-31

      One of the major programs within the Office of Nuclear Energy, Science, and Technology of the US Department of Energy (DOE) is the depleted uranium hexafluoride (DUF6) management program. The program is intended to find a long-term management strategy for the DUF6 that is currently stored in approximately 46,400 cylinders at Paducah, KY; Portsmouth, OH; and Oak Ridge, TN, USA. The program has four major components: technology assessment, engineering analysis, cost analysis, and the environmental impact statement (EIS). From the beginning of the program, the DOE has incorporated environmental considerations into the process of strategy selection. Currently, the DOE has no preferred alternative. The results of the environmental impacts assessment from the EIS, as well as the results from the other components of the program, will be factored into the strategy selection process. In addition to the DOE's current management plan, other alternatives, including continued storage, reuse, or disposal of the depleted uranium, will be considered in the EIS. The EIS is expected to be completed and issued in its final form in the fall of 1997.

    11. A coke oven model including thermal decomposition kinetics of tar

      SciTech Connect (OSTI)

      Munekane, Fuminori; Yamaguchi, Yukio; Tanioka, Seiichi

      1997-12-31

      A new one-dimensional coke oven model has been developed for simulating the amount and the characteristics of by-products such as tar and gas as well as coke. This model consists of both heat transfer and chemical kinetics including thermal decomposition of coal and tar. The chemical kinetics constants are obtained by estimation based on the results of experiments conducted to investigate the thermal decomposition of both coal and tar. The calculation results using the new model are in good agreement with experimental ones.

    12. Composite material including nanocrystals and methods of making

      DOE Patents [OSTI]

      Bawendi, Moungi G.; Sundar, Vikram C.

      2008-02-05

      Temperature-sensing compositions can include an inorganic material, such as a semiconductor nanocrystal. The nanocrystal can be a dependable and accurate indicator of temperature. The intensity of emission of the nanocrystal varies with temperature and can be highly sensitive to surface temperature. The nanocrystals can be processed with a binder to form a matrix, which can be varied by altering the chemical nature of the surface of the nanocrystal. A nanocrystal with a compatibilizing outer layer can be incorporated into a coating formulation and retain its temperature sensitive emissive properties.

    13. Composite material including nanocrystals and methods of making

      DOE Patents [OSTI]

      Bawendi, Moungi G.; Sundar, Vikram C.

      2010-04-06

      Temperature-sensing compositions can include an inorganic material, such as a semiconductor nanocrystal. The nanocrystal can be a dependable and accurate indicator of temperature. The intensity of emission of the nanocrystal varies with temperature and can be highly sensitive to surface temperature. The nanocrystals can be processed with a binder to form a matrix, which can be varied by altering the chemical nature of the surface of the nanocrystal. A nanocrystal with a compatibilizing outer layer can be incorporated into a coating formulation and retain its temperature sensitive emissive properties.
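
      A toy use of the effect described in these two records, assuming (for illustration only) a locally linear intensity-temperature response; real nanocrystal response curves require a measured calibration:

          import numpy as np

          def temperature_from_intensity(cal_temps, cal_intensities, measured):
              """Fit emission intensity vs. temperature from calibration points,
              then invert a measured intensity to a surface temperature."""
              slope, intercept = np.polyfit(cal_temps, cal_intensities, 1)
              return (measured - intercept) / slope

          # Hypothetical calibration: intensity falls as temperature rises
          print(temperature_from_intensity([20.0, 40.0, 60.0],
                                           [1.00, 0.92, 0.84], 0.88))  # ~50.0 C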

    14. What To Include In The Whistleblower Complaint? | National Nuclear Security

      National Nuclear Security Administration (NNSA)

      Your complaint does not need to be in any specific form but must be signed by you and contain the following: A statement specifically describing (1) the alleged retaliation taken against you and (2) the disclosure, participation, or refusal that you believe gave rise to the retaliation; A statement that you are not currently pursuing a remedy under State or other applicable law, as described in Sec. 708.15 of this subpart; A statement

    15. cDNA encoding a polypeptide including a hevein sequence

      DOE Patents [OSTI]

      Raikhel, Natasha V.; Broekaert, Willem F.; Chua, Nam-Hai; Kush, Anil

      1993-02-16

      A cDNA clone (HEV1) encoding hevein was isolated via polymerase chain reaction (PCR) using mixed oligonucleotides corresponding to two regions of hevein as primers and a Hevea brasiliensis latex cDNA library as a template. HEV1 is 1018 nucleotides long and includes an open reading frame of 204 amino acids. The deduced amino acid sequence contains a pu

    16. Composite armor, armor system and vehicle including armor system

      DOE Patents [OSTI]

      Chu, Henry S.; Jones, Warren F.; Lacy, Jeffrey M.; Thinnes, Gary L.

      2013-01-01

      Composite armor panels are disclosed. Each panel comprises a plurality of functional layers comprising at least an outermost layer, an intermediate layer and a base layer. An armor system incorporating armor panels is also disclosed. Armor panels are mounted on carriages movably secured to adjacent rails of a rail system. Each panel may be moved on its associated rail and into partially overlapping relationship with another panel on an adjacent rail for protection against incoming ordnance from various directions. The rail system may be configured as at least a part of a ring, and be disposed about a hatch on a vehicle. Vehicles including an armor system are also disclosed.

    17. Sandia National Laboratories: Careers: Computer Science

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Advanced software research & development Collaborative technologies Computational science and mathematics High-performance computing Visualization and scientific computing Advanced ...

    18. Community Assessment Tool for Public Health Emergencies Including Pandemic Influenza

      SciTech Connect (OSTI)

      ORAU's Oak Ridge Institute for Science Education (HCTT-CHE)

      2011-04-14

      The Community Assessment Tool (CAT) for Public Health Emergencies Including Pandemic Influenza (hereafter referred to as the CAT) was developed as a result of feedback received from several communities. These communities participated in workshops focused on influenza pandemic planning and response. The 2008 through 2011 workshops were sponsored by the Centers for Disease Control and Prevention (CDC). Feedback during those workshops indicated the need for a tool that a community can use to assess its readiness for a disaster - readiness from a total healthcare perspective, not just hospitals, but the whole healthcare system. The CAT intends to do just that - help strengthen existing preparedness plans by allowing the healthcare system and other agencies to work together during an influenza pandemic. It helps reveal each core agency partner's (sector's) capabilities and resources, and highlights cases of the same vendors being used for resource supplies (e.g., personal protective equipment [PPE] and oxygen) by the partners (e.g., public health departments, clinics, or hospitals). The CAT also addresses gaps in the community's capabilities or potential shortages in resources. This tool has been reviewed by a variety of key subject matter experts from federal, state, and local agencies and organizations. It has also been piloted with communities of various population sizes, from large urban to small rural.

    19. Foundational Tools for Petascale Computing

      SciTech Connect (OSTI)

      Miller, Barton

      2014-05-19

      The Paradyn project has a history of developing algorithms, techniques, and software that push the cutting edge of tool technology for high-end computing systems. Under this funding, we are working on a three-year agenda to make substantial new advances in support of new and emerging Petascale systems. The overall goal for this work is to address the steady increase in complexity of these petascale systems. Our work covers two key areas: (1) The analysis, instrumentation and control of binary programs. Work in this area falls under the general framework of the Dyninst API tool kits. (2) Infrastructure for building tools and applications at extreme scale. Work in this area falls under the general framework of the MRNet scalability framework. Note that work done under this funding is closely related to work done under a contemporaneous grant, “High-Performance Energy Applications and Systems”, SC0004061/FG02-10ER25972, UW PRJ36WV.

    20. Locating hardware faults in a data communications network of a parallel computer

      DOE Patents [OSTI]

      Archer, Charles J.; Megerian, Mark G.; Ratterman, Joseph D.; Smith, Brian E.

      2010-01-12

      Location of hardware faults in a data communications network of a parallel computer is disclosed. Such a parallel computer includes a plurality of compute nodes and a data communications network that couples the compute nodes for data communications and organizes the compute nodes as a tree. Locating hardware faults includes identifying a next compute node as a parent node and a root of a parent test tree, identifying for each child compute node of the parent node a child test tree having the child compute node as root, running a same test suite on the parent test tree and each child test tree, and identifying the parent compute node as having a defective link connected from the parent compute node to a child compute node if the test suite fails on the parent test tree and succeeds on all the child test trees.
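
      The claimed localization rule fits in a few lines: a parent-to-child link is suspect when the suite fails on the tree rooted at the parent but passes on every child subtree. A sketch of that rule (data structures and names are ours, not from the patent):

          def find_suspect_links(children, run_suite, root):
              """children maps a node to its child nodes; run_suite(n) runs the
              test suite on the subtree rooted at n and returns True on success."""
              suspect = []
              def visit(parent):
                  kids = children.get(parent, [])
                  if not kids:
                      return
                  if not run_suite(parent) and all(run_suite(c) for c in kids):
                      # Failure not explained by any child subtree: blame the
                      # links from this parent to its children.
                      suspect.extend((parent, c) for c in kids)
                  for c in kids:
                      visit(c)
              visit(root)
              return suspect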

    1. Extreme Scale Computing, Co-design

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Information Science, Computing, Applied Math Extreme Scale Computing, Co-design Extreme Scale Computing, Co-design Computational co-design may facilitate revolutionary designs ...

    2. Computational Age Dating of Special Nuclear Materials

      SciTech Connect (OSTI)

      2012-06-30

      This slide show presents an overview of the Constrained Progressive Reversal (CPR) method for computing decays, age dating, and spoof detection. The CPR method is: capable of temporally profiling an SNM sample; precise (compared with a known decay code, such as ORIGEN); and easy (for computer implementation and analysis). We illustrate the use of CPR for age dating and spoof detection with real SNM data. If the SNM is pure, CPR may be used to derive its age; if the SNM is mixed, CPR will indicate that it is mixed or spoofed.
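
      The slides do not spell out the CPR equations, but the underlying radiochronometry relation for a pure sample is standard: for a parent decaying to an initially absent, retained daughter, D/P = exp(lambda*t) - 1. A generic sketch (not the CPR algorithm; the example values are ours, and daughter decay is neglected):

          import math

          def age_from_ratio(daughter_to_parent, parent_half_life_years):
              """Invert D/P = exp(lambda*t) - 1 for the sample age t."""
              lam = math.log(2) / parent_half_life_years
              return math.log1p(daughter_to_parent) / lam

          # Example: Pu-241 -> Am-241 (half-life ~14.3 y), measured D/P = 0.05
          print(age_from_ratio(0.05, 14.3))   # ~1.0 year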

    3. Providing full point-to-point communications among compute nodes of an operational group in a global combining network of a parallel computer

      DOE Patents [OSTI]

      Archer, Charles J; Faraj, Ahmad A; Inglett, Todd A; Ratterman, Joseph D

      2013-04-16

      Methods, apparatus, and products are disclosed for providing full point-to-point communications among compute nodes of an operational group in a global combining network of a parallel computer, each compute node connected to each adjacent compute node in the global combining network through a link, that include: receiving a network packet in a compute node, the network packet specifying a destination compute node; selecting, in dependence upon the destination compute node, at least one of the links for the compute node along which to forward the network packet toward the destination compute node; and forwarding the network packet along the selected link to the adjacent compute node connected to the compute node through the selected link.
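
      On a tree topology the claimed link selection reduces to: send the packet down the child whose subtree contains the destination, otherwise send it up. A sketch under that reading (data structures and names are ours):

          def select_link(node, dest, parent, subtree):
              """subtree[n] maps each child of n to the set of nodes below that
              child; parent[n] is n's upward link (None at the root)."""
              if node == dest:
                  return None                       # packet has arrived
              for child, below in subtree[node].items():
                  if dest == child or dest in below:
                      return child                  # downward link toward dest
              return parent[node]                   # otherwise route upward

          # Example: chain a - b - c rooted at a
          subtree = {"a": {"b": {"c"}}, "b": {"c": set()}, "c": {}}
          parent = {"a": None, "b": "a", "c": "b"}
          print(select_link("b", "c", parent, subtree))   # 'c' (downward)
          print(select_link("b", "a", parent, subtree))   # 'a' (upward)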

    4. Providing nearest neighbor point-to-point communications among compute nodes of an operational group in a global combining network of a parallel computer

      DOE Patents [OSTI]

      Archer, Charles J.; Faraj, Ahmad A.; Inglett, Todd A.; Ratterman, Joseph D.

      2012-10-23

      Methods, apparatus, and products are disclosed for providing nearest neighbor point-to-point communications among compute nodes of an operational group in a global combining network of a parallel computer, each compute node connected to each adjacent compute node in the global combining network through a link, that include: identifying each link in the global combining network for each compute node of the operational group; designating one of a plurality of point-to-point class routing identifiers for each link such that no compute node in the operational group is connected to two adjacent compute nodes in the operational group with links designated for the same class routing identifiers; and configuring each compute node of the operational group for point-to-point communications with each adjacent compute node in the global combining network through the link between that compute node and that adjacent compute node using that link's designated class routing identifier.
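
      The designation constraint here is exactly a proper edge coloring: no node may see the same class routing identifier on two of its links. A greedy sketch of that constraint (ours, not IBM's implementation):

          def assign_class_ids(links):
              """Give each link the smallest identifier unused at either endpoint."""
              used = {}                              # node -> identifiers at that node
              ids = {}
              for u, v in links:
                  taken = used.setdefault(u, set()) | used.setdefault(v, set())
                  cid = next(i for i in range(len(links) + 1) if i not in taken)
                  ids[(u, v)] = cid
                  used[u].add(cid)
                  used[v].add(cid)
              return ids

          # A 3-node chain needs only two identifiers:
          print(assign_class_ids([("a", "b"), ("b", "c")]))
          # {('a', 'b'): 0, ('b', 'c'): 1}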

    5. SCC: The Strategic Computing Complex

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      SCC: The Strategic Computing Complex The Strategic Computing Complex (SCC) is a secured supercomputing facility that supports the calculation, modeling, simulation, and visualization of complex nuclear weapons data in support of the Stockpile Stewardship Program. The 300,000-square-foot, vault-type building features an unobstructed 43,500-square-foot computer room, which is an open room about three-fourths the size of a football field.

    6. Magellan: A Cloud Computing Testbed

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Magellan: A Cloud Computing Testbed Cloud computing is gaining a foothold in the business world, but can clouds meet the specialized needs of scientists? That was one of the questions NERSC's Magellan cloud computing testbed explored between 2009 and 2011. The goal of Magellan, a project funded through the U.S. Department of Energy (DOE) Oce

    7. Software and High Performance Computing

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Software and High Performance Computing Providing world-class high performance computing capability that enables unsurpassed solutions to complex problems of strategic national interest. Contact: Kathleen McDonald, Head of Intellectual Property, Business Development Executive, Richard P. Feynman Center for Innovation, (505) 667-5844. Software Computational physics, computer science, applied mathematics, statistics and the

    8. Data aNd Computation Reordering package using temporal and spatial hypergraphs

      Energy Science and Technology Software Center (OSTI)

      2004-08-01

      A package for experimentation with data and computation reordering algorithms. One can input various file formats representing sparse matrices, reorder data, and computation through the specification of command line parameters, and time benchmark computations that use the new data and computation ordering. The package includes existing reordering algorithms and new ones introduced by the authors based on the temporal and spatial locality hypergraph model.
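
      The simplest member of this family of algorithms is first-touch data reordering: renumber data items in the order the computation first touches them. The package's hypergraph-based orderings are more sophisticated, but this baseline conveys the idea (a sketch, ours):

          def first_touch_order(access_stream, n_items):
              """Return a permutation listing items in first-touch order,
              with never-touched items appended at the end."""
              perm, seen = [], set()
              for item in access_stream:
                  if item not in seen:
                      seen.add(item)
                      perm.append(item)
              perm.extend(i for i in range(n_items) if i not in seen)
              return perm

          print(first_touch_order([3, 1, 3, 0], n_items=5))   # [3, 1, 0, 2, 4]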

    9. Fuel cell repeater unit including frame and separator plate

      DOE Patents [OSTI]

      Yamanis, Jean; Hawkes, Justin R; Chiapetta, Jr., Louis; Bird, Connie E; Sun, Ellen Y; Croteau, Paul F

      2013-11-05

      An example fuel cell repeater includes a separator plate and a frame establishing at least a portion of a flow path that is operative to communicate fuel to or from at least one fuel cell held by the frame relative to the separator plate. The flow path has a perimeter, and any fuel within the perimeter flows across the at least one fuel cell in a first direction. The separator plate, the frame, or both establish at least one conduit positioned outside the flow path perimeter. The conduit is outside of the flow path perimeter and is configured to direct flow in a second, different direction. The conduit is fluidly coupled with the flow path.

    10. Nijmegen soft-core potential including two-meson exchange

      SciTech Connect (OSTI)

      Stoks, V.G.J.; Rijken, T.A.

      1995-05-10

      We report on the progress of the construction of the extended soft-core (ESC) Nijmegen potential. Next to the standard one-boson-exchange parts, the model includes the pion-meson-exchange potentials due to the parallel and crossed-box diagrams, as well as the one-pair and two-pair diagrams, vertices for which can be identified with similar interactions appearing in chiral-symmetric Lagrangians. Although the ESC potential is still under construction, it already gives an excellent description of all NN scattering data below 350 MeV with χ²/datum = 1.3. © 1995 American Institute of Physics.

    11. Electro-optical device including a nitrogen containing electrolyte

      DOE Patents [OSTI]

      Bates, John B.; Dudney, Nancy J.; Gruzalski, Greg R.; Luck, Christopher F.

      1995-01-01

      Described is a thin-film battery, especially a thin-film microbattery, and a method for making same having application as a backup or primary integrated power source for electronic devices. The battery includes a novel electrolyte which is electrochemically stable and does not react with the lithium anode and a novel vanadium oxide cathode. Configured as a microbattery, the battery can be fabricated directly onto a semiconductor chip, onto the semiconductor die or onto any portion of the chip carrier. The battery can be fabricated to any specified size or shape to meet the requirements of a particular application. The battery is fabricated of solid state materials and is capable of operation between -15.degree. C. and 150.degree. C.

    12. Electro-optical device including a nitrogen containing electrolyte

      DOE Patents [OSTI]

      Bates, J.B.; Dudney, N.J.; Gruzalski, G.R.; Luck, C.F.

      1995-10-03

      Described is a thin-film battery, especially a thin-film microbattery, and a method for making same having application as a backup or primary integrated power source for electronic devices. The battery includes a novel electrolyte which is electrochemically stable and does not react with the lithium anode and a novel vanadium oxide cathode. Configured as a microbattery, the battery can be fabricated directly onto a semiconductor chip, onto the semiconductor die or onto any portion of the chip carrier. The battery can be fabricated to any specified size or shape to meet the requirements of a particular application. The battery is fabricated of solid state materials and is capable of operation between -15 C and 150 C.

    13. Dye laser amplifier including a specifically designed diffuser assembly

      DOE Patents [OSTI]

      Davin, James; Johnston, James P.

      1992-01-01

      A large (high flow rate) dye laser amplifier in which a continuously replenished supply of dye is excited by a first light beam, specifically a copper vapor laser beam, in order to amplify the intensity of a second, different light beam, specifically a dye beam, passing through the dye is disclosed herein. This amplifier includes a dye cell defining a dye chamber through which a continuous stream of dye is caused to pass at a relatively high flow rate, and a specifically designed diffuser assembly for slowing down the flow of dye while, at the same time, assuring that as the dye stream flows through the diffuser assembly it does so in a stable manner.

    14. Hydraulic engine valve actuation system including independent feedback control

      DOE Patents [OSTI]

      Marriott, Craig D

      2013-06-04

      A hydraulic valve actuation assembly may include a housing, a piston, a supply control valve, a closing control valve, and an opening control valve. The housing may define a first fluid chamber, a second fluid chamber, and a third fluid chamber. The piston may be axially secured to an engine valve and located within the first, second and third fluid chambers. The supply control valve may control a hydraulic fluid supply to the piston. The closing control valve may be located between the supply control valve and the second fluid chamber and may control fluid flow from the second fluid chamber to the supply control valve. The opening control valve may be located between the supply control valve and the second fluid chamber and may control fluid flow from the supply control valve to the second fluid chamber.

    15. Actuator assembly including a single axis of rotation locking member

      DOE Patents [OSTI]

      Quitmeyer, James N.; Benson, Dwayne M.; Geck, Kellan P.

      2009-12-08

      An actuator assembly including an actuator housing assembly and a single axis of rotation locking member fixedly attached to a portion of the actuator housing assembly and an external mounting structure. The single axis of rotation locking member restricting rotational movement of the actuator housing assembly about at least one axis. The single axis of rotation locking member is coupled at a first end to the actuator housing assembly about a Y axis and at a 90.degree. angle to an X and Z axis providing rotation of the actuator housing assembly about the Y axis. The single axis of rotation locking member is coupled at a second end to a mounting structure, and more particularly a mounting pin, about an X axis and at a 90.degree. angle to a Y and Z axis providing rotation of the actuator housing assembly about the X axis. The actuator assembly is thereby restricted from rotation about the Z axis.

    16. Copper laser modulator driving assembly including a magnetic compression laser

      DOE Patents [OSTI]

      Cook, Edward G.; Birx, Daniel L.; Ball, Don G.

      1994-01-01

      A laser modulator (10) having a low voltage assembly (12) with a plurality of low voltage modules (14) with first stage magnetic compression circuits (20) and magnetic assist inductors (28) with a common core (91), such that timing of the first stage magnetic switches (30b) is thereby synchronized. A bipolar second stage of magnetic compression (42) is coupled to the low voltage modules (14) through a bipolar pulse transformer (36) and a third stage of magnetic compression (44) is directly coupled to the second stage of magnetic compression (42). The low voltage assembly (12) includes pressurized boxes (117) for improving voltage standoff between the primary winding assemblies (34) and secondary winding (40) contained therein.

    17. Pulse transmission transmitter including a higher order time derivate filter

      DOE Patents [OSTI]

      Dress, Jr., William B.; Smith, Stephen F.

      2003-09-23

      Systems and methods for pulse-transmission low-power communication modes are disclosed. A pulse transmission transmitter includes: a clock; a pseudorandom polynomial generator coupled to the clock, the pseudorandom polynomial generator having a polynomial load input; an exclusive-OR gate coupled to the pseudorandom polynomial generator, the exclusive-OR gate having a serial data input; a programmable delay circuit coupled to both the clock and the exclusive-OR gate; a pulse generator coupled to the programmable delay circuit; and a higher order time derivative filter coupled to the pulse generator. The systems and methods significantly reduce lower-frequency emissions from pulse transmission spread-spectrum communication modes, which reduces potentially harmful interference to existing radio frequency services and users and also simultaneously permit transmission of multiple data bits by utilizing specific pulse shapes.
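
      The low-frequency suppression comes from the derivative filter: each time differentiation multiplies the pulse spectrum by frequency, pushing energy away from DC. A sketch of a derivative-of-Gaussian pulse plus the XOR spreading step (all parameters are ours, not from the patent):

          import numpy as np

          def derivative_pulse(order, tau=1.0, n=512, span=6.0):
              """Differentiate a Gaussian `order` times; higher orders have
              less low-frequency content."""
              t = np.linspace(-span * tau, span * tau, n)
              pulse = np.exp(-(t / tau) ** 2)
              for _ in range(order):
                  pulse = np.gradient(pulse, t)      # one more time derivative
              return t, pulse / np.abs(pulse).max()

          t, p = derivative_pulse(order=4)           # 4th-order derivative pulse

          # Spreading step from the claim: XOR serial data with a PN sequence
          chips = np.array([1, 0, 1, 1, 0])          # hypothetical PN chips
          spread = chips ^ 1                         # one data bit, value 1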

    18. High-performance computing for airborne applications

      SciTech Connect (OSTI)

      Quinn, Heather M; Manuzzato, Andrea; Fairbanks, Tom; Dallmann, Nicholas; Desgeorges, Rose

      2010-06-28

      Recently, there have been attempts to move common satellite tasks to unmanned aerial vehicles (UAVs). UAVs are significantly cheaper to buy than satellites and easier to deploy on an as-needed basis. The more benign radiation environment also allows for an aggressive adoption of state-of-the-art commercial computational devices, which increases the amount of data that can be collected. There are a number of commercial computing devices currently available that are well-suited to high-performance computing. These devices range from specialized computational devices, such as field-programmable gate arrays (FPGAs) and digital signal processors (DSPs), to traditional computing platforms, such as microprocessors. Even though the radiation environment is relatively benign, these devices could be susceptible to single-event effects. In this paper, we will present radiation data for high-performance computing devices in an accelerated neutron environment. These devices include a multi-core digital signal processor, two field-programmable gate arrays, and a microprocessor. From these results, we found that all of these devices are suitable for many airplane environments without reliability problems.

    19. Exploratory Experimentation and Computation

      SciTech Connect (OSTI)

      Bailey, David H.; Borwein, Jonathan M.

      2010-02-25

      We believe the mathematical research community is facing a great challenge to re-evaluate the role of proof in light of recent developments. On one hand, the growing power of current computer systems, of modern mathematical computing packages, and of the growing capacity to data-mine on the Internet has provided marvelous resources to the research mathematician. On the other hand, the enormous complexity of many modern capstone results such as the Poincaré conjecture, Fermat's last theorem, and the classification of finite simple groups has raised questions as to how we can better ensure the integrity of modern mathematics. Yet as the need and prospects for inductive mathematics blossom, the requirement to ensure the role of proof is properly founded remains undiminished.
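
      A concrete emblem of the program the authors describe is their own BBP formula for pi, found numerically by integer-relation detection and only then proven. It is short enough to verify directly:

          def bbp_pi(terms=12):
              """Bailey-Borwein-Plouffe series: converges ~1 hex digit per term."""
              return sum((4 / (8*k + 1) - 2 / (8*k + 4)
                          - 1 / (8*k + 5) - 1 / (8*k + 6)) / 16**k
                         for k in range(terms))

          print(bbp_pi())   # 3.141592653589793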

    20. Computer Algebra System

      Energy Science and Technology Software Center (OSTI)

      1992-05-04

      DOE-MACSYMA (Project MAC's SYmbolic MAnipulation system) is a large computer programming system written in LISP. With DOE-MACSYMA the user can differentiate, integrate, take limits, solve systems of linear or polynomial equations, factor polynomials, expand functions in Laurent or Taylor series, solve differential equations (using direct or transform methods), compute Poisson series, plot curves, and manipulate matrices and tensors. A language similar to ALGOL-60 permits users to write their own programs for transforming symbolic expressions. Franz Lisp OPUS 38 provides the environment for the Encore, Celerity, and DEC VAX11 UNIX, SUN(OPUS) versions under UNIX and the Alliant version under Concentrix. Kyoto Common Lisp (KCL) provides the environment for the SUN(KCL), Convex, and IBM PC under UNIX and Data General under AOS/VS.
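
      MACSYMA's own ALGOL-like syntax differs, but the operations the record lists map directly onto a modern open-source CAS; for comparison, the same requests in SymPy:

          import sympy as sp

          x = sp.symbols('x')
          y = sp.Function('y')

          print(sp.diff(sp.sin(x) * sp.exp(x), x))    # differentiate
          print(sp.integrate(1 / (1 + x**2), x))      # integrate -> atan(x)
          print(sp.limit(sp.sin(x) / x, x, 0))        # take a limit -> 1
          print(sp.series(sp.exp(x), x, 0, 4))        # Taylor series
          print(sp.factor(x**4 - 1))                  # factor a polynomial
          print(sp.dsolve(y(x).diff(x) - y(x)))       # ODE -> Eq(y(x), C1*exp(x))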